Home of XPL (eXtensible Process Language)
It's like XML, but you can store source code in it.
NPR interviews Patrick Lin, an assistant professor of philosophy and research director of the Ethics and Emerging Sciences Group at California Polytechnic State University; and Joanne Mariner, director of Human Rights Watch's Terrorism and Counterterrorism Program.
I would like to comment briefly on what seems to be settling in as presumptive knowledge regarding technology choices. I'll blog more extensively on this later, sorting it out with HLL, which gives you a hint as to why I interrupt this announcement to comment. Keep in mind that I'm responding only to an NPR interview, not an in-depth thesis, and that I've already said the interdisciplinary discussion on robot ethics is a good thing - so, I discuss. (In fact, I'll be picking at just one specific point, and I think the interview was a good one.)
Patrick Lin, among others, states that increasing machine intelligence and autonomy comes down to reliance on either simple rules or learning technology, and that with learning technology we will not be able to predict behavior. So the technical choices don't look good.
My quick techie response is first to challenge people to tell me how critical human decision making doesn't fit the rule model; then I'll make what are apparently some startling statements about machine learning.
The laws of war, and other basic decisions, can be expressed as rules. IF CIVILIAN-NON-COMBATANT THEN DON'T-FIRE seems to make sense (even if the machine recognition problem is difficult). IF ACTION-NOT-SAFE THEN CANCEL. IF OUTNUMBERED THEN RUN-AWAY. IF BATTLE-WON THEN STOP-FIRING. A whole lot of basic behavior can be covered by rules - even simple ones. And doesn't this fit the human decision-making model pretty well? (Which is why everyone understands this comment.)
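To make the rule model concrete, here is a minimal sketch in Python. The predicates and action names (civilian_non_combatant, DONT_FIRE, and so on) are hypothetical placeholders for illustration only; as noted above, the hard part in practice is the recognition problem that feeds them, not the rules themselves.

```python
# A minimal sketch of the rule model: each rule pairs a condition on the
# perceived situation with an action. The predicate names are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # evaluates the current situation
    action: str                        # what to do when the condition holds

RULES = [
    Rule("protect non-combatants",  lambda s: s.get("civilian_non_combatant", False), "DONT_FIRE"),
    Rule("abort unsafe actions",    lambda s: not s.get("action_safe", True),          "CANCEL"),
    Rule("avoid hopeless fights",   lambda s: s.get("outnumbered", False),             "RUN_AWAY"),
    Rule("stop when battle is won", lambda s: s.get("battle_won", False),              "STOP_FIRING"),
]

def decide(situation: dict) -> str:
    """Return the first action whose rule fires, or continue the mission."""
    for rule in RULES:
        if rule.condition(situation):
            return rule.action
    return "CONTINUE_MISSION"

print(decide({"civilian_non_combatant": True}))  # -> DONT_FIRE
```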
Regarding "unpredictable" learning technology, it seems to me that autonomy is at least to some extent synonymous with lack of predictability - just like with humans and other animals. That seems logical, but in fact, there are effective ways to place limits on the range of behaviors an autonomous robot would develop and use. Learning robots can be programmed to go somewhere and do something, for example, with learning / adaptation used to allow them to adapt to conditions. In the laboratory for example, four-legged robots with a broken leg have adapted their gate to three legs. If a humanoid robot limps the last mile to get home, due to a broken part, that doesn't change the mission. My general point is that design engineers can still control what learning / adaptive robots are programmed to do, even while there's some autonomy in how they do it.
In my contribution to Gerhard Dabringer's interviews below, I spend a section roughly outlining a development process (very generally) that includes training and testing. Even for development engineers with little knowledge of machine learning, I think it makes a worthwhile point. Although the character of developing learning machines differs at some points from traditional development, the overall process is the same. Quality assurance doesn't become obsolete in developing, using, and maintaining these advanced systems.
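In concrete terms, the QA step can look much like any other acceptance suite: after training, the system is exercised against a set of behavioral scenarios, and release is gated on the results. The sketch below is self-contained and hypothetical; the scenarios and the stand-in trained_system_decide function are invented for illustration, not drawn from any actual deployment process.

```python
# Sketch of QA for a learned system: behavioral requirements become an
# acceptance suite that gates deployment, just as in traditional development.

ACCEPTANCE_SCENARIOS = [
    ({"civilian_non_combatant": True}, "DONT_FIRE"),
    ({"action_safe": False},           "CANCEL"),
    ({"battle_won": True},             "STOP_FIRING"),
]

def run_acceptance_suite(decide_fn) -> bool:
    """Return True only if the candidate behaves as required in every scenario."""
    return all(decide_fn(situation) == expected
               for situation, expected in ACCEPTANCE_SCENARIOS)

# Stand-in for whatever decision interface the trained system exposes;
# a trivial stub here so the sketch runs on its own.
def trained_system_decide(situation):
    if situation.get("civilian_non_combatant"):
        return "DONT_FIRE"
    if not situation.get("action_safe", True):
        return "CANCEL"
    if situation.get("battle_won"):
        return "STOP_FIRING"
    return "CONTINUE_MISSION"

if run_acceptance_suite(trained_system_decide):
    print("candidate passes behavioral requirements")
else:
    print("do not ship: requirements not met")
```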
For a broader debate, there is a series of interviews conducted by Gerhard Dabringer of the Austrian military's Institute for Religion and Peace. Click here for the interviews and more.
English translation of a Swedish documentary on the same topic: click here.