I mentioned in a previous post that I would blog on robot ethics. Let me meander a bit before getting into technical detail on how this relates to HLL. (I mean that I'm not going to get into that detail in this post. This post is a meander on the topic.)
The international discussion on robot ethics involves people from a variety of academic disciplines: moral and ethical philosophers, as well as other engineers and scientists interested in the subject. In my view, this interdisciplinary discussion is both quite interesting and valuable.
Me – I'm an engineering scientist type. Although I'm willing to opine on just about anything, when I discuss robot ethics, I typically try to remain in my engineering persona (which is not so hard for me).
I would break the discussion into two parts. One part is concerned with how humans use technology, and even this breaks down further. Some are particularly concerned about the increasing autonomy of weapons systems, worrying that machines will increasingly make life-and-death decisions in military roles. Others (the groups sometimes overlap) are concerned about the use of robots in medicine and particularly in elderly care: to what extent will machines replace human contact?
I am assured that naming each area will be the subject of lengthy debate, but there is at least some preliminary agreement in some quarters that the ethics of humans using robots should be referred to as “roboethics” (see the Roboethics Facebook group).
What does that have to do with engineers? Well, plenty. Engineers invent, design, and even manage and use technology. Yes, the decisions of those who pay the bills matter a lot, and that distinction has a place in the discussion too. But this is one of those times when knowledge and awareness of ethical concerns within the engineering community (so to speak) can be important.
The other major branch has, at least in my mind at this early stage, more to do directly with HLL. Advances in machine intelligence and autonomy should include advances in autonomous machine ethical decision-making. The ultimate challenge for the “moral machine” is autonomous moral agency. If this seems an interesting subject to you, I will again suggest Wallach and Allen's book, Moral Machines: Teaching Robots Right from Wrong. Their blog is here.
And here's a link to the Robot Ethics Facebook group.