Friday, October 29, 2010

The Ghosts in My Machine: Chapter 1

Visit the High Level Logic (HLL) Website
— Home of XPL (eXtensible Process Language)

It's like XML, but you can store source code in it.

As a young man, when I enjoyed idle time and my daydreams tended to wander in strange directions, I found myself considering a rather unimaginative question. As computer languages and tools have evolved to higher and higher levels - “bottom-up” - where will it all eventually lead? Where's the top?

To put this contemplation in perspective, the year was 1985. The computer under my desk was a first generation Texas Instruments PC with two floppy disk drives. Ethernet cables were being strung through our offices to network our computers for the first time, allowing messages to stream around the building at the blistering rate of up to one thousand bits per second. The idea of using personal computers to access a wide array of interesting information, to project presentations stored on them, and to somehow integrate them into “teleconferencing” were subjects of advanced industrial research and design. The Apple Macintosh was recent news. Yellow power ties were “in.” On the cutting-edge, people debated whether “object oriented programming” would ever really catch on.

All past generations are perceived as naive; scant comfort as I tell this story now. For I became somewhat obsessed with that odd and uninspiring question. Where is the top? As unanswerable as it seemed, the more I thought about it, the more I found value in the thinking.

The question did not occur to me entirely at random. The group that employed me at the time was particularly interested in “rule-based expert systems,” a well-developed form of rule-processing that was, at the time, thought of as artificial intelligence technology.
(RULE-1: (IF: <user-1> ((think)) that's <naive{in hindsight}>)
(THEN: ((wait)) 'till <user-1> ((READ-CONTINUE))))
In historical context, it was actually a much more interesting time than empty hindsight might suggest. There was focus on expanding the common roles of computers in data fetching and number crunching to include more powerful forms of symbolic processing. Artificial intelligence research was defined as trying to get machines to do what, at the moment, people do better. It was a time to think basic thoughts and explore new directions. And then there was the hype.

The idea of artificial intelligence fascinates. Writers in the popular press could not resist the temptation to contemplate fantastic futures brought about by its commercialization, as if the full blossoming of machine intelligence was only months away. It happens in every generation. Today, in the light of well-funded advances in robotics, we worry too much about machines becoming in some way more intelligent than people and using that intelligence to take over the world – a theme perhaps not yet entirely exhausted in science fiction. There are the annual singularity conferences that include discussion on uploading your mind to a machine so that the ghost of your thought patterns can survive your death. (Sadly, it seems that not everyone will have use of a robot body like Zoe Graystone.) And then one can well wonder about human-machine marriage law once robots have become sufficiently advanced to serve as satisfactory sex mates and companions. (But can it cook?)

In the mid-1980s, perhaps we were too naive to connect such dark and mystical thoughts to our first generation personal computers. There simply wasn't much personality in “C:> dir /P” displayed on a monochrome screen. Rule-processing systems were just part of the parade of options opening up a new world of intelligent automation. But with them, complex business issues would be resolved at the press of a button, quality medical diagnosis and agricultural advice would be delivered to third world countries on floppy disks, the art and skill in untold areas of human endeavor could be translated into computer programs by “knowledge engineers.” As machines began turning out the best advice human experience and machine computation could deliver, the quality of life could improve everywhere and the world would become a better place.

The mood was upbeat. The excitement palpable. Researchers in all fields began competing for new funding to apply the technology in their own fields. Their ideas were wonderful, their plans divine. There was seemingly no end to the possibilities. I had a front row seat to much of it. My job involved presentations and discussions at universities and national laboratories throughout the country. I saw the wonder and curiosity in their faces as the software concepts were first introduced and the thrill of starting new projects with great promise. I exchanged friendly comments and jokes with them as their work proceeded through the first interesting efforts and did my best to respond to the thoughtful and carefully stated questions as problems arose. And then, in what in the vastness of history may seem like nothing more than a split second after the whole thing began, I felt their frustration turning to anger. For most of them, it turned out, rules were not enough.

Leading artificial intelligence researchers of the day were scrambling to make a go of their own commercial enterprises. There was evidence of bottom-up evolution in complexity. The idea that object oriented programming provided an opportunity to effectively model concrete objects and abstract ideas related to the real world was applied in frameworks to allow logical relationships between objects. “Frames” also provided a way to break rule systems into sub-systems.

But these few steps were not enough to sustain the artificial intelligence revolution promised by excitable journalists. It had been an interesting top-down experiment led by a few researchers whose bottom-up developments did not go far enough, fast enough. Too many visionary application builders from outside of engineering were failing. Overly heightened expectations caused the reputation of the whole of the artificial intelligence idea to suffer greatly. It didn't matter that someone somewhere might have been thinking about the next solution. Funding agencies and investors lost interest.

The change in mood turned my role in the experiment into something like a traveling complaint department. I tried, once or twice, to remind those who had used our technology that the scope and limitations of our products had been explained. In some cases, the conversations were merely new versions of old chats in which I had directly stated that significant parts of their design were not supported by our products and would be difficult to implement. But the disappointment was as palpable as the excitement had been before. Their visions had been clear, their ideas wonderful, their plans divine. It must be possible to do what they wanted to do. Something was to blame.

The groups that had tried to implement sophisticated intelligent applications cut across a nice spectrum of academic disciplines and interests. This led to what initially seemed a wide range of application related problems. But as I listened to more stories, some parts of their problems began to sound similar. A pattern emerged. Underlying the diversity of application interests lay a concrete set of basic, definable technical problems.

Our company had a particularly good working relationship with one of the leading researchers, so I sent an email describing the problems. The list became part of a conference presentation somewhere. But there was no way that I could stop thinking about them. In my mind, the crash of artificial intelligence technology in the 1980s was transforming itself from a great disappointment to a unique opportunity. I had discovered a clear definition of the difference between what practical application builders wanted to do and what the technology of the day had to offer.

In the normal course of events, I might have worked on designs that addressed each of the individual problems on the list, with each solution having a potential for commercial application. This is the way progress is normally created - “bottom-up.” But the circumstances triggered more philosophical thoughts. The combination of the overly optimistic expectations of funding agencies and application builders demonstrated that bottom-up is not always good enough. Had my degrees been in management or marketing, I would probably have simply noted it as a classic error. But I am an engineer. Problems are meant to be solved, and it is somehow not in my nature to be able to turn my brain off to them.

It was in this stream of events that my mind became fixed on that naive and rather unimaginative question. If technology developers had anticipated the needs of application builders and moved directly toward satisfying those needs, the failure would not have been imminent, as in hindsight it clearly was. But this was cutting edge stuff. How would anyone know what they had not yet discovered through experimentation and experience – trial and error – yet another problem identification step that could lead to more progress?

Something else began to play in my mind. It had already become part of my life's wisdom to recognize that seemingly simple things, the ideas and processes that we mostly take for granted because they are common and therefore presumed uninteresting, are often the most profound. On many occasions I have seen sophisticated ideas initially fail, and then slowly evolve until the most basic logical considerations forced compliance. Then it worked.

Involvement in artificial intelligence naturally involves thinking about intelligence generally, about our own intelligence, and our own thought processes. If artificial intelligence is working on things that, at the moment, humans do better than machines, we think about what we do and how we do it. My brain had not turned off this self-reflection either. Something had triggered my concrete confidence that I could solve problems that had not previously been solved – and to know that the solutions would be useful. I have mentioned it. Is it something that you, dear reader, have taken for granted because it is common and therefore uninteresting? Or do you know to what I refer?

There are pieces of generic processes; simple, common, so ordinary as to be overlooked. A common thought, that systematic problem solving begins with defining the problem, sent me to the mall to find a laboratory notebook. The jottings, ideas, diagrams and pseudo-code represented my obsession with breaking free of the bottom-up approach to development and progress. My eyes turned from the problems immediately surrounding me and slowly upward against a seemingly endless empty space. My search for “high level logic” had begun.


Footnote to chapter 1: The specific “failure” of artificial intelligence technology presented in this article has to do with its presentation and timing, along with overly enthusiastic expectations over a relatively short period. (related article) As techies around the world know, object oriented programming became a mainstream technique supported by extremely popular programming languages. Advanced rule-processing is an added tool in such things as database systems and embedded in a wide range of “smart” products. The ability of software to process “business rules” is commonly marketed. One open-source rule-processing system (Drools) became so popular that it's now included in the JBoss Enterprise Platform, and there is a standard set of interfaces for rule-processing components in the Java programming language. Although I had not seen “problem definition” as a framework component in a larger generalized process, the importance of problem identification had been pointed out in specifics, particularly with the development of diagnostic systems.

Link to Chapter 2.

Thursday, October 21, 2010

LEGO Mindstorms NXT Robots (leJOS)


Being oriented to work on tools and middleware, I allow myself to get enthusiastic when software reaches the stage that it makes something else relatively easy to do. So, I feel no regret posting this comment in advance of a working demonstration. (Something does actually work – read on!)

Creation of a robotics demo for HLL has to this point proceeded with a simple robot simulation. Movement and positions were nothing but numbers fed from the simulation and translated in the browser-based GUI to a colorful circle moving around through two rooms, which were nothing more than a 2D outline. (Only tested on Firefox, so if you're not using Firefox, you might not see the first (dynamic) image of the "robot" moving through the rooms below.)



The High Level Logic (HLL) software system deals with what the name implies - high level logic. It's designed to provide and support high level logic for all kinds of applications. So, motor control and sensor feedback really have nothing to do with working on the HLL core system. But robotics is a good and fun application to work with, and it was certainly to the point back when a robotics research project was paying for the work.

For 2-3 years now, I've had a LEGO Mindstorms NXT kit that didn't get any attention past trying out Microsoft's (at the time new) Robotics Studio. A friend of mine recently decided not to let it go to waste, and built the first robot by following the instructions in the booklet that came with it.


I've switched computers since last installing the Mindstorms software. The copy that came with the kit did not contemplate 64 bit architecture, so the simple installation from CD didn't work. After googling around a bit, I discovered a good set of instructions to get the job done. (Getting to Grips with Installing, Updating and Programming LEGO Mindstorms Kits.)

With the software that came with the kit successfully installed, I checked to see if everything was working. Check, check, check. Now to change everything. What I really want is to create some smart high level logic using HLL and send commands from there to the robot through “experts.”

The focus of my work is on HLL, so I'm not interested (well, maybe I'm interested and it's more a question of time and keeping my efforts focused) in working out all the Bluetooth communication and NXT control code myself. Wouldn't it be nice if someone who had already developed that technology had made it available? And someone did. (Actually, I knew that.)

I installed leJOS NXJ, which includes replacing the firmware on the LEGO Mindstorms NXT brick. Now it's ready to interact with Java programs on my computer – well, not quite.

The installed software for leJOS hadn't contemplated 64 bit architecture either. When trying to run the first sample program, I got error messages telling me it couldn't find libraries like “intelbth_x64”. Luckily, the leJOS project contributors did something about that this year, and the problem was easy to fix (by replacing the ...\leJOS NXJ\3rdparty\lib\bluecove.jar file that came with the downloaded installation with the latest snapshot; check here to see if that's still the most recent if you have this problem).

Magnifique! The first sample program ran. It didn't matter that it only passed a Java program that plays a tune to the brick (which immediately played the tune). This is progress.
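
For the curious, an on-brick leJOS NXJ program is just ordinary Java. Here's a minimal sketch of the sort of thing that first sample does - my own reconstruction using the standard lejos.nxt.Sound class, not the actual sample shipped with leJOS:

import lejos.nxt.Sound;

// Plays a short rising tune on the NXT brick's speaker.
public class PlayTune {
    public static void main(String[] args) throws InterruptedException {
        int[] notes = {440, 494, 523, 587}; // A4, B4, C5, D5 in Hz
        for (int freq : notes) {
            Sound.playTone(freq, 300); // sound each note for 300 ms
            Thread.sleep(350);         // brief gap between notes
        }
    }
}

Compile and upload something like that with the nxj tools that come with leJOS, and the brick plays it immediately.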

So, now you also have an accurate status report on the state of getting a LEGO Mindstorms NXT robot to run with HLL. I haven't hooked it up yet. I haven't yet even delved into the samples that come closer to what I want. But the basics are working, and that's an excellent start.

Saturday, October 16, 2010

The Prototype is Dead!


After getting back to concentrating on the core code for a little while, I'm happy to report that the HLL prototype is officially dead. The age of "clean up" has ended and everything ahead will have to do with improvement and expanding functionality.

The final throe may seem like a small thing. I had defined "trace" and "monitor" variables in every class from which I wanted to kick program trace information out to stdout and to the GUI. It was pretty nice to see trace information in the browser while developing. It let me see anything I wanted to see in what was basically an ongoing end-to-end test while I worked, and I didn't need to go back and forth between the GUI and console to see what was going on. (For more detail on the browser-based GUI, refer to posts like this one.)

But, with my focus on everything else while building the prototype, I had set the values independently in each class. It never seemed a pressing issue while prototyping, because I would focus either on core development or on GUI development for lengthy periods of time. So the inconvenience of changing the values in each class never pushed hard enough against other pressing issues.

So now there's a new config file called manage_cfg.xml that's meant to be kind of a catch-all for this sort of thing. Anyone who wants to switch tracing or monitoring on or off just sets the value to true or false in the config file. One place - real quick - that's all there is to it.
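
To give the idea, the whole file amounts to little more than this (the element names here are my shorthand for illustration, not necessarily the exact contents of manage_cfg.xml):

<?xml version="1.0" encoding="ISO-8859-1"?>
<manage_cfg>
 <trace>true</trace>
 <monitor>false</monitor>
</manage_cfg>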

Need more reason to understand my celebratory post? Here it is. I started writing error checking code in the program that processes the input. What if there is no value specified for trace? Do I want to warn people, or just let it default to false? "Wait a minute," I thought. This is really a job for an XML schema. So, I quickly created one.
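
A sketch of what such a schema can look like, using the same assumed element names as in the config sketch above - both values optional booleans, with the program free to treat a missing element as false:

<?xml version="1.0" encoding="ISO-8859-1"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
 <xs:element name="manage_cfg">
  <xs:complexType>
   <xs:sequence>
    <xs:element name="trace" type="xs:boolean" minOccurs="0"/>
    <xs:element name="monitor" type="xs:boolean" minOccurs="0"/>
   </xs:sequence>
  </xs:complexType>
 </xs:element>
</xs:schema>

With a schema in place, a validating parser rejects a malformed value (say, trace set to "maybe") before my own code ever sees it.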

Maybe it's just me, but that strikes me as moving forward. It's like development rather than clean-up.

Friday, October 15, 2010

Do You Want 1000 Entities Cooperating? OK with me!


An earlier post, Current issues #1: Configuration Files and Processing, reported the importance of configuration files and the need to handle them more "generically" than in the prototype. One particular problem stemmed from the fact that using multiple distributed copies of HLL was a sudden flash of insight during prototyping.

Building the prototype with a robotics example application close at hand, I originally thought any user interface would communicate with the robot's HLL directly. That would be the same as communicating with the robot. Then it occurred to me that the Command Center (user interface) needs its own separate intelligence to do the things it needs to do intelligently, so it should have its own personal copy of HLL. Then WOW! Once that's working, you can get as many "entities" (robots, complex specialized experts, or any distributed entities for any type of application) as needed to do anything. (And with special regard for robotics, it would simplify an effort to get robots to cooperate.)

OK (yes, I'm being verbose - selling HLL), back to the prototype. Before I thought of that, I had set up the system to read a simple XML file that gave the host and port of the HLL system, supporting loose coupling. Then I added the browser-based GUI, which also needed to be loosely coupled, so I created another XML file and explicitly handled its reading in the code. Then I decided the Command Center should have its own copy of HLL, and explicitly added the reading code again. I handled later configurations better but, so far as the communication information was concerned, it was clear it had to change.

I reprogrammed the handling of the comm configuration file, so that it reads and stores a list of comm entries of arbitrary length. In order to add information for a new connection, the application programmer needs only extend the XML file.

<?xml version="1.0" encoding="ISO-8859-1"?>
<commconfig>
 <config>
  <name>server</name>
  <type>hll</type>
  <host>anyreachablehost.com</host>
  <port>4049</port>
 </config>
 ... continue ad infinitum
</commconfig>
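
Reading that list is straightforward DOM work. Here's a rough sketch of how such a reader might look (the class and method names are my own for illustration, not the actual HLL source):

import java.io.File;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class CommConfigReader {
    // Parses the comm config file into a map keyed by each entry's unique name.
    public static Map<String, String[]> read(String path) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File(path));
        NodeList configs = doc.getElementsByTagName("config");
        Map<String, String[]> entries = new HashMap<String, String[]>();
        for (int i = 0; i < configs.getLength(); i++) {
            Element c = (Element) configs.item(i);
            // store type, host, and port under the entry's unique name
            entries.put(text(c, "name"), new String[] {
                text(c, "type"), text(c, "host"), text(c, "port") });
        }
        return entries;
    }

    private static String text(Element parent, String tag) {
        return parent.getElementsByTagName(tag).item(0).getTextContent();
    }
}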


To send a message to any entity in the list, a method called sendCommand() in HLL's HLLCommUtils class is used.

response = hllCommUtils.sendCommand(commandMessage, "server", true);

sends a message to the "server" at the host and port specified in the XML above, and receives a response. The sendCommand() method is used whether the message is being passed internally ("server" is in fact currently the reserved word for some of HLL's basic internal communication) or to another HLL installation 1000 miles away, using whatever naming convention is chosen for identifying HLL installations. It's only required that the names be unique.

Both "commandMessage" and "response" are complex objects, allowing complex messages to be passed back and forth easily. A variety of message formats are in fact supported, e.g. SOAP, Java objects, strings. The commandMessage can contain information on the recipient of the message that goes beyond host and port. Issue #2 is about making these messages FIPA compliant.

Monday, October 11, 2010

US physics professor: 'Global warming is the greatest and most successful pseudoscientific fraud I have seen in my long life'


Newton: "Fie on you, Hansen, Mann, Jones et al! You are not worthy of the name scientists! May the pox consume your shrivelled peterkins!"

Harold Lewis is Emeritus Professor of Physics at the University of California, Santa Barbara. Here is his letter of resignation to Curtis G. Callan Jr, Princeton University, President of the American Physical Society.

Anthony Watts describes it thus:

This is an important moment in science history. I would describe it as a letter on the scale of Martin Luther, nailing his 95 theses to the Wittenberg church door. It is worthy of repeating this letter in entirety on every blog that discusses science.

It’s so utterly damning that I’m going to run it in full without further comment. (H/T GWPF, Richard Brearley).

Dear Curt:

When I first joined the American Physical Society sixty-seven years ago it was much smaller, much gentler, and as yet uncorrupted by the money flood (a threat against which Dwight Eisenhower warned a half-century ago). Indeed, the choice of physics as a profession was then a guarantor of a life of poverty and abstinence—it was World War II that changed all that. The prospect of worldly gain drove few physicists. As recently as thirty-five years ago, when I chaired the first APS study of a contentious social/scientific issue, The Reactor Safety Study, though there were zealots aplenty on the outside there was no hint of inordinate pressure on us as physicists. We were therefore able to produce what I believe was and is an honest appraisal of the situation at that time. We were further enabled by the presence of an oversight committee consisting of Pief Panofsky, Vicki Weisskopf, and Hans Bethe, all towering physicists beyond reproach. I was proud of what we did in a charged atmosphere. In the end the oversight committee, in its report to the APS President, noted the complete independence in which we did the job, and predicted that the report would be attacked from both sides. What greater tribute could there be?

How different it is now. The giants no longer walk the earth, and the money flood has become the raison d’ĂȘtre of much physics research, the vital sustenance of much more, and it provides the support for untold numbers of professional jobs. For reasons that will soon become clear my former pride at being an APS Fellow all these years has been turned into shame, and I am forced, with no pleasure at all, to offer you my resignation from the Society.

It is of course, the global warming scam, with the (literally) trillions of dollars driving it, that has corrupted so many scientists, and has carried APS before it like a rogue wave. It is the greatest and most successful pseudoscientific fraud I have seen in my long life as a physicist. Anyone who has the faintest doubt that this is so should force himself to read the ClimateGate documents, which lay it bare. (Montford’s book organizes the facts very well.) I don’t believe that any real physicist, nay scientist, can read that stuff without revulsion. I would almost make that revulsion a definition of the word scientist.

So what has the APS, as an organization, done in the face of this challenge? It has accepted the corruption as the norm, and gone along with it. For example:

1. About a year ago a few of us sent an e-mail on the subject to a fraction of the membership. APS ignored the issues, but the then President immediately launched a hostile investigation of where we got the e-mail addresses. In its better days, APS used to encourage discussion of important issues, and indeed the Constitution cites that as its principal purpose. No more. Everything that has been done in the last year has been designed to silence debate.

2. The appallingly tendentious APS statement on Climate Change was apparently written in a hurry by a few people over lunch, and is certainly not representative of the talents of APS members as I have long known them. So a few of us petitioned the Council to reconsider it. One of the outstanding marks of (in)distinction in the Statement was the poison word incontrovertible, which describes few items in physics, certainly not this one. In response APS appointed a secret committee that never met, never troubled to speak to any skeptics, yet endorsed the Statement in its entirety. (They did admit that the tone was a bit strong, but amazingly kept the poison word incontrovertible to describe the evidence, a position supported by no one.) In the end, the Council kept the original statement, word for word, but approved a far longer “explanatory” screed, admitting that there were uncertainties, but brushing them aside to give blanket approval to the original. The original Statement, which still stands as the APS position, also contains what I consider pompous and asinine advice to all world governments, as if the APS were master of the universe. It is not, and I am embarrassed that our leaders seem to think it is. This is not fun and games, these are serious matters involving vast fractions of our national substance, and the reputation of the Society as a scientific society is at stake.

3. In the interim the ClimateGate scandal broke into the news, and the machinations of the principal alarmists were revealed to the world. It was a fraud on a scale I have never seen, and I lack the words to describe its enormity. Effect on the APS position: none. None at all. This is not science; other forces are at work.

4. So a few of us tried to bring science into the act (that is, after all, the alleged and historic purpose of APS), and collected the necessary 200+ signatures to bring to the Council a proposal for a Topical Group on Climate Science, thinking that open discussion of the scientific issues, in the best tradition of physics, would be beneficial to all, and also a contribution to the nation. I might note that it was not easy to collect the signatures, since you denied us the use of the APS membership list. We conformed in every way with the requirements of the APS Constitution, and described in great detail what we had in mind—simply to bring the subject into the open.

5. To our amazement, Constitution be damned, you declined to accept our petition, but instead used your own control of the mailing list to run a poll on the members’ interest in a TG on Climate and the Environment. You did ask the members if they would sign a petition to form a TG on your yet-to-be-defined subject, but provided no petition, and got lots of affirmative responses. (If you had asked about sex you would have gotten more expressions of interest.) There was of course no such petition or proposal, and you have now dropped the Environment part, so the whole matter is moot. (Any lawyer will tell you that you cannot collect signatures on a vague petition, and then fill in whatever you like.) The entire purpose of this exercise was to avoid your constitutional responsibility to take our petition to the Council.

6. As of now you have formed still another secret and stacked committee to organize your own TG, simply ignoring our lawful petition.

APS management has gamed the problem from the beginning, to suppress serious conversation about the merits of the climate change claims. Do you wonder that I have lost confidence in the organization?

I do feel the need to add one note, and this is conjecture, since it is always risky to discuss other people’s motives. This scheming at APS HQ is so bizarre that there cannot be a simple explanation for it. Some have held that the physicists of today are not as smart as they used to be, but I don’t think that is an issue. I think it is the money, exactly what Eisenhower warned about a half-century ago. There are indeed trillions of dollars involved, to say nothing of the fame and glory (and frequent trips to exotic islands) that go with being a member of the club. Your own Physics Department (of which you are chairman) would lose millions a year if the global warming bubble burst. When Penn State absolved Mike Mann of wrongdoing, and the University of East Anglia did the same for Phil Jones, they cannot have been unaware of the financial penalty for doing otherwise. As the old saying goes, you don’t have to be a weatherman to know which way the wind is blowing. Since I am no philosopher, I’m not going to explore at just which point enlightened self-interest crosses the line into corruption, but a careful reading of the ClimateGate releases makes it clear that this is not an academic question.

I want no part of it, so please accept my resignation. APS no longer represents me, but I hope we are still friends.

Hal

Harold Lewis is Emeritus Professor of Physics, University of California, Santa Barbara, former Chairman; Former member Defense Science Board, chmn of Technology panel; Chairman DSB study on Nuclear Winter; Former member Advisory Committee on Reactor Safeguards; Former member, President’s Nuclear Safety Oversight Committee; Chairman APS study on Nuclear Reactor Safety

Chairman Risk Assessment Review Group; Co-founder and former Chairman of JASON; Former member USAF Scientific Advisory Board; Served in US Navy in WW II; books: Technological Risk (about, surprise, technological risk) and Why Flip a Coin (about decision making)

Sunday, October 10, 2010

Eventually, it will be illegal for humans to drive


Google is now telling us that they are behind a project developing cars that drive themselves in traffic.



It's been a "secret" project for a while, according to the report; but they're working with people in a geographic area that's been heavily funded in robotics and unmanned vehicles and has taken home some trophies in competition. (DARPA Grand Challenge)

I'm not familiar with Google's autonomous vehicle technology, not being in their inner circle of secret friends. So I can't tell you anything about how good it really is. And since they're not using HLL (yet, or as far as I know), I'm not going to endorse their effort. :)

I just thought I'd comment on the first response I got after posting their article link to my Facebook page. "No thanks. I prefer to be the one making decisions behind my wheel ;-)"

I'm not picking on the guy who said it. His comment undoubtedly represents the feelings of a lot of people. Mine is just a prediction; predicting is something I've been quite good at, and probably am even more so in my old age.

But this prediction is at least based on straightforward logic. It's pretty easy really. Autonomous vehicles already outperform humans in a number of ways, and we can expect even more improvement in the future. Most auto companies that have invested in autonomous technology have focused the most attention on safety. Step one is using sensors to detect and understand the environment and surrounding traffic, and to warn or avoid dangerous situations.

The more difficult challenge is to accurately predict when autonomous vehicles will be accepted by state law, opening the door to mass production and sales, which will in turn increase investment in research and development even further and increase the pace at which further improvement is realized. I'm optimistic, not just based on what I've said in this paragraph, but based even more on R&D I've been involved with. We can increase the pace of improvement dramatically, and accomplish things that humans cannot. (Yes, the age of AI is upon us - and yes, we're still human. It's the AI you see ...)

"Eventually, it will be illegal for humans to drive," I commented on Facebook.

"By that time it will be illegal for us to think and our humanity will have already been robbed," responded my Facebook friend.

I understand the sentiment (which is why I do indeed take the conversation seriously) but think the two issues are separate. It will eventually be illegal for humans to drive on public roads and highways because a much safer alternative will be available. Humans driving cars will be considered (relatively) too dangerous. Why should the rest of humanity take the risk of being slaughtered in accidents, when such things will be extremely rare once the machine does the driving?

I believe that you'll still be allowed to do most of the thinking, at least for a while. An autonomous vehicle doesn't care whether you go to Aunt Suzie's or the race track on Saturday. But one day, even the arguments about whether it's faster to take the Lincoln or Holland Tunnel will be a thing of the past. Honestly, do you think you'll miss it?

Tuesday, October 5, 2010

Can Unmanned Robots Follow The Laws Of War?


NPR interviews Patrick Lin, an assistant professor of philosophy and research director of the Ethics and Emerging Sciences Group at California Polytechnic State University; and Joanne Mariner, director of Human Rights Watch's Terrorism and Counterterrorism Program.

I would like to comment briefly on what seems to be settling in as presumptive knowledge regarding technology choices. I'll blog more extensively on this later, sorting it out with HLL, which gives you a hint as to why I interrupt this announcement to comment. Keep in mind that I'm only responding to an NPR interview, not an in-depth thesis, and that I've already mentioned that the interdisciplinary discussion on robot ethics is a good thing - so, I discuss. (I'll in fact be picking at just one specific point, and I think the interview was a good one.)

Patrick Lin, and others, state that increasing machine intelligence and autonomy comes down to reliance on either simple rules or learning technology, and that with learning technology, we will not be able to predict behavior. So the technical choices don't look good.

My quick techie response is first to challenge people to tell me how it is that critical human decision making doesn't fit the rule model; then I'll make what are apparently some startling statements about machine learning.

The laws of war, and other basic decisions, can be expressed as rules. if CIVILIAN-NON-COMBATANT then DON'T-FIRE seems to make sense (even if the machine recognition problem is difficult). if ACTION-NOT-SAFE then CANCEL. if OUTNUMBERED then RUN-AWAY. if BATTLE-WON then STOP-FIRING. There just seems to be a whole lot of basic stuff that can be covered by rules - even simple ones. And doesn't this fit the human decision-making model pretty well? (Which is why everyone understands this comment.)
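
To make the point concrete, here's a toy sketch of those same rules as plain conditionals (every name in it is hypothetical - the shape of the idea, not anything from a real system):

public class EngagementRules {
    enum Action { DONT_FIRE, CANCEL, RETREAT, STOP_FIRING, AWAIT_ORDERS }

    static Action decide(boolean civilianNonCombatant, boolean actionSafe,
                         boolean outnumbered, boolean battleWon) {
        if (civilianNonCombatant) return Action.DONT_FIRE;   // if CIVILIAN-NON-COMBATANT then DON'T-FIRE
        if (!actionSafe)          return Action.CANCEL;      // if ACTION-NOT-SAFE then CANCEL
        if (outnumbered)          return Action.RETREAT;     // if OUTNUMBERED then RUN-AWAY
        if (battleWon)            return Action.STOP_FIRING; // if BATTLE-WON then STOP-FIRING
        return Action.AWAIT_ORDERS;                          // otherwise, await direction
    }
}

The hard part, as noted, is the recognition problem feeding those booleans, not the rules themselves.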

Regarding "unpredictable" learning technology, it seems to me that autonomy is at least to some extent synonymous with lack of predictability - just like with humans and other animals. That seems logical, but in fact there are effective ways to place limits on the range of behaviors an autonomous robot will develop and use. Learning robots can be programmed to go somewhere and do something, for example, with learning and adaptation used to allow them to adjust to conditions. In the laboratory, for example, four-legged robots with a broken leg have adapted their gait to three legs. If a humanoid robot limps the last mile to get home due to a broken part, that doesn't change the mission. My general point is that design engineers can still control what learning / adaptive robots are programmed to do, even while there's some autonomy in how they do it.

In my contribution to Gerhard Dabringer's interviews below, I spend a section roughly outlining a development process (very generally) that includes training and testing. Even for development engineers with little knowledge of machine learning, I think it makes a worthwhile point. Although the character of development of learning machines is at some points different from traditional development, the overall process is the same. Quality assurance doesn't become obsolete in developing, using, and maintaining these advanced systems.

For a broader debate, there is a series of interviews conducted by Gerhard Dabringer of the Austrian military's Institute for Religion and Peace. Click here for the interviews and more.

English translation of a Swedish documentary on the same topic: click here.

Saturday, October 2, 2010

Roboethics and Robot Ethics


I mentioned in a previous post that I would blog on robot ethics. Let me meander a bit before getting into technical detail on how this relates to HLL. (I mean that I'm not going to get into that detail in this post. This post meanders around the topic.)

The international discussion on robot ethics involves interacting with people from a variety of academic disciplines, such as moral or ethical philosophers, as well as other engineers and scientists who are interested in the subject. In my view, the interdisciplinary discussion is both quite interesting and valuable.

Me – I'm an engineering scientist type. Although I'm willing to opine on just about anything, when I discuss robot ethics, I typically try to remain in my engineering persona (which is not so hard for me).

I would break the discussion in two. There is a part that is concerned about how humans use technology. Even this breaks down further. There are some who are particularly concerned about the increased autonomy of weapons systems; worrying that machines will increasingly make life and death decisions in military roles. Others (sometimes they overlap) are quite concerned about the use of robots in medicine and particularly elderly care; to what extent will machines replace human contact, etc.

I am assured that naming each area will be the subject of lengthy debate, but there has been at least some preliminary agreement in some quarters that human ethics in the use of robots should be referred to as “roboethics.” (Roboethics Facebook group.)

What does that have to do with engineers? Well, plenty. Engineers invent, design, even manage and use technology. Yes – the decisions of those who pay the bills matter a lot. Even that distinction has a place in the discussion. But it is one of those times when knowledge and awareness of ethical concerns within the engineering community (so to speak) can be important.

The other major branch has - at least in my mind at this early stage - more to do, directly, with HLL. Advances in machine intelligence and autonomy should include advances in autonomous machine ethical decision-making. The ultimate challenge for the “moral machine” is autonomous moral agency. If this seems an interesting subject to you, I will again suggest Wallach and Allen's book, Moral Machines: Teaching Robots Right from Wrong. Their blog is (click) here.

And here's a link to the Robot Ethics Facebook group.


Friday, October 1, 2010

Five Signs You Need HTML5 WebSockets


HTML5 WebSocket is an important new technology that helps you build engaging, interactive, real-time web applications quickly and reliably. Sure, HTML5 WebSockets may be the best thing since sliced bread, but is this new technology right for you?

This article identifies five types of web applications that will benefit from HTML5 WebSockets. So, without further ado... give me five!

The Five Signs

1. Your web application has data that must flow bi-directionally and simultaneously.
2. Your web application must scale to large numbers of concurrent users.
3. Your web application must extend TCP-based protocols to the browser.
4. Your web application developers need an API that is easy to use.
5. Your web application must extend SOA over the Web and in the Cloud.

Read the rest: Five Signs You Need HTML5 WebSockets

Man Controls Robotic Hand with his Mind


WiFi on Steroids


On September 23, the Federal Communications Commission approved new rules allowing so-called white space spectrum to be used for what has come to be called "WiFi on steroids." White spaces refer to radio airwaves that are not used by broadcasters. According to FCC chairman Julius Genachowski, the opening up of this spectrum will provide a major "platform for innovators and entrepreneurs." What are likely to be some of the first uses of this technology? What impact will the rules have on wireless operators and on companies such as Microsoft, Dell and Google, which have been pushing the FCC to implement the new rules? Wharton legal studies and business ethics professor Kevin Werbach, who has been working closely with the FCC on its latest action and other initiatives, answers these questions during an interview with Knowledge@Wharton.

Continue story, including video discussion: What the New FCC Rules Will Mean for Wireless Users