Friday, September 21, 2012

Artificial Intelligence and Cognition: Defending my Optimism

Visit the High Level Logic (HLL) Website
— Home of XPL (eXtensible Process Language)

It's like XML, but you can store source code in it.

“Engineers are naturally grumpy,” I once said. “If we ever start thinking those damned machines are something other than damned machines, we'll get packed away in white jackets.” Does our own natural attitude, coloring our interpretation of progress, have something to do with why artificial intelligence seems perpetually elusive?

I'm an optimist on developing artificial intelligence, but not very optimistic about convincing others that anything we do can be counted as much more than a trivial revisiting of things that have already been done. I think that engineering “grumpiness” is a key to understanding why, no matter how far we get, accomplishing anything really interesting will still seem to be a long way off.

First let me say that I'm grumpy too. I can't help it. It's really more difficult for me to get a cuddly feeling from a robotic baby seal than it is for an old woman in Japan. I know too much about what's under the hood, or under its skin. And my problem, which I'm certain is shared by other engineers and scientists, isn't just an emotional one.

My optimism also comes from knowing what's under the hood. My optimism regarding artificial (meaningful) self-awareness, for example, comes from the design of robots that learn and adapt. In a design by Peter Nordin (related article), robot software learns about the physical robot it runs on by exercising its actuators and discovering their effects. The robot uses this self-knowledge to begin learning more complex behavior efficiently, and ultimately to learn about its environment and how to interact with it effectively. Instilled with “motives”, such robots' behavior becomes useful. It has also been demonstrated that robots can learn from direct human verbal interaction as a replacement for programming. (For example: English translation of Swedish documentary) Self-aware robots can distinguish between themselves and others and learn through interaction how they should treat others, a pathway to robot ethics.
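To make that concrete, here is a toy sketch in Python of the two phases described above: the software first “babbles” with its actuators to discover what they do, then uses that self-model to act purposefully. Everything here (the names, the numbers, the simulated hardware) is my own illustration, not Nordin's actual system.

```python
import random

class SimulatedRobot:
    """Stand-in for real hardware: two actuators with gains unknown to the learner."""
    def __init__(self):
        self._gains = [1.5, -0.7]   # hidden physical properties
        self.position = 0.0

    def actuate(self, motor, effort):
        delta = self._gains[motor] * effort
        self.position += delta
        return delta                # what the robot's sensors observe

def babble(robot, trials=200):
    """Exercise each actuator with random efforts and learn its average effect."""
    effects = {0: [], 1: []}
    for _ in range(trials):
        motor = random.choice([0, 1])
        effort = random.uniform(-1.0, 1.0)
        delta = robot.actuate(motor, effort)
        if abs(effort) > 1e-6:
            effects[motor].append(delta / effort)   # estimated gain per unit effort
    return {m: sum(v) / len(v) for m, v in effects.items()}

def move_toward(robot, target, self_model):
    """Use the learned self-model to pick a command that closes the gap to a goal."""
    error = target - robot.position
    motor = max(self_model, key=lambda m: abs(self_model[m]))  # strongest actuator
    effort = max(-1.0, min(1.0, error / self_model[motor]))
    robot.actuate(motor, effort)

robot = SimulatedRobot()
model = babble(robot)               # phase 1: learn what the actuators do
robot.position = 0.0
move_toward(robot, 2.0, model)      # phase 2: use that self-knowledge purposefully
print(model, robot.position)
```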

“Humbug!” pronounced a world-famous engineering professor. (I paraphrase, perhaps very badly, to emphasize engineering grumpiness. I'll leave his real name out to allow my fictional character to more clearly illustrate the problem. It's “reality-based” we can say.) “In that last step,” a small demonstration of an additional idea at the end of a larger project to do other things, “rules were used. RULES! You can't get anywhere with rules. What they're trying to do is still decades away.”

It's not just world-famous engineering professors. As I've said, I suffer from the same affliction, as do many. I read (again) recently about a robot taking a self-awareness test that has often been given to animals. Place a robot in front of a mirror and see if it recognizes itself. “BAAAH HUMBUG!” I thought, seemingly without any ability whatsoever to restrain myself. That's just pattern recognition, no different than recognizing anything else. (The actual test involves changing something about the creature's appearance, typically by placing a mark on its body that it can only see in reflection, and seeing whether the creature notices the change as being to itself, usually by pointing to the mark or touching it on its own body rather than in the reflection.)

Oh, but wait! Will this be the first time such an experiment has been conducted? If so, it would be a dandy of an experiment. It's the same one psychologists use on living creatures. If successful, it would in some way show that something interesting has happened (no matter how long we've understood that it could). And in fact, I can use a phrase from the paragraph above, in which I was explaining my optimism, to explain the potential importance of this advance. Robots that recognize themselves can use that ability to “distinguish between themselves and others.”

But can we accept any of this as having anything really to do with self-awareness? Or are we (engineers at least) forever going to be the nay-sayers on account of the fact that we know the magic trick behind the “illusion”? I argue that we can be more positive and optimistic, and accepting of progress toward artificial intelligence and cognition by truly embracing our grumpiness. THEY'RE MACHINES!

Are we too often, unknowingly perhaps, making the mistake of thinking that developing artificial intelligence is synonymous with creating a synthetic human? Must machines keep the same mysteries about their inner workings from us as living things do in order for us to allow that we're well on the way to artificial intelligence? Must the goal always be hidden away in what we don't already know? (I could segue into the “singularity” here, but I won't … just mention that it's where we expect to no longer understand what's going on.)

The trick, for us, I think, is to accept that machines are machines. They aren't something else. Let's extend the description of that latter experiment like this. A machine is let loose in a room with a mirror. It autonomously roams around the room and sees the mirror. Upon further investigation (still, autonomously), it sees its reflection and says, “Oh look, that's me.” That really is a pretty good trick for a machine. Does it wave its hand to confirm that it's looking at itself (perhaps only then learning from the image, how it looks)?
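Here is an equally toy sketch of that “wave to confirm” step, with all names and numbers invented for illustration: the machine issues movement commands and checks whether the motion it observes in the mirror is correlated with what it just commanded. Correlation with one's own commands is a perfectly mechanical criterion for saying “that's me,” which is rather the point.

```python
import random

def observe_mirror(own_motion, noise=0.1):
    """Simulated camera reading of motion seen in the mirror region.
    If the reflection is of this machine, observed motion tracks its own commands."""
    return own_motion + random.uniform(-noise, noise)

def looks_like_me(commands, observations, threshold=0.9):
    """Crude Pearson correlation test: does observed motion follow my motor commands?"""
    n = len(commands)
    mean_c = sum(commands) / n
    mean_o = sum(observations) / n
    cov = sum((c - mean_c) * (o - mean_o) for c, o in zip(commands, observations))
    var_c = sum((c - mean_c) ** 2 for c in commands) ** 0.5
    var_o = sum((o - mean_o) ** 2 for o in observations) ** 0.5
    if var_c == 0 or var_o == 0:
        return False
    return cov / (var_c * var_o) > threshold

commands, observations = [], []
for _ in range(50):                       # "wave the hand": a series of small motions
    cmd = random.uniform(-1.0, 1.0)
    commands.append(cmd)
    observations.append(observe_mirror(cmd))

print("Oh look, that's me." if looks_like_me(commands, observations)
      else "Someone else is over there.")
```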

I don't have the sense that I've nailed this argument to the wall simply by posing the question in some context. As a proxy for things we already understand, let us reconsider that most humble of artificial intelligence techniques: rules. During the commercialization of rule-based expert systems in the 1980s, they were imagined as a step toward all kinds of software magic. These systems were, after all, the product of artificial intelligence research. By the end of the 1980s, great expectations had crashed on the limits of those early rule systems, and it was thought that they should never be mentioned in this context again.

Now presume for a moment that I am a technically well-educated human being with experience. I understand how to use a rather wide range of techniques to solve problems, answer questions, and even to trigger decisions. Sometimes the use of one sophisticated technique, some kind of statistical analysis for example, is enough to accomplish what I need. Sometimes it takes a string of sophisticated operations to get where I need to go.

Many of the techniques I use can be performed with the help of a computer. In the statistics example, there is plenty of software available to support the task. Computers can already do this work much more rapidly and reliably than people. The trick here is to identify the right technique for each task, set things up, and run. I'm so smart. I'm human. I can do that and computers can't. Why? Because I have knowledge stored in my brain about what those techniques do. How do I apply that knowledge? Well … aahh … uhm … it's sort of like rules. Should I go ahead and develop a more autonomous level of useful computer processing, or just say naaaah, that would be so-o-o 1980s?
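Since I've said “sort of like rules,” here is roughly what that knowledge can look like as code. This is a minimal sketch with made-up rules and task properties, not anyone's production system: a small rule table that picks an analysis technique from a description of the task.

```python
# Each rule: (condition over task properties, technique to apply).
RULES = [
    (lambda t: t["outcome"] == "numeric" and t["predictors"] == "numeric", "linear regression"),
    (lambda t: t["outcome"] == "categorical" and t["n_samples"] < 1000,    "logistic regression"),
    (lambda t: t["outcome"] == "categorical",                              "gradient boosted trees"),
    (lambda t: t["outcome"] is None,                                       "clustering"),
]

def choose_technique(task):
    """Fire the first rule whose condition matches the task description."""
    for condition, technique in RULES:
        if condition(task):
            return technique
    return "ask a human"

print(choose_technique({"outcome": "categorical", "predictors": "numeric", "n_samples": 200}))
# -> logistic regression
```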

What if my more autonomous system could in fact perform this feat, but compared oddly with humans regarding which kinds of cases it knows how to handle? In other words, what if it has limitations, but they aren't the same as my limitations? What if, in comparison, there are still some cases that I can handle better than the computer? Oh, yes, by the way: I have limitations. (Admit it, so do you.)

Wednesday, April 4, 2012

High Level Logic and Constructal Law

Visit the High Level Logic (HLL) Website
— Home of XPL (eXtensible Process Language)

It's like XML, but you can store source code in it.

I've just finished reading chapter 1 of Design in Nature: How the Constructal Law Governs Evolution in Biology, Physics, Technology, and Social Organizations, by Adrian Bejan and J. Peder Zane. It has me wondering if constructal law will turn out to be the thing that will help me explain why HLL is a superior software framework.

Constructal law sees “design in nature” in terms of evolving flow structures. From an author's description: “Constructal theory holds that flow architecture arises from the natural evolutionary tendency to generate greater flow access in time and in flow configurations that are free to morph.”

As mentioned, I've just finished chapter 1 of Design in Nature. I'm not ready yet to produce a proof, based on constructal law, that HLL is an advancement, and a fundamental one, as I believe it is. But even the concept, as explained with examples in the introductory chapter, raises the question of whether such a proof should be possible.

It's at least largely about flow. HLL is about logical flow, and about designing (or evolving) “generic” (more accurately, “general” or “vastly reusable”) flows for high level logic. That might almost be said of any computer software, except for the “generic” or “general” or “vastly reusable” systematic flow of high level logic. Yes, as I said: not a proof yet.

I'm pushed along a bit by the experience of it all. It's not just about flow, but flow has always been on my mind as a critical aspect of HLL development, every piece of it, every aspect of it.

Recalling my early (mid-1980s) notes on HLL, I was very concerned about finding a more generic structure or container for passing data around. This is part of what, predictably, evolved in computer science while I was busy doing other things. Few understood my excitement over XML, or my comments on the broad possible uses of RDF (having actually studied the standard rather than just flipping through an early example).

Fair enough (I hope you think), but the fact that these things evolved without HLL means that other people were interested too. All I'm saying is that these developments were seriously stokin' my pipe for yet another set of dreams about how software development was and will evolve. One could produce “tree structures” for data that could morph. It seemed quite profound to me, not just convenient.

Because I'm not ready to prove anything yet and expect to discuss this further in the future, I'll be brief, taking you right to the latest. I did not feel that I could create the HLL system properly until WebSockets came along. Browsers are everywhere, and that's why they should serve as the interface for applications. I need the symmetry of flow that WebSockets finally provide.
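For the curious, here is a minimal sketch of the symmetry I mean, assuming the third-party Python websockets package; the handler, port, and messages are only illustrative and not HLL's actual interface. The browser can send at any time, and the server can push without being asked, over the same connection.

```python
# Two-way flow sketch using the third-party "websockets" package
# (pip install websockets; recent versions accept a one-argument handler).
import asyncio
import websockets

async def handler(websocket):
    """Symmetric flow on one connection: browser-initiated and server-initiated messages."""
    async def push_updates():
        # Server-initiated flow: push a message without waiting for a request.
        while True:
            await websocket.send("server-side event")
            await asyncio.sleep(5)

    pusher = asyncio.create_task(push_updates())
    try:
        async for message in websocket:      # browser-initiated flow
            await websocket.send(f"echo: {message}")
    finally:
        pusher.cancel()

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()               # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```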

HLL isn't as restrictive as the frameworks I've seen, nor nearly as specialized. It should be free to express any application, rather easily.

What I've said at this point is that I'm ready to see HLL in terms of flow architecture that arises from the natural evolutionary tendency to generate greater flow access in time, and in flow configurations that are free to morph.

So, my pipe is back to producing dreams and we'll see what becomes of them.