Friday, October 29, 2010

The Ghosts in My Machine: Chapter 1

Visit the High Level Logic (HLL) Website
— Home of XPL (eXtensible Process Language)

It's like XML, but you can store source code in it.

As a young man, when I enjoyed idle time and my daydreams tended to wander in strange directions, I found myself considering a rather unimaginative question. As computer languages and tools evolve to higher and higher levels - “bottom-up” - where will they eventually end up? Where's the top?

To put this contemplation in perspective, the year was 1985. The computer under my desk was a first generation Texas Instruments PC with two floppy disk drives. Ethernet cables were being strung through our offices to network our computers for the first time, allowing messages to stream around the building at the blistering rate of up to one thousand bits per second. The idea of using personal computers to access a wide array of interesting information, to project presentations stored on them, and to somehow integrate them into “teleconferencing” was a subject of advanced industrial research and design. The Apple Macintosh was recent news. Yellow power ties were “in.” On the cutting edge, people debated whether “object oriented programming” would ever really catch on.

All past generations are perceived as naive; scant comfort as I tell this story now. For I became somewhat obsessed with that odd and uninspiring question. Where is the top? As unanswerable as it seemed, the more I thought about it, the more I found value in the thinking.

The question did not occur to me entirely at random. The group that employed me at the time was particularly interested in “rule-based expert systems,” a well-developed form of rule-processing that was, at the time, thought of as artificial intelligence technology.
(RULE-1: (IF: <user-1> ((think)) that's <naive{in hindsight}>)
(THEN: ((wait)) 'till <user-1> ((READ-CONTINUE))))
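
In more mundane terms than the little rule above: a rule-based system is, at its core, a collection of condition-action pairs evaluated over and over against a “working memory” of facts until nothing new can be concluded. The sketch below is only a present-day caricature in Java, with invented rule names and facts, for readers who have never looked inside one; the real tools of 1985 looked nothing like this.

    // A toy forward-chaining loop -- purely illustrative, not any real 1980s tool.
    import java.util.*;
    import java.util.function.Predicate;

    public class TinyRuleEngine {

        // A rule is a named condition over working memory plus one fact to assert.
        record Rule(String name, Predicate<Set<String>> condition, String conclusion) {}

        public static void main(String[] args) {
            // Working memory: the facts the system currently believes.
            Set<String> memory = new HashSet<>(List.of("engine cranks", "engine does not start"));

            List<Rule> rules = List.of(
                new Rule("R1",
                    m -> m.contains("engine cranks") && m.contains("engine does not start"),
                    "suspect fuel or ignition"),
                new Rule("R2",
                    m -> m.contains("suspect fuel or ignition"),
                    "recommend: check spark plugs")
            );

            // Keep firing rules whose conditions hold until no new fact is added.
            boolean changed = true;
            while (changed) {
                changed = false;
                for (Rule r : rules) {
                    if (r.condition().test(memory) && memory.add(r.conclusion())) {
                        System.out.println(r.name() + " fired -> " + r.conclusion());
                        changed = true;
                    }
                }
            }
        }
    }
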
In historical context, it was actually a much more interesting time than empty hindsight might suggest. There was focus on expanding the common roles of computers in data fetching and number crunching to include more powerful forms of symbolic processing. Artificial intelligence research was defined as trying to get machines to do what, at the moment, people do better. It was a time to think basic thoughts and explore new directions. And then there was the hype.

The idea of artificial intelligence fascinates. Writers in the popular press could not resist the temptation to contemplate fantastic futures brought about by its commercialization, as if the full blossoming of machine intelligence was only months away. It happens in every generation. Today, in the light of well-funded advances in robotics, we worry too much about machines becoming in some way more intelligent than people and using that intelligence to take over the world – a theme perhaps not yet entirely exhausted in science fiction. There are the annual singularity conferences that include discussion on uploading your mind to a machine so that the ghost of your thought patterns can survive your death. (Sadly, it seems that not everyone will have use of a robot body like Zoe Graystone.) And then one can well wonder about human-machine marriage law once robots have become sufficiently advanced to serve as satisfactory sex mates and companions. (But can it cook?)

In the mid-1980s, perhaps we were too naive to connect such dark and mystical thoughts to our first generation personal computers. There simply wasn't much personality in “C:> dir /P” displayed on a monochrome screen. Rule-processing systems were just part of the parade of options opening up a new world of intelligent automation. But with them, complex business issues would be resolved at the press of a button, quality medical diagnosis and agricultural advice would be delivered to third world countries on floppy disks, and the art and skill in untold areas of human endeavor could be translated into computer programs by “knowledge engineers.” As machines began turning out the best advice human experience and machine computation could deliver, the quality of life could improve everywhere and the world would become a better place.

The mood was upbeat. The excitement palpable. Researchers in all fields began competing for new funding to apply the technology in their own fields. Their ideas were wonderful, their plans divine. There was seemingly no end to the possibilities. I had a front row seat to much of it. My job involved presentations and discussions at universities and national laboratories throughout the country. I saw the wonder and curiosity in their faces as the software concepts were first introduced and the thrill of starting new projects with great promise. I exchanged friendly comments and jokes with them as their work proceeded through the first interesting efforts and did my best to respond to the thoughtful and carefully stated questions as problems arose. And then, in what in the vastness of history may seem like nothing more than a split second after the whole thing began, I felt their frustration turning to anger. For most of them, it turned out, rules were not enough.

Leading artificial intelligence researchers of the day were scrambling to make a go of their own commercial enterprises. There was evidence of bottom-up evolution in complexity. Object oriented programming offered an opportunity to model concrete objects and abstract ideas from the real world, and that idea was applied in frameworks that allowed logical relationships between objects. “Frames” also provided a way to break rule systems into sub-systems.
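
To make the “frames” idea a little more concrete: a frame is, roughly, a named structure with slots, default values, and an is-a link to a parent frame, and rules could be grouped by the frames they referred to. What follows is only my own present-day sketch in Java, with invented class and slot names, not a reconstruction of any tool of the period.

    // A loose caricature of a "frame": named slots, defaults, and an is-a link,
    // so knowledge (and the rules that use it) can be grouped into sub-systems.
    import java.util.*;

    public class FrameSketch {

        static class Frame {
            final String name;
            final Frame isA;                              // parent frame, if any
            final Map<String, Object> slots = new HashMap<>();

            Frame(String name, Frame isA) { this.name = name; this.isA = isA; }

            // Slot lookup falls back to the parent frame: crude inheritance.
            Object get(String slot) {
                if (slots.containsKey(slot)) return slots.get(slot);
                return isA == null ? null : isA.get(slot);
            }
        }

        public static void main(String[] args) {
            Frame vehicle = new Frame("vehicle", null);
            vehicle.slots.put("wheels", 4);

            Frame truck = new Frame("truck", vehicle);
            truck.slots.put("payload-kg", 2000);

            System.out.println(truck.get("wheels"));      // 4, inherited from vehicle
            System.out.println(truck.get("payload-kg"));  // 2000, local to truck
        }
    }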

But these few steps were not enough to keep the artificial intelligence revolution promised by excitable journalists going. It had been an interesting top-down experiment led by a few researchers whose bottom-up developments did not go far enough, fast enough. Too many visionary application builders from outside of engineering were failing. Overly heightened expectations caused the reputation of the whole of the artificial intelligence idea to suffer greatly. It didn't matter that someone somewhere might have been thinking about the next solution. Funding agencies and investors lost interest.

The change in mood turned my role in the experiment into something like a traveling complaint department. I tried, once or twice, to remind those who had used our technology that the scope and limitations of our products had been explained. In some cases, the conversations were merely new versions of old chats in which I had directly stated that significant parts of their design were not supported by our products and would be difficult to implement. But the disappointment was as palpable as the excitement had been before. Their visions had been clear, their ideas wonderful, their plans divine. It must be possible to do what they wanted to do. Something was to blame.

The groups that had tried to implement sophisticated intelligent applications cut across a nice spectrum of academic disciplines and interests. This led to what initially seemed a wide range of application-related problems. But as I listened to more stories, some parts of their problems began to sound similar. A pattern emerged. Underlying the diversity of application interests lay a concrete set of basic, definable technical problems.

Our company had a particularly good working relationship with one of the leading researchers, so I sent an email describing the problems. The list became part of a conference presentation somewhere. But there was no way that I could stop thinking about them. In my mind, the crash of artificial intelligence technology in the 1980s was transforming itself from a great disappointment to a unique opportunity. I had discovered a clear definition of the difference between what practical application builders wanted to do and what the technology of the day had to offer.

In the normal course of events, I might have worked on designs that addressed each of the individual problems on the list, with each solution having a potential for commercial application. This is the way progress is normally created - “bottom-up.” But the circumstances triggered more philosophical thoughts. The combination of the overly optimistic expectations of funding agencies and application builders demonstrated that bottom-up is not always good enough. Had my degrees been in management or marketing, I would probably have simply noted it as a classic error. But I am an engineer. Problems are meant to be solved, and it is somehow not in my nature to be able to turn my brain off to them.

It was in this stream of events that my mind became fixed on that naive and rather unimaginative question. If technology developers had anticipated the needs of application builders and moved directly toward satisfying those needs, the failure would not have been imminent, as, in hindsight, it clearly was. But this was cutting edge stuff. How would anyone know what they had not yet discovered through experimentation and experience – trial and error – itself yet another problem identification step that could lead to more progress?

Something else began to play in my mind. It had already become part of my life's wisdom to recognize that seemingly simple things, the ideas and processes that we mostly take for granted because they are common and therefore presumed uninteresting, are often the most profound. On many occasions I have seen sophisticated ideas initially fail, and then slowly evolve until the most basic logical considerations forced compliance. Then they worked.

Involvement in artificial intelligence naturally leads to thinking about intelligence generally, about our own intelligence, and about our own thought processes. If artificial intelligence works on things that, at the moment, humans do better than machines, then we think about what we do and how we do it. My brain had not turned off this self-reflection either. Something had triggered my concrete confidence that I could solve problems that had not previously been solved – and that the solutions would be useful. I have mentioned it. Is it something that you, dear reader, have taken for granted because it is common and therefore uninteresting? Or do you know to what I refer?

There are pieces of generic processes: simple, common, so ordinary as to be overlooked. A common thought, that systematic problem solving begins with defining the problem, sent me to the mall to find a laboratory notebook. The jottings, ideas, diagrams and pseudo-code represented my obsession with breaking free of the bottom-up approach to development and progress. My eyes turned from the problems immediately surrounding me and slowly upward into a seemingly endless empty space. My search for “high level logic” had begun.


Footnote to chapter 1: The specific “failure” of artificial intelligence technology presented in this article has to do with its presentation and timing, along with overly enthusiastic expectations over a relatively short period. (related article) As techies around the world know, object oriented programming became a mainstream technique supported by extremely popular programming languages. Advanced rule-processing is an added tool in such things as database systems and embedded in a wide range of “smart” products. The ability of software to process “business rules” is commonly marketed. One open-source rule-processing system (Drools) became so popular that it's now included in the JBoss Enterprise Platform, and there is a standard set of interfaces for rule processing components in the Java programming language. Although I had not seen “problem definition” as a framework component in a larger generalized process, the importance of problem identification had been pointed out in specifics, particularly with the development of diagnostic systems.

Link to Chapter 2.
