Saturday, November 13, 2010


When Will We Have Artificial Intelligence?

Visit the High Level Logic (HLL) Website
— Home of XPL (eXtensible Process Language)

It's like XML, but you can store source code in it.

When are we going to have AI, one survey asks? It's a question relevant to HLL because so much of the thought behind the HLL design comes from the history of AI research and from technology that research has produced. The answer to the question of when, with reference to HLL, is now. (Or at least as soon as version 1.0 is ready.) And that's no reason to worry. As the description of HLL claims, you don't need a high-powered computer science background to build applications with it – just some (OK, reasonably good would be nice) programming knowledge.

The AI question is actually a bit tricky. It really depends on what you mean by AI. Way back in the cave computer days when I was first introduced to the subject, artificial intelligence research was defined as trying to get computers to do things that humans currently do better. Applying that definition, it seems as though the answer may be never. As soon as computers can do something at least as well as or better than humans, it's no longer the subject of AI research. Object oriented programming is an example of something that came from AI research. Now that it is a mainstream programming paradigm, many people don't associate it with AI at all.

The variety of ways of thinking about AI is also why some researchers predict AI won't exist until far into the future while others (like me) are much more optimistic. People who answer the question may have something very specific in mind and think it will be a long time before it becomes reality. You can also think about all the things computers do now – such as mathematical calculation, something humans and computers both do, and computers do well – and make a case that AI already exists. The great variation in predictions of when AI will come has to do both with the particular set of things a guesser thinks must be done before “AI exists” and with how optimistic or pessimistic they are about doing them – while basic research, as always, keeps looking ahead.

You've probably heard that human intelligence is linked to the fact that we have opposable thumbs and other peculiar physical characteristics, like standing upright and walking erect. Researchers recognize that in living creatures, intelligence and the characteristics of the physical body are linked, which makes robotics fertile ground for AI. (related article) Not all researchers focus exclusively on human intelligence and capabilities, however. Some of the most interesting advances have come from looking for ways to mimic the behavior of other creatures, from insects and snakes to mules. The intelligence of a lower species is still intelligence, and some of the developments that come from mimicking its behavior can be applied, in layers, when mimicking the behavior of higher ones.

Where does HLL actually fit in? Twenty-five years ago, when I was first thinking about the “high level logic” problem, I thought of it as a subject for advanced research. Since then, computer languages have advanced considerably, and in ways that directly match the requirements of HLL. Strong networking support is a must, and it has come from the focus on Internet applications. Relatively recent additions to Java (which I've used to build HLL), such as strong support for generics and reflection, have transformed some of the challenging bits into stuff that's just pretty cool. (Once again, application developers are not required to have expertise in these techniques – although it's quite alright if they do.)
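To give a feel for what reflection buys here, consider a minimal sketch – not HLL's actual code; the class and method names are invented for illustration – of attaching an application component named in configuration data:

    import java.lang.reflect.Method;

    // Minimal sketch: instantiate a component class named in configuration
    // data and invoke one of its methods without compile-time knowledge of it.
    public class ReflectiveLoader {
        public static Object invokeByName(String className, String methodName, Object arg)
                throws Exception {
            Class<?> clazz = Class.forName(className);  // look up the class at runtime
            Object component = clazz.getDeclaredConstructor().newInstance();
            // Note: getMethod requires an exact parameter-type match here.
            Method method = clazz.getMethod(methodName, arg.getClass());
            return method.invoke(component, arg);
        }
    }

A class that was never referenced at compile time can in this way be attached to the running system purely through configuration data.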

To some extent, even the concept has been encroached upon (so to speak). Short descriptions of HLL have called it an “agent system,” and I worry at times that it will be perceived as nothing more than an alternative to existing agent systems (which I won't mind so much if it becomes a popular one). The overall HLL concept is the thing that remains new, and while it fits well into the modern software development world, I still think it has potential as a tool in advanced AI research and application development.

HLL development has been proceeding as an ordinary software development project. With modern software technology and twenty-five years of thought behind it, not much experimentation is now required – less than the ordinary amount for a complex software system, because even the details, and how they all fit together, have been thought through beforehand. All of which is why version 1.0 will be a powerful, light-weight system that is easy to use.

So, is it AI? When people are using it regularly to build applications, I certainly hope it's thought of as AI just as much as rule-processing or object-oriented programming and all the other things that have come from thoughts on developing AI; and yet, fully accepted and integrated into mainstream applications development. Why not integrate HLL support directly into programming languages?

For most people, thoughts on what AI is focus continually on the future. With twenty-five years of history behind me, I think I've earned the right to end this note with a tired old cliché: as far as HLL is concerned, the future is now. (Finally!)

Another view: How Long Till Human-Level AI? What Do the Experts Say?
(AGI = Artificial General Intelligence): “I believe the development of AGIs to be more of a tool and evolutionary problem than simply a funding problem. AGIs will be built upon tools that have been developed from previous tools.”




Wednesday, November 10, 2010

HLL Factory

Visit the High Level Logic (HLL) Website
— Home of XPL (eXtensible Process Language)

It's like XML, but you can store source code in it.

It occurs to me that HLL application units should be easy to integrate with other types of application components. The current design allows any sort of application component to be integrated into the HLL logic - since HLL was conceived as the "high level" logic for applications. Interaction between HLL units (an agent approach) is fundamental to the design.

But some application builders may want to incorporate HLL unit technology within other types of programs. I haven't thought of a reference or example application yet, but the need seems conceptually obvious. There may already be an HLL infrastructure in operation, and an application designed in some other way may want to interact with it, with either fixed or variable specifications for its exact purpose.

In such cases, an application may want to make use of an HLL Factory to construct an HLL unit on the fly – one that will interact with other HLL units just as if it were one of them. The fact that HLL pushes the high level application logic specification into data makes it easy to create or modify that specification on the fly.
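As a rough illustration – every name here is hypothetical, not HLL's real API – a factory might accept a specification composed at runtime and hand back a unit that joins the running infrastructure as a peer:

    // Hypothetical sketch of an HLL Factory; all names are invented for illustration.
    class HLLUnit {
        private final String name;
        private String spec;

        HLLUnit(String name) { this.name = name; }

        void configure(String xmlSpecification) { this.spec = xmlSpecification; }

        void register() {
            // A real implementation would announce the unit to the running HLL
            // infrastructure so that peers can interact with it as one of their own.
            System.out.println(name + " registered (spec length " + spec.length() + ")");
        }
    }

    public class HLLFactory {
        // Because HLL pushes the high level logic specification into data, the
        // specification can be composed or modified at runtime before construction.
        public HLLUnit createUnit(String name, String xmlSpecification) {
            HLLUnit unit = new HLLUnit(name);
            unit.configure(xmlSpecification);
            unit.register();
            return unit;
        }
    }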

Creating a good HLL Factory set-up would take some design thought, but it is quite doable – so much so that I expect it will surely be done as the concept matures (along with completing more of the code toward version 1.0).

I have added HLL Factory to the issues list, but not as a high priority. Since HLL is not at version 1.0 yet (but getting closer every day – and I'll remind everyone that it is already possible to build applications) and there is no existing HLL infrastructure out there (although there are agent systems running), I sense no great demand right now for dynamically constructed units that interact with an existing HLL infrastructure. The day will come however ...

Sunday, November 7, 2010

The Ghosts in My Machine, Chapter 3

Visit the High Level Logic (HLL) Website
— Home of XPL (eXtensible Process Language)

It's like XML, but you can store source code in it.

Chapter 1
Chapter 2

Prepare yourself for a surprise ending. Do that now to avoid confusion later.

Around 1990, I met with an industrial engineering professor who had been working for years with artificial intelligence technology. We had a long chat about the possibility of completely automated factories. This was still a decade before frequent online purchasing and customer management systems. But it seemed reasonable to contemplate a future in which everything from initial customer contact, sales, accounting, instructions to the factory floor, robotic manufacturing on demand, packaging, right out to the shipping dock would be fully automated.

Even if you've never considered designing such a system and are unfamiliar with any work in that direction, it isn't long before the thought occurs that such a system should be broken up into departments and operational segments. Each operation has its own specialized “knowledge” and processing to contend with, and it seems likely that the interfaces between operational segments need only handle common data exchanges. Jumping ahead, one might consider that an agent system could provide loose coupling between operational components, as well as the possibility of moving pieces around from time to time for special types of interdepartmental operations.

It isn't a big jump to consider that breaking the application into “departments” is, at least in part, an exercise in modeling the framework of the human organization that complete factory automation would replace. The fact that this model already exists, and that it's completely common, may be the most important reason that the idea of breaking the software design up this way is so obvious. Compartmentalization is not a concept invented by computer scientists. Humans do it all the time, and have been doing it for ages. If we're going to think more about obvious common ideas, then this idea easily qualifies on that ground.

The concept of an “object” was also not invented by computer scientists. (I merely need to open this idea up so that I can be flexible about its use without confusing anyone – i.e. a fair warning against thinking “simply” in terms of common current-day object oriented programming structures and techniques.) Objects exist in the world around us, and we can ourselves be described as objects and operate as such. They have characteristics, and some have capabilities. They interact in various ways and react in various ways to interaction. Complex (compound) objects can be composed of simpler objects, the whole sometimes being more than the simple sum of its parts. The concept of an object or “entity” (in certain modeling methods) is fundamental and useful in describing pretty much everything – solid ground for high level logic.

Dwelling on what at this point may seem quite obvious: we're – very generally speaking – looking to design a set of organized objects. One way or another, the individuals and/or the organization will be structured to carry out tasks, solve problems, answer questions, and so on. The important distinction here is that the intent is to build a general processing engine, not a specific application that could be described in exactly the same way – as a set of organized objects. The expected practical effect, if successful, is that some portion of the “common” logic in a wide range of applications can be moved into the general processing engine, where it does not need to be repeatedly rebuilt in the application phase.

Another important distinction is that the High Level Logic Project attempts to do this by searching for “high level logic” rather than by “bottom-up” development. This is why I've chosen such a long-winded approach to introducing HLL. Ideas that are common and fundamental are useful and enduring. They don't need to be derived - "bottom-up" - from the most recent set of application programming frustrations.

So let's take the next step in thinking about departments in human organizations. There are various kinds of organizations, but I didn't mind leaning on my familiarity with the typical American corporate structure, and particularly with project oriented groups. Projects begin by defining a task or problem and end with a specified result. There is at least a rough analogy between this process and the general problem solver outlined in chapter 2. Project organizations are made up of individual people with specialized roles and responsibilities that form sometimes simple and sometimes complex task hierarchies.

Certainly our task cannot be to describe the individual characteristics and operations of all existing human organizations, or even of one large organization like IBM. Organizations change, and a detailed description of any specific organization is not a general model. A general high level logic engine needs entities that can be characterized by application developers or another process. The system for describing organizations needs to be easily extended by adding, defining, and organizing entities. The relationship between entities needs to be “loosely coupled” so that they may move around, help others by use of their special knowledge and capabilities, and form sub-groups to perform tasks as needed.

There is a difference between general and generic. The description above remains pretty generic. Certainly, organizations as “organized objects” seem so. It is possible to add more specifics to a general model (applying to all or most members of a category or group), or to one postulated as being general (not specialized or limited to one class of things), without fear of limiting an application developer's creativity and adaptations to meet specific requirements. Allowing developers (and other processes) to customize, add, delete, define, redefine, and organize – i.e. to completely control the specific definition of the generic engine's application – does that. (It's something like bringing entity-relationship diagrams to life.)

But still, it plagued me that there is apparently at least a rough analogy between the workings of a project organization and the general problem solver outlined in chapter 2. What minimal organization could assure complex task completion and be highly relevant to a wide range of software applications?

I had spent years working in project groups, and it was that experience that I most thought about when making the analogy. Generalizing a bit, the organization started at the top with an executive (like a VP of Engineering), who reigned over one or more (project) managers, who organized and managed the activities of groups of experts (in most of my personal experience – engineers). In the human world, such a simple three-layered organization has been responsible for carrying out a very large range of very simple to extremely complex tasks.

It was time to describe the characteristics and capabilities needed for a basic three-layer organization model, one that made sense generally, and would be highly relevant to a wide range of software applications. That part turned out to be not particularly difficult, and led to detailed design requirements and specifications (chapter 4, some of which are already hinted at in this chapter).

Minimal description of the three-layer HLL general organization model:
Executive(s): Interacts with external and internal “interests” to formulate, offer, accept, or reject proposals or demands. Decides whether or not to make commitments. Is concerned about the general allocation of resources within an area of responsibility. Assigns complex tasks to managers.

Manager(s): Plan and implement tasks that have been assigned to them, using available resources.

Expert(s): Carry out work details for which specialized knowledge and skills are needed, often in cooperation with other members of a group.
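For concreteness, here is a minimal sketch of how the three roles might be rendered in code. The interfaces are hypothetical, invented for illustration rather than taken from the HLL design:

    import java.util.List;

    // Hypothetical sketch of the three-layer model; names are illustrative only.
    interface Expert {
        String performWork(String workDetail);        // specialized knowledge and skills
    }

    interface Manager {
        void planAndExecute(String task, List<Expert> resources);
    }

    interface Executive {
        boolean considerProposal(String proposal);    // decide whether to make a commitment
        void assignTask(Manager manager, String task);
    }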
A very clean model so far, and perfectly obvious in many respects. On the good side, it could easily be taken as a generalization of many processes, including many, many software applications. But then I felt a great discomfort with the human organizational analogy. I, and a great many others, often feel discomfort at being a cog in the wheel of an organization. It's not quite natural. Something is wrong. (Here comes the surprise ending.)

Don't we all, individually, commonly carry out tasks requiring all three levels described above in our daily lives? What's the difference between a large distributed organization and an individual (in the context of this discussion) other than that large distributed organizations can get a lot more done in a specific amount of time? There isn't necessarily a difference between the structure and function of a multi-person organization (model) and an individual.

A friend of mine who happens to be a cook recently asked me to explain that to him. A cook interacts with the outside world (outside the kitchen); a good example being the communication of an order from a customer via a waiter (an agent). The cook may accept or reject the order (if, for example, the order is incomplete or the kitchen has run out of an ingredient required to make a dish). If the order is accepted, he commits to the task, then assembles resources (ingredients, cooking implements) according to a plan (the complete dish specification and recipes). He then prepares the food and the dish, using specialized knowledge and skills.

The fundamental high level logic had not collapsed, but it became apparent that application developers should be given the greatest flexibility in interpreting, mixing, and applying its pieces. Large organizations like IBM are human creations, sometimes necessarily physically distributed, that can accomplish more in a particular amount of time. Otherwise (in the context of this discussion) they are nothing more than a reflection of ourselves and of the way we – necessarily – accomplish tasks that require a higher level of intelligence.

From the High Level Logic (HLL) Open Source Project blog.

Thursday, November 4, 2010

XML Configuration Processing – Progress Report

Visit the High Level Logic (HLL) Website
— Home of XPL (eXtensible Process Language)

It's like XML, but you can store source code in it.

The importance of issue #1, Configuration Files and Processing, should not be underestimated. The configuration system is used to attach application components to the generic HLL processing system. It makes sense to create a powerful configuration system that is easy for application developers to use and flexible enough that it does not impede creative development. And when the project begins building tools to further simplify the development process, they will (in part) simplify the construction of the configuration files that, in effect, define HLL applications. A powerful configuration processing system will facilitate powerful but easy-to-use tools. (For programmers, the process is already easy even without the tools.)

In addition, what is becoming a sophisticated configuration processing system will be extended to process plans (management layer) and then further extended to build a simple rule-processing system. (Sophisticated rule-processing can be handled by existing sophisticated rule-processing systems.) All of these features require processing XML files in sophisticated ways. (Including things like handling class and method specifications generically, building things on the fly with reflection techniques, and constructing and modifying the XML files themselves on the fly.)

My progress on developing the configuration system has been interrupted by other demands; just the sort of thing that one must expect in this kind of project. I was, for example, invited to present the HLL project to a group of about 150 IT professionals in Stockholm, which I was quite happy to do. And if you've been following this blog, you know I've spent some time drafting commentary on HLL in a way that I hope provides a meaningful introduction. (If you have been following, please note that I expect chapters in The Ghosts in My Machine to become less historical / philosophical and more concrete and technical, eventually explaining the design and how to build applications.)

The prototype already had a working configuration system – it would not have been a working prototype without one. But it was much less flexible than it should be in version 1.0. In order to get the prototype up and running quickly, I had done things like hard-coding configuration file names and writing custom handlers for each file. That was good enough for proof-of-concept, but not what I'd call “real code.”

Back to work, progress has been good.

Configuration files and handlers are now specified in a master configuration file which is read and processed. It will also be quite simple to add an option to specify the master file as a command line argument. (So simple, that I may do that before committing the code.)
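To make that concrete, here is a hypothetical sketch of what such a master file might look like. The element and attribute names are invented for illustration; the actual HLL format may differ:

    <!-- Hypothetical master configuration file; names are illustrative only. -->
    <hll-configuration>
      <!-- Each entry names a configuration file, the handler class loaded
           (via reflection) to process it, and an optional schema to check it against. -->
      <config file="agents.xml" handler="example.config.AgentConfigHandler"
              schema="agents.xsd"/>
      <config file="experts.xml" handler="example.config.ExpertConfigHandler"/>
    </hll-configuration>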

The HLL predefined configuration files are optionally checked against schemas. This option will be available for configuration files created by application developers as well.
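In Java, this sort of optional schema check can be done with the standard javax.xml.validation API. A minimal sketch, with placeholder file arguments:

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;

    // Minimal sketch: validate a configuration file against an XML Schema.
    public class ConfigValidator {
        public static void validate(File configFile, File schemaFile) throws Exception {
            SchemaFactory factory =
                    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(schemaFile);
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(configFile)); // throws SAXException if invalid
        }
    }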

I've decided to stop working on configuration for the moment. It should not take too long to deal with issue #2, FIPA Compliance, and I hope to include this, along with the current configuration file handling improvements, in a new source code commit by the end of the week (i.e. tomorrow). Then I think it would be an extremely good idea to update the simple demonstration binary download and the tutorial on application development.

It seems to me that moving directly on to plan processing before returning to configuration would be a good idea. Plan processing is intended to be built on the processing capabilities of the configuration system. Working on planning first allows me to paint on a fresh palette with modified code from the configuration system to get the result that I want – assuring that it works for planning. Then I can return to configuration and apply the relevant new bits for the final upgrade.

Monday, November 1, 2010

The Ghosts in My Machine: Chapter 2

Visit the High Level Logic (HLL) Website
— Home of XPL (eXtensible Process Language)

It's like XML, but you can store source code in it.

Link to Chapter 1

I understood the differences between what application developers wanted to do and what the “artificial intelligence” technology of the late 1980s supported. The differences were much greater than could be dealt with in a few software updates. What had been, in effect, a broad survey of application needs resulted in a snapshot of a more basic set of technical requirements. This snapshot taught me much about the path of development of software technology generally, decades into the future.

Much of that future would evolve with or without me, as developers pushed to realize their dreams and the more basic technology developers – involved in computer languages for example – responded to demand by building their tools bottom-up. But I had become obsessed with what initially seemed like an unapproachable question – one that might raise a slight giggle around a lunch table. As computer languages and tools evolve to higher and higher levels - “bottom-up” - where will they eventually reach? Where's the top? Even if the question is unanswerable, the more I thought about it, the more I found value in the thinking.

There were also two general observations about people that allowed me to consider that my ideas might be at least somewhat unique. The first was that people often mistake common, ordinary ideas as trivial. In fact, the opposite is more likely true. If something is common, it is likely important, or at least extremely useful. Simply cast the thought in different words – the basic economic measurements of supply and demand, for example – and more people will agree: something becomes common in a culture because there is demand for it.

The second was that, at the time, it was rather difficult to get many experts to think about the difference between the idea of a machine doing something on its own and what they themselves were doing. Many programmers scoffed at the idea of specialized rule-processing engines, because they could already use if-then statements in their own programs. Management consultants and statisticians could not see a reason for interest in theoretical math engines that could build their own models and solve problems, because statistical packages already allowed them to select from a variety of basic mathematical forms and discover a set of parameter values that produced a “best fit.”

Even as I write today, a quarter century after my thoughts on “high level logic” began, I wonder if I will hit the same snags as I try to drum up interest. That software tools evolve to support higher level functionality is quite commonly understood. And then there is the trigger that set my search in motion. Every systematic analyst (whether professional or not) knows that it is important to define and understand a problem before expending much effort to solve it. They've done it themselves, over and over again, many times.

Let me proceed then to something else that was already common before I began my quest. Not only is it not my invention; if I recall correctly, it was first introduced to me in grade school. This is certainly the kind of thing that could potentially send readers off to another page, thinking it proof that I have nothing new to say. A general problem-solving algorithm has long been described that begins with defining the problem. I trust that the process will be recognized by a great many people:
1. Define the Problem or Task
2. Assemble Detailed Information
3. Perform Analysis
4. Identify Alternatives
5. Recommend Solutions
This is a description of an extremely general process; and that certainly is a characteristic sought after in the search for high level logic. Not only do professionals systematically follow such a process in dealing with complex problems and issues, but it is difficult to think of any, even slightly complex process that cannot be cast in its mold. If I, for example, feel hungry at 10:30 at night while watching television, I will already have defined the problem. I am hungry. I will then proceed to the kitchen to assemble information. Looking into the fridge, I will see what is available. I will analyze that information, considering how I might use the ingredients to make something to eat. That will produce a set of alternatives, from which I will choose. (“Recommend Solutions” is a phrase chosen in context. I had been considering the development of better “expert systems.”)

Having given a good bit of thought to rule-based expert system technology, it was easy to see that the above general problem solving algorithm could be used as the design of a software engine, which I describe below. Once I was certain that the idea was concrete and feasible to implement, I began thinking of it as the rule-processing equivalent of “structured programming” – i.e. the addition of for, while, and if-then statements to “high level” languages, and of functions replacing goto statements. Not miracles, but an extremely useful advance in computer technology.

Rule-based expert systems had a great deal of success in focused diagnostics, which meant no new investigation was needed to see that the same approach could be used in the first step of the general problem solving algorithm. To fill things out, however, the process needed to proceed to step 2 rather than move too quickly toward the last step – making recommendations. The concept of “objects” (although not necessarily their implementation in object oriented programming languages) easily suggested the transition from step 1 to step 2. Each meaningful “entity” involved in the problem definition would have associated characteristics and resources. Resources can include sources of information, such as databases or flat files. Where circumstantial information is needed in each run, either updated information would be drawn from these sources or the user would be asked.

The need to integrate step 3 into the process was one of the best known problems at the time. The age of open systems development had not yet arrived. Popular rule-based expert system tools had been built as stand-alone products that supported only rule-processing; they could not be integrated with, or extended by, other programs. The JBoss rule-processing system mentioned in chapter 1 deals with this problem directly. The rule-processing software is built in Java, carrying with it the characteristics of the underlying language. A rule-process can be initiated from another program, and program objects can be referenced directly in rules. This allows programmers much greater flexibility to fashion their own hard-coded relationships between rules and other components in each application.
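For a flavor of that kind of integration, here is a rough sketch using the Drools 5-era API (the JBoss rule system as it stood around this time); the rule file name is a placeholder and details vary by version:

    import org.drools.KnowledgeBase;
    import org.drools.KnowledgeBaseFactory;
    import org.drools.builder.KnowledgeBuilder;
    import org.drools.builder.KnowledgeBuilderFactory;
    import org.drools.builder.ResourceType;
    import org.drools.io.ResourceFactory;
    import org.drools.runtime.StatefulKnowledgeSession;

    // Sketch: initiate a rule-process from an ordinary Java program.
    public class RuleRunner {
        public static void run(Object fact) {
            KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
            kbuilder.add(ResourceFactory.newClassPathResource("rules.drl"), ResourceType.DRL);
            if (kbuilder.hasErrors()) {
                throw new IllegalStateException(kbuilder.getErrors().toString());
            }
            KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
            kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
            StatefulKnowledgeSession session = kbase.newStatefulKnowledgeSession();
            session.insert(fact);     // program objects can be referenced directly in rules
            session.fireAllRules();
            session.dispose();
        }
    }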

In a more general approach, these relationships should be discovered automatically in the resources associated with the entities defined for the problem definition stage, the same mechanism used in support of the second step. Then, applications can change merely by changing the data associated with the application, just as is done in general rule-processing and expert systems. The program itself, a general processing engine, would not need to be changed, either when updating an application, or building entirely new applications.

In the final two steps, I primarily fell back on rule-processing and rule-based expert systems concepts again, but operating on the information accumulated from the first three steps.
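Pulling the five steps together, the engine amounts to a simple generic pipeline. The following is a minimal sketch only – the types and method names are invented for illustration, not drawn from any HLL code:

    import java.util.List;

    // Minimal sketch of the five-step process as a generic engine loop.
    interface GeneralProblemSolver {
        String defineProblem(String input);                        // 1. Define the Problem or Task
        List<String> assembleInformation(String problem);          // 2. Assemble Detailed Information
        List<String> analyze(List<String> information);            // 3. Perform Analysis
        List<String> identifyAlternatives(List<String> analysis);  // 4. Identify Alternatives
        String recommend(List<String> alternatives);               // 5. Recommend Solutions

        default String solve(String input) {
            String problem = defineProblem(input);                 // rule-based, as in diagnostics
            List<String> info = assembleInformation(problem);      // drawn from entity resources
            return recommend(identifyAlternatives(analyze(info)));
        }
    }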

Another thought occurred to me that fit the idea of artificial intelligence research perfectly. It might make my imagination seem slightly flexible, but I saw a one-to-one relationship between the general problem solving algorithm and the structure of good presentations I had learned in a high school speech course. (I was in fact considering just how general the above algorithm was, and how broadly it could be applied. There seemed no end to the possibilities, when one started replacing words like “problem” while maintaining the same processing logic.)

Monroe's motivated sequence:

A. Attention step
B. Need step
C. Satisfaction step
D. Visualization step
E. Action step

The basic speech structure:

A. Introduction
B. Body of the speech
C. Conclusion

It occurs to me now, so many years after the fact, that it is too difficult to recall all the thoughts that brought the general problem solving process into correspondence with the idea of presentations. I am not a numerologist, but can do no better at this moment than to mention that both processes have five steps. After considering several examples, I became convinced that by adding well developed descriptive text as additional resource elements associated with problem entities, a program could automatically generate well-formulated reports in five sections, complete with data gathered and derived through each step in the problem solving process. The reports I envisioned could be as good as, if not better than, those produced by many human consultants.

This was interesting, but obviously not the top of the logic hierarchy. Many different problems arise each day – many different tasks to take care of. It cannot be said that solving one or many represents the top of the human activities management process. Somewhere around 1990, I met with an industrial engineering professor who had been interested in artificial intelligence applications for years. We had a long chat about the possibility of completely automated factories. This was still a decade before frequent online purchasing and customer management systems. But it seemed reasonable to contemplate a future in which everything from initial customer contact, sales, accounting, instructions to the factory floor, robotic manufacturing on demand, packaging, right out to the shipping dock would be fully automated.

Again – not a numerologist – but note that the more general description of the presentation process is given in just 3 parts. Coincidentally, that is the number of elements I think necessary in the next step of high level logic – the subject of chapter 3.