Wednesday, December 15, 2010

A General Application Engine

Visit the High Level Logic (HLL) Website
— Home of XPL (eXtensible Process Language)

It's like XML, but you can store source code in it.

One of the difficulties in creating a good description of HLL is that most of the good words have already been used. Consider the following.

What is High Level Logic (HLL)?

“High Level Logic” (HLL) occupies a new position in relation to other software development tools and components. The theoretical question that drives HLL development is: what is the highest level of general support that can be given to software development? Over the past 25 years, this unconventional approach to envisioning the next generation of software tools has produced concrete and practical results.



HLL is a software framework for developing a very large number of applications, linking applications and components that have already been created, facilitating relationships between people and organizations, interacting with people and their agents, and increasing automation of pretty much anything. In short, it does what modern applications should do, and makes an entire infrastructure easier to develop and maintain.

The opening paragraph and accompanying graphic were added to a document to clarify what HLL is; putting readers on the right track right from the start. But now suddenly, there's a contradiction. The paragraph following the graphic says that "HLL is a software framework." The graphic itself places frameworks two levels down from High Level Logic. Readers can be left wondering, which is it?

To me, this is more than just an editorial problem. It's part of a general problem in developing the language that describes computer systems and software components. Another example in HLL is the use of the term "experts." My use of the term is directly related to the definition found in English language dictionaries. I clarify by calling HLL experts "HLL experts." For those familiar with AI, however, the term automatically raises the question of whether they're the same thing as rule-based expert system components.

I believe I can use the term framework to describe HLL. I believe it fits existing definitions and uses of the term software framework. But as the graphic shows, I think there's a need to distinguish HLL's position on the level-of-logic chart versus the many examples of frameworks that I'm currently familiar with. Knowing that any new terminology is one more thing to explain, I still think it's worth trying to come up with a new term.

The first thing I thought of was that HLL is an application engine. It clearly fits earlier uses of the term "engine" to describe software systems that drive applications by processing application components that have been expressed as data. (Rule-processing engine, for example.)

The term comes rather close to Google's App Engine and PeopleSoft's Application Engine. Google's App Engine is a platform for developing and hosting web applications in Google-managed data centers. PeopleSoft's Application Engine is a batch processing system using blocks of PeopleCode and SQL.

Both the Google and the PeopleSoft "engines" are much more specialized than HLL, so to make the distinction, HLL can be called a general application engine.

What do you think?


Tuesday, December 14, 2010

Millions of tires travel billions of miles (1)


When I was young, before everything was made from synthetics, we heard stories; stories about undomesticated tires living in the woods near the highway. We had seen them ourselves resting in muddy sties and dangling like monkeys in trees.

According to legend, rubber is alive. As old cars roll down the road they deposit microscopic particles of rubber on the pavement. Each particle is a larva. At the moment of birth the tiny prototires instinctively migrate toward the shoulder of the road. They join others along the way forming thin processions that in the scale of the old west would form a wagon train a thousand miles long.

When they reach the shoulder they form small colonies. As more protos arrive they become larger clusters. When large enough, they become the clutter that motorists observe as they speed by. The last stage happens only on the night of a new moon, when the treads undergo a convulsive involution and emerge as mature tires.

The life of a wild tire is tough. Those that make it to the edge of the road and adulthood face further challenges to their survival. After midnight on the darkest nights, tire harvesters cruise the roadways and load their trucks.

There is an advantage to their isolation in the woods near the highway. It’s key to their survival. No humans live there. Most people don't go there except for the occasional moment or two of relief. That's why undomesticated tires go mostly unnoticed, why modern science hasn't written about them, and why great journalists don't comment on their social and economic plight. People notice them sometimes. But they don't really see them, if you know what I mean.

At first we took the stories for just what they seemed to be, just stories. It was entertaining to hear the theory of their birth as the parent tires of our own car rolled along the road. We thought little of it of course, like ghost stories and claptrap on UFO sightings. How strange to think that some people believe in alien visitations.

But we decided to test the theory of undomesticated tires. It was an advantage living in a small town. There were so many quiet country roads not far from where we lived. Many were paved, which is absolutely essential to serious tire colony hunters. When we spotted a likely spot, we could stop the car and investigate, chat for a while and enjoy the sun. That led to the first of what is now our traditional Sunday drive. Three of us, now four, scouted the highway looking for ripe clusters.

The most exciting event would take place with the new moon. We began exploring three weeks early hoping to find the best spot. John is a fisherman and lectured endlessly about the effects that temperature, instinct, and even rain might have.

We discussed the possibility of danger and then chuckled it away. "Have you ever heard of anyone being mauled by a freshly constituted clan of retreads?" I've tried to remember who said that first, but always come up thinking it was a joint effort, something that emerged from the loose clutter of jovial prattle.

After about two weeks of intensive searching and note taking, Mary declared a clear winner. Out on State Highway 14 a group of clusters had formed that reminded us of a grand community of fire ants. Surely, we thought, each hill would produce a bicycle tire. The site also had the advantage of interesting terrain. We thought about sitting on the car with sandwiches and maybe a little Lone Star beer. "A real night of it," we said.

We wanted all to go perfectly and so a week later we started the evening working down a checklist. Food - check. Beer - check. Ice - stop at the convenience store on the way. We then headed for the highway for a last check of the site before picking up our supplies. It was still early and we didn't expect anything to happen until after dark.

We headed toward State Highway 14 in the mood for a party. John and Mary began building the excitement, telling stories about what we might expect that night and exaggerating the character of our find. When we reached the highway, we met with a surprise beyond all earlier expectations. Still several miles from what had been our best site was a new colony, with black rolling hills and valleys that contained enough tread for a whole sixteen-wheeler.

Mary got all excited. She kept telling me to stop and turn around so we could inspect the mounds. She sounded frustrated with me as I continued driving, eyes fixed forward like I didn't hear her. I saw something about a hundred yards ahead. About halfway, Mary saw it too. Then John said, "Geeeezzowie!" There was another great mass of chips, shreds, curls and treads about twice the size of the last one.

A hundred feet ahead came a third dark rolling mass. This third colony tapered off to thin low trails meandering through isolated rubber villages, then shot up again into massive towers. Mounds of varying proportion began appearing on both sides of the road with trails between each forming a continuous chain of bustling cities.

We drove on, dazed. The mass on the right side of the road faded again continuing in a smooth thin trail, not ending until it climbed to a gargantuan metropolis where the anthills of our chosen site once stood. The new moon would appear that night. Like the shores of ancient China, Greece, and Egypt, the edge of the road had become the birthplace of a great civilization.

We were numb and silent as we rolled down the highway in the car. What this meant and how it had come to be in such a short period of time was boggling my imagination. I needed to collect my thoughts.

John broke the silence. "It's midsummer's eve."

In Texas, people have long since forgotten the ancient celebrations of the seasons, having found no reason to celebrate midsummer especially. It's warm most of the year, when it's not hot. Better to rouse a sleepy air-conditioned bar or relax in the shade with an ice-cold glass of tea than to embrace the weather with any ritual.

"Frogs …" John choked. Mary and I looked for frogs on the highway but saw none. We looked at John and then at each other. He knew about midsummer's eve because of his knowledge of fishing and frogs. That didn't matter, or so we thought.

We looked back at John again. He was leaning forward with his hands over his face. Mary and I began rethinking our plan. We decided to park the car and hide in the woods behind one of the communities, somewhere we wouldn’t be detected. We decided to stick to the old plan when it came to sandwiches and beer, except for the ice. We’d have to start out with cold ones after dark.

A spot on the hillside beneath the great metropolis seemed right for our purposes. We could sneak in over the thin trail of rubber huts nearby and drop quickly into cover. So that was the plan. John never said another word, except once. He was staring down at his shoes, red-faced, shaking his head and pulling his hair. “Frogs,” he said. And then again, “Frogs.”





As we saw the last orange glow of sunset we assembled the sandwiches and beer and made our way across, hauling John along by his hand.

At home, Mary's mother looked out of the kitchen window and began to wonder. She had a special sense when it came to her children and she sensed something now. She knew it was best not to worry. She continued to gaze through the window into the back yard where Mary used to play and gazed deeper into her sixth sense. Something was going to happen.

Mary and I scouted the area stealthily, communicating in whispers and hand signals. The plateau held a small group of trees to rest our backs against as we sat and waited through the night. There were bushes hiding us but we could see around them when we wanted to. Mary pointed up the hill and snickered, “The Great Mounds of Roador.” We lost control and chuckled a bit too loudly, then both sounded “sshhh!” and broke up chuckling again. It helped ease the tension.

The location secured, base camp established, we turned our attention to the sandwiches and Lone Star beer. John was sitting quietly in a clump of bushes, staring intently in the wrong direction, down the gentlest part of the slope toward a large nearby pond.

Mary and I took our duties very seriously at first, somehow imagining that we knew what they were. We tested our skills as sentinels, sneaking a peek and darting back to cover again. The moon was bright and the sky clear. We began mapping the features of the metropolis for later comparison, only twice pointing out shadows and correcting ourselves.

Late into the second beer, I sat back and relaxed a bit and shook my head while peeling the label from the bottle. “The Great Mounds of Roador,” I mused. Mary smiled back, encouraging the light chatter.

I pitched my voice higher and became a little sing-songy. “Road decor, in the great tradition of Marshall McLuhan,” I said.

The snickers were quieter now. Mary responded faking a low voice. “Road memorabilia.”

We were on a roll and I fired back with my best imitation of Johnny Carson. “Interstate ennui.”

A question shot across Mary’s face. I shrugged my shoulders. She downed another big gulp of beer.

Sounding more philosophical she asked, “If there's rubber on the road, did the blown tire make a sound?”

By then I felt forced to go on. “Is a tire only the afterburn of the wheel?”

With that, we each took another Lone Star, tipped them in quick salute, and drank. Each swallow took us deeper into the philosophical ramifications of rubber. Around our fourth bottle (each), we were convinced that the situation was of enormous importance to the future of humanity. It was on that account that we began to formulate a daring plan.


All the characters in this story are fictional, and any resemblance to actual persons living or dead is purely coincidental.

All rights reserved.
Copyright © 2000-2010 Roger F. Gay.

No part of this story may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the author.

HLL on Linux


The day has come. I've been developing in Windows for a long time; never seeming to get around to installing Linux on something for testing. After a disk drive on a laptop pooped out, the difficulties of getting Windows back pushed me over the edge. Ubuntu was simple to install, but since I haven't been on a Unix system for many (many, many) years, and this is my first experience with Linux, some of the rest ... well - let's just say an experienced Linux guru would have been posting this a couple of weeks ago. (Not like I dedicated 24/7 to this task though.)

Ubuntu comes with some kind of Open Java, so I eventually figured out how to uninstall that and installed Sun Java. Then I installed Apache Tomcat just in case. I haven't tried to run that yet, so I haven't tested the optional HLL browser built as part of the robot demo.

Quite frankly, I don't even know how to create a script file in Linux yet, so I typed in the java command to run HLL by hand rather than converting my Windows batch files. But sure enough, write once, run anywhere - the "Hello World!" demo works. No recompile necessary.

Saturday, December 11, 2010

Java Needs an Opt-Out of Static Typing


Step by step, I've been explaining the history and character of the high level logic problem and the solution being developed in the HLL Open-Source Project. More than once, I've mentioned the enormous advantages of modern computer languages over what was available twenty-five years ago, when the first few of my brain-cells became stimulated by the question. For more than one reason, I've selected Java as the implementation language. (One that I'll note is that it's currently the most popular application language on the planet, matching the intent that HLL should be a production tool rather than an interesting curiosity.)

One great issue remains, however. Java is a statically-typed language. Why is that a problem, you might ask, if you haven't yet seen it as a problem yourself? Let me take an example. I have mentioned JBoss Rules (also known as Drools) quite often in my articles. In The Ghosts in My Machine: Chapter 2, I mentioned how it overcame a critical problem in the ancient art of rule-processing.
Popular rule-based expert system tools had been built as stand-alone products to support only rule-processing; they could not be integrated with other programs and were not extensible. The JBoss rule processing system mentioned in chapter 1 deals with this problem directly. The rule-processing software is built in Java, carrying with it the characteristics of the underlying language. A rule-process can be initiated from another program and program objects can be referenced directly in rules. This allows programmers much greater flexibility to fashion their own hard-coded relationships between rules and other components in each application.
In HLL, I am about to create a similar openness in the configuration system, and that will be carried over into a simple rule-processing and planning system directly supported by the HLL core (with the option to use a more sophisticated system such as JBoss Rules). Application developers will be able to specify their own components in XML files that the HLL processor will handle generically.

In the prototype, developers could add their own components to the core – which is very much like saying they can add whatever they want by changing the core source code. (Well, du-uh!) Because I want to maintain the concept of the core, a next sort of “bottom-uppish” evolutionary step is to predefine another package specifically for application components (that may in turn initiate other systems that HLL doesn't know about). The HLL core processing components that deal with configurable application components will have all members of the predefined “application package” imported by using a wild card (import nu.isr.hll.light.application.*;). Application developers will be able to put initiator components in the application package, and the HLL core will process them in whatever way is described in (XML) configuration files.
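As a minimal sketch of that idea (a standard library class stands in for a component from the application package; in practice the class name would come from an XML configuration entry, and the names here are illustrative, not HLL's actual API):

```java
// Sketch only: the HLL core would read the fully qualified class name of an
// application component from an XML configuration file, then instantiate it
// reflectively. java.util.ArrayList stands in for a real component.
public class ComponentLoader {

    // Instantiate a configured component by its fully qualified class name.
    public static Object load(String className) throws Exception {
        return Class.forName(className).getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        Object component = load("java.util.ArrayList");
        System.out.println(component.getClass().getName());
        // prints "java.util.ArrayList"
    }
}
```

The core never names the component's type in source code; only the configuration does, which is what lets developers drop new initiator components into the application package without touching the core.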

One HLL unit can also contact another HLL unit with a request to have one of its application components executed, with results returned. But I dream other dreams. I would like to build dynamic HLL applications that can process components that were not defined at compile-time. I'd even like to be able to fetch objects from somewhere out in the WWW that a particular HLL unit has never heard of and run them locally. What I'd like is a LoadUndefinedClass class with methods for calling methods and accessing public fields.

And really, I can almost do this with generics. I can almost build the LoadUndefinedClass class using reflection. Almost isn't good enough, of course. No matter what trick I play on the compiler, the run-time engine is going to throw an error if there's no predefined class for it to refer to.
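Here is roughly how far reflection gets us: a method can be called by name on an object whose concrete type is never mentioned at compile time. The classes below are standard library stand-ins; the point is that `call` knows nothing about the target's type.

```java
import java.lang.reflect.Method;

public class LoadUndefinedClass {

    // Invoke a public no-argument method by name on an arbitrary object.
    // The target's class is looked up at run time, not compile time.
    public static Object call(Object target, String methodName) throws Exception {
        Method m = target.getClass().getMethod(methodName);
        return m.invoke(target);
    }

    public static void main(String[] args) throws Exception {
        // Load a class by name and construct it reflectively as well.
        Object s = Class.forName("java.lang.StringBuilder")
                        .getDeclaredConstructor(String.class)
                        .newInstance("hll");
        System.out.println(call(s, "reverse")); // prints "llh"
    }
}
```

The catch described above remains: some class has to exist on the classpath for `Class.forName` to find, so this is "almost" dynamic loading, not the fetch-an-unknown-object-from-the-web dream.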

So, what's stopping us from getting a LoadUndefinedClass class? Purity of the language? Fear that Java programmers might write programs that don't work? Come-on!!!

UPDATE (Sept. 6, 2011): InvokeDynamic class: New JDK 7 Feature: Support for Dynamically Typed Languages in the Java Virtual Machine
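For reference, a minimal example of the JDK 7 `java.lang.invoke` API mentioned in the update. The looked-up method (`String.concat`) is just an illustration of resolving and invoking a method handle at run time:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class DynamicCallDemo {
    public static void main(String[] args) throws Throwable {
        // Resolve String.concat(String) at run time as a method handle.
        MethodHandle concat = MethodHandles.lookup().findVirtual(
                String.class, "concat",
                MethodType.methodType(String.class, String.class));

        // Invoke it without a compile-time reference to the method itself.
        String result = (String) concat.invoke("High Level ", "Logic");
        System.out.println(result); // prints "High Level Logic"
    }
}
```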
.

Saturday, November 13, 2010

Regarding Human Intelligence and Decision Strategy


When Will We Have Artificial Intelligence?


When are we going to have AI? one survey asks. It's a question relevant to HLL because so much of the thought behind the HLL design comes from the history of AI research and from current technology that has come out of AI research. The answer to the question when, with reference to HLL, is now. (Or at least as soon as version 1.0 is ready.) And that's no reason to get worried. As the description of HLL claims, you don't even need a high-powered computer science background to build applications with it – just some (OK, at least reasonably good would be nice) programming knowledge.

The AI question is actually a bit tricky. It really depends on what you mean by AI. Way back in the cave computer days when I was first introduced to the subject, artificial intelligence research was defined as trying to get computers to do things that humans currently do better. Applying that definition, it seems as though the answer may be never. As soon as computers can do something at least as well or better than humans, it's no longer the subject of AI research. Object oriented programming is an example of something that came from AI research. Now a mainstream programming paradigm, many people don't associate it with AI at all.

The variety of ways of thinking about AI is also why some researchers predict AI won't exist until far into the future while others (like me) are much more optimistic. People who answer the question may have something very specific in mind and think it will be a long time before it becomes reality. You can also think about all the things computers do now – such as mathematical calculation, something humans and computers both do, and computers do well – and make a case that AI already exists. The great variation in predictions on when AI will come has to do both with the particular sets of things that guessers think need to be done before “AI exists” and with how optimistic or pessimistic they are about doing them; while basic research always looks ahead.

You've probably heard that human intelligence is linked to the fact that we have opposable thumbs and other peculiar physical characteristics like standing upright and walking erect. Researchers recognize that in living creatures, intelligence and the characteristics of their physical bodies are linked, which makes robotics fertile ground for AI. (related article) Not all researchers focus exclusively on human intelligence and capabilities however. Some of the most interesting advances have come from looking for ways to mimic the behavior of other creatures, from insects and snakes to mules. The intelligence of a lower species is still intelligence, and some of the developments that come from mimicking their behavior can be applied in layers when mimicking behavior in higher ones.

Where does HLL actually fit in? Twenty-five years ago, when I was first thinking about the “high level logic” problem, I thought of it as a subject for advanced research. Since then, computer languages have advanced considerably, and in ways directly matching the requirements of HLL. Strong networking support is a must, which has come from the focus on Internet applications. Relatively recent additions to Java (which I've used to build HLL), such as strong support for generics and reflection, have transformed some of the challenging bits into stuff that's just pretty cool. (Once again, application developers are not required to have expertise in these techniques – although it's quite alright if they do.)

To some extent, even the concept has been encroached upon (so to speak). The short descriptions of HLL have called it an “agent system” and I worry at times that it will be perceived as nothing more than an alternative to existing agent systems (which I won't mind so much if it becomes a popular one). The overall HLL concept is the thing that remains new, and while fitting into the current modern software development world well, I still think it has potential as a tool in advanced AI research and application development.

HLL development has been proceeding as an ordinary software development project. With use of modern software technology and twenty-five years of thought behind it, not much experimentation is now required; less than the ordinary amount for development of a complex software system, because even details and how it all fits together have previously been thought about. And all that is why it (version 1.0) will be a powerful, light-weight system that is easy to use.

So, is it AI? When people are using it regularly to build applications, I certainly hope it's thought of as AI just as much as rule-processing or object-oriented programming and all the other things that have come from thoughts on developing AI; and yet, fully accepted and integrated into mainstream applications development. Why not integrate HLL support directly into programming languages?

For most people, thoughts on what AI is continuously focus on the future. With twenty-five years of history, I think I've earned the right to use a tired old cliche to end this note with a response. As far as HLL is concerned, the future is now. (Finally!)

Another view: How Long Till Human-Level AI? What Do the Experts Say?
(AGI = Artificial General Intelligence): “I believe the development of AGIs to be more of a tool and evolutionary problem than simply a funding problem. AGIs will be built upon tools that have been developed from previous tools.”




Wednesday, November 10, 2010

HLL Factory


It occurs to me that HLL application units should be easy to integrate with other types of application components. The current design allows any sort of application component to be integrated into the HLL logic - since HLL was conceived as the "high level" logic for applications. Interaction between HLL units (an agent approach) is fundamental to the design.

But, some application builders may want to incorporate HLL unit technology within other types of programs. I haven't thought of a reference or example application yet, but it seems kind of obvious conceptually. There may be an HLL infrastructure operating already. An application designed in some other way may want to interact with it, with either fixed or variable specifications for its exact purpose.

In such cases, an application may want to make use of an HLL Factory to construct an HLL unit on the fly that will interact with other HLL units just like one of them. The fact that HLL pushes the high level application logic specification into data would make it easy to create or modify the specification on the fly.
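A deliberately tiny sketch of the idea, with made-up names (a real factory would assemble a full HLL unit from an XML specification; here a plain map stands in for the unit, so nothing below is the project's actual API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: because HLL keeps the high level logic in data,
// a factory can assemble a new "unit" at run time from a specification
// string handed to it by some other application.
public class HLLFactory {

    // Build a unit on the fly from a name and a (here, XML-flavored) spec.
    public static Map<String, String> createUnit(String name, String spec) {
        Map<String, String> unit = new HashMap<>();
        unit.put("name", name);
        unit.put("logic", spec); // the specification stays data, not code
        return unit;
    }

    public static void main(String[] args) {
        Map<String, String> unit =
            createUnit("onTheFly", "<tasks><task name='greet'/></tasks>");
        System.out.println(unit.get("name")); // prints "onTheFly"
    }
}
```

The design point is simply that nothing about the unit needs to be compiled in advance; whatever can be expressed in the specification data can be constructed or modified while the system is running.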

Creating a good HLL Factory set-up would take some design thought, but it is quite doable. So much so that I expect it surely will be done as the concept of an HLL Factory matures (along with completing more of the code toward version 1.0).

I have added HLL Factory to the issues list, but not as a high priority. Since HLL is not at version 1.0 yet (but getting closer every day - and I'll remind everyone that it is already possible to build applications) - and there is no existing HLL infrastructure anywhere out there (although there are agent systems running) - I sense no great demand right now for dynamically interacting with an existing HLL infrastructure. The day will come however ...

Sunday, November 7, 2010

The Ghosts in My Machine, Chapter 3


Chapter 1
Chapter 2

Prepare yourself for a surprise ending. Do that now to avoid confusion later.

Around 1990, I met with an industrial engineering professor who had been working for years with artificial intelligence technology. We had a long chat about the possibility of completely automated factories. This was still a decade before frequent online purchasing and customer management systems. But it seemed reasonable to contemplate a future in which everything from initial customer contact, sales, accounting, instructions to the factory floor, robotic manufacturing on demand, packaging, right out to the shipping dock would be fully automated.

Even if you've never considered designing such a system and are unfamiliar with any such work in that direction, it isn't long before the thought occurs that such a system should be broken up into departments and operational segments. Each operation has its own specialized “knowledge” and processing to contend with, and it seems likely that interfaces between operational segments might need only handle common data exchanges. Jumping ahead, one might consider that an agent system could provide loose coupling between operational components as well as the possibility of moving pieces around from time to time for special types of interdepartmental operations.

It isn't a big jump to consider that breaking the application into “departments” is at least in part, an exercise in modeling the framework of the human organization that complete factory automation would replace. The fact that this model already exists, that it's completely common, may be the most important reason that the idea of breaking the software design up this way is so obvious. Compartmentalization is not a concept invented by computer scientists. Humans do it all the time, and have been doing it for ages. If we're going to think more about obvious common ideas, then this idea easily qualifies on that ground.

The concept of an “object” was also not invented by computer scientists. (I merely need to open this idea so that I can be flexible about its use without confusing anyone – i.e. a fair warning against thinking “simply” about common current day object oriented programming structures and techniques.) Objects exist in the world around us and we can ourselves be described as objects and operate as such. They have characteristics and some have capabilities. They interact in various ways and react in various ways to interaction. Complex (compound) objects can be composed of simpler objects; the whole sometimes being equal to more than the simple sum of its parts. The concept of an object or “entity” (in certain modeling methods) is fundamental and useful in describing pretty much everything – solid ground for high level logic.

Dwelling on what at this point may seem quite obvious: we're – very generally speaking – looking to design a set of organized objects. One way or another, the individuals and/or the organization will be structured to carry out tasks, solve problems, answer questions, and whatever else. The important distinction here is that the intent is to build a general processing engine, not a specific application that could be described exactly the same way – as a set of organized objects. The expected practical effect, if successful, is that some portion of “common” logic in a wide range of applications can be moved into the general processing engine, where it does not need to be repeatedly rebuilt in the application phase.

Another important distinction is that the High Level Logic Project attempts to do this by searching for “high level logic” rather than by “bottom-up” development. This is why I've chosen such a long-winded approach to introducing HLL. Ideas that are common and fundamental are useful and enduring. They don't need to be derived - "bottom-up" - from the most recent set of application programming frustrations.

So let's take the next step in thinking about departments in human organizations. There are various kinds of organizations, but I didn't mind leaning on my familiarity with the typical American corporate structure, and particularly with project oriented groups. Projects begin by defining a task or problem and end with a specified result. There is at least a rough analogy between this process and the general problem solver outlined in chapter 2. Project organizations are made up of individual people with specialized roles and responsibilities that form sometimes simple and sometimes complex task hierarchies.

Certainly our task cannot be to describe the individual characteristics and operations of all existing human organizations, or even of one large organization like IBM. Organizations change, and a detailed description of any specific organization is not a general model. A general high level logic engine needs entities that can be characterized by application developers or another process. The system for describing organizations needs to be easily extended by adding, defining, and organizing entities. The relationship between entities needs to be “loosely coupled” so that they may move around, help others by use of their special knowledge and capabilities, and form sub-groups to perform tasks as needed.

There is a difference between general and generic, and the description above remains pretty generic; certainly, organizations as “organized objects” seem so. It is possible to add more specifics to a general model (one applying to all or most members of a category or group), or to one postulated as being general (not specialized or limited to one class of things), without fear of limiting an application developer's creativity or adaptations to meet specific requirements. Allowing developers (and other processes) to customize, add, delete, define, redefine, and organize – i.e., to completely control the specific definition of the generic engine's application – accomplishes that. (It's something like bringing entity-relationship diagrams to life.)

But still, it plagued me that there is apparently at least a rough analogy between the workings of a project organization and the general problem solver outlined in chapter 2. What minimal organization could assure complex task completion and be highly relevant to a wide range of software applications?

I had spent years working in project groups, and it was that experience that I most thought about when making the analogy. Generalizing a bit, the organization started at the top with an executive (like a VP of Engineering), who reigned over one or more (project) managers, who organized and managed the activities of groups of experts (in most of my personal experience – engineers). In the human world, such a simple three-layered organization has been responsible for carrying out a very large range of very simple to extremely complex tasks.

It was time to describe the characteristics and capabilities needed for a basic three-layer organization model, one that made sense generally, and would be highly relevant to a wide range of software applications. That part turned out to be not particularly difficult, and led to detailed design requirements and specifications (chapter 4, some of which are already hinted at in this chapter).

Minimal description of the three-layer HLL general organization model:
Executive(s): Interacts with external and internal “interests” to formulate, offer, accept, or reject proposals or demands. Decides whether or not to make commitments. Is concerned about the general allocation of resources within an area of responsibility. Assigns complex tasks to managers.

Manager(s): Plan and implement tasks that have been assigned to them, using available resources.

Expert(s): Carry out work details for which specialized knowledge and skills are needed, often in cooperation with other members of a group.
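The three layers above can be sketched in a few lines of Java. The class and method names here are my own illustration, not HLL's actual design: an Executive accepts or rejects a proposal, a Manager implements an assigned task as a plan of steps, and Experts carry out the specialized details.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the three-layer organization model.
public class ThreeLayerOrg {
    interface Expert { String work(String detail); }

    static class Manager {
        private final Map<String, Expert> experts = new LinkedHashMap<>();
        void addExpert(String skill, Expert e) { experts.put(skill, e); }

        // Plan and implement: hand each step of the plan to the expert
        // whose skill matches that step.
        List<String> execute(Map<String, String> plan) {
            List<String> results = new ArrayList<>();
            for (Map.Entry<String, String> step : plan.entrySet())
                results.add(experts.get(step.getKey()).work(step.getValue()));
            return results;
        }
    }

    static class Executive {
        private final Manager manager;
        Executive(Manager m) { manager = m; }

        // Decide whether to commit, then assign the complex task.
        List<String> handle(Map<String, String> proposal) {
            if (proposal.isEmpty())
                return List.of("rejected: nothing to commit to");
            return manager.execute(proposal);
        }
    }

    public static void main(String[] args) {
        Manager m = new Manager();
        m.addExpert("cooking", detail -> "cooked " + detail);
        m.addExpert("plating", detail -> "plated " + detail);

        Map<String, String> order = new LinkedHashMap<>();
        order.put("cooking", "the pasta");
        order.put("plating", "the dish");

        System.out.println(new Executive(m).handle(order));
    }
}
```

The cook example later in this chapter fits the same skeleton: one person can play all three roles in turn.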

Very clean model so far, and perfectly obvious in many respects. On the good side, it could easily be taken as a generalization of many processes, including many, many software applications. Then I felt a great discomfort with the human organizational analogy. I, like a great many others, am often uneasy being a cog in the wheel of an organization. It's not quite natural. Something is wrong. (Here comes the surprise ending.)

Don't we all, individually, commonly carry out tasks requiring all three levels described above in our daily lives? What's the difference between a large distributed organization and an individual (in the context of this discussion) other than that large distributed organizations can get a lot more done in a specific amount of time? There isn't necessarily a difference between the structure and function of a multi-person organization (model) and an individual.

A friend of mine who happens to be a cook recently asked me to explain that to him. A cook interacts with the world outside the kitchen; a good example being the communication of an order from a customer via a waiter (an agent). The cook may accept or reject the order (if, for example, the order is not complete or the kitchen has run out of an ingredient required to make a dish). If the order is accepted, he commits to the task, then assembles resources (ingredients, cooking implements) according to a plan (complete dish specification and recipes). He then prepares the food and the dish, using specialized knowledge and skills.

The fundamental high level logic had not collapsed, but it became apparent that application developers should be given the greatest flexibility in interpreting, mixing, and applying its pieces. Large organizations like IBM are human creations, sometimes necessarily physically distributed, that can accomplish more in a particular amount of time. Otherwise (in the context of this discussion) they are nothing more than a reflection of ourselves and the way we – necessarily – accomplish tasks that require a higher level of intelligence.

From the High Level Logic (HLL) Open Source Project blog.

Thursday, November 4, 2010

XML Configuration Processing – Progress Report


The importance of issue #1, Configuration Files and Processing, should not be underestimated. The configuration system is used to attach application components to the generic HLL processing system. It makes sense to create a configuration system that is powerful, easy for application developers to use, and flexible enough not to impede creative development. And when the project begins building tools to further simplify the development process, they will (in part) simplify the construction of the configuration files that, in effect, define HLL applications. A powerful configuration processing system will facilitate powerful but easy-to-use tools. (For programmers, the process is already easy even without the tools.)

In addition, what is becoming a sophisticated configuration processing system will be extended to process plans (management layer) and then further extended to build a simple rule-processing system. (Sophisticated rule-processing can be handled by existing sophisticated rule-processing systems.) All of these features require processing XML files in sophisticated ways, including handling class and method specifications generically, building things on the fly with reflection techniques, and constructing and modifying the XML files themselves on the fly.
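As a sketch of the reflection technique just mentioned: a class name and a method name arrive as plain strings (in HLL they would come from an XML configuration file; here they are supplied directly so the sketch is self-contained), and the component is built and invoked on the fly. `ReflectiveLoader` and `Greeter` are my own illustrative names, not part of HLL.

```java
import java.lang.reflect.Method;

// Sketch: instantiate a configured class by name and invoke a
// configured method generically, using only reflection.
public class ReflectiveLoader {
    // Stand-in for an application component a config file would name.
    public static class Greeter {
        public String greet(String name) { return "hello, " + name; }
    }

    public static Object invoke(String className, String methodName,
                                String arg) throws Exception {
        // Build the component on the fly from its configured class name.
        Object component = Class.forName(className)
                                .getDeclaredConstructor().newInstance();
        // Look up and call the configured method generically.
        Method m = component.getClass().getMethod(methodName, String.class);
        return m.invoke(component, arg);
    }

    public static void main(String[] args) throws Exception {
        // The strings below would normally be read from XML.
        System.out.println(invoke(Greeter.class.getName(), "greet", "HLL"));
    }
}
```

The point is that swapping in a different component is then a change to the XML data, not to the engine's code.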

My progress on developing the configuration system has been interrupted by other demands; just the sort of thing that one must expect in this kind of project. I was, for example, invited to present the HLL project to a group of about 150 IT professionals in Stockholm, which I was quite happy to do. And if you've been following this blog, you know I've spent some time drafting commentary on HLL in a way that I hope provides a meaningful introduction. (If you have been following, please note that I expect chapters in The Ghosts in My Machine to become less historical / philosophical and more concrete and technical, eventually explaining the design and how to build applications.)

The prototype already had a working configuration system; it would not have been a working prototype without one. But it was much less flexible than it should be in version 1.0. In order to get the prototype up and running quickly, I had done things like hard-coding configuration file names and writing custom handlers for each file. That was good enough for proof-of-concept, but not what I'd call “real code.”

Back to work, progress has been good.

Configuration files and handlers are now specified in a master configuration file which is read and processed. It will also be quite simple to add an option to specify the master file as a command line argument. (So simple, that I may do that before committing the code.)

The HLL predefined configuration files are optionally checked against schemas. This option will be available for configuration files created by application developers as well.
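As an illustration only (the actual HLL file names, element names, and handler classes may differ), a master configuration file of the kind described might look like this:

```xml
<!-- Hypothetical sketch, not the actual HLL format. Each entry names a
     configuration file, the handler class that processes it, and an
     optional schema used for the validation described above. -->
<master-configuration>
  <configuration name="experts"
                 file="conf/experts.xml"
                 handler="hll.config.ExpertConfigHandler"
                 schema="conf/experts.xsd"/>
  <configuration name="managers"
                 file="conf/managers.xml"
                 handler="hll.config.ManagerConfigHandler"/>
</master-configuration>
```

Application developers could then register their own files and handlers the same way, with schema checking as an opt-in attribute.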

I've decided to stop working on configuration for the moment. It should not take too long to deal with issue #2, FIPA Compliance, and I hope to include this, along with the current configuration file handling improvements, in a new source code commit by the end of the week (i.e. tomorrow). Then I think it would be an extremely good idea to update the simple demonstration binary download and the tutorial on application development.

It seems to me that moving directly to work on plan processing before returning to configuration would be a good idea. Plan processing is intended to be built on the processing capabilities of the configuration system. Working on planning first allows me to paint on a fresh palette with modified code from the configuration system to get the result that I want – assuring that it works for planning. Then I can return to configuration and simply apply the relevant new bits for the final upgrade.

Monday, November 1, 2010

The Ghosts in My Machine: Chapter 2


Link to Chapter 1

I understood the differences between what application developers wanted to do and what the “artificial intelligence” technology of the late 1980s supported. The differences were much greater than could be dealt with in a few software updates. What had been, in effect, a broad survey of application needs resulted in a snapshot of a more basic set of technical requirements. This snapshot taught me much about the path of development of software technology generally, decades into the future.

Much of that future would evolve with or without me, as developers pushed to realize their dreams and the more basic technology developers – involved in computer languages for example – responded to demand by building their tools bottom-up. But I had become obsessed with what initially seemed like an unapproachable question – one that might raise a slight giggle around a lunch table. As computer languages and tools evolve to higher and higher levels - “bottom-up” - where will they eventually reach? Where's the top? Even if the question is unanswerable, the more I thought about it, the more I found value in the thinking.

There were also two general observations about people that allowed me to consider that my ideas might be at least somewhat unique. The first is that people often mistake common, ordinary ideas for trivial ones. In fact, the opposite is more likely true: if something is common, it is likely important, or at least extremely useful. Simply cast the thought in different words – the basic economic measurements of supply and demand, for example – and more people will agree. Something becomes common in any culture because there is a demand for it. The second was that, at the time, it was rather difficult to get many experts to think about the difference between the idea of a machine doing something on its own and what they were doing themselves. Many programmers scoffed at the idea of specialized rule-processing engines, because they could already use if-then statements in their own programs. Management consultants and statisticians could not see a reason for interest in theoretical math engines that could build their own models and solve problems, because statistical packages already allowed them to select from a variety of basic mathematical forms and discover a set of parameter values that produced a “best fit.”

Even as I write today, a quarter century after my thoughts on “high level logic” began, I wonder if I will hit the same snags as I try to drum up interest. That software tools evolve to support higher level functionality is quite commonly understood. And then there is the trigger that set my search in motion. Every systematic analyst (whether professional or not) knows that it is important to define and understand a problem before expending much effort to solve it. They've done it themselves, over and over again, many times.

Let me proceed then to something else that was already common before I began my quest. Not only is it not my invention, but if I recall correctly, it was first introduced to me in grade school. This is certainly the kind of thing that could send readers to another page, thinking it proof that I have nothing new to say. It is a general problem solving algorithm, one that begins with defining the problem. I trust that the process will be recognized by a great many people.
1. Define the Problem or Task
2. Assemble Detailed Information
3. Perform Analysis
4. Identify Alternatives
5. Recommend Solutions
This is a description of an extremely general process, and that certainly is a characteristic sought after in the search for high level logic. Not only do professionals systematically follow such a process in dealing with complex problems and issues, but it is difficult to think of any even slightly complex process that cannot be cast in its mold. If I, for example, feel hungry at 10:30 at night while watching television, I will already have defined the problem: I am hungry. I will then proceed to the kitchen to assemble information. Looking into the fridge, I will see what is available. I will analyze that information, considering how I might use the ingredients to make something to eat. That will produce a set of alternatives, from which I will choose. (“Recommend Solutions” is a phrase chosen in context. I had been considering the development of better “expert systems.”)
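The five steps can be sketched as a fixed pipeline whose stages are pluggable, which is what would make such an engine general while applications supply the specifics. A minimal Java sketch (my names, not HLL's), using the late-night snack example:

```java
import java.util.List;
import java.util.function.Function;

// The five-step algorithm as a fixed pipeline with pluggable stages.
// The engine (the solve method) never changes; applications supply
// the behavior of each step.
public class GeneralProblemSolver {
    public static String solve(
            String situation,
            Function<String, String> defineProblem,            // step 1
            Function<String, List<String>> assembleInfo,       // step 2
            Function<List<String>, List<String>> analyze,      // step 3
            Function<List<String>, List<String>> alternatives, // step 4
            Function<List<String>, String> recommend) {        // step 5
        String problem = defineProblem.apply(situation);
        List<String> info = assembleInfo.apply(problem);
        List<String> analysis = analyze.apply(info);
        List<String> options = alternatives.apply(analysis);
        return recommend.apply(options);
    }

    public static void main(String[] args) {
        // The 10:30 snack example from the text.
        String recommendation = solve(
            "hungry at 10:30 while watching television",
            situation -> "find something to eat",
            problem -> List.of("bread", "cheese", "leftover soup"),
            info -> List.of(
                "sandwich from " + info.get(0) + " and " + info.get(1),
                "reheat " + info.get(2)),
            analysis -> analysis,        // every analyzed option is viable
            options -> options.get(0));  // choose the first alternative
        System.out.println(recommendation);
    }
}
```

Replacing the five lambdas replaces the application; the pipeline itself stays put.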

Having given a good bit of thought to rule-based expert system technology, it was easy to see that the above general problem solving algorithm could be used as the design of a software engine, which I describe below. Once I was certain that the idea was concrete and feasible to implement, I began thinking of it as the rule-processing equivalent of “structured programming” - i.e. the addition of for, while, and if-then statements into “high level” languages and functions replacing goto statements. Not miracles, but an extremely useful advance in computer technology.

Rule-based expert systems had a great deal of success in focused diagnostics, which meant no new investigation was needed to see that the same approach could be used in the first step of the general problem solving algorithm. To fill things out, however, the process needed to proceed to step 2 rather than move too quickly toward the last step – making recommendations. The concept of “objects” (although not necessarily their implementation in object oriented programming languages) easily suggested the transition from step 1 to step 2. Each meaningful “entity” involved in the problem definition would have associated characteristics and resources. Resources can include sources of information, such as databases or flat files. Where circumstantial information is needed in each run, either updated information would be drawn from these sources, or the user would be asked.
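Sketching that transition (again with illustrative names of my own): each entity defined in step 1 carries characteristics and attached information resources, and step 2 amounts to querying those resources for current information.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative sketch: an entity from the problem-definition step with
// associated characteristics and information resources. A resource
// could wrap a database, a flat file, or a prompt to the user; here a
// Supplier stands in so the sketch stays self-contained.
public class ProblemEntity {
    private final String name;
    private final Map<String, String> characteristics = new HashMap<>();
    private final List<Supplier<String>> resources = new ArrayList<>();

    public ProblemEntity(String name) { this.name = name; }

    public void setCharacteristic(String key, String value) {
        characteristics.put(key, value);
    }

    public void addResource(Supplier<String> source) { resources.add(source); }

    // Step 2: assemble current information from every attached resource.
    public List<String> assembleInformation() {
        List<String> info = new ArrayList<>();
        for (Supplier<String> r : resources) info.add(r.get());
        return info;
    }

    public static void main(String[] args) {
        ProblemEntity pump = new ProblemEntity("coolant pump");
        pump.setCharacteristic("type", "centrifugal");
        pump.addResource(() -> "flow rate: 40 L/min");   // e.g. a sensor log
        pump.addResource(() -> "last service: 2010-06"); // e.g. a flat file
        System.out.println(pump.assembleInformation());
    }
}
```

Because the resources are data attached to the entity, updating where information comes from changes no engine code, which is the property argued for a few paragraphs below.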

The need to integrate step 3 into the process was one of the best known problems at the time. The age of open systems development had not yet arrived. Popular rule-based expert system tools had been built as stand-alone products to support only rule-processing; they could not be integrated with other programs and were not extensible. The JBoss rule processing system mentioned in chapter 1 deals with this problem directly. The rule-processing software is built in Java, carrying with it the characteristics of the underlying language. A rule-process can be initiated from another program, and program objects can be referenced directly in rules. This gives programmers much greater flexibility to fashion their own hard-coded relationships between rules and other components in each application.

In a more general approach, these relationships should be discovered automatically in the resources associated with the entities defined for the problem definition stage, the same mechanism used in support of the second step. Then, applications can change merely by changing the data associated with the application, just as is done in general rule-processing and expert systems. The program itself, a general processing engine, would not need to be changed, either when updating an application, or building entirely new applications.

In the final two steps, I primarily fell back on rule-processing and rule-based expert systems concepts again, but operating on the information accumulated from the first three steps.

Another thought occurred to me that fit the idea of artificial intelligence research perfectly. It might make my imagination seem slightly flexible, but I saw a one-to-one relationship between the general problem solving algorithm and the structure of good presentations I had learned in a high school speech course. (I was in fact considering just how general the above algorithm was, and how broadly it could be applied. There seemed no end to the possibilities, when one started replacing words like “problem” while maintaining the same processing logic.)

Monroe's motivated sequence

A. Attention step
B. Need step
C. Satisfaction step
D. Visualization step
E. Action step

The basic speech structure

A. Introduction
B. Body of the speech
C. Conclusion

It occurs to me now, so many years after the fact, that it is too difficult to recall all the thoughts that brought the general problem solving process into correspondence with the idea of presentations. I am not a numerologist, but cannot do better at this moment than to mention that both processes have five steps. After considering several examples, I became convinced that by adding well developed descriptive text as additional resource elements associated with problem entities, a program could automatically generate well-formulated reports in five sections, complete with data gathered and derived through each step in the problem solving process. The reports I envisioned could be as good as, if not better than, those produced by many human consultants.

This was interesting, but obviously not the top of the logic hierarchy. Many different problems arise each day – many different tasks to take care of. It cannot be said that solving one or many represents the top of the human activities management process. Somewhere around 1990, I met with an industrial engineering professor who had been interested in artificial intelligence applications for years. We had a long chat about the possibility of completely automated factories. This was still a decade before frequent online purchasing and customer management systems. But it seemed reasonable to contemplate a future in which everything from initial customer contact, sales, accounting, instructions to the factory floor, robotic manufacturing on demand, packaging, right out to the shipping dock would be fully automated.

Again – not a numerologist – but note that a more general description of the presentation process is given in just 3 parts. Coincidentally, that is the number of elements that I think necessary in the next step in high level logic – the subject of chapter 3.

Friday, October 29, 2010

The Ghosts in My Machine: Chapter 1


As a young man, when I enjoyed idle time and my daydreams tended to wander in strange directions, I found myself considering a rather unimaginative question. As computer languages and tools have evolved to higher and higher levels - “bottom-up” - where will they eventually reach? Where's the top?

To put this contemplation in perspective, the year was 1985. The computer under my desk was a first generation Texas Instruments PC with two floppy disk drives. Ethernet cables were being strung through our offices to network our computers for the first time, allowing messages to stream around the building at the blistering rate of up to one thousand bits per second. The idea of using personal computers to access a wide array of interesting information, to project presentations stored on them, and to somehow integrate them into “teleconferencing” were subjects of advanced industrial research and design. The Apple Macintosh was recent news. Yellow power ties were “in.” On the cutting-edge, people debated whether “object oriented programming” would ever really catch on.

All past generations are perceived as naive; scant comfort as I tell this story now. For I became somewhat obsessed with that odd and uninspiring question: Where is the top? As unanswerable as it seemed, the more I thought about it, the more I found value in the thinking.

The question did not occur to me entirely at random. The group that employed me at the time was particularly interested in “rule-based expert systems,” a well-developed form of rule-processing that was, at the time, thought of as artificial intelligence technology.
(RULE-1: (IF: <user-1> ((think)) that's <naive{in hindsight}>)
(THEN: ((wait)) 'till <user-1> ((READ-CONTINUE))))
In historical context, it was actually a much more interesting time than empty hindsight might suggest. There was focus on expanding the common roles of computers in data fetching and number crunching to include more powerful forms of symbolic processing. Artificial intelligence research was defined as trying to get machines to do what at the moment, people do better. It was a time to think basic thoughts and explore new directions. And then there was the hype.

The idea of artificial intelligence fascinates. Writers in the popular press could not resist the temptation to contemplate fantastic futures brought about by its commercialization, as if the full blossoming of machine intelligence was only months away. It happens in every generation. Today, in the light of well-funded advances in robotics, we worry too much about machines becoming in some way more intelligent than people and using that intelligence to take over the world – a theme perhaps not yet entirely exhausted in science fiction. There are the annual singularity conferences that include discussion on uploading your mind to a machine so that the ghost of your thought patterns can survive your death. (Sadly, it seems that not everyone will have use of a robot body like Zoe Graystone.) And then one can well wonder about human-machine marriage law once robots have become sufficiently advanced to serve as satisfactory sex mates and companions. (But can it cook?)

In the mid-1980s, perhaps we were too naive to connect such dark and mystical thoughts to our first generation personal computers. There simply wasn't much personality in “C:> dir /P” displayed on a monochrome screen. Rule-processing systems were just part of the parade of options opening up a new world of intelligent automation. But with them, complex business issues would be resolved at the press of a button, quality medical diagnosis and agricultural advice would be delivered to third world countries on floppy disks, the art and skill in untold areas of human endeavor could be translated into computer programs by “knowledge engineers.” As machines began turning out the best advice human experience and machine computation could deliver, the quality of life could improve everywhere and the world would become a better place.

The mood was upbeat. The excitement palpable. Researchers in all fields began competing for new funding to apply the technology in their own fields. Their ideas were wonderful, their plans divine. There was seemingly no end to the possibilities. I had a front row seat to much of it. My job involved presentations and discussions at universities and national laboratories throughout the country. I saw the wonder and curiosity in their faces as the software concepts were first introduced and the thrill of starting new projects with great promise. I exchanged friendly comments and jokes with them as their work proceeded through the first interesting efforts and did my best to respond to the thoughtful and carefully stated questions as problems arose. And then, in what in the vastness of history may seem like nothing more than a split second after the whole thing began, I felt their frustration turning to anger. For most of them, it turned out, rules were not enough.

Leading artificial intelligence researchers of the day were scrambling to make a go of their own commercial enterprises. There was evidence of bottom-up evolution in complexity. The idea that object oriented programming provided an opportunity to effectively model concrete objects and abstract ideas related to the real world was applied in frameworks to allow logical relationships between objects. “Frames” also provided a way to break rule systems into sub-systems.

But these few steps were not enough to keep the artificial intelligence revolution promised by excitable journalists going. It had been an interesting top-down experiment led by a few researchers whose bottom-up developments did not go far enough, fast enough. Too many visionary application builders from outside of engineering were failing. Overly heightened expectations caused the reputation of the whole artificial intelligence idea to suffer greatly. It didn't matter that someone somewhere might have been thinking about the next solution. Funding agencies and investors lost interest.

The change in mood turned my role in the experiment into something like a traveling complaint department. I tried, once or twice, to remind those who had used our technology that the scope and limitations of our products had been explained. In some cases, the conversations were merely new versions of old chats in which I had directly stated that significant parts of their design were not supported by our products and would be difficult to implement. But the disappointment was as palpable as the excitement had been before. Their visions had been clear, their ideas wonderful, their plans divine. It must be possible to do what they wanted to do. Something was to blame.

The groups that had tried to implement sophisticated intelligent applications cut across a nice spectrum of academic disciplines and interests. This led to what initially seemed a wide range of application-related problems. But as I listened to more stories, some parts of their problems began to sound similar. A pattern emerged. Underlying the diversity of application interests lay a concrete set of basic, definable technical problems.

Our company had a particularly good working relationship with one of the leading researchers, so I sent an email describing the problems. The list became part of a conference presentation somewhere. But there was no way that I could stop thinking about them. In my mind, the crash of artificial intelligence technology in the 1980s was transforming itself from a great disappointment to a unique opportunity. I had discovered a clear definition of the difference between what practical application builders wanted to do and what the technology of the day had to offer.

In the normal course of events, I might have worked on designs that addressed each of the individual problems on the list, with each solution having a potential for commercial application. This is the way progress is normally created - “bottom-up.” But the circumstances triggered more philosophical thoughts. The combination of the overly optimistic expectations of funding agencies and application builders demonstrated that bottom-up is not always good enough. Had my degrees been in management or marketing, I would probably have simply noted it as a classic error. But I am an engineer. Problems are meant to be solved, and it is somehow not in my nature to be able to turn my brain off to them.

It was in this stream of events that my mind became fixed on that naive and rather unimaginative question. If technology developers had anticipated the needs of application builders and moved directly toward satisfying those needs, the failure would not have been inevitable, as in hindsight it clearly was. But this was cutting-edge stuff. How would anyone know what they had not yet discovered through experimentation and experience – trial and error – yet another problem identification step that could lead to more progress?

Something else began to play in my mind. It had already become part of my life's wisdom to recognize that seemingly simple things, the ideas and processes that we mostly take for granted because they are common and therefore presumed uninteresting, are often the most profound. On many occasions I have seen sophisticated ideas initially fail, and then slowly evolve until the most basic logical considerations forced compliance. Then it worked.

Involvement in artificial intelligence naturally involves thinking about intelligence generally, about our own intelligence, and about our own thought processes. If artificial intelligence is working on things that, at the moment, humans do better than machines, we think about what we do and how we do it. My brain had not turned off this self-reflection either. Something had triggered my concrete confidence that I could solve problems that had not previously been solved – and my confidence that the solutions would be useful. I have mentioned it. Is it something that you, dear reader, have taken for granted because it is common and therefore uninteresting? Or do you know to what I refer?

There are pieces of generic processes; simple, common, so ordinary as to be overlooked. A common thought, that systematic problem solving begins with defining the problem, sent me to the mall to find a laboratory notebook. The jottings, ideas, diagrams and pseudo-code represented my obsession with breaking free of the bottom-up approach to development and progress. My eyes turned from the problems immediately surrounding me and slowly upward against a seemingly endless empty space. My search for “high level logic” had begun.


Footnote to chapter 1: The specific “failure” of artificial intelligence technology presented in this article has to do with its presentation and timing, along with overly enthusiastic expectations over a relatively short period. (related article) As techies around the world know, object oriented programming became a mainstream technique supported by extremely popular programming languages. Advanced rule-processing is an added tool in such things as database systems and is embedded in a wide range of “smart” products. The ability of software to process “business rules” is commonly marketed. One open-source rule-processing system (Drools) became so popular that it's now included in the JBoss Enterprise Platform, and there is a standard set of interfaces for rule processing components in the Java programming language. Although I had not seen “problem definition” as a framework component in a larger generalized process, the importance of problem identification had been pointed out in specifics, particularly with the development of diagnostic systems.

Link to Chapter 2.

Thursday, October 21, 2010

LEGO Mindstorms NXT Robots (leJOS)


Being oriented to work on tools and middleware, I allow myself to get enthusiastic when software reaches the stage that it makes something else relatively easy to do. So, I feel no regret posting this comment in advance of a working demonstration. (Something does actually work – read on!)

Creation of a robotics demo for HLL has to this point proceeded with a simple robot simulation. Movement and positions were nothing but numbers fed from the simulation and translated in the browser-based GUI to a colorful circle moving around through two rooms, which were nothing more than a 2D outline. (Only tested on Firefox, so if you're not using Firefox, you might not see the first (dynamic) image of the "robot" moving through the rooms below.)



The High Level Logic (HLL) software system deals with what the name implies – high level logic. It's designed to provide and support high level logic for all kinds of applications. So motor control and sensor feedback really have nothing to do with working on the HLL core system. Robotics is a good and fun application to work with – and it was certainly to the point back when a robotics research project was paying for the work.

For 2-3 years now, I've had a LEGO Mindstorms NXT kit that hadn't gotten any attention beyond trying out Microsoft's (then new) Robotics Studio. A friend of mine recently decided not to let it go to waste, and built the first robot by following the instructions in the booklet that came with it.


I've switched computers since last installing the Mindstorms software. The copy that came with the kit did not contemplate 64-bit architectures, so the simple installation from CD didn't work. After googling around a bit, I discovered a good set of instructions to get the job done. (Getting to Grips with Installing, Updating and Programming LEGO Mindstorms Kits.)

Software that came with the kit successfully installed, I checked to see if everything was working. Check, check, check. Now to change everything. What I really want is to create some smart high level logic using HLL and send commands from there to the robot through “experts.”

The focus of my work is on HLL, so I'm not interested (well, maybe I'm interested and it's more a question of time and keeping my efforts focused) in working out all the Bluetooth communication and NXT control code myself. Wouldn't it be nice if someone who had already developed that technology had made it available? And someone did. (Actually, I knew that.)

I installed leJOS NXJ, which includes replacing the firmware on the LEGO Mindstorms NXT brick. Now it's ready to interact with Java programs on my computer – well, not quite.

The installed software for leJOS hadn't contemplated 64-bit architectures either. When trying to run the first sample program, I got error messages telling me it couldn't find libraries like “intelbth_x64”. Luckily, the leJOS project contributors did something about that this year, and the problem was easy to fix (by replacing the ...\leJOS NXJ\3rdparty\lib\bluecove.jar file that came with the downloaded installation with the latest snapshot. Check here to see if that's still the most recent if you have this problem.)

Magnifique! The first sample program ran. It didn't matter that it only passed a Java program that plays a tune to the brick (which immediately played the tune). This is progress.

So, now you also have an accurate status report on the state of getting a LEGO Mindstorms NXT robot to run with HLL. I haven't hooked it up yet. I haven't yet even delved into the samples that come closer to what I want. But the basics are working, and that's an excellent start.

Saturday, October 16, 2010

The Prototype is Dead!

After getting back to concentrating on the core code for a little while, I'm happy to report that the HLL prototype is officially dead. The age of "clean up" has ended and everything ahead will have to do with improvement and expanding functionality.

The final throw may seem like a small thing. I had defined "trace" and "monitor" variables in every class from which I wanted to kick program trace information out to stdout and to the GUI. It was pretty nice to see trace information in the browser while developing. It let me see anything I wanted to see in what was basically an ongoing end-to-end test while I worked, and I didn't need to go back and forth between the GUI and console to see what was going on. (For more detail on the browser-based GUI, refer to posts like this one.)

But while focusing on everything else during prototyping, I had set the values independently in each class. It never seemed a pressing issue, because I would focus either on core development or on GUI development for lengthy periods of time. So the inconvenience of changing the values in each class never pushed hard enough against other pressing issues.

So, now there's a new config file called manage_cfg.xml, meant to be kind of a catch-all for this sort of thing. When anyone wants to switch the trace or monitor output on or off, they just set the values to true or false in the config file. One place - real quick - that's all there is to it.
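For illustration only, such a catch-all file might look something like this (the element names here are my sketch, not necessarily those in the actual manage_cfg.xml):

```xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<manageconfig>
 <trace>true</trace>   <!-- program trace to stdout -->
 <monitor>false</monitor>   <!-- trace output to the browser-based GUI -->
</manageconfig>
```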

Need more reason to understand my celebratory post? Here it is. I started writing error checking code in the program that processes the input. What if there is no value specified for trace? Do I want to warn people, or just let it default to false? "Wait a minute," I thought. This is really a job for an XML schema. So, I quickly created one.
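Sketching what such a schema might look like (assuming hypothetical trace and monitor elements; the real schema could differ), xs:boolean does the value checking:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
 <xs:element name="manageconfig">
  <xs:complexType>
   <xs:sequence>
    <!-- minOccurs="0" lets an entry be omitted entirely;
         the reading code can then default it to false -->
    <xs:element name="trace" type="xs:boolean" minOccurs="0"/>
    <xs:element name="monitor" type="xs:boolean" minOccurs="0"/>
   </xs:sequence>
  </xs:complexType>
 </xs:element>
</xs:schema>
```

With the schema in place, a validating parser rejects anything that isn't a legal boolean, so the hand-written error checking largely disappears.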

Maybe it's just me, but that strikes me as moving forward. It's like development rather than clean-up.

Friday, October 15, 2010

Do You Want 1000 Entities Cooperating? OK with me!

Current Issues #1: Configuration Files and Processing reported the importance of configuration files and the need to handle them more "generically" than in the prototype. One particular problem stemmed from the fact that using multiple distributed copies of HLL was a sudden flash of insight during prototyping.

Building the prototype with a robotics example application close at hand, I originally thought any user interface would communicate with the robot's HLL directly. That would be the same as communicating with the robot. Then it occurred to me that the Command Center (user interface) needs its own separate intelligence to do the things it needs to do intelligently, so it should have its own personal copy of HLL. Then WOW! - Once that's working, you can get as many "entities" (robots, complex specialized experts, - or any distributed entities for any type of application(s)) as needed to do anything. (And with special regard for robotics, it would simplify an effort to get robots to cooperate.)

OK (yes, I'm being verbose - selling HLL), back to the prototype. Before I thought of that, I had set up the system to read a simple XML file that gave the host and port of the HLL system, supporting loose coupling. Then I added the browser-based GUI, which also needed to be loosely coupled, so I created another XML file and explicitly handled its reading in the code. Then I decided the Command Center should have its own copy of HLL, and explicitly added that handling yet again. I handled later configurations better, but as far as the communication information was concerned, it was clear it had to change.

I reprogrammed the handling of the comm configuration file, so that it reads and stores a list of comm entries of arbitrary length. In order to add information for a new connection, the application programmer needs only extend the XML file.

<?xml version="1.0" encoding="ISO-8859-1"?>
<commconfig>
 <config>
  <name>server</name>
  <type>hll</type>
  <host>anyreachablehost.com</host>
  <port>4049</port>
 </config>
 <!-- ... continue ad infinitum -->
</commconfig>
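For illustration, reading such a list might look roughly like this in Java (the class and method names are my own sketch, not the actual HLL source):

```java
import java.io.ByteArrayInputStream;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Sketch: read a commconfig list into a name -> {host, port} map.
public class CommConfig {

    static Map<String, String[]> parse(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("ISO-8859-1")));
        Map<String, String[]> entries = new HashMap<>();
        NodeList configs = doc.getElementsByTagName("config");
        for (int i = 0; i < configs.getLength(); i++) {
            Element c = (Element) configs.item(i);
            String name = c.getElementsByTagName("name").item(0).getTextContent();
            String host = c.getElementsByTagName("host").item(0).getTextContent();
            String port = c.getElementsByTagName("port").item(0).getTextContent();
            entries.put(name, new String[] { host, port });
        }
        return entries;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>"
                + "<commconfig><config><name>server</name><type>hll</type>"
                + "<host>anyreachablehost.com</host><port>4049</port></config></commconfig>";
        String[] server = parse(xml).get("server");
        System.out.println(server[0] + ":" + server[1]); // anyreachablehost.com:4049
    }
}
```

Adding a new connection then means nothing more than adding another config block to the XML file; no code changes at all.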


To send a message to any entity in the list, a method called sendCommand() in HLL's HLLCommUtils class is used.

response = hllCommUtils.sendCommand(commandMessage, "server", true);

sends a message to the "server" at the host and port specified in the XML above, and receives a response. The sendCommand() method is used whether the message is passed internally ("server" is in fact currently the reserved word for some of HLL's basic internal communication) or to another HLL installation 1000 miles away, using whatever naming convention is chosen for identifying HLL installations. It's only required that the names be unique.

Both "commandMessage" and "response" are complex objects, allowing complex messages to be passed back and forth easily. A variety of message formats are in fact supported; e.g. SOAP, Java objects, strings ... The commandMessage can contain information about the recipient of the message that goes beyond host and port. Issue #2 is about making these messages FIPA compliant.

Monday, October 11, 2010

US physics professor: 'Global warming is the greatest and most successful pseudoscientific fraud I have seen in my long life'

Newton: "Fie on you, Hansen, Mann, Jones et al! You are not worthy of the name scientists! May the pox consume your shrivelled peterkins!"

Harold Lewis is Emeritus Professor of Physics at the University of California, Santa Barbara. Here is his letter of resignation to Curtis G. Callan Jr, Princeton University, President of the American Physical Society.

Anthony Watts describes it thus:

This is an important moment in science history. I would describe it as a letter on the scale of Martin Luther, nailing his 95 theses to the Wittenburg church door. It is worthy of repeating this letter in entirety on every blog that discusses science.

It’s so utterly damning that I’m going to run it in full without further comment. (H/T GWPF, Richard Brearley).

Dear Curt:

When I first joined the American Physical Society sixty-seven years ago it was much smaller, much gentler, and as yet uncorrupted by the money flood (a threat against which Dwight Eisenhower warned a half-century ago). Indeed, the choice of physics as a profession was then a guarantor of a life of poverty and abstinence—it was World War II that changed all that. The prospect of worldly gain drove few physicists. As recently as thirty-five years ago, when I chaired the first APS study of a contentious social/scientific issue, The Reactor Safety Study, though there were zealots aplenty on the outside there was no hint of inordinate pressure on us as physicists. We were therefore able to produce what I believe was and is an honest appraisal of the situation at that time. We were further enabled by the presence of an oversight committee consisting of Pief Panofsky, Vicki Weisskopf, and Hans Bethe, all towering physicists beyond reproach. I was proud of what we did in a charged atmosphere. In the end the oversight committee, in its report to the APS President, noted the complete independence in which we did the job, and predicted that the report would be attacked from both sides. What greater tribute could there be?

How different it is now. The giants no longer walk the earth, and the money flood has become the raison d’ĂȘtre of much physics research, the vital sustenance of much more, and it provides the support for untold numbers of professional jobs. For reasons that will soon become clear my former pride at being an APS Fellow all these years has been turned into shame, and I am forced, with no pleasure at all, to offer you my resignation from the Society.

It is of course, the global warming scam, with the (literally) trillions of dollars driving it, that has corrupted so many scientists, and has carried APS before it like a rogue wave. It is the greatest and most successful pseudoscientific fraud I have seen in my long life as a physicist. Anyone who has the faintest doubt that this is so should force himself to read the ClimateGate documents, which lay it bare. (Montford’s book organizes the facts very well.) I don’t believe that any real physicist, nay scientist, can read that stuff without revulsion. I would almost make that revulsion a definition of the word scientist.

So what has the APS, as an organization, done in the face of this challenge? It has accepted the corruption as the norm, and gone along with it. For example:

1. About a year ago a few of us sent an e-mail on the subject to a fraction of the membership. APS ignored the issues, but the then President immediately launched a hostile investigation of where we got the e-mail addresses. In its better days, APS used to encourage discussion of important issues, and indeed the Constitution cites that as its principal purpose. No more. Everything that has been done in the last year has been designed to silence debate

2. The appallingly tendentious APS statement on Climate Change was apparently written in a hurry by a few people over lunch, and is certainly not representative of the talents of APS members as I have long known them. So a few of us petitioned the Council to reconsider it. One of the outstanding marks of (in)distinction in the Statement was the poison word incontrovertible, which describes few items in physics, certainly not this one. In response APS appointed a secret committee that never met, never troubled to speak to any skeptics, yet endorsed the Statement in its entirety. (They did admit that the tone was a bit strong, but amazingly kept the poison word incontrovertible to describe the evidence, a position supported by no one.) In the end, the Council kept the original statement, word for word, but approved a far longer “explanatory” screed, admitting that there were uncertainties, but brushing them aside to give blanket approval to the original. The original Statement, which still stands as the APS position, also contains what I consider pompous and asinine advice to all world governments, as if the APS were master of the universe. It is not, and I am embarrassed that our leaders seem to think it is. This is not fun and games, these are serious matters involving vast fractions of our national substance, and the reputation of the Society as a scientific society is at stake.

3. In the interim the ClimateGate scandal broke into the news, and the machinations of the principal alarmists were revealed to the world. It was a fraud on a scale I have never seen, and I lack the words to describe its enormity. Effect on the APS position: none. None at all. This is not science; other forces are at work.

4. So a few of us tried to bring science into the act (that is, after all, the alleged and historic purpose of APS), and collected the necessary 200+ signatures to bring to the Council a proposal for a Topical Group on Climate Science, thinking that open discussion of the scientific issues, in the best tradition of physics, would be beneficial to all, and also a contribution to the nation. I might note that it was not easy to collect the signatures, since you denied us the use of the APS membership list. We conformed in every way with the requirements of the APS Constitution, and described in great detail what we had in mind—simply to bring the subject into the open.

5. To our amazement, Constitution be damned, you declined to accept our petition, but instead used your own control of the mailing list to run a poll on the members’ interest in a TG on Climate and the Environment. You did ask the members if they would sign a petition to form a TG on your yet-to-be-defined subject, but provided no petition, and got lots of affirmative responses. (If you had asked about sex you would have gotten more expressions of interest.) There was of course no such petition or proposal, and you have now dropped the Environment part, so the whole matter is moot. (Any lawyer will tell you that you cannot collect signatures on a vague petition, and then fill in whatever you like.) The entire purpose of this exercise was to avoid your constitutional responsibility to take our petition to the Council.

6. As of now you have formed still another secret and stacked committee to organize your own TG, simply ignoring our lawful petition.

APS management has gamed the problem from the beginning, to suppress serious conversation about the merits of the climate change claims. Do you wonder that I have lost confidence in the organization?

I do feel the need to add one note, and this is conjecture, since it is always risky to discuss other people’s motives. This scheming at APS HQ is so bizarre that there cannot be a simple explanation for it. Some have held that the physicists of today are not as smart as they used to be, but I don’t think that is an issue. I think it is the money, exactly what Eisenhower warned about a half-century ago. There are indeed trillions of dollars involved, to say nothing of the fame and glory (and frequent trips to exotic islands) that go with being a member of the club. Your own Physics Department (of which you are chairman) would lose millions a year if the global warming bubble burst. When Penn State absolved Mike Mann of wrongdoing, and the University of East Anglia did the same for Phil Jones, they cannot have been unaware of the financial penalty for doing otherwise. As the old saying goes, you don’t have to be a weatherman to know which way the wind is blowing. Since I am no philosopher, I’m not going to explore at just which point enlightened self-interest crosses the line into corruption, but a careful reading of the ClimateGate releases makes it clear that this is not an academic question.

I want no part of it, so please accept my resignation. APS no longer represents me, but I hope we are still friends.

Hal

Harold Lewis is Emeritus Professor of Physics, University of California, Santa Barbara, former Chairman; Former member Defense Science Board, chmn of Technology panel; Chairman DSB study on Nuclear Winter; Former member Advisory Committee on Reactor Safeguards; Former member, President’s Nuclear Safety Oversight Committee; Chairman APS study on Nuclear Reactor Safety

Chairman Risk Assessment Review Group; Co-founder and former Chairman of JASON; Former member USAF Scientific Advisory Board; Served in US Navy in WW II; books: Technological Risk (about, surprise, technological risk) and Why Flip a Coin (about decision making)

Sunday, October 10, 2010

Eventually, it will be illegal for humans to drive

Google is now telling us that they are behind a project developing cars that drive themselves in traffic.



It's been a "secret" project for a while, according to the report; but they're working with people in a geographic area that's been heavily funded in robotics and unmanned vehicles and has taken home some trophies in competition. (DARPA Grand Challenge)

I'm not familiar with Google's autonomous vehicle technology, not being in their inner circle of secret friends. So I can't tell you anything about how good it really is. And since they're not using HLL (yet, or as far as I know), I'm not going to endorse their effort. :)

I just thought I'd comment on the first response I got after posting their article link to my Facebook page. "No thanks. I prefer to be the one making decisions behind my wheel ;-)"

I'm not picking on the guy who said it. His comment undoubtedly represents the feelings of a lot of people. Mine is just a prediction - making predictions is something I've been quite good at, and probably am even more so in my old age.

But this prediction is at least based on straightforward logic. It's pretty easy really. Autonomous vehicles already outperform humans in a number of ways, and we can expect even more improvement in the future. Most auto companies that have invested in autonomous technology have focused the most attention on safety. Step one is using sensors to detect and understand the environment and surrounding traffic, and to warn or avoid dangerous situations.

The more difficult challenge is to accurately predict when autonomous vehicles will be accepted by state law, opening the door to mass production and sales, which will in turn increase investment in research and development even further and increase the pace at which further improvement is realized. I'm optimistic, not just based on what I've said in this paragraph, but based even more on R&D I've been involved with. We can increase the pace of improvement dramatically, and accomplish things that humans cannot. (Yes, the age of AI is upon us - and yes, we're still human. It's the AI you see ...)

"Eventually, it will be illegal for humans to drive," I commented on Facebook.

"By that time it will be illegal for us to think and our humanity will have already been robbed," responded my Facebook friend.

I understand the sentiment (which is why I do indeed take the conversation seriously) but think the two issues are separate. It will eventually be illegal for humans to drive on public roads and highways because a much safer alternative will be available. Humans driving cars will be considered (relatively) too dangerous. Why should the rest of humanity take the risk of being slaughtered by accidents, when such things are extremely rare when the machine does the driving?

I believe that you'll still be allowed to do most of the thinking, at least for a while. An autonomous vehicle doesn't care whether you go to Aunt Suzie's or the race track on Saturday. But one day, even the arguments about whether it's faster to take the Lincoln or Holland Tunnel will be a thing of the past. Honestly, do you think you'll miss it?

Tuesday, October 5, 2010

Can Unmanned Robots Follow The Laws Of War?

NPR interviews Patrick Lin, an assistant professor of philosophy and research director of the Ethics and Emerging Sciences Group at California Polytechnic State University; and Joanne Mariner, director of Human Rights Watch's Terrorism and Counterterrorism Program.

I would like to comment briefly on what seems to be settling in as presumptive knowledge regarding technology choices. I'll blog more extensively on this later, sorting it out with HLL, which hints at why I'm interrupting this announcement to comment. Keep in mind that I'm only responding to an NPR interview, not an in-depth thesis, and that I've already mentioned that the interdisciplinary discussion on robot ethics is a good thing - so, I discuss. (In fact, I'll be picking at just one specific point, and I think the interview was a good one.)

Patrick Lin, among others, states that increasing machine intelligence and autonomy comes down to reliance on either simple rules or learning technology, and that with learning technology we will not be able to predict behavior. So the technical choices don't look good.

My quick techie response is first to challenge people to tell me how it is that critical human decision making doesn't fit the rule model, then to make what are apparently some startling statements about machine learning.

The laws of war, and other basic decisions can be expressed as rules. if CIVILIAN-NON-COMBATANT then DON'T-FIRE seems to make sense (even if the machine recognition problem is difficult). if ACTION-NOT-SAFE then CANCEL. if OUTNUMBERED then RUN-AWAY. if BATTLE-WON then STOP-FIRING. There just seems to be a whole lot of basic stuff that can be covered by rules - even simple ones. And doesn't this fit the human decision-making model pretty well? (Which is why everyone understands this comment.)
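To make the point concrete, here is a purely illustrative sketch of those examples as ordered condition-action rules in Java. This is not HLL code, and recognizing each condition is of course the hard part:

```java
// Illustrative only: the laws-of-war examples above as ordered rules.
// The ordering itself encodes a priority - civilian protection first.
public class RulesSketch {

    static String decide(boolean civilianPresent, boolean actionSafe,
                         boolean outnumbered, boolean battleWon) {
        if (civilianPresent) return "DON'T-FIRE";
        if (!actionSafe)     return "CANCEL";
        if (outnumbered)     return "RUN-AWAY";
        if (battleWon)       return "STOP-FIRING";
        return "PROCEED";
    }

    public static void main(String[] args) {
        System.out.println(decide(true, true, false, false));  // DON'T-FIRE
        System.out.println(decide(false, true, false, false)); // PROCEED
    }
}
```

Even this trivial form shows that the rule model scales in a predictable, inspectable way; a real system would add many more rules, but each one stays readable.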

Regarding "unpredictable" learning technology, it seems to me that autonomy is, at least to some extent, synonymous with lack of predictability - just as with humans and other animals. That seems logical, but in fact there are effective ways to place limits on the range of behaviors an autonomous robot will develop and use. Learning robots can be programmed to go somewhere and do something, for example, with learning / adaptation used to allow them to adjust to conditions. In the laboratory, for example, four-legged robots with a broken leg have adapted their gait to three legs. If a humanoid robot limps the last mile home due to a broken part, that doesn't change the mission. My general point is that design engineers can still control what learning / adaptive robots are programmed to do, even while there's some autonomy in how they do it.

In my contribution to Gerhard Dabringer's interviews below, I spend a section roughly outlining a development process (very generally) that includes training and testing. Even for development engineers with little knowledge of machine learning, I think it makes a worthwhile point. Although the character of developing learning machines is at some points different from traditional development, the overall process is the same. Quality assurance doesn't become obsolete in developing, using, and maintaining these advanced systems.

For a broader debate, there is a series of interviews conducted by Gerhard Dabringer of the Austrian military's Institute for Religion and Peace. Click here for the interviews and more.

English translation of a Swedish documentary on the same topic: click here.

Saturday, October 2, 2010

Roboethics and Robot Ethics

I mentioned in a previous post that I would blog on robot ethics. Let me meander a bit before getting into technical detail on how this relates to HLL. (I mean that I'm not going to get into that detail in this post. This post is a meander on the topic.)

The international discussion on robot ethics involves interacting with people from a variety of academic disciplines, such as moral or ethical philosophers, as well as other engineers and scientists who are interested in the subject. In my view, the interdisciplinary discussion is both quite interesting and valuable.

Me – I'm an engineering scientist type. Although I'm willing to opine on just about anything, when I discuss robot ethics, I typically try to remain in my engineering persona (which is not so hard for me).

I would break the discussion in two. There is a part that is concerned about how humans use technology. Even this breaks down further. There are some who are particularly concerned about the increased autonomy of weapons systems; worrying that machines will increasingly make life and death decisions in military roles. Others (sometimes they overlap) are quite concerned about the use of robots in medicine and particularly elderly care; to what extent will machines replace human contact, etc.

I am assured that naming each area will be the subject of lengthy debate, but there has been at least some preliminary agreement in some quarters that human ethics in the use of robots should be referred to as “roboethics.” (Roboethics Facebook group.)

What does that have to do with engineers? Well, plenty. Engineers invent, design, even manage and use technology. Yes – the decisions of those who pay the bills matter a lot. Even that distinction has a place in the discussion. But it is one of those times when knowledge and awareness of ethical concerns within the engineering community (so to speak) can be important.

The other major branch has - at least in my mind at this early stage - more to do, directly, with HLL. Advances in machine intelligence and autonomy should include advances in autonomous machine ethical decision-making. The ultimate challenge for the "moral machine" is autonomous moral agency. If this seems an interesting subject to you, I will again suggest Wallach and Allen's book, Moral Machines: Teaching Robots Right from Wrong. Their blog is (click) here.

And here's a link to the Robot Ethics Facebook group.