By Roger F. Gay
I just read some material related to a 2011 book entitled “The Believing Brain”, “synthesizing thirty years of research by psychologist, historian of science, and the world's best-known skeptic Michael Shermer.” Although I've never heard of him, Shermer's “theory of belief” rests on the insight that people develop response patterns, which in any given circumstance can turn out to be right or wrong.
The two key words are “patternicity” and “agenticity”. Shermer's “patternicity”, more specifically, refers to the tendency to find patterns where there are none. This seems to give away the author's purpose: to explain why some people believe things that he doesn't. “Agenticity” follows, by imagining that causal agents exist to control what is perceived in the patterns: governments, for example. There are a lot of “conspiracy theorists” out there who believe that governments exist, that they have a lot of power, and that the exercise of that power actually has a significant impact via law, regulation, enforcement, and abuse. Evolutionary forces have given us all the tendency to hold such false beliefs, and therefore “science” should reign instead.
Perhaps a fan will add another marketing-driven copy-paste-modify review of his book somewhere, but that is not the purpose of my article. His thesis suggests that higher levels of thought and behavior are subject to control by your “Inner Zombie”. So I thought it might be time to post another comment on the topic. The background for my discussion here is that I'm an old guy. I'm pretty sure I've seen the Inner Zombie at work throughout my lifetime, and have even recognized it working in me.
Let me start with an effort to be less politically driven, by categorizing with less prejudice. I will start from the premise that the theory that anyone can know everything about everything has been disproved. Waiting for answers from “science” (in quotes because I will be discussing the term) before proceeding on each course of action would have led to our extinction. Starting from this reality, I'm going to move along a different path than Shermer's. Making decisions with less than complete information is one of the highest behavioral skills humans have, and one that AI and science generally tend to struggle with. Cracking its secrets could be profound.
If you look at the research on child development, which is the most solid and well-researched part of human development science, you will most definitely find innate behavior related to recognizing and classifying patterns. No one familiar with AI work would doubt that either. One of the familiar characteristics of intellectual development is generalization. Birds and airplanes fly. Parents point to them by pointing up. In the experience of a young child, they are the same thing (or at least have the same name) until someone explains the difference.
It seems quite obvious that our innate ability to generalize is strongly related to our ability to think abstractly. What results from the ability to fly, for example, might be applied to anything that flies. (Go ahead. Take a chance.) Abstract symbols (like language) flow naturally from our lips as we apply what we know, or think we know, about a class of things: knowledge we get from generalizing.
Our simple pattern matching and generalizing can be correct, like noticing that birds and airplanes both fly, or that parents can point upward and express fascination when referring to either one. But our generalizations may also be flawed, and the conclusions that follow from them may be wrong.
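For readers coming from the AI side, here is a minimal sketch of that kind of naive generalization. Everything in it, the feature names, the “class knowledge”, the code itself, is my own illustration and not anything from Shermer's book; it just shows how cheap feature-based generalization can be useful and wrong at the same time.

```python
# A toy "child" that has generalized one class of things: whatever flies.
# All names and properties here are invented for illustration.
class_knowledge = {
    "flies": {"can be pointed at in the sky", "has wings", "lays eggs"},
}

# What the child has actually observed about two new things.
observations = {
    "sparrow": {"flies"},
    "airplane": {"flies"},
}

def infer(thing: str) -> set[str]:
    """Apply everything known about every class the thing appears to belong to."""
    inferred = set()
    for feature in observations.get(thing, set()):
        inferred |= class_knowledge.get(feature, set())
    return inferred

for thing in observations:
    print(thing, "->", sorted(infer(thing)))
# Both the sparrow and the airplane get credited with laying eggs. The
# generalization is fast and mostly useful; the conclusion is simply wrong for
# the airplane until someone, or some feedback, explains the difference.
```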
It also seems rather obvious, to an old man at least, that we know the trick behind the absolutely superior human ability to make decisions with less than complete information. And this should surely be worth noting in the artificial intelligence community. I'm suggesting that actual real-world, human-level “intuition” may be more easily achieved artificially than anyone whose thoughts on the topic I've heard or read seems to think.
OK, ok. Let's be a little less optimistic, since merely thinking an idea isn't implementing it. But at least in theory, it seems we have a map. Perhaps the biggest leap of belief in accepting the map is accepting that humans aren't advanced calculating machines that always get the answers right. They just usually draw conclusions sufficient for survival of the species, which is strongly related to individual survival. We don't need to be perfect, and we're not.
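If you want to see what I mean by sufficient rather than perfect, here is another small sketch, again purely my own illustration rather than anything from the book: a decision-maker that only ever sees noisy estimates and takes the first option that looks good enough, instead of computing the provably best answer.

```python
import random

random.seed(1)  # only to make the illustration repeatable

def noisy_estimate(true_value: float) -> float:
    """Incomplete information: we never see the truth, only a noisy estimate."""
    return true_value + random.gauss(0, 1.0)

def satisfice(options: dict[str, float], good_enough: float) -> str:
    """Take the first option whose estimated value clears the threshold."""
    for name, true_value in options.items():
        if noisy_estimate(true_value) >= good_enough:
            return name
    # Nothing cleared the bar; fall back to the best of the noisy estimates.
    return max(options, key=lambda name: noisy_estimate(options[name]))

# Invented options and values, only to show the mechanism.
options = {"wait for more data": 0.2, "take the obvious route": 2.5, "gamble": 1.0}
print(satisfice(options, good_enough=2.0))
# The answer is usually reasonable and occasionally wrong, which is the point:
# the process is built for survival-grade sufficiency, not correctness.
```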
Where we really need to start is by imagining a group of stereotypical Hollywood “Valley girls” parading through a mall and chatting. Many of the basic facts they rely on, those that come from direct observation, may be objectively accurate. Their choice of focus, common frame of reference, and conclusions demonstrate, for most of us, the flexibility and adaptability that result from our ability to generalize and abstract. Any conclusions they draw from abstract thinking might be wrong. In that respect, they're just like the rest of us.
I'm hoping that the “Valley girl” reference might have evoked a prejudice in you (yes, you, the reader): a particular composite that might be useful in communicating my point, even though I have no idea how the IQ of girls from the San Fernando Valley compares with the general population. If we want to aim realistically at human-level behavior, then we have to accept that we're not going to get there by trying to create “perfect” machines that always get the answers right. That approach puts too much focus on a desirable result, great machines, and not enough on a process that still does a lot of things much better. Mixing the two can also make us think that humans are inherently flawed, and such an undesirable model that we'd want nothing more to do with them.
We are all scientists, even Valley girls. Let's now follow along as our group of stereotypical girls drives home. Let me further reveal the ending: they get home safe and sound. They're still the same people, yet no matter how flawed we imagine the conclusions they drew while chatting in the mall, they still did well enough with the driving task to get the job done. Why does this make them scientists? It does because, unlike any flights of fancy in their chat (including gossip about the other girls and boys), driving involves constant real-world, objective feedback. This is what science is made of. They're engaging in the scientific process in its most primitive and natural form. With each action and reaction, their judgments are tested, and they are aware of the results. They have learned from the process.
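For the programmers in the audience, the driving part of the story can be sketched as a bare feedback loop. The framing and the numbers are mine, not the author's: the driver's internal model starts out wrong, but the road grades every action, and the corrections accumulate.

```python
# Act on a flawed belief, observe objective feedback, correct the belief.
# All values are invented; the loop is the point.
target = 0.0                 # center of the lane
position = 1.2               # starting offset from the lane center
belief_about_steering = 0.5  # initially flawed model of how hard to steer
learning_rate = 0.3

for step in range(15):
    previous_position = position
    # Act on the current (possibly wrong) belief.
    position += -belief_about_steering * previous_position
    # Objective feedback from the world: how far off are we now?
    error = position - target
    # Undershooting says steer harder next time; overshooting says back off.
    belief_about_steering += learning_rate * error * previous_position
    print(f"step {step:2d}: position {position:+.3f}")
# The judgments are tested on every iteration, and the driver ends up near the
# lane center even though the starting belief was badly off.
```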
I'm going to end this article, which has grown too long under the weight of my informal blogging style, with one simple reference. I don't know how much each of you will need to think about it, but I feel reasonably certain (even though I don't know you, so I'm working here with insufficient information) that you will, to varying extents, understand my point. HAL 9000. (I'm guessing you know how it works, at least to the extent explained by the story.)