That is the question I wondered about sometime last week as a confluence of thoughts came together. This con-fusion of material came from a variety of sources. Together, from all of their multiple perspectives, I was able to grok another part of the matrix, so to speak.
These were the factors:
1. I just reread George Lakoff’s book, Women, Fire, and Dangerous Things, which is a book about, amongst other things, human categorization and cognitive psychology. Crucially, it proposes a system/understanding of human thought/categorization/understanding that is in contrast to the so-called classical categories and the associated ideal of the human mind that is tied up with set theory, modern computational theories and artifacts, and algorithms.
2. The fact that IBM’s Watson computer had just recently competed against humans on Jeopardy!
3. The fact that one of my students is writing a paper on the growth of artificial intelligence and, within it, makes the simple assumption that intelligence is pretty much the same as computation.
4. A long-standing recognition that the concepts of truth and of meaning are not necessarily tied together. This is a thought that I had way back in 2004 or thereabouts, but it is something that came up again in the Lakoff book in a central way.
5. Somewhere on Andrew Sullivan’s blog a week or two ago there was a post about someone who would take part in a study, typing answers on a screen as people sent him questions, with the goal of convincing them that he was human. A little creative googling provides me with this link, which seems to be about right. It notes that this is part of the annual Loebner Prize, which involves a Turing Test.
Altogether, these different currents of thought came together at some point in the fleshy matter within my skull to form the question,
Can Robots tell stories?
By that, what I was really getting at was the sort of mental construct that is often used to describe what intelligence is… and here the linkage to AI work often becomes pretty clear: it is the assumption that “intelligence” is the ability to compute and process data so as to solve a problem. For example, if you go to the Wikipedia Turing Test page, you find this quote concerning AI researchers who are opposed to Turing Tests. Instead of requiring a machine to mimic a human in order to be considered “intelligent,” such researchers state:
Second, creating life-like simulations of human beings is a difficult problem on its own that does not need to be solved to achieve the basic goals of AI research. Believable human characters may be interesting in a work of art, a game, or a sophisticated user interface, but they are not part of the science of creating intelligent machines, that is, machines that solve problems using intelligence. Russell and Norvig suggest an analogy with the history of flight: Planes are tested by how well they fly, not by comparing them to birds. “Aeronautical engineering texts,” they write, “do not define the goal of their field as ‘making machines that fly so exactly like pigeons that they can fool other pigeons.’” (Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, NJ: Prentice Hall, ISBN 0-13-790395-2, pg. 3)
This is what I’m getting at. Robots, Machines, Computers, Algorithms, whatever–these are all material objects that are based, at their very core, on computation. What is apparently assumed in much (although not all) AI research–and in a lot of modern society today–is that this practice of computation/problem-solving–an activity that the aforementioned artifacts and processes do rather well (after much human thought and effort has gone into it…)–is what intelligence is all about.
But I think that’s actually, probably, wrong.
At least, I think it is wrong to assume that computation or algorithms or problem solving form the one, key, central function or driving force of what consciousness and human intelligence (which is basically what I understand being human to be) are. I do not dispute that computation, logic, and problem-solving are some of the tools that the human brain can employ to deal with reality, but they are just some of the tools it possesses. They are not the engine, merely tools.
Instead, and this comes out of the thoughts and research in Lakoff’s book about how humans actually reason through bodily actions, gestalt perceptions, and the use of imagination and analogy, what appears to be a better metaphor for human intelligence is the concept of story-telling. Humans are not just things that compute; rather, they are creatures that tell stories. Most fundamentally, they are creatures continually telling themselves a story about their own lives, and trying to shape and create that story and give it meaning.
Unlike computation, this process is not controlled by the concept of truth in any strong or deterministic fashion. People lie to themselves all the time. They make shit up; they perceive reality and project visions and thoughts in ways that are, depending on your viewpoint, highly creative or willfully delusional. In all of this, there are always meanings that people embed in their actions, and meaning is thus far more central to what makes us human than the truth or falsity of our actions, knowledge, or perceptions.
Thus, I come back to my question: Can robots tell stories? Suppose we eventually get to the point where we can create a robot that can create and tell stories–not stories that are just the result of programming in the entire repertoire of human history and running algorithms to rearrange stuff****, but stories that are new, that obviously come from the robot perceiving its own story and embedding meaning in it that others could then relate to. Then I think we will finally see the dawn of an interesting age in which we have a much better grasp of what intelligence is.
However, I do wonder whether such a situation is possible using non-organic means. For example, in the recent battles between IBM’s Watson and humans on Jeopardy, Watson was composed of 90 IBM Power 750 Express servers, each powered by four 8-core processors for a total of 32 cores per machine, at a cost of only around $3 million. This cost does not include the air-conditioned room or the power usage: the machine required 175,000 watts, compared to just 12 for a human brain (and that’s not counting the power used by the air-conditioning that keeps it from failing!).
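The scale of that disparity is easy to miss in prose; using only the figures cited above, a couple of lines of arithmetic make it concrete:

```python
# Power comparison using the figures cited above: Watson's reported draw
# versus the 12-watt figure given for a human brain.
watson_watts = 175_000
brain_watts = 12

ratio = watson_watts / brain_watts
print(f"Watson draws roughly {ratio:,.0f} times the power of a human brain")
```

That is roughly four orders of magnitude, before the air-conditioning bill.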
Does this mean I think such machines are not useful?
Hell no! I’m sure they’ll find lots of uses for such machines, but what I would argue is that this is not the same thing as intelligence, and it is not at all likely to be fruitful in the long term for understanding how consciousness or real creative intelligence works. Interestingly enough, I have some support for this notion in this article about the Watson computer, which describes how the newest AI work is actually about trying to give machines actual bodies to learn with, rather than trying to “program in” the entire world as a closed domain–something the world really isn’t.
In any case–it shall be interesting to live in a time when androids do really dream of electric sheep–but then again–maybe they never will….
****Note–it’s already possible to create algorithms that generate texts that resemble the meaningless, nonsensical rantings of postmodern literary criticism. Go here. Hit Reload as often as you want. However, the point, which quickly becomes obvious, is that these texts have no meaning. They are symbols strung together in a fashion that appears acceptable, but they don’t convey any real information or any actual sense of meaning.
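To see how little machinery such a generator needs, here is a minimal sketch of one common approach: recursively expanding a hand-written grammar of jargon. (This is my own toy illustration, not the actual rules behind the generator linked above; the grammar and word lists are invented for the example.)

```python
import random

# A toy recursive grammar: uppercase keys are symbols to expand,
# anything else is a literal word. The vocabulary is invented.
GRAMMAR = {
    "SENTENCE": [["The", "CONCEPT", "of", "CONCEPT", "VERB", "the", "CONCEPT", "of", "CONCEPT", "."]],
    "CONCEPT": [["discourse"], ["hegemony"], ["textuality"], ["the paradigm"]],
    "VERB": [["deconstructs"], ["problematizes"], ["reifies"]],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a list of words."""
    if symbol not in GRAMMAR:
        return [symbol]  # a terminal word: emit it as-is
    words = []
    for part in rng.choice(GRAMMAR[symbol]):
        words.extend(expand(part, rng))
    return words

def generate_sentence(seed=None):
    rng = random.Random(seed)
    words = expand("SENTENCE", rng)
    # attach the final period without a preceding space
    return " ".join(words[:-1]) + words[-1]

print(generate_sentence(seed=0))
```

Every output is grammatical and vaguely profound-sounding, yet the program manifestly means nothing by it, which is exactly the point above: syntax without meaning is cheap.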