Calculation, Perception, and Intelligence

Is your brain a computer?

Obviously, it’s not made out of silicon, but is your brain just a biological calculator that determines your actions by doing some kind of calculation?

Many people seem to think so, but I’ve never been sold on this idea. As much as I love doing calculations–hell, my last name means something on the order of “skill with calculation” or “skill with numbers”–I have always been a bit skeptical about what I saw as an attempt to oversimplify the processes that make up human behavior. Some recent reading I’ve done has shed some interesting light on this state of affairs, and I thought it would be useful to hash through it a bit. Importantly, this reading occurred after I had the original thoughts that led to the previous games versus stories post.

Above all, the reading that set me thinking along these lines was Andrew Hodges’ biography of Alan Turing, Alan Turing: The Enigma. Turing, in case you didn’t know, is widely considered the person who created the fundamental ideas at the heart of the digital computer, computer science, and artificial intelligence. He was a British mathematician who provided a renowned answer to Hilbert’s Entscheidungsproblem, and his work always retained a connection to the concrete world of mechanical calculation. He was also at the heart of the British cryptanalysis group that broke the German codes during the war, greatly helping the Allied war effort.

Reading Hodges’ book about Turing, one picks up on the fact that Turing was always very interested in what basic human intelligence was, and he wanted ways to define it and (eventually) test for it. One of the crucial ways that Turing defined intelligence was the ability to strategize and play games. In particular, Turing thought the ability to play chess was a measure of intelligence, and he spent years working on early electronic calculating machines that could play chess against human competitors.

In other words, the earliest computer game ever was chess.

Which is fitting, considering that it is within recent memory that computers finally became good enough to consistently beat skilled human chess masters. From the disputed accomplishment of “Deep Blue” back in 1997, we more recently saw the rise of “Watson” on the game show Jeopardy. With these developments, the talk about machine “intelligence” grew. People impressed with Watson spoke of how it showed that “intelligence is tied to an ability to appropriately find relevant information in a very large memory.”

But is that really intelligence?  Is intelligence really only about retrieving information quickly and calculating a decision based on that information?

I’ll admit that certain kinds of activities require those kinds of skills. Playing a game–and chess and Jeopardy are both games–clearly requires that kind of skillset, but games are a very particular kind of activity. Specifically, they are an activity where you know the rules in advance. Actions are specified in a clearly defined way, which allows you to determine–in a binary, yes/no fashion–whether they apply and whether they succeed.
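To see what I mean by rules that can be checked in a binary fashion, here is a minimal sketch (my own toy example, not anything Turing or IBM actually wrote) of a chess-style rule check in Python. Every question it answers comes out as a clean yes or no:

```python
# A toy rule check: is a rook move legal? The rules are fixed in advance,
# so the answer is always an unambiguous True or False.

def rook_move_is_legal(start, end, occupied):
    """start and end are (row, col) squares; occupied is the set of squares
    holding other pieces that the rook may not pass through."""
    if start == end:
        return False
    same_row = start[0] == end[0]
    same_col = start[1] == end[1]
    if not (same_row or same_col):   # rooks only move along ranks and files
        return False
    # every square strictly between start and end must be empty
    step = ((0, 1 if end[1] > start[1] else -1) if same_row
            else (1 if end[0] > start[0] else -1, 0))
    square = (start[0] + step[0], start[1] + step[1])
    while square != end:
        if square in occupied:
            return False
        square = (square[0] + step[0], square[1] + step[1])
    return True

print(rook_move_is_legal((0, 0), (0, 5), occupied={(0, 3)}))  # False: blocked
print(rook_move_is_legal((0, 0), (0, 2), occupied=set()))     # True
```

The point is not the chess trivia; it is that every rule of this kind can be reduced to a predicate that a calculating machine can evaluate.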

This is a very artificial state of affairs. Most human activities are not purely games where the rules are yes/no kinds of things. Most human activities are messier, are filled with ambiguity, and their rules are dynamic, under constant negotiation and change.

In other words, instead of chess, most of reality is more like Calvinball.

What I’m getting at is that computers such as Deep Blue and Watson were great at doing their specified tasks, but they were not capable of adapting to rule changes–had someone tried to make them–much less of creatively developing new rules on the fly.

The idea of “intelligence as a game,” in other words, ignores major elements of human intelligence–such as creativity and spontaneity–that are essential to what it means to be human. (In fact, many of these elements are apparent not just in humans but in many other life forms, “down to” creatures like octopuses and other mollusks…)

To use an analogy–Deep Blue and Watson were exceptionally good tools.  They were like exquisitely sharp automatic saws that were better at cutting through things than ever before.

But as cool as they were as saws, they were still just saws.  If you asked them to pound something together, they would be failures.  If you asked them to screw something together or to pry something apart or to carefully sand something, they would be failures.

And never mind about asking them to design you the house that you were building.

Just like a bandsaw is not an architect, the calculation of sums and differences is not a sufficient model of intelligence. While I have no doubt that, with a lot of work, computers can be improved to better succeed at something as artificial as a Turing test (which is just a particular kind of game), I do not think we’re going to see truly intelligent machines any time soon, much less ones that are nearly as resource-efficient as humans are.

But what about Perception?  Why is that in the title?

Originally, I had no intention of talking about perception while tackling the issue of calculation and intelligence, but it was again my recent reading that prompted me to address it.

Specifically, this past week I was rereading Richard Dawkins’ The Ancestor’s Tale, and I came across a telling remark by Dawkins in The Platypus’ Tale.  In that episode, Dawkins talks about how the platypus uses a network of electrically sensitive cells along its bill to find food in the dark, murky ponds in which it lives. In a very real sense, it uses a kind of radar to find food.

Now, here is how Dawkins describes this process on page 199:

When any animal, such as a freshwater shrimp which is a typical platypus prey, uses its muscles, weak electric fields are inevitably generated. With sufficiently sensitive apparatus these can be detected, especially in water. Given dedicated computer power to handle data from a large array of such sensors, the source of the electric fields can be calculated. Platypuses don’t, of course, calculate as a mathematician or a computer would. But at some level in their brain the equivalent of a calculation is done, and the result is that they catch their prey.

My question is, why does Dawkins consider such an activity a “calculation” at all? While it is true that human radar requires computers to do a lot of calculations to make sense of the signals involved, that is because we consciously created the system that way. Large arrays of electronic signals are turned into numbers, and then complicated additions and subtractions are done on those numbers to produce a result. The reason it is done this way is that computers are just giant calculators and can only work with numbers! Asking a computer to perceive the changing signals is impossible. Computers don’t and cannot perceive anything, because they lack the capacity to do anything other than calculate.
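For a sense of what that number-crunching looks like, here is a deliberately crude sketch (my own toy model, not how a real radar system or a platypus actually works): sensors at known positions report signal strengths, an assumed falloff law relates strength to distance, and the machine simply grinds through candidate locations with sums, differences, and comparisons until one fits best.

```python
import math

# A toy model: four sensors at known positions report the strength of a
# signal whose intensity (we assume) falls off with the square of distance.
sensors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
readings = [0.89, 0.58, 0.65, 0.47]   # made-up, slightly noisy measurements

def predicted(source, sensor):
    """Strength we would expect at `sensor` if the source sat at `source`."""
    d2 = (source[0] - sensor[0]) ** 2 + (source[1] - sensor[1]) ** 2
    return 1.0 / (1.0 + d2)

# Brute-force grid search: pure arithmetic and comparison, nothing "perceived".
best, best_err = None, math.inf
for i in range(101):
    for j in range(101):
        candidate = (i / 100.0, j / 100.0)
        err = sum((predicted(candidate, s) - r) ** 2
                  for s, r in zip(sensors, readings))
        if err < best_err:
            best, best_err = candidate, err

print("estimated source location:", best)   # roughly (0.2, 0.3) for these readings
```

That is the machine’s whole world: numbers in, numbers out. My point is that it is a leap to assume the platypus is doing anything of the sort.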

It would be like expecting an eye to smell.

So what has happened here?

Well, Dawkins has reversed the story. Specifically:

1. In the real world, platypuses developed the ability to find food creatures by perceiving their electrical fields. How that perception is built/constructed/registered in their brains is still not entirely known.

2. Much later on, however, humans managed to recreate this process using machinery that relied upon the very fast calculation of sums and differences by a particular kind of machine.

3. Humans then discovered that platypuses were achieving the same kind of result as this machine process.

4. Humans then decided that platypuses must be doing the same kind of thing as their machines.

The flaw in this logic should be clear.  Humans have managed to simulate a biological activity using machines, and have then decided that the intrinsic methods used in their simulation must be how the biological process works.

That’s just weak sauce, though.  More importantly, this kind of thing happens all the time and it underlies many of the problems that I find with a lot of discussions about artificial intelligence, free will, determinism, and a number of other things.

This problem is not surprising to me, I must say, because as I’ve written about before (using George Lakoff’s concepts), humans tend to understand their world through motivated applications of metaphorical concepts residing in their brains. In this case, Dawkins has long been a huge computer guy–his use of computers, algorithms, and calculations to understand and explain evolution goes back to the 1970s–and it’s not surprising that he would apply this model here. More broadly, humans have long used machinery–well, at least since the 1600s–to shape their understanding of how the mind works. If you watch cartoons from the 1940s, for example, the little thought bubbles used gears, pulleys, and the like to represent human thought. Today, we use computers and think of memory as a hard drive and intelligence as a kind of processor.

But these models–the more we find out about the brain–aren’t all that accurate. Human memory does not work like computer memory. Memories are not stored as concrete, symbolic descriptions of events sitting in one particular place, indexed by some biological FAT table. Instead, they are constructed things, with the elements of a memory distributed throughout the brain in a fashion where they overlap with similar or associated memories. When they are recalled, they are imaginatively reconstructed. Recognition of this is one of the reasons why eyewitness testimony is not considered as strong as it once was.

To conclude, I would argue that what goes on in human brains is still not all that well understood. There are numerous competing theories, but the fundamental question of whether the brain really works like the computer model that people have applied to it so readily seems to be coming apart at the seams the more we learn about the brain. In addition, fundamental conceptual questions–such as whether perception is a calculation or something ENTIRELY DIFFERENT–need a bit more attention, because it strikes me that perception, rather than calculation, is the more likely candidate for the origin and generator of consciousness and intelligence; calculation seems like a very useful, but later, acquisition of the human mind.

And now–it’s time for a nap.
