Building off of yesterday’s post about brains and analogies, I had a thought this morning about life, intelligence, and interstellar contact…
First things first, though…
Obviously, I understand that the biological “meat” that we (humans) and all other animals we know of are made of could be considered a kind of “biological machine.” It functions in a mechanistic manner, and therefore one common metaphor for describing biological entities over the past several hundred years has been to think of them as a kind of machine.
Interestingly enough, this metaphor originated pretty much at the same time as (or a bit after) real machines of any complexity started to be used–namely around the start of the 1600s. It was at this time that working clocks began to flourish, and all the cool automata that adorn them in various town halls and churches started to be built regularly.
More directly, Descartes himself was one of the main pushers of this metaphor. He seriously and consciously formulated the idea that animals, in particular, were just complicated machines/automata following their programming. Obviously, he never owned a cat.
In any case, this metaphor has been a strong one in Western society ever since. Even when people were reacting against it–with notions of vitalism and, more generally, with the whole sublime aesthetic of the Romantic period–this mechanistic metaphor for life remained central to people’s conceptions of the world. It is so well grounded in our modern culture that it is hard to escape from it and see the world without its influence.
Of course, part of the reason why it is such a strong concept for us is that elements of it are clearly “true” to us. For example–the idea that there are regular causes for aspects of living organisms that can be traced to particular mechanical/chemical reactions–all of that is correct and has been very useful in building up our modern world.
On the other hand, other associated elements of the meat=machine metaphor are much more problematic. Most tellingly, one might point to the strong idea–seen most obviously in the “Argument by Design”–that because all machines are designed/built/created by a designer/creator, all lifeforms must therefore also have a designer. While this argument has been demolished in numerous ways (see the “Argument from poor design,” for example…), the meat/machine metaphor has other aspects that are still subtly influential. For example, conceptions of how the brain/mind/intelligence work are often shaped by the tools and technologies that the conceivers had around them.
Now.. in yesterday’s post, I started a thought that the “meatbrain=computer” analogy seemed less realistic/helpful/accurate the more one got into the nitty-gritty of how meat brains actually work. The article in yesterday’s post noted that not only do the processing functions of flesh behave in radically different ways from those of silicon, but the very ideas of where this processing starts and stops–and of a demarcation between structure and processing, roughly hardware and software in computers–totally break down in biological entities, where structure and processing are not at all separate but are one and the same thing.
Anyway–my unfinished thought yesterday was that a lot of sci-fi technologies–such as teleporters, uploading people’s brains to computers, or even the fundamental concept of computer-based self-aware AI–become more problematic the more you know about this stuff. Furthermore, it tends to cast some doubt on whether these things are possible at all, especially without radical technological innovations that function on completely different principles.
Now.. today’s thought came from an interaction on FB, where it occurred to me that one kind of evidence–although by no means compelling–to back up the doubts about the possibility of self-aware computer-based AI is the utter lack of any sign of such machine-based intelligence in the universe around us.
NOW BEAR WITH ME A MINUTE…
Obviously–we have no evidence of any kind of life from outside our planet (although maybe we will find some on Jupiter’s moon Europa…)–and thus trying to talk about the absence of such evidence may seem a bit premature and silly in the extreme.
And, in some ways, I totally concede this. However, if you run with the thought that AI is actually possible, then it seems pretty clear that to get AI to work at all, you are going to need a technological society that is at least somewhat more advanced than ours. From there, it is not that far out of the range of logic to assume that such a society would be able to build technological structures that are far more sustainable/reliable/long-lived than ours are. Along with this point, one could note that any kind of computer-based AI would not suffer from the biological degeneration that affects all current life-forms that we know of (with this one–slightly scary–exception…), and thus its potential lifespan would seem to be limitless.
In other words–if we make all of the standard, and I think reasonable, assumptions about what computer-based AI would be like–it would seem reasonable to expect some computer-based AI created by former intelligent civilizations to have been lumbering about the cosmos for millions, hundreds of millions, or even billions of years. After all, our galaxy is at least 13.2 billion years old, contains 200–400 billion stars, and rough calculations currently suggest somewhere between hundreds of millions and billions of potentially habitable planets within our galaxy alone.
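To give the thought experiment a rough, Drake-equation-style flavor, here is a back-of-envelope sketch in Python. The planet count comes from the paragraph above; every fraction (`frac_develops_life`, `frac_develops_civ`, `frac_builds_ai`) is a purely illustrative assumption I’ve made up for the exercise, not a measured value:

```python
# Back-of-envelope estimate of how many long-lived machine-intelligence
# lineages the galaxy "should" contain, Drake-equation style.
# The planet count is from the text above; every fraction below is an
# ASSUMPTION chosen only for illustration.

habitable_planets = 500e6    # mid-range of "hundreds of millions to billions"
frac_develops_life = 0.01    # ASSUMPTION: 1% of habitable planets develop life
frac_develops_civ = 0.001    # ASSUMPTION: 0.1% of those yield a technological civilization
frac_builds_ai = 0.1         # ASSUMPTION: 10% of civilizations build durable AI

expected_ai_lineages = (habitable_planets * frac_develops_life
                        * frac_develops_civ * frac_builds_ai)

# With these deliberately pessimistic fractions, the estimate still comes
# out in the hundreds of effectively immortal machine intelligences.
print(f"{expected_ai_lineages:.0f} expected machine-intelligence lineages")
```

The point of the sketch is that even with quite stingy assumed fractions, the expected number of long-lived machine intelligences comes out well above zero–which is what makes their apparent total absence interesting.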
Now.. this is all just a thought experiment, of course, but I think it is a useful one for AI enthusiasts to contemplate. If AI is possible–and has all of the superior longevity that people usually assume–then it seems like it would be a lot easier to find than other biological life.. and yet that doesn’t seem to be the case. Obviously, the only real data that will help to settle this question will come when we actually start getting to explore other worlds for life… but if we do, and if things like bacteria turn out to be common–while computer-based nanotech or intelligences aren’t–then I think we will come to see that things like AI may be less possible than we currently imagine..