To be sure, I relished the idea that alien artifacts would be simply incomprehensible to us, even if useful. If you sent 5.25" floppy disks back to medieval times, knights might decide that they make excellent sun visors. In the same way, our cat Posy thinks that a flash drive dangled from a lanyard is simply a great cat toy.
But it's more interesting to think of sending an iPhone back to Thomas Jefferson in 1800, sans instructions and Wi-Fi. He would be awed by the camera and the calculator. He would be fascinated by games like Angry Birds. But without an Internet connection, most of the other functions would be disabled. He would probably fully grasp only about 5% of what it can do, and the least important functions at that. And he would sense as much just from seeing all the other icons. It would fascinate and unnerve him, precisely because he partly understands the technology.
It's that kind of half-understanding that creates truly great alien technology. There must be an “uncanny valley” of alien tech that creates that kind of chill-down-the-spine reaction. I’m lifting the idea from robotics, where a robot that looks almost, but not exactly, human is unnerving for users. A clearly nonhuman robot falls on one side of the valley; users treat it as a thing. A truly human-looking robot falls on the other side; users treat it as a human, at least at first. It’s the robots in between that bother people. They create a strong sense of weirdness.
The uncanny valley may be bad in robotics, but in alien-oriented SF, it’s exactly what I want as a reader. Alien technology that is basically just ramped-up human technology falls on one side of the valley. For example, the spaceships in Frederik Pohl’s Gateway are just that, really good spaceships. I’m not saying a good story can’t be told with them. Of course it can. Gateway is a superb novel. But it's fundamentally about the protagonist's freaked-out response to traveling inside weird, old ships to unknown destinations. The technology itself doesn't bend our heads.
On the other hand, alien technology that’s incomprehensible falls on the other side of the valley. In Roadside Picnic we don’t know what most of the gadgets are, and we can’t do anything interesting with them except get killed in exotic ways. Apart from a number of wonderfully ooky moments, the novel’s fundamentally a monster story where alien things attack and kill for unknown reasons. Again, that doesn’t mean it isn’t a good story. But the technology is there more to illuminate human psychology than to offer serious ideas about what an alien technology could look like.
It’s the novels that fall in between, into that uncanny valley between ramped-up human tech and the incomprehensible, that really grab me as a reader. I like to read about technologies that are precisely half-understandable. The way Thomas Jefferson would have been both tantalized and baffled by the Mail app on an iPhone.
This kind of thing is very hard to do. Writers are as bound by the mental infrastructure of our civilization as anyone else. When a writer does it, it’s genius.
Here’s an example. In Piers Anthony’s Macroscope, the humans receive an alien message. The thing about it is, it destroys the mind of anyone who is smart enough to understand it. People with IQs above 150 are fascinated by it and keep reading until they…fall over comatose. Some die. People with IQs below 150 can’t understand it, so they’re unharmed. That makes the message tantalizing but lethal. As one of the characters says to another, “We know the hard way: there are certain thoughts an intelligent mind must not think” (p. 49). But as it turns out, the alien senders are not simply malevolent. It turns out there is a reason for the signal, and it’s a good one. The way Anthony spins this out, gradually explaining the message and its uses, is fascinating.
The idea of a mind-destroying concept falls into the uncanny valley. It’s analogous to something we have, but it’s qualitatively, ungraspably better. It echoes Gödel’s incompleteness theorem, which constructs a statement that asserts its own unprovability. (I’m oversimplifying here.) A statement that demonstrates its own unprovability is a fascinating, mindbending thing. It upended mathematics when Gödel published it in 1931. I have never understood it myself in a whole and complete moment of insight, and there’s a reason for that: its self-reference feels fundamentally paradoxical. I can understand the pieces one at a time, but not the pieces put together. It gives me the feeling that if I ever did fully grasp all of it, my mind would be both much smarter, and broken. (Gödel in fact descended into paranoia toward the end of his life.) The point is, we already know of ideas that probably exceed the mental capacity of most human beings. Macroscope invites us to consider the possibility that even higher-octane ideas would break our minds.
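For what it’s worth, the self-referential core can be stated compactly. This is the standard textbook formulation, not anything from Macroscope, and it elides all the machinery of arithmetization:

```latex
% For a consistent, sufficiently strong formal theory T, Gödel
% constructed a sentence G that asserts its own unprovability in T:
T \vdash G \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
% If T is consistent, then T proves neither G nor \neg G,
% even though G is true of the natural numbers.
```

Each piece of that line is ordinary logic; it’s the loop the pieces form that resists being held in mind all at once.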
Could the uncanny valley give us a glimpse at what alien minds could actually be like? Maybe. This is where I bring in Shannon entropy.
Which I don’t understand in any deep mathematical sense. Here is the little I think I do know, which I’ve learned from Robert Sawyer’s discussion of it in WWW: Wake. Mathematics can be used to work out the complexity, or rather the maximum potential complexity, of a message. One such measure is Shannon entropy, or more precisely the order of Shannon entropy: how far back earlier symbols help you predict the next one. To use the simplest example, a sequence of fair coin flips is purely random, with no internal structure, so it has a Shannon entropy value of 1. What that means is, knowing one coin flip gives you no information at all about what the next flip will be. English, on the other hand, is much more predictable. If I say “What did you have for…” you will probably guess that the next word is “breakfast.” That sentence has a Shannon entropy value of 6, because given five words you can guess the sixth. Given a large number of English sentences, a computer can often guess as far as the eighth or ninth word out from probability alone. That means that English has a Shannon entropy value of 8 or 9.
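If you want to see the idea in action rather than take my word for it, here is a minimal sketch (my own illustration, not anything from Sawyer’s book). It estimates the conditional entropy of the next symbol given the previous `order` symbols: for coin flips it stays near 1 bit no matter how much context you have, while for structured text it falls toward zero as the context grows.

```python
from collections import Counter
from math import log2
import random

def conditional_entropy(seq, order):
    """Entropy (in bits) of the next symbol given the previous `order` symbols."""
    contexts = Counter()   # counts of each context of length `order`
    joints = Counter()     # counts of each (context + next symbol)
    for i in range(len(seq) - order):
        ctx = tuple(seq[i:i + order])
        contexts[ctx] += 1
        joints[ctx + (seq[i + order],)] += 1
    total = sum(joints.values())
    h = 0.0
    for joint, n in joints.items():
        p_joint = n / total                      # P(context, next)
        p_cond = n / contexts[joint[:order]]     # P(next | context)
        h -= p_joint * log2(p_cond)
    return h

# Fair coin flips: knowing past flips never helps predict the next one.
random.seed(0)
coin = [random.choice("HT") for _ in range(20000)]
print(conditional_entropy(coin, 0), conditional_entropy(coin, 3))  # both stay near 1 bit

# Structured text: longer contexts make the next character far more predictable.
text = list("what did you have for breakfast " * 200)
print(conditional_entropy(text, 0), conditional_entropy(text, 3))  # drops sharply with context
```

The “Shannon entropy value of 8 or 9” for English corresponds, as far as I understand it, to the order at which adding still more context stops buying you extra predictability.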
What about animals? Dolphin utterances can be predicted about three or four units out, giving dolphin language a Shannon entropy value of 3 to 4. That would seem to suggest that dolphin communication is far from being mere noise, but is not as complex as English.
Could an alien message have a Shannon value higher than 8 or 9, and if so, what would it be like? Laurance Doyle, an astrophysicist at the SETI Institute, offers this example of an English sentence that is grammatically correct, and could be factually correct, but is more complicated than any human mind can manage: “By this time tomorrow he will have had to have been to be going to be finished.” As you can see, there are just too many tenses and nested clauses to keep track of. Even more importantly, this sentence describes a social situation more complex than human life circa 2012 ever presents. For example, it might describe the situation of a time traveler facing a deadline.
A corpus of sentences like these would have a Shannon entropy higher than 9. How high, I don’t know, and Doyle doesn’t venture a guess. But messages like the ones in Macroscope and Stanislaw Lem’s His Master’s Voice have a kind of deeply efficient recursiveness, and I’m guessing that it would give them a Shannon entropy much higher than 9. In other words, maybe this is what a civilization can do once its cognitive capacity can manage language at a Shannon entropy level above 9.
“Cognitive capacity”? To understand a sentence like Doyle’s, you have to have a large working memory. You have to be able to hold a lot of pieces in mind at once. There is evidence that working memory and intelligence are closely connected, though the exact relationship is unclear. Maybe one of the reasons I can’t fully grasp Godel’s Theorem is that my working memory is not large enough. Which may be just another way of saying I’m not smart enough. (Incidentally, if we ever do create sentient artificial intelligence, it might be smarter than us partly because it has a larger working memory.)
Neither Piers Anthony nor Stanislaw Lem mention Shannon entropy in their novels. But if they had, their characters could have measured the Shannon entropy level of their respective Messages. As far as I understand it, that’s just a straightforward numerical calculation. You don’t have to understand a message to calculate its Shannon entropy. And if they had gotten a value of 15 or 20, they could have reasonably thought, Oh, shit.
If we ever get a Message and its Shannon entropy level is way higher than 9, maybe we shouldn’t be surprised if it says—and does—something that not only we don’t understand, we can’t understand. I can sort of grasp this idea, while at the same time recognizing it’s way beyond our capabilities. And that gives me chills down my spine.
Colom, Roberto, et al. (2008). “Working memory and intelligence are highly related constructs, but why?” Intelligence 36: 584-606.
Doyle, Laurance, McCowan, Brenda, Johnston, Simon, and Hanser, Sean (2011). “Information theory, animal communication, and the search for extraterrestrial intelligence.” Acta Astronautica 68, p. 416.
Sawyer, Robert (2009). WWW: Wake. Ace Books, p. 238.
Disclaimer: In complete contradistinction to what I publish, which I research and fact-check with care, in my blog writings I am more interested in being interesting than right. These are the equivalent of first drafts. I toss these off just to have fun. I make no claims to accuracy, factual or otherwise. I may wake up tomorrow and think entirely differently. I may say contradictory things in other blog entries. I may reuse this material in published work (at which time it will be researched and fact-checked thoroughly.) Don’t use anything I say here without checking for yourself. If I am wrong and you can teach me something, I would be grateful if you did. Caveat lector. Have fun.
If you liked this blog entry, you may like my others on SETI: