I’ve just read two new and glorious books on cognitive science, psychology’s hottest new multi-disciplinary field, founded to explore parallels between computers and human minds. “The Information: A History, a Theory, a Flood” by James Gleick concentrates on information theory, which provides cognitive science’s core concept: a definition of a unit or atom of information.

According to information theory, all information is composed of binary bits: basically the setting of a toggle switch’s position, a collapse of two possibilities, on and off, into one outcome, either on or off. Think of Twenty Questions: by determining the yes/no answers to 20 questions, you can narrow your guesses from “I haven’t a clue what you’re thinking of; it could be anything” down to “You’re thinking of my cat, aren’t you?”
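To make the arithmetic concrete (my own sketch, not Gleick’s): each yes/no answer is one bit, and each bit halves the remaining possibilities, so 20 answers suffice to single out one thing among roughly a million candidates.

```python
import math

# Each yes/no answer is one bit: it halves the space of possibilities.
# Twenty answers can therefore single out one item among 2**20 candidates.
candidates = 2 ** 20  # 1,048,576 possible "things you might be thinking of"
bits_needed = math.ceil(math.log2(candidates))
print(candidates)     # 1048576
print(bits_needed)    # 20

# Narrowing step by step: after each answer, half the candidates remain.
remaining = candidates
for question in range(20):
    remaining //= 2
print(remaining)      # 1 -- "You're thinking of my cat, aren't you?"
```

Run the other direction, this is why 20 questions (not 19 or 21) is the traditional number: a million everyday objects fit comfortably inside 2^20.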

Gleick gives us a rich account of the history of information use and information theory, concluding with both an obvious and a weird thesis. The obvious one is that we are uniquely flooded with information today. The weird one is that the universe has always been flooded with exactly the same amount of information. Since all physical behavior collapses what could possibly happen into what actually did happen, it all could be translated into binary bits. Gleick, like many in cognitive science, assumes the entire universe is therefore information, down to the tiniest sub-atomic particle.

I loved the book, but I think he’s wrong about everything being information. Anything can become significant in an informational relationship for a sentient being for whom it means something about something else. But the fact that anything could become a participant in an informational relationship does not make everything information. If it did, we’d solve one of the greatest questions ever: Where did information come from? The answer would be that it was always already everywhere. But the answer would be vacuous, not explaining why, for example, there’s so much more information in you than there is in a cloud.

Brian Christian’s “The Most Human Human” is a masterpiece of practical philosophical science after my own heart and mind. The book is centered upon Christian’s participation in the 2009 Turing Test contest, a contest in which, based on five-minute informal instant message conversations, judges must decide whether they are IMing with real humans or computer simulations of humans. A prize goes to the most human-like computer but also to the most human human, the human best able to convince the judges of his or her humanness.

Brian competes as a human, and the book is rich in splendid reflections on how to be as human as possible, not just for the test but in all of your life. It is certainly one of my favorite books of the year, filled with insights I’ll use and muse about for a long time to come. Still, Christian, steeped in cognitive science, assumes that computers programmed by humans for humans are themselves intelligent, and that’s another mistake made by cognitive scientists.

Computers are, as information theory makes clear, just banks of toggle switches that can be programmed any way that suits us. We put meaning in and take meaning out, but by themselves, no matter how many switches they possess, they do not have a capability minds have. I’ll elaborate on the difference in another article, but for now I’ll say that computers by themselves don’t have a capacity to evolve, to undergo what sometimes gets called the “blind watchmaker” effect of evolution, wherein by blind trial and error over generations, lineages of organisms acquire novel traits that make them (us) as intricate as watches.

Unlike living things, computers are engineered. Full-blown minds create these mindless artifacts. I’ve coined the term “Amnesiac Watchmaker” to pinpoint a fundamental mistake in cognitive science. The Amnesiac Watchmaker engineers an intricate mind-simulating device like a computer. Forgetting that he built it, he stares down in amazement and says, “Wow, it’s a completely self-made mind!” The ghost in the machine is merely the ghost of the person who engineered it and then forgot that he did. This morning, thinking about how to expose this wrench-in-the-works flaw in cognitive science’s assumptions, I wrote the following:

News Item: This editorial, nearly a century old and recently discovered, is causing a stir among information theorists and cognitive scientists:

Are books becoming as intelligent as humans?

Artimus Gompers, 1917

Recently I had an encounter with a book that gave me so much the impression of listening to a human that I placed the book down on the table before me and stared. A book is made of paper, ink, and glue. And yet there was no denying that this book was as intelligent as, or even more intelligent than, most of the fellows that I know.

Now, I’m no fool. Of course if all that a book can do is talk to me, then, by gum, maybe that book would just be a record of a person’s thoughts, a mere extension of a human author’s intelligence.

But no, this book was much more than that. I can have real conversations with it. It’s the 1916 Old Farmer’s Almanac. I asked it to name the capital of Spain and it answered me. I asked it the annual rainfall in California for the last ten years and it answered that also.

I asked it to tell me who would win the Great War. It couldn’t answer that, but then even the most intelligent people I know can’t answer every question.

Still, it has the multiplication tables up to 25, and I found that it could tell me some new things. I needed to know how much it would cost me to buy a dozen zippers for my dry goods store at 20 cents apiece, and it told me. Zippers were invented this year, and yet the almanac told me the answer just like that! I was mightily impressed. I thought to myself, what is this world coming to?

I’m thinking about the future of books and how they might become even more intelligent. I recognize that when I ask my Almanac a question, I still must thumb through it to get the answer. But what if there were some levers and gears that would do the thumbing for me? We’d set them up to open pages in the almanac so that a different lever would take the reader to a different page. We’d label all of the levers with the topics addressed on each page. Press the lever marked “multiplication table” and the gears would take you right to page 221. Then instead of having to thumb through, I would just pull the lever and the page would come right up. This would be a much more human and intelligent tome!

When I was talking to my friend Hiram about all of this he frowned. He said, “That is not intelligence. Intelligence would be if it could combine answers, for example if it could tell you how many inches of rain there would be in California after 20 years.”

Hiram is right. That would make a book even more intelligent. So we got talking about how books could do that. What we first settled upon was quite elaborate, involving considerably more levers for combining the rainfall number on page 347 with the multiplication table on page 221.

But then we decided upon an easier solution with fewer levers and more paper. Since the answer would be the same whether the book calculated it when we asked or just calculated it in advance, we could just make a much bigger book, a book containing every answer to any question we might ask it. We would need a lot more page-finding levers, but then that book would clearly be more intelligent than a human, even though it was still mere paper, ink, and a little horse glue.

Hiram and I agreed that there may come a time when there will be such a book. A book you can talk to, you asking questions and it telling you things from its vast reserves of intelligence.

I worry then about whether there will be any use for humans once books are smarter than we are.