An old Zen master declared that “enlightenment is easy if you have no preferences.” I’ve heard some Buddhists interpret this statement as a prescription to have no preferences, which of course gives you the loopy command “Prefer to be without preference.” Me, I prefer preference to enlightenment, but I do like to cultivate the right preferences, which is why I like evolution so much. It’s how preferences can improve. Life evolves. Even the simplest living things exhibit evolved preference. A bacterium, for example, will move toward sugar water as though it prefers it to plain water.

Watching a bacterium swim toward sugar, you might mind read it as living a purpose-driven life. It looks aware, maybe even conscious. Indeed, if you define purpose, awareness, and consciousness broadly enough, you could say these qualities are applicable to anything that has preferences. But clearly there’s a difference between the human kind of preference, awareness, purpose, and consciousness and that of a bacterium. How does one begin to sort out the differences?

Or, to link to last week’s Mind Readers essay, we could describe the problem this way: Preference, purpose, awareness, and consciousness are the kinds of behaviors we attribute to minds, but we doubt that bacteria have full-fledged minds. Because they have no nervous system and so few moving parts, we’d say they are more like matter-in-motion machines. So, how elaborate does a machine have to be before it starts to look like a mind?

And, to link to the Mind Readers essay from two weeks ago, on the topic of aboutness, we could say this: Living bodies and machines both have a lot of aboutness built into them. Both are built with parts that work with respect to each other and their environments, which is to say their parts are “about” each other—the bacterium’s motion is about that sugar. So, how much aboutness does it take to give the impression of there being preference, purpose, awareness, and consciousness?

The answer dawning on science over the past 175 years is: remarkably little. Darwin’s insight was that a population of organisms interacting with its local environment (with respect to, or about, its environment) could, by simple trial and error, come to look as though even its simplest members actively prefer to survive, reproduce, and purposefully improve their fitness.
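To see how little machinery that takes, here’s a toy simulation. Everything in it (the bit-string “organisms,” the arbitrary environment they’re matched against, the mutation rate) is an illustrative assumption, not anything Darwin specified:

```python
import random

random.seed(1)

# Toy "organisms": bit strings. Fitness counts matches against an
# arbitrary feature of the "environment" (purely illustrative).
ENVIRONMENT = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(organism):
    return sum(g == e for g, e in zip(organism, ENVIRONMENT))

def mutate(organism, rate=0.05):
    # Copying errors: each bit occasionally flips.
    return [1 - g if random.random() < rate else g for g in organism]

population = [[random.randint(0, 1) for _ in ENVIRONMENT] for _ in range(50)]
print("best fitness at start:", max(map(fitness, population)))

for generation in range(30):
    # Trial and error: better-fit organisms survive and reproduce
    # (with copying errors); no organism intends or prefers anything.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    offspring = [mutate(random.choice(survivors)) for _ in range(25)]
    population = survivors + offspring

print("best fitness at end:  ", max(map(fitness, population)))
```

Run it and the best fitness creeps upward, generation after generation, with nothing anywhere in the loop that could be called a preference.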

The general pattern here is sometimes called simplexity: Complex, seemingly mindful behavior can emerge from simple things following simple rules with respect to each other (about each other). Darwin’s insight into biological simplexity mirrored insights into economic simplexity that were popular in his day: A population of people following the simple rules of self-interest generated capitalism’s “invisible hand,” a complex culturewide “mind” pursuing social progress.

Science continues to get surprises about simplexity. Another big insight came in the 1940s with a field called cybernetics (from the Greek word for “steersman,” like that purposeful self-steering bacterium heading toward the sugar). Cybernetics focuses on the power of feedback loops. Take a thermostat, for example. It has two parts: a sensor and a switch. The sensor senses room temperature. When it senses that the room temperature is up, the switch turns the heater off. When it senses that the room temperature is down, the switch turns the heater on. As a result, it appears as though the thermostat purposefully prefers to keep the room at the chosen temperature.

Feedback loops are a simple system of aboutnesses. The heater turns on and off with respect to (about) the temperature. The temperature goes up and down with respect to (about) the heater.
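That loop of aboutnesses is easy to sketch. Here’s a toy simulation (the target temperature, the drift rates, and all the names are made up for illustration): the heater switches with respect to the temperature, and the temperature drifts with respect to the heater.

```python
TARGET = 20.0       # the chosen temperature (degrees, made up)
temp = 15.0         # the room starts cold
heater_on = False

for minute in range(30):
    # The sensor/switch: the heater changes with respect to (about) the temperature.
    heater_on = temp < TARGET

    # The room: the temperature changes with respect to (about) the heater.
    temp += 0.5 if heater_on else -0.3   # heat gained vs. heat leaking away

print(round(temp, 1))  # hovers near TARGET, as though the loop prefers 20 degrees
```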

Cybernetics was instrumental in the design of computers and many of the other wonders of our engineered world. Their feedback loops are all constructed from little sensor-switch units. The sensor provides the aboutness (a way to sense something outside the switch), and the switch provides the behavioral change that responds to the aboutness. In programming languages, these sensor/switches take the form of if/then statements:

If (this is sensed), then (switch that).
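One hypothetical way to spell that out (the names SensorSwitch, sense, act, and tick are mine, not any standard library’s) is as a tiny reusable unit, from which the thermostat above can be rebuilt:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SensorSwitch:
    sense: Callable[[], bool]   # the aboutness: senses something outside the switch
    act: Callable[[], None]     # the behavioral change it triggers

    def tick(self) -> None:
        if self.sense():        # if (this is sensed),
            self.act()          # then (switch that).

# The thermostat rebuilt from two such units:
room = {"temp": 15.0, "heater_on": False}
units = [
    SensorSwitch(sense=lambda: room["temp"] < 20.0,
                 act=lambda: room.update(heater_on=True)),
    SensorSwitch(sense=lambda: room["temp"] >= 20.0,
                 act=lambda: room.update(heater_on=False)),
]
for unit in units:
    unit.tick()
print(room)  # the cold room has switched its heater on
```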

These little sensor/switch units are sometimes referred to as being “fast, cheap, and out of control”—out of control, because when you create a population of them, the population seems to have a mind of its own.

The latest addition to such simplexifying fun and games is called cellular automata. This one’s very easy to understand, especially if you watch it in action in a model called Conway’s Game of Life. It’s a sort of checkerboard on which the lighted squares switch on and off depending on what they sense in their eight neighboring squares. A square stays on as long as two or three of its neighbors are on; fewer than two or more than three, and it turns off. An off square stays off unless exactly three of its neighbors are turned on. The squares all check their neighbors simultaneously, so it’s check/change/check/change, generation after generation. Emerging from these simple rules of local aboutness, applied over and over to this population of checkerboard squares, you get a dizzying array of complex, seemingly alive patterns.
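Those rules fit in a few lines. Here’s a minimal sketch, storing the board as a set of on-square coordinates (one common representation; the starting “glider” is a famous pattern, not part of the rules):

```python
from itertools import product

def step(live):
    """One generation: every square checks its eight neighbors simultaneously."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                neighbor = (x + dx, y + dy)
                counts[neighbor] = counts.get(neighbor, 0) + 1
    # On squares stay on with 2 or 3 on-neighbors; off squares turn on with exactly 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider," one famous pattern that crawls across the board.
board = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    board = step(board)
print(sorted(board))  # the same glider shape, shifted one square diagonally
```

Nothing in step mentions gliders; the crawling shape just emerges from the local rules.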

One thing you’ll get from the checkerboard is our old friend (and enemy) the infinite loop. You start out with a seemingly random pattern of on and off squares. The pattern evolves over generations, but then, apparently at random, it falls into a repeating sequence of patterns: a rut, or you could say a groove, if you happen to think the rut is pretty. It oscillates, looping repeatedly between a few patterns with a seeming mindlessness, a mechanical, inhuman patience.
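The smallest of these ruts is the three-square “blinker,” which flips between a row and a column forever. Here’s a self-contained sketch (same rules as above) that iterates until it catches the board repeating itself:

```python
from itertools import product

def step(live):
    # Same Game of Life rules as in the sketch above.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                neighbor = (x + dx, y + dy)
                counts[neighbor] = counts.get(neighbor, 0) + 1
    return frozenset(cell for cell, n in counts.items()
                     if n == 3 or (n == 2 and cell in live))

# The "blinker": three squares in a row, flipping to a column and back.
board = frozenset({(0, 1), (1, 1), (2, 1)})
seen = {board: 0}
for generation in range(1, 10):
    board = step(board)
    if board in seen:
        print(f"generation {generation} repeats generation {seen[board]}: a rut")
        break
    seen[board] = generation
```

It reports that generation 2 repeats generation 0: the mechanical, inhuman patience of a period-two loop.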

Before leaving the theory of simplexity, it’s worth mentioning what it doesn’t prove about the origins of mind. Every time science discovers some new mechanism of simplexity—a new way that a few simple, powerful rules give rise to complex behavior—many people jump to the conclusion that we’ve solved the consciousness problem: The mind is just Darwinian, just cybernetic feedback loops, just cellular automata. The mind is just a machine. They forget that, no matter how simple the rules governing behavior, there still are rules, and those rules had to come from somewhere. Thermostats, for example, are very simple, but they’re programmed by complex humans to serve human purposes.

Simplexity is fascinating and encouraging. It shows that life can emerge from very simple beginnings. But even simple beginnings must be explained. How physics and chemistry got to where they could produce entities that follow rules, and particularly evolvable rules, is not a question that simplexity answers.