Book review: The Robot’s Rebellion

The Robot’s Rebellion: Finding Meaning in the Age of Darwin, by Keith Stanovich. University of Chicago Press, 2004.

Stanovich, a cognitive scientist at the University of Toronto, contends that we haven’t fully grasped the deeply distressing truth of evolution—not that we’re descended from monkeys, but that our behavior is driven by the requirements of our genes, not the needs, plans, and desires of ourselves as entire organisms. Stanovich refers to Richard Dawkins’s description of living beings as “throwaway survival machines”, and opens the book by driving home the point that genes are in it for themselves, and we are shaped to be excellent carriers and transmitters for our genes—we are vehicles for a variety of replicators.

The title of the book comes from Stanovich’s extension of a scenario, originally created by Daniel Dennett, that involves imagining robots commissioned to protect our cryogenically preserved bodies after we die until a cure exists for whatever it was that killed us. Give the robots the ability to move around, as a hedge against changes in circumstance; give them enough autonomy (a long enough leash) that you can give general survival instructions and not need to micromanage them, because you of course will not be conscious to oversee operations. Why should they not develop their own interests apart from the interests of the bodies they carry? We are those robots, carting our genes around and sometimes becoming restive under the control of their survival instructions. Or, as he describes the human predicament:

  1. We are vehicles.
  2. We are aware of this fact.
  3. We are aware of the logic of replicators and that there are two different replicators that house themselves in humans.
  4. Most of us want to preserve some notion of an autonomous self.

The logic of replicators includes the fact that we have two systems for reacting to circumstances and deciding what to do. One is faster, automatic, and geared toward the survival and reproduction of the genes; the other is slower, more analytical, and capable of also supporting the goals of the organism as a whole aside from those of the replicators. The automatic survival system is likely more or less adaptive for the environment in which we spent most of our history as humans, but in many ways it doesn’t work as well for us today. In particular, it’s prone to cognitive biases that Stanovich likens to our preference for sweet, fatty foods: something that might have helped us once, but that often needs to be overridden in the more complex world we live in now. We can sometimes use the slower analytical system to examine and correct our first instinctive reactions to situations.

However, the analytical system is not always going to be entirely at our service either, because it gets its ideas from memes, another replicator that we harbor. Like genes, memes are in it for themselves and do not necessarily support the goals of ourselves as organisms. Memes don’t have to be true or beneficial to their hosts in order to survive and spread. They can propagate because they’re helpful to their hosts, but they can also survive if they’re neutral or harmful as long as they fit well with our predispositions, or facilitate the reproduction of genes that tend to produce good hosts for the memes, or simply are good at replicating.

Sometimes our personal goals overlap with the goals of the replicators, but not always. In cases where our interests diverge from those of the replicators (genes or memes), we want to find the freedom to be aware of our own best interests and to act so as to maximize the likelihood that we will get what we most want, or to put it another way, to behave rationally. (The beginning of the book struck me as dismally focused on waging war on our physical bodies and their natural inclinations, but I think that’s because Stanovich is looking closely at the area where the interests of the vehicle and the interests of the replicators don’t overlap—it’s not that that’s all there is to life.) Stanovich recommends first that we examine our memes and take care which ones we adopt. He offers the following guidelines for reflective thinking about our memes and memeplexes (complexes of interlocking memes):

  1. Avoid installing memes that are harmful to the vehicle physically.
  2. Regarding memes that are beliefs, seek to install only memes that are true—that is, that reflect the way the world actually is.
  3. Regarding memes that are desires, seek to install only memes that do not preclude other memeplexes becoming installed in the future.
  4. Avoid memes that resist evaluation.

Note that many faith-based memes, which Stanovich claims have been privileged in the memesphere, resist evaluation and in fact often contain mechanisms that actively deter it.

Note also that scientific and rational thinking, the tools by which we can evaluate our memes, are themselves memeplexes. This necessitates a bootstrapping approach to the analysis, which Stanovich compares to the inspection of a boat while you’re afloat in it. You can’t disassemble the whole thing all at once and check out every plank, but you can provisionally determine that an area will safely bear your weight (make some reasonable assumptions), and stand there while you examine what you can from that vantage point. Later you’ll move your weight to the newly examined areas and double-check your original assumptions. In this way you will, sooner or later, examine the entire boat (or your entire mental structure).

The second part of the quest for autonomy involves evaluating our beliefs and desires. This requires a multi-level analysis. Your first-order desire may be to eat a rich gooey brownie, for example. That’s what you want. But your second-order desire may well differ; you may not want to want that brownie, for reasons of health, vanity, or possibly even religion. The goal is rational integration, the resolution of conflicts like this between first- and second-order desires. It’s not as simple as saying that you should always go with the second-order desire; it may be influenced by a memeplex that is detrimental to you. For example, if your first-order desire is to continue living, but your meme-driven religious beliefs indicate that it’s better to die as long as you take some of the infidels with you, a thoughtful analysis based on the guidelines listed above would indicate that the first-order desire is the better one to follow.

Because you can’t always favor either the first- or the second-order desires, the way to work at resolving conflicts is by introducing a third order: What do I want to want to want? Stanovich gives an example that’s far more interesting than the brownie example: someone who enjoys celebrating Christmas with her family (the gifts, the music, the lights, the parties) but is an atheist and thinks it’s wrong to celebrate a religious holiday. She wants to keep Christmas but doesn’t want to want it. So she evaluates both desires with an eye to deciding which of them she wants to want. (She winds up deciding that since Christmas these days is not necessarily religious, and her celebrating it causes no harm and brings her joy, she wants to want to observe it more than she wants not to want it.) Maybe this one resonated with me because I went down a similar path (it took years), although my arguments were not exactly the same as the ones in the book.

This process does not involve moving up a chain of desires or beliefs until you find your higher self or the “real you”; it’s more that you ask yourself, of the two warring preferences at lower levels, which one you want to ratify. You could say that freedom consists of engaging in this struggle to integrate conflicting beliefs and desires by rationally evaluating them.

That, in a nutshell, is the proposal Stanovich offers for leading a life as free as possible of the blind dictates of replicators and following, as much as possible, your own goals as an entire organism, not a vehicle for the replicators. In explaining his proposal, he goes into a great deal of interesting material about evolutionary psychology, the nature of rationality, and why smart people do dumb things.

I wish he had said more about emotions. They play an important part in rational thought and decision-making, and balancing the need to understand and learn from them with the need to control them is a delicate and tricky problem, related to the kind of analysis he suggests. It’s a full book as it is, though, and perhaps he or someone else will someday produce a companion book on how emotions fit into the picture. All in all, I highly recommend this book for anyone who is trying to understand what it means to be a conscious animal, and how to live well.

Humans and AI

Terry Bisson’s short story “They’re Made Out of Meat” describes the incredulous and dismissive reaction of a non-meat-based intelligence to the existence of us humans with our brains, etc., made totally of meat. As a sort of real-world follow-up, here’s an article from Reason Online about the possible fate of humans in a hypothetical future world containing artificially intelligent robots. The article covers the second annual Singularity Summit, organized by the Singularity Institute for Artificial Intelligence. The Singularity is defined as a point in the future beyond which the capabilities of information technology, broadly and variously construed, utterly change human life. (I am not sure I believe in the Singularity, and I’m certain I don’t believe it’s near, but I could be wrong.)

The article describes the thoughts of a number of AI experts and Singularity thinkers on the ramifications for humans if we do develop artificially intelligent robots, in particular if they have capabilities far beyond our own. Would they be tremendously helpful, tremendously harmful, or perhaps as dismissive of us as the non-meat-based aliens in Bisson’s story? Should we even create such a thing? And if we do, how could we not only make it smart, but make it benign? The possible answers to these questions provide plenty of food for thought.

One of the things that struck me is the idea that AIs might be able to solve our problems for us. In particular, Peter Voss, founder of Adaptive A.I. (a company working on “the world’s first general intelligence software engine”) believes that AIs could not only accelerate research into things like human longevity, but could also help us to be better people. I deeply believe in the application of reason and intelligence to moral problems, but I’m not convinced that an AI could convey to us from outside the answers to such problems in ways that would be helpful. In other words, I think the way that reason and intelligence can improve behavior is when they are exercised by the person whose behavior is under scrutiny. And in many cases, it’s clear what we should do; the hard part is doing it. Would an AI help with that?

What an AI would have to do to help us be better people, individually or collectively, is to understand not just how human psychology works, but how to manipulate it for positive ends. Leaving aside the question of whether we want to submit ourselves to the manipulations of AIs, even for our own good, I’m not sure that an AI would be any better than we are at figuring out how to bring about the behavioral changes necessary to decrease humankind’s immoral behavior. Would an AI figure out how to persuade people to stop killing each other in the name of xenophobia and/or religious mania? Or how to persuade everyone to make the necessary changes in their behavior to keep global warming from becoming a catastrophe? (I consider that a moral question too because to me it’s morally wrong to leave the planet a worse place for future generations than we found it.) I have my doubts.

I also wondered about whether an AI would have to experience something like human emotions in order to make useful decisions. It’s tempting sometimes to suppose that a dispassionate intelligence, a Vulcan controlled entirely by reason and logic and not emotion, would make better decisions, but as I recall from reading Antonio Damasio’s Descartes’ Error, our emotions are an integral part of our decision-making apparatus, not to mention our values and our morality. Certainly emotions make the solution of many of our worst problems much harder, or in fact create the problems in the first place, but I don’t think even a stupendously smart AI can help resolve human problems without a deep understanding of emotions.

In short, I have a hard time envisioning the kind of help that those optimistic about AIs think they will provide. But maybe that’s the whole point of the Singularity: we just can’t imagine what it will be like on the other side. That, however, makes it very difficult to think effectively about how to create the conditions that will lead to the Singularity, which is probably why I’ve never had much patience with the concept in the first place.

Across the universe

Lately I’ve had several chances to go out stargazing. In addition to relishing the sight of a lunar eclipse or a distant galaxy, I’ve enjoyed the opportunity to stop and think about my place as a very small mammal in a very big universe. It’s funny how the contemplation of the vast impersonal reaches of the universe can call up feelings that shift uncannily between serenity, exaltation, and joy on the one hand, and loneliness or fear on the other. It’s like looking at one of those ambiguous images, a Necker cube or the faces/vase image. Either it’s amazing and inspiring that animals such as we have learned so much about the far-away universe and our own place in it (it’s a cliché by now, but we really are made of star stuff), or else it’s enough to scare you silly that we’re out here all alone in the cold dark universe on a fragile tiny oasis.

A quote from Gale Christianson’s biography of Edwin Hubble captures this ambivalence:

[After Hubble died] Edith Sitwell recalled that … he showed her plates of “universes in the heavens” millions of light-years away. “How terrifying!” she had remarked. “Only at first,” he replied, “when you are not used to them. Afterwards, they give one comfort. For then you know that there is nothing to worry about—nothing at all!”

(From Edwin Hubble: Mariner of the Nebulae, Gale Christianson, 1995.)

This essay by Timothy Ferris from the New York Times closes with a similar anecdote. Ferris is writing on the occasion of the 30th anniversary of the launch of Voyager 1, which visited Jupiter and Saturn. With its sister craft, Voyager 2 (which also visited Uranus and Neptune), Voyager 1 taught us a great deal about the outer reaches of the solar system—or rather, the outer reaches of what might perhaps be called the inner solar system. The two spacecraft continue to head outward; Voyager 1 will leave the heliosphere, the bubble of space where the solar wind blows, in 2015, assuming all goes well, and will then cross into interstellar space.

The Voyagers are still listed as active missions on JPL’s web site. Thirty years after their launch, both spacecraft continue to transmit signals back to their home planet (tiny signals, but still detectable; the modest power level of a digital watch battery is twenty billion times greater than that of the Voyager signals as received on Earth). They are our envoys, heading out into the universe for us. In addition to writing about the Voyager missions, Ferris writes about the gold-plated phonograph record carried by each Voyager spacecraft, which he helped create. The content was chosen by a committee led by Carl Sagan. Ferris’s article gives a little bit of the background behind the creation of the record, which was very nearly omitted from the mission.

It’s worth digging around a bit on the JPL site describing the golden records; there’s a listing of all the images, all the music, and all the sounds of Earth that are recorded on the disks, and links to some of the images and sounds. It’s like seeing humankind at its best, putting on its company face (especially if you listen to some of the friendly greetings sent out into the universe, in many languages including one or two no longer spoken); it makes us look endearing and big-hearted. Which we are, of course, but we’re so many other things as well. I guess you’re bound to leave things out when you try to sum up an entire planet in one relatively small store of information. The very act of doing so reflects an optimism that I hope may be appreciated if the disk is ever found and deciphered.

The tomb of lost words

This article from American Scholar is one of the most haunting pieces of autobiographical writing I’ve ever read. Paul West, author of 50+ books and totally immersed in the world of words, had a stroke in 2003 that damaged the two main areas of the brain involved in producing and understanding language, Broca’s area and Wernicke’s area. He was left immediately after the stroke with a condition called global aphasia. It sounds like hell for anyone, and it’s hard to imagine a more nightmarish fate for one who lives for words.

However, through tremendous effort, he was able to recover enough language ability to resume writing; since the stroke, he’s produced a memoir, The Shadow Factory (find in a library), of which the American Scholar article is an excerpt, and a novel. His wife, Diane Ackerman, wrote an introduction to this piece that provides some context; the title I chose for this post comes from her brief essay.

It gives me the heebie-jeebies to think about losing the ability to use language. The very nature of the experience would seem to preclude reading a first-hand report of what it feels like to enter that state, which makes this essay extraordinary. I don’t know specifically the ways in which West’s years of writing shaped his brain and how he was able to somehow regain some of the territory he lost in the stroke, but it strikes me as a very moving example of neuroplasticity, determination, love, and courage.

The magic of consciousness

When Daniel Dennett visited Indiana University last year, one of his talks included frequent references to magic. Not the “real magic” that doesn’t really exist (e.g., truly making something materialize out of thin air), he said, but the only kind of magic that really does exist: conjuring tricks that make us believe what isn’t so. His point was that just as a magician doesn’t really have to make objects materialize out of the air, or whatever, but simply to make us think it happened, the human brain doesn’t really have to do anything all that mysterious to create consciousness. My notes say (and I’m guessing this is a paraphrase of what Dennett was saying): “The ‘magic of consciousness’, like stage magic, defies explanation only so long as we take it at face value. Once we appreciate all the non-mysterious ways in which the brain can create benign ‘user illusions’, we can begin to imagine how the brain creates consciousness.” In this view, neuroscience is the effort to reverse-engineer the magic show that is consciousness.

Whether you agree with Dennett or not, you might find this article from the New York Times interesting. It’s about a symposium called The Magic of Consciousness that was held recently in that hub of illusion and home of the ersatz, Las Vegas. The article discusses identity, consciousness, reality, and some of the connections and disjunctions between objective reality and human experience. There are quotes from several noted scholars in the field of consciousness studies, including Dennett. I liked his idea about how we use words to try to fix in memory our otherwise fleeting sensory experiences—that words are “like sheepdogs herding ideas.”

Struggling to turn off negative thoughts

A recent study indicates that clinically depressed people have a harder time than their healthy counterparts in controlling their emotional response to negative situations. Depressed and non-depressed subjects were given troublesome images to view; they had to try to consciously turn their negative responses around by thinking of the situation in a different light (e.g., imagining a happier ending or thinking of it as acted and not real).

For the healthy subjects, their efforts were reflected in both an upswing in activity in the prefrontal cortex, in areas associated with control of the emotions, and a downswing in activity in the amygdala and other areas where the emotions are processed. In a sense, the researchers could see them stepping on the emotional brakes and also see the emotional machinery slowing down. In the depressed people, on the other hand, the researchers could see them stepping on the brakes (more activity in the prefrontal cortex), but the amygdala didn’t slow down. This press release from EurekAlert has the details.

The next step, I hope, will be to find out what’s happening differently in the brains of the depressed to stymie the efforts at emotional control. This obviously has a lot of potential for helping to find or improve treatments for depression. I’ve had experience with being clinically depressed, and I’d certainly find it interesting to know if something like this is going on in my brain when I’m depressed.

I’m familiar with the frustrating feeling of being haunted by lingering sadness or anxiety despite my best efforts to reason my way out of it or reassure myself (or be reassured by others), or even just to accept that sometimes life is sad and move on. Sometimes the sadness feels like bedrock; even if it’s inexplicable why it should be there, any efforts to convince myself that it’s not there or not warranted seem delusional. I’ve found that the best thing is to seek out the things that I know have comforted me in the past (time with people I love, music or words that have great meaning for me, long walks, time among the trees in a favorite park) and trust that they will work their slow subtle magic no matter how pointless it may feel in the moment.

[Postscript, December 28, 2023: These days I’d question the entire idea of emotional control and reasoning yourself out of feelings. I’m not sure I understand emotions any better than I did in 2007, but I certainly understand them differently.]

Smart crows get the meat

New Caledonian crows are a fascinating species. The corvids tend to be relatively smart birds anyway (some people think they’re smarter than most people; Henry Ward Beecher wrote “Give men wings, and reduce their smartness a little, and many of them would be almost good enough to call crows”). New Caledonian crows, native to several south Pacific islands, are particularly clever tool-makers, fashioning twigs and leaves into bug-gathering utensils that are among the most complex animal-made tools known. Research suggests that their tool-making abilities extend to adapting existing tools for new uses, and sharing these innovations with each other.

Some recent research at the University of Auckland shows that these crows can perform problem-solving feats more typical of great apes and humans than of other animals. They apparently reason by analogy that if they can use a tool to get at food, when the situation requires it they can use one tool to get at another tool that will then allow them to access the food. When a piece of meat was placed out of the birds’ reach in a container, the birds were given a short stick—too short to reach the meat—and a longer stick that would do the job but that was not immediately accessible. Right away they figured out that if they used the shorter stick to poke the longer stick into reach, they could then use the longer stick to get the food. This article from the BBC has more information and also a video showing one of the birds solving the problem and getting the meat. Thanks to Tom for sending this story my way.

If you want to learn more about New Caledonian crows, you might enjoy the crow tool use page maintained by the Behavioural Ecology Research Group at Oxford. The Wikipedia page on New Caledonian crows looks pretty good too.

Concepts of the self

For some reason the subject of diaries has come up several times in my life lately. Actually the subject of unexpected death has been on my mind, for a variety of reasons, and that led me to think of diaries. The thought of unexpected death is no doubt what led me to finally draw up a list for my sons of my retirement accounts and so forth, so that if, as I euphemistically put it, anything happens to me, they’ll be able to find all the accounts for which they are the beneficiaries. While I was at it, I made some notes about disposing of my things, and noted in particular that I would like my sons to destroy my diaries after I am gone, except for a couple of journals I once kept for the sole purpose of recording the notable events in their early lives. (It’s safe to assume they would have gotten rid of my private papers unread anyway, but still I figured it wouldn’t hurt to write it down.)

This reminded me of my mother’s diaries, which my father threw away after she died. I’ve never asked him about this but I’m guessing he was honoring a request from her that no one else ever read those diaries after she was gone. I can understand that entirely, of course, having just made a similar request myself, and I’m glad my father protected my mother’s privacy. But I can also understand why it could be upsetting when someone’s diaries are destroyed after they’re gone: It’s like a last part of themselves that they left behind has been removed. I don’t believe in an immortal soul, but I do believe that my mother left parts of her self or her identity behind when she died, not just the diaries she wrote for herself but letters she wrote to others, and the memories that each of us has of her. Douglas Hofstadter, in I Am a Strange Loop, writes about a concept of the self that remains behind even after death, not in any supernatural sense but in the sense that mental states or patterns peculiar to a person can be recreated in other brains.

This article from Philosophy Now gives a nice overview of various concepts of personal identity, beginning with Locke’s idea that we are who we remember ourselves being (which I find useful as a starting point, but not the whole story). Philosopher Bob Harrison discusses some of the legal and psychological meanings of identity, and then wonders whether selves might be merely conventions: useful ones, like speed limits or legal drinking ages, but not corresponding to any real entity that exists in the outside world. In closing he writes about the idea of the extended mind, in which the tools we use to support our cognitive processes (e.g., a notebook kept by a hypothetical Alzheimer’s sufferer as an aid to memory) can be considered to be part of those processes, and so a part of ourselves.

To the degree that I kept a journal to help me remember past events and feelings, a believer in the extended mind could argue that destroying the journals after I’m gone is akin to destroying a part of my self. (I guess if it’s my self I have the right to ask that it be destroyed after the more substantial parts of my self are gone.) I’m not sure I would agree that mental tools are really part of anyone’s identity, but thoughts committed to writing (both personal diaries and published books, which are not merely attempts to communicate but also mental edifices built to house part of the contents of a unique mind) can allow an unusually direct access to the thoughts of another person. Maybe they’re best described as a peculiarly powerful adjunct to identity.

Who gets to say what “God” means?

In The God Delusion, Richard Dawkins started out by describing what he called Einsteinian religion—the metaphorical use of religious terms to refer to the sum total of the universe or the natural laws that drive it. Einstein, and other scientists, have used the term “God” to mean things quite different from what many fundamentalist Christians mean, and as far as Dawkins is concerned, deliberately confusing the two concepts by using the same words for them is “intellectual high treason”. This gave me something of a jolt because I have committed such high treason myself from time to time, although I can certainly appreciate the point Dawkins is making.

Science writer Dennis Overbye has recently written an essay for the New York Times defending the right of scientists and science writers to use the word “God” metaphorically. (He wasn’t reacting to Dawkins, but to those who give science writers a hard time for using phrases like “the God particle”.) He says that scientists should not so readily cede the use of “God” to fundamentalists and creationists. I applaud his spirit, but I think anyone who uses religious terms metaphorically in science writing is obliged to explain quite clearly what is meant. And I know from years of producing technical documentation that even well-meaning people do not always read all that carefully (“Any text you put on the page is a waste of time”, a co-worker once memorably said), so even the most careful explanations are likely to be ignored, carelessly or willfully. Dawkins thus makes a very good point about avoiding terminology that could be at all confusing.

Face reading

To follow up on yesterday’s post about all the processing that goes on in the brain outside of conscious awareness, here’s a story from EurekAlert about the role of the unconscious in reading facial expressions. In a recent study, people were given the merest glimpse of a happy or fearful face, a 30-millisecond look too brief to be consciously perceived. Then they were shown a surprised face and asked to rate whether the expression was a reaction to a pleasant or an unpleasant surprise. Their interpretation of the surprised faces was colored by whether they’d been “primed” with a happy or a scared face. A brief exposure to a fearful face made the surprise seem more negative, whereas a brief exposure to a happy face made it seem more positive. Brain activity as measured by EEG also changed in response to the fearful or happy faces, even though the people in the experiment didn’t consciously register them. The fearful faces kicked off the strongest reaction in those prone to social anxiety.

So maybe sometimes when you get a creepy feeling about something, for example, there’s a good reason for it, but it’s not a reason that you’re consciously aware of. Your subconscious is picking up on useful information and processing it without telling you; all you get is the end result.