Terry Bisson’s short story “They’re Made Out of Meat” describes the incredulous and dismissive reaction of a non-meat-based intelligence to the existence of us humans with our brains, etc., made totally of meat. As a sort of a real-world follow-up, here’s an article from Reason Online about the possible fate of humans in a hypothetical future world containing artificially intelligent robots. The article covers the second annual Singularity Summit, organized by the Singularity Institute for Artificial Intelligence. The Singularity is defined as a point in the future beyond which the capabilities of information technology, broadly and variously defined, utterly change human life. (I am not sure I believe in the Singularity, and I’m certain I don’t believe it’s near, but I could be wrong.)
The article describes the thoughts of a number of AI experts and Singularity thinkers on the ramifications for humans if we do develop artificially intelligent robots, in particular if they have capabilities far beyond our own. Would they be tremendously helpful, tremendously harmful, or perhaps as dismissive of us as the non-meat-based aliens in Bisson’s story? Should we even create such a thing? And if we do, how could we not only make it smart, but make it benign? The possible answers to these questions provide plenty of food for thought.
One of the things that struck me is the idea that AIs might be able to solve our problems for us. In particular, Peter Voss, founder of Adaptive A.I. (a company working on “the world’s first general intelligence software engine”), believes that AIs could not only accelerate research into things like human longevity, but could also help us to be better people. I deeply believe in the application of reason and intelligence to moral problems, but I’m not convinced that an AI could convey to us from outside the answers to such problems in ways that would be helpful. In other words, I think reason and intelligence improve behavior when they are exercised by the person whose behavior is under scrutiny. And in many cases, it’s clear what we should do; the hard part is doing it. Would an AI help with that?
To help us be better people, individually or collectively, an AI would have to understand not just how human psychology works, but how to manipulate it for positive ends. Leaving aside the question of whether we want to submit ourselves to the manipulations of AIs, even for our own good, I’m not sure that an AI would be any better than we are at figuring out how to bring about the behavioral changes needed to decrease humankind’s immoral behavior. Would an AI figure out how to persuade people to stop killing each other in the name of xenophobia and/or religious mania? Or how to persuade everyone to change their behavior enough to keep global warming from becoming a catastrophe? (I consider that a moral question too, because to me it’s morally wrong to leave the planet a worse place for future generations than we found it.) I have my doubts.
I also wondered whether an AI would have to experience something like human emotions in order to make useful decisions. It’s tempting to suppose that a dispassionate intelligence, a Vulcan governed entirely by reason and logic rather than emotion, would make better decisions, but as I recall from reading Antonio Damasio’s Descartes’ Error, our emotions are an integral part of our decision-making apparatus, not to mention our values and our morality. Certainly emotions make many of our worst problems much harder to solve, or create them in the first place, but I don’t think even a stupendously smart AI could help resolve human problems without a deep understanding of emotions.
In short, I have a hard time envisioning the kind of help that those optimistic about AIs think they will provide. Maybe that’s the whole point of the Singularity: we just can’t imagine what it will be like on the other side. But that makes it very difficult to think effectively about how to create the conditions that will lead to the Singularity, which is probably why I’ve never had a lot of patience with the concept in the first place.