What makes humans unique?

In recent years we’ve learned that a number of capabilities we once thought unique to humans are in fact shared by other species: tool use, some aspects of mathematical ability, etc. But still, it’s us humans and not the other animals who have language and books and the arts and toxic waste sites and nuclear bombs and all the other strange fruits of our brains. Which cognitive capabilities truly are unique to us? Marc Hauser has proposed four cognitive differences between humans and other animals that constitute what he calls humaniqueness:

  • We can combine different sorts of information to come up with a better understanding of a situation.
  • If we find something that works to solve one problem, we can apply the same solution to another problem.
  • We can use symbolic representations for computational and sensory input.
  • We can separate our modes of thought from our raw perceptual input.

Hauser believes that these abilities together give us a broader and more transferable sort of problem-solving capacity compared to other animals; we can learn from one situation and apply what we’ve learned to other situations. Could they also be the basis for much of our art? I’m thinking of combining different sorts of information, for example, as the basis for metaphor, one of my favorite literary devices. Obviously symbolic representation is important for much art. I haven’t read Hauser’s paper, just this story from Science Daily, so I’m not sure exactly what to make of that last item on the list, but it reminds me of Wordsworth’s idea that poetry “takes its origin from emotion recollected in tranquility” (which I think applies to other arts as well). It also reminds me a bit of the Buddhist idea of mindfulness, the ability to observe what is going on without getting caught up in it, although that doesn’t always come easy.

[Postscript, December 29, 2023: I’ve come to think that the most significant difference between humans and other animals may be that we’re the only animal obsessed with proving that it’s uniquely gifted compared to others. However, maybe it’s a subset of humans (Western humans?) who are highly concerned about this, rather than humans in general.]

A little knowledge

The other day I ran across this post at Brains on Purpose, which I found very interesting and somewhat discomfiting, as it describes the dangers of popularizing neuroscience (oversimplification or distortion, for example, or creating a greater sense of certainty than is warranted). It got me thinking about whether I’m doing my best to report neuroscience news carefully enough that the limitations are clearly spelled out and the provisional nature of much of the work is obvious. (The Catholic Church has a ritual it calls examining your conscience, and I guess that’s the best description of what that article spurred in me.) And it’s a good warning for anyone who either writes about or reads about brain science.

Along the same lines, this article about the Mozart effect from the e-Skeptic debunks the amazingly popular but ill-founded idea that passively listening to Mozart’s music can affect your mental performance significantly. The Mozart Effect meme, once it escaped into the wilds of commercial culture, spawned a mini-industry, even though the research on which it is based doesn’t really prove what it’s supposed to prove. By all means, listen to Mozart and play his music for your children (along with a wide range of other music), but take the claims about IQ scores with a grain of salt.

The Science Cafe in Bloomington last night was interesting, and also provided a couple of examples of how the media misunderstood or oversimplified a story. Peter Todd of Indiana University talked about recent research on mate choice (a topic in which everyone is extremely interested but for which the scientific explanation is still under construction, making it ripe for misunderstanding). In choosing a mate, we face a sequential decision: we have to judge which of a series of potential mates is the best one, without being able to go back to an earlier possibility or knowing whom we might meet in the future. We must evaluate enough of the options to gauge both the range of possible mates and our own desirability as a mate, which affects how good a mate we can expect to end up with (because we need not only to choose but also to be chosen).

To examine how organisms solve this problem, Todd and others have used agent-based models (simplified computer-based simulations). They’ve also looked at real-world demographic data and, more recently, run some speed-dating experiments. (Speed dating is where a small group of equal numbers of men and women, typically matched for age or other demographic factors, gather for a series of brief one-on-one meetings, each man meeting with each woman. Participants know each other by number and check “yes” or “no” on a card for each person they meet. If two people both check “yes” for each other, the facilitator gives each of them the contact information for the other.)

One result, I believe from the agent-based models, is that the optimal number of potential mates to evaluate is 12; that should provide an organism with a good enough idea of both the mating pool and its own place in the pool. Based on that data, the organism should then basically set a threshold for the best it can expect to get, and choose the first one to come along after those 12 that meets or exceeds the threshold. This is all pretty vague if you try to translate it into human terms—does that mean you need 12 serious relationships (yikes!), or should you go on dates with a dozen different people, or look at a dozen profiles in an online dating service? However, a German women’s magazine evidently did not worry about the finer points, and advised its readers that the twelfth man is Mr. Right, which is not only oversimplified but incorrect, because you’re not supposed to choose the twelfth one but the first one after the twelfth who meets or exceeds your threshold.
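
To make the heuristic concrete, here is a minimal sketch in Python of the rule as I understand it from the talk (my own toy illustration with made-up numeric “quality” scores, not Todd’s actual agent-based model): sample the first twelve candidates without committing, take the best of that sample as your threshold, and then accept the first later candidate who meets or exceeds it.

    import random

    def choose_mate(qualities, sample_size=12):
        """Toy 'sample then threshold' rule: observe the first sample_size
        candidates without committing, set the threshold to the best quality
        seen in that sample, then accept the first later candidate who meets
        or exceeds it. Returns None if nobody after the sample clears the bar.
        (Illustrative only; not the model presented at the Science Cafe.)"""
        threshold = max(qualities[:sample_size])
        for quality in qualities[sample_size:]:
            if quality >= threshold:
                return quality
        return None

    # Rough check: how often does the rule land a top-10% partner
    # out of a pool of 100 randomly scored candidates?
    random.seed(0)
    trials, hits = 10_000, 0
    for _ in range(trials):
        pool = [random.random() for _ in range(100)]
        top_cutoff = sorted(pool)[-10]  # 90th-percentile quality
        pick = choose_mate(pool)
        if pick is not None and pick >= top_cutoff:
            hits += 1
    print(f"Chose a top-10% candidate in {hits / trials:.0%} of trials")

Running something like this gives a feel for how far a small sample can take you, but the numbers here are arbitrary, and the real models presumably also build in the two-sided problem of being chosen as well as choosing, which this sketch ignores.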

In the speed-dating studies, Todd and his colleagues gathered tons of data about the participants; the results he showed last night had to do with how men and women in different age groups differed with respect to how many offers they made, and how well people gauged their own place in the mating pool. Further data analysis and a one-year follow-up are planned. He also mentioned that they compared data about what people said they wanted beforehand to data on whom they actually chose. When asked, the participants tended to say they wanted someone like themselves, but the choices they made supposedly reflected more of a bias toward the status/fertility tradeoff that evolutionary psychologists believe men and women make when selecting a mate (women favor high-status males, and men favor females who show signs of fertility: youth, a particular body shape, etc.). The press picked this story up as basically “Men choose attractive women”. (Gee, you think?) By the way, I’ve been reading an excellent critique of evolutionary psychology by David Buller, and just yesterday I read his painstaking analysis of the data supporting the existence of the status/fertility tradeoff, which convinced me that it’s not nearly as well established as it appears to be. However, that’s a subject for another day.

Thinking about dating

This article from the New York Times discusses several online dating services that try to use scientific methods to help people find the best long-term romantic partner. eHarmony uses a lengthy personality questionnaire designed by a psychologist. Chemistry.com (a spinoff from Match.com) and Perfectmatch.com each use an algorithm designed by an anthropologist and a sociologist, respectively. You could consider the use of these systems by large numbers of people as something of an experiment; the algorithms have not been published or peer-reviewed, but the companies involved must have some data about how well they work, and evidently academic researchers are interested in that. Helen Fisher, the anthropologist who developed the system used by Chemistry.com, is hoping to publish not only information about how the algorithm works but also data from Chemistry.com users that might validate the algorithm, which would be useful.

Personally I’m skeptical about all of the attempts to come up with a widely applicable, science-based method for helping people find a mate. I believe very strongly in science, of course, and I’m fascinated by the ways scientific research can be applied to the process of mate selection. However, I don’t think we’re anywhere near far enough along in understanding personality and relationships to be able to devise an algorithm that will work across the board. I wonder if people are selling these mate selection systems in part because it’s a huge (and somewhat vulnerable) market to be tapped. It’s like any other product that caters to a widely felt need (diets, baldness cures, wrinkle removers): Science certainly may have something to say about the problem, and maybe some of the products out there really do work, but there are so many remedies being pushed on the gullible that it seems to me you have to be extremely cautious about spending your money on any of them.

I will be interested in seeing Fisher’s work when she publishes it, because her method uses sociological and psychological data and also “indicators linked to chemical systems in the brain”, and I’m very curious about how that works. Some of the things discussed in the NYT article have to do with attraction and falling in love, and brain chemistry certainly mediates how that happens. But it seems to me that as hard as it might be to find someone with whom to fall in love, what’s really hard is to find someone you will still be in love with, or at least contentedly married to, 30 years down the road. The hormonal rush of falling in love doesn’t always have all that much to do with how well you’ll be able to get along with a person over the long haul. In fact, if it did, a rich subject for literature would disappear. It seems to me that if you were to try to use any indicators from brain chemistry to help people find a good match, you’d need to have looked at the brains of people who were falling in love 30 years ago, seen which of them stayed together, and then used their brain chemistry as the model. I’m curious about whether there’s even a difference in the initial stages between those who stay together and those who don’t, and if there is, I’m not sure that’s what Fisher’s algorithm is using.

One reason I’m skeptical about the emergence any time soon of a widely applicable system for matching people up is the complex nature of some of the concepts involved. Take the idea of similarity, for example. Some similarities make for compatibility, but others make for conflict, and still others don’t matter all that much. If one of you votes for Clinton and one for Obama next Tuesday (or even if one of you is blue and the other red), that might not make much of a difference to your relationship. If one of you likes to hike rugged trails and the other prefers mall-walking, at the very least you’ll have a hard time figuring out what to do together on a Saturday afternoon. If you’re middle-aged and in a relationship with someone who is just as passionately attached to his house as you are to yours, you could be in for a rough ride unless the houses are similar and one or the other of you eventually gives his or hers up. (Which means one of you has to be less stubborn than the other.) It’s a complicated concept, any way you look at it. Some scientific papers are listed in a sidebar on the left of the NYT article; if you have a way to access the one by Ruth Gaunt, check that one out for a good summary of the difficulties involved in analyzing whether similarities in a couple are related to marital happiness. The one just above it is interesting as well (and available for free) but it describes research on dating couples and newlyweds, not couples that have been together for a long time.

Unexpectedly tapping into memories with deep brain stimulation

I had a dream once where there was some kind of gizmo that I could hook up to my brain (painlessly, no mess) and then use to play back my memories on a TV screen, sort of like playing a videotape in a VCR. When I woke up, I felt mildly disappointed, because in the dream I had been happily anticipating the memories I would like to revisit. Maybe this dream is why this story about deep brain stimulation and memory seems so exciting to me.

I knew that DBS, in which implanted electrodes are used to stimulate activity in specific parts of the brain, is used to treat Parkinson’s disease, and is being investigated for use in depression. I didn’t realize that it’s evidently sometimes considered for use in morbidly obese people as a means of suppressing appetite. When DBS was applied to the hypothalamus of a man in Canada in such an attempt, he unexpectedly and vividly recalled a memory from about 30 years earlier, watching the remembered scene as an observer. Increasing the stimulation made the details of the scene more vivid. After several weeks of constant DBS, the man’s performance on memory tests improved.

Memory improvement is of vital interest to those who study various disorders of memory, in particular Alzheimer’s; a small pilot study is underway to see if DBS can help people in the early stages of that disease. It seems like this might also have some potential as a way to probe the way long-term memories are stored. I have to admit, though, that the thing that captures my imagination is the thought that maybe someday we really could learn how to stimulate vivid and detailed memories of past events. This is obviously not as important as helping people with memory disorders, and the process, even if it were possible, probably wouldn’t be all I imagine it to be, but a wannabe memoirist can dream.

Midlife melancholy around the world

There seems to be something inherent in human beings that makes them relatively unhappy in their middle years. Researchers analyzed data from people all around the world and found a common pattern of greater happiness in youth and old age and less happiness in midlife. The pattern appears in data on people in 72 different countries and is consistent across a surprising range of people: rich, poor, childless, with children, male, female, single, married. There is one noticeable difference in the way people are affected: Men and women in England go through their lowest point emotionally at around the same age, but there’s a ten-year difference between men and women in the US (women bottom out around 40 and men around 50). So maybe some factor or factors unknown can influence the timing, but the middle-age slough of despond seems otherwise invariant.

So far, no one knows what it means or why it happens. I like the quote in this press release from a researcher who says that just knowing that it happens and that it’s just a phase could be comforting. (Physically fit 70-year-olds are on average as happy and mentally healthy as 20-year-olds, so there’s something to look forward to.)

I’m wondering if this has to do with the way the future often seems narrower but deeper after a certain point. You’re old enough to realize you have to let go of some of your dreams because they will never happen. The wave function of possibilities you moved in as a young adult has started to collapse because out of all the possibilities, you’ve made choices that have inevitably excluded other choices. There’s definitely a sense of loss involved in watching your horizons narrow, but once you get over that, you have more energy to channel into the things you’ve chosen, and with any luck, some of them will bloom for you. And maybe the process of getting over it is a necessary part of learning how to be happier with life and with yourself.

Shakespeare in the brain

One of the books I’m reading at the moment is Ilium, a science fiction novel by Dan Simmons that is set in the future and features such exotica as quantum transport, sentient machines, and sophisticated nanotechnology. It’s one of those novels where it takes a while to see how the various story lines are connected and what the big picture is, so reading it is an exercise in puzzle-solving, with lots of those pleasurable little electric jolts to the brain when you see a connection or fit another piece of the puzzle into place.

The most fascinating thing about the novel to me, though, is the way that literature is woven into the tale. For some reason not yet clear to me, some kind of superhuman or post-human entities are re-enacting the Trojan War on a terraformed Mars, and of course the Iliad is one of the oldest human stories preserved in literature. A consortium of sentient machines from the moons of Jupiter is alarmed by the quantum fluctuations it detects on Mars, and a small fleet of the machines is dispatched to investigate.

Two of the sentient machines, possibly my favorite characters in the book so far, are students of human literature. One of them is a fan of Shakespeare and the other has learned quite a lot about the bard but has decided that Proust is really his favorite author. Watching the story unfold is fun, but even more fun are the literary quotes these two share in their discussions. The beings running the Trojan War are by and large a repellent lot, and the people left on the surface of the earth are leading a bland existence unseasoned by challenge or meaning (although for a few of them I suspect that is about to change). The idea of a future in which the richness of human literature is cherished most fully by biomechanical creatures is somehow haunting.

So I was in a good frame of mind to investigate this press release. Philip Davis, a professor of English, has begun to work with a couple of brain imaging specialists to investigate what happens to brains that are exposed to a particular literary device used to great effect by Shakespeare. The device is the shifting in function of a word—for example, using a noun as a verb or a verb as an adjective. He gives several examples, including this from King Lear: “He childed as I fathered.”

Davis was curious about what happens when the brain has to process these shifts in function, which is an especially interesting question in light of some work that suggests that nouns and verbs are processed in different areas of the brain. You would expect a bit of a hesitation as the brain has to decide how to interpret the shifted word. He and a colleague put together a set of sentences that illustrate function shift, along with some control sentences that are either normal or that use function shift in a meaningless way (e.g., “The pizza was too hot to sing”). EEGs were taken of subjects as they read the sentences (in the future, further tests including fMRI will be done).

The results showed that processing a Shakespearean function shift produced a distinctive pattern of electrical activity—which in itself is pretty cool, to find a link between a particular pattern of word usage and brain activity. The pattern of activity seems to indicate that comprehension was attained but with some additional effort compared to a more normal sentence. Shakespeare, of course, knew nothing of the electrical activity of the brain, but Davis points out how successfully he exploited that extra effort to add emphasis and depth, and to stretch our minds. It’s a small study so far, but it illustrates the potential for productive interaction between science and the humanities.

By the way, this is my thousandth blog post. The event seems to call for a celebration of some sort, so perhaps a bit of chocolate is in order.

Talking to the plants

We all anthropomorphize from time to time. I’ve speculated about whether we sometimes do this because we feel an irrepressible urge to attribute a mind like our own to anything we deal with (e.g., when the guy on the radio says the CD player evidently doesn’t want to play the CD he just promised us, so he’ll have to play something else). Dogs, cats, plants, CD players, cars (how many of you have given your car a name?), furnaces or other balky or temperamental appliances: We speak of them as if they were people, and sometimes talk to them.

A recent study indicates that one reason we do this might be to combat loneliness. Participants in the study were more likely to treat pets or gadgets as humans, or to express a belief in the supernatural, when they were feeling lonely. When there’s no real live person there to talk to, in other words, people may start treating things around them as if they were people. I would guess, although the press release doesn’t mention it, that this might be especially likely to happen with things like personal computers, which can become in some sense an extension of one’s own brain and particularly one’s memory. At any rate, maybe all those people who say plants grow better if you talk to them just like talking to plants when they can’t find anyone else to talk to. I shouldn’t be dismissive about this, though, because while I don’t care for cats or dogs, I have struck up a friendship in the last year and a half with a newt, which is probably one of the least expressive pets you can have. I know the reason he comes to the front of his tank when I’m around is that he’s hoping I’ll feed him, but still, I say hello to him when he does.

A smiling voice

Wait a minute, shouldn’t that header say “A smiling face”? No; voices can show a smile too. Researchers videotaped test subjects who were answering questions in an increasingly silly interview, and identified on the videotape four different levels of smile, ranging from no smile at all to a full-out grin. Then people who hadn’t seen the video were given just the audio portion to listen to, and it turned out that they were pretty good at identifying the different types of smiles just from characteristics of the voice. Voice communication has emoticons beat when it comes to conveying subtle nuances of emotion, and this is a good example of that. This article from Science Daily has more information.

Plate tectonics and life

The American Astronomical Society is holding its annual meeting in Austin, so you might notice that Thinking Meat is taking on a definitely star-struck feel this week. A story that made the news today is about plate tectonics on Earth-like planets.

The processes of plate tectonics are believed to be responsible for some crucial features that make Earth habitable, including the cycling of carbon dioxide between rock and atmosphere and back again (essential for temperature control, which in turn is essential for maintaining liquid water on the surface). Not to mention the creation of the continents and the maintenance of the magnetic field that helps keep our planet from being pasteurized by cosmic radiation—plate tectonics are good for living things. (Peter Ward and Donald Brownlee spend a whole chapter on plate tectonics in their book Rare Earth: Why Complex Life Is Uncommon in the Universe.)

But the news from Austin is that Earth appears to be at the small end of the range for planets with suitable conditions for plate tectonics. Simulations of rocky planets much larger than Earth showed that the movements of continental plates become easier on bigger planets, where the plates are thinner. Based on the simulations, scientists would expect to find plate tectonics operating on rocky planets three times as massive as the Earth and larger (with an upper limit of about ten times the Earth’s mass, because beyond that size, planets start to turn into gas giants). Perhaps the presence of water on Earth adds enough subterranean pressure to get the plates moving even though our planet is so small. Note that Mars and Venus, both smaller than Earth, do not have plate tectonics, and as far as we know have no life, certainly no complex life. It looks like rocky “super-Earth” planets might be relatively common in the cosmos, which makes this good news for anyone who hopes to someday find evidence for extraterrestrial life.

However, it’s a little disconcerting to realize that Earth is somewhat marginal in the plate tectonics department, especially in light of another news story about how the process might occasionally quit working. Geochemical evidence suggests that about a billion years ago, the volcanism associated with plate tectonics stopped. Around the same time, an ocean basin closed and the subduction zones surrounding it (where rock on the ocean floor collides with continental rock and gets pushed beneath it) were shut down, evidently bringing plate tectonics to a halt. Today subduction, an important driver for volcanism, earthquakes, and associated disturbances, is mostly happening in the Pacific basin, which is predicted to close in about 350 million years, when North and South America run into Eurasia. When this happens, plate tectonics might again grind to a halt (I know that phrase is a cliché, but it seems appropriate for something that involves the collision of continents). So it may be that our planet is not just borderline geologically active, but only intermittently geologically active as well.

Book review: On Deep History and the Brain

On Deep History and the Brain, by Daniel Lord Smail
Berkeley: University of California Press, 2008

When did human history begin? Most general histories pick a starting point such as the beginning of the written record or the civilizations of Mesopotamia. In any case, a date or event is chosen before which there is taken to be no human history as we would define it. What we know about how our earlier ancestors lived is described as part of another discipline, perhaps anthropology or paleoarcheology. In On Deep History and the Brain, historian Daniel Lord Smail argues that the choice of a relatively recent date for the start of human history is more or less arbitrary and reflects the structure of an earlier conception of a “sacred history.” He proposes that the entire story of our species be integrated into the narrative of historians. He also offers some exciting suggestions for a possible approach to deep history centered around the human brain and nervous system.

The first two chapters are probably most meaningful to professional historians, but they’re accessible to any reasonably well-educated reader. Smail describes the way an earlier sacred narrative shaped our sense of history, and how vestiges of that narrative still shape our sense of when history began. Historians may reject the story of the garden of Eden, but history is still taken to begin at some base point in the human trajectory—perhaps the earliest farming communities, which contained the seeds of today’s world. Or maybe the fall of the Roman Empire takes the place of the expulsion from the garden, and we begin with the Middle Ages, relatively primitive compared to what came before or after. We still assume there is some starting point at which we open the book onto a human story, rather than recognizing the long gradual process by which we became what we are.

After outlining the history of our view of history, Smail considers and rejects a number of starting points that have been chosen for history. (Many of these arguments are no longer made by historians, but they still form a sort of ghost remnant of ideology that haunts our ideas of history.) He concludes that human societies don’t emerge into history when they start developing a written record, or when they demonstrate a consciousness of their own stories, or when they become politically organized, or when they cross some boundary line (e.g., 4000 B.C.) that separates human from animal societies.

But is there a dividing line inherent in the way humans are able to “transmit their experience to future generations [and] are in some sense the authors of the changes that happen to their societies”? Cultural evolution is Lamarckian, that is, acquired characteristics can be transmitted to others; this is in contrast to Darwinian biological evolution. Cultural evolution thus has an element of human control absent from human biological evolution. Could the advent of cultural evolution offer a justifiable breaking point after which we can legitimately consider human history to have started?

Smail examines this question in the third chapter of the book. He concludes that no, it does not. For one thing, we are not the only animals to have culture, and the cultures of other animals often lack a dimension of intentionality—as early human cultures quite likely did too. So on the far side of the divide, we have culture without much of a guiding hand. On the near side, more recent historical developments are not entirely intentional, but contain elements of chance and blind retention or transmission of changes (which are essential to Darwinian evolution).

He gives an example from his own research, an examination of how written descriptions of property, recorded in various transactions, changed over the later part of the Middle Ages. The parties to a transaction and the notary who recorded their verbal descriptions did not consciously plan to move toward any particular standard system for categorizing the properties, and yet the written descriptions did tend to eventually shift toward such a standard (which might vary from place to place). The shift was probably the result of slight unconscious preferences on the part of the notaries rather than any grand plan or design. In short, the division between Darwinian and Lamarckian factors is blurred rather than clear-cut.

I found the last two chapters to be the most interesting. Although there are many ways to approach a deep history of humankind, Smail proposes one centered around the capabilities and quirks of the human brain. The fourth chapter sketches out this “new neurohistory.”

An important part of this chapter is a critique of evolutionary psychology, which attempts to bring our preliterate past into the study of human societies today. Or rather, I should say it’s a critique of Evolutionary Psychology, although Smail doesn’t use the capitalization to distinguish between the application of evolutionary approaches to human behavior (lowercase evolutionary psychology) and the program laid out by Leda Cosmides, John Tooby, and others (Evolutionary Psychology, or EP). I’m very interested in the former but frequently skeptical of the latter.

When I read a news story about new EP research, I’m often baffled by the leap from the description of a newly identified behavioral trait to the story of why we behave that way. Surely, I think, I must be missing something. There must be something in the paper itself that explains the missing links between the observation of the behavior and the explanation of how it came about. Smail is a historian rather than a psychologist or anthropologist, but his understanding of the process of EP research parallels mine:

“Most work in evolutionary psychology is achieved through the process of reverse engineering—you look at the trait … and then try to imagine the evolutionary context in which it might have been adaptive. It is easy to make mistakes.”

At least I’m not the only observer who thinks imagination plays a key role in the explanatory process.

Smail describes several critiques of Evolutionary Psychology, in particular David Buller’s Adapting Minds: Evolutionary Psychology and the Persistent Quest for Human Nature. Buller makes three relevant arguments: First, natural selection does not homogenize human traits, but produces a diversity of psychological types (not along racial, ethnic, or gender lines, but within populations). I found this the most interesting of his arguments, because I’ve wondered for a while about whether some personality traits are not adaptive or maladaptive in and of themselves, but might be necessary in balance with other traits within a population.

The second argument is that the human brain is not the product of a set of adaptations to a more or less fixed environment. The social intelligence hypothesis says basically that the development of human intelligence was driven by the need for understanding and relating to our conspecifics; if this is correct, then the shifting social environment gives rise to continuous adaptation of the human brain. The final argument is that the human brain could well have changed in the past 100,000 years rather than being frozen in its response to our ancestral environment.

The rest of the chapter deals with the necessity of taking biology, and in particular neurobiology, into account in history, for example:

“…moods, emotions, and predispositions inherited from the ancestral past, where they evolved at the intersection of human biology and human culture, form a structural backdrop for many things we do and have done. They are interesting for how they tease or suggest. They are also interesting for how they are violated, manipulated, or modulated. And this is precisely where it becomes so important to think with neurohistory. Although the fact is not widely known among historians and is generally overlooked by psychologists and biologists, cultural practices can have profound neurophysiological consequences. Key elements of human economic, political, and social activity … emerged precisely because humans possess relatively plastic or manipulable neural states and brain-body chemistries.”

There’s some shifting of focus here; neurohistory doesn’t necessarily have to deal with deep history (Smail includes examples of applications to more recent history), but taking neurobiology into account does set any historical study into the context of our story as a species. In the preface Smail explains his reasons for writing a book that simply proposes this approach rather than also sketching out what a deep history might look like, and his reasons are valid; still, I wish the book had said a bit more about what might be covered in a new general history textbook that took the longer view he proposes. Maybe he’ll put that in another book. (And I do appreciate that this one was short enough to finish before I had to take it back to the library!)

A bit later in the neurohistory chapter, Smail writes that the Neolithic revolution

“…created, in effect, a new neurophysiological ecosystem, a field of evolutionary adaptation in which the sorts of customs and habits that generate new neural configurations or alter brain-body states could evolve in unpredictable ways.”

Which is a good lead-in to the final chapter, on a specific focus of neurohistory: psychotropy, or the ways humans manipulate the brain-body states of themselves and others.

Smail defines several types of psychotropy, although the definitions don’t seem to be mutually exclusive. First is teletropy, influencing the moods and emotions of others through a variety of approaches that include things like religion or seduction. (I don’t remember that he specifically mentions art in this context, but it seems to me like another excellent example.) Teletropy can be symbiotic (both parties benefit) or exploitive, although it can be hard to draw the line, and it might be possible to describe the same behavior either way. (For example, is church-going a result of the joint interests of clergy and laity, or are the latter being exploited and duped into donating money to the former?)

In contrast to this is autotropy, the ways we have of adjusting our own mental and emotional state. These include recreational sex, reading, and gossip (taken to mean the discussion of other humans and their behavior). A subset of autotropic mechanisms that is also used sometimes in teletropy is the ingestion of substances that tweak our moods one way or another. The neurohistorical approach offers a new way to look at our relationship with these substances and practices over time, and this is the richest and juiciest part of the book, in my opinion. Toward the end of the chapter Smail suggests that:

“…it may be possible some day to argue that European societies, between the twelfth and the nineteenth centuries, witnessed a tectonic shift away from teletropic mechanisms manipulated by ruling elites toward a new order in which the teletropies of dominance were replaced by the growing range of autotropic mechanisms available on an increasingly unregulated market. (The rise of the fascist regimes of the twentieth century might well pose a challenge to the simple teleology of this model, reminding us that history is always complex and never linear.)”

I enjoy this kind of big-picture thinking, although obviously this is just the briefest of sketches to demonstrate the possible explanatory power of using a neurophysiological approach to human history. All in all, this book should provide plenty of fodder for thought and debate. Although it’s addressed to historians, I think there’s lots of material here to interest the more general reader, especially in the last two or three chapters.