The colors of people

Yesterday I heard an excellent talk on the evolution of skin color by Nina Jablonski of Penn State. She started by describing how little work there was on the subject when she was asked to give a talk about it in the early 1990s; although it’s an interesting topic, the subject of skin color and race was such a hot potato that people were reluctant to pick it up. Dr. Jablonski described herself as an optimist who believes that we’re socially mature enough to deal with discussions of skin color. Her talk didn’t directly refute the ideologies involved in racism, but was an excellent example of how to sidestep ideology entirely and present the evolution behind skin color variation in humans as a fascinating story of science and history, with an abiding biochemical tension at the heart of it.

The tension has to do with the human body’s love-hate relationship with ultraviolet radiation. In general, UV is deleterious to us (and to many other living things); in particular, it destroys the essential B vitamin folate and causes various sorts of tissue and DNA damage. However, certain wavelengths of UVB radiation are essential to the formation of vitamin D in the skin; vitamin D helps our bodies absorb calcium and can be very difficult to get in sufficient quantity from dietary sources. Thus, we could mostly do very well without UV radiation, except that we need vitamin D, not only for bone health but for a host of other reasons.

(A side note on our current take on this problem: For years dermatologists have focused on the bad side of UV radiation, urging the use of sunscreen; meanwhile, lifestyles, particularly in industrialized countries, have tended toward more time spent indoors and less out in the sun, resulting in lower UV exposure. I’ve recently read a bit on vitamin D and depression, mostly because I’m being treated for a severe vitamin D deficiency. I’ve been contemplating whether to continue protecting my skin from UV at all times (especially important because I grew up in a sunny climate at a time when sunscreen was nowhere near as ubiquitous as it is today) or to let at least a little of the sunshine in so as to boost my vitamin D levels. Jablonski says she advises people to follow the advice of dermatologists to protect their skin and to consider dietary supplementation to meet their vitamin D needs; some dermatologists, she reported, are now recommending prudent sun exposure, which means exposing only parts of the body that normally don’t get much sun. One dermatologist asked her to please encourage people to sun their buttocks.)

Exposure to UV radiation (UVR) is the single most important factor in explaining human skin color, a story that turns on the varying levels of melanin in our bodies. This nifty chemical produces skin pigmentation and protects against the negative effects of UVR. The earliest members of genus Homo were darkly pigmented, living in an area with abundant UVR, but in time we moved to areas with different levels of UV radiation. UVR is generally most abundant at the equator and tapers off at higher latitudes, with a few notable exceptions. At higher altitudes, as in the Himalayas and the Andes, UV exposure is greater because there is less protective atmosphere between the earth and the sun. Also, equatorial areas that typically have a lot of cloud cover get less UVR than noncloudy equatorial regions.

As the human habitat has expanded out of Africa and over virtually the entire globe, humans have moved to a variety of UV regimes, and we have evolved accordingly. There’s no single optimal skin color; in a given region, the optimum level of pigmentation depends on the balance between keeping UVR-related damage and mortality to a minimum while maximizing vitamin D production in the skin. Jablonski made two key points: 1) Skin color is not a good indicator of race or genetic grouping because both light and dark pigmentation have evolved independently multiple times (light pigmentation evolved at least twice in humans, for example, and once in Neanderthals; dark pigmentation also likely evolved multiple times). 2) Skin color is an excellent subject for teaching people about evolution, being one of the best examples of evolution in humans, and easily visible to all as a part of everyday life.

The story is full of fascinating little quirks; for example, Tibetans, although they live at high altitude, are not as darkly pigmented as you would expect for the amount of UVR there, because humans moved into that part of the world relatively recently (I think she said it was within the last three or four thousand years), and they came with heavy clothes and structures that protected them from the sun. As a result, they don’t actually get a huge amount of UV exposure, and need to stay relatively pale to get their vitamin D. Another interesting tidbit is that in most human populations (if not all, my notes are unclear on this point, sorry), the women tend to be more lightly pigmented than the men. This may be related to women’s childbearing role; for example, a successful pregnancy requires quite a lot of calcium. However, sexual selection may also play a role in areas where men preferentially choose lighter-skinned women, exaggerating a difference originally caused by natural selection.

Jablonski also explained why some people tan more easily than others. It turns out that some populations, e.g., some of those that live around the Mediterranean, have developed the facility to regulate the melanin content of their skin relatively easily in response to environmental conditions; i.e., they tan easily. She also addressed the question of why the Inuit are as highly pigmented as they are, despite living at such high latitudes. They are exposed to a great deal of reflected UVR bouncing off the snow, so they need the protection from UV, and they also consume some of the most vitamin D-rich foods on the planet, namely, the blubber of marine mammals, so they can afford to miss out on some of the vitamin D production in the skin that lighter pigmentation would allow.

The questions after the talk were all good, in particular one about why there’s (almost always) a broader range of hair colors associated with lighter skin pigmentation than with darker. The reason has to do with the variety of genetic differences that can combine to create light skin color; lighter skin color can be associated with a variety of genetic combinations and thus hair colors. In areas where dark skin is advantageous, however, dark hair is also often advantageous, so the two traits tend to be linked. Jablonski described how we learned that dark hair can be advantageous in a sunny climate from studying thermoregulation in birds. The black feathers of crows, for example, are good at dissipating heat. This is counterintuitive to anyone who has ever had a car with a dark vinyl interior, but heat penetrates white plumage (and I’m guessing is then transmitted readily to the bird), whereas black plumage traps the heat and then dissipates it when a breeze blows or when the bird puffs up its feathers, as crows evidently do in hot weather.

It’s stories like this that make the natural world such a rewarding and engaging object of study, and I wholeheartedly endorse Jablonski’s recommendation that the story of skin color be used as an educational tool for anyone who is teaching about evolution. I think she was addressing mostly professional scholars and educators, but anyone, especially parents and others involved in the care and tending of young minds, can learn more and share this story. Some online resources include a recent NPR story and a story from Discover Magazine. Jablonski has also written a book, Skin: A Natural History, which has some information on skin color. I leave it to you to decide whether to sun your buttocks, but if you want to teach young people, or any people, about human evolution, I highly recommend that you consider using the story of skin color as a wonderful, accessible teaching story.

What is feeling good good for?

A recent EurekAlert article describes some work that examined the effect of positive and negative emotions on a person’s level of adherence to typical cultural values. The study looked at Asians and Europeans; each culture, broadly speaking, has a different attitude toward individuality versus fitting into the group, and these attitudes were measured in individual participants. Then the researchers manipulated the moods of the participants, cheering some up and lowering others slightly into the dumps. The jazzed or bummed participants were then given tasks designed to reveal the degree to which they acted in accordance with their attitudes. The happier ones were more likely to behave in ways that were off their own personal beaten path (Europeans taking more of a group view, Asians acting more independently), indicating that being in a more cheerful frame of mind might predispose people to be more exploratory and open to different ways of being. Mild misery had the opposite effect, reinforcing existing attitudes and behaviors.

It’s a fascinating look into how fluctuations in mood can change something that on the face of it might seem fairly set. Identity is not a static thing. (Incidentally, it’s also a nice story for those of us who like to answer questions about personality—or other topics—with “It depends.”)

(The full article is in the March 2009 issue of Psychological Science: Who I Am Depends on How I Feel: The Role of Affect in the Expression of Culture, Claire E. Ashton-James, William W. Maddux, Adam D. Galinsky, and Tanya L. Chartrand. Psychological Science 20:3, 340–346.)

Of course, the downside of the contingent nature of our behavior is that, as we already know, anxious, fearful people are not always at their best. Maybe that’s why it’s important to keep finding something to laugh at or otherwise feel good about even in trying circumstances. Coincidentally, I also happened across this article from the Association for Psychological Science about the value of positive emotions. The article describes the “broaden and build” model of psychologist Barbara Fredrickson, who has a new book out on the benefits of positivity. The idea is that contentment, playfulness, and serenity appear to help open up the mind to novel ideas (as with the recent experiment on cultural values), and over time, such moments of broadening add up to a greater sense of purpose, better social connections, and other beneficial outcomes. The article opens with a wonderful anecdote about patas monkeys, who in their youth chase each other around and, in the process, throw themselves onto flexible young trees, which bend and then fling them off in another direction. The monkeys drop this kind of horseplay as they get older, except when they’re being chased by a predator, when they will use a sapling as a slingshot to try to escape death. Evidently those monkeys look like they’re goofing off while they’re actually learning a survival skill.

One of the most endearing things about humans and other animals, it seems to me, is the sense of play, of spontaneous joy in some goofy activity or another, preferably shared. If there’s some cumulative long-term benefit, so much the better. I’m glad psychologists are looking into this kind of thing, and I’m also glad that thinkers before this have examined the question. Edward Abbey, for example, in Desert Solitaire, had this to say about the croaking of frogs in a brief wet spell in the desert:

“Why do they sing? What do they have to sing about? Somewhat apart from one another, separated by roughly equal distances, facing outward from the water, they clank and croak all through the night with tireless perseverance. To human ears their music has a bleak, dismal, tragic quality, dirgelike rather than jubilant. It may nevertheless be the case that these small beings are singing not only to claim their stake in the pond, not only to attract a mate, but also out of spontaneous love and joy, a contrapuntal choral celebration of the coolness and wetness after weeks of desert fire, for love of their own existence, however brief it may be, and for joy in the common life.

“Has joy any survival value in the operations of evolution? I suspect that it does; I suspect that the morose and fearful are doomed to quick extinction. Where there is no joy there can be no courage; and without courage all other virtues are useless.”

I can’t do much better than leave you with the words of Cactus Ed. Right now I’m going out on my back patio to joyfully celebrate warmth and sunshine after weeks of Midwest ice. (I won’t sing, though, because despite my best efforts I might sound remarkably like the frogs.)

When good things don’t register

A recent experiment at Ohio State, described in this story from Science Daily, looked at how depressed and nondepressed people view positive and negative things in their environment. To examine how people form positive or negative attitudes, researchers used a computer game that neatly sidesteps any possible confusion from pre-existing attitudes about particular topics. The game introduces players to a variety of beans with different appearances. They can accept or reject each bean as it appears on the screen; some beans are good beans, adding points to a player’s score, while others are bad beans, resulting in points being lost. The goodness or badness of a bean is reliably indicated by its appearance, and players have to learn to identify beans based on their experience with the game.

In this particular experiment with the bean game, depressed and nondepressed people were equally good at identifying the bad beans. However, depressed people didn’t do as well as the nondepressed at identifying the good beans. This seems to me to present an interesting chicken-and-egg question: Are people slower to spot the good things because they are depressed, or are they depressed because they’re slower to spot the good things? (I suspect the answer might be “Yes”; i.e., both are true.) The Science Daily article seems to come down on the latter side; it concludes by suggesting that therapists who are treating depressed people might try to make them more aware of the good things in their lives. This is probably excellent advice, but I think there’s more to it than that.

It seems to me—based only on my own experiences with depression—that maybe the crucial missing piece in a depressed person’s experience of the game is that to a depressed person, good things don’t reliably feel good. The word “anhedonia” describes the lack of pleasure in normally enjoyable activities that forms, for me, the core experience of depression, and I think it may be what’s at work in the depressed people’s poorer performance in recognizing the good beans. They just don’t always feel whatever it is that identifies experiences as being positive, pleasurable, or worthwhile. Reminding myself of the many blessings in my life is always a good thing to do, but sometimes it seems like an intellectual exercise that doesn’t really do much to bring back the normal feeling of enjoying those blessings. I wish I knew better what it is that brings that feeling of enjoyment back, or makes it go away, but I’d bet that its absence is at the heart of the difference in performance on the bean game.

The paper is Attitude Formation in Depression: Evidence for Deficits in Forming Positive Attitudes, by Laren R. Conklin, Daniel R. Strunk, and Russell H. Fazio (Journal of Behavior Therapy and Experimental Psychiatry, 40(1), 120–126, 2009).

Real people, fictional people

Characters in novels, movies, and other fictions can seem quite real (we root for one and boo another, for example, and cry sometimes when one of them dies). Yet for all that, we can easily distinguish them from real people, people that we know personally. But how do you know that your mother is real, for example, but Scarlett O’Hara is not?

An ingenious recent fMRI study compared brain activity in cases where people contemplated scenarios involving fictional characters, famous people they didn’t know personally, and friends or family members. Participants had to determine the plausibility of actions like dreaming about a fictional character (possible), talking with a fictional character (impossible), or having dinner with a real person (possible).

Two brain areas appeared to be involved in the activity of distinguishing flesh-and-blood people from the purely mental constructs that are fictional characters: the anterior medial prefrontal cortex and the posterior cingulate cortex. These are parts of the brain’s default network, which kicks in when we’re not doing anything in particular and our minds go wandering over an internal landscape; both areas are believed to be important in self-referential thought and the recall of autobiographical memories. These brain areas were most active in the tasks involving friends and family, moderately active in tasks involving famous people who were not personally known, and least active in tasks involving a fictional character. The idea is that perhaps you know your mother is real because your brain codes her as being more personally relevant to you than a fictional character is.

The paper is available on PLoS ONE: Reality = Relevance? Insights from Spontaneous Modulations of the Brain’s Default Network when Telling Apart Reality from Fiction, Anna Abraham and D. Yves von Cramon. It’s got lots of interesting background, and some fascinating material on the possible relevance of this work and ways it could be extended. I’d love to know, for example, how particularly well-known and loved fictional characters fall on the spectrum of brain activity, and also what a writer’s brain looks like when it’s contemplating characters it has created. Meanwhile, it’s time for me to immerse myself in a fictional world and a hot bath.

Virtual reality between the ears

I’m reading Adam Bede, by George Eliot, and I’ve noticed that she opens several chapters with an invitation to the reader to view a scene as she describes it. I think at one point she even has readers peeking through a window to see inside a house. This invitation to enter another world, whether explicit or implicit, is one of the chief allures of fiction for me. A recent fMRI study reveals some of the brain activity going on when we immerse ourselves in a written narrative. It’s an active process, with different brain areas coming into play to mirror what the characters in the narrative are doing. Our minds are evidently doing something akin to what Eliot describes: experiencing a virtual world to some degree as if it were real. This article from PhysOrg.com describes the work. The paper will appear in a forthcoming issue of Psychological Science, but I haven’t been able to find a citation yet.

Songs that call up memories

A friend recently sent me a link to a Pandora station he had created and thought I might like. That got me started exploring Pandora (an Internet music service based on the results of the Music Genome Project), in particular creating stations based on music I remember from my teen years and twenties. Music is one of the strongest triggers I know for memories of a particular time and place; I enjoy not only recalling my own memories associated with a particular song, but hearing the stories that people close to me share about songs they remember.

Notably, details and emotions like these often come back to me without my even listening to the song. A recent study examined whether there’s a difference in the strength of recall if people hear the song, see the title or lyrics, or see the album cover. The hypothesis was that hearing the song would lead to stronger recall, and that’s certainly what I would have expected. In fact, in a study of 124 undergrads who were prompted to recall songs from five different periods of their pasts, it didn’t matter that much how they were reminded of the song. The memories came back about as strongly in all of the experimental conditions, as long as the subjects were familiar with the song and had autobiographical memories associated with it. This story from Science Daily gives an overview. The paper itself goes into a good bit more detail about autobiographical memory and the finer points of the study: Using music to cue autobiographical memories of different lifetime periods, Elizabeth T. Cady, Richard Jackson Harris, and J. Bret Knappenberger. Psychology of Music, Vol. 36, No. 2, 157–177 (April 2008).

Book review: iBrain

iBrain: Surviving the technological alteration of the modern mind, by Gary Small and Gigi Vorgan. New York: HarperCollins, 2008.

If you are looking for a good book summarizing what we know about the effect of technology on our brains, I’d suggest you keep looking. This book provides a mixture of self-help, technical help, popular neuroscience, and a subtle but persistent deprecation of modern communication technology, without, in my opinion, doing any one topic justice. Furthermore, it’s mired in a fundamental confusion about the difference between evolution and the capability of individual brains to change their neuronal circuits in response to the environment (neuroplasticity). This leaves me doubtful about how far I can trust the authors when they present scientific information.

The book addresses digital immigrants, those of us for whom current communication technologies are something we encountered after our formative years. This is fine, except that the book is threaded with an anti-technology bias. Although the authors don’t explicitly advise against technology use, instead counseling balance, the vignettes presented throughout the book deal only with the risks and downsides of technology. Loaded language is often used to deliver statistics: Young people don’t watch or use digital media, for example; they expose their brains to them.

The book does cover some interesting research in various relevant areas (e.g., addiction, ADHD), and tackles big topics like the effect of the Internet on politics, entertainment, and crime. These are all useful or important things to consider. However, it tends to cover these topics in a more or less cursory way, because it covers a lot of other turf as well, offering self-help exercises to bolster your interpersonal communication skills and information to help you survive in a digital world. This technology toolkit contains fairly basic advice about things like web searching, cell phone etiquette, and online privacy, most of which is not likely to be new to you if you read this blog. (And even your grandparents, no matter how old they are, could probably tell you to save only the email messages you are likely to have to refer to later.) In short, the book is a hybrid that, in my opinion, doesn’t provide significant new, useful, thoughtfully presented information on any one topic.

At the heart of my quarrel with this book is the first chapter, which assures you that your brain is evolving right now. That’s a questionable statement. Your brain changes into something different as you learn new things, no doubt about that, but that’s neuroplasticity, not evolution. You might charitably think that the authors are using “evolution” in its broader, nonscientific sense of developing into something more advanced. In fact, they offer this definition themselves, but it’s at the head of a section on Darwinian evolution, which discusses natural selection and survival of the fittest and even mentions DNA, but then describes brain evolution in terms of single-generation changes like the development of a shorthand for text messaging, rather than intergenerational changes in gene frequencies. OK, maybe they’re talking about cultural evolution, but in that case, why bring up Darwin and genes without making plain that biological and cultural evolution are analogous in some ways but not the same? Also, can a single brain even be said to evolve culturally, or does the culture itself evolve? All in all, I found this first chapter a grievous irritant.

The book goes on to discuss the brain gap; I remember the generation gap and the missile gap from my childhood, so maybe I’m not as excited about a new gap as I should be. One worthwhile question about the difference between digital natives and digital immigrants—as yet unanswered—regards the way that immersion in electronic worlds affects the social maturation of young brains. The research the book presents is suggestive, but inconclusive.

Sometimes it seems like the authors are trying to have it both ways: On the one hand, individual brains change to adapt to their environments; that’s why the brain gap arose, because digital natives and digital immigrants face somewhat different environments. But the point of adapting is to cope with the environment better, and it’s possible that kids are developing exactly the kind of social skills they need in the world they’re going to have to survive and mate in. If this is happening at the cost of losing ordinary garden-variety social interaction skills, that is worrisome, but in my opinion it’s not really clear yet that that’s what’s happening. As far as I can tell, young people still go to school and have to interact with their peers and their elders face to face. And it’s worth mentioning that older communication technologies, like the book, have also been blamed for stunting social skills. (Hands up everyone who, as a child, was told by teachers or parents to get your nose out of that book and go play with the other kids.)

I get the feeling that part of what motivated the book is the unease that an older generation feels with the world that young people are creating. This has been a concern of the older generation for centuries. While there may sometimes be room for concern, the fact that the plaintive cry about young people who seem like they’re from another planet has been heard for generations does blunt the urgency for me. And while the authors seem to view online interactions as less desirable than those that take place face to face, they don’t say much about the social benefits of the Internet, like the way it connects far-flung communities that might otherwise never have found each other, or how email can revive old friendships and keep relationships alive, albeit in attenuated form, over long distances. If I communicated with my kids only via email or IM, that would be sad, but email is perfect for maintaining some friendships that would otherwise probably die away.

All in all, I’d say to save your money for a book that goes into a single topic of interest to you in more detail (and more even-handedly and rigorously) than iBrain does in its smorgasbord approach.

The joys of nostalgia

I have always been prone to nostalgia, even when you would have thought I was too young for it. It’s easy to regard this tendency as a character weakness; nostalgia gets bad press sometimes, being perceived as a sentimental waste of time, and it’s long been described as a psychological malady. However, a new paper surveys some recent research on the subject and recasts nostalgia as a psychological strength, a trick whereby we give our meat something to think about that makes us feel happier, more connected to others, and better about ourselves. This press release gives a brief overview, and the paper itself is, of course, much more interesting, if you can get your hands on it (Nostalgia: Past, Present, and Future, by Constantine Sedikides, Tim Wildschut, Jamie Arndt, and Clay Routledge. Current Directions in Psychological Science, 17(5): 304–307).

One of the intriguing things I found in the paper is a comparison of people’s emotional reactions to recalling different kinds of experiences: positive, ordinary, nostalgic. Nostalgic experiences were unique in calling up both negative and positive emotions, but their net effect was likely to be a happy one. One study indicated that in nostalgic memories, even uncomfortable or unhappy events were often viewed side by side with happier ones, and this combination of the bitter and the sweet was perceived in terms of a redemption narrative that allowed loss or upset to be transmuted into something better.

It’s this ability to see the ebb and flow of experience as part of a bigger picture that may contribute to one of the benefits of nostalgia: a kinder view of one’s own self. The article itself quotes from another source to describe something that rang quite true for me:

Nostalgia has been theorized to bestow “an endearing luster” on the self and to cast “marginal, fugitive, and eccentric facets of earlier selves in a positive light.”

(The quotes are from Davis, F. (1979). Yearning for yesterday: A sociology of nostalgia. New York: Free Press, pp. 41–46.)

To return again to the emotion and music theme, this reminded me of how I feel sometimes when listening to music that I’ve known for a long time, particularly the music of the Moody Blues, which means a great deal to me and has accompanied me through many of the events of my life since my early 20s. Somehow looking back at the memories evoked by the music (memories of times both good and bad, and certainly encompassing some eccentric facets of my earlier selves) blends the many aspects of my past into a story that, for all its dark spots, looks lovable to me (rather than filling me with angst over the mistakes I’ve made and the things I’ve lost).

The paper also mentions a couple of other benefits of nostalgia: the alleviation of loneliness (by letting us relive memories of beloved people and recall our bonds with them) and the easing of the existential dread of knowing that we must die someday (by supplying a shared sense of meaning). All in all, a very nice rehabilitation of a phenomenon once seen as an illness!

The paper closes with some thoughts on areas that might merit further exploration, in particular the possibly changing role of nostalgia over the lifespan, and the ways nostalgia might provide a thread linking past and present selves and thus contribute to our sense of identity.

Antikythera Mechanism rebuilt

OK, this story is a bit off-topic for this blog, but it’s just too cool not to post. The Antikythera Mechanism, a 2100-year-old device that calculated and illustrated the motions of the sun, moon, and planets, has been reconstructed. The working replica was built by Michael Wright, who used to work at the Science Museum in London and who studied the device for years. You can read about it in Wired’s blog. There’s also a video of Wright demonstrating the device.

I’ve been thinking about why the story of the Antikythera Mechanism haunts me so. For starters, it’s about astronomy, and about the history of how we (small animals in a big place) learned about our surroundings. The device is beautiful in and of itself, especially if you like orreries and other astronomical gadgets and are fascinated by the patterns of various solar system cycles as seen from Earth. It represents some surprisingly advanced technology for its time, and prompts what-if thoughts about how it would have been if we hadn’t lost that technology (not to mention questions about the fragility of our current knowledge, and a sense of appreciation for the way we currently share and preserve what we know). It’s also a compelling object because it’s one reflection of the aggregate contents of the human mind with respect to knowledge of the solar system at the time, one that made its way to us despite being lost on the sea floor for centuries. As science stories go, it’s one of the more satisfying ones.

Does crying help?

The other day I was struggling with a particularly difficult editing assignment, and when I took a break, I said to a friend that it was so frustrating it made me want to cry. He responded that crying seldom helped anything, with which I disagreed; sometimes I find crying to be therapeutic. “Well, if you must,” he said. In the end I didn’t cry; I grabbed a few munchies and went back to the assignment.

But if only this press release from the Association for Psychological Science had come out a few days earlier, I would have had some research to back me up. Not everyone feels better after crying, but a lot of people do. It’s a short release, and part of it is about the difficulties of studying crying in the lab, which I can certainly believe are considerable. However, it also covers some recent research on crying incidents that happened outside the lab and were later described to researchers. Across the 3,000 incidents they looked at, most people reported feeling better after crying, although one-third said that they didn’t notice any improvement, and one-tenth reported feeling worse. (The work is described in Is Crying Beneficial?, by Jonathan Rottenberg, Lauren M. Bylsma, and Ad J.J.M. Vingerhoets, Current Directions in Psychological Science, 17(6), 400–404, 2008.)

So what’s the difference? It’s probably a long, complicated story. However, we do know that among the crying episodes reported, the people who received some emotional support were the most likely to feel better. On the other hand, people who are alexithymic (less able to name their emotional states) tend to feel worse after crying, and those with anxiety or other mood disorders tend not to feel better after crying. I expect there are a host of other variables, like personal history, how often you cry and how comfortable you are with it, and who else is around and how they’re feeling. In my own case the other day, perhaps I was unconsciously weighing the variables: getting the assignment done was the thing most likely to bring relief, in those particular circumstances, so I buckled back down to it.

To tie this into the emotion and music thread, I’ve got a few CDs that I listen to only when I’m all by myself and not busy with anything that requires concentration, so they can work their full therapeutic magic. Every now and then, for example, I need to put on U2’s All That You Can’t Leave Behind and just let go and cry. Some of the lyrics are emotionally intense, particularly in the song Kite, which is about the death of a parent and which I first heard about six months after my mother died. But the album ends with a gentle, hopeful ballad, and by the end of that song I feel calm and peaceful. For however much I paid for that CD, it’s got to be the cheapest form of therapy.