Concepts of the self

For some reason the subject of diaries has come up several times in my life lately. Actually the subject of unexpected death has been on my mind, for a variety of reasons, and that led me to think of diaries. The thought of unexpected death is no doubt what led me to finally draw up a list for my sons of my retirement accounts and so forth, so that if, as I euphemistically put it, anything happens to me, they’ll be able to find all the accounts for which they are the beneficiaries. While I was at it, I made some notes about disposing of my things, and noted in particular that I would like my sons to destroy my diaries after I am gone, except for a couple of journals I once kept for the sole purpose of recording the notable events in their early lives. (It’s safe to assume they would have gotten rid of my private papers unread anyway, but still I figured it wouldn’t hurt to write it down.)

This reminded me of my mother’s diaries, which my father threw away after she died. I’ve never asked him about this, but I’m guessing he was honoring a request from her that no one else ever read those diaries after she was gone. I can understand that entirely, of course, having just made a similar request myself, and I’m glad my father protected my mother’s privacy. But I can also understand why it could be upsetting when someone’s diaries are destroyed after they’re gone: It’s as if a last remaining part of them has been removed. I don’t believe in an immortal soul, but I do believe that my mother left parts of her self or her identity behind when she died, not just the diaries she wrote for herself but the letters she wrote to others, and the memories that each of us has of her. Douglas Hofstadter, in I Am a Strange Loop, writes about a concept of the self that remains behind even after death, not in any supernatural sense but in the sense that mental states or patterns peculiar to a person can be recreated in other brains.

This article from Philosophy Now gives a nice overview of various concepts of personal identity, beginning with Locke’s idea that we are who we remember ourselves being (which I find useful as a starting point, but not the whole story). Philosopher Bob Harrison discusses some of the legal and psychological meanings of identity, and then wonders whether selves might simply be conventions, useful ones (like speed limits or legal drinking ages) that nonetheless do not correspond to any real entity in the outside world. In closing he writes about the idea of the extended mind, in which the tools we use to support our cognitive processes (e.g., a notebook kept by a hypothetical Alzheimer’s sufferer as an aid to memory) can be considered part of those processes, and so a part of ourselves.

To the degree that I kept a journal to help me remember past events and feelings, a believer in the extended mind could argue that destroying the journals after I’m gone is akin to destroying a part of my self. (I guess if it’s my self I have the right to ask that it be destroyed after the more substantial parts of my self are gone.) I’m not sure I would agree that mental tools are really part of anyone’s identity, but thoughts committed to writing (both personal diaries and published books, which are not merely attempts to communicate but also mental edifices built to house part of the contents of a unique mind) can allow an unusually direct access to the thoughts of another person. Maybe they’re best described as a peculiarly powerful adjunct to identity.

Who gets to say what “God” means?

In The God Delusion, Richard Dawkins started out by describing what he called Einsteinian religion—the metaphorical use of religious terms to refer to the sum total of the universe or the natural laws that drive it. Einstein and other scientists have used the term “God” to mean things quite different from what many fundamentalist Christians mean, and as far as Dawkins is concerned, deliberately confusing the two concepts by using the same words for them is “intellectual high treason”. This gave me something of a jolt, because I have committed such high treason myself from time to time, although I can certainly appreciate the point Dawkins is making.

Science writer Dennis Overbye has recently written an essay for the New York Times defending the right of scientists and science writers to use the word “God” metaphorically. (He wasn’t reacting to Dawkins, but to those who give science writers a hard time for using phrases like “the God particle”.) He says that scientists should not so readily cede the use of “God” to fundamentalists and creationists. I applaud his spirit, but I think anyone who uses religious terms metaphorically in science writing is obliged to explain quite clearly what is meant. And I know from years of producing technical documentation that even well-meaning people do not always read all that carefully (“Any text you put on the page is a waste of time”, a co-worker once memorably said), so the most careful explanations are likely to be ignored, carelessly or willfully. Dawkins therefore makes a very good point about not using terminology that could be at all confusing.

Face reading

To follow up on yesterday’s post about all the processing that goes on in the brain outside of conscious awareness, here’s a story from EurekAlert about the role of the unconscious in reading facial expressions. In a recent study, people were given the merest glimpse of a happy or fearful face, a 30-millisecond look too brief to be consciously perceived. Then they were shown a surprised face and asked to rate whether the expression was a reaction to a pleasant or an unpleasant surprise. Their interpretation of the surprised faces was colored by whether they’d been “primed” with a happy or a fearful face: A brief exposure to a fearful face made the surprise seem more negative, whereas a brief exposure to a happy face made it seem more positive. Brain activity as measured by EEG also changed in response to the fearful or happy faces, even though the people in the experiment didn’t consciously register them. The fearful faces kicked off the strongest reaction in those prone to social anxiety.

So maybe sometimes when you get a creepy feeling about something, for example, there’s a good reason for it, but it’s not a reason that you’re consciously aware of. Your subconscious is picking up on useful information and processing it without telling you about the process, just the end result.

Automatic pilot

The New York Times recently published an article about subconscious mental processing. It describes a number of unexpected findings (e.g., people who are unobtrusively exposed to the scent of a cleaning product are more likely to clean up after themselves when they eat a crumbly snack) that reveal the things our brains get up to behind our backs. The article explains how subcortical areas of the brain, which evolved earlier than the prefrontal cortex, often make quick decisions for us and act as “automatic survival systems”. There’s a line in there about how the prefrontal cortex, with a major role in conscious processing, is often the last to hear the news after a decision is made. This reminds me of a quote I ran across years ago, from Timothy Ferris’s book The Mind’s Sky:

“The mind may rule the self, but it is a constitutional monarch; presented with decisions already made elsewhere in the brain, it must try somehow to put on a good show of their adding up to some coordinated, sensible pattern. Functionally it resembles Ronald Reagan’s presidency: It acts as if it were in control, and thinks it is in control, and believes it has good reasons for what it does, when in actuality it is often just mouthing soothing rationalizations while obeying the orders of unseen agencies hidden offstage.”

The plastic brain

Train Your Mind, Change Your Brain: How a New Science Reveals Our Extraordinary Potential to Transform Ourselves, by Sharon Begley

The Brain That Changes Itself: Stories of Personal Triumph from the Frontiers of Brain Science, by Norman Doidge

You cut your finger, and after a few days the cut heals. You break a leg, and the bone can be set so that it mends. You work out, and the muscles you exercise become bigger. In many ways, you can see how your body responds to your activities, and even to injury, and somehow reshapes and regenerates itself. When it comes to the brain, however, the long-standing assumption was that it was much more fixed in its capabilities than the rest of the body. But we’re discovering that in fact the brain is much more flexible, more plastic, than anyone had known.

Neuroplasticity is the brain’s ability to rewire its connections and reorganize itself, an ability that was once thought to be minimal in adulthood. For one thing, it was long accepted as a truism that if you lost neurons during adulthood, you were out of luck because you were not going to get any more. Adult neurogenesis is a relatively recent and very exciting discovery; your brain goes on producing new neurons well into old age. (One of the more poignant stories of neuroplasticity is the first observation of the growth of new brain cells in adults. Terminal cancer patients were undergoing a treatment that left a biomarker indicating new cell growth. These patients gave permission for their brains to be examined after they died, and the biomarker clearly revealed the presence of new brain cells.)

The other old idea unseated by discoveries made in the past couple of decades is the idea that different areas of the brain were able to do only a single task and could not be pressed into service for other purposes. It turns out that this is not the case at all. For example, in a blind person the visual cortex is used not only for touch but for processing language (working with the sensory input from the fingers that comes through reading braille). Deaf people use their auditory cortex for enhanced peripheral vision. People who have had strokes are able, with the right kind of therapy, to regain functions that were damaged and initially thought lost.

Two new books deal with some of the discoveries and ramifications of neuroplasticity. Doidge centers each chapter of his book on a person or several people who have experienced sometimes dramatic instances of a particular type of neuroplasticity, which gives him a readable framework for presenting the neuroscience behind the stories. Begley’s book presents a well-organized progression from discoveries of brain reorganization in animals, through a discussion of brain plasticity in youth, and on to recent research into neuroplasticity in adult brains, including the reshaping of the emotional brainscape that is possible through psychiatric therapy or mental training such as meditation. The books complement each other fairly well (although if I had time to read only one, I’d pick Begley’s, for reasons that I explain below). Of necessity, both cover some of the same stories of discovery, but often from slightly different angles.

The stories of physical problems that can be remedied to one degree or another through engaging the plasticity of the brain are remarkable and often touching. Doidge’s case histories made for easy reading, and it was heartening to read about people who recovered significant functionality after a stroke or other injury, or about children who were able to overcome dyslexia. The sections of both books that dealt with the more straightforward aspects of brain reorganization in response to outside stimuli were among the strongest, in my opinion.

And it’s not just that the brain can reorganize itself to work around damaged areas and regain some functionality. The idea of enhancing brain function in normal people or delaying the consequences of aging is very exciting. I really enjoyed learning about the ways that the brain reacts as a muscle would to repeated demands–not through the same mechanisms, obviously, but frequently used areas of the cortex do grow disproportionately large. For example, a study of London cab drivers’ brains showed that they tended to have a larger hippocampus–important for navigation–and a study of violinists’ brains showed that the four fingering digits of the left hand get a disproportionate amount of neural real estate. I am taking to heart Doidge’s advice about learning a new language in old age, as part of a program to keep my brain challenged, active, and I hope functioning well (although it will be a good many years before I need to start thinking about which language it will be).

Neuroplasticity is at work not just in physical functioning, governed by the somatosensory and motor cortices (which handle sensation and movement), but also in our emotional states and mental functioning. I tend to think of the new neuroplasticity discoveries in terms of the two areas I described above: adult neurogenesis and brain rearrangement in response to outside events. Scientists used to think these things didn’t happen, and now we know they do and are finding out the degree to which they do. Doidge seems to use the term in a broader sense, talking about learning emotional or behavioral patterns, although obviously the news here is not that it happens but rather that we’re starting to understand better how it happens. He’s a psychiatrist and psychoanalyst, and this is evident in his interest in mental and emotional problems and their treatment. (Most of his stories are enjoyable to read, but if you are at all prone to squeamishness or are apt to be revisited by distressing mental images, spare yourself the section about masochist Bob Flanagan in chapter 4.)

Doidge tries to explain Freudian psychoanalysis in terms of rewiring brain circuits, but I’m not convinced that this really adds anything explanatory, and the case history he chose to illustrate this section describes a life of such heartbreaking early losses that it’s hard to apply any of the things he talks about to more everyday emotional problems. His treatment of obsessive-compulsive disorder (OCD) seemed a bit skimpy, which was disappointing. (One of the most striking observations of mental activity changing the brain is Jeffrey Schwartz’s finding that therapy for OCD affected his patients’ brain chemistry; Begley mentions this in her book, and in fact co-wrote an earlier book with Schwartz covering the OCD treatments and other aspects of neuroplasticity. I was hoping for a more extensive case history from Doidge on the subject.)

Begley sets her book in the context of the Mind and Life Conference of 2004, a gathering of neuroscientists, philosophers, and Buddhist monks to discuss neuroplasticity. (The Mind and Life series of meetings between the Dalai Lama and western scientists is organized by the Mind & Life Institute.) I felt like her book provided a much broader neuroscience background and also a lot more detail about some of the scientific work, which I really appreciated. (That’s why if I had time to read only one book about neuroplasticity, it would definitely be Begley’s.) And it’s always impressive to see the Dalai Lama’s openness to the findings of science.

The only place in Begley’s book that really bothered me was a chapter that discussed the difficulty of understanding how the mind can change the brain. The Buddhists at the meeting believe that mind is nonphysical and separate from the brain. This is hard to square with Western science, which has found that as far as we can tell, everything we call “mental” is the result of something physical happening in the brain. Western science is still unclear on some of the mechanisms by which conscious mental processes or states change the brain, but it’s a matter of figuring out how one part of a complex system interacts with another. My impression is that what the scientists are grappling with (figuring out how it works) is a far different question from the old Cartesian mind-body interaction, which is what seemed to be exercising the Buddhists, and yet the chapter seemed to me to be implying more common ground than there in fact was.

The bottom line is that I thought both books were informative and worth reading. It’s good to be around to watch brain science teaching us so much about how our brains work and what previously unsuspected capacities they harbor. I hope to hear more in the future about how to make neuroplasticity work for us when it comes to enhancing mental performance and learning new things (that new language I’m going to save for my old age, for example).

Seeing the other guy’s viewpoint

The saying about walking a mile in someone else’s shoes does not, of course, have anything to do with pedestrian activities, but with the need to understand how the world looks to another person. According to the results of a new study from the University of Chicago, Americans may be less adept at this kind of understanding than Chinese people.

The study looked at the behavior of 40 students from the university, 20 from China who were native Mandarin speakers and 20 non-Asian students from the US who were native English speakers. The students were paired up, both students in each pair from the same cultural group, and given a task to complete that involved moving pieces around in a grid of squares. One student in each pair directed the other how to proceed; the worker could see all the pieces but the director could not, and the worker could tell what the director could and couldn’t see.

The Chinese students were much better at quickly making the correct moves, taking into account the fact that the director couldn’t see all the pieces. This may indicate that the individualist and independent mindset of the US doesn’t prepare us as well for seeing things from another person’s perspective as the more interdependent and collectivist tendencies of Chinese culture. This article from Science Daily has more details.

The Science Daily article mentions that due to the ambiguity involved in interpreting words and actions, communication can be difficult unless all parties understand each other’s frame of reference. I can certainly agree with that. Sometimes when I see how easily people, myself included, misinterpret another’s motivation or intentions, I’m amazed that people can communicate with each other at all. This is an interesting look at cultural influences on the problem, but it would be even more interesting to see follow-up work that looks at cases where neither person has a clear view of the other person’s gaps in knowledge, which is quite often the situation in real life.

Time and stars

I realize I’m stretching the Thinking Meat theme with this post, but the story touches on so many of my favorite things that I’ve just got to write about it. It demonstrates the wonderful things we hominids are capable of when we set our minds to it, and is also an interesting take on the difficulties of storing and accessing the riches of scientific data that our forebears have laid up for us.

Harvard College Observatory is a venerable institution that played a large role in many of the exciting astronomy stories of the second half of the nineteenth century and on into the twentieth. From the first photograph of a star, taken at the observatory in 1850, through the 1980s, the HCO accumulated glass photographic plates from several observing sites in both hemispheres. This article from the New York Times describes efforts to digitize this huge trove of historical information about the night sky, our window into the universe. It’s a challenging endeavor any way you look at it, but letting all this painstakingly gathered information go to waste for lack of easy access would be a shame. (If you love observatories, check out the “More photos” link under the two photos at the top of the article.)

The article mentions in passing a couple of the stories about how research at HCO played a pivotal role in creating contemporary astrophysics. For example, the tireless Annie Jump Cannon cataloged nearly 400,000 stars according to their spectral type. She examined the spectra of these stars as captured on glass photographic plates, hundreds of tiny smears of light to a plate, each with a characteristic pattern of dark lines that reveal a surprising amount of information about the star’s physical characteristics. She improved upon the classification schemes of two predecessors and tagged each star as belonging to a particular type (her scheme is the one still in use today). Her work was essential for later discoveries about why there are different types of stars and how stars evolve.

Another great story is that of Henrietta Swan Leavitt, who made a discovery about a particular kind of variable star that unlocked the distance scale of the cosmos. Cepheid variable stars brighten and then dim in a regular cycle, and she found that the length of time between two episodes of maximum brightness is related to the intrinsic brightness of the star. This is important because without some such relationship to guide us, we can’t tell intrinsically dim nearby stars from intrinsically bright stars at a great distance, and so we can’t gauge cosmic distances. Ejnar Hertzsprung calibrated the yardstick that Leavitt had found, and Edwin Hubble used it to measure how far away the Andromeda Galaxy was, resolving one of the great astronomical debates of the early twentieth century: the spiral nebulae are not part of our galaxy but are separate “island universes” like our own.
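To make Leavitt’s yardstick concrete, here’s a minimal sketch in Python of how a period-luminosity relation turns an observation into a distance. The coefficients are one modern calibration for classical Cepheids in the visual band, not Leavitt’s or Hertzsprung’s original numbers, and the example star is invented:

```python
import math

def cepheid_absolute_magnitude(period_days):
    """Leavitt law (one modern V-band calibration, assumed here):
    the star's pulsation period pins down its intrinsic brightness."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag, period_days):
    """Distance modulus m - M = 5 * log10(d / 10 pc), solved for d."""
    M = cepheid_absolute_magnitude(period_days)
    return 10.0 ** ((apparent_mag - M + 5.0) / 5.0)

# A made-up Cepheid: 10-day period, observed at apparent magnitude 15.
print(round(distance_parsecs(15.0, 10.0)))  # about 64,600 parsecs
```

Once the period fixes a star’s intrinsic brightness, comparing that with its apparent brightness gives the distance; that’s the yardstick Hertzsprung calibrated and Hubble wielded.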

The digitization project at Harvard (DASCH: Digital Access to a Sky Century at Harvard) also involves some heroic hominids who found a way to scan the plates, within the constraints of time and space imposed by the volume of data and the physical setup of the archive. The project is underway but needs more cash to continue, so the organizers are hoping for generous donors who might like the chance to associate their names with the resulting digital archive. The catalog that Cannon worked on is called the Henry Draper Catalog, and was funded by the widow of Henry Draper, a doctor, astronomer, and early astrophotographer. To this day stars are identified by their HD catalog numbers; I hope some rich Harvard alum is captured by the idea of leaving a similar legacy and donates money to keep the project going.

[Postscript, December 28, 2023: If you’re interested in the stellar classification work done at Harvard College Observatory, I highly recommend Dava Sobel’s wonderful book, The Glass Universe: How the Ladies of the Harvard Observatory Took the Measure of the Stars.]

Book review: Under a Green Sky

Under a Green Sky: Global Warming, the Mass Extinctions of the Past, and What They Can Tell Us About Our Future, by Peter Ward. Collins, 2007.

Under a Green Sky tells three stories: A story of Earth’s past climate, a story of how scientists have figured out what we know about Earth’s history, and a story of what our future climate might be like. It’s a scary book, but also a call to action.

Peter Ward, a paleontologist, opens with a recent bit of the scientific-discovery story: how we found out that the mass extinction that happened 65 million years ago, the dinosaur-killing one at the end of the Cretaceous, was caused by an asteroid impact. He starts there because, despite the initial controversy over this discovery, cosmic impact became for a while the dominant model for how most if not all mass extinctions happen. Ward then goes on to describe, with plenty of interesting details about geological field trips and the back-and-forth of scientific discovery, the way this dominant model was nudged aside by the gradual realization that most of the mass extinctions we know about were actually caused by climate change.

After sharing this insider’s view of how science is done, Ward describes how climate change has caused past mass extinctions. Greenhouse gases get into the atmosphere one way or another (e.g., from volcanic eruptions, or more recently of course from human activity), and if the world gets warm enough, the oceans begin to die. Right now, differences in temperature and salinity drive a conveyor system of currents that keep the present-day oceans oxygenated and alive. (The Gulf Stream, which you’ve probably heard mentioned in connection with global warming, is part of this system. Ward describes the movie The Day After Tomorrow, by the way, as a fable that trivializes the ominous possibility of rapid climate change. It sounds like a fast freeze is not the most likely, or even the scariest, possible outcome of the Gulf Stream’s disruption.)

Greenhouse warming disrupts the conveyor system, rendering the ocean depths warmer and less oxygenated and killing the creatures that live there. One of the few things to thrive in such oceans is a type of bacterium that produces hydrogen sulfide, which is toxic to nearly everything else. In some cases, carbon dioxide and methane can bubble up out of the ocean and enter the atmosphere, along with deadly hydrogen sulfide gas, which is not only noxious–it also breaks down the ozone layer, increasing the amount of ultraviolet radiation reaching the planet’s surface. All these factors contribute to mass extinction. It’s a sobering scenario to contemplate in an era of global warming.

When Ward focuses on the story of Earth’s climate, you get an idea of how unusual the time we’re living in really is. We think of Earth as being like the baby bear’s porridge, just right for life, and we wonder how many other planets are out there that might also be just right. But in fact the Earth has been intermittently all wrong, deeply frozen or stiflingly hot with toxic oceans, and it might be so again. A few years ago Ward co-wrote a book with Donald Brownlee about the eventual fate of the Earth; toward the end of it they talked about their belief that interstellar travel is not likely to deliver us from future events on Earth, and this world is likely all we’ll have to depend on. They note:

“We live in a glorious summer of beauty, diversity, and resources. It will not always be so.”

And:

“…this moment on this Earth truly is a precious gift, to be savored and appreciated. If we heedlessly destroy this world, it is unlikely we will find another to replace it. Or be able to get to any refuge, even if we could find it.

“Another obvious lesson is that we tinker with our atmosphere and oceans at grave risk.”

Under a Green Sky gives me more context about past conditions on Earth and how unusual the long interglacial period in which humans have had the chance to flourish really is.

This of course is important for the third story Ward tells, that of present-day climate change and where it might lead. Based on what has happened in past conditions of high CO2 levels, he describes possible futures for Earth. It’s in some sense a write-your-own-ending book. Ward closes by outlining three possible future worlds, each with a different atmospheric concentration of carbon dioxide.

Even the best-case scenario is far from ideal, and it would require us to keep the level of CO2 below 450 parts per million (ppm). (It’s currently at 380 ppm and rising.) An intermediate view shows the world at 700 ppm by the year 2100, a discouraging sight, and the worst-case scenario looks at what would happen if the levels hit 1,100 ppm by 2100–the beginning of a return to the hellish conditions of the ultrawarm Eocene epoch, dead oceans and all. He says that we can keep CO2 levels down in the range where we wouldn’t have to face anything worse than the first scenario, but we must act immediately. (It’s outside the scope of the book to go into any detail about how, but check out the action items at stopglobalwarming.org for some ideas about what you can do.)

Coincidentally I recently saw the last two episodes of Carl Sagan’s Cosmos series, in which he talks about SETI and about the perilous choices facing humankind, both topics that have links to what I’ve just been reading in the Ward book.

The Drake equation, which multiplies together factors affecting the odds of contacting intelligent life elsewhere in the cosmos, includes a factor sometimes called L, the average lifetime of an advanced civilization. Of course any number you plug in for L is just a guess. Somewhere years ago I saw a little jingle that went:

Of all the sad stories that SETI could tell,
The saddest would be a small value for L.
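For the curious, here’s what that multiplication looks like as a minimal Python sketch; every factor value below is a placeholder guess of my own, not an estimate from Sagan, Drake, or anyone else:

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L
R_star = 1.0    # star formation rate in the galaxy, stars per year (guess)
f_p    = 0.5    # fraction of stars with planets (guess)
n_e    = 2.0    # habitable planets per planetary system (guess)
f_l    = 0.1    # fraction of habitable planets where life arises (guess)
f_i    = 0.01   # fraction of those that develop intelligence (guess)
f_c    = 0.1    # fraction of those that emit detectable signals (guess)
L      = 1000.0 # average lifetime of a detectable civilization, years

# N: expected number of detectable civilizations in the galaxy right now.
N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(round(N, 4))  # 0.1 with these guesses
```

With everything else held fixed, N rises and falls in direct proportion to L, which is exactly why a small value for L would be such sad news.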

I wonder whether the odds of our own survival look subjectively better or worse than they did when Cosmos was made. Nuclear war doesn’t seem as pressing a threat, but climate change certainly seems more urgent. (It was just as urgent then, but, sadly, many of us were not paying enough attention.)

In the final episode of the show, Sagan talks about the dangers of nuclear war and other hazards we face, including climate change. He adapted a quote from Deuteronomy to the effect that we have the choice of life and death before us, and we must “therefore choose life”. I hope we have the collective intelligence and will to do so.

A little learning

Knowledge is power, right? I usually assume that learning more about science or technology is helpful when it comes time to form an opinion or determine a correct course of action, and I also assume that as more and more people learn about scientific or technical topics, the more they will tend to agree with each other about what should be done. But it doesn’t always work like that; in fact, in a recent study, the more people learned, the more they disagreed.

This article from Reason Online describes some research into people’s attitudes toward nanotechnology, published by the Yale Law School’s Cultural Cognition Project. After 1,850 people, most of whom knew little or nothing about nanotech, were given a scanty amount of information on the subject, most of them were willing to venture an opinion on its risk-benefit ratio. The authors of the study say the primary motivator for the opinion was how people felt about the issue: they were going with their gut, basically. And when a subset of the group was given a couple more paragraphs of information, they believed even more strongly in their original positions (e.g., if they thought nanotechnology was risky before getting the additional info, they thought it was even riskier afterward).

This looks like a striking and discouraging example of confirmation bias, our propensity to note new information that confirms our existing beliefs and to discard new information that doesn’t. The Reason Online article also discusses some of the implications for science and technology policy. I’m curious about how we arrive at our gut reactions in the first place, and whether there’s any way to learn to make more informed initial judgments.

What evolutionary biologists think of religion

This article from American Scientist describes a recent project that surveyed the religious beliefs of 149 prominent evolutionary scientists. Three previous polls, in 1914, 1933, and 1998, showed a decline in belief in a personal God and belief in immortality among scientists, with biologists consistently being the least likely to say they believed in either. The 2003 Cornell Evolution Project, a dissertation project, was the first to look only at evolutionary scientists’ attitudes toward religion. It included 17 questions and gave respondents the option to comment; in addition, 12 evolutionists were interviewed.

The biologists were given a more fine-grained range of choices to describe their religious beliefs, and the results by and large agreed with those of the earlier polls. When they were given four options for describing their thoughts on the relationship between religion and evolution, only 8% agreed with the idea put forth most notably by Stephen Jay Gould, that religion and science are not in conflict because they cover separate non-overlapping spheres. (I’m not surprised; as much as I respect Gould, I think he was way off base with that idea.) And only 3% found religion and evolution to be “totally harmonious”.

The other two options were that religion and science are in conflict because they provide contradictory answers to our questions about life, or “religion is a social phenomenon that has developed with the biological evolution of Homo sapiens—therefore religion should be considered as a part of our biological heritage, and its tenets should be seen as a labile social adaptation, subject to change and reinterpretation.” Judging from all the heated discussion on the subject lately, you’d expect that maybe most of the scientists would have chosen the former, but in fact 72% chose the latter: they don’t see a conflict because they see religion as a manifestation of evolution.

The article explains how this attitude is the opposite of one held in the 1860s which saw no conflict because evolution was seen as evidence of God’s activities. It’s easy to see that today’s attitude, which assumes that religion arose as part of our evolutionary history and is not based on eternal revealed truths, is likely to make some religious people unhappy. Since biologists have consistently been the least likely of all scientists to believe in a personal God or in immortality, would giving kids a better education in biology lead to a reduced belief in religion across the rest of the population?