It seemed as if it would be a perfectly ordinary occasion, that hot August day in 1959. Three generations of a large Oklahoma family gathered at a studio in nearby Perryton, Tex., to have a photo taken of the elders, 14 siblings ranging in age from 29 to 52. Afterward, everyone went to a nearby park for a picnic.
Among the group were two cousins, Doug Whitney, who was 10, and Gary Reiswig, who was 19. Doug’s mother and Gary’s father were brother and sister. Doug does not remember any details of that day, but Gary says he can never forget it. His father, and some of his aunts and uncles, just did not seem right. They stared blankly. They were confused, smiling and nodding, even though it seemed as if they weren’t really following the conversation.
Seeing them like that reminded Gary of what his grandfather had been like years before. In 1936, at the age of 53, his grandfather was driving with his grandmother and inexplicably steered into the path of a train. He survived, but his wife did not. Over the next decade, he grew more and more confused. By the time he died at 63, he was unable to speak, unable to care for himself, unable to find his way around his house. Now here were the first signs of what looked like the same condition in several of his children.
“We were looking at the grimness face to face,” Gary says. “After that, we gradually stopped getting together.”
It was the start of a long decline for Gary’s father and his siblings. Their memories became worse, their judgment faltered, they were disoriented. Then one day in 1963, Gary, who was living in Illinois at the time, went with his mother to take his father to a doctor in Oklahoma City. The doctor had recently examined his father’s brother, and after administering some simple memory tests and hearing about the rest of the family, concluded that he probably had Alzheimer’s disease. Gary and his mother took his father in for the same exam, and the doctor confirmed Gary’s fears.
Gary’s mother wanted to keep his father’s condition a secret and asked Gary to tell no one. But his uncle’s wife, Aunt Ester May, wanted to let everyone in the extended family know. Most reacted the way Gary’s mother had — they wanted to keep the information to themselves.
When Doug first heard the news, he hoped his mother, Mildred Whitney, might escape the terrible illness, and for a few years she seemed fine. But on Thanksgiving Day 1971, Mildred, who was then 50 and never used recipes, could not remember how to make her famous pumpkin pie.
That was the beginning of her precipitous fall. Five years later, after she lost her ability to walk, or speak, or recognize her own children, she died. In the end, 10 of those 14 brothers and sisters developed Alzheimer’s, showing symptoms, on average, at around age 50. The family, once close, soon scattered, each descendant of the 14 privately finding a way to live with the possibility that he or she could be next.
More than five decades later, many of these relatives have come together to be part of a large international study of families who carry an Alzheimer’s gene. The study, known as DIAN (for Dominantly Inherited Alzheimer Network), involves more than 260 people in the United States, Britain and Australia and includes at least 10 members of Doug and Gary’s family. Since 2008, researchers have been monitoring the brains of subjects who have mutations in any of three genes that cause Alzheimer’s to see how the disease develops before symptoms occur. By early next year, DIAN researchers plan to begin a new phase. Subjects will receive one of three experimental drugs that the researchers hope will slow or stop the disease in people otherwise destined to get it. (A similar study is expected to start around the same time in Colombia, testing one drug in a large extended family that carries a mutation in one gene that causes Alzheimer’s.)
Though as much as 99 percent of all Alzheimer’s cases do not result from a known genetic mutation, researchers have concluded that the best way to find a treatment or cure for the disease is to study those who possess a mutation that causes it. It’s a method that has worked for other diseases. Statins, the drugs that are broadly prescribed to block the body’s cholesterol synthesis, were first found effective in studies of people who inherited a rare gene that led to severe and early heart disease.
Alzheimer’s is the sixth leading cause of death in this country, and is the only disease among the 10 deadliest that cannot be prevented, slowed or cured. But DIAN investigators say that within a decade there could be a drug that staves off brain destruction and death.
This sense of optimism has been a long time coming. In 1901, a German psychiatrist, Alois Alzheimer, first noted the disease when he described the case of a 51-year-old woman named Auguste Deter. “She sits on the bed with a helpless expression,” Alzheimer wrote. “What is your name? Auguste. Your husband? Ah, my husband. She looks as if she didn’t understand the question.”
Five years later, when Auguste Deter died, Alzheimer examined her brain. It was the color of sandpaper and the texture of tofu, like every other brain. But there the similarities ended. Deter’s brain was shriveled and flecked with tiny particles that stuck to it like barnacles. No one had ever seen such a thing before in any brain.
Pathologists now recognize that the particles are deposits of a protein fragment, beta amyloid, that accumulates in brains with Alzheimer’s and is a hallmark of the disease. Alzheimer also noticed something else in Deter’s brain. Inside her ruined brain cells were tangles: grotesquely twisted ropes of a protein now known as tau. They are not unique to Alzheimer’s — they show up in the course of aging and in other degenerative brain diseases, including Parkinson’s and Pick’s disease, a rare form of dementia whose distinguishing symptoms include erratic and inappropriate behavior. Alzheimer speculated that the tangles in the brain cells were grim signs of the brain’s destruction. But what caused that destruction was a mystery. “All in all we have to face a peculiar disease process,” Alzheimer wrote.
There matters stood until the latter part of the 20th century. A leading Alzheimer’s researcher, Paul Aisen of the University of California, San Diego, told me that when he was in medical school in the late 1970s, his instructors never talked about Alzheimer’s. There was little to say other than that it was a degenerative brain disease with no known cause and no effective treatment. Scientists just did not have the tools to figure out what was going wrong in the brains of these people, or why.
All anyone knew was that the disease followed a relentless path, starting with symptoms so subtle they could be dismissed as normal carelessness or inattentiveness. A person would forget what was just said, or miss an appointment, or maybe become confused driving home one day. Gradually those small memory lapses would progress until the person, now wearing a blank stare, would no longer recognize family members and would be unable to eat or use a bathroom. At autopsy, the brain would be ruined, shrunken and peppered with plaques.
Rudolph Tanzi, a professor of neurology and an Alzheimer’s researcher at Harvard University, explained what it was like for researchers back then to look at an Alzheimer’s brain and try to figure out what caused the devastation. Imagine, he says, that you are an alien from another planet who has never heard of football. You go into a stadium at 5 o’clock, after a game has been played, and see trash in the stands, a littered field, torn turf. How, he asks, could you figure out that it was all caused by a football game? “For decades, that was where we were in trying to figure out the cause of Alzheimer’s disease,” Tanzi says.
But as molecular biology advanced, scientists realized that if they could study large families in whom the disease seemed to be inherited, they might be able to hunt down a gene that caused Alzheimer’s and understand what it did. The difficulty was finding these families and persuading them to participate in the research. A breakthrough came in the late 1980s when a woman who lived in Nottingham, England, contacted a team of Alzheimer’s researchers at St. Mary’s Hospital in London, led by John Hardy, and asked if they wanted to study her family. Alzheimer’s had appeared in three generations, she said, and her father was one of 10 children, 5 of whom developed the disease.
In the English family, the pattern of inheritance seemed clear — the child of someone with the disease had a fifty-fifty chance of developing Alzheimer’s — which meant that it was very likely that a gene was causing the disease. By comparing the DNA sequences of family members who developed Alzheimer’s to the sequences of those who did not develop the disease, the researchers discovered that the family’s disease was caused by a mutated gene on chromosome 21. Everyone in the family who had Alzheimer’s had that mutated gene. No one who escaped the disease had the mutation. And all who inherited the mutated gene eventually got Alzheimer’s. There were no exceptions.
“Sometimes in science, you generate the information and the data gradually,” Alison Goate, who was a young geneticist in the research group, told me. “This was like, boom, a eureka moment.” She says she remembers thinking, “I am the first person to see a cause of Alzheimer’s disease.”
During those years of slow scientific progress on Alzheimer’s, Gary Reiswig made a series of decisions that reflected his fears. He’d been trained as a minister in a conservative arm of the Christian Church (Disciples of Christ), but after his father died at 56, Gary, who was then 27, began questioning his calling. If he was going to get Alzheimer’s in 10 or 20 years, was this the way he wanted to spend his remaining time?
He left the ministry, deeply upsetting his extended family. “Here was our golden boy, rejecting the faith,” Gary says, referring to the way his family responded. “It was hard to go back to my hometown.”
In 1970, he and his wife divorced, and in 1973 he remarried and faced another difficult decision. His new wife, Rita, wanted children. She knew when she married Gary that there was Alzheimer’s disease in his family. “But somehow, it didn’t seem exactly real until we started talking about having a child,” Gary says. “There is a tremendous life force that drives people to love, make love and have children. You just can’t overcome it.” And because the risk to a hypothetical child was so far in the future, they were able to convince themselves that it wasn’t truly real.
Their son was born in 1977. Meanwhile, Alzheimer’s continued to cut a swath through Gary’s family. His older sister lived on a farm in Oklahoma, and he and Rita visited her a couple of times a year. On one trip, when his sister was 43, Gary realized she was starting to show the same unmistakable symptoms of the disease he had seen in his father.
Gary was about to turn 40 in 1979 and was working as a city planner in Pittsburgh. He knew he could not continue in that job if he had Alzheimer’s, so one day he said to Rita, “Let’s get ourselves in a position where if this disease hits me, I can be helpful.”
He found what he was hoping for when he saw an advertisement for an inn for sale in East Hampton, N.Y. He could be an innkeeper, Gary thought, transitioning to simple maintenance work if his memory began to fail. So he quit his job, and he and Rita bought the inn and moved to Long Island in June 1979. “I cast myself loose from dependence on bosses in case I began to lose my mental capacities,” Gary told me.
Though the actual work was more complicated than Gary had anticipated, he found he knew the basics. He had learned to make business decisions by helping his father with the family farm, and he was good at dealing with people from working as a city planner. But all the while, as he managed the inn, Gary kept his eye on a future when nothing would be easy, when “my duties could be shifted from complex to simple, mental to merely manual, if the situation demanded it.”
Then, one day in 1986, he got a call from his aunt Ester May, who had made some life-changing decisions of her own. After watching her husband die, Ester May had made it a mission to find someone who might help the family. Eventually, her quest led her to Thomas Bird, who is currently a professor of neurology, medicine and medical genetics at the University of Washington in Seattle and a research neurologist at the Seattle V.A. hospital. Like Alison Goate in England, Bird was looking for large families with a hereditary form of Alzheimer’s disease to provide blood samples that could be analyzed in an attempt to isolate other genetic culprits. For Bird and others searching for Alzheimer’s genes, there were still some fundamental questions that needed to be answered: What were these genes and what did they do to cause the disease? Was there just one gene that causes Alzheimer’s in these families, or were there several? If there were several, there might be many paths to the disease. If there was one — or several that when mutated all had the same effect — the task of finding a cure might be easier.
As soon as Ester May spoke to Bird, she got to work, calling family members and cajoling them to join the study. The consent forms said all data would be kept private, and as is typical in research, even if a gene were found, the participants would not be told if they had it. By taking part in the study they would be contributing to science. They would be doing it to benefit others in the future, not themselves.
Gary agreed to participate, and he went to his internist’s office in East Hampton to have blood drawn and sent to Bird. He’s not sure how many of his cousins also gave blood, but he estimates, from asking around, that about 30 did. Of his father’s generation, 5 out of 14 gave blood — the rest were already dead from the disease.
Gary says he didn’t need to persuade his brother and sister to participate. “By the time Dr. Bird’s study began, my sister was already having symptoms,” he says.
Then Gary put the study out of his mind while he continued on the path he had already set for himself — making use of the limited time he had to live his life before he might be overcome by the disease.
Doug approached the possibility of Alzheimer’s differently, spending his life away from the family tragedies, only distantly aware of what was unfolding. At 18, Doug left home to join the Navy. He stayed in the military for 20 years, and for most of that time, he and his wife, Ione, were stationed around the world, visiting immediate family members a couple of times a year on all-too-brief road trips. When he retired from the Navy in 1988, they settled in Port Orchard, Wash., where Doug had a job with a contractor, scheduling maintenance for ships. Because he’d been out of the country for so long, he didn’t participate in Bird’s study.
Doug is a taciturn man, not one to spill his emotions. Ione is the talker, ebullient and friendly, speaking for Doug in interviews, answering e-mails. She told me that the most difficult time for Doug was when Roger, the oldest of Doug’s seven siblings, started showing signs of the disease when he was 48. (None of the others seem to have symptoms.) In 2001, Roger was deteriorating badly in a nursing home in Grove, Okla., and Doug flew there to be with him one last time. “It had been at least six months since Roger recognized anyone,” Ione says. Doug spent the afternoon and evening with him. The next day, Roger died. He was 55 and left behind three children, one of whom was just a few weeks younger than Doug and Ione’s son, Brian.
In 1995, four years after Alison Goate and her colleagues found the first Alzheimer’s gene, two more genes were discovered. One was found by Bird’s team using the blood from several families, including Gary and Doug’s. Other research groups studying other families made similar discoveries. The three genes are on different chromosomes, and different families have different mutations in the genes, but in every case, the mutated gene leads to the same result: the brake that normally slows down the accumulation of beta amyloid, a toxic protein that forms plaques, no longer works. Beta amyloid piles up and sets the inexorable disease process in motion.
In the years since, researchers have theorized that when the brain makes too much beta amyloid, it creates a toxic environment — “a bad neighborhood,” as some investigators put it. The beta amyloid clumps into hard plaques that form outside cells. Once brain cells are living in that bad neighborhood, the abnormal tangled strands of tau proteins show up inside, killing the cells from within.
The researchers have tended to focus on stopping beta amyloid from accumulating rather than stopping tau. Most beta amyloid drugs either stymie the enzymes that produce it or clear away the amyloid after it’s made. But drug development is hard, and it has taken years for companies to find promising compounds and take them through the phases of preclinical testing.
Several years ago, the first large studies of these new drugs were carried out using people who already had Alzheimer’s. Most of those initial studies are still under way, but a few have been completed, with disappointing results — despite the drugs, the disease continues unabated in these Alzheimer’s patients.
Randall J. Bateman, director of the DIAN Therapeutic Trials Unit at Washington University School of Medicine in St. Louis, says it is far too soon to admit defeat. He notes that the history of medicine is replete with stories of drugs that were almost abandoned because they were initially studied in the wrong group or were administered in the wrong dose or at the wrong time in the course of a disease. Even penicillin was a failure at first. It was initially tested by dabbing it on skin infections, Bateman says. But the way the drug was applied and its low dose made it impossible to cure even an infection that would otherwise respond to it. Finally, when the drug was tested at the right dose in the right patients, it cured eye infections and also pneumonia in people who were certain to have died without it.
“Even something as effective as penicillin can fail unless it is administered properly,” Bateman says. He predicts that in the future it will become clear that for Alzheimer’s drugs to be effective, they will have to be given earlier.
“In Alzheimer’s, we are coming to realize that it’s more difficult to treat after there are symptoms,” Bateman says. By then “extensive neuronal death has occurred.” Tau has been destroying brain cells, and “the adult brain does not replace those lost neurons.”
Other diseases work the same way. In Parkinson’s, for example, the substantia nigra — a small, black, crescent-shaped group of brain cells that control movement — starts to die. But there are no symptoms until 70 to 90 percent of the substantia nigra is gone. No one has yet found a way to restore those missing cells.
In order to address this, Bateman says that the DIAN researchers will try to use drugs to stop the accretion of amyloid in people with the Alzheimer’s gene who haven’t yet shown symptoms. The study is building on others that followed middle-aged subjects for years, watching for early signs in the brains of those who eventually develop Alzheimer’s.
One study in particular has been helpful. It’s called ADNI (Alzheimer’s Disease Neuroimaging Initiative), and it began in October 2004. ADNI includes 200 people whose memories are normal, 400 with mild memory problems that might be harbingers of Alzheimer’s disease and 200 with Alzheimer’s disease. Researchers regularly give these subjects memory tests and do brain imaging and other tests to watch for the progress of Alzheimer’s. The study found that characteristic brain changes — shrinkage of the memory center, beta amyloid plaques, excessive synthesis of beta amyloid and tau — arise more than a decade before a person has symptoms.
The first phase of the DIAN study also looks at the progression of Alzheimer’s in the brain, but using only subjects who are members of families with Alzheimer’s genes. When these people join DIAN, Bateman and his colleagues test their memory and reasoning as well as administer spinal taps and scans to monitor changes in their brains. The researchers test the subjects every one to three years, and they have found that they can see troubling brain changes in people with the gene as many as 20 years before they would be expected to show symptoms based on their parent’s age when the disease was first diagnosed. Given the results from DIAN and other studies, Bateman concluded that the ideal time to give an experimental drug is within 15 years of the suspected onset.
Before they could begin testing drugs on people with an Alzheimer’s gene, though, the researchers had to solve a delicate problem. DIAN participants are aware that they have a fifty-fifty chance of possessing an Alzheimer’s gene, and they know they can be tested and find out if they inherited it — but almost no one wants to know. The researchers can give the drugs only to people who have the gene, however. (You don’t want to give a drug that affects the brain to healthy people.) If the study took only people with the gene, all those who were accepted would know that they had it. In order to avoid this problem, the DIAN researchers are inviting members of families with one of the mutated genes to join, regardless of whether the individuals know they possess the gene. Subjects won’t know which group they are in, but the researchers will know, and they will assign those who don’t have an Alzheimer’s gene to the placebo group. The participants with the gene will be randomly assigned to receive one of three experimental drugs or a placebo. The researchers say that within two years, they will have an indication about whether any of the drugs are working.
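To make that assignment scheme concrete, here is a minimal sketch in Python of the logic as the article describes it; the participant field and the drug labels are hypothetical placeholders, not details from the actual DIAN protocol.

```python
import random

# Hypothetical arm labels; the actual experimental compounds are not named here.
EXPERIMENTAL_ARMS = ["drug_1", "drug_2", "drug_3"]

def assign_arm(carries_mutation, rng=random):
    """Assign a participant to a study arm as the article describes.

    Non-carriers all receive placebo; carriers are randomized to one of the
    three experimental drugs or placebo. Participants are not told which arm
    they are in, so no one learns their gene status from the assignment.
    """
    if not carries_mutation:
        return "placebo"
    return rng.choice(EXPERIMENTAL_ARMS + ["placebo"])

# Example: assign a small, hypothetical cohort of carriers and non-carriers.
cohort = [True, False, True, True, False]
assignments = [assign_arm(carrier) for carrier in cohort]
```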
Bateman explained that the next step in Alzheimer’s research would be to study people who do not have the gene. The idea would be to look at, say, 70-year-olds who seem cognitively normal but who are at an age where Alzheimer’s is increasingly likely. Those subjects would be given scans and other tests to see whether, despite the absence of symptoms, their brains showed changes consistent with the beginning of Alzheimer’s. They would then be enrolled in a drug study. If the drug were to prevent the disease in these people, researchers predict that tests for beta amyloid plaques might become a recommended preventive medical procedure. People might be tested at age 50 and periodically afterward. Anyone getting plaques would take the drug to prevent Alzheimer’s disease.
In 1995, the same year that Bird discovered Gary’s family’s Alzheimer’s gene, Gary made a discovery of his own. That August, his younger brother and his sister-in-law were visiting, and it was clear that his brother had Alzheimer’s. He would become confused by the simplest things. That first morning, he tried to open a latched door, gave up, then tried to open a window, thinking it was a door. Gary was desolate seeing his brother’s condition and could not help thinking that he could be next.
On the day that Gary’s brother and his wife departed, Gary picked up The New York Times. “There was this headline,” he told me. “ ‘Third Gene Tied to Early Onset Alzheimer’s.’ ” The article described a discovery by the Seattle group, in collaboration with other researchers, that was being published that day in Science magazine. Gary was pretty sure it was his family whose gene had been found.
He got a copy of Science and turned to the article, which included a family tree with members who had the gene represented by black diamonds. Those who did not have the gene were represented by white diamonds.
It was scary even to look. Gary knew every person in that diagram, and he knew he was there too. Would he be a black diamond or a white one? He followed his family line, from his grandfather’s generation to his father’s — there were the 14 siblings — to his own. He saw his older sister, who had been given a diagnosis of Alzheimer’s and was represented by a black diamond. He saw his younger brother, a black diamond. Bracketed between them was Gary. His diamond was white. He had prepared all his adult life for that gene. And by an incredible stroke of luck, he did not have it.
His first sensation, he told me, was “lightness, like a weight, a burden, had been lifted off my shoulders.” For several hours he floated, elated by the news. Now his children did not have to worry that they would get it. His wife would not have to worry that she would be caring for Gary as he spiraled down into the chasm of the disease. He had spent his life preparing for an inheritance he had escaped.
Soon though, he moved from joy to sadness. “My feelings of happiness for myself and my children seemed to make light of what my siblings and family faced,” Gary said.
A decade ago, Gary and Doug spoke briefly at a family reunion in Oklahoma City. It was the first time they had seen each other since that fateful picnic four decades earlier. Then in 2009, when Gary was in Seattle, meeting with Bird for a book he was writing about his family, “The Thousand Mile Stare,” he decided to look up Doug and Ione. They talked, and last year Doug joined Phase 1 of the DIAN study, after learning about it from Gary. His testing took place at Washington University in St. Louis over three days in March.
First Doug was given a cognitive endurance course. The idea was to wear the brain out by taxing it with progressively harder tasks in order to see its limits. It’s like giving someone a heart stress test, Bateman says, in which a person must run on a treadmill until exhaustion sets in. The goal is to get a baseline reading. New studies are indicating that one of the first symptoms of Alzheimer’s is progressively poorer performance on challenging cognitive and memory tests.
Some tasks were simple — name as many animals as you can in one minute. Others were harder. One was a test for working memory, in which the subject is shown simple arithmetic problems, like 7+5 = 12. In some, the answer is correct; in others, it is not. The subject presses a key on a computer to indicate whether the answer is right or wrong. As soon as one problem is completed, another pops up. After three or four problems, the subject is asked to type, from memory, the second number of each problem.
Doug found it exhausting. That afternoon, the testing continued with standard memory tests and questions for Ione about whether Doug has changed in his ability to handle finances or deal with daily events in his life. (The answer was no.) Then there was a test in which Ione was asked to recall something that happened in the prior week and something that happened in the prior month, in great detail. She was sent out of the room and Doug was called in and asked to recall the same event. (He performed well.) At the end of the first day, Doug was given an M.R.I., the first he ever had, to look for shrinkage of his hippocampus, a telltale sign of Alzheimer’s.
The next morning, Bateman gave Doug a spinal tap to collect the fluid that bathes Doug’s brain and spinal cord. After 10 minutes, Bateman held up a tube filled halfway with a clear, beige-tinged liquid. In it were proteins, including beta amyloid, that can reveal if Alzheimer’s is on its way. The spinal tap was followed by more brain scans the next day, and then Doug and Ione went home.
After they returned to Port Orchard, Doug decided he wanted to know whether he carried the Alzheimer’s gene. He and Ione thought he would be safe, Ione told me. They thought the cognitive tests had gone well, and Doug was in his early 60s. Most of his family members who had Alzheimer’s got it when they were in their 50s.
Last year, on May 31, his 62nd birthday, Doug went to a lab to get his blood drawn. When the results came back in June, they were the last thing Doug and Ione expected: Doug had the mutated gene.
“The first reaction was shock,” Ione said. The couple had gone through a tense period when Doug was in his late 40s and early 50s, and they kept waiting for him to start showing symptoms of the disease. Ione still remembers a couple of occasions when Doug lost his way on familiar routes.
“I thought: Oh, my gosh. This is it,” she says. “It is so easy to get sucked into that constant fear.” But as the years went by, they put the fear behind them.
Now it is back. “It’s kind of like we went through this once already,” Ione said. The fear is compounded by thoughts of their two children. Brian, their son, is 40, and is married with a 2-year-old daughter. Karen, their daughter, is 38 and unmarried. Like Doug, Karen decided she had to know and arranged to be tested. She does not have the gene.
That, Ione says, is the one bright spot in all this. Hearing the news about Karen made her realize how worried she was. “You feel like a rock was lifted from your chest. You didn’t know the rock was there but now it’s gone.”
The first thing Brian did was buy additional life insurance, just in case. Though he initially said he wanted to be tested, so far he has not gone through with it. He plans to join Bateman’s study. If he does, he will, of course, have a gene test but will not be told the result.
Doug says little about how the devastating news affects him. He’s continuing to work, planning to retire when he is 65. Then he figures he will do a lot of fishing and household repairs.
He also wants to join the drug phase of DIAN. It is his one hope of staving off the inevitable, assuming he is placed in a group that is randomly assigned to take one of the experimental drugs.
But even if a drug ultimately proves effective, it will no doubt take time for Bateman and his team to figure out when best to give it and at what dose. It is quite unlikely that a cure will be found in the next few years.
As for Brian, if he does have the gene, perhaps science will come up with the right drug at the right time before his symptoms set in. And if his young daughter were to have it, too, researchers imagine that there will be a cure by the time she faces her own dire future. That is what they cling to, Ione says. “I’d never even heard the word ‘Alzheimer’s’ until I was pregnant with Brian,” she said. “And there was no hope at that point. If you had the gene, that was it.” Meanwhile, she and Doug are going on with their lives. “We’re just hanging in there. Life can be cruel.”
The Hazards of Confidence
By DANIEL KAHNEMAN
Many decades ago I spent what seemed like a great deal of time under a scorching sun, watching groups of sweaty soldiers as they solved a problem. I was doing my national service in the Israeli Army at the time. I had completed an undergraduate degree in psychology, and after a year as an infantry officer, I was assigned to the army's Psychology Branch, where one of my occasional duties was to help evaluate candidates for officer training. We used methods that were developed by the British Army in World War II.
One test, called the leaderless group challenge, was conducted on an obstacle field. Eight candidates, strangers to one another, with all insignia of rank removed and only numbered tags to identify them, were instructed to lift a long log from the ground and haul it to a wall about six feet high. There, they were told that the entire group had to get to the other side of the wall without the log touching either the ground or the wall, and without anyone touching the wall. If any of these things happened, they were to acknowledge it and start again.
A common solution was for several men to reach the other side by crawling along the log as the other men held it up at an angle, like a giant fishing rod. Then one man would climb onto another's shoulder and tip the log to the far side. The last two men would then have to jump up at the log, now suspended from the other side by those who had made it over, shinny their way along its length and then leap down safely once they crossed the wall. Failure was common at this point, which required starting over.
As a colleague and I monitored the exercise, we made note of who took charge, who tried to lead but was rebuffed, how much each soldier contributed to the group effort. We saw who seemed to be stubborn, submissive, arrogant, patient, hot-tempered, persistent or a quitter. We sometimes saw competitive spite when someone whose idea had been rejected by the group no longer worked very hard. And we saw reactions to crisis: who berated a comrade whose mistake caused the whole group to fail, who stepped forward to lead when the exhausted team had to start over. Under the stress of the event, we felt, each man's true nature revealed itself in sharp relief.
After watching the candidates go through several such tests, we had to summarize our impressions of the soldiers' leadership abilities with a grade and determine who would be eligible for officer training. We spent some time discussing each case and reviewing our impressions. The task was not difficult, because we had already seen each of these soldiers' leadership skills. Some of the men looked like strong leaders, others seemed like wimps or arrogant fools, others mediocre but not hopeless. Quite a few appeared to be so weak that we ruled them out as officer candidates. When our multiple observations of each candidate converged on a coherent picture, we were completely confident in our evaluations and believed that what we saw pointed directly to the future. The soldier who took over when the group was in trouble and led the team over the wall was a leader at that moment. The obvious best guess about how he would do in training, or in combat, was that he would be as effective as he had been at the wall. Any other prediction seemed inconsistent with what we saw.
Because our impressions of how well each soldier performed were generally coherent and clear, our formal predictions were just as definite. We rarely experienced doubt or conflicting impressions. We were quite willing to declare: "This one will never make it," "That fellow is rather mediocre, but should do O.K." or "He will be a star." We felt no need to question our forecasts, moderate them or equivocate. If challenged, however, we were fully prepared to admit, "But of course anything could happen."
We were willing to make that admission because, as it turned out, despite our certainty about the potential of individual candidates, our forecasts were largely useless. The evidence was overwhelming. Every few months we had a feedback session in which we could compare our evaluations of future cadets with the judgments of their commanders at the officer-training school. The story was always the same: our ability to predict performance at the school was negligible. Our forecasts were better than blind guesses, but not by much.
We were downcast for a while after receiving the discouraging news. But this was the army. Useful or not, there was a routine to be followed, and there were orders to be obeyed. Another batch of candidates would arrive the next day. We took them to the obstacle field, we faced them with the wall, they lifted the log and within a few minutes we saw their true natures revealed, as clearly as ever. The dismal truth about the quality of our predictions had no effect whatsoever on how we evaluated new candidates and very little effect on the confidence we had in our judgments and predictions.
I thought that what was happening to us was remarkable. The statistical evidence of our failure should have shaken our confidence in our judgments of particular candidates, but it did not. It should also have caused us to moderate our predictions, but it did not. We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each particular prediction was valid. I was reminded of visual illusions, which remain compelling even when you know that what you see is false. I was so struck by the analogy that I coined a term for our experience: the illusion of validity.
I had discovered my first cognitive illusion.
Decades later, I can see many of the central themes of my thinking about judgment in that old experience. One of these themes is that people who face a difficult question often answer an easier one instead, without realizing it. We were required to predict a soldier's performance in officer training and in combat, but we did so by evaluating his behavior over one hour in an artificial situation. This was a perfect instance of a general rule that I call WYSIATI, "What you see is all there is." We had made up a story from the little we knew but had no way to allow for what we did not know about the individual's future, which was almost everything that would actually matter. When you know as little as we did, you should not make extreme predictions like "He will be a star." The stars we saw on the obstacle field were most likely accidental flickers, in which a coincidence of random events - like who was near the wall - largely determined who became a leader. Other events - some of them also random - would determine later success in training and combat.
You may be surprised by our failure: it is natural to expect the same leadership ability to manifest itself in various situations. But the exaggerated expectation of consistency is a common error. We are prone to think that the world is more regular and predictable than it really is, because our memory automatically and continuously maintains a story about what is going on, and because the rules of memory tend to make that story as coherent as possible and to suppress alternatives. Fast thinking is not prone to doubt.
The confidence we experience as we make a judgment is not a reasoned evaluation of the probability that it is right. Confidence is a feeling, one determined mostly by the coherence of the story and by the ease with which it comes to mind, even when the evidence for the story is sparse and unreliable. The bias toward coherence favors overconfidence. An individual who expresses high confidence probably has a good story, which may or may not be true.
I coined the term 'illusion of validity' because the confidence we had in judgments about individual soldiers was not affected by a statistical fact we knew to be true - that our predictions were unrelated to the truth. This is not an isolated observation. When a compelling impression of a particular event clashes with general knowledge, the impression commonly prevails. And this goes for you, too. The confidence you will experience in your future judgments will not be diminished by what you just read, even if you believe every word.
I first visited a Wall Street firm in 1984. I was there with my longtime collaborator Amos Tversky, who died in 1996, and our friend Richard Thaler, now a guru of behavioral economics. Our host, a senior investment manager, had invited us to discuss the role of judgment biases in investing. I knew so little about finance at the time that I had no idea what to ask him, but I remember one exchange. "When you sell a stock," I asked him, "who buys it?" He answered with a wave in the vague direction of the window, indicating that he expected the buyer to be someone else very much like him. That was odd: most buyers and sellers know that they have the same information as one another, so what made one person buy and the other sell? Buyers think the price is too low and likely to rise; sellers think the price is high and likely to drop. The puzzle is why buyers and sellers alike think that the current price is wrong.
Most people in the investment business have read Burton Malkiel's wonderful book "A Random Walk Down Wall Street." Malkiel's central idea is that a stock's price incorporates all the available knowledge about the value of the company and the best predictions about the future of the stock. If some people believe that the price of a stock will be higher tomorrow, they will buy more of it today. This, in turn, will cause its price to rise. If all assets in a market are correctly priced, no one can expect either to gain or to lose by trading.
We now know, however, that the theory is not quite right. Many individual investors lose consistently by trading, an achievement that a dart-throwing chimp could not match. The first demonstration of this startling conclusion was put forward by Terry Odean, a former student of mine who is now a finance professor at the University of California, Berkeley.
Odean analyzed the trading records of 10,000 brokerage accounts of individual investors over a seven-year period, allowing him to identify all instances in which an investor sold one stock and soon afterward bought another stock. By these actions the investor revealed that he (most of the investors were men) had a definite idea about the future of two stocks: he expected the stock that he bought to do better than the one he sold.
To determine whether those appraisals were well founded, Odean compared the returns of the two stocks over the following year. The results were unequivocally bad. On average, the shares investors sold did better than those they bought, by a very substantial margin: 3.3 percentage points per year, in addition to the significant costs of executing the trades. Some individuals did much better, others did much worse, but the large majority of individual investors would have done better by taking a nap rather than by acting on their ideas. In a paper titled "Trading Is Hazardous to Your Wealth," Odean and his colleague Brad Barber showed that, on average, the most active traders had the poorest results, while those who traded the least earned the highest returns. In another paper, "Boys Will Be Boys," they reported that men act on their useless ideas significantly more often than women do, and that as a result women achieve better investment results than men.
Of course, there is always someone on the other side of a transaction; in general, it's a financial institution or professional investor, ready to take advantage of the mistakes that individual traders make. Further research by Barber and Odean has shed light on these mistakes. Individual investors like to lock in their gains; they sell 'winners,' stocks whose prices have gone up, and they hang on to their losers. Unfortunately for them, in the short run recent winners tend to do better than recent losers, so individuals sell the wrong stocks. They also buy the wrong stocks. Individual investors predictably flock to stocks in companies that are in the news. Professional investors are more selective in responding to news. These findings provide some justification for the label of 'smart money' that finance professionals apply to themselves.
Although professionals are able to extract a considerable amount of wealth from amateurs, few stock pickers, if any, have the skill needed to beat the market consistently, year after year. The diagnostic for the existence of any skill is the consistency of individual differences in achievement. The logic is simple: if individual differences in any one year are due entirely to luck, the ranking of investors and funds will vary erratically and the year-to-year correlation will be zero. Where there is skill, however, the rankings will be more stable. The persistence of individual differences is the measure by which we confirm the existence of skill among golfers, orthodontists or speedy toll collectors on the turnpike.
Mutual funds are run by highly experienced and hard-working professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from more than 50 years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. At least two out of every three mutual funds underperform the overall market in any given year.
More important, the year-to-year correlation among the outcomes of mutual funds is very small, barely different from zero. The funds that were successful in any given year were mostly lucky; they had a good roll of the dice. There is general agreement among researchers that this is true for nearly all stock pickers, whether they know it or not - and most do not. The subjective experience of traders is that they are making sensible, educated guesses in a situation of great uncertainty. In highly efficient markets, however, educated guesses are not more accurate than blind guesses.
Some years after my introduction to the world of finance, I had an unusual opportunity to examine the illusion of skill up close. I was invited to speak to a group of investment advisers in a firm that provided financial advice and other services to very wealthy clients. I asked for some data to prepare my presentation and was granted a small treasure: a spreadsheet summarizing the investment outcomes of some 25 anonymous wealth advisers, for eight consecutive years. The advisers' scores for each year were the main determinant of their year-end bonuses. It was a simple matter to rank the advisers by their performance and to answer a question: Did the same advisers consistently achieve better returns for their clients year after year? Did some advisers consistently display more skill than others?
To find the answer, I computed the correlations between the rankings of advisers in different years, comparing Year 1 with Year 2, Year 1 with Year 3 and so on up through Year 7 with Year 8. That yielded 28 correlations, one for each pair of years. While I was prepared to find little year-to-year consistency, I was still surprised to find that the average of the 28 correlations was .01. In other words, zero. The stability that would indicate differences in skill was not to be found. The results resembled what you would expect from a dice-rolling contest, not a game of skill.
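To illustrate the calculation (with simulated numbers, not the firm's actual data), a short Python sketch can rank 25 hypothetical advisers over eight years of pure-luck outcomes and average the 28 pairwise year-to-year rank correlations; with no genuine skill differences, the average lands near zero.

```python
import numpy as np
from itertools import combinations

# Hypothetical data: 25 advisers, 8 years, outcomes driven entirely by chance.
rng = np.random.default_rng(0)
scores = rng.normal(size=(25, 8))

# Rank the advisers within each year, then correlate the rankings for every pair of years.
ranks = scores.argsort(axis=0).argsort(axis=0)
year_pairs = list(combinations(range(8), 2))          # 28 pairs: Year 1 vs Year 2, and so on
corrs = [np.corrcoef(ranks[:, i], ranks[:, j])[0, 1] for i, j in year_pairs]

print(f"{len(corrs)} correlations, average = {np.mean(corrs):.3f}")  # close to zero
```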
No one in the firm seemed to be aware of the nature of the game that its stock pickers were playing. The advisers themselves felt they were competent professionals performing a task that was difficult but not impossible, and their superiors agreed. On the evening before the seminar, Richard Thaler and I had dinner with some of the top executives of the firm, the people who decide on the size of bonuses. We asked them to guess the year-to-year correlation in the rankings of individual advisers. They thought they knew what was coming and smiled as they said, "not very high" or "performance certainly fluctuates." It quickly became clear, however, that no one expected the average correlation to be zero.
What we told the directors of the firm was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill. This should have been shocking news to them, but it was not. There was no sign that they disbelieved us. How could they? After all, we had analyzed their own results, and they were certainly sophisticated enough to appreciate their implications, which we politely refrained from spelling out. We all went on calmly with our dinner, and I am quite sure that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before. The illusion of skill is not only an individual aberration; it is deeply ingrained in the culture of the industry. Facts that challenge such basic assumptions - and thereby threaten people's livelihood and self-esteem - are simply not absorbed. The mind does not digest them. This is particularly true of statistical studies of performance, which provide general facts that people will ignore if they conflict with their personal experience.
The next morning, we reported the findings to the advisers, and their response was equally bland. Their personal experience of exercising careful professional judgment on complex problems was far more compelling to them than an obscure statistical result. When we were done, one executive I had dined with the previous evening drove me to the airport. He told me, with a trace of defensiveness, "I have done very well for the firm, and no one can take that away from me." I smiled and said nothing. But I thought, privately: Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it?
We often interact with professionals who exercise their judgment with evident confidence, sometimes priding themselves on the power of their intuition. In a world rife with illusions of validity and skill, can we trust them? How do we distinguish the justified confidence of experts from the sincere overconfidence of professionals who do not know they are out of their depth? We can believe an expert who admits uncertainty but cannot take expressions of high confidence at face value. As I first learned on the obstacle field, people come up with coherent stories and confident predictions even when they know little or nothing. Overconfidence arises because people are often blind to their own blindness.
True intuitive expertise is learned from prolonged experience with good feedback on mistakes. You are probably an expert in guessing your spouse's mood from one word on the telephone; chess players find a strong move in a single glance at a complex position; and true legends of instant diagnoses are common among physicians. To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals' experience and on the quality and speed with which they discover their mistakes. Anesthesiologists have a better chance to develop intuitions than radiologists do. Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously. In general, however, you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about. Unfortunately, this advice is difficult to follow: overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.
Daniel Kahneman
Psychologist and Nobel Prize winner Daniel Kahneman says that, given a choice, we will usually make the wrong one.
Daniel Kahneman thinks he won the Nobel Prize for being a fool. Over lunch I judge that there is something about him that makes it unwise for me to tell him that this is not very likely. And anyway, if global prestige, the leadership of an entire field of economics and a worldwide bestselling book haven’t persuaded him, it’s unlikely that I will.
What Kahneman will accept, I think, is that he is not the only fool. I am a fool too. We’re pretty much all fools.
The Princeton professor has changed our understanding of ourselves and rocked economics to its foundations. If social scientists believe that in the past 30 years they have got much nearer to the truth, then Kahneman is one of the reasons why. If being a fool makes me an equal of Kahneman, I accept my status with equanimity.
Let’s start at another lunch. Let’s start in 1969 in the Cafe Rimon in Jerusalem. It’s the favourite haunt of junior faculty members from the Hebrew University. It’s Friday noon. The place is filling up as it usually did at that time. And a revolution is about to start.
On one side of the table is Kahneman, a psychologist with a statistical bent, with time served in the Israeli military telling the top brass what they didn’t want to hear — that their favoured method of choosing officers was hopeless, because the test results and the achievement of selected candidates weren’t correlated. And finding out that they ignored the evidence and ploughed on anyway.
On the other side is a slightly younger man. He’s Amos Tversky, who’s been working away in Michigan on the science of decision-making. The two men had come fresh from an argument. But over lunch, says Kahneman, “we just had a grand time”. The argument, a friendly intellectual affair, was concerned with whether most people were good instinctive statisticians. Tversky was an optimist; he thought we weren’t too bad at numbers. Kahneman disagreed. He told Tversky of his own experience. “One of my lines of research wasn’t working at all. I had adopted a rule that I would never be satisfied with one study and I would have to do the study again and get the same results before I would be sure ... I was fairly inconsistent and never got the same results.” Eventually, he realised why. His sample sizes were too small.
“I was teaching statistics. This was material that should have been transparent to me. But it wasn’t.” Was he the lone fool? Or, as he suggested to Tversky, were most people poor as intuitive statisticians?
It didn’t take long for Tversky to become convinced. And the two embarked on studies that showed that Kahneman was right. People trust information garnered from ridiculously small samples, they confuse correlation (two facts are related) with causation (one fact causes the other) and they are for ever seeing patterns in events and numbers that are, in fact, random.
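A toy simulation, purely illustrative and not one of Kahneman and Tversky's actual studies, shows why such small samples misbehave: even when a real effect exists, two identical small studies of it often disagree about whether it is there.

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect, n, trials = 0.5, 10, 10_000   # a real effect of 0.5 SD, only 10 subjects per study

def study_detects_effect():
    """Run one small study and report whether it finds the effect (rough |t| > 2 cutoff)."""
    sample = rng.normal(loc=true_effect, scale=1.0, size=n)
    t = sample.mean() / (sample.std(ddof=1) / np.sqrt(n))
    return abs(t) > 2.0

# How often do two identical small studies reach the same conclusion?
agreement = np.mean([study_detects_effect() == study_detects_effect()
                     for _ in range(trials)])
print(f"two small studies agree about {agreement:.0%} of the time")
```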
It was — this paper that Kahneman now calls “a joke, a serious joke” — just the start. The beginning of a revolution against standard economic thinking. In paper after paper, following this first one, Kahneman and Tversky revealed the inadequacy of the most basic assumption made by economists — that man is rational. Ultimately, this work created a new strand of economic thinking — behavioural economics — and earned Kahneman the Nobel Prize for Economics in 2002, even though he is not an economist.
Did it take economists too long to see the point? He says that Tversky always joked that economists didn’t really believe in rationality since they thought it was true of people in general, but not of their spouse or their dean. And then he adds that 30 years between the first paper and the Nobel Prize is “very, very fast”.
Let me give you an example of the departure the work represents. You are offered a bet; a 50:50 gamble, a coin toss. Heads and you lose $100, tails and you win $150. Classic economic theory is clear about what you will do. You’ll take the bet, because the expected value is positive. But in reality? People don’t. They are so averse to losing something they already have that even a much bigger potential gain doesn’t compensate them for the risk.
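For the record, the arithmetic behind "the expected value is positive" works out as follows:

```latex
E[\text{bet}] = \tfrac{1}{2}(-\$100) + \tfrac{1}{2}(+\$150) = +\$25
```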
Here’s another example. We react differently to the same question framed in a different way. Let’s say a doctor is asked to make a decision about two treatments for lung cancer: surgery or radiation. The five-year survival rates favour surgery, but there are short-term risks. When told that the one-month survival rate after surgery is 90 per cent, as many as 84 per cent of the doctors chose the surgical option. When the same point was put in another way — there is 10 per cent mortality in the first month — only half the doctors chose surgery.
And this is just one of dozens of ways we behave irrationally.
We are, for instance, prone to something called the halo effect. “If you like the President’s politics,” Kahneman has written, “you probably like his voice and appearance as well.” And we package our opinions up to make neat narratives and help us form an identity even when the logical link isn’t there. “There is,” Kahneman told me, “a very high correlation in the US between attitudes to marriage and beliefs about global warming.”
We tend to use information that comes quickly to mind in order to form judgments, producing a predictable bias. This explains how Robbie Williams came sixth in a poll to identify the most influential musicians of the past millennium, just ahead of Mozart.
The list of such biases is a long one. We have, Kahneman argues, two types of thought processes. System one: quick, intuitive, automatic, but prone to being fooled by its own mental shortcuts. And system two: more contemplative, deeper and harder to deploy. This can correct for error, but more often acts as a lawyer and lobbyist for our emotions.
And things get worse. Kahneman doesn't really think we can do much about these biases. Even knowing that they are there doesn't help you overcome them. Strangely enough, if it had, he might never have written his new book in the first place.
One of our biases is that we can ignore the lessons of experience. A group of people compiling a report will estimate they can do it in a year, even though every other similar report has taken comparable groups five years. Kahneman knew this, yet still wrote Thinking, Fast and Slow.
“When I started the book I told Richard Thaler [the author of Nudge] that I had 18 months to finish it. He laughed hysterically and said, ‘You have written about that, haven’t you? It’s not going to work the way you expect.’ ” How long did it take you, I ask. “Four years, and it was very painful. It’s not yet clear to me that it was a good idea to write the book in spite of its being quite successful.” I assure him, having read it, that it was indeed a good idea. “For you it’s easy,” he replies.
The book is dedicated to Tversky and many of the ideas in it are his, but tragically he isn’t here to enjoy its reception. He died of cancer in 1996. Over our lunch his friend talks of him often. Indeed, Kahneman finds it hard to accept the praise and recognition he gets because Tversky isn’t around to share it. “For me,” he says, “winning the Nobel Prize has been a much smaller psychological event than for most other people because I always felt that I was part of a winning team and by myself I would never have won it.”
I point out that, despite this, winning the Nobel Prize is quite cool. “Well, yeah, it’s quite good,” he eventually accepts. “For reasons that people don’t appreciate, by the way, and which took me completely by surprise. What makes it very good is the pleasure that it gives other people. Everybody who knows you is thrilled.” Yes, I say, I told my mum I was having lunch with a Nobel Prize winner. “You know, people who wouldn’t come to your funeral nevertheless are absolutely thrilled.” I promise to come if my schedule allows.
His downbeat attitude extends to academic life. “I discouraged my daughter and son-in-law from entering academic life.” Why did you do that? “Two things. You shouldn’t be in academic life if you have a thin skin, and the other one is that you absolutely have to have the ability to exaggerate the importance of what you are doing. If you can’t do that, you can’t be an academic, because a very small problem has to look big to you, otherwise you can’t mobilise yourself to spend so much time and effort on it.”
But when I put it to him that the financial crisis vindicated his own work by showing up the irrational behaviour of bankers, he replies: “Oddly enough, not very much. Standard economics explains that very well, what happened.” The bankers were acting rationally in their own interests rather than those of their banks. “So my sense is that it is undoubtedly true that behavioural economics have gained greatly in credibility from the crisis, but I am not sure that this is for the right reason.”
All this modesty, all of it becoming, and none of it false. Yet it shouldn’t be mistaken for self-doubt. Kahneman knows what he has done and stands by his work. When I present some academic criticisms of behavioural economics — for instance that the effects are not very large — he is quick to call the point “not very serious”. He knows, too, the impact he has had.
It’s just that I think it’s hard to spend your life studying human foibles, conclude that they are ineradicable, and then take yourself too seriously. Daniel Kahneman certainly doesn’t.
Alzheimer's Risk Factors
Up to half of Alzheimer's disease cases could potentially be prevented through lifestyle changes and treatment or prevention of chronic medical conditions, according to a study led by Deborah Barnes, PhD, a mental health researcher at the San Francisco VA Medical Center.
Analyzing data from studies around the world involving hundreds of thousands of participants, Barnes concluded that worldwide, the biggest modifiable risk factors for Alzheimer's disease are, in descending order of magnitude, low education, smoking, physical inactivity, depression, mid-life hypertension, diabetes and mid-life obesity.
In the United States, Barnes found that the biggest modifiable risk factors are physical inactivity, depression, smoking, mid-life hypertension, mid-life obesity, low education and diabetes.
Together, these risk factors are associated with up to 51 percent of Alzheimer's cases worldwide (17.2 million cases) and up to 54 percent of Alzheimer's cases in the United States (2.9 million cases), according to Barnes.
"What's exciting is that this suggests that some very simple lifestyle changes, such as increasing physical activity and quitting smoking, could have a tremendous impact on preventing Alzheimer's and other dementias in the United States and worldwide," said Barnes, who is also an associate professor of psychiatry at the University of California, San Francisco.
The study results were presented at the 2011 meeting of the Alzheimer's Association International Conference on Alzheimer's Disease in Paris, France, and published online on July 19, 2011 in Lancet Neurology.
Barnes cautioned that her conclusions are based on the assumption that there is a causal association between each risk factor and Alzheimer's disease. "We are assuming that when you change the risk factor, then you change the risk," Barnes said. "What we need to do now is figure out whether that assumption is correct."
Senior investigator Kristine Yaffe, MD, chief of geriatric psychiatry at SFVAMC, noted that the number of people with Alzheimer's disease is expected to triple over the next 40 years. "It would be extremely significant if we could find out how to prevent even some of those cases," said Yaffe, who is also a professor of psychiatry, neurology and epidemiology at UCSF.
The mental fallout from the Sept. 11 attacks has taught psychologists far more about their field's limitations than about their potential to shape and predict behavior, a wide-ranging review has found.
The report, a collection of articles due to be published next month in a special issue of the journal American Psychologist, relates a succession of humbling missteps after the attacks.
Experts greatly overestimated the number of people in New York who would suffer lasting emotional distress.
Therapists rushed in to soothe victims using methods that later proved to be harmful to some.
And they fell to arguing over whether watching an event on television could produce the same kind of traumatic reaction as actually being there.
These and other stumbles have changed the way mental health workers respond to traumatic events, said Roxane Cohen Silver, a psychologist at the University of California, Irvine, who oversaw the special issue along with editors at the journal. "You have to understand," she said, "that before 9/11 we didn't have any good way to estimate the response to something like this other than - well, estimates" based on earthquakes and other trauma.
Chaos reigned in the New York area after the twin towers fell, both on the streets and in the minds of many mental health professionals who felt compelled to help but were unsure how. Therapists by the dozens volunteered their services, eager to relieve the suffering of anyone who looked stricken. Freudian analysts installed themselves at fire stations, unbidden and unpaid, to help devastated firefighters. Employee assistance programs offered free therapy, warning of the consequences of letting people grieve on their own.
Some of those given treatment undoubtedly benefited, researchers say, but others became annoyed or more upset. At least one commentator referred to therapists' response as "trauma tourism."
"We did a case study in New York and couldn't really tell if people had been helped by the providers - but the providers felt great about it," said Patricia Watson, a co-author of one of the articles and associate director of the terrorism and disaster programs at the National Center for Child Traumatic Stress. "It makes sense; we know that altruism makes people feel better."
But researchers later discovered that the standard approach at the time, in which the therapist urges a distressed person to talk through the experience and emotions, backfires for many people. They plunge even deeper into anxiety and depression when forced to relive the mayhem.
Crisis response teams now take a much less intense approach called psychological first aid, teaching basic coping skills and having victims recount experiences only if it seems helpful.
One of the biggest lessons of Sept. 11, said Richard McNally, a psychologist at Harvard who did not contribute to the new report, was that it "brought attention to the limitations of this debriefing."
Another, he said, was that it drove home the fact that people are far more resilient than experts thought. No one disputes that thousands of Americans who lost loved ones or fled from the collapsing skyscrapers are still living with deep emotional wounds. Yet estimates after the attack projected epidemic levels of post-traumatic stress, afflicting perhaps 100,000 people, or 35 percent of those exposed to the attack in one way or another.
Later studies found rates closer to 10 percent for first responders, and lower for other New Yorkers. (The prevalence in children was slightly higher.)
"Some of us were making this case about resilience well before 9/11, but what the attack did was bring a lot more attention to it," said George A. Bonanno, a psychology professor at Columbia.
It also stirred a debate that may soon change the definition of post-traumatic stress. In the breathless weeks and months after the attack, experts and news articles warned that people who had no direct connection to the tragedy would also develop diagnosable symptoms merely from seeing the images on a television screen.
Dr. Silver, who was among the first to question overestimates of trauma, has found evidence for such effects in her own studies. "The distress spilled over the outside communities, mostly to people who saw the images and had pre-existing psychological problems," she said. "The numbers are low, but I think the data is convincing."
Dr. McNally, among others, disagrees. "The notion that TV caused P.T.S.D. seems absurd," he said in an e-mail.
The editors of the Diagnostic and Statistical Manual, the so-called encyclopedia of mental disorders compiled by the American Psychiatric Association, are debating whether to change the criteria for post-traumatic stress to exclude such at-a-distance cases.
The new report reviewed hundreds of other types of 9/11 studies, political and social. Americans on average became more prejudiced toward Arabs after the attack, as well as more likely to contribute to charities and more supportive of aggressive government action against suspected terrorists.
But these and other findings were not new; studies after previous attacks in other countries found similar things. For all their fury and devastation, the attacks gave rise to no new theories of behavior, no new therapies.
Instead, some authors said, the chief effect on the social sciences was to caution against applying theories so readily to real life. Another author in the new collection, Philip E. Tetlock, a psychologist at the Wharton School at the University of Pennsylvania, notes that intelligence agencies employ scientists to try to predict the behavior of foreign leaders and terrorists - and that their track record has been mixed.
"The closer scientists come to applying their favorite abstractions to real-world problems," the article concludes, "the harder it becomes to keep track of the inevitably numerous variables and to resist premature closure on desired conclusions."
Why Do We Choke Under Pressure?
Whether it's missing a golf putt, scoring poorly on a big test, or blowing a job interview or sales presentation, you've likely had some first-hand experience with choking under pressure. Performing below your abilities in a stress-filled situation happens in the workplace and at school, in sports and in the arts - and it's not simply that your nerves get the better of you.
There are two main theories about why people choke: One is that thoughts and worries distract your attention from the task at hand, and you don't access your talents. A second explanation suggests that pressure causes individuals to think too much about all the skills involved and this messes up their execution.
Psychologists are hoping to understand when and why some people are more likely to succeed in high-stakes settings while others fail. But people usually think all high-pressure situations have the same effects on performance, says Marci DeCaro, an assistant professor in the department of psychological and brain sciences at the University of Louisville in Louisville, Kentucky.
DeCaro and a team of researchers recently published a study in the Journal of Experimental Psychology that found not all high-pressure situations are the same, and they looked at how different types of pressure influenced performance.
They compared "monitoring pressure" - being watched by others, whether a teacher, an audience, or a video camera - and "outcome pressure" - seeking a high test score, prize money, a scholarship, or a title - with lower-key situations.
In one experiment, scientists tracked 130 undergraduate students' ability to complete two sets of tasks on a computer in which they were asked to correctly categorize shapes and symbols. One-third of the group was in a monitoring-pressure condition (they were told their performance was being videotaped), another group was in an outcome-pressure situation (they were told their accuracy on the first task had been determined, and they were offered a financial incentive to perform 20 percent better), and a third group was a low-pressure control.
Researchers found that tempting students with money hurt their performance by distracting them from an attention-demanding task, perhaps because they worried more and relied less on their working memory. Believing they were being watched led students to focus on the step-by-step skills needed to complete a proceduralized task rather than on the outcome, and their performance suffered.
Pressure itself isn't always bad, DeCaro says; it depends on the task and the type of pressure encountered. "Pressure hurts performance if it leads you to pay attention in a way that is bad for the particular task you're doing," says DeCaro. Some skills are better performed when you devote a lot of attention to them, like solving math problems, she explains, while others (a well-learned sports skill like your golf putt) are performed better without thinking too closely about the steps you're taking.
Knowing what kinds of pressure situations lead you to focus too much or not enough might help you find ways to overcome the problem.
How to Tell When Someone's Lying
Professor of psychology R. Edward Geiselman at the University of California, Los Angeles, has been studying for years how to effectively detect deception to ensure public safety, particularly in the wake of renewed threats against the U.S. following the killing of Osama bin Laden.
Geiselman and his colleagues have identified several indicators that a person is being deceptive. The more reliable red flags that indicate deceit, Geiselman said, include:
When questioned, deceptive people generally want to say as little as possible. Geiselman initially thought they would tell an elaborate story, but the vast majority give only the bare-bones. Studies with college students, as well as prisoners, show this. Geiselman's investigative interviewing techniques are designed to get people to talk.
Although deceptive people do not say much, they tend to spontaneously give a justification for what little they are saying, without being prompted.
They tend to repeat questions before answering them, perhaps to give themselves time to concoct an answer.
They often monitor the listener's reaction to what they are saying. "They try to read you to see if you are buying their story," Geiselman said.
They often initially slow down their speech because they have to create their story and monitor your reaction, and when they have it straight "will spew it out faster," Geiselman said. Truthful people are not bothered if they speak slowly, but deceptive people often think slowing their speech down may look suspicious. "Truthful people will not dramatically alter their speech rate within a single sentence," he said.
They tend to use sentence fragments more frequently than truthful people; often, they will start an answer, back up and not complete the sentence.
They are more likely to press their lips when asked a sensitive question and are more likely to play with their hair or engage in other 'grooming' behaviors. Gesturing toward one's self with the hands tends to be a sign of deception; gesturing outwardly is not.
Truthful people, if challenged about details, will often deny that they are lying and explain even more, while deceptive people generally will not provide more specifics.
When asked a difficult question, truthful people will often look away because the question requires concentration, while dishonest people will look away only briefly, if at all, unless it is a question that should require intense concentration.
If dishonest people tried to mask these normal reactions to lying, they would be even more obvious, Geiselman said. Among the techniques he teaches to enable detectives to tell truth from lies are:
Have people tell their story backwards, starting at the end and systematically working their way back. Instruct them to be as complete and detailed as they can. This technique, part of a "cognitive interview" Geiselman co-developed with Ronald Fisher, a former UCLA psychologist now at Florida International University, "increases the cognitive load to push them over the edge." A deceptive person, even a 'professional liar,' is "under a heavy cognitive load" as he tries to stick to his story while monitoring your reaction.
Ask open-ended questions to get them to provide as many details and as much complete information as possible ("Can you tell me more about ...?" "Tell me exactly..."). First ask general questions, and only then get more specific.
Don't interrupt, let them talk and use silent pauses to encourage them to talk.
Misunderstanding Risk
Humans have a perplexing tendency to fear rare threats such as shark attacks while blithely ignoring far greater risks like unsafe sex and an unhealthy diet. Those illusions are not just silly - they make the world a more dangerous place.
Last March, as the world watched the aftermath of the Japanese earthquake/tsunami/nuclear near-meltdown, a curious thing began happening in West Coast pharmacies. Bottles of potassium iodide pills used to treat certain thyroid conditions were flying off the shelves, creating a run on an otherwise obscure nutritional supplement. Online, prices jumped from $10 a bottle to upwards of $200. Some residents in California, unable to get the iodide pills, began bingeing on seaweed, which is known to have high iodine levels.
The Fukushima disaster was practically an infomercial for iodide therapy. The chemical is administered after nuclear exposure because it helps protect the thyroid from radioactive iodine, one of the most dangerous elements of nuclear fallout. Typically, iodide treatment is recommended for residents within a 10-mile radius of a radiation leak. But people in the United States who were popping pills were at least 5,000 miles away from the Japanese reactors. Experts at the Environmental Protection Agency estimated that the dose of radiation that reached the western United States was equivalent to 1/100,000 the exposure one would get from a round-trip international flight.
Although spending $200 on iodide pills for an almost nonexistent threat seems ridiculous (and could even be harmful - side effects include skin rashes, nausea, and possible allergic reactions), 40 years of research into the way people perceive risk shows that it is par for the course. Earthquakes? Tsunamis? Those things seem inevitable, accepted as acts of God. But an invisible, man-made threat associated with Godzilla and three-eyed fish? Now that's something to keep you up at night. "There's a lot of emotion that comes from the radiation in Japan," says cognitive psychologist Paul Slovic, an expert on decision making and risk assessment at the University of Oregon. "Even though the earthquake and tsunami took all the lives, all of our attention was focused on the radiation."
We like to think that humans are supremely logical, making decisions on the basis of hard data and not on whim. For a good part of the 19th and 20th centuries, economists and social scientists assumed this was true too. The public, they believed, would make rational decisions if only it had the right pie chart or statistical table.
But in the late 1960s and early 1970s, that vision of homo economicus - a person who acts in his or her best interest when given accurate information - was knee-capped by researchers investigating the emerging field of risk perception. What they found, and what they have continued teasing out since the early 1970s, is that humans have a hell of a time accurately gauging risk. Not only do we have two different systems - logic and instinct, or the head and the gut - that sometimes give us conflicting advice, but we are also at the mercy of deep-seated emotional associations and mental shortcuts.
Even if a risk has an objectively measurable probability - like the chances of dying in a fire, which are 1 in 1,177 - people will assess the risk subjectively, mentally calibrating the risk based on dozens of subconscious calculations. If you have been watching news coverage of wildfires in Texas nonstop, chances are you will assess the risk of dying in a fire higher than will someone who has been floating in a pool all day. If the day is cold and snowy, you are less likely to think global warming is a threat.
Our hardwired gut reactions developed in a world full of hungry beasts and warring clans, where they served important functions. Letting the amygdala (part of the brain's emotional core) take over at the first sign of danger, milliseconds before the neocortex (the thinking part of the brain) was aware a spear was headed for our chest, was probably a very useful adaptation. Even today those nano-pauses and gut responses save us from getting flattened by buses or dropping a brick on our toes. But in a world where risks are presented in parts-per-billion statistics or as clicks on a Geiger counter, our amygdala is out of its depth.
A risk-perception apparatus permanently tuned for avoiding mountain lions makes it unlikely that we will ever run screaming from a plate of fatty mac 'n' cheese. "People are likely to react with little fear to certain types of objectively dangerous risk that evolution has not prepared them for, such as guns, hamburgers, automobiles, smoking, and unsafe sex, even when they recognize the threat at a cognitive level," says Carnegie Mellon University researcher George Loewenstein, whose seminal 2001 paper, "Risk as Feelings," debunked theories that decision making in the face of risk or uncertainty relies largely on reason. "Types of stimuli that people are evolutionarily prepared to fear, such as caged spiders, snakes, or heights, evoke a visceral response even when, at a cognitive level, they are recognized to be harmless," he says.
Even Charles Darwin failed to break the amygdala's iron grip on risk perception. As an experiment, he placed his face up against the puff adder enclosure at the London Zoo and tried to keep himself from flinching when the snake struck the plate glass. He failed.
The result is that we focus on the one-in-a-million bogeyman while virtually ignoring the true risks that inhabit our world. News coverage of a shark attack can clear beaches all over the country, even though sharks kill a grand total of about one American annually, on average. That is less than the death count from cattle, which gore or stomp 20 Americans per year. Drowning, on the other hand, takes 3,400 lives a year, without a single frenzied call for mandatory life vests to stop the carnage. A whole industry has boomed around conquering the fear of flying, but while we down beta-blockers in coach, praying not to be one of the 48 average annual airline casualties, we typically give little thought to driving to the grocery store, even though there are more than 30,000 automobile fatalities each year.
In short, our risk perception is often at direct odds with reality. All those people bidding up the cost of iodide? They would have been better off spending $10 on a radon testing kit. The colorless, odorless, radioactive gas, which forms as a by-product of natural uranium decay in rocks, builds up in homes, causing lung cancer. According to the Environmental Protection Agency, radon exposure kills 21,000 Americans annually.
David Ropeik, a consultant in risk communication and the author of How Risky Is It, Really? Why Our Fears Don't Always Match the Facts, has dubbed this disconnect the perception gap. "Even perfect information perfectly provided that addresses people's concerns will not convince everyone that vaccines don't cause autism, or that global warming is real, or that fluoride in the drinking water is not a Commie plot," he says. "Risk communication can't totally close the perception gap, the difference between our fears and the facts."
In the early 1970s, psychologists Daniel Kahneman, now at Princeton University, and Amos Tversky, who passed away in 1996, began investigating the way people make decisions, identifying a number of biases and mental shortcuts, or heuristics, on which the brain relies to make choices. Later, Paul Slovic and his colleagues Baruch Fischhoff, now a professor of social sciences at Carnegie Mellon University, and psychologist Sarah Lichtenstein began investigating how these leaps of logic come into play when people face risk. They developed a tool, called the psychometric paradigm, that describes all the little tricks our brain uses when staring down a bear or deciding to finish the 18th hole in a lightning storm.
Many of our personal biases are unsurprising. For instance, the optimism bias gives us a rosier view of the future than current facts might suggest. We assume we will be richer 10 years from now, so it is fine to blow our savings on a boat - we'll pay it off then.
Confirmation bias leads us to prefer information that backs up our current opinions and feelings and to discount information contradictory to those opinions. We also have tendencies to conform our opinions to those of the groups we identify with, to fear man-made risks more than we fear natural ones, and to believe that events causing dread - the technical term for risks that could result in particularly painful or gruesome deaths, like plane crashes and radiation burns - are inherently more risky than other events.
But it is heuristics - the subtle mental strategies that often give rise to such biases - that do much of the heavy lifting in risk perception. The 'availability' heuristic says that the easier a scenario is to conjure, the more common it must be. It is easy to imagine a tornado ripping through a house; that is a scene we see every spring on the news, and all the time on reality TV and in movies. Now try imagining someone dying of heart disease. You probably cannot conjure many breaking-news images for that one, and the drawn-out process of atherosclerosis will most likely never be the subject of a summer thriller.
The effect? Twisters feel like an immediate threat, although we have only a 1-in-46,000 chance of being killed by a cataclysmic storm. Even a terrible tornado season like the one last spring typically yields fewer than 500 tornado fatalities. Heart disease, on the other hand, which eventually kills 1 in every 6 people in this country, and 800,000 annually, hardly even rates with our gut.
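As a rough sanity check on that mismatch, here is a small Python sketch, illustrative only, that uses just the figures quoted in this article to compare the two risks.

```python
# Lifetime odds quoted above.
storm_risk = 1 / 46_000          # chance of being killed by a cataclysmic storm
heart_disease_risk = 1 / 6       # heart disease eventually kills 1 in 6 people in the US

print(f"Heart disease is about {heart_disease_risk / storm_risk:,.0f} times more likely to kill you")
# -> roughly 7,667 times

# The annual death tolls quoted above tell a similar story.
tornado_deaths = 500             # fewer than 500 even in a terrible season
heart_disease_deaths = 800_000   # annual US deaths cited in the article
print(f"Annual ratio: about {heart_disease_deaths // tornado_deaths:,} to 1")
# -> about 1,600 to 1
```

Yet, as the passage notes, it is the twister, not the clogged artery, that registers with our gut.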
The 'representative' heuristic makes us think something is probable if it is part of a known set of characteristics. John wears glasses, is quiet, and carries a calculator. John is therefore a mathematician? An engineer? His attributes taken together seem to fit the common stereotype.
But of all the mental rules of thumb and biases banging around in our brain, the most influential in assessing risk is the 'affect' heuristic. Slovic calls affect a "faint whisper of emotion" that creeps into our decisions. Simply put, positive feelings associated with a choice tend to make us think it has more benefits. Negative associations make us think an action is riskier.
One study by Slovic showed that when people decide to start smoking despite years of exposure to antismoking campaigns, they hardly ever think about the risks. Instead, it's all about the short-term "hedonic" pleasure. The good outweighs the bad, which they never fully expect to experience.
Our fixation on illusory threats at the expense of real ones influences more than just our personal lifestyle choices. Public policy and mass action are also at stake. The Office of National Drug Control Policy reports that prescription drug overdoses have killed more people than crack and heroin combined did in the 1970s and 1980s. Law enforcement and the media were obsessed with crack, yet it was only recently that prescription drug abuse merited even an after-school special.
Despite the many obviously irrational ways we behave, social scientists have only just begun to systematically document and understand this central aspect of our nature. In the 1960s and 1970s, many still clung to the homo economicus model. They argued that releasing detailed information about nuclear power and pesticides would convince the public that these industries were safe. But the information drop was an epic backfire and helped spawn opposition groups that exist to this day. Part of the resistance stemmed from a reasonable mistrust of industry spin. Horrific incidents like those at Love Canal and Three Mile Island did not help. Yet one of the biggest obstacles was that industry tried to frame risk purely in terms of data, without addressing the fear that is an instinctual reaction to their technologies.
The strategy persists even today. In the aftermath of Japan's nuclear crisis, many nuclear-energy boosters were quick to cite a study commissioned by the Boston-based nonprofit Clean Air Task Force. The study showed that pollution from coal plants is responsible for 13,000 premature deaths and 20,000 heart attacks in the United States each year, while nuclear power has never been implicated in a single death in this country.
True as that may be, numbers alone cannot explain away the cold dread caused by the specter of radiation. Just think of all those alarming images of workers clad in radiation suits waving Geiger counters over the anxious citizens of Japan. Seaweed, anyone?
At least a few technology promoters have become much more savvy in understanding the way the public perceives risk. The nanotechnology world in particular has taken a keen interest in this process, since even in its infancy it has faced high-profile fears. Nanotech, a field so broad that even its backers have trouble defining it, deals with materials and devices whose components are often measured in billionths of a meter. In the late 1980s, the book Engines of Creation by the nanotechnologist K. Eric Drexler put forth the terrifying idea of nanoscale self-replicating robots that grow into clouds of 'gray goo' and devour the world. Soon gray goo was turning up in video games, magazine stories, and delightfully bad Hollywood action flicks (see, for instance, the last G.I. Joe movie).
The odds of nanotechnology's killing off humanity are extremely remote, but the science is obviously not without real risks. In 2008 a study led by researchers at the University of Edinburgh suggested that carbon nanotubes, a promising material that could be used in everything from bicycles to electrical circuits, might interact with the body the same way asbestos does. In another study, scientists at the University of Utah found that nanoscopic particles of silver used as an antimicrobial in hundreds of products, including jeans, baby bottles, and washing machines, can deform fish embryos.
The nanotech community is eager to put such risks in perspective. "In Europe, people made decisions about genetically modified food irrespective of the technology," says Andrew Maynard, director of the Risk Science Center at the University of Michigan and an editor of the International Handbook on Regulating Nanotechnologies. "People felt they were being bullied into the technology by big corporations, and they didn't like it. There have been very small hints of that in nanotechnology." He points to incidents in which sunblock makers did not inform the public they were including zinc oxide nanoparticles in their products, stoking the skepticism and fears of some consumers.
For Maynard and his colleagues, influencing public perception has been an uphill battle. A 2007 study conducted by the Cultural Cognition Project at Yale Law School and coauthored by Paul Slovic surveyed 1,850 people about the risks and benefits of nanotech. Even though 81 percent of participants knew nothing or very little about nanotechnology before starting the survey, 89 percent of all respondents said they had an opinion on whether nanotech's benefits outweighed its risks.
In other words, people made a risk judgment based on factors that had little to do with any knowledge about the technology itself. And as with public reaction to nuclear power, more information did little to unite opinions. "Because people with different values are predisposed to draw different factual conclusions from the same information, it cannot be assumed that simply supplying accurate information will allow members of the public to reach a consensus on nanotechnology risks, much less a consensus that promotes their common welfare," the study concluded.
It should come as no surprise that nanotech hits many of the fear buttons in the psychometric paradigm: It is a man-made risk; much of it is difficult to see or imagine; and the only available images we can associate with it are frightening movie scenes, such as a cloud of robots eating the Eiffel Tower. "In many ways, this has been a grand experiment in how to introduce a product to the market in a new way," Maynard says. "Whether all the up-front effort has gotten us to a place where we can have a better conversation remains to be seen."
That job will be immeasurably more difficult if the media - in particular cable news - ever decide to make nanotech their fear du jour. In the summer of 2001, if you switched on the television or picked up a news magazine, you might think the ocean's top predators had banded together to take on humanity. After 8-year-old Jessie Arbogast's arm was severed by a seven-foot bull shark on Fourth of July weekend while the child was playing in the surf of Santa Rosa Island, near Pensacola, Florida, cable news put all its muscle behind the story. Ten days later, a surfer was bitten just six miles from the beach where Jessie had been mauled. Then a lifeguard in New York claimed he had been attacked. There was almost round-the-clock coverage of the "Summer of the Shark," as it came to be known. By August, according to an analysis by historian April Eisman of Iowa State University, it was the third-most-covered story of the summer until the September 11 attacks knocked sharks off the cable news channels.
All that media created a sort of feedback loop. Because people were seeing so many sharks on television and reading about them, the 'availability' heuristic was screaming at them that sharks were an imminent threat.
"Certainly anytime we have a situation like that where there's such overwhelming media attention, it's going to leave a memory in the population," says George Burgess, curator of the International Shark Attack File at the Florida Museum of Natural History, who fielded 30 to 40 media calls a day that summer. "Perception problems have always been there with sharks, and there's a continued media interest in vilifying them. It makes a situation where the risk perceptions of the populace have to be continually worked on to break down stereotypes. Anytime there's a big shark event, you take a couple steps backward, which requires scientists and conservationists to get the real word out."
Then again, getting out the real word comes with its own risks - like the risk of getting the real word wrong. Misinformation is especially toxic to risk perception because it can reinforce generalized confirmation biases and erode public trust in scientific data. As scientists studying the societal impact of the Chernobyl meltdown have learned, doubt is difficult to undo. In 2006, 20 years after reactor number 4 at the Chernobyl nuclear power plant was encased in cement, the World Health Organization (WHO) and the International Atomic Energy Agency released a report compiled by a panel of 100 scientists on the long-term health effects of the level 7 nuclear disaster and future risks for those exposed. Among the 600,000 recovery workers and local residents who received a significant dose of radiation, the WHO estimates that up to 4,000 of them, or 0.7 percent, will develop a fatal cancer related to Chernobyl. For the 5 million people living in less contaminated areas of Ukraine, Russia, and Belarus, radiation from the meltdown is expected to increase cancer rates less than 1 percent.
Even though the percentages are low, the numbers are little comfort for the people living in the shadow of the reactor's cement sarcophagus who are literally worrying themselves sick. In the same report, the WHO states that "the mental health impact of Chernobyl is the largest problem unleashed by the accident to date," pointing out that fear of contamination and uncertainty about the future has led to widespread anxiety, depression, hypochondria, alcoholism, a sense of victimhood, and a fatalistic outlook that is extreme even by Russian standards. A recent study in the journal Radiology concludes that "the Chernobyl accident showed that overestimating radiation risks could be more detrimental than underestimating them. Misinformation partially led to traumatic evacuations of about 200,000 individuals, an estimated 1,250 suicides, and between 100,000 and 200,000 elective abortions."
It is hard to fault the Chernobyl survivors for worrying, especially when it took 20 years for the scientific community to get a grip on the aftereffects of the disaster, and even those numbers are disputed. An analysis commissioned by Greenpeace in response to the WHO report predicts that the Chernobyl disaster will result in about 270,000 cancers and 93,000 fatal cases.
Chernobyl is far from the only chilling illustration of what can happen when we get risk wrong. During the year following the September 11 attacks, millions of Americans opted out of air travel and slipped behind the wheel instead. While they crisscrossed the country, listening to breathless news coverage of anthrax attacks, extremists, and Homeland Security, they faced a much more concrete risk. All those extra cars on the road increased traffic fatalities by nearly 1,600. Airlines, on the other hand, recorded no fatalities.
It is unlikely that our intellect can ever paper over our gut reactions to risk. But a fuller understanding of the science is beginning to percolate into society. Earlier this year, David Ropeik and others hosted a conference on risk in Washington, D.C., bringing together scientists, policy makers, and others to discuss how risk perception and communication impact society. "Risk perception is not emotion and reason, or facts and feelings. It's both, inescapably, down at the very wiring of our brain," says Ropeik. "We can't undo this. What I heard at that meeting was people beginning to accept this and to realize that society needs to think more holistically about what risk means."
Ropeik says policy makers need to stop issuing reams of statistics and start making policies that manipulate our risk perception system instead of trying to reason with it. Cass Sunstein, a Harvard law professor who is now the administrator of the White House Office of Information and Regulatory Affairs, suggests a few ways to do this in Nudge: Improving Decisions About Health, Wealth, and Happiness, the 2008 book he wrote with the economist Richard Thaler. He points to the organ donor crisis in which thousands of people die each year because others are too fearful or uncertain to donate organs. People tend to believe that doctors won't work as hard to save them, or that they won't be able to have an open-casket funeral (both false). And the gory mental images of organs being harvested from a body give a definite negative affect to the exchange. As a result, too few people focus on the lives that could be saved. Sunstein suggests - controversially - 'mandated choice,' in which people must check 'yes' or 'no' to organ donation on their driver's license application. Those with strong feelings can decline. Some lawmakers propose going one step further and presuming that people want to donate their organs unless they opt out.
In the end, Sunstein argues, by normalizing organ donation as a routine medical practice instead of a rare, important, and gruesome event, the policy would short-circuit our fear reactions and nudge us toward a positive societal goal. It is this type of policy that Ropeik is trying to get the administration to think about, and that is the next step in risk perception and risk communication. "Our risk perception is flawed enough to create harm," he says, "but it's something society can do something about."
Conscious and Unconscious
Neuroscientist David Eagleman explores the processes and skills of the subconscious mind, which our conscious selves rarely consider.
Only a tiny fraction of the brain is dedicated to conscious behavior. The rest works feverishly behind the scenes regulating everything from breathing to mate selection. In fact, neuroscientist David Eagleman of Baylor College of Medicine argues that the unconscious workings of the brain are so crucial to everyday functioning that their influence often trumps conscious thought. To prove it, he explores little-known historical episodes, the latest psychological research, and enduring medical mysteries, revealing the bizarre and often inexplicable mechanisms underlying daily life.
Eagleman's theory is epitomized by the deathbed confession of the 19th-century mathematician James Clerk Maxwell, who developed fundamental equations unifying electricity and magnetism. Maxwell declared that 'something within him' had made the discoveries; he actually had no idea how he'd achieved his great insights. It is easy to take credit after an idea strikes you, but in fact, neurons in your brain secretly perform an enormous amount of work before inspiration hits. The brain, Eagleman argues, runs its show incognito. Or, as Pink Floyd put it, 'There's someone in my head, but it's not me.'
There is a looming chasm between what your brain knows and what your mind is capable of accessing. Consider the simple act of changing lanes while driving a car. Try this: Close your eyes, grip an imaginary steering wheel, and go through the motions of a lane change. Imagine that you are driving in the left lane and you would like to move over to the right lane. Before reading on, actually try it. I'll give you 100 points if you can do it correctly.
It's a fairly easy task, right? I'm guessing that you held the steering wheel straight, then banked it over to the right for a moment, and then straightened it out again. No problem.
Like almost everyone else, you got it completely wrong. The motion of turning the wheel rightward for a bit, then straightening it out again would steer you off the road: you just piloted a course from the left lane onto the sidewalk. The correct motion for changing lanes is banking the wheel to the right, then back through the center, and continuing to turn the wheel just as far to the left side, and only then straightening out. Don't believe it? Verify it for yourself when you're next in the car. It's such a simple motor task that you have no problem accomplishing it in your daily driving. But when forced to access it consciously, you're flummoxed.
The lane-changing example is one of a thousand. You are not consciously aware of the vast majority of your brain's ongoing activities, nor would you want to be - it would interfere with the brain's well-oiled processes. The best way to mess up your piano piece is to concentrate on your fingers; the best way to get out of breath is to think about your breathing; the best way to miss the golf ball is to analyze your swing. This wisdom is apparent even to children, and we find it immortalized in poems such as 'The Puzzled Centipede':
A centipede was happy quite,
Until a frog in fun
Said, "Pray tell which leg comes after which?"
This raised her mind to such a pitch,
She lay distracted in the ditch
Not knowing how to run.
The ability to remember motor acts like changing lanes is called procedural memory, and it is a type of implicit memory - meaning that your brain holds knowledge of something that your mind cannot explicitly access. Riding a bike, tying your shoes, typing on a keyboard, and steering your car into a parking space while speaking on your cell phone are examples of this. You execute these actions easily but without knowing the details of how you do it. You would be totally unable to describe the perfectly timed choreography with which your muscles contract and relax as you navigate around other people in a cafeteria while holding a tray, yet you have no trouble doing it. This is the gap between what your brain can do and what you can tap into consciously.
The concept of implicit memory has a rich, if little-known, tradition. By the early 1600s, Rene Descartes had already begun to suspect that although experience with the world is stored in memory, not all memory is accessible. The concept was rekindled in the late 1800s by the psychologist Hermann Ebbinghaus, who wrote that "most of these experiences remain concealed from consciousness and yet produce an effect which is significant and which authenticates their previous existence."
To the extent that consciousness is useful, it is useful in small quantities, and for very particular kinds of tasks. It's easy to understand why you would not want to be consciously aware of the intricacies of your muscle movement, but this can be less intuitive when applied to your perceptions, thoughts, and beliefs, which are also final products of the activity of billions of nerve cells. We turn to these now.
Chicken Sexers and Plane Spotters
When chicken hatchlings are born, large commercial hatcheries usually set about dividing them into males and females, and the practice of distinguishing gender is known as chick sexing. Sexing is necessary because the two genders receive different feeding programs: one for the females, which will eventually produce eggs, and another for the males, which are typically destined to be disposed of because of their uselessness in the commerce of producing eggs; only a few males are kept and fattened for meat. So the job of the chick sexer is to pick up each hatchling and quickly determine its sex in order to choose the correct bin to put it in. The problem is that the task is famously difficult: male and female chicks look exactly alike.
Well, almost exactly. The Japanese invented a method of sexing chicks known as vent sexing, by which experts could rapidly ascertain the sex of one-day-old hatchlings. Beginning in the 1930s, poultry breeders from around the world traveled to the Zen-Nippon Chick Sexing School in Japan to learn the technique.
The mystery was that no one could explain exactly how it was done. It was somehow based on very subtle visual cues, but the professional sexers could not say what those cues were. They would look at the chick's rear (where the vent is) and simply seem to know the correct bin to throw it in.
And this is how the professionals taught the student sexers. The master would stand over the apprentice and watch. The student would pick up a chick, examine its rear, and toss it into one bin or the other. The master would give feedback: yes or no. After weeks on end of this activity, the student's brain was trained to a masterful - albeit unconscious - level.
Meanwhile, a similar story was unfolding oceans away. During World War II, under constant threat of bombings, the British had a great need to distinguish incoming aircraft quickly and accurately. Which aircraft were British planes coming home and which were German planes coming to bomb? Several airplane enthusiasts had proved to be excellent spotters, so the military eagerly employed their services. These spotters were so valuable that the government quickly tried to enlist more spotters - but they turned out to be rare and difficult to find. The government therefore tasked the spotters with training others.
It was a grim attempt. The spotters tried to explain their strategies but failed. No one got it, not even the spotters themselves. Like the chicken sexers, the spotters had little idea how they did what they did - they simply saw the right answer.
With a little ingenuity, the British finally figured out how to successfully train new spotters: by trial-and-error feedback. A novice would hazard a guess and an expert would say yes or no. Eventually the novices became, like their mentors, vessels of the mysterious, ineffable expertise.
The Knowledge Gap
There can be a large gap between knowledge and awareness. When we examine skills that are not amenable to introspection, the first surprise is that implicit memory is completely separable from explicit memory: You can damage one without hurting the other.
Consider patients with anterograde amnesia, who cannot consciously recall new experiences in their lives. If you spend an afternoon trying to teach them the video game Tetris, they will tell you the next day that they have no recollection of the experience, that they have never seen this game before - and, most likely, that they have no idea who you are, either. But if you look at their performance on the game the next day, you'll find that they have improved exactly as much as nonamnesiacs. Implicitly their brains have learned the game: The knowledge is simply not accessible to their consciousness. (Interestingly, if you wake up an amnesic patient during the night after he has played Tetris, he'll report that he was dreaming of colorful falling blocks but will have no idea why.)
Of course, it's not just sexers and spotters and amnesiacs who enjoy unconscious learning. Essentially everything about your interaction with the world rests on this process. You may have a difficult time putting into words the characteristics of your father's walk, or the shape of his nose, or the way he laughs - but when you see someone who walks, looks, or laughs the way he does, you know it immediately.
Flexible Intelligence
One of the most impressive features of brains - and especially human brains - is the flexibility to learn almost any kind of task that comes their way. Give an apprentice the desire to impress his master in a chicken-sexing task and his brain devotes its massive resources to distinguishing males from females. Give an unemployed aviation enthusiast a chance to be a national hero and his brain learns to distinguish enemy aircraft from local flyboys. This flexibility of learning accounts for a large part of what we consider human intelligence. While many animals are properly called intelligent, humans distinguish themselves in that they are so flexibly intelligent, fashioning their neural circuits to match the task at hand. It is for this reason that we can colonize every region on the planet, learn the local language we're born into, and master skills as diverse as playing the violin, high-jumping, and operating space shuttle cockpits.
The Liar in Your Head
On December 31, 1974, Supreme Court Justice William O. Douglas was debilitated by a stroke that paralyzed his left side and confined him to a wheelchair. But Justice Douglas demanded to be checked out of the hospital on the grounds that he was fine. He declared that reports of his paralysis were 'a myth.' When reporters expressed skepticism, he invited them to join him for a hike, a move interpreted as absurd. He even claimed to be kicking football field goals with his paralyzed leg. As a result of this apparently delusional behavior, Douglas was dismissed from his seat on the Supreme Court.
What Douglas experienced is called anosognosia. This term describes a total lack of awareness about an impairment. It's not that Justice Douglas was lying - his brain actually believed that he could move just fine. But shouldn't the contradicting evidence alert those with anosognosia to a problem? It turns out that alerting the system to contradictions relies on particular brain regions, especially one called the anterior cingulate cortex. Because of these conflict-monitoring regions, incompatible ideas will result in one side or another's winning: The brain either constructs a story that makes them compatible or ignores one side of the debate. In special circumstances of brain damage, this arbitration system can be damaged, and then conflict can cause no trouble to the conscious mind.
Now Batting: Your Subconscious
On August 20, 1974, in a game between the California Angels and the Detroit Tigers, The Guinness Book of Records clocked Nolan Ryan's fastball at 100.9 miles per hour. If you work the numbers, you'll see that Ryan's pitch departs the mound and crosses home plate - 60 feet, 6 inches away - in four-tenths of a second. This gives just enough time for light signals from the baseball to hit the batter's eye, work through the circuitry of the retina, activate successions of cells along the loopy superhighways of the visual system at the back of the head, cross vast territories to the motor areas, and modify the contraction of the muscles swinging the bat. Amazingly, this entire sequence is possible in less than four-tenths of a second; otherwise no one would ever hit a fastball. But even more surprising is that conscious awareness takes longer than that: about half a second. So the ball travels too rapidly for batters to be consciously aware of it.
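The timing claim is easy to verify. Here is a quick Python sketch of the unit conversion, illustrative only; like the passage, it uses the full 60-foot-6-inch distance from the rubber to the plate rather than the point where the ball actually leaves the pitcher's hand.

```python
# How long does a 100.9 mph pitch take to travel 60 feet 6 inches?
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

pitch_speed_mph = 100.9
distance_ft = 60 + 6 / 12                                             # 60 feet 6 inches

speed_ft_per_s = pitch_speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR   # about 148 ft/s
flight_time_s = distance_ft / speed_ft_per_s

print(f"Flight time: {flight_time_s:.3f} seconds")                    # about 0.409 s
```

That works out to roughly four-tenths of a second, shorter than the approximately half-second lag of conscious awareness the passage describes.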
One does not need to be consciously aware to perform sophisticated motor acts. You can notice this when you begin to duck from a snapping tree branch before you are aware that it's coming toward you, or when you're already jumping up when you first become aware of a phone's ring.
Revealing Clothes
For both men and women, wearing revealing attire causes them to be seen as more sensitive but less competent, says a new study by University of Maryland psychologist Kurt Gray and colleagues from Yale and Northeastern University.
In an article published Nov. 10 in the Journal of Personality and Social Psychology, the researchers write that it would be absurd to think people's mental capacities fundamentally change when they remove clothing. "In six studies, however, we show that taking off a sweater - or otherwise revealing flesh - can significantly change the way a mind is perceived."
Past research, feminist theory and parental admonishments all have long suggested that when men see a woman wearing little or nothing, they focus on her body and think less of her mind. The new findings by Gray et al. both expand and change our understanding of how paying attention to someone's body can alter how both men and women view both women and men.
"An important thing about our study is that, unlike much previous research, ours applies to both sexes. It also calls into question the nature of objectification because people without clothes are not seen as mindless objects, but they are instead attributed a different kind of mind," says UMD's Gray.
"We also show that this effect can happen even without the removal of clothes. Simply focusing on someone's attractiveness, in essence concentrating on their body rather than their mind, makes you see him or her as less of an agent [someone who acts and plans] and more of an experiencer."
Objectification vs. Two Kinds of Mind
Traditional research and theories on objectification suggest that we see the mind of others on a continuum between the full mind of a normal human and the mindlessness of an inanimate object. The idea of objectification is that looking at someone in a sexual context -- such as in pornography -- leads people to focus on physical characteristics, turning them into an object without a mind or moral status.
However, recent findings indicate that rather than looking at others on a continuum from object to human, we see others as having two aspects of mind: agency and experience. Agency is the capacity to act, plan and exert self-control, whereas experience is the capacity to feel pain, pleasure and emotions. Various factors -- including the amount of skin shown -- can shift which type of mind we see in another person.
In multiple experiments, the researchers found further support for the two kinds of mind view. When men and women in the study focused on someone's body, perceptions of agency (self-control and action) were reduced, and perceptions of experience (emotion and sensation) were increased. Gray and colleagues suggest that this effect occurs because people unconsciously think of minds and bodies as distinct, or even opposite, with the capacity to act and plan tied to the "mind" and the ability to experience or feel tied to the body.
According to Gray, their findings indicate that the change in perception that results from showing skin is not all bad. "A focus on the body, and the increased perception of sensitivity and emotion it elicits, might be good for lovers in the bedroom," he says.
Their study also found that a body focus can actually increase moral standing. Although those wearing little or no clothing -- or otherwise represented as a body -- were seen to be less morally responsible, they also were seen to be more sensitive to harm and hence deserving of more protection. "Others appear to be less inclined to harm people with bare skin and more inclined to protect them. In one experiment, for example, people viewing male subjects with their shirts off were less inclined to give those subjects uncomfortable electric shocks than when the men had their shirts on," Gray says.
However, Gray and his coauthors note that in work or academic contexts, where people are primarily evaluated on their capacity to plan and act, a body focus clearly has negative effects. Seeing someone as a body strips him or her of competence and leadership, potentially impacting job evaluations. "Even more than robbing someone of agency, the increased experience that may accompany body perceptions may lead those who are characterized in terms of their bodies to be seen as more reactive and emotional, traits that may also serve to work against career advancement," they write.
Even the positive aspects of a body focus, such as an increased desire to protect from harm, can be ultimately harmful, the authors say, pointing to the "benevolent sexism" common in the United States in the 1950s, in which men oppressed women under the guise of protecting them.
Decision Making
When faced with making a complicated decision, our automatic instinct to avoid misfortune can result in missing out on rewards, and could even contribute to depression, according to new research.
The results of a new study, published in the journal PLoS Computational Biology, suggest that our brains subconsciously use a simplistic strategy in order to filter out options when faced with a complex decision. However, the research also highlights how this strategy can lead to poor choices, and could possibly contribute to depression -- a condition characterised by impaired decision-making.
In the study, researchers at UCL looked at how people make chains of several decisions, where each step depends on the previous one. Often, the total number of possible choices is far too large to consider them each individually. One way to simplify the problem is to avoid considering any plan where the first step has a seriously negative association -- no matter what the overall outcome would be. This 'pruning' decision-making bias, which was demonstrated in this paper for the first time, can result in poor decisions.
Lead author Dr Quentin Huys from the UCL Gatsby Computational Neuroscience Unit, explained: "Imagine planning a holiday -- you could not possibly consider every destination in the world. To reduce the number of options, you might instinctively avoid considering going to any countries that are more than 5 hours away by plane because you don't enjoy flying.
"This strategy simplifies the planning process and guarantees that you won't have to endure an uncomfortable long-haul flight. However, it also means that you might miss out on an amazing trip to an exotic destination."
In the study, the researchers asked a group of 46 volunteers with no known psychiatric disorders to plan chains of decisions in which they moved around a maze -- on each step they either gained or lost money. The volunteers instinctively avoided paths starting with large losses, even if those decisions would have won them the most money overall. Interestingly, the amount of pruning the volunteers showed was related to the extent to which they reported experiencing depressive symptoms, though none were actually clinically depressed.
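A minimal sketch of the idea, not the authors' actual model: the plan payoffs below are invented, chosen only to show how refusing to look past a painful first step can discard the best overall plan.

```python
# Toy set of three-step plans; each number is the payoff of one step.
# The numbers are illustrative, not data from the study.
plans = {
    "painful start, big payoff": [-10, +25, +25],   # total +40
    "safe start, modest payoff": [-1, +5, +5],      # total +9
    "safe start, small payoff":  [-1, +2, +2],      # total +3
}

PRUNE_THRESHOLD = -5   # refuse to consider any plan whose first step loses more than 5

def best(candidates):
    return max(candidates, key=lambda name: sum(plans[name]))

full_best = best(plans)
pruned_best = best(name for name, steps in plans.items()
                   if steps[0] >= PRUNE_THRESHOLD)

print("Best plan, full search:  ", full_best)    # painful start, total +40
print("Best plan, after pruning:", pruned_best)  # safe start, total +9
```

The pruned search is cheaper, but it forfeits the plan with the highest total payoff, which is the bias the volunteers showed.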
Neir Eshel, co-author of the paper, formerly at the UCL Institute for Cognitive Neuroscience and now at Harvard Medical School, said: "The reflex to prune the number of possible options is a double-edged sword. Although necessary to simplify complicated decisions, it could also lead to poor choices."
The researchers link the surprising association with depressive symptoms to the brain chemical serotonin, which is known to be involved in both avoidance and depression, and may also contribute to the optimism bias. However, this role for serotonin in pruning needs to be confirmed in further studies.
A Neurosurgeon's View
A new exhibition reminds the celebrated neurosurgeon Henry Marsh that we still don’t know our own minds
The preserved, dead human brain is not a thing of beauty — it is a cold, grey, slimy mass, smelling horribly of formaldehyde. As one looks at this object (admittedly bottled and hence odourless) in the Wellcome Trust’s new exhibition Brains: The Mind as Matter, it is not easy to consider that this thought, which feels so light, and our sense of self and being, which feel so familiar, are made of something as alien and gross as this.
The living brain, however, is not without beauty, though it is a sight that only the staff working in neurosurgical operating theatres — and a few privileged patients — ever get to see. Seen through the optics of an operating microscope, the brain’s surface glitters with cerebro-spinal fluid and the red arteries and blue veins branch across it like intricate river estuaries seen from space. (The brain has a mass of blood vessels supplying it because thinking is an energy-intensive process. One quarter of the blood pumped by the heart goes to the brain — and a large part of brain surgery is negotiating one’s way around these vessels.) As I often operate on the brain under local anaesthetic, with the patient wide awake, some of my patients can see the same view, shown on a video monitor. When I operate on tumours in the visual cortex, the part of the brain at the back of the head responsible for vision, I have had patients using their visual cortex to look at itself on the video monitor.
One feels that there should be some kind of philosophical equivalent of acoustic feedback as my patient’s brain sees itself but, of course, there is not. When I pointed out to one patient the part of his brain that was responsible for speech, he replied (or rather the part of his brain known as Broca’s area replied): “It’s crazy.” The more one thinks about the brain, the more difficult it becomes.
Although I know that if I damage certain parts of the brain during an operation I will be faced by a disabled patient afterwards, I still feel that mind and matter are separate entities. Entire schools of philosophy, and countless books, have been devoted to this conviction, the so-called mind-body problem. Many theories have been suggested by philosophers, starting with Descartes, whose dualism had mind and matter as entirely separate. He proposed that the physical brain communicated with the immaterial mind in the pineal gland, a small pea-shaped structure in the centre of the brain.
The pineal gland, I might add, can occasionally give rise to tumours that I will cheerfully remove (along with the gland) without apparently interfering with the ability of the patient’s brain to communicate with his mind. Since Descartes, philosophers have proposed many theories, such as parallelism (that mental and brain events do not influence each other but are nevertheless in harmony) and epiphenomenalism (that mental events are a by-product of brain events in the same way that the ticking of a clock is a by-product of the clock’s machinery), to name but a few. Finally, there is materialism — the view that the mind-body problem is not really a problem at all and that “mind” is a physical phenomenon.
It is difficult for brain surgeons not to be materialists, though I suppose a few might manage it. The complicated philosophical theories cannot survive the crude reality of brain surgery, and there are several sets of neurosurgical instruments in the exhibition that convey this crudity. The delicacy and complexity of the brain — 100 billion nerve cells linked by an even greater number of electro-chemical connections — is not matched, alas, by the tools that we surgeons use.
The identity of mind and matter is most apparent for neurosurgeons when we see patients who have suffered damage to the frontal lobes of their brain. It is easy enough to believe that mind and matter are separate entities when brain damage produces “physical” disability such as paralysis or loss of vision. It is much more difficult when somebody’s personality is changed (almost invariably for the worse). A kind and thoughtful husband can become coarse and violent, even though his intellect is perfectly preserved. He is no longer the person that he was. If the lives of head-injured patients with frontal brain damage have been saved by emergency brain surgery, their enthusiastic young surgeons usually see this as a triumph. But all too often it becomes apparent as time passes that their social and moral nature has been irreversibly damaged — that they have been left “a bit frontal” as doctors say. It is a depressing experience to sit in one’s outpatient clinic and listen to the sad litany of marital breakdown, unemployment and depression that is an all too common outcome from head injury.
The exhibition traces the way in which the analogies used to “understand” the brain have changed as our ability to examine it deepened from the 17th century onwards and as technology advanced. Aristotle thought its purpose was to cool the blood. Descartes saw it as a system of hydraulic tubes. In the 19th century it was compared to a telephone exchange and in the modern era, inevitably, to a computer. But the fact is that we have never met anything like a brain before and it is by no means certain that we will ever have the analogies or the language with which to understand it. The great physicist Richard Feynman famously observed that we cannot understand quantum mechanics, and all we can do is accept the mathematics and experimental evidence, even though the behaviour of particles at an atomic level is so utterly at odds with our everyday experience. But even if, in principle, we can understand our brains, there are certain practical difficulties.
First, science is based on experimentation, and we can perform only limited experiments on our own brains. Experiments on animals tell us only about animals, and ultimately what really interests us is ourselves. “Experiments of nature”, such as strokes and head injuries, which have provided the foundations of neuroscience, can only take us a limited distance. Second, as the exhibition tries to show, the sheer complexity and minuscule scale on which the brain works might defy any attempt to unravel it.
Even if we can one day produce a wiring diagram of the human brain — the most complex object in the universe, as cliché has it — it is by no means certain that we will be any closer to understanding how it actually works. Modern technology and scanning have advanced our understanding of the brain immensely but, relative to the brain’s complexity, it is still like looking at a starry sky through cheap binoculars. Nor can we know for certain that new technologies will be developed in future that will take us beyond what is currently available. We can now see planets circling distant stars, but we will never reach them. We may be equally frustrated in our attempts to look inwards into our own brains.
Finally, there is the most profound and fascinating question of all — the question of consciousness. How does matter give rise to awareness? Does each brain cell contain a hundred billionth part of consciousness and, if not, how does consciousness arise when brain cells are linked together? The trouble with consciousness is that it cannot be observed, it can only be experienced. It is a difficult, if not impossible, subject for scientific study. Recent functional MRI brain scanning, for instance, of patients in a persistent vegetative state has shown that, despite their complete immobility and unresponsiveness, some of them have appropriate activity in their brains in response to being spoken to — but there is no way of knowing if they are conscious.
My work as a neurosurgeon means that I have little choice but to accept that thought is a physical phenomenon, that mind is matter. Certain conclusions follow from this that many will find unpalatable — that animals are conscious and can suffer as much as we do, that there is no human soul and that an afterlife is most unlikely. Most religions fail when faced by this central tenet of neuroscience. Some difficult questions result as well, to which I do not know the answer. Did the murderer commit murder because there was an imbalance of chemicals in his brain, and if so, was that imbalance his fault? It is important, however, to realise that to accept that mind is matter does not diminish us in any way — instead it elevates matter and tells us how little we understand it and how little we understand ourselves. The inner sense of being and consciousness within each one of us is as great and wonderful a mystery as the great mystery of the universe around us.
Henry Marsh is senior consultant neurosurgeon on the Atkinson Morley Wing at St George’s Hospital in London and has been the subject of two documentaries, the most recent of which is the award-winning The English Surgeon (2007) about his work in the Ukraine. Brains: The Mind as Matter, Wellcome Collection, London NW1 (020-7611 2222), March 29 to June 17 2012
Making Big Decisions About Money
We're bad at it. And marketers know this.
Consider: you're buying a $30,000 car and you have the option of upgrading the stereo to the 18-speaker, 100-watt version for just $500 more. Should you?
Or perhaps you're considering two jobs, one that you love and one that pays $2,000 more. Which to choose?
Or... you are lucky enough to be able to choose between two colleges. One, the one with the nice campus and slightly more famous name, will cost your parents (and your long-term debt) about $200,000 for four years, and the other ("lesser" school) has offered you a full scholarship. Which should you take?
In a surprisingly large number of cases, we take the stereo, even though we'd never buy a nice stereo at home, or we choose to "go with our heart because college is so important" and pick the expensive college. (This is, of course, a good choice to have to make, as most people can't possibly find the money).
Here's one reason we mess up: Money is just a number.
Comparing dreams of a great stereo (four years of driving long distances, listening to great music!) with the daily reminder of our cheapness makes picking the better stereo feel easier. After all, we're not giving up anything but a number.
The college case is even more clear. $200,000 is a big number, sure, but it doesn't have much substance. It's not a number we play with or encounter very often. The feeling of compromising on something tied up in our self-esteem, though, is something we deal with daily.
Here's how to undo the self-marketing. Stop using numbers.
You can have the stereo if you give up going to Starbucks every workday for the next year and a half. Worth it?
If you go to the free school, you can drive there in a brand new Mini convertible, and every summer you can spend $25,000 on a top-of-the-line internship/experience, and you can create a jazz series and pay your favorite musicians to come to campus to play for you and your fifty coolest friends, and you can have Herbie Hancock give you piano lessons and you can still have enough money left over to live without debt for a year after you graduate while you look for the perfect gig...
Suddenly, you're not comparing "this is my dream," with a number that means very little. You're comparing one version of your dream with another version.
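One way to perform that translation is simply to divide the abstract price by the cost of something you buy and feel regularly; the unit costs below are assumptions for illustration, not figures from the text.

```python
# Re-frame an abstract dollar figure as everyday purchases.
# Unit costs are illustrative assumptions.

def reframe(price, unit_cost, unit_name):
    units = price / unit_cost
    return f"${price:,.0f} is roughly {units:,.0f} {unit_name}"

print(reframe(500, 4.50, "coffees"))              # the stereo upgrade
print(reframe(200_000, 1_200, "months of rent"))  # the pricier college
```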
Neuroscience Marketing
Many of the bestselling business books of the past decade, such as “Freakonomics” and “The Undercover Economist”, started with an implicit, fundamental premise: "If it can't be quantified or calculated, it can't be true."
These books often reduced baffling and complex scenarios - everything from global warming to why there are so many Starbucks stores in your neighborhood - to simple explanations supported by basic economic thinking. Sometimes these explanations contained charts, graphs and little diagrams that made the world appear neat, tidy and orderly. A decade ago, in fact, Google made news when it hired UC Berkeley economics professor Hal Varian as its first in-house economist. Varian was charged with modeling consumer behaviors and consulting on corporate strategy. The announcement reinforced the belief that, in short, economics was the key to market success.
Today, Google should be looking for a prize-winning neuroscientist.
The new generation of business thinking reflects a more nuanced understanding: many actions can be neither quantified nor calculated. Often, these insights are culled from the cutting edge of neuroscience. This new "what you didn't know about your brain can help you" genre most likely started with science writer Jonah Lehrer's wonderful “Proust Was a Neuroscientist.” In the book, Lehrer explains how many of the underpinnings of modern neuroscience were actually discovered by the likes of Proust, Stravinsky and Escoffier.
A slew of other titles have followed, each of them offering unique insights into the workings of the human brain. Along the way, we've been told how the human brain decides what to buy, why traditional brainstorming approaches don't work as well as they should, how changing the default settings can change the final outcome and why companies need to understand and cultivate the habits of their customers.
We are, as a society, experiencing a profound reappraisal of traditional economics and its shortcomings. The world is suddenly a lot more irrational than we ever thought, full of black swans. In Economics 101, we're taught that economic models are able to predict the behavior of coldly rational decision-makers. Charts and graphs follow a simple mathematical beauty. When we lower interest rates, we expect a certain reaction. When we devise incentives for customers, we expect them to react in a certain way. When we provide customers with a menu of choices, we expect them to answer in a certain way.
The only problem, of course, is that humans are not always rational.
Not surprisingly, some of the most popular business titles of the past few years have drawn on research findings from the cutting edge of behavioral economics. Perhaps the best example is Daniel Kahneman's “Thinking, Fast and Slow,” which is now edging up the bestseller lists. Kahneman is a Nobel Prize-winning economist who has helped to popularize the latest in economics thinking, including loss aversion. Interestingly enough, Kahneman refers to himself as a psychologist, rather than an economist.
This new thinking about the way the human brain works is starting to impact everything -- how supermarkets stock their shelves, when coupon offers are sent out to consumers, and how to devise the perfect title that will get you to click on a news article (wait, did you think that your reading this was an accident?). A retail store such as Target now knows that you're pregnant before your parents do, thanks to the wonders of understanding customer purchase habits. On the Web, understanding human behavior is everything, given that the best and brightest of our generation are now engaged in an elaborate game of getting people to click on a specific button, text link or banner ad.
A decade ago, if you asked top business leaders whether they'd ever consider reading a book on neuroscience, they probably would have looked askance at you while tapping away at their BlackBerry. Today, they realize that profits lie in understanding how the human brain works, how people make decisions, and what influences the final purchase. What they may not realize, however, is that this understanding falls more within a neuroscientist's wheelhouse than an economist's.
Personality On-Line
One of the foundations of modern psychology is that human personality can be described in terms of five broad behavioral traits. These are:
1. Agreeableness--being helpful, cooperative and sympathetic towards others
2. Conscientiousness--being disciplined, organized and achievement-oriented
3. Extraversion--having a higher degree of sociability, assertiveness and talkativeness
4. Neuroticism--the degree of emotional stability, impulse control and anxiety
5. Openness--having a strong intellectual curiosity and a preference for novelty and variety
Psychologists have spent much time and many years developing tests that can classify people according to these criteria.
Today, Shuotian Bai at the Graduate University of Chinese Academy of Sciences in Beijing and a couple of buddies say they have developed an online version of the test that can determine an individual's personality traits from their behavior on a social network such as Facebook or Renren, an increasingly popular Chinese competitor.
Their method is relatively simple. These guys asked just over 200 Chinese students with Renren accounts to complete a standard personality test online: the Big Five Inventory, which was developed at the University of California, Berkeley during the 1990s.
At the same time, these guys analyzed the Renren pages of each student, recording their age and sex and various aspects of their online behavior, such as the frequency of their blog posts and the emotional content of those posts (whether angry, funny, surprised and so on).
Finally, they used various number crunching techniques to reveal correlations between the results of the personality tests and the online behavior.
It turns out, they say, that various online behaviors are a good indicator of personality type. For example, conscientious people are more likely to post asking for help such as a location or e-mail address; a sign of extroversion is an increased use of emoticons; the frequency of status updates correlates with openness; and a measure of neuroticism is the rate at which blog posts attract angry comments.
Based on these correlations, these guys say they can automatically predict personality type simply by looking at an individual's social network statistics.
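The paper's exact "number crunching" isn't described here; as a hedged sketch of the general approach, one could fit a simple regression from a user's behavioral statistics to a questionnaire trait score and then apply it to users who never took the test. The features and numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical per-user features:
# [posts per week, emoticons per post, angry comments per post]
X = np.array([
    [12.0, 0.8, 0.10],
    [ 2.0, 0.1, 0.40],
    [ 7.0, 0.5, 0.20],
    [ 1.0, 0.0, 0.60],
    [ 9.0, 0.9, 0.10],
])
# Self-reported extraversion scores from the Big Five Inventory (1-5 scale).
y = np.array([4.5, 2.0, 3.5, 1.5, 4.8])

# Ordinary least-squares fit with an intercept column.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Estimate the trait for a user who never took the questionnaire,
# using only their network statistics.
new_user = np.array([1.0, 10.0, 0.7, 0.15])   # intercept + features
print("Estimated extraversion:", round(float(new_user @ coef), 2))
```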
That could be extremely useful for social networks. Shuotian and company point out that a network might use this to recommend specific services. They give the rather naive example of an outgoing user who may prefer international news and like to make friends with others.
Other scenarios are at least as likely. For example, such an approach might help to improve recommender systems in general. Perhaps people who share similar personality characteristics are more likely to share similar tastes in books, films or each other.
There is also the obvious prospect that social networks would use this data for commercial gain; to target specific adverts to users for example. And finally there is the worry that such a technique could be used to identify vulnerable individuals who might be most susceptible to nefarious persuasion.
Ethics aside, there are also certain question marks over the result. One important caveat is that people's responses to psychology studies taken online may differ from responses gathered in other settings, which could clearly introduce some bias. Then there are the more general questions of how online and offline behaviours differ and how these tests vary across cultures. These are things that Shuotian and Co. want to study in the future.
In the meantime, it is becoming increasingly clear that the data associated with our online behavior is a rich and valuable source of information about our innermost natures.
Why We Don't Believe In Science
Last week, Gallup announced the results of their latest survey on Americans and evolution. The numbers were a stark blow to high-school science teachers everywhere: forty-six per cent of adults said they believed that “God created humans in their present form within the last 10,000 years.” Only fifteen per cent agreed with the statement that humans had evolved without the guidance of a divine power.
What’s most remarkable about these numbers is their stability: these percentages have remained virtually unchanged since Gallup began asking the question, thirty years ago. In 1982, forty-four per cent of Americans held strictly creationist views, a statistically insignificant difference from 2012. Furthermore, the percentage of Americans that believe in biological evolution has only increased by four percentage points over the last twenty years.
Such poll data raises questions: Why are some scientific ideas hard to believe in? What makes the human mind so resistant to certain kinds of facts, even when these facts are buttressed by vast amounts of evidence?
A new study in Cognition, led by Andrew Shtulman at Occidental College, helps explain the stubbornness of our ignorance. As Shtulman notes, people are not blank slates, eager to assimilate the latest experiments into their world view. Rather, we come equipped with all sorts of naïve intuitions about the world, many of which are untrue. For instance, people naturally believe that heat is a kind of substance, and that the sun revolves around the earth. And then there’s the irony of evolution: our views about our own development don’t seem to be evolving.
This means that science education is not simply a matter of learning new theories. Rather, it also requires that students unlearn their instincts, shedding false beliefs the way a snake sheds its old skin.
To document the tension between new scientific concepts and our pre-scientific hunches, Shtulman invented a simple test. He asked a hundred and fifty college undergraduates who had taken multiple college-level science and math classes to read several hundred scientific statements. The students were asked to assess the truth of these statements as quickly as possible.
To make things interesting, Shtulman gave the students statements that were both intuitively and factually true (“The moon revolves around the Earth”) and statements whose scientific truth contradicts our intuitions (“The Earth revolves around the sun”).
As expected, it took students much longer to assess the veracity of true scientific statements that cut against our instincts. In every scientific category, from evolution to astronomy to thermodynamics, students paused before agreeing that the earth revolves around the sun, or that pressure produces heat, or that air is composed of matter. Although we know these things are true, we have to push back against our instincts, which leads to a measurable delay.
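A minimal sketch of the comparison being made (the response times below are invented, not Shtulman's data): average the verification latencies for intuitive and counterintuitive true statements and look at the gap.

```python
from statistics import mean

# Invented verification times, in seconds, for statements that are true.
intuitive = {
    "The moon revolves around the Earth": [0.9, 1.1, 1.0, 0.8],
    "Rocks are composed of matter":       [0.8, 0.9, 1.0, 0.9],
}
counterintuitive = {
    "The Earth revolves around the sun":  [1.4, 1.6, 1.3, 1.5],
    "Air is composed of matter":          [1.5, 1.3, 1.6, 1.4],
}

def mean_rt(groups):
    return mean(t for trials in groups.values() for t in trials)

delay = mean_rt(counterintuitive) - mean_rt(intuitive)
print(f"Extra time needed for counterintuitive truths: {delay:.2f} s")
```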
What’s surprising about these results is that even after we internalize a scientific concept—the vast majority of adults now acknowledge the Copernican truth that the earth is not the center of the universe—that primal belief lingers in the mind. We never fully unlearn our mistaken intuitions about the world. We just learn to ignore them.
Shtulman and colleagues summarize their findings:
When students learn scientific theories that conflict with earlier, naïve theories, what happens to the earlier theories? Our findings suggest that naïve theories are suppressed by scientific theories but not supplanted by them.
While this new paper provides a compelling explanation for why Americans are so resistant to particular scientific concepts—the theory of evolution, for instance, contradicts both our naïve intuitions and our religious beliefs—it also builds upon previous research documenting the learning process inside the head. Until we understand why some people believe in science we will never understand why most people don’t.
In a 2003 study, Kevin Dunbar, a psychologist at the University of Maryland, showed undergraduates a few short videos of two different-sized balls falling. The first clip showed the two balls falling at the same rate. The second clip showed the larger ball falling at a faster rate. The footage was a reconstruction of the famous (and probably apocryphal) experiment performed by Galileo, in which he dropped cannonballs of different sizes from the Tower of Pisa. Galileo’s metal balls all landed at the exact same time—a refutation of Aristotle, who claimed that heavier objects fell faster.
While the students were watching the footage, Dunbar asked them to select the more accurate representation of gravity. Not surprisingly, undergraduates without a physics background disagreed with Galileo. They found the two balls falling at the same rate to be deeply unrealistic. (Intuitively, we’re all Aristotelians.) Furthermore, when Dunbar monitored the subjects in an fMRI machine, he found that showing non-physics majors the correct video triggered a particular pattern of brain activation: there was a squirt of blood to the anterior cingulate cortex, a collar of tissue located in the center of the brain. The A.C.C. is typically associated with the perception of errors and contradictions—neuroscientists often refer to it as part of the “Oh shit!” circuit—so it makes sense that it would be turned on when we watch a video of something that seems wrong, even if it’s right.
This data isn’t shocking; we already know that most undergrads lack a basic understanding of science. But Dunbar also conducted the experiment with physics majors. As expected, their education enabled them to identify the error; they knew Galileo’s version was correct.
But it turned out that something interesting was happening inside their brains that allowed them to hold this belief. When they saw the scientifically correct video, blood flow increased to a part of the brain called the dorsolateral prefrontal cortex, or D.L.P.F.C. The D.L.P.F.C. is located just behind the forehead and is one of the last brain areas to develop in young adults. It plays a crucial role in suppressing so-called unwanted representations, getting rid of those thoughts that aren’t helpful or useful. If you don’t want to think about the ice cream in the freezer, or need to focus on some tedious task, your D.L.P.F.C. is probably hard at work.
According to Dunbar, the reason the physics majors had to recruit the D.L.P.F.C. is because they were busy suppressing their intuitions, resisting the allure of Aristotle’s error. It would be so much more convenient if the laws of physics lined up with our naïve beliefs—or if evolution was wrong and living things didn’t evolve through random mutation. But reality is not a mirror; science is full of awkward facts. And this is why believing in the right version of things takes work.
Of course, that extra mental labor isn’t always pleasant. (There’s a reason they call it “cognitive dissonance.”) It took a few hundred years for the Copernican revolution to go mainstream. At the present rate, the Darwinian revolution, at least in America, will take just as long.
Self Control
In the late nineteen-sixties, Carolyn Weisz, a four-year-old with long brown hair, was invited into a “game room” at the Bing Nursery School, on the campus of Stanford University. The room was little more than a large closet, containing a desk and a chair. Carolyn was asked to sit down in the chair and pick a treat from a tray of marshmallows, cookies, and pretzel sticks. Carolyn chose the marshmallow. Although she’s now forty-four, Carolyn still has a weakness for those air-puffed balls of corn syrup and gelatine. “I know I shouldn’t like them,” she says. “But they’re just so delicious!” A researcher then made Carolyn an offer: she could either eat one marshmallow right away or, if she was willing to wait while he stepped out for a few minutes, she could have two marshmallows when he returned. He said that if she rang a bell on the desk while he was away he would come running back, and she could eat one marshmallow but would forfeit the second. Then he left the room.
Although Carolyn has no direct memory of the experiment, and the scientists would not release any information about the subjects, she strongly suspects that she was able to delay gratification. “I’ve always been really good at waiting,” Carolyn told me. “If you give me a challenge or a task, then I’m going to find a way to do it, even if it means not eating my favorite food.” Her mother, Karen Sortino, is still more certain: “Even as a young kid, Carolyn was very patient. I’m sure she would have waited.” But her brother Craig, who also took part in the experiment, displayed less fortitude. Craig, a year older than Carolyn, still remembers the torment of trying to wait. “At a certain point, it must have occurred to me that I was all by myself,” he recalls. “And so I just started taking all the candy.” According to Craig, he was also tested with little plastic toys—he could have a second one if he held out—and he broke into the desk, where he figured there would be additional toys. “I took everything I could,” he says. “I cleaned them out. After that, I noticed the teachers encouraged me to not go into the experiment room anymore.”
Footage of these experiments, which were conducted over several years, is poignant, as the kids struggle to delay gratification for just a little bit longer. Some cover their eyes with their hands or turn around so that they can’t see the tray. Others start kicking the desk, or tug on their pigtails, or stroke the marshmallow as if it were a tiny stuffed animal. One child, a boy with neatly parted hair, looks carefully around the room to make sure that nobody can see him. Then he picks up an Oreo, delicately twists it apart, and licks off the white cream filling before returning the cookie to the tray, a satisfied look on his face.
Most of the children were like Craig. They struggled to resist the treat and held out for an average of less than three minutes. “A few kids ate the marshmallow right away,” Walter Mischel, the Stanford professor of psychology in charge of the experiment, remembers. “They didn’t even bother ringing the bell. Other kids would stare directly at the marshmallow and then ring the bell thirty seconds later.” About thirty per cent of the children, however, were like Carolyn. They successfully delayed gratification until the researcher returned, some fifteen minutes later. These kids wrestled with temptation but found a way to resist.
The initial goal of the experiment was to identify the mental processes that allowed some people to delay gratification while others simply surrendered. After publishing a few papers on the Bing studies in the early seventies, Mischel moved on to other areas of personality research. “There are only so many things you can do with kids trying not to eat marshmallows.”
But occasionally Mischel would ask his three daughters, all of whom attended the Bing, about their friends from nursery school. “It was really just idle dinnertime conversation,” he says. “I’d ask them, ‘How’s Jane? How’s Eric? How are they doing in school?’ ” Mischel began to notice a link between the children’s academic performance as teen-agers and their ability to wait for the second marshmallow. He asked his daughters to assess their friends academically on a scale of zero to five. Comparing these ratings with the original data set, he saw a correlation. “That’s when I realized I had to do this seriously,” he says. Starting in 1981, Mischel sent out a questionnaire to all the reachable parents, teachers, and academic advisers of the six hundred and fifty-three subjects who had participated in the marshmallow task, who were by then in high school. He asked about every trait he could think of, from their capacity to plan and think ahead to their ability to “cope well with problems” and get along with their peers. He also requested their S.A.T. scores.
Once Mischel began analyzing the results, he noticed that low delayers, the children who rang the bell quickly, seemed more likely to have behavioral problems, both in school and at home. They got lower S.A.T. scores. They struggled in stressful situations, often had trouble paying attention, and found it difficult to maintain friendships. The child who could wait fifteen minutes had an S.A.T. score that was, on average, two hundred and ten points higher than that of the kid who could wait only thirty seconds.
Carolyn Weisz is a textbook example of a high delayer. She attended Stanford as an undergraduate, and got her Ph.D. in social psychology at Princeton. She’s now an associate psychology professor at the University of Puget Sound. Craig, meanwhile, moved to Los Angeles and has spent his career doing “all kinds of things” in the entertainment industry, mostly in production. He’s currently helping to write and produce a film. “Sure, I wish I had been a more patient person,” Craig says. “Looking back, there are definitely moments when it would have helped me make better career choices and stuff.”
Mischel and his colleagues continued to track the subjects into their late thirties—Ozlem Ayduk, an assistant professor of psychology at the University of California at Berkeley, found that low-delaying adults have a significantly higher body-mass index and are more likely to have had problems with drugs—but it was frustrating to have to rely on self-reports. “There’s often a gap between what people are willing to tell you and how they behave in the real world,” he explains. And so, last year, Mischel, who is now a professor at Columbia, and a team of collaborators began asking the original Bing subjects to travel to Stanford for a few days of experiments in an fMRI machine. Carolyn says she will be participating in the scanning experiments later this summer; Craig completed a survey several years ago, but has yet to be invited to Palo Alto. The scientists are hoping to identify the particular brain regions that allow some people to delay gratification and control their temper. They’re also conducting a variety of genetic tests, as they search for the hereditary characteristics that influence the ability to wait for a second marshmallow.
If Mischel and his team succeed, they will have outlined the neural circuitry of self-control. For decades, psychologists have focussed on raw intelligence as the most important variable when it comes to predicting success in life. Mischel argues that intelligence is largely at the mercy of self-control: even the smartest kids still need to do their homework. “What we’re really measuring with the marshmallows isn’t will power or self-control,” Mischel says. “It’s much more important than that. This task forces kids to find a way to make the situation work for them. They want the second marshmallow, but how can they get it? We can’t control the world, but we can control how we think about it.”
Walter Mischel is a slight, elegant man with a shaved head and a face of deep creases. He talks with a Brooklyn bluster and he tends to act out his sentences, so that when he describes the marshmallow task he takes on the body language of an impatient four-year-old. “If you want to know why some kids can wait and others can’t, then you’ve got to think like they think,” Mischel says.
Mischel was born in Vienna, in 1930. His father was a modestly successful businessman with a fondness for café society and Esperanto, while his mother spent many of her days lying on the couch with an ice pack on her forehead, trying to soothe her frail nerves. The family considered itself fully assimilated, but after the Nazi annexation of Austria, in 1938, Mischel remembers being taunted in school by the Hitler Youth and watching as his father, hobbled by childhood polio, was forced to limp through the streets in his pajamas. A few weeks after the takeover, while the family was burning evidence of their Jewish ancestry in the fireplace, Walter found a long-forgotten certificate of U.S. citizenship issued to his maternal grandfather decades earlier, thus saving his family.
The family settled in Brooklyn, where Mischel’s parents opened up a five-and-dime. Mischel attended New York University, studying poetry under Delmore Schwartz and Allen Tate, and taking studio-art classes with Philip Guston. He also became fascinated by psychoanalysis and new measures of personality, such as the Rorschach test. “At the time, it seemed like a mental X-ray machine,” he says. “You could solve a person by showing them a picture.” Although he was pressured to join his uncle’s umbrella business, he ended up pursuing a Ph.D. in clinical psychology at Ohio State.
But Mischel noticed that academic theories had limited application, and he was struck by the futility of most personality science. He still flinches at the naïveté of graduate students who based their diagnoses on a battery of meaningless tests. In 1955, Mischel was offered an opportunity to study the “spirit possession” ceremonies of the Orisha faith in Trinidad, and he leapt at the chance. Although his research was supposed to involve the use of Rorschach tests to explore the connections between the unconscious and the behavior of people when possessed, Mischel soon grew interested in a different project. He lived in a part of the island that was evenly split between people of East Indian and of African descent; he noticed that each group defined the other in broad stereotypes. “The East Indians would describe the Africans as impulsive hedonists, who were always living for the moment and never thought about the future,” he says. “The Africans, meanwhile, would say that the East Indians didn’t know how to live and would stuff money in their mattress and never enjoy themselves.”
Mischel took young children from both ethnic groups and offered them a simple choice: they could have a miniature chocolate bar right away or, if they waited a few days, they could get a much bigger chocolate bar. Mischel’s results failed to justify the stereotypes—other variables, such as whether or not the children lived with their father, turned out to be much more important—but they did get him interested in the question of delayed gratification. Why did some children wait and not others? What made waiting possible? Unlike the broad traits supposedly assessed by personality tests, self-control struck Mischel as potentially measurable.
In 1958, Mischel became an assistant professor in the Department of Social Relations at Harvard. One of his first tasks was to develop a survey course on “personality assessment,” but Mischel quickly concluded that, while prevailing theories held personality traits to be broadly consistent, the available data didn’t back up this assumption. Personality, at least as it was then conceived, couldn’t be reliably assessed at all. A few years later, he was hired as a consultant on a personality assessment initiated by the Peace Corps. Early Peace Corps volunteers had sparked several embarrassing international incidents—one mailed a postcard on which she expressed disgust at the sanitary habits of her host country—so the Kennedy Administration wanted a screening process to eliminate people unsuited for foreign assignments. Volunteers were tested for standard personality traits, and Mischel compared the results with ratings of how well the volunteers performed in the field. He found no correlation; the time-consuming tests predicted nothing. At this point, Mischel realized that the problem wasn’t the tests—it was their premise. Psychologists had spent decades searching for traits that exist independently of circumstance, but what if personality can’t be separated from context? “It went against the way we’d been thinking about personality since the four humors and the ancient Greeks,” he says.
While Mischel was beginning to dismantle the methods of his field, the Harvard psychology department was in tumult. In 1960, the personality psychologist Timothy Leary helped start the Harvard Psilocybin Project, which consisted mostly of self-experimentation. Mischel remembers graduate students’ desks giving way to mattresses, and large packages from Ciba chemicals, in Switzerland, arriving in the mail. Mischel had nothing against hippies, but he wanted modern psychology to be rigorous and empirical. And so, in 1962, Walter Mischel moved to Palo Alto and went to work at Stanford.
There is something deeply contradictory about Walter Mischel—a psychologist who spent decades critiquing the validity of personality tests—inventing the marshmallow task, a simple test with impressive predictive power. Mischel, however, insists there is no contradiction. “I’ve always believed there are consistencies in a person that can be looked at,” he says. “We just have to look in the right way.” One of Mischel’s classic studies documented the aggressive behavior of children in a variety of situations at a summer camp in New Hampshire. Most psychologists assumed that aggression was a stable trait, but Mischel found that children’s responses depended on the details of the interaction. The same child might consistently lash out when teased by a peer, but readily submit to adult punishment. Another might react badly to a warning from a counsellor, but play well with his bunkmates. Aggression was best assessed in terms of what Mischel called “if-then patterns.” If a certain child was teased by a peer, then he would be aggressive.
One of Mischel’s favorite metaphors for this model of personality, known as interactionism, concerns a car making a screeching noise. How does a mechanic solve the problem? He begins by trying to identify the specific conditions that trigger the noise. Is there a screech when the car is accelerating, or when it’s shifting gears, or turning at slow speeds? Unless the mechanic can give the screech a context, he’ll never find the broken part. Mischel wanted psychologists to think like mechanics, and look at people’s responses under particular conditions. The challenge was devising a test that accurately simulated something relevant to the behavior being predicted. The search for a meaningful test of personality led Mischel to revisit, in 1968, the protocol he’d used on young children in Trinidad nearly a decade earlier. The experiment seemed especially relevant now that he had three young daughters of his own. “Young kids are pure id,” Mischel says. “They start off unable to wait for anything—whatever they want they need. But then, as I watched my own kids, I marvelled at how they gradually learned how to delay and how that made so many other things possible.”
A few years earlier, in 1966, the Stanford psychology department had established the Bing Nursery School. The classrooms were designed as working laboratories, with large one-way mirrors that allowed researchers to observe the children. In February, Jennifer Winters, the assistant director of the school, showed me around the building. While the Bing is still an active center of research—the children quickly learn to ignore the students scribbling in notebooks—Winters isn’t sure that Mischel’s marshmallow task could be replicated today. “We recently tried to do a version of it, and the kids were very excited about having food in the game room,” she says. “There are so many allergies and peculiar diets today that we don’t do many things with food.”
Mischel perfected his protocol by testing his daughters at the kitchen table. “When you’re investigating will power in a four-year-old, little things make a big difference,” he says. “How big should the marshmallows be? What kind of cookies work best?” After several months of patient tinkering, Mischel came up with an experimental design that closely simulated the difficulty of delayed gratification. In the spring of 1968, he conducted the first trials of his experiment at the Bing. “I knew we’d designed it well when a few kids wanted to quit as soon as we explained the conditions to them,” he says. “They knew this was going to be very difficult.”
At the time, psychologists assumed that children’s ability to wait depended on how badly they wanted the marshmallow. But it soon became obvious that every child craved the extra treat. What, then, determined self-control? Mischel’s conclusion, based on hundreds of hours of observation, was that the crucial skill was the “strategic allocation of attention.” Instead of getting obsessed with the marshmallow—the “hot stimulus”—the patient children distracted themselves by covering their eyes, pretending to play hide-and-seek underneath the desk, or singing songs from “Sesame Street.” Their desire wasn’t defeated—it was merely forgotten. “If you’re thinking about the marshmallow and how delicious it is, then you’re going to eat it,” Mischel says. “The key is to avoid thinking about it in the first place.”
In adults, this skill is often referred to as metacognition, or thinking about thinking, and it’s what allows people to outsmart their shortcomings. (When Odysseus had himself tied to the ship’s mast, he was using some of the skills of metacognition: knowing he wouldn’t be able to resist the Sirens’ song, he made it impossible to give in.) Mischel’s large data set from various studies allowed him to see that children with a more accurate understanding of the workings of self-control were better able to delay gratification. “What’s interesting about four-year-olds is that they’re just figuring out the rules of thinking,” Mischel says. “The kids who couldn’t delay would often have the rules backwards. They would think that the best way to resist the marshmallow is to stare right at it, to keep a close eye on the goal. But that’s a terrible idea. If you do that, you’re going to ring the bell before I leave the room.”
According to Mischel, this view of will power also helps explain why the marshmallow task is such a powerfully predictive test. “If you can deal with hot emotions, then you can study for the S.A.T. instead of watching television,” Mischel says. “And you can save more money for retirement. It’s not just about marshmallows.”
Subsequent work by Mischel and his colleagues found that these differences were observable in subjects as young as nineteen months. Looking at how toddlers responded when briefly separated from their mothers, they found that some immediately burst into tears, or clung to the door, but others were able to overcome their anxiety by distracting themselves, often by playing with toys. When the scientists set the same children the marshmallow task at the age of five, they found that the kids who had cried also struggled to resist the tempting treat.
The early appearance of the ability to delay suggests that it has a genetic origin, an example of personality at its most predetermined. Mischel resists such an easy conclusion. “In general, trying to separate nature and nurture makes about as much sense as trying to separate personality and situation,” he says. “The two influences are completely interrelated.” For instance, when Mischel gave delay-of-gratification tasks to children from low-income families in the Bronx, he noticed that their ability to delay was below average, at least compared with that of children in Palo Alto. “When you grow up poor, you might not practice delay as much,” he says. “And if you don’t practice then you’ll never figure out how to distract yourself. You won’t develop the best delay strategies, and those strategies won’t become second nature.” In other words, people learn how to use their mind just as they learn how to use a computer: through trial and error.
But Mischel has found a shortcut. When he and his colleagues taught children a simple set of mental tricks—such as pretending that the candy is only a picture, surrounded by an imaginary frame—he dramatically improved their self-control. The kids who hadn’t been able to wait sixty seconds could now wait fifteen minutes. “All I’ve done is given them some tips from their mental user manual,” Mischel says. “Once you realize that will power is just a matter of learning how to control your attention and thoughts, you can really begin to increase it.”
Marc Berman, a lanky graduate student with an easy grin, speaks about his research with the infectious enthusiasm of a freshman taking his first philosophy class. Berman works in the lab of John Jonides, a psychologist and neuroscientist at the University of Michigan, who is in charge of the brain-scanning experiments on the original Bing subjects. He knows that testing forty-year-olds for self-control isn’t a straightforward proposition. “We can’t give these people marshmallows,” Berman says. “They know they’re part of a long-term study that looks at delay of gratification, so if you give them an obvious delay task they’ll do their best to resist. You’ll get a bunch of people who refuse to touch their marshmallow.”
This meant that Jonides and his team had to find a way to measure will power indirectly. Operating on the premise that the ability to delay eating the marshmallow had depended on a child’s ability to banish thoughts of it, they decided on a series of tasks that measure the ability of subjects to control the contents of working memory—the relatively limited amount of information we’re able to consciously consider at any given moment. According to Jonides, this is how self-control “cashes out” in the real world: as an ability to direct the spotlight of attention so that our decisions aren’t determined by the wrong thoughts.
Last summer, the scientists chose fifty-five subjects, equally split between high delayers and low delayers, and sent each one a laptop computer loaded with working-memory experiments. Two of the experiments were of particular interest. The first is a straightforward exercise known as the “suppression task.” Subjects are given four random words, two printed in blue and two in red. After reading the words, they’re told to forget the blue words and remember the red words. Then the scientists provide a stream of “probe words” and ask the subjects whether the probes are the words they were asked to remember. Though the task doesn’t seem to involve delayed gratification, it tests the same basic mechanism. Interestingly, the scientists found that high delayers were significantly better at the suppression task: they were less likely to think that a word they’d been asked to forget was something they should remember.
In the second, known as the Go/No Go task, subjects are flashed a set of faces with various expressions. At first, they are told to press the space bar whenever they see a smile. This takes little effort, since smiling faces automatically trigger what’s known as “approach behavior.” After a few minutes, however, subjects are told to press the space bar when they see frowning faces. They are now being forced to act against an impulse. Results show that high delayers are more successful at not pressing the button in response to a smiling face.
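A rough sketch of how one Go/No Go block might be scored under the rules described above; the trial records and the scoring itself are assumptions, not the study's code.

```python
# Score a toy Go/No Go block: press for frowning faces, withhold for smiles.
# Each trial records the face shown and whether the subject pressed.
trials = [
    {"face": "smile", "pressed": True},    # false alarm: failed to inhibit
    {"face": "smile", "pressed": False},   # correct rejection
    {"face": "frown", "pressed": True},    # hit
    {"face": "frown", "pressed": False},   # miss
    {"face": "smile", "pressed": False},   # correct rejection
    {"face": "frown", "pressed": True},    # hit
]

smiles = [t for t in trials if t["face"] == "smile"]
false_alarms = sum(t["pressed"] for t in smiles)

# Fewer false alarms on smiling faces means better inhibition,
# the pattern reported for the high delayers.
print(f"False-alarm rate on smiles: {false_alarms / len(smiles):.0%}")
```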
When I first started talking to the scientists about these tasks last summer, they were clearly worried that they wouldn’t find any behavioral differences between high and low delayers. It wasn’t until early January that they had enough data to begin their analysis (not surprisingly, it took much longer to get the laptops back from the low delayers), but it soon became obvious that there were provocative differences between the two groups. A graph of the data shows that as the delay time of the four-year-olds decreases, the number of mistakes made by the adults sharply rises.
The big remaining question for the scientists is whether these behavioral differences are detectable in an fMRI machine. Although the scanning has just begun—Jonides and his team are still working out the kinks—the scientists sound confident. “These tasks have been studied so many times that we pretty much know where to look and what we’re going to find,” Jonides says. He rattles off a short list of relevant brain regions, which his lab has already identified as being responsible for working-memory exercises. For the most part, the regions are in the frontal cortex—the overhang of brain behind the eyes—and include the dorsolateral prefrontal cortex, the anterior prefrontal cortex, the anterior cingulate, and the right and left inferior frontal gyri. While these cortical folds have long been associated with self-control, they’re also essential for working memory and directed attention. According to the scientists, that’s not an accident. “These are powerful instincts telling us to reach for the marshmallow or press the space bar,” Jonides says. “The only way to defeat them is to avoid them, and that means paying attention to something else. We call that will power, but it’s got nothing to do with the will.”
The behavioral and genetic aspects of the project are overseen by Yuichi Shoda, a professor of psychology at the University of Washington, who was one of Mischel’s graduate students. He’s been following these “marshmallow subjects” for more than thirty years: he knows everything about them from their academic records and their social graces to their ability to deal with frustration and stress. The prognosis for the genetic research remains uncertain. Although many studies have searched for the underpinnings of personality since the completion of the Human Genome Project, in 2003, many of the relevant genes remain in question. “We’re incredibly complicated creatures,” Shoda says. “Even the simplest aspects of personality are driven by dozens and dozens of different genes.” The scientists have decided to focus on genes in the dopamine pathways, since those neurotransmitters are believed to regulate both motivation and attention. However, even if minor coding differences influence delay ability—and that’s a likely possibility—Shoda doesn’t expect to discover these differences: the sample size is simply too small.
In recent years, researchers have begun making house visits to many of the original subjects, including Carolyn Weisz, as they try to better understand the familial contexts that shape self-control. “They turned my kitchen into a lab,” Carolyn told me. “They set up a little tent where they tested my oldest daughter on the delay task with some cookies. I remember thinking, I really hope she can wait.”
While Mischel closely follows the steady accumulation of data from the laptops and the brain scans, he’s most excited by what comes next. “I’m not interested in looking at the brain just so we can use a fancy machine,” he says. “The real question is what can we do with this fMRI data that we couldn’t do before?” Mischel is applying for an N.I.H. grant to investigate various mental illnesses, like obsessive-compulsive disorder and attention-deficit disorder, in terms of the ability to control and direct attention. Mischel and his team hope to identify crucial neural circuits that cut across a wide variety of ailments. If there is such a circuit, then the same cognitive tricks that increase delay time in a four-year-old might help adults deal with their symptoms. Mischel is particularly excited by the example of the substantial subset of people who failed the marshmallow task as four-year-olds but ended up becoming high-delaying adults. “This is the group I’m most interested in,” he says. “They have substantially improved their lives.”
Mischel is also preparing a large-scale study involving hundreds of schoolchildren in Philadelphia, Seattle, and New York City to see if self-control skills can be taught. Although he previously showed that children did much better on the marshmallow task after being taught a few simple “mental transformations,” such as pretending the marshmallow was a cloud, it remains unclear if these new skills persist over the long term. In other words, do the tricks work only during the experiment or do the children learn to apply them at home, when deciding between homework and television?
Angela Lee Duckworth, an assistant professor of psychology at the University of Pennsylvania, is leading the program. She first grew interested in the subject after working as a high-school math teacher. “For the most part, it was an incredibly frustrating experience,” she says. “I gradually became convinced that trying to teach a teen-ager algebra when they don’t have self-control is a pretty futile exercise.” And so, at the age of thirty-two, Duckworth decided to become a psychologist. One of her main research projects looked at the relationship between self-control and grade-point average. She found that the ability to delay gratification—eighth graders were given a choice between a dollar right away or two dollars the following week—was a far better predictor of academic performance than I.Q. She said that her study shows that “intelligence is really important, but it’s still not as important as self-control.”
Last year, Duckworth and Mischel were approached by David Levin, the co-founder of KIPP, an organization of sixty-six public charter schools across the country. KIPP schools are known for their long workday—students are in class from 7:25 A.M. to 5 P.M.—and for dramatic improvement of inner-city students’ test scores. (More than eighty per cent of eighth graders at the KIPP academy in the South Bronx scored at or above grade level in reading and math, which was nearly twice the New York City average.) “The core feature of the KIPP approach is that character matters for success,” Levin says. “Educators like to talk about character skills when kids are in kindergarten—we send young kids home with a report card about ‘working well with others’ or ‘not talking out of turn.’ But then, just when these skills start to matter, we stop trying to improve them. We just throw up our hands and complain.”
Self-control is one of the fundamental “character strengths” emphasized by KIPP—the KIPP academy in Philadelphia, for instance, gives its students a shirt emblazoned with the slogan “Don’t Eat the Marshmallow.” Levin, however, remained unsure about how well the program was working—“We know how to teach math skills, but it’s harder to measure character strengths,” he says—so he contacted Duckworth and Mischel, promising them unfettered access to KIPP students. Levin also helped bring together additional schools willing to take part in the experiment, including Riverdale Country School, a private school in the Bronx; the Evergreen School for gifted children, in Shoreline, Washington; and the Mastery Charter Schools, in Philadelphia.
For the past few months, the researchers have been conducting pilot studies in the classroom as they try to figure out the most effective way to introduce complex psychological concepts to young children. Because the study will focus on students between the ages of four and eight, the classroom lessons will rely heavily on peer modelling, such as showing kindergartners a video of a child successfully distracting herself during the marshmallow task. The scientists have some encouraging preliminary results—after just a few sessions, students show significant improvements in the ability to deal with hot emotional states—but they are cautious about predicting the outcome of the long-term study. “When you do these large-scale educational studies, there are ninety-nine uninteresting reasons the study could fail,” Duckworth says. “Maybe a teacher doesn’t show the video, or maybe there’s a field trip on the day of the testing. This is what keeps me up at night.”
Mischel’s main worry is that, even if his lesson plan proves to be effective, it might still be overwhelmed by variables the scientists can’t control, such as the home environment. He knows that it’s not enough just to teach kids mental tricks—the real challenge is turning those tricks into habits, and that requires years of diligent practice. “This is where your parents are important,” Mischel says. “Have they established rituals that force you to delay on a daily basis? Do they encourage you to wait? And do they make waiting worthwhile?” According to Mischel, even the most mundane routines of childhood—such as not snacking before dinner, or saving up your allowance, or holding out until Christmas morning—are really sly exercises in cognitive training: we’re teaching ourselves how to think so that we can outsmart our desires. But Mischel isn’t satisfied with such an informal approach. “We should give marshmallows to every kindergartner,” he says. “We should say, ‘You see this marshmallow? You don’t have to eat it. You can wait. Here’s how.’ ”
Exercise and the Brain
Exercise clears the mind. It gets the blood pumping and more oxygen is delivered to the brain. But Dartmouth’s David Bucci thinks there is much more going on.
“In the last several years there have been data suggesting that neurobiological changes are happening — [there are] very brain-specific mechanisms at work here,” says Bucci, an associate professor in the Department of Psychological and Brain Sciences.
From his studies, Bucci and his collaborators have revealed important new findings:
Exercise affects both memory and the brain differently depending on whether the exerciser is an adolescent or an adult.
A gene has been identified which seems to mediate the degree to which exercise has a beneficial effect. This has implications for the potential use of exercise as an intervention for mental illness.
Bucci began his pursuit of the link between exercise and memory with attention deficit hyperactivity disorder (ADHD), one of the most common childhood psychological disorders. Bucci is concerned that the treatment of choice seems to be medication.
“The notion of pumping children full of psycho-stimulants at an early age is troublesome,” Bucci cautions. “We frankly don’t know the long-term effects of administering drugs at an early age—drugs that affect the brain—so looking for alternative therapies is clearly important.”
Anecdotal evidence from colleagues at the University of Vermont started Bucci down the track of ADHD. Observations of children with ADHD at Vermont summer camps suggested that athletes and team-sport players responded better to behavioral interventions than their more sedentary peers did. While systematic empirical data are lacking, this association of exercise with a reduction in characteristic ADHD behaviors was persuasive enough for Bucci.
Drawing on his interest in learning and memory and their underlying brain functions, Bucci and teams of graduate and undergraduate students embarked upon a project of scientific inquiry, investigating the potential connection between exercise and brain function. They published papers documenting their results, with the most recent now available in the online version of the journal Neuroscience.
Bucci is quick to point out that “the teams of both graduate and undergraduates are responsible for all this work, certainly not just me.” Michael Hopkins, a graduate student at the time, is first author on the papers.
Early on, the team showed that exercise could reduce ADHD-like behaviors in laboratory rats that exhibit them. The researchers also found that exercise was more beneficial for female rats than for males, similar to the way it differentially affects male and female children with ADHD.
Moving forward, they investigated a mechanism through which exercise seems to improve learning and memory: brain-derived neurotrophic factor (BDNF), which is involved in the growth of the developing brain. The degree of BDNF expression in exercising rats correlated positively with improved memory, and exercising as an adolescent had longer-lasting effects than the same duration of exercise done as an adult.
“The implication is that exercising during development, as your brain is growing, is changing the brain in concert with normal developmental changes, resulting in your having more permanent wiring of the brain in support of things like learning and memory,” says Bucci. “It seems important to [exercise] early in life.”
Bucci’s latest paper was a move to take the studies of exercise and memory in rats and apply them to humans. The subjects in this new study were Dartmouth undergraduates and individuals recruited from the Hanover community.
Bucci says that, “the really interesting finding was that, depending on the person’s genotype for that trophic factor [BDNF], they either did or did not reap the benefits of exercise on learning and memory. This could mean that you may be able to predict which ADHD child, if we genotype them and look at their DNA, would respond to exercise as a treatment and which ones wouldn’t.”
Bucci concludes that the notion that exercise is good for health, including mental health, is not a huge surprise. “The interesting question in terms of mental health and cognitive function is how exercise affects mental function and the brain.” This is the question that Bucci, his colleagues, and students continue to pursue.
Why Smart People Are Stupid
Here’s a simple arithmetic question: A bat and ball cost a dollar and ten cents. The bat costs a dollar more than the ball. How much does the ball cost?
The vast majority of people respond quickly and confidently, insisting the ball costs ten cents. This answer is both obvious and wrong. (The correct answer is five cents for the ball and a dollar and five cents for the bat.)
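For readers who want the arithmetic spelled out, writing x for the price of the ball (in dollars) gives a one-line check:

```latex
x + (x + 1.00) = 1.10 \;\Longrightarrow\; 2x = 0.10 \;\Longrightarrow\; x = 0.05
```

The intuitive answer fails the same check: a ten-cent ball plus a bat costing a dollar more adds up to $1.20, not $1.10.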
For more than five decades, Daniel Kahneman, a Nobel Laureate and professor of psychology at Princeton, has been asking questions like this and analyzing our answers. His disarmingly simple experiments have profoundly changed the way we think about thinking. While philosophers, economists, and social scientists had assumed for centuries that human beings are rational agents—reason was our Promethean gift—Kahneman, the late Amos Tversky, and others, including Shane Frederick (who developed the bat-and-ball question), demonstrated that we’re not nearly as rational as we like to believe.
When people face an uncertain situation, they don’t carefully evaluate the information or look up relevant statistics. Instead, their decisions depend on a long list of mental shortcuts, which often lead them to make foolish decisions. These shortcuts aren’t a faster way of doing the math; they’re a way of skipping the math altogether. Asked about the bat and the ball, we forget our arithmetic lessons and instead default to the answer that requires the least mental effort.
Although Kahneman is now widely recognized as one of the most influential psychologists of the twentieth century, his work was dismissed for years. Kahneman recounts how one eminent American philosopher, after hearing about his research, quickly turned away, saying, “I am not interested in the psychology of stupidity.”
The philosopher, it turns out, got it backward. A new study in the Journal of Personality and Social Psychology led by Richard West at James Madison University and Keith Stanovich at the University of Toronto suggests that, in many instances, smarter people are more vulnerable to these thinking errors. Although we assume that intelligence is a buffer against bias—that’s why those with higher S.A.T. scores think they are less prone to these universal thinking mistakes—it can actually be a subtle curse.
West and his colleagues began by giving four hundred and eighty-two undergraduates a questionnaire featuring a variety of classic bias problems. Here’s an example: In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake? Your first response is probably to take a shortcut and halve the final answer. That leads you to twenty-four days. But that’s wrong. The correct solution is forty-seven days.
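The doubling is what trips people up, and it also makes the correct reasoning a single step: if the patch doubles every day and covers the whole lake on day 48, then one day earlier it covered half. Writing A(t) for the fraction of the lake covered on day t:

```latex
A(t+1) = 2\,A(t), \qquad A(48) = 1 \;\Longrightarrow\; A(47) = \tfrac{1}{2}
```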
West also gave a puzzle that measured subjects’ vulnerability to something called “anchoring bias,” which Kahneman and Tversky had demonstrated in the nineteen-seventies. Subjects were first asked if the tallest redwood tree in the world was more than X feet, with X ranging from eighty-five to a thousand feet. Then the students were asked to estimate the height of the tallest redwood tree in the world. Students exposed to a small “anchor”—like eighty-five feet—guessed, on average, that the tallest tree in the world was only a hundred and eighteen feet. Given an anchor of a thousand feet, their estimates increased seven-fold.
But West and colleagues weren’t simply interested in reconfirming the known biases of the human mind. Rather, they wanted to understand how these biases correlated with human intelligence. As a result, they interspersed their tests of bias with various cognitive measurements, including the S.A.T. and the Need for Cognition Scale, which measures “the tendency for an individual to engage in and enjoy thinking.”
The results were quite disturbing. For one thing, self-awareness was not particularly useful: as the scientists note, “people who were aware of their own biases were not better able to overcome them.” This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and Slow” that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—a tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes.
Perhaps our most dangerous bias is that we naturally assume that everyone else is more susceptible to thinking errors, a tendency known as the “bias blind spot.” This “meta-bias” is rooted in our ability to spot systematic mistakes in the decisions of others—we excel at noticing the flaws of friends—and our inability to spot those same mistakes in ourselves. Although the bias blind spot itself isn’t a new concept, West’s latest paper demonstrates that it applies to every single bias under consideration, from anchoring to so-called “framing effects.” In each instance, we readily forgive our own minds but look harshly upon the minds of other people.
And here’s the upsetting punch line: intelligence seems to make things worse. The scientists gave the students four measures of “cognitive sophistication.” As they report in the paper, all four of the measures showed positive correlations, “indicating that more cognitively sophisticated participants showed larger bias blind spots.” This trend held for many of the specific biases, indicating that smarter people (at least as measured by S.A.T. scores) and those more likely to engage in deliberation were slightly more vulnerable to common mental mistakes. Education also isn’t a savior; as Kahneman and Shane Frederick first noted many years ago, more than fifty per cent of students at Harvard, Princeton, and M.I.T. gave the incorrect answer to the bat-and-ball question.
What explains this result? One provocative hypothesis is that the bias blind spot arises because of a mismatch between how we evaluate others and how we evaluate ourselves. When considering the irrational choices of a stranger, for instance, we are forced to rely on behavioral information; we see their biases from the outside, which allows us to glimpse their systematic thinking errors. However, when assessing our own bad choices, we tend to engage in elaborate introspection. We scrutinize our motivations and search for relevant reasons; we lament our mistakes to therapists and ruminate on the beliefs that led us astray.
The problem with this introspective approach is that the driving forces behind biases—the root causes of our irrationality—are largely unconscious, which means they remain invisible to self-analysis and impermeable to intelligence. In fact, introspection can actually compound the error, blinding us to those primal processes responsible for many of our everyday failings. We spin eloquent stories, but these stories miss the point. The more we attempt to know ourselves, the less we actually understand.
Your Mind On Magic
PINCH a coin at its edge between the thumb and first fingers of your right hand and begin to place it in your left palm, without letting go. Begin to close the fingers of the left hand. The instant the coin is out of sight, extend the last three digits of your right hand and secretly retract the coin. Make a fist with your left — as if holding the coin — as your right hand palms the coin and drops to the side.
You’ve just performed what magicians call a retention vanish: a false transfer that exploits a lag in the brain’s perception of motion, called persistence of vision. When done right, the spectator will actually see the coin in the left palm for a split second after the hands separate.
This bizarre afterimage results from the fact that visual neurons don’t stop firing once a given stimulus (here, the coin) is no longer present. As a result, our perception of reality lags behind reality by about one one-hundredth of a second.
Magicians have long used such cognitive biases to their advantage, and in recent years scientists have been following in their footsteps, borrowing techniques from the conjurer’s playbook in an effort not to mystify people but to study them. Magic may seem an unlikely tool, but it’s already yielded several widely cited results. Consider the work on choice blindness — people’s lack of awareness when evaluating the results of their decisions.
In one study, shoppers in a blind taste test of two types of jam were asked to choose the one they preferred. They were then given a second taste from the jar they picked. Unbeknown to them, the researchers swapped the flavors before the second spoonful. The containers were two-way jars, lidded at both ends and rigged with a secret compartment that held the other jam on the opposite side — a principle that’s been used to bisect countless showgirls. It seems like the sort of switch no one would fall for, yet most people failed to notice that they were tasting the wrong jam, even when the two flavors were fairly dissimilar, like grapefruit and cinnamon-apple.
In a related experiment, volunteers were shown a pair of female faces and asked which they found more attractive. Then they were given a closer look at their putative selection. In fact, the researchers swapped the selection for the “less attractive” face. Again, this bit of fraud flew by most people. Not only that, when pressed to justify their choices, the duped victims concocted remarkably detailed post hoc justifications.
Such tricks suggest that we are often blind to the results of our own decisions. Once a choice is made, our minds tend to rewrite history in a way that flatters our volition, a fact magicians have exploited for centuries. “If you are given a choice, you believe you have acted freely,” says Teller, one half of the study-in-contrasts duo Penn and Teller. “This is one of the darkest of all psychological secrets.”
Another dark psychological secret magicians routinely take advantage of is known as change blindness — the failure to detect changes in consecutive scenes. One of the most beautiful demonstrations is an experiment conducted by the psychologist Daniel Simons in which he had an experimenter stop random strangers on the street and ask for directions.
Midway through the conversation, a pair of confederates walked between them and blocked the stranger’s view, and the experimenter switched places with one of the stooges. Moments later, the stranger was talking to a completely different person — yet strange as it may sound, most didn’t notice.
What are the neural correlates of these cognitive hiccups? One possible answer comes from studies of the so-called face test, in which a volunteer is shown two faces in quick succession. Normally, just about anyone can distinguish the faces provided they’re shown within about half a second. But if the person is distracted by a task like counting, or by a flashing light, the faces start to look the same.
Here’s where it gets interesting, though. Scientists have found a way to induce change blindness, with a machine called a transcranial magnetic stimulator, which uses a magnetic field to disrupt localized brain regions. In one experiment, a T.M.S. was used to scramble the parietal cortex, which controls attention. Subjects were then given the face test. With the machine turned off, they did fine. But when the T.M.S. was on, most failed the test. Conclusion? Misdirection paralyzes part of your cortex.
Such blind spots confirm what many philosophers have long suspected: reality and our perception of it are incommensurate to a far greater degree than is often believed. For all its apparent fidelity, the movie in our heads is a “Rashomon” narrative pieced together from inconsistent and unreliable bits of information. It is, to a certain extent, an illusion.
Medical Creep
Overdiagnosis can turn healthy individuals into anxious patients, with one GP suggesting that modern psychiatry is medicalising normality.
Professional ethical guidelines forbid doctors from endorsing pharmaceutical products, but I hope the General Medical Council will make an exception on this occasion, as the drug I am about to promote is truly amazing.
May I introduce Havidol (avafynetyme) — “when more is not enough”. It’s the first, and only, treatment for dysphoric social attention consumption deficit anxiety disorder (DSACDAD). Not only will it help you to achieve everything you desire and deserve, but the side effects are pretty tasty too: shiny teeth, glowing skin and, in men, hair growth and delayed sexual climax.
OK, so there is no such condition and no such panacea, but DSACDAD and Havidol have featured in recent debate between doctors on the medicalisation of normality, known as medical creep, where overdiagnosis turns healthy individuals into anxious patients.
The British Medical Journal (BMJ) has carried a couple of interesting articles on the subject recently. The first, by Des Spence, a GP, questions diagnostic criteria used by psychiatrists — criteria which suggest that a quarter of the population of the United States has a “mental illness”, one in 30 boys is on the autistic spectrum, and one in six children has ADHD (attention deficit hyperactivity disorder).
Those behind the criteria cite better awareness and diagnosis to explain these statistics, but Spence’s conclusion is more sinister — that modern psychiatry is medicalising normality, an unwelcome move that may benefit only psychiatrists and the pharmaceutical industry, “for which mental ill health is the profit nirvana of lifelong multiple medications”.
It is not just psychiatry that is being criticised. Last week the BMJ carried an article by Australian researchers entitled Preventing overdiagnosis: how to stop harming the healthy, which highlighted growing evidence that modern medicine is causing harm “through ever earlier detection and ever wider definition of disease”.
Examples given range from the introduction of new conditions in which common difficulties have been reclassified as diseases — such as female sexual dysfunction (low libido, difficulty achieving orgasm, pain on intercourse) — to incidentalomas (innocent lumps and bumps picked up by increasingly sensitive scanning technology that lead to unnecessary further investigation or treatment).
I know how worrying incidentalomas can be. Five years ago I submitted myself to a total body scan for an article I was writing, only to be told that I had three nodules on my lungs, one of which was worryingly large. The consultant suggested repeat scans every three months to see if the nodules were growing, in which case I would need to have them removed.
I was eventually given the all-clear, but it is not an experience that I would care to repeat and, even if the nodules had turned out to be cancerous, it is unclear whether the early diagnosis would have saved my life or just meant I lived with the disease for longer.
Screening for other cancers has been implicated in overdiagnosis too. Recent controversy around the National Breast Cancer Screening Programme has centred on concerns that mammography can overdiagnose some types of early cancerous change, putting a significant minority of women through unnecessary anxiety and treatment.
The PSA blood test for prostate cancer is another example. In some men it is a lifesaver, but in others it detects a problem that may never have otherwise come to light. But, as with all tests, once you have found an abnormality, both patient and doctor feel bound to do something about it, and the resulting treatment — such as radical surgery or X-ray treatment — can cause more problems than it solves.
Medical creep is an issue in less sinister conditions too. A Canadian study highlighted in the BMJ article suggests that a third of people receiving a diagnosis of asthma may not have it, and two-thirds of them probably don’t need the medicines they are prescribed. And there is evidence that we are overtreating and overdiagnosing lots of other problems, from high blood pressure and raised cholesterol levels to chronic kidney disease and osteoporosis.
Ironically, while doctors are debating what to do about this overdiagnosis, patients remain more worried about underdiagnosis, with the general public often struggling to understand why more testing, and more screening, can be anything other than a good thing. PSA is a case in point.
Don’t get me wrong, most medical advances, including new drugs, are welcome and needed, but a generous dollop of scepticism is healthy.
Doctor doesn’t always know best.
For more detailed information on Havidol, and guidance on how to persuade your doctor to prescribe it, visit the spoof website havidol.com. Des Spence's article on the advance of modern psychiatry is in the May 2 edition of the BMJ, and Preventing overdiagnosis is in the May 30 edition.
Narcissism and Humility
SOME experts claim there is an epidemic of narcissism. Arrogant young people are the usual suspects — the sneering, supercilious student or the selfish, dismissive intern. But pompous, compliment-demanding executives are also held to suffer from an excess of self-love.
The narcissism disease probably has its roots in the self-esteem movement that began in America. Teachers were worried about (mostly minority) students who did poorly at school. They noticed that the pupils’ bad results led them to feel bad about themselves and rebel.
The teachers argued, with more passion than evidence, that the way to get them to do better at school was to work on their self-esteem rather than their maths or grammar. The naive belief took hold that if you felt better about yourself, you could discover and release your natural abilities.
This folly was fuelled by the “multiple intelligence” gurus and their claim that any human capability was a type of intelligence. So dancing became an intelligence. This is why there are daft degrees in trivial activities, with students expressing great offence — all part of clinical narcissism — at any hint that they might be doing a pointless degree at a bogus university.
The theorists were right about the relationship between self-esteem and (academic) success but wrong about the direction of causality. Doing well is the cause, not the result, of high self-esteem. Nurturing self-esteem in the hope that it brings success just feeds narcissism. And narcissism is a hungry and fragile plant.
Healthy self-regard comes from finding strengths, working on them and building a skills base. It involves dedication and resolve. And from that investment flows self-esteem.
So what happened to humility? Nearly all religions condemn arrogance and praise humility. There are stories and parables that warn against arrogance and cite pride as one of the seven deadly sins. There are strictures on selfishness and ignoring one’s fellow man, and on the foolishness of chasing materialism rather than spiritualism.
Note the charm of the Amish and the strength of the Quakers. Amish adolescents can look sheltered, naive and vulnerable; they seem throwbacks to another age. But who would you choose to teach or spend time with — a class of Amish or Quakers, or a gang of feral inner-city children who trumpet their rights?
Humility begets kindness. It is as attractive as hubris can be repulsive. But there are two other types of humility. First there is British false humility — the kind that always foxes Americans. It can be spotted in how people talk about success. So you say “I was fortunate enough” to be selected for Oxbridge, the Olympics or promotion to the board. The idea is that you invoke luck to explain success — not talent, hard work or family privilege. The understatement continues when describing an occupation. The answer “I sell vacuum cleaners” or “I dabble in art” could mean that you are sitting next to Sir James Dyson or Charles Saatchi.
It is a trick, of course, but it fulfils some important social rules. Arrogance, self-importance and being a show-off are a “pretty poor show”. But believing in yourself and your ability is absolutely fundamental. You have to be sufficiently strong to show weakness. You have to be sufficiently confident to be humble.
There are cases where a sort of bumbling humility is not thought of so highly by the British. This may be nicely illustrated by the famous comment that Winston Churchill made about one of his adversaries. Clement Attlee is a humble man, said Churchill, but then he has a lot to be humble about.
This is indeed very different from the second form of humility, which is the debilitating and dangerous kind. It is the belief that putting yourself forward or first in any situation is morally wrong. The psychiatrists may call this “dependent personality disorder”. These humble people are frequently exploited by their selfish colleagues.
Often this humility is driven by psychological problems around inadequacy. Religions reinforce this. Consider the Prayer of Humble Access — “We are not worthy so much as to gather up the crumbs under thy table” — or the many prayers of penitence. Clearly, religions can go too far in encouraging the believers to feel worthless in the sight of the Almighty.
It may be best to try the humility of those non-sacramental groups like the Quakers or the Salvation Army — people who remind themselves how fortunate they are and how many are less so, and set about doing something to help them.
So, it may be that the excessively humble do not inherit the earth, but those who think about the plight of others do.
Your Brain On Love
Love may not be blind, but it does make you dumb, according to brain scans of people in the early days of a romance.
MRI images show that when people gaze at pictures of their loved ones, the rational parts of their brains shut down, allowing the heart to rule the head. As a result, would-be suitors, their critical faculties dulled, are more likely to overlook niggling personality traits.
Robin Dunbar, an evolutionary psychologist at the University of Oxford, believes that the “rose-tinted spectacles” effect encourages people to take greater risks.
“What seems to be happening is that you have subconsciously made up your mind that you are interested in the person and the rational bit of the brain — the bit that would normally say ‘hang on a minute’ — gets switched off,” he said. “The more emotional parts of the brain are given a free ride. It looks very much like the rose-tinted spectacles kicking in.”
Professor Dunbar’s theory emerged after he analysed findings of brain scan experiments carried out a decade ago at University College London. The research by Semir Zeki and Andreas Bartels used functional Magnetic Resonance Imaging to look at the brain activity of 17 volunteers as they were shown pictures of their boyfriends or girlfriends. The volunteers were recruited to the experiment because all professed to be “truly, madly and deeply in love”. As they lay in the MRI scanner, they were shown three images of friends and one of their partner. “What struck me looking at the data was that parts of the frontal lobe, which is the region of the brain that does the heavy rational work, were deactivated when they looked at pictures of their beloved, compared to pictures of their friends,” said Professor Dunbar, who discussed the science of falling in love at The Times Cheltenham Science Festival last night.
The brain regions affected by “rose-tinted spectacle syndrome” are the dorsolateral prefrontal cortex and the orbitofrontal cortex. These are important in theory of mind, or the ability to see the world from someone else’s perspective, and in rational thought and reasoning.
“In a relationship you are in a trade-off between caution and just going for it,” Professor Dunbar said. “There is a view that emotion exists to get you off the fence. A purely rational organism would sit on the fence all the time to avoid being hurt. But if you don’t engage, you won’t form relationships. If the prefrontal cortex is shut down, that protective and cautious element goes.”
The effect is seen in both men and women. However, women appear to be the driving force in keeping relationships going.
Earlier this year, a study of mobile phone records by Professor Dunbar showed that men call their romantic partners more than any other person in the first seven years of a relationship. But after seven years, their focus shifts to other friends.
Women, by contrast, continue to phone their partners more than anyone else for the first 14 years of a relationship. Only then do they tend to shift their attention to friends.
Professor Dunbar, whose book The Science of Love and Betrayal was published this year, rejected the idea that falling in love was merely a cultural phenomenon. “If you look at poetry from all over the world, and at poems going back 5,000 years, you see the same thing — people describing falling in love,” he said. “It doesn’t mean everyone experiences it. It is just that it is widespread and it long predates Mills and Boon.”
The Power of Situation
CLEARLY, a person’s decisions are determined by circumstances. Just how closely they are determined, however, has only recently become apparent. Experiments conducted over the past few years have revealed that giving someone an icy drink at a party leads him to believe he is getting the cold shoulder from fellow guests, that handing over a warm drink gives people a sense of warmth from others, and—most astonishingly—that putting potential voters in chairs which lean slightly to the left causes them to become more agreeable towards policies associated with the left of the political spectrum.
The latest of these studies also looks at the effect of furniture. It suggests that something as trivial as the stability of chairs and tables has an effect on perceptions and desires.
The study was conducted by David Kille, Amanda Forest and Joanne Wood at the University of Waterloo, in Canada, and will be published soon in Psychological Science. Mr Kille and his collaborators asked half of their volunteers (47 romantically unattached undergraduate students) to sit in a slightly wobbly chair next to a slightly wobbly table while engaged in the task assigned. The others were asked to sit in chairs next to tables that looked physically identical, but were not wobbly.
Once in their chairs, participants were asked to judge the stability of the relationships of four celebrity couples: Barack and Michelle Obama, David and Victoria Beckham, Jay-Z and Beyoncé, and Johnny Depp and Vanessa Paradis. They did this by rating how likely they thought it was, on a scale of one to seven, that a couple would break up in the next five years. A score of one meant “extremely unlikely to dissolve”. A score of seven meant “extremely likely to dissolve”.
After they had done this, they were asked to rate their preferences for various traits in a potential romantic partner. Traits on offer included some which a pilot study indicated people associate with a sense of psychological stability (such as being trustworthy or reliable), some that are associated with psychological instability (being spontaneous or adventurous, for example) and some with no real relevance to instability or stability (like being loving or funny). Participants rated the various traits on another one-to-seven scale, with one indicating “not at all desirable” and seven meaning “extremely desirable”.
The results reveal that just as cold drinks lead to perceptions of social conditions being cold, tinkering with feelings of physical stability leads to perceptions of social instability. Participants who sat in wobbly chairs at wobbly tables rated the celebrity couples as more likely to break up, giving them an average score of 3.2, while those whose furniture did not wobble gave them 2.5.
What was particularly intriguing, though, was that those sitting at wonky furniture not only saw instability in the relationships of others but also said that they valued stability in their own relationships more highly. They gave stability-promoting traits in potential romantic partners an average desirability score of 5.0, whereas those whose tables and chairs were stable gave these same traits a score of 4.5. The difference is not huge, but it is statistically significant. Even a small amount of environmental wobbliness seems to promote a desire for an emotional rock to cling to.
Extroverts and Introverts
Are introversion and shyness the same thing? "Though in popular media they're often viewed as the same, we know in the scientific community that, conceptually or empirically, they're unrelated," Schmidt says.
The two get confused because both are related to socializing, but a lack of interest in socializing is very clearly not the same as fearing it. Schmidt and Arnold H. Buss of the University of Texas wrote a chapter titled "Understanding Shyness" for the upcoming book The Development of Shyness and Social Withdrawal. There they write, "Sociability refers to the motive, strong or weak, of wanting to be with others, whereas shyness refers to behavior when with others, inhibited or uninhibited, as well as feelings of tension and discomfort." This differentiation between motivation and behavior is consistent with the ability many of us have to behave like extroverts when we choose, whereas shy people cannot turn their shyness off and on.
Seven things extroverts should know about their introverted friends:
1) We don’t need alone time because we don’t like you. We need alone time because we need alone time. Don’t take it personally.
2) We aren’t judging anyone when we sit quietly. We're just sitting quietly, probably enjoying watching extroverts in action.
3) If we say we’re having fun, we’re having fun, even though it might not look that way to you.
4) If we leave early, it’s not because we’re party poopers. We’re just pooped. Socializing takes a lot out of us.
5) If you want to hear what we have to say, give us time to say it. We don’t fight to be heard over other people. We just clam up.
6) We’re not lonely, we’re choosy. And we’re loyal to friends who don’t try to make us over into extroverts.
7) Anything but the telephone.
Seven things introverts should know about their extroverted friends:
1) Extroverts don’t understand introversion unless someone explains it.
2) Extroverts who try to get you to loosen up usually aren’t doing it to annoy you. They mean well.
3) Extroverts produce a lot of words but quantity does not preclude quality. There's often plenty of good stuff in there for those with the patience to listen.
4) Extroverts can teach us plenty about glad-handing and small talking. These are useful skills, whether or not you enjoy them.
5) Extroverts can’t read your mind and they’re not big on catching hints. Say what you want.
6) At parties, think of extroverted friends as a glider tow plane. They pull you in and get you started, but eventually you have to sail on your own.
7) Extroverts come in all different styles, just like introverts. Keep a lookout for extroverts with a quiet side, who make dandy friends.
Ego Depletion and Self-Control
Do you have what it takes to resist temptation? Or do you find yourself indulging too often in a decadent dessert, using company time to check Facebook, or foregoing morning exercise in favor of sleep? We do not need a science experiment to understand the universality of cravings, desires and longings, or to understand how human desire serves as a double-edged sword. Urges motivate us in positive and important ways - to seek food, find shelter, make friends, get sleep, procreate. But left unchecked, our urges and desires can lead to a myriad of negative consequences, from obesity and poor health to reduced productivity, overspending, damaged relationships, substance abuse, and violence.
If your willpower is weak, a little divine intervention may help. In a series of studies, Kevin Rounding and colleagues tested participants' self-control by asking them to endure discomfort to earn a reward, or to delay immediate payment to obtain a larger stipend. Before the test of self-control, half of the participants were exposed to words with religious themes (e.g., divine, spirit, God) in a puzzle-solving task, and half completed the same task without the religious primes. Those who saw the primes were willing to endure greater discomfort and delay gratification longer than those who did not. Additional studies showed that religious primes also fortified self-control after the fact. In these studies, participants first attempted to resist temptation, and afterward half of the participants viewed religious primes while the other half did not. Finally, all participants were faced with an additional task involving self-restraint. Exposure to the religious words refueled resolve, as participants who saw the religious primes were able to persist at a frustrating task far longer than those who did not.
Resisting temptation can be difficult, especially if it involves repeated self-denial. Indeed, entire industries have evolved to provide support for those who have trouble saying "no" (consider weight loss and smoking cessation programs). Research by Roy Baumeister, Kathleen Vohs, and Dianne Tice sheds light on why self-control can be so elusive. According to Baumeister and colleagues, self-control operates in many ways like a muscle: It depends on a limited energy source that can be depleted. Thus with overexertion, particularly in a short time frame, self-control will fatigue and ultimately fail.
Support for the notion that self-control taxes a limited resource, and that depletion of this resource will lead to lapses in resistance, comes from studies that measure individuals' ability to resist temptation on consecutive tasks. In these studies, some participants first performed a self-control task (e.g., passing up chocolate chip cookies and instead eating a healthier alternative), while others performed a task that allowed them to indulge (e.g., eating the cookies). The critical question is how the experience of resisting temptation affected self-control when individuals were then immediately given another self-control challenge (e.g., solving a difficult puzzle without getting frustrated). Although researchers have varied both the initial temptation and the subsequent self-control challenge across studies (including physical, intellectual, and emotional enticements), the pattern of findings has been the same: People who successfully deny an urge or desire are less likely to regulate their behavior if faced with another test of self-control shortly thereafter.
This ego-depletion, as Baumeister and colleagues call it, occurs not only in the lab but in everyday experience as well. In a recent study, adults carried smart phones for a week, and were queried about their cravings at seven random times every day from early morning until late at night. When signaled, participants were to report whether or not they had experienced a desire within the last 30 minutes, and to indicate the nature of the desire (e.g., eating, coffee, sex, sleep, alcohol, social media, tobacco, spending, etc.). They also indicated the strength of the desire, whether it conflicted with other goals, whether they attempted to resist the desire, and whether they fulfilled the desire. When individuals repeatedly denied their impulses in a given day, the likelihood that they would give in to future temptations that day increased. This heightened vulnerability to temptation occurred even when the urges varied over the day, suggesting that the simple act of self-denial, regardless of what we are denying, weakens our global resolve.
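As a rough illustration of how such an experience-sampling schedule can be generated, here is a short Python sketch. The seven-signals-per-day figure comes from the description above; the waking window, the minimum gap between prompts, and the function name are assumptions for the example, not details of the actual study.

```python
import random
from datetime import date, datetime, timedelta

def daily_signals(day, n_signals=7, start_hour=8, end_hour=22, min_gap=30):
    """Draw n_signals random times within one day's waking window,
    at least min_gap minutes apart (all parameters are assumptions)."""
    window = (end_hour - start_hour) * 60  # waking window, in minutes
    while True:
        offsets = sorted(random.sample(range(window), n_signals))
        if all(b - a >= min_gap for a, b in zip(offsets, offsets[1:])):
            break
    start = datetime.combine(day, datetime.min.time()) + timedelta(hours=start_hour)
    return [start + timedelta(minutes=m) for m in offsets]

# Example: one day's schedule of prompts asking about desires felt
# within the last 30 minutes.
for t in daily_signals(date(2012, 6, 4)):
    print(t.strftime("%H:%M"))
```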
Fortunately, there may be ways to fortify our self-control beyond reminders of the divine. One obvious short-term step is to indulge a little in our cravings, particularly if we know we have to face a strong temptation or desire later in the day. If, for example, you are trying to watch what you eat and you plan to eat dinner out with friends, fulfilling other small urges earlier in the day (e.g., sleeping an extra 10 minutes or leaving work 30 minutes early) may improve your chances of skipping the chocolate cake at dessert.
In keeping with the muscle model of willpower, research suggests that you can also increase your self-control through regular exertion over time. Although repeated self-denial drains resolve in an immediate sense, it is possible to build endurance through the routine practice of self-control over time. When people engage in daily exercises of self-control, or focus efforts to enhance willpower in one area (e.g., spending), they show gradual improvements in their ability to resist impulses, and these benefits extend to tasks that are unrelated (e.g., studying or household chores).
Lacking the discipline to start your own self-control regimen? There is still hope. For those seeking a small but simple boost in willpower, other studies show that a cool glass of lemonade (with sugar) can replenish glucose in the bloodstream and (at least temporarily) rejuvenate one's resolve. Other quick fixes include a dose of laughter, monetary incentives, and an emphasis on social goals.
In a world where temptations seem to lurk around every corner, it may be prudent to take a converging methods approach to maintaining and improving self-control, with daily practice, a good sense of humor, the occasional financial incentive, and, if the spirit moves you, a divine reminder. And don't forget to indulge in a chocolate chip cookie every once in a while - that small indulgence may be just what you need to prevent a big misstep.
Judging Size
As both the midget in the country of Brobdingnag and the giant on the island of Lilliput, Lemuel Gulliver—the protagonist of Jonathan Swift's Gulliver's Travels—experienced firsthand that size is relative. As we cast a neuroscientific light on this classic book, it seems clear to us that Swift, a satirist, essayist and poet, knew a few things about the mind, too. Absolute size is meaningless to our brain: we gauge size by context. The same medium-sized circle will appear smaller when surrounded by large circles and bigger when surrounded by tiny ones, a phenomenon discovered by German psychologist Hermann Ebbinghaus. Social and psychological context also causes us to misperceive size. Recent research shows that spiders appear larger to people who suffer from arachnophobia than to those who are unafraid of bugs and that men holding weapons seem taller and stronger than men who are holding tools. In this article, we present a collection of illusions that will expand your horizons and shrink your confidence in what is real. Try them out for size!
Small Change
Do you see tiny objects photographed with a macro lens? Look again. This remarkable illusion combines tilt-shift photography—in which the photographer uses selective focus and a special lens or tilted shot angle to make regular objects look toy-sized—with the strategic placement of a giant coin. Art designers Theo Tveterås and Lars Marcus Vedeler, from the Skrekkøgle group, created the enormous 50-cent euro coin from painted and lacquered wood at a 20:1 scale.
Barbie Trashes Her Dream House
At first sight, they look like real-life scenes from the television show Hoarders, precleanup. In reality, they are photographs of 1:6 scale dioramas by St. Louis–born artist Carrie M. Becker. She makes the cardboard boxes, garbage bags and other trash herself. The furniture and tiny objects are from Barbie's dream house and a Japanese miniatures company called Re-Ment. Becker filths up the rooms with actual dirt collected from the filter of a DustBuster, using the occasional Re-Ment meatball to simulate dog poop on the floor. When she photographs the scenes without an external reference, our brain relies on our everyday experience and assumes that the minuscule objects are life size. Only in proximity to an extraneous, actual-size object does the illusion fail.
Supersize Me
You can look 10 pounds thinner with a well-known slimming trick: vertical lines elongate your shape and give you a more svelte appearance, right? Wrong! Vision scientists Peter Thompson and Kyriaki Mikellidou of the University of York in England say instead that it is time to ditch your vertical-striped wardrobe and invest in some horizontal-striped outfits. They found that vertical stripes on clothing make the wearer appear fatter and shorter than horizontal stripes do. Notice that the vertical-striped lady seems to have wider hips than the horizontal-striped model in the accompanying cartoons. The phenomenon is based on the Helmholtz illusion, in which a square made up of horizontal lines appears to be taller and narrower than an identical square made of vertical lines. The original report from 1867 of this illusion contained the intriguing reflection that ladies' frocks with horizontal stripes make the figure look taller. Because the remark ran counter to contemporary popular belief, the York researchers decided to put it to the test, finding that 19th-century German physicist and physician Hermann von Helmholtz did indeed have a great eye for fashion.
Full Moon
The full moon rising on the horizon appears to be massive. Hours later, when the moon is high overhead, it looks much smaller. Yet the disk that falls on your retina is not smaller for the overhead moon than it is for the rising moon. So why does the overhead moon seem smaller? One answer is that your brain infers the larger size of the rising moon because you see it next to trees, hills or other objects on the horizon. Your brain literally enlarges the moon to fit the context. Look for this effect the next time you see the moon in real life.
Objects project smaller images on our retinas as they move away from us, which can make it hard to decide if an item is truly small or just far away (as we see in this photograph). Forced perspective photography uses this ambiguity to great effect, while eliminating many of the habitual strategies that our brain uses to distinguish size from distance, such as stereopsis (our visual system can calculate the depth in a scene from the slight differences between our left and right retinal images) and motion parallax (as we move, objects closer to us move farther across our field of view than distant objects do).
Tall and Venti
Is your cuppa joe half empty or half full? It depends on your outlook—and on a little twist on the Jastrow illusion, named after Polish-born American psychologist Joseph Jastrow. In this classic illusion, two identical arches positioned in a certain configuration appear to have very different lengths. Magician Greg Wilson and writer and producer David Gripenwaldt realized that Starbucks coffee sleeves have the perfect shape for an impromptu demonstration of the Jastrow illusion, so now you can amaze your office mates at your next coffee break. All you need to do is align the coffee sleeves as in the accompanying photograph and—presto!—your tall cup sleeve is now venti-sized! Your brain compares the upper arch's lower right corner with the lower arch's upper right corner and concludes, incorrectly, that the upper sleeve is shorter than the lower sleeve. We would like to thank magician Victoria Skye for her demonstration of the Jastrow illusion with Starbucks coffee sleeves.
Card Magic Tricks
Think of a playing card. Got one in mind?
Although it may have felt like a free choice, think again: Most people choose one of only four cards, out of a deck of 52. For now, remember your card — we’ll return to it later.
For thousands of years, magicians have amazed audiences by developing and applying intuitions about the mind. Skilled magicians can manipulate memories, control attention, and influence choices. But magicians rarely know why these principles work. Studying magic could reveal the mechanisms of the mind that enable these principles, uncovering the why rather than just the how.
Some of these principles, such as illusions and misdirection, have recently led to interesting discoveries. For example, in one study, a magician threw a ball into the air a few times. On the third throw, however, he only pretended to throw it. Two thirds of the participants reported seeing the ball vanish in mid-air, even though it never left his hand. The participants saw something amazing — something that never actually happened.
Another example is misdirection, where the magician hides the secret by manipulating what the audience perceives or thinks. One study tracked participants’ eye movements while showing them a vanishing cigarette trick. Even if participants looked directly at the secret move, they did not notice it if their attention was directed elsewhere. They looked, but did not see, thanks to the magician’s misdirection.
Other principles of magic involve card tricks. Magicians can often influence people to choose a particular card from a deck, or even know which card people will choose when asked to think of one. Studying these phenomena could help us learn about the mind, as did the study of illusions and misdirection.
But before we can understand card magic, we have to understand exactly how people perceive the cards themselves. To do this, I teamed up with another researcher and magician, Alym Amlani, as well as professor Ronald Rensink at the University of British Columbia. We applied well-known techniques from vision science to measure how well people see, remember, like, and choose each of the 52 cards in a standard deck. For example, people saw cards quickly presented one after another on a computer while they searched for a target card; their accuracy indicated the card’s visibility. To measure choice, we asked over a thousand people to either name or visualize a card, then recorded their selections.
Measuring these factors allowed us to test magicians’ intuitions about different cards. Our results confirmed several of these intuitions. For example, magicians believe that people treat the Ace of Spades and Queen of Hearts differently from other cards. Sure enough, accuracy for detecting and remembering was highest for the Ace of Spades, and both cards were among the most liked and most often chosen. Other cards chosen frequently were Sevens and Threes, consistent with other studies on how people choose digits.
Magicians also believe they know which cards people are least likely to choose. Now consider: Which card do you think people will name the least often?
Many magicians believe the answer is a mid-valued Club, like the Six of Clubs. Others appear to share that belief; hecklers sometimes end up choosing the Six during magic tricks. In fact, during pilot testing, when asked to name a card, several people smugly asserted, “The Six of Clubs!”, perhaps trying to act unpredictably. But by doing so, they in fact acted more predictably. As it turned out, however, it was the black Nines that were chosen the least. Of the 1150 selections people made in our experiment, these cards were chosen only four times.
Several other common beliefs were also disproven. For example, magicians often say that when asked to name a card, women choose the Queen of Hearts more than men do. In our sample, we found the opposite: men chose the Queen of Hearts more than women did, and women chose the King of Hearts more than men did.
Other results appeared to be completely new. For example, people detected most cards equally well, except for the Six of Hearts and Diamonds, which seemed to be misreported more than any other cards. In other words, people saw red Sixes that were not there. Also, women seemed to prefer lower number cards, and men preferred higher ones. We don’t know why.
A final interesting result was that the exact wording of the question seemed to influence which cards people chose. When asked to name a card, over half of the people chose one of four cards: the Ace of Spades (25%), or the Queen (14%), Ace (6%), or King (6%) of Hearts. If you’re like most people, you may have chosen one of these cards when asked at the beginning of this article. (A full list of cards and their frequencies is also available.)
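As a quick arithmetic check, the frequencies quoted above add up as claimed. The short sketch below simply restates the numbers reported in the text, for the naming question and for the rarely chosen black Nines; nothing here goes beyond those reported figures.

```python
# The naming frequencies quoted above, restated as a quick check.
named = {
    "Ace of Spades": 0.25,
    "Queen of Hearts": 0.14,
    "Ace of Hearts": 0.06,
    "King of Hearts": 0.06,
}
print(round(sum(named.values()), 2))  # 0.51 -> just over half of responses

# And the rarest choices: the two black Nines accounted for only
# 4 of the 1,150 selections made in the experiment.
print(round(4 / 1150, 4))  # 0.0035 -> about 0.35% of all choices
```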
But when asked to visualize a card, people seemed to choose the Ace of Hearts more often. In our sample, they chose it almost twice as often when asked to visualize (11%) rather than name (6%) a card. Perhaps something about the visualization process makes people more likely to think of this particular card.
Systematic studies such as these can help form the basis of a psychology of card magic. Magicians can improve their tricks by knowing which cards people like the best or choose the most. Meanwhile, psychologists can follow up on unexpected findings to understand why people may misreport seeing red Sixes or why the wording of a question may bring different cards to mind.
And this is only the beginning. Applying these results, we can uncover the mechanisms behind the principles of card magic. If magicians can influence the audience’s decisions, what factors enable this influence? Why do people still feel like they have a free choice? Answers to these questions could provide new insights into persuasion, marketing, and decision making. Ultimately, we hope to develop a science of magic, where almost any trick can be understood in terms of its underlying psychological mechanisms. Such a science can keep the secrets of magic, while revealing the secrets of the mind.
The Reverse Psychology of Temptation
If you want to help someone stick to a decision, try tempting him out of it.
Published on August 6, 2012 by Peter Bregman in How We Work
"Oh this is delicious, Peter. The ice cream is homemade, the perfect consistency, And this lemon cookie on top, mmmmm. Are you sure you don't want some?"
Tom* smiled devilishly as he reached across the table to hand me a spoon. Tom is my client, the CEO of a $900 million company. I was in San Francisco to run a two-day offsite meeting for him and his leadership team. We've worked together for almost a decade and he's become a close, trusted friend.
We were at Greens in San Francisco, a vegetarian restaurant Tom had chosen because he had seen their cookbooks on my shelf in New York and knew I would love it.
Tom was teasing me because earlier in the meal I told him I was off sugary desserts. There's no medical reason or necessity for me to avoid sugar; I simply feel better when I'm not eating it. But he's seen me eat large quantities of sugary treats in the past and knows my willpower can be weak.
"It does look good and I'm glad you're enjoying it," I said, "but you're on your own. There's no chance I'm eating any."
"C'mon Peter, these desserts are healthy, and all we've eaten is vegetables anyway. It would be a real missed opportunity if you didn't at least taste the desserts at Greens; it's your favorite kind of food."
He took a bite from a second dessert he had ordered just to tantalize me — a berry pie — and rolled his eyes in mock ecstasy, "Ooh, this is good. And it's basically just fruit. Go ahead, have just a bite." As he edged it closer to my side of the table, the red caramelized berries dripped juice over the side of the plate.
The reasons to taste the desserts were compelling. Even putting aside the fact that Tom is a client and there's always some pressure to please clients, his rationalizations were the same rationalizations that were floating inside my head.
But here's the interesting thing: the more he pressured me to eat dessert, the stronger my resolve not to eat dessert grew.
My reaction caught me off guard and offered me a surprising strategy for helping people sustain change: if you want to help someone stick to a decision, try tempting him out of it. In other words, enticing someone to break a commitment can be a great tool to help him maintain his commitment.
Here's why: Going into the dinner, I had one reason I didn't want to eat dessert. But Tom's taunting gave me another reason: I was embarrassed to break my commitment in the face of his teasing. I didn't want to be the guy who caves in to peer pressure.
Maybe it's just my rebellious nature, but when my wife Eleanor reminds me that I don't really want to eat that cookie in my hand, I quickly try to stuff it in my mouth before she can stop me. Even though I've asked her to help me, my feeling is, "I'll eat whatever I want to eat!" It becomes a fun game, a challenge. Somehow, when she's helping me, I become a little less accountable.
But when Tom was egging me on, the tables were turned. I was fully responsible for my own actions. I knew I was on my own. And I also knew that the stakes were high; if I ate the dessert I would never live it down. The brilliance of the psychology is that Tom made it more fun — and free-spirited — to not eat dessert. And successfully withstanding his pressure built my confidence in my commitment.
This approach has broad application. Do you have a colleague who wants to speak less in meetings? Try egging her on. Someone who wants to leave work at a decent time? Prod him at 5pm with his incomplete to-do list. A spouse who's trying to stay off email at night? Dangle her BlackBerry in front of her at bedtime.
There are two conditions necessary to make this an effective strategy and keep it good-natured: The commitment the person wants to make needs to be self-motivated and the person doing the ribbing needs to be a trusted friend who doesn't abuse positional power.
What happens when the prodding is over? It turns out that the motivating impact of that dinner has lasted long after dinner was done. Usually, offsite meetings are particularly dangerous for me as far as sugar consumption is concerned. But this time I didn't eat any sugar during the meeting and I haven't eaten any since. It's been a month since I stopped eating sugar — a month that included a week-long vacation with my wife Eleanor in France — a month filled with opportunities to eat delicious-looking sugary treats.
But each time I'm tempted, I pause, remembering that dinner with Tom, and I think "if I didn't eat dessert then — with all that pressure and temptation and lots of good reasons to eat dessert — why would I eat it now?"
A picture inflates the perceived truth of true and false claims
Trusting research over their guts, scientists in New Zealand and Canada examined the phenomenon that Stephen Colbert, comedian and news satirist, calls “truthiness” — the feeling that something is true.
In four different experiments they discovered that people believe claims are true, regardless of whether they actually are true, when a decorative photograph appears alongside the claim.
“We wanted to examine how the kinds of photos people see every day — the ones that decorate newspaper or TV headlines, for example — might produce ‘truthiness,’” said lead investigator Eryn J. Newman of Victoria University of Wellington, New Zealand. “We were really surprised by what we found.”
In a series of four experiments in both New Zealand and Canada, Newman and colleagues showed people a series of claims such as, “The liquid metal inside a thermometer is magnesium” and asked them to agree or disagree that each claim was true. In some cases, the claim appeared with a decorative photograph that didn’t reveal if the claim was actually true — such as a thermometer. Other claims appeared alone.
When a decorative photograph appeared with the claim, people were more likely to agree that the claim was true, regardless of whether it was actually true.
Across all the experiments, the findings fit with the idea that photos might help people conjure up images and ideas about the claim more easily than if the claim appeared by itself. “We know that when it’s easy for people to bring information to mind, it ‘feels’ right,” said Newman.
The research has important implications for situations in which people encounter decorative photos, such as in the media or in education. “Decorative photos grab people’s attention,” Newman said. “Our research suggests that these photos might have unintended consequences, leading people to accept information because of their feelings rather than the facts.”
We added a photo to make you believe this post — Ed.
More Magic
Every three years, the world’s greatest illusionists gather to compete at the ‘Magic Olympics’. Here they face the toughest challenge of their lives . . . fooling an audience made up of other magicians.
On a dank and unseasonably chilly weekday evening, few pleasure seekers are walking the streets of Blackpool. Britain’s most popular seaside resort feels dilapidated and unloved. But at the Ruskin Hotel there’s a crowd spilling out on to the pavement; inside they are standing four deep at the bar, squeezed into tight, sweaty clusters. All around, there are people laughing, clapping and shaking their heads in disbelief. It is close to midnight in the most magical place in the world.
One of those holding court is Garrett Thomas, a fast-talking New Yorker with a goatee and an earring. He shows me three cards, one of which is a red queen. He places it face down in my hand, but when he turns it over again it has changed. “The problem is you have two eyes and I have three cards,” he says, as the three become six.
Pulling a Rubik’s Cube from his pocket, he starts solving it with one hand. He tosses it into the air and it appears to complete itself before landing back in his hand. He turns the sides of the puzzle again and then mutters about solving it “in the blink of an eye”. I must have blinked because there it is, complete again. I gape like the untutored layman that I am, but the crowd packed into the pub is also impressed. And that is striking, because they are the toughest audience in magic — they are all magicians.
Last week Blackpool hosted the World Championships of Magic or, as the rabbit-out-of-hats fraternity refers to it, the “Olympics of Magic”. The event, with delegates from 65 countries, is held every three years, and this is the first time it has been staged in Britain. Unlike the sporting Olympics, this is not an opportunity for the general public to see the greatest showmen on earth. As befits the secretive world of magic, the audience at the Winter Gardens (and afterwards in the pubs and clubs) is composed of professionals who already know every trick in the book.
Alex Stone, the magician and author, says the championships are “like the Roman Colosseum for magicians”, and those who triumph at them are revered as royalty by this mysterious subculture, even if their names are largely unfamiliar to the general public.
Stone, who performed at the 2006 games in Stockholm, has caused uproar with his new book, Fooling Houdini, about his journey deep into the magic kingdom. The book exposes some of the methodology behind popular routines and is published in Britain with perfect conjuror’s timing, just as his peers are gathering by the seaside.
Ricky Jay, a well-known magician, deplored the “gratuitous exposures” in the book. Stone has been excommunicated from the Society of American Magicians and has been shunned by some illusionists. “There is a small but vocal minority of magicians who get real mad when you expose secrets,” Stone says. “I have yet to wake up to find a severed rabbit head at the foot of my mattress.” The book is not a how-to guide, but it delves into the psychology and cognitive science behind magic. For example, he explores how “inattentional blindness” allows a magician to produce an eight of clubs and a nine of spades from a deck of cards without audience members noticing that the cards he had shown them earlier were the nine of clubs and the eight of spades. Aspiring pickpockets will enjoy his explanation of how to misdirect someone’s attention while removing their watch.
“I think that extreme secrecy policies, wherein no discussion of methodology behind magic tricks is allowed, is harmful to magic in the long run,” Stone says during a telephone call from New York. “It prevents the audience from appreciating the often jaw-dropping skill that goes into magic and doesn’t allow the spectators to distinguish between a trick that is truly original and skilful and one that is a 100-year-old easy trick.”
Some magicians, he adds, “take themselves a little bit too seriously in mythologising themselves as members of a secret cult”.
The championships, run by the Fédération Internationale des Sociétés Magiques, are split into two categories: stage magic and close-up magic. Some of the stage illusions are elaborate and sophisticated, but in an age of computer-generated special effects all the dim lighting and dry ice seems less impressive than the sleights of hand and ingenious methods for misdirecting the audience’s attention displayed by the close-up magicians.
This was the category that Stone entered, and he starts the book with a squirm-inducing description of his routine. Obie O’Brien, who is running the close-up competition in Blackpool this week, chuckles at the memory of it. “Everybody in the audience was laughing when he went below the table to produce the cards,” recalls O’Brien, who was the foreman of the jury that day. “Once, maybe. But twice?” He shakes his head.
“I haven’t read the book yet,” he says gravely. “There shouldn’t be any exposure in magic. Exposure is a bad thing for magic. A guy may have paid $15,000 for an illusion and someone exposes the illusion?” He grimaces.
Danny Hunt, who is a judge in the stage magic competition, agrees. “As a magician the last thing you want is people telling secrets to the lay public,” he says. “You protect your secrets. Some are hundreds of years old. I think it’s a shame. But the good thing is that magic evolves.”
As amazing as many of the performances are, even an ignorant punter like me is not taken in by everything in Blackpool. In one of the gala performances, a flame-haired woman sitting on a sofa is apparently cut in half behind a sheet and her top half moves to the other end of the seat while her legs stay put. Unfortunately the use of a doppelgänger is revealed when a second redhead pops up inadvertently from behind the sofa and then hastily ducks out of sight.
There are mutterings in the audience, but they are nothing compared with the pantomime booing that greets German magician Topas when he appears at a show dressed as The Masked Magician, the illusionist who exposed trade secrets in his 1990s TV shows. Topas performs a seemingly old trick of throwing a sheet over a woman on a chair and making her disappear, but having already turned the chair round to reveal the cavity behind the chair in which his glamorous assistant can hide, he somehow pulls off the trick with the hole facing the audience.
It’s a combination of entertainment and technique that the judges are looking for, O’Brien says. “If you can fool the judges you have a good chance of a gold, silver or bronze medal.” A fooled judge is one who cannot spot the method used to carry off the trick. “I might work out my method but it might not be his method,” O’Brien says. “David Copperfield made the Statue of Liberty disappear. I might have a method to do that, but it is not the same as his. When the audience clap and stand you know you have a hell of a performance.”
One who gets a standing ovation is Francisco Sanchez, an 18-year-old Spanish card magician whose hands make cards vanish from one side of the table and reappear on the other with such elegance and deftness that even hardened veterans nod their heads approvingly. “Three years’ work for a ten-minute show,” he says backstage afterwards.
Marc Oberon won the parlour magic prize in the close-up category at the 2009 tournament in Beijing with a routine that included turning an edible apple into one covered in gold leaf. He has since performed on Penn and Teller’s TV show and is in demand around the world. He tells me that the apple trick took three years from the original vision through design, choreography and practice to performance. “People have no idea the amount of time it takes to be able to do a good piece of magic that lasts a few seconds,” he says.
This year, the overall Grand Prix winners were Yu Ho Jin (stage) and Yann Frisch (close-up). Hunt says they deserve to be recognised alongside the other Olympians seeking glory this summer. “It’s the Olympics of Magic! When you see a guy get a standing ovation, as much work has gone into that as an athlete has put into doing what he does. It is years of dedication and having no other life. It’s physically and mentally demanding and there’s pressure when you perform in front of 3,000 people who think they know what you are going to do.”
This dedication to innovation is not without its monetary rewards. Many of the performers sell their tricks at stands in the exhibition hall. For £3,000 I could learn how to make a woman levitate, or buy a “gimmicked” strait jacket for £200. Some of the technology on display would make you think again about the mentalists who miraculously know the number you have written on a notepad on the other side of the stage.
A charming Irish magician, Pat Fallon, asks: “What is the most beautiful magic you can imagine?” While I’m fumbling for an answer he says: “I’ll show you.” He takes what appear to be a handful of paper scraps cut from magazines, folds the pile in half and before my eyes they become a wad of £20 notes.
Simple, but effective. Like the tricks of Thomas in the pub. “Magic is a religion for me,” he says. “There is never a time when I am not a magician. I’m not an actor, I’m a magician who happens to be doing an act. The appeal to the public is that it shows us the world is how you want to perceive it. Magicians just like to remind people of that.” He hands me his business card and signs it with his first name, Garrett. But when I turn it the other way up the signature is his surname, Thomas. He raises an eyebrow. I turn to show the photographer with me and when I turn back Garrett Thomas has vanished.
Unconscious Decision Making
We humans think we make all our decisions to act consciously and willfully. We all feel we are wonderfully unified, coherent mental machines and that our underlying brain structure must reflect this overpowering sense. It doesn’t. No command center keeps all other brain systems hopping to the instructions of a five-star general. The brain has millions of local processors making important decisions. There is no one boss in the brain. You are certainly not the boss of your brain. Have you ever succeeded in telling your brain to shut up already and go to sleep?
Even though we know that the organization of the brain is made up of a gazillion decision centers, that neural activities going on at one level of organization are inexplicable at another level, and that there seems to be no boss, our conviction that we have a “self” making all the decisions is not dampened. It is a powerful illusion that is almost impossible to shake. In fact, there is little or no reason to shake it, for it has served us well as a species. There is, however, a reason to try to understand how it all comes about. If we understand why we feel in charge, we will understand why and how we make errors of thought and perception.
When I was a kid, I spent a lot of time in the desert of Southern California—out in the desert scrub and dry bunchgrass, surrounded by purple mountains, creosote bush, coyotes, and rattlesnakes. The reason I am still here today is because I have nonconscious processes that were honed by evolution.
I jumped out of the way of many a rattlesnake, but that is not all. I also jumped out of the way of grass that rustled in the wind. I jumped, that is, before I was consciously aware that it was the wind that rustled the grass, rather than a rattler. If I had had only my conscious processes to depend on, I probably would have jumped less but been bitten on more than one occasion.
Conscious processes are slow, as are conscious decisions. As a person is walking, sensory inputs from the visual and auditory systems go to the thalamus, a type of brain relay station. Then the impulses are sent to the processing areas in the cortex, next relayed to the frontal cortex. There they are integrated with other higher mental processes, and perhaps the information makes it into the stream of consciousness, which is when a person becomes consciously aware of the information (there is a snake!). In the case of the rattler, memory then kicks in the information that rattlesnakes are poisonous and what the consequences of a rattlesnake bite are. I make a decision (I don’t want it to bite me), quickly calculate how close I am to the snake, and answer a question: Do I need to change my current direction and speed? Yes, I should move back. A command is sent to put the muscles into gear, and they then do it.
All this processing takes a long time, up to a second or two. Luckily, all that doesn’t have to occur. The brain also takes a nonconscious shortcut through the amygdala, which sits under the thalamus and keeps track of everything. If a pattern associated with danger in the past is recognized by the amygdala, it sends an impulse along a direct connection to the brain stem, which activates the fight-or-flight response and rings the alarm. I automatically jump back before I realize why.
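To caricature the two routes just described, here is a toy sketch in which each pathway is reduced to a single latency. The specific numbers are assumptions on my part (only the "second or two" for the conscious route comes from the text), but they show why the jump can arrive before the explanation does.

```python
# A toy caricature of the two routes described above. The numbers are
# assumptions, except that the text puts the conscious route at
# "up to a second or two"; the shortcut latency is simply assumed
# to be much faster.
routes = {
    "amygdala shortcut (thalamus -> amygdala -> brain stem)": 0.1,  # assumed ~100 ms
    "conscious route (thalamus -> cortex -> frontal cortex)": 1.5,  # "a second or two"
}

def first_to_finish(routes):
    """Return the route with the shortest latency; that one drives the jump."""
    return min(routes, key=routes.get)

print(first_to_finish(routes))
# The shortcut wins, so the jump happens before the conscious system
# has worked out whether there really is a snake.
```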
If you were to have asked me why I had jumped, I would have replied that I thought I’d seen a snake. The reality, however, is that I jumped way before I was conscious of the snake. My explanation is from post hoc information I have in my conscious system. When I answered that question, I was, in a sense, confabulating—giving a fictitious account of a past event, believing it to be true.
I confabulated because our human brains are driven to infer causality. They are driven to make sense out of scattered facts. The facts that my conscious brain had to work with were that I saw a snake, and I jumped. It did not register that I jumped before I was consciously aware of it.
In truth, when we set out to explain our actions, they are all post hoc explanations using post hoc observations with no access to nonconscious processing. Not only that, our left brain fudges things a bit to fit into a makes-sense story. Explanations are all based on what makes it into our consciousness, but actions and the feelings happen before we are consciously aware of them—and most of them are the results of nonconscious processes, which will never make it into the explanations. The reality is, listening to people’s explanations of their actions is interesting—and in the case of politicians, entertaining—but often a waste of time.
With so many systems going on subconsciously, why do we feel unified? I believe the answer to this question resides in the left hemisphere and one of its modules that we happened upon during our years of research, particularly while studying split-brain patients.
Some people with intractable epilepsy undergo split-brain surgery. In this procedure, the large tract of nerves that connects the two hemispheres, the corpus callosum, is severed to prevent the spread of electrical impulses. Afterward, the patients appear completely normal and seem entirely unaware of any changes in their mental process. But we discovered that after the surgery, any visual, tactile, proprioceptive, auditory, or olfactory information that was presented to one hemisphere was processed in that half of the brain alone, without any awareness on the part of the other half. Because tracts carrying sensory information cross over the midline inside the brain, the right hemisphere processes data from the left half of the world, and the left hemisphere handles the right.
The left hemisphere specializes in speech, language, and intelligent behavior, and a split-brain patient’s left hemisphere and language center has no access to sensory information if it is fed only to the right brain. In the case of vision, the optic nerves leading from each eye meet inside the brain at what is called the optic chiasm. Here, each nerve splits in half; the medial half (the inside track) of each crosses the optic chiasm into the opposite side of the brain, and the lateral half (that on the outside) stays on the same side. The parts of both eyes that attend to the right visual field send information to the left hemisphere and information from the left visual field goes to and is processed by the right hemisphere.
More than a few years into our experiments, we were working with a group of split-brain patients on the East Coast. We wondered what they would do if we sneaked information into their right hemisphere and told the left hand to do something.
We showed a split-brain patient two pictures: To his right visual field, a chicken claw, so the left hemisphere saw only the claw picture, and to the left visual field, a snow scene, so the right hemisphere saw only that. He was then asked to choose a picture from an array placed in full view in front of him, which both hemispheres could see. His left hand pointed to a shovel (which was the most appropriate answer for the snow scene) and his right hand pointed to a chicken (the most appropriate answer for the chicken claw).
We asked why he chose those items. His left-hemisphere speech center replied, “Oh, that’s simple. The chicken claw goes with the chicken,” easily explaining what it knew. It had seen the chicken claw. Then, looking down at his left hand pointing to the shovel, without missing a beat, he said, “And you need a shovel to clean out the chicken shed.” Immediately, the left brain, observing the left hand’s response without the knowledge of why it had picked that item, put it into a context that would explain it. It knew nothing about the snow scene, but it had to explain the shovel in front of his left hand. Well, chickens do make a mess, and you have to clean it up. Ah, that’s it! Makes sense.
What was interesting was that the left hemisphere did not say, “I don’t know,” which was the correct answer. It made up a post hoc answer that fit the situation. It confabulated, taking cues from what it knew and putting them together in an answer that made sense.
We called this left-hemisphere process the interpreter. It is the left hemisphere that engages in the human tendency to find order in chaos, that tries to fit everything into a story and put it into a context. It seems driven to hypothesize about the structure of the world even in the face of evidence that no pattern exists.
Our interpreter does this not only with objects but with events as well. In one experiment, we showed a series of about 40 pictures that told a story of a man waking up in the morning, putting on his clothes, eating breakfast, and going to work. Then, after a bit of time, we tested each viewer. He was presented with another series of pictures. Some of them were the originals, interspersed with some that were new but could easily fit the same story. We also included some distracter pictures that had nothing to do with the story, such as the same man out playing golf or at the zoo. What you and I would do is incorporate both the actual pictures and the new, related pictures and reject the distracter pictures. In split-brain patients, this is also how the left hemisphere responds. It gets the gist of the story and accepts anything that fits in.
The right hemisphere, however, does not do this. It is totally veridical and identifies only the original pictures. The right brain is very literal and doesn’t include anything that wasn’t there originally. And this is why your three-year-old, embarrassingly, will contradict you as you embellish a story. The child’s left-hemisphere interpreter, which is satisfied with the gist, is not yet fully in gear.
The interpreter is an extremely busy system. We found that it is even active in the emotional sphere, trying to explain mood shifts. In one of our patients, we triggered a negative mood in her right hemisphere by showing a scary fire safety video about a guy getting pushed into a fire. When asked what she had seen, she said, “I don’t really know what I saw. I think just a white flash.” But when asked if it made her feel any emotion, she said, “I don’t really know why, but I’m kind of scared. I feel jumpy, I think maybe I don’t like this room, or maybe it’s you.” She then turned to one of the research assistants and said, “I know I like Dr. Gazzaniga, but right now I’m scared of him for some reason.” She felt the emotional response to the video but had no idea what caused it.
The left-brain interpreter had to explain why she felt scared. The information it received from the environment was that I was in the room asking questions and that nothing else was wrong. The first makes-sense explanation it arrived at was that I was scaring her. We tried again with another emotion and another patient. We flashed a picture of a pinup girl to her right hemisphere, and she snickered. She said that she saw nothing, but when we asked her why she was laughing, she told us we had a funny machine. This is what our brain does all day long. It takes input from other areas of our brain and from the environment and synthesizes it into a story. Facts are great but not necessary. The left brain ad-libs the rest.
The view in neuroscience today is that consciousness does not constitute a single, generalized process. It involves a multitude of widely distributed specialized systems and disunited processes, the products of which are integrated by the interpreter module. Consciousness is an emergent property. From moment to moment, different modules or systems compete for attention, and the winner emerges as the neural system underlying that moment’s conscious experience. Our conscious experience is assembled on the fly as our brains respond to constantly changing inputs, calculate potential courses of action, and execute responses like a streetwise kid.
But we do not experience a thousand chattering voices. Consciousness flows easily and naturally from one moment to the next with a single, unified, coherent narrative. The action of an interpretive system becomes observable only when the system can be tricked into making obvious errors by forcing it to work with an impoverished set of inputs, most obviously in the split-brain patients.
Our subjective awareness arises out of our dominant left hemisphere’s unrelenting quest to explain the bits and pieces that pop into consciousness.
What does it mean that we build our theories about ourselves after the fact? How much of the time are we confabulating, giving a fictitious account of a past event, believing it to be true? When thinking about these big questions, one must always remember that all these modules are mental systems selected for over the course of evolution. The individuals who possessed them made choices that resulted in survival and reproduction. They became our ancestors.
Bacteria and Mind Control
The thought of parasites preying on your body or brain very likely sends shivers down your spine. Perhaps you imagine insectoid creatures bursting from stomachs or a malevolent force controlling your actions. These visions are not just the night terrors of science-fiction writers—the natural world is replete with such examples.
Take Toxoplasma gondii, the single-celled parasite. When mice are infected by it, they suffer the grave misfortune of becoming attracted to cats. Once a cat inevitably consumes the doomed creature, the parasite can complete its life cycle inside its new host. Or consider Cordyceps, the parasitic fungus that can grow into the brain of an insect. The fungus can force an ant to climb a plant before consuming its brain entirely. After the insect dies, a mushroom sprouts from its head, allowing the fungus to disperse its spores as widely as possible.
Gut bacteria may influence thoughts and behaviour
The human gut contains a diverse community of bacteria that colonize the large intestine in the days following birth and vastly outnumber our own cells. These so-called gut microbiota constitute a virtual organ within an organ, and influence many bodily functions. Among other things, they aid in the uptake and metabolism of nutrients, modulate the inflammatory response to infection, and protect the gut from other, harmful micro-organisms. A study by researchers at McMaster University in Hamilton, Ontario now suggests that gut bacteria may also influence behaviour and cognitive processes such as memory by exerting an effect on gene activity during brain development.
Jane Foster and her colleagues compared the performance of germ-free mice, which lack gut bacteria, with normal animals on the elevated plus maze, which is used to test anxiety-like behaviours. The apparatus is plus-shaped and raised off the floor, with two open arms and two arms enclosed by walls but open on top. Ordinarily, mice will avoid open spaces to minimize the risk of being seen by predators, and spend far more time in the closed than in the open arms when placed in the elevated plus maze.
This is exactly what the researchers found when they placed the normal mice into the apparatus. The animals spent far more time in the closed arms of the maze and rarely ventured into the open ones. The germ-free mice, on the other hand, behaved quite differently – they entered the open arms more often, and continued to explore them throughout the duration of the test, spending significantly more time there than in the closed arms.
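For the curious, here is a minimal sketch of how such a score might be tallied from position samples. The data below are entirely made up and stand in for whatever tracking the researchers actually used; the point is only that the behavioural measure boils down to the fraction of time spent in the open arms.

```python
# Entirely made-up position samples; each entry records which arm the
# mouse occupied at one sampling instant during the test.
samples = ["closed", "closed", "open", "closed", "open",
           "open", "closed", "open", "open", "open"]

open_fraction = samples.count("open") / len(samples)
print(f"time in open arms: {open_fraction:.0%}")  # 60% for this fake trace
# Ordinary mice score low on this measure; the germ-free mice in the
# study spent significantly more time in the open arms.
```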
The researchers then examined the animals' brains, and found that these differences in behaviour were accompanied by alterations in the expression levels of several genes in the germ-free mice. Brain-derived neurotrophic factor (BDNF) was significantly up-regulated, and the 5HT1A serotonin receptor sub-type down-regulated, in the dentate gyrus of the hippocampus. The gene encoding the NR2B subunit of the NMDA receptor was also down-regulated in the amygdala.
All three genes have previously been implicated in emotion and anxiety-like behaviours. BDNF is a growth factor that is essential for proper brain development, and a recent study showed that deleting the BDNF receptor TrkB alters the way in which newborn neurons integrate into hippocampal circuitry and increases anxiety-like behaviours in mice. Serotonin receptors, which are distributed widely throughout the brain, are well known to be involved in mood, and compounds that activate the 5HT1A subtype also produce anxiety-like behaviours.
The finding that the NR2B subunit of the NMDA receptor was down-regulated in the amygdala is particularly interesting. NMDA receptors are composed of multiple subunits, but those made up of only NR2B subunits are known to be critical for the development and function of the amygdala, which has a well established role in fear and other emotions, and in learning and memory. Drugs that block these receptors have been shown to block the formation of fearful memories and to reduce the anxiety associated with alcohol withdrawal in rodents.
The idea of cross-talk between the brain and the gut is not new. For example, irritable bowel syndrome (IBS) is associated with psychiatric illness, and also involves changes in the composition of the bacterial population in the gut. But this is the first study to show that the absence of gut bacteria is associated with altered behaviour. Bacteria colonize the gut in the days following birth, during a sensitive period of brain development, and apparently influence behaviour by inducing changes in the expression of certain genes.
"One of the things our data point to is that gut microbiota are very important in the first four weeks of a mouse's life, and I think the processes are translatable [to humans]," says Foster. "I'm getting a lot of attention from paediatricians who want to collaborate to test some of these connections in kids with early onset IBS. Their microbiota profile is wrong, and our results suggest that we have a window up until puberty, during which we can potentially fix this."
Exactly how gut bacteria influence gene expression in the brain is unclear, but one possible line of communication is the autonomic branch of the peripheral nervous system, which controls functions such as digestion, breathing and heart rate. A better understanding of cross-talk within this so-called 'brain-gut axis' could lead to new approaches for dealing with the psychiatric symptoms that sometimes accompany gastrointestinal disorders such as IBS, and may also show that gut bacteria affect function of the mature brain.
More evidence that gut bacteria can influence neuronal signalling has emerged in the past few months. In June, a group led by neuroscientist John Cryan of University College Cork reported that germ-free mice have significantly elevated levels of serotonin in the hippocampus compared to animals reared normally. This was also associated with reduced anxiety, but was reversed when the gut bacteria were restored. And at the General Meeting of the American Society for Microbiology, also in June, researchers from the Baylor College of Medicine in Texas described experiments showing that one bacterial species found in the gut, Bifidobacterium dentium, synthesizes large amounts of the inhibitory neurotransmitter GABA.
SSRIs, the class of antidepressants that includes Prozac, prevent neurons from mopping up serotonin once it has been released, thus maintaining high levels of the transmitter at synapses. And benzodiazepines, a class of anti-anxiety drugs that includes diazepam, mimic the effects of GABA by binding to a distinct site on the GABA-A receptor.
All of this suggests that probiotic formulations that are enriched in specific strains of gut bacteria could one day be used to treat psychiatric disorders. "There's definitely potential on numerous levels, but I do think studies need to be done in a proper, robust manner in representative samples," says Cryan. "Even as an adjunctive therapy for anti-depressants, this could be really important, but first we'll have to figure out which species are going to be beneficial, and how they're doing it."
Microbiota researcher Rob Knight of the University of Colorado, Boulder, agrees that probiotics could potentially be useful. "I find the mouse data convincing but there's not yet direct evidence in humans," he says. "What's needed is longitudinal studies of at-risk individuals to determine whether there are systematic changes in the microbiota that correlate with psychiatric conditions, and double-blind randomized clinical trials. Research-supported, FDA-approved and effective products are likely at minimum 5-10 years off, but given the lax regulation of probiotics, I'm sure that products could be on the shelf tomorrow."
How much do evolutionary stories reveal about the mind?
When Rudyard Kipling first published his fables about how the camel got his hump and the rhinoceros his wrinkly folds of skin, he explained that they would lull his daughter to sleep only if they were always told “just so,” with no new variations. The “Just So Stories” have become a byword for seductively simple myths, though one of Kipling’s turns out to be half true.
The Leopard and the Ethiopian were hungry, the story goes, because the Giraffe and the Zebra had moved to a dense forest and were impossible to catch. So the Ethiopian changed his skin to a blackish brown, which allowed him to creep up on them. He also used his inky fingers to make spots on the Leopard’s coat, so that his friend could hunt stealthily, too—which now seems to be about right, minus the Ethiopian. A recent article in a biology journal approvingly quotes Kipling on the places “full of trees and bushes and stripy, speckly, patchy-blatchy shadows” where cats have patterned coats. The study matched the coloring of thirty-five species to their habitats and habits, which, together with other clues, is hard evidence that cats’ flank patterns mostly evolved through natural selection as camouflage. There are some puzzles—cheetahs have spots, though they prefer open hunting grounds—but that’s to be expected, since the footsteps of evolution can be as hard to retrace as those of a speckly leopard in the forest.
The idea of natural selection itself began as a just-so story, more than two millennia before Darwin. Darwin belatedly learned this when, a few years after the publication of “On the Origin of Species,” in 1859, a town clerk in Surrey sent him some lines of Aristotle, reporting an apparently crazy tale from Empedocles. According to Empedocles, most of the parts of animals had originally been thrown together at random: “Here sprang up many faces without necks, arms wandered without shoulders . . . and eyes strayed alone, in need of foreheads.” Yet whenever a set of parts turned out to be useful the creatures that were lucky enough to have them “survived, being organised spontaneously in a fitting way, whereas those which grew otherwise perished.” In later editions of “Origin,” Darwin added a footnote about the tale, remarking, “We here see the principle of natural selection shadowed forth.”
Today’s biologists tend to be cautious about labelling any trait an evolutionary adaptation—that is, one that spread through a population because it provided a reproductive advantage. It’s a concept that is easily abused, and often “invoked to resolve problems that do not exist,” the late George Williams, an influential evolutionary biologist, warned. When it comes to studying ourselves, though, such admonitions are hard to heed. So strong is the temptation to explain our minds by evolutionary “Just So Stories,” Stephen Jay Gould argued in 1978, that a lack of hard evidence for them is frequently overlooked (his may well have been the first pejorative use of Kipling’s term). Gould, a Harvard paleontologist and a popular-science writer, who died in 2002, was taking aim mainly at the rising ambitions of sociobiology. He had no argument with its work on bees, wasps, and ants, he said. But linking the behavior of humans to their evolutionary past was fraught with perils, not least because of the difficulty of disentangling culture and biology. Gould saw no prospect that sociobiology would achieve its grandest aim: a “reduction” of the human sciences to Darwinian theory.
This was no straw man. The previous year, Robert Trivers, a founder of the discipline, told Time that, “sooner or later, political science, law, economics, psychology, psychiatry, and anthropology will all be branches of sociobiology.” The sociobiologists believed that the concept of natural selection was a key that would unlock all the sciences of man, by revealing the evolutionary origins of behavior.
The dream has not died. “Homo Mysterious: Evolutionary Puzzles of Human Nature” (Oxford), a new book by David Barash, a professor of psychology and biology at the University of Washington, Seattle, inadvertently illustrates how just-so stories about humanity remain strikingly oversold. As Barash works through the common evolutionary speculations about our sexual behavior, mental abilities, religion, and art, he shows how far we still are from knowing how to talk about the evolution of the mind.
Evolutionary psychologists are not as imperialist in their ambitions as their sociobiologist forebears of the nineteen-seventies, but they tend to be no less hubristic in their claims. An evolutionary perspective “has profound implications for applied disciplines such as law, medicine, business and education,” Douglas Kenrick, of Arizona State University, writes in his recent book “Sex, Murder and the Meaning of Life.” The latest edition of a leading textbook, “Evolutionary Psychology: The New Science of the Mind,” by David Buss, of the University of Texas at Austin, announces that an evolutionary approach can integrate the disparate branches of psychology, and is “beginning to transform” the study of the arts, religion, economics, and sociology.
There are plenty of factions in this newish science of the mind. The most influential sprang up in the nineteen-eighties at the University of California, Santa Barbara, was popularized in books by Steven Pinker and others in the nineteen-nineties, and has largely won over science reporters. It focusses on the challenges our ancestors faced when they were hunter-gatherers on the African savanna in the Pleistocene era (between approximately 1.7 million and ten thousand years ago), and it has a snappy slogan: “Our modern skulls house a Stone Age mind.” This mind is regarded as a set of software modules that were written by natural selection and now constitute a universal human nature. We are, in short, all running apps from Fred Flintstone’s not-very-smartphone. Work out what those apps are—so the theory goes—and you will see what the mind was designed to do.
Designed? The coup of natural selection was to explain how nature appears to be designed when in fact it is not, so that a leopard does not need an Ethiopian (or a God) to get his spots. Mostly, it doesn’t matter when biologists speak figuratively of design in nature, or the “purpose” for which something evolved. This is useful shorthand, as long as it’s understood that no forward planning or blueprints are involved. But that caveat is often forgotten when we’re talking about the “design” of our minds or our behavior.
Barash writes that “the brain’s purpose is to direct our internal organs and our external behavior in a way that maximizes our evolutionary success.” That sounds straightforward enough. The trouble is that evolution has to make compromises, since it must work with the materials at hand, often while trying to solve several challenges at once. Any trait or organ may therefore be something of a botch, from the perspective of natural selection, even if the creature as a whole was the best job that could be done in the circumstances. If nature always stuck to simple plans, it would be easier to track the paths of evolution, but nature does not have that luxury.
In theory, if you did manage to trace how the brain was shaped by natural selection, you might shed some light on how the mind works. But you don’t have to know about the evolution of an organ in order to understand it. The heart is just as much a product of evolution as the brain, yet William Harvey figured out how it works two centuries before natural selection was discovered. Neither of the most solid post-Darwinian accounts of mental mechanisms—Noam Chomsky’s work on language and David Marr’s on vision—drew on evolutionary stories.
Going by what Barash has to say about religion, Darwinian thinking isn’t likely to transform our understanding of it anytime soon. We do not even know why we are relatively hairless or why we walk on two legs, so finding the origin of religious belief is a tall order. Undaunted, Barash explores various ways in which religion might have been advantageous for early man, or a consequence of some other advantageous trait. It might, for example, have been a by-product of our curiosity about the causes of natural phenomena, or of our desire for social connection. Or maybe religious beliefs and practices helped people coördinate with others and become less selfish, or less lonely and more fulfilled. Although he does not endorse any of these ideas—how could he, given that there’s no possible way to know after all this time?—Barash concludes that it is “highly likely” that religion owes its origin to natural selection. (He does not explain why; this conclusion seems to be an article of faith.) He also thinks that natural selection is probably responsible for religion’s “perseverance,” which suggests that his knowledge of the subject is a century out of date. Historians and social scientists have found quite a lot to say about why faith thrives in some places and periods but not in others—why, for the first time in human history, there are now hundreds of millions of unbelievers, and why religion is little more than vestigial in countries like Denmark and Sweden. It is hard to see what could be added to these accounts by evolutionary stories, even if they were known to be true.
One problem with trying to reconstruct the growth of the mind from Pleistocene materials is that you would need to know what varieties of mental equipment Stone Age minds already possessed. Even if a plausible-sounding story can be told about how some piece of behavior would have helped early hunter-gatherers survive and reproduce, it may well have become established earlier and for different reasons. Darwin underlined the temptations here when he wrote about the unfused bone in the heads of newborn humans and other mammals, which makes their skulls conveniently elastic. One might conclude that this trait evolved to ease their passage through a narrow birth canal, but it seems to result from the way vertebrate skeletons develop. Birds and reptiles hatch from eggs, yet they, too, have these sutures.
Textbooks in evolutionary psychology have proposed the hypothesis that the fear of spiders is an adaptation shaped by the mortal threat posed by their bites. In other words, we are descended from hominid wusses who thrived because they kept away from spiders. The idea is prompted by evidence that people may be innately primed to notice and be wary of spiders (as we seem to be of snakes). Yet there is no reason to think that spiders in the Stone Age were a greater threat to man than they are now—which is to say, hardly any threat at all. Scientists who study phobias and dislikes have come up with several features of spiders that may be more relevant than their bites, including their unpredictable, darting movements. Natural selection would have played some role in the development of any such general aversions, which may have their origins in distant species, somewhere far back down the line that leads to us. But that’s another story, one that evolutionary psychologists have less interest in telling, because they like tales about early man.
It would be good to know why some people love spiders—there is, inevitably, a Facebook group—while others have a paralyzing phobia, and most of us fall somewhere in between. But, with one large exception, evolutionary psychology has little to say about the differences among people; it’s concerned mainly with human universals, not human variations. Perhaps this is why most psychologists, who tend to relish unusual cases, aren’t yet rushing to have their specialties “integrated” by an evolutionary approach.
The exception is the differences between men and women: evolutionary psychologists are greatly concerned with sex, and with women’s bodies. Barash speculates at length on why women don’t have something similar to chimps’ bright-pink sexual swellings to advertise their most fertile time of the month. There are several ways, he thinks, in which female hominids could have boosted their reproductive success by concealing their time of ovulation. Perhaps it was a game of “keep him guessing to keep him close”: if a male could not tell when his mate was fertile, he would have to stick around for more of the month to insure that any offspring were his and thereby, perhaps, provide better parental care. Among the other possibilities considered—some rejected, many not—are that concealed ovulation gave females more freedom in their choice of mates, perhaps by reducing the frenzy of male competition.
This is all quite entertaining—almost as entertaining as Barash’s romp through eleven evolutionary theories about the “biological pay-off” of the human female orgasm, which unfittingly comes to no gratifying conclusion. But “concealed” ovulation seems to be an example of what George Williams called a nonexistent problem. Barash dismisses, on flimsy grounds, the idea that it is the florid advertisements of chimps that need explaining, and not our lack of them. Yet chimps are the exceptional ones in our family of the great apes, and there’s reason to think that the most recent common ancestor of chimps and humans displayed, at most, only slight swellings around the time of ovulation.
The simplest theory is that these swellings dwindled to nothing after our ancestors began to walk upright, because the costs of advertising ovulation in this way came to outweigh any benefits. Swellings could have made it harder to walk for several days each month, could have required more energy and a greater intake of water, and would be of less use as a signal when you were no longer clambering up trees with your bottom in males’ faces.
A larger difficulty vexes evolutionary psychologists’ sexual speculations in general. Especially on this topic, work in psychology can unwittingly accommodate itself to the folk wisdom and stereotypes of the day.
Darwin built the prejudices of Victorian gentlemen into his account of the evolution of the sexes. He wrote that man reaches “a higher eminence, in whatever he takes up, than woman can attain—whether requiring deep thought, reason, or imagination, or merely the use of the senses and hands,” and he looked to the struggle for mates and the struggle for survival to explain why. He also noted that some of the faculties that are strongest in women “are characteristic of the lower races, and therefore of a past and lower state of civilization.”
These days, what evolutionary psychologists have mainly noted about the sexes is that they look for different things in a mate. The evolutionary psychologists have spent decades administering questionnaires to college students in an effort to confirm their ideas about what sort of partner was desirable in bed before there were beds. “Men value youth and physical attractiveness very highly, while women value wealth and status (though they don’t mind physical attractiveness too),” Dario Maestripieri, a behavioral biologist at the University of Chicago, bluntly summarizes in his new book, “Games Primates Play.” It is also said that men are much more interested in casual sex; that sexual jealousy works differently for men and women (men are more concerned with sexual fidelity, and women with emotional fidelity); and that all these differences, and more, can be explained as the traces of behavior that would have enabled our distant ancestors to leave more descendants. Many such explanations arise from the idea that males have more to gain than females do by seeking a large number of mates—a notion that is ultimately based on experiments with fruit flies in 1948.
It’s not inconceivable that in a hundred and fifty years today’s folk wisdom about the sexes will sound as ridiculous as Darwin’s. It will surely look a bit quaint. Sexual mores can shift quickly: American women reared during the nineteen-sixties were nearly ten times as likely as those reared earlier to have had sex with five or more partners before the age of twenty, according to a 1994 study. As for women’s supposedly inborn preference for wealth and status in a mate, one wonders how much can be inferred from behavior in a world that seems always to have been run by and for men. Although it is, in some places, now easier than ever for a woman to acquire power without marrying it, economic inequality has not disappeared. Even in the most egalitarian countries, in Scandinavia, the average earnings of male full-time workers are more than ten per cent higher than those of their female counterparts; and more than ninety per cent of the top earners in America’s largest companies are men.
A study of attitudes toward casual sex, based on surveys in forty-eight countries, by David Schmitt, a psychologist at Bradley University, in Peoria, Illinois, found that the differences between the sexes varied widely, and shrank in places where women had more freedom. The sexes never quite converged, though: Schmitt found persistent differences, and thinks those are best explained as evolutionary adaptations. But he admits that his findings have limited value, because they rely entirely on self-reports, which are notoriously unreliable about sex, and did not examine a true cross-section of humanity. All of his respondents were from modern nation-states—there were no hunter-gatherers, or people from other small-scale societies—and most were college students.
Indeed, the guilty secret of psychology and of behavioral economics is that their experiments and surveys are conducted almost entirely with people from Western, industrialized countries, mostly of college age, and very often students of psychology at colleges in the United States. This is particularly unfortunate for evolutionary psychologists, who are trying to find universal features of our species. American college kids, whatever their charms, are a laughable proxy for Homo sapiens. The relatively few experiments conducted in non-Western cultures suggest that the minds of American students are highly unusual in many respects, including their spatial cognition, responses to optical illusions, styles of reasoning, coöperative behavior, ideas of fairness, and risk-taking strategies. Joseph Henrich and his colleagues at the University of British Columbia concluded recently that U.S. college kids are “one of the worst subpopulations one could study” when it comes to generalizing about human psychology. Their main appeal to evolutionary psychologists is that they’re readily available. Man’s closest relatives are all long extinct; breeding experiments on humans aren’t allowed (they would take far too long, anyway); and the mental life of our ancestors left few fossils.
Perhaps it shouldn’t matter whether evolutionary psychologists can prove that some trait got incorporated into human nature because it was useful on the African savanna. If they were really in the history business, they wouldn’t spend so much time playing Hot or Not with undergraduates. A review of the methods of evolutionary psychology, published last summer in a biology journal, underlined a point so simple that its implications are easily missed. To confirm any story about how the mind has been shaped, you need (among other things) to determine how people today actually think and behave, and to test rival accounts of how these traits function. Once you have done that, you will, in effect, have finished the job of explaining how the mind works. What life was really like in the Stone Age no longer matters. It doesn’t make any practical difference exactly how our traits became established. All that matters is that they are there.
Then why do enthusiasts for evolutionary psychology insist that politicians and social scientists should pay attention to the evolutionary roots of behavior? In theory, historical conjectures might point to useful patterns that hadn’t been noticed before, though convincing examples are hard to come by.
One much discussed study, from the early nineteen-eighties, by the Canadian psychologists Martin Daly and Margo Wilson, suggests that parents are more likely to abuse stepchildren than to abuse their own offspring. They reasoned that our distant ancestors would have left more descendants by focussing their care on their own children, with the result that people today would on the whole feel less love for stepchildren than for biological ones. Daly and Wilson found, by analyzing child-abuse data, that men are indeed much more likely to murder their stepchildren than to murder their natural children. After thirty years, this rare gem is still advertised as a triumph for evolutionary psychology.
“Hamlet” and “David Copperfield” notwithstanding, wicked stepmothers are more common in folklore and literature than wicked stepfathers, so perhaps it did come as news that the latter can be villains in real life. (This is one up for Rossini, who presciently switched the roles in his version of “Cinderella” and gave her a wicked stepfather instead.) But whether these findings are useful for detecting or preventing violent abuse is another question, even putting aside the issue of whether the evolutionary explanation is right. Most children don’t have stepfathers, most stepfathers don’t abuse anyone, and many more children suffer at the hands of their natural fathers. Studies that assess a large number of the risk factors for violent abuse or neglect—as a study at Columbia University did in 1998—consistently find that the presence of a stepfather isn’t a significant marker of risk. (The presence of a stepfather is a significant marker for the sexual abuse of girls. But Daly and Wilson’s theory makes no prediction about this, and it’s a well-known phenomenon.)
Evolutionary psychologists point to other studies that they claim have practical significance. Mating strategies are thought to help explain why young men are much more violent than old women, which has led researchers to chart the ages of killers around the world. (The theory is that young men in ancestral environments would have got the best reproductive results by taking dangerous risks to compete for mates and status.) A knowledge of these patterns may be useful one day. Still, when a youth is knifed outside a night club, no cop needs evening classes in evolutionary psychology to realize the folly of rounding up grannies. It has also been claimed, in an academic journal, that books of tips by pickup artists show how the insights of evolutionary psychology can pay off in real life, or at least in bars. Field research into this is no doubt ongoing.
Barash muses, at the end of his book, on the fact that our minds have a stubborn fondness for simple-sounding explanations that may be false. That’s true enough, and not only at bedtime. It complements a fondness for thinking that one has found the key to everything. Perhaps there’s an evolutionary explanation for such proclivities.
Brain Exercises Don't Work
Market researcher SharpBrains has predicted that the brain fitness industry will range anywhere from $2 billion to $8 billion in revenues by 2015.
That’s a wide swath, but the companies that sell brain-tuning software could conceivably hit at least the low end of their sales target by then.
The question that persists is whether any of these games and exercises actually enhance the way your brain works, whether it be memory, problem solving or the speed with which you execute a mental task. True, study participants often get better at doing an exercise that is supposedly related to a given facet of cognition. But the ability to master a game or ace a psych test often doesn’t translate into better cognition when specific measures of intelligence are assayed later.
One area of research that has shown some promise relates to a method of boosting the mental scratchpad of working memory—keeping in your head a telephone number long enough to dial, for instance. Some studies have demonstrated that a particular technique to energize working memory betters the reasoning and problem-solving abilities known as fluid intelligence.
Yet two new studies have now called into question the earlier research on working memory. A recent online publication in the Journal of Experimental Psychology, led by a group at the Georgia Institute of Technology, showed that 20 sessions of training on a working memory task did not result in later gains on tests of cognitive ability. Similarly, a group at Case Western Reserve University tried the same “dual n-back test” and published a report in the journal Intelligence finding that better scores on the training task did not produce higher tallies for working memory and fluid intelligence. An n-back test requires keeping track of a number, letter or image “n” places back. A dual n-back demands the simultaneous remembering of both a visual and an auditory cue perceived a certain number of places back.
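To make the task concrete, here is a minimal sketch, in Python, of how matches in a dual n-back task can be scored. The trial format, names and example sequence are illustrative assumptions, not code or parameters from either study.

```python
# A minimal sketch of dual n-back scoring (illustrative only; not the
# software used in the Georgia Tech or Case Western studies).

from dataclasses import dataclass

@dataclass
class Trial:
    letter: str    # auditory cue, e.g. a spoken letter
    position: int  # visual cue, e.g. a square's location on a 3x3 grid (0-8)

def nback_matches(trials, n=2):
    """For each trial beyond the first n, report whether the auditory
    and/or visual cue matches the cue presented n trials earlier."""
    results = []
    for i in range(n, len(trials)):
        audio_match = trials[i].letter == trials[i - n].letter
        visual_match = trials[i].position == trials[i - n].position
        results.append((i, audio_match, visual_match))
    return results

# With n=2: trial 2 matches trial 0 on position only; trial 3 matches trial 1 on both.
sequence = [Trial("C", 4), Trial("H", 1), Trial("K", 4), Trial("H", 1)]
print(nback_matches(sequence, n=2))  # [(2, False, True), (3, True, True)]
```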
What does this all mean, that the best means to boost smarts may not work? For the moment, it means a continuing academic debate because of all the excitement previously generated about prospects for upping intelligence, not just for self-help types and gamers but for students and those in need of cognitive rehabilitation. The study groups in this recent research were small—in the first study, 24 people who trained working memory and 49 others in two control groups—so the proverbial “more research” mantra will probably be invoked. But other work, including a meta-analysis of other studies, has also cast similar doubts.
So should you keep doing Web-based brain training? Only if you like it. Just don’t expect enormous leaps on your score for Raven’s Progressive Matrices or some other test of fluid intelligence. If you’re doing these tests as part of a personal self-improvement program, maybe consider the piano, Spanish lessons or even Grand Theft Auto 3 - The Ultimate Tribute to Liberty. Any of these poses less threat of the monotony that could ultimately undermine the persistence needed to master any new pastime. Seems like a no-brainer, in fact.
Toxoplasma and the Human Brain
Feeling sociable or reckless? You might have toxoplasmosis, an infection caused by the microscopic parasite Toxoplasma gondii, which the CDC estimates has infected about 22.5 percent of Americans older than 12. Researchers tested participants for T. gondii infection and had them complete a personality questionnaire. They found that both men and women infected with T. gondii were more extroverted and less conscientious than the infection-free participants. These changes are thought to result from the parasite's influence on brain chemicals, the scientists write in the May/June issue of the European Journal of Personality.
“Toxoplasma manipulates the behavior of its animal host by increasing the concentration of dopamine and by changing levels of certain hormones,” says study author Jaroslav Flegr of Charles University in Prague, Czech Republic.
Although humans can carry the parasite, its life cycle must play out in cats and rodents. Infected mice and rats lose their fear of cats, increasing the chance they will be eaten, so that the parasite can then reproduce in a cat's body and spread through its feces [see “Protozoa Could Be Controlling Your Brain,” by Christof Koch, Consciousness Redux; Scientific American Mind, May/June 2011].
In humans, T. gondii's effects are more subtle; the infected population has a slightly higher rate of traffic accidents, studies have shown, and people with schizophrenia have higher rates of infection - but until recent years, the parasite was not thought to affect most people's daily lives.
In the new study, a pattern appeared in infected men: the longer they had been infected, the less conscientious they were. This correlation supports the researchers' hypothesis that the personality changes are a result of the parasite, rather than personality influencing the risk of infection. Past studies that used outdated personality surveys also found that toxoplasmosis-related personality changes increased with the length of infection.
T. gondii is most commonly contracted through exposure to undercooked contaminated meat (the rates of infection in France are much higher than in the U.S.), unwashed fruits or vegetables from contaminated soil, and tainted cat litter. The parasite is the reason pregnant women are advised not to clean litter boxes: T. gondii can do much more damage to the fetal brain than the personality tweak it inflicts on adults.
Botox Fights Depression
A common complaint about wrinkle-masking Botox is that recipients have difficulty displaying emotions on their faces. That side effect might be a good thing, however, for people with treatment-resistant depression.
In the first randomized, controlled study on the effect of botulinum toxin—known commercially as Botox—on depression, researchers investigated whether it might aid patients with major depressive disorder who had not responded to antidepressant medications. Participants in the treatment group were given a single dose (consisting of five injections) of botulinum toxin in the area of the face between and just above the eyebrows, whereas the control group was given placebo injections. Depressive symptoms in the treatment group decreased 47 percent after six weeks, an improvement that remained through the 16-week study period. The placebo group had a 9 percent reduction in symptoms. The findings appeared in May in the Journal of Psychiatric Research.
Study author M. Axel Wollmer, a psychiatrist at the University of Basel in Switzerland, believes the treatment “interrupts feedback from the facial musculature to the brain, which may be involved in the development and maintenance of negative emotions.” Past studies have shown that Botox impairs people's ability to identify others' feelings, and the new finding adds more evidence: the muscles of the face are instrumental for identifying and experiencing emotions, not just communicating them.
Changing False Beliefs
A recurring red herring in the current presidential campaign is the veracity of President Barack Obama's birth certificate. Although the president has made this document public, and records of his 1961 birth in Honolulu have been corroborated by newspaper announcements, a vocal segment of the population continues to insist that Obama's birth certificate proving U.S. citizenship is a fraud, making him legally ineligible to be president. A 2011 Politico survey found that a majority of Republican primary voters shared this clearly false belief.
Scientific issues can be just as vulnerable to misinformation campaigns. Plenty of people still believe that vaccines cause autism and that human-caused climate change is a hoax. Science has thoroughly debunked these myths, yet the misinformation persists in the face of overwhelming evidence. Straightforward efforts to combat the lies may even backfire: a paper published on September 18 in Psychological Science in the Public Interest (PSPI) reports that attempts to fight the problem frequently have the opposite effect.
"You have to be careful when you correct misinformation that you don't inadvertently strengthen it," says Stephan Lewandowsky, a psychologist at the University of Western Australia in Perth and one of the paper's authors. "If the issues go to the heart of people's deeply held world views, they become more entrenched in their opinions if you try to update their thinking."
Psychologists call this reaction belief perseverance: maintaining your original opinions in the face of overwhelming data that contradicts your beliefs. Everyone does it, but we are especially vulnerable when invalidated beliefs form a key part of how we narrate our lives. Researchers have found that stereotypes, religious faiths and even our self-concept are especially vulnerable to belief perseverance. A 2008 study in the Journal of Experimental Social Psychology found that people are more likely to continue believing incorrect information if it makes them look good (enhances self-image). For example, if an individual has become known in her community for asserting that vaccines cause autism, she might build her self-identity as someone who helps prevent autism by helping other parents avoid vaccination. Admitting that the original study linking autism to the MMR (measles–mumps–rubella) vaccine was ultimately deemed fraudulent would make her look bad (diminish her self-concept).
In this circumstance, it is easier to continue believing that autism and vaccines are linked, according to Dartmouth College political science researcher Brendan Nyhan. "It's threatening to admit that you're wrong," he says. "It's threatening to your self-concept and your worldview." It's why, Nyhan says, so many examples of misinformation are from issues that dramatically affect our lives and how we live.
Ironically, these issues are also the hardest to counteract. Part of the problem, researchers have found, is how people determine whether a particular statement is true. We are more likely to believe a statement if it confirms our preexisting beliefs, a phenomenon known as confirmation bias. Accepting a statement also requires less cognitive effort than rejecting it. Even superficial features of language can affect acceptance: studies have found that the way a statement is printed or spoken (even the speaker's accent) can make it seem more or less believable. Misinformation is a human problem, not a liberal or conservative one, Nyhan says.
The ongoing diversification of news sources and the rapid news cycle also make misinformation more likely to travel and be amplified. Today, publishing news is as simple as clicking "send." This, combined with people's tendency to seek out information that confirms their beliefs, tends to magnify the effects of misinformation. Nyhan says that although a good dose of skepticism doesn't hurt while reading news stories, the onus to prevent misinformation should be on political pundits and journalists rather than readers. "If we all had to research every factual claim we were exposed to, we'd do nothing else," Nyhan says. "We have to address the supply side of misinformation, not just the demand side."
Correcting misinformation, however, isn't as simple as presenting people with true facts. When people read views from the other side, they generate counterarguments that support their initial viewpoint, bolstering their belief in the misinformation. Retracting information does not appear to be very effective either. Lewandowsky and colleagues published two papers in 2011 showing that a retraction, at best, halved the number of individuals who believed misinformation.
Combating misinformation has proved to be especially difficult in certain scientific areas such as climate science. Despite countless findings to the contrary, a large portion of the population doesn't believe that scientists agree on the existence of human-caused climate change, which affects their willingness to seek a solution to the problem, according to a 2011 study in Nature Climate Change.
"Misinformation is inhibiting public engagement in climate change in a major way," says Edward Maibach, director of the Center for Climate Change Communication at George Mason University and author of the Nature article, as well as a commentary that accompanied the recent article in PSPI by Lewandowsky and colleagues. Although virtually all climate scientists agree that human actions are changing the climate and that immediate action must be taken, roughly 60 percent of Americans believe that no scientific consensus on climate change exists.
"This is not a random event," Maibach says. Rather, it is the result of a concerted effort by a small number of politicians and industry leaders to instill doubt in the public. They repeat the message that climate scientists don't agree that global warming is real, is caused by people or is harmful. Thus, the message concludes, it would be premature for the government to take action and increase regulations.
To counter this effort, Maibach and others are using the same strategies employed by climate change deniers. They are gathering a group of trusted experts on climate and encouraging them to repeat simple, basic messages. It's difficult for many scientists, who feel that such simple explanations are dumbing down the science or portraying it inaccurately. And researchers have been trained to focus on the newest research, Maibach notes, which can make it difficult to get them to restate older information. Another way to combat misinformation is to create a compelling narrative that incorporates the correct information, and focuses on the facts rather than dispelling myths—a technique called "de-biasing."
Although campaigns to counteract misinformation can be difficult to execute, they can be remarkably effective if done correctly. A 2009 study found that an anti-prejudice campaign in Rwanda aired on the country's radio stations successfully altered people's perceptions of social norms and behaviors in the aftermath of the 1994 tribally based genocide of an estimated 800,000 minority Tutsi. Perhaps the most successful de-biasing campaign, Maibach notes, is the current near-universal agreement that tobacco smoking is addictive and can cause cancer. In the 1950s smoking was considered a largely safe lifestyle choice—so safe that it was allowed almost everywhere and physicians appeared in ads to promote it. The tobacco industry carried out a misinformation campaign for decades, reassuring smokers that it was okay to light up. Over time opinions began to shift as overwhelming evidence of ill effects was made public by more and more scientists and health administrators.
The most effective way to fight misinformation, ultimately, is to focus on people's behaviors, Lewandowsky says. Changing behaviors will foster new attitudes and beliefs.
Disgust
DAVID PIZARRO can change the way you think, and all he needs is a small vial of liquid. You simply have to smell it. The psychologist spent many weeks tracking down the perfect aroma. It had to be just right. "Not too powerful," he explains. "And it had to smell of real farts."
It's no joke. Pizarro needed a suitable fart spray for an experiment to investigate whether a whiff of something disgusting can influence people's judgements.
His experiment, together with a growing body of research, has revealed the profound power of disgust, showing that this emotion is a much more potent trigger for our behaviour and choices than we ever thought. The results play out in all sorts of unexpected areas, such as politics, the judicial system and our spending habits. The triggers also affect some people far more than others, and often without their knowledge. Disgust, once dubbed "the forgotten emotion of psychiatry", is showing its true colours.
Disgust is experienced by all humans, typically accompanied by a puckered-lipped facial expression. It is well established that it evolved to protect us from illness and death. "Before we had developed any theory of disease, disgust prevented us from contagion," says Pizarro, based at Cornell University in Ithaca, New York. The sense of revulsion makes us shy away from biologically harmful things like vomit, faeces, rotting meat and, to a certain extent, insects.
Disgust's remit broadened when we became a supersocial species. After all, other humans are all potential disease-carriers, says Valerie Curtis, director of the Hygiene Centre at the London School of Hygiene and Tropical Medicine. "We've got to be very careful about our contact with others; we've got to mitigate those disease-transfer risks," she says. Disgust is the mechanism for doing this - causing us to shun people who violate the social conventions linked to disgust, or those we think, rightly or wrongly, are carriers of disease. As such, disgust is probably an essential characteristic for thriving on a cooperative, crowded planet.
Yet the idea that disgust plays a deeper role in people's everyday behaviour emerged only recently. It began when researchers decided to investigate the interplay between disgust and morality. One of the first was psychologist Jonathan Haidt at the University of Virginia in Charlottesville, who in 2001 published a landmark paper proposing that instinctive gut feelings, rather than logical reasoning, govern our judgements of right and wrong.
Haidt and colleagues went on to demonstrate that a subliminal sense of disgust - induced by hypnosis - increased the severity of people's moral judgements about shoplifting or political bribery, for example (Psychological Science, vol 16, p 780). Since then, a number of studies have illustrated the unexpected ways in which disgust can influence our notions of right and wrong.
In 2008, Simone Schnall, now at the University of Cambridge, showed that placing people in a room with an unacknowledged aroma of fart spray and a filthy desk increased the severity of their moral judgements about, say, whether it's OK to eat your dead pet dog (Personality and Social Psychology Bulletin, vol 34, p 1096). "One would think that one makes decisions about whether a behaviour is right or wrong by considering the pros and cons and arriving at a balanced judgement. We showed this wasn't the case," says Schnall.
Perhaps it's no surprise, then, to find that the more "disgustable" you are, the more likely you are to be politically conservative, says Pizarro, who has studied this correlation. Similarly, the more conservative that people are, the harsher their moral judgements become in the presence of disgust stimuli.
Together, these findings raise all sorts of interesting, and troubling, questions about people's prejudices, and the ways in which they might be influenced or even deliberately manipulated. Humanity already has a track record of using disgust as a weapon against "outsiders" - lower castes, immigrants and homosexuals. Nazi propaganda notoriously depicted Jewish people as filthy rats.
Now there is empirical evidence that inducing disgust can cause people to shun certain minority groups - at least temporarily. That's what Pizarro acquired his fart spray to explore. Along with Yoel Inbar of Tilburg University in the Netherlands and colleagues, he primed a room with the foul-smelling spray, then invited people in to complete a questionnaire, asking them to rate their feelings of warmth towards various social groups, such as the elderly or homosexuals. The researchers didn't mention the pong to the participants, who were a mix of heterosexual male and female US college students.
Reeking of prejudice
While the whiff did not influence people's feelings towards many social groups, one effect was stark: those in the smelly room, on average, felt less warmth towards homosexual men compared to participants in a non-smelly room. The effect was of equal strength among political liberals and conservatives (Emotion, vol 12, p 23). This finding is consistent with previous studies showing that a stronger susceptibility to disgust is linked with disapproval of gay people.
In another experiment, making western people feel more vulnerable to disease - by showing pictures of different pathogens - made them view foreign groups, such as Nigerian immigrants, less favourably (Group Processes & Intergroup Relations, vol 7, p 333).
"It's not that I think we could change liberals to conservatives by grossing them out, but sometimes all you need is a temporary little boost," says Pizarro. He points out that if there happened to be disgust triggers in or around a polling station, for example, it could in principle sway undecided voters to a more conservative decision. "Subtle influences in places where you're voting might actually have an effect."
To an extent, many politicians have already come to the same conclusions about disgust's ability to sway the views of their electorates. In April this year, Republicans made hay of a story about President Barack Obama eating dog meat as a boy, which was recounted in his memoir. The criticism of Obama might have seemed like the typical, if surreal, electioneering you would expect in the run-up to a presidential election, but the psychology of disgust suggests that it would have struck deeper with many voters than the Democrats might have realised.
Other politicians have gone further when employing disgust to win votes. Ahead of the primaries for the 2010 gubernatorial election in New York state, candidate Carl Paladino of the Tea Party sent out thousands of flyers impregnated with the smell of rotten garbage, with a message to "get rid of the stink" alongside pictures of his rivals. While Paladino didn't manage to beat his Democrat opponent in the race to be governor, some political analysts believe his bold tactics and smelly flyers helped him thrash rivals to win the Republican nomination against the odds.
At the same time as the role that disgust plays in politics was emerging, others were searching for its effects in yet more realms of life. Given that disgust influences judgements of right and wrong, it made sense to look to the legal system.
Sometimes disgust is arguably among the main reasons that a society chooses to deem an act illegal - necrophilia, some forms of pornography, or sex between men, for example. In court, disgusting crimes can attract harsher penalties. For example, in some US states, the death penalty is sought for murders with an "outrageously or wantonly vile" element.
Research led by Sophieke Russell at the University of Kent in Canterbury, UK, holds important lessons about how juries arrive at decisions of guilt and sentencing - and possible pointers for achieving genuine justice in courts. She showed that once people feel a sense of disgust, it is difficult for them to take into account mitigating factors important in the process of law, such as the intentions of the people involved in a case. Disgust also clouds a juror's judgement more than feelings of anger.
It is for these reasons that philosopher Martha Nussbaum at the University of Chicago Law School has argued strongly to stop using the "politics of disgust" as a basis for legal judgements. She argues instead for John Stuart Mill's principle of harm, whereby crimes are judged solely on the basis of the harm they cause. It is a contentious view. Others, such as Dan Kahan of Yale Law School, argue that "it would certainly be a mistake - a horrible one - to accept the guidance of disgust uncritically. But it would be just as big an error to discount it in all contexts." Besides, disgust could never be eliminated from trials, because this would mean never exposing the jury to descriptions of crimes or pictures of crime scenes.
Beyond the courtroom, psychologists searching for disgust's influence have found it in various everyday scenarios. Take financial transactions. It's possible that a particularly unhygienic car dealer, for instance, could make a difference to the price for which you agree to sell your vehicle. Jennifer Lerner and colleagues at Carnegie Mellon University showed that a feeling of disgust can cause people to sell their property at knock-down prices. After watching a scene from the film Trainspotting, in which a character reaches into the bowl of an indescribably filthy toilet, they sold a pack of pens for an average of $2.74, compared with a price of $4.58 for participants shown a neutral clip of coral reefs. Curiously, the disgusted participants denied being influenced by the Trainspotting clip, and instead justified their actions with more rational reasons.
Lerner, now at Harvard, calls it the "disgust-disposal" effect, in which the yuck factor causes you to expel objects in close proximity, regardless of whether they are the cause of your disgust. She also found that people were less likely to buy something when feeling disgust. Perhaps this is why, aside from public health campaigns, there is little evidence of product advertisers using disgust as part of their marketing strategies.
So, armed with all this knowledge about the psychology of disgust, is it possible to spot and overcome the subtle triggers that influence behaviour? And would we want to?
Some would argue that instead of trying to overcome our sense of disgust, we should listen to our gut feelings and be guided by them. The physician Leon Kass, who was chairman of George W. Bush's bioethics council from 2001 to 2005, has made the case for the "wisdom of repugnance". "Repugnance is the emotional expression of deep wisdom, beyond reason's power to fully articulate it," he wrote in his 2002 book Life, Liberty and the Defense of Dignity.
Still, is it really desirable for, say, bad smells to encourage xenophobia or homophobia? "I think it's very possible to override disgust. That's my hope, in fact," says Pizarro. "Even though we might have very strong disgust reactions, we should be tasked with coming up with reasons independent of this reflexive gut reaction."
For those seeking to avoid disgust's influence, it's first worth noting that some people are more likely to be grossed out than others, and that the triggers vary according to culture (see "Cheese and culture"). In general, women tend to be more easily disgusted than men, and are far more likely to be disgusted about sex. Women are also particularly sensitive to disgust in the early stages of pregnancy or just after ovulation - both times when their immune system is dampened.
The young are more likely to be influenced by the yuck factor, and we tend to become less easily disgusted as we grow old. This could boil down to the fact that our senses become less acute with age, or perhaps it is simply that older people have had more life experience and take a more rational view of potential threats.
If they so choose, it is possible for anybody to become desensitised to disgusting things by continued exposure over time. For example, while faeces is the most potent disgust trigger, it's amazing how easy it is to overcome it when you have to deal with your own offspring's bowel movements. And psychologists have shown that after spending months dissecting bodies, medical students become less sensitive to disgust relating to death and bodily deformity.
Pizarro suspects that there may also be shortcuts to overriding disgust - even if the tips he has found so far may not be especially practical for day-to-day life. One of his most recent experiments shows that if you can prevent people from making that snarled-lip expression when they experience disgust - by simply asking them to hold a pencil between their lips - you can reduce their feeling of disgust when they are made to view revolting images. This, in turn, makes their judgement of moral transgressions less severe.
Happily, our lives are already a triumph over disgust. If we let it rule us completely, we'd never leave the house in the morning. As Paul Rozin, often called the "father of the psychology of disgust", has pointed out, we live in a world where the air we breathe comes from the lungs of other people, and contains molecules of animal and human faeces.
It would be wise not to think about that too much. It really is quite disgusting.
Cheese and culture
On a recent summer's day, a stench filled New Scientist's London office. It smelled like sweaty feet bathed in vomit, or something long past its sell-by date. Soon its source became clear: someone had returned from Paris with a selection of France's finest soft cheeses. How can something that smells revolting be so delicious?
For a start, no matter how potent, smells can be ambiguous. We need more information to tell us whether something really is revolting or not.
"With smell, the meaning is based on context much more so than with vision," says smell researcher Rachel Herz, author of the book That's Disgusting. In other words, a vomit smell in an alley beside a bar will immediately conjure up a mental picture of a disgusting source, but exactly the same aroma would evoke deliciousness in a fine restaurant, she says.
The stinky cheese also illustrates the power of culture over our minds. Westerners have learned that cheese is a good thing to eat - a badge of cultural distinction, even. This explains why rotten shark meat is a delicacy in Iceland, says Herz, and the liquor chicha, made from chewed and spat-out maize, is a popular drink in parts of South America. Food choices mark out who is part of our group - hence the strong religious taboos about pork which have endured long past the time when consuming it carried a serious risk of food poisoning.
The influence of culture on disgust isn't limited to food. Kissing in public is seen as distasteful in India, whereas Brits are more repulsed by mistreatment of animals. Christian participants in one study even experienced a sense of disgust when reading a passage from Richard Dawkins's atheist manifesto The God Delusion. As Herz says: "To a large extent, what is disgusting or not is in the mind of the beholder."
Many things probably transcend cultural influence, however. Using a selection of disgusting images, Valerie Curtis at the London School of Hygiene and Tropical Medicine discovered a universal disgust towards faeces, with vomit, pus, spit and a variety of insects following close behind in the revulsion stakes. Delicious, these are not.
Possessed By Demons
In his new book, novelist and former psychologist Frank Tallis explores the psychology behind demonic possession.
Your latest novel is about a man who is possessed. Did any of your patients have that belief?
Once a patient came in and said: "I am possessed by a demon." This guy wasn't insane, he wasn't schizophrenic - he just had this particular belief. In my day we called it "monosymptomatic delusion", but now it would be called something like "delusional disorder". That's when you're completely sound and reasonable in every respect except you have one belief that is absolutely bonkers.
Why would an otherwise well person believe something like that?
He was misattributing certain symptoms he had to a demonic presence. When you're possessed, you're supposed to get headaches, and he was getting loads of headaches.
I can't imagine making that assumption myself...
You have to have an openness to it. Lots of people are open to all kinds of spiritual and magical beliefs. An individual could have a perfectly harmless interest in the supernatural but then something happens that triggers this delusion and they get stuck with it, reinforcing it by piling up one misinterpretation after another. If you go out looking for evidence, you will find it.
What kind of evidence?
In my patient's case, he wanted to know the demon's name, so he got a Ouija board out. This shows he had a willingness to go down a particular path. When you think about the way that brains work, our natural inclination is to look for causes.
Could anyone end up with delusions like these?
Theoretically, yes, in the right circumstances. Maybe we all get such episodes in our lives. It's not that unusual for people to think they are seriously ill without much evidence. Who hasn't had a health scare for no good reason? That's taking a symptom and extrapolating, then finding more evidence that supports the belief.
Are there any other examples?
The big one is people suspecting that their spouse is cheating on them. Morbid obsessions about infidelity are relatively common and produce spectacular behaviours, often in individuals who otherwise are OK. In a way, falling in love is a kind of monosymptomatic delusion. Even though you're a rational person, you can engage in all kinds of irrational behaviour because you are fixated on a particular individual.
Can these delusions be treated?
In the past they were treated with lots of medication or were perceived as untreatable. But these days, not just monosymptomatic delusions but all forms of psychotic illness are increasingly treated with cognitive behavioural therapy. You cultivate a sort of scientific attitude in the patient, getting them to test their beliefs. It is probably the most important new advance in psychotherapy.
Morality and Mental Processes
When it really comes down to it—when the chips are down and the lights are off—are we naturally good? That is, are we predisposed to act cooperatively, to help others even when it costs us? Or are we, in our hearts, selfish creatures?
This fundamental question about human nature has long provided fodder for discussion. Augustine’s doctrine of original sin proclaimed that all people were born broken and selfish, saved only through the power of divine intervention. Hobbes, too, argued that humans were savagely self-centered; however, he held that salvation came not through the divine, but through the social contract of civil law. On the other hand, philosophers such as Rousseau argued that people were born good, instinctively concerned with the welfare of others. More recently, these questions about human nature—selfishness and cooperation, defection and collaboration—have been brought to the public eye by game shows such as Survivor and the UK’s Golden Balls, which test the balance between selfishness and cooperation by pitting the strength of interpersonal bonds against the desire for large sums of money.
But even the most compelling televised collisions between selfishness and cooperation provide nothing but anecdotal evidence. And even the most eloquent philosophical arguments mean nothing without empirical data.
A new set of studies provides compelling data allowing us to analyze human nature not through a philosopher’s kaleidoscope or a TV producer’s camera, but through the clear lens of science. These studies were carried out by a diverse group of researchers from Harvard and Yale—a developmental psychologist with a background in evolutionary game theory, a moral philosopher-turned-psychologist, and a biologist-cum-mathematician—interested in the same essential question: whether our automatic impulse—our first instinct—is to act selfishly or cooperatively.
This focus on first instincts stems from the dual process framework of decision-making, which explains decisions (and behavior) in terms of two mechanisms: intuition and reflection. Intuition is often automatic and effortless, leading to actions that occur without insight into the reasons behind them. Reflection, on the other hand, is all about conscious thought—identifying possible behaviors, weighing the costs and benefits of likely outcomes, and rationally deciding on a course of action. With this dual process framework in mind, we can boil the complexities of basic human nature down to a simple question: which behavior—selfishness or cooperation—is intuitive, and which is the product of rational reflection? In other words, do we cooperate when we overcome our intuitive selfishness with rational self-control, or do we act selfishly when we override our intuitive cooperative impulses with rational self-interest?
To answer this question, the researchers first took advantage of a reliable difference between intuition and reflection: intuitive processes operate quickly, whereas reflective processes operate relatively slowly. Whichever behavioral tendency—selfishness or cooperation—predominates when people act quickly is likely to be the intuitive response; it is the response most likely to be aligned with basic human nature.
The experimenters first examined potential links between processing speed, selfishness, and cooperation by using 2 experimental paradigms (the “prisoner’s dilemma” and a “public goods game”), 5 studies, and a total of 834 participants gathered from both undergraduate campuses and a nationwide sample. Each paradigm consisted of group-based financial decision-making tasks and required participants to choose between acting selfishly—opting to maximize individual benefits at the cost of the group—or cooperatively—opting to maximize group benefits at the cost of the individual. The results were striking: in every single study, faster—that is, more intuitive—decisions were associated with higher levels of cooperation, whereas slower—that is, more reflective—decisions were associated with higher levels of selfishness. These results suggest that our first impulse is to cooperate—that Augustine and Hobbes were wrong, and that we are fundamentally “good” creatures after all.
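For readers unfamiliar with these paradigms, here is a minimal sketch, in Python, of the payoff logic of a public goods game; the endowment and multiplier values are illustrative assumptions rather than the parameters these studies used. Contributing everything maximizes what the group earns collectively, while contributing nothing maximizes a player's individual take.

```python
# A minimal sketch of public goods game payoffs (illustrative assumptions;
# not the parameters or code from the studies described here).

def public_goods_payoffs(contributions, endowment=10.0, multiplier=2.0):
    """Each player keeps whatever they do not contribute; the pooled
    contributions are multiplied and split evenly among all players."""
    n = len(contributions)
    pool_share = multiplier * sum(contributions) / n
    return [endowment - c + pool_share for c in contributions]

# Four players, a $10 endowment each:
print(public_goods_payoffs([10, 10, 10, 10]))  # full cooperation: [20.0, 20.0, 20.0, 20.0]
print(public_goods_payoffs([0, 10, 10, 10]))   # one free-rider earns most: [25.0, 15.0, 15.0, 15.0]
```

The tension is visible in the second example: the group as a whole earns less than under full cooperation, but the lone free-rider earns more than any of the cooperators.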
The researchers followed up these correlational studies with a set of experiments in which they directly manipulated both this apparent influence on the tendency to cooperate—processing speed—and the cognitive mechanism thought to be associated with this influence—intuitive, as opposed to reflective, decision-making. In the first of these studies, researchers gathered 891 participants (211 undergraduates and 680 participants from a nationwide sample) and had them play a public goods game with one key twist: these participants were forced to make their decisions either quickly (within 10 seconds) or slowly (after at least 10 seconds had passed). In the second, researchers had 343 participants from a nationwide sample play a public goods game after they had been primed to use either intuitive or reflective reasoning. Both studies showed the same pattern—whether people were forced to use intuition (by acting under time constraints) or simply encouraged to do so (through priming), they gave significantly more money to the common good than did participants who relied on reflection to make their choices. This again suggests that our intuitive impulse is to cooperate with others.
Taken together, these studies—7 total experiments, using a whopping 2,068 participants—suggest that we are not intuitively selfish creatures. But does this mean that we are naturally cooperative? Or could it be that cooperation is our first instinct simply because it is rewarded? After all, we live in a world where it pays to play well with others: cooperating helps us make friends, gain social capital, and find social success in a wide range of domains. As one way of addressing this possibility, the experimenters carried out yet another study. In this study, they asked 341 participants from a nationwide sample about their daily interactions—specifically, whether or not these interactions were mainly cooperative; they found that the relationship between processing speed (that is, intuition) and cooperation existed only for those who reported having primarily cooperative interactions in daily life. This suggests that cooperation is the intuitive response only for those who routinely engage in interactions where this behavior is rewarded—that human 'goodness' may result from the acquisition of a regularly rewarded trait.
Throughout the ages, people have wondered about the basic state of human nature—whether we are good or bad, cooperative or selfish. This question—one that is central to who we are—has been tackled by theologians and philosophers, presented to the public eye by television programs, and has dominated the sleepless nights of both guilt-stricken villains and bewildered victims; now, it has also been addressed by scientific research. Although no single set of studies can provide a definitive answer—no matter how many experiments were conducted or participants were involved—this research suggests that our intuitive responses, or first instincts, tend to lead to cooperation rather than selfishness.
Although this research does not definitively solve the puzzle of human nature, it does give us evidence we may use to solve the puzzle for ourselves—and our solutions will likely vary according to how we define 'human nature.' If human nature is something we must be born with, then we may be neither good nor bad, neither cooperative nor selfish. But if human nature is simply the way we tend to act based on our intuitive and automatic impulses, then it seems that we are an overwhelmingly cooperative species, willing to give for the good of the group even when it comes at our own personal expense.
I do not think like Sherlock Holmes. Not in the least. That was the rather disheartening conclusion I reached while researching a book on the detective’s mental prowess. I’d hoped to discover that I had the secret to Sherlockian thought. What I found instead was that it would be hard work indeed to even begin to approximate the essence of the detective’s approach to the world: his ever-mindful mindset and his relentless mental energy. Holmes was a man eternally on, who relished that on-ness and floundered in its absence. It would be exhausting to think like Sherlock. And would it really be worth it in the end?
It all began with those pesky steps, the stairs leading up to the legendary residence that Sherlock Holmes shares with Dr. Watson, 221B Baker Street. Why couldn't Watson recall the number of steps? "I believe my eyes are as good as yours," Watson tells his new flatmate - as, in fact, they are. But the competence of the eyes isn't the issue. Instead, the distinction lies in how those eyes are deployed. "You see, but you do not observe," Holmes tells his companion. And Holmes? "Now, I know there are seventeen steps," he continues, "because I have both seen and observed."
To both see and observe: Therein lies the secret. When I first heard the words as a child, I sat up with recognition. Like Watson, I didn't have a clue. Some 20 years later, I read the passage a second time in an attempt to decipher the psychology behind its impact. I realized I was no better at observing than I had been at the tender age of 7. Worse, even. With my constant companion Sir Smartphone and my newfound love of Lady Twitter, my devotion to Count Facebook, and that itch my fingers got whenever I hadn't checked my email for, what, 10 minutes already? OK, five - but it seemed a lifetime. Those Baker Street steps would always be a mystery.
The confluence of seeing and observing is central to the concept of mindfulness, a mental alertness that takes in the present moment to the fullest, that is able to concentrate on its immediate landscape and free itself of any distractions.
Mindfulness allows Holmes to observe those details that most of us don’t even realize we don't see. It’s not just the steps. It’s the facial expressions, the sartorial details, the seemingly irrelevant minutiae of the people he encounters. It’s the sizing up of the occupants of a house by looking at a single room. It's the ability to distinguish the crucial from the merely incidental in any person, any scene, any situation. And, as it turns out, all of these abilities aren't just the handy fictional work of Arthur Conan Doyle. They have some real science behind them. After all, Holmes was born of Dr. Joseph Bell, Conan Doyle's mentor at the University of Edinburgh, not some, well, more fictional inspiration. Bell was a scientist and physician with a sharp mind, a keen eye, and a notable prowess at pinpointing both his patients' disease and their personal details. Conan Doyle once wrote to him, "Round the centre of deduction and inference and observation which I have heard you inculcate, I have tried to build up a man who pushed the thing as far as it would go."
Over the past several decades, researchers have discovered that mindfulness can lead to improvements in physiological well-being and emotional regulation. It can also strengthen connectivity in the brain, specifically in a network of the posterior cingulate cortex, the adjacent precuneus, and the medial prefrontal cortex that maintains activity when the brain is resting. Mindfulness can even enhance our levels of wisdom, both in terms of dialectism (being cognizant of change and contradictions in the world) and intellectual humility (knowing your own limitations). What’s more, mindfulness can lead to improved problem solving, enhanced imagination, and better decision making. It can even be a weapon against one of the most disturbing limitations that our attention is up against: inattentional blindness.
When inattentional blindness (sometimes referred to as attentional blindness) strikes, our focus on one particular element in a scene or situation or problem causes the other elements to literally disappear. Images that hit our retina are not then processed by our brain but instead dissolve into the who-knows-where, so that we have no conscious experience of having ever been exposed to them to begin with. The phenomenon was made famous by Daniel Simons and Christopher Chabris: In their provocative study, students repeatedly failed to see a person in a gorilla suit who walked onto a basketball court midgame, pounded his chest, and walked off. But the phenomenon actually dates to research conducted by Ulric Neisser, the father of cognitive psychology, in the 1960s and 1970s.
One evening, Neisser noticed that when he looked out the window at twilight, he had the ability to see either the twilight or the reflection of the room on the glass. Focusing on the one made the other vanish. No matter what he did, he couldn’t pay active attention to both. He termed this phenomenon 'selective looking' and went on to study its effects in study after study of competing attentional demands. Show a person two superimposed videos, and he fails to notice when card players suddenly stop their game, stand up, and start shaking hands - or fails to realize that someone spoke to him in one ear while he’s been listening to a conversation with the other. In a real-world illustration of the innate inability to split attention in any meaningful way, a road construction crew once paved over a dead deer in the road. They simply did not see it, so busy were they ensuring that their assignment was properly carried out.
Inattentional blindness, more than anything else, illustrates the limitations of our attentional abilities. Try as we might, we can never see both twilight and reflection. We can't ever multitask the way we think we can. Each time we try, either the room or the world outside it will disappear from conscious processing. That's why Holmes is so careful about where and when he deploys that famed keenness of observation. Were he to spread himself too thin—imagine modern-day Holmes, be it Benedict Cumberbatch or Jonny Lee Miller, pulling out his cell to check his email as he walks down the street and has a conversation at the same time, something you'll never see either of these current incarnations actually doing—he'd be unable to deploy his observation as he otherwise would. Enter the email, exit the Baker Street steps - and then some.
It's not an easy task, that constant cognitive vigilance, the eternal awareness of our own limitations and the resulting strategic allocation of attention. Even Holmes, I'm willing to bet, couldn't reach that level of mindfulness and deliberate thought all at once. It came with years of motivation and practice. To think like Holmes, we have to both want to think like him and practice doing so over and over and over, even when the effort becomes exhausting and seems a pointless waste of energy. Mindfulness takes discipline.
Even after I discovered my propensity for sneaking over to email or Twitter when I wasn't quite sure what to write next, the discovery alone wasn't enough to curb my less-than-ideal work habits. I thought it would be. And I tried, I really did. But somehow, up that browser window popped, seemingly of its own volition. What, me? Attempt to multitask while writing my book? Never.
And so, I took the Odyssean approach: I tied myself to the mast to resist the sirens' call of the Internet. I downloaded Freedom, a program that blocked my access completely for a specified amount of time, and got to writing. The results shocked me. I was woefully bad at maintaining my concentration for large chunks of time. Over and over, my fingers made their way to that habitual key-press combination that would switch the window from my manuscript to my online world - only to discover that that world was off-limits for another - how long is left? Has it really been only 20 minutes?
Over time, the impulse became less frequent. And what's more, I found that my writing - and my thinking, it bears note - was improving with every day of Internet-less interludes. I could think more fluidly. My brain worked more conscientiously. In those breaks when, before, there would be a quick check of email or a surreptitious run to my Twitter feed, there would be a self-reflecting concentration that quickly rummaged through my brain attic. (You can't write about Holmes without mentioning his analogy for the human mind at least once.) I came up with multiple ways of moving forward where before I would find myself stuck. Pieces that had taken hours to write suddenly were completed in a fraction of the time.
Until that concrete evidence of effectiveness, I had never quite believed that focused attention would make such a big difference. As much research as I’d read, as much science as I'd examined, it never quite hit home. It had taken Freedom, but I was finally taking Sherlock Holmes at his word. I was learning the benefits of both seeing and observing—and I was no longer trading in the one for the other without quite realizing what I was doing.
Self-binding software, of course, is not always an option to keep our brains mindfully on track. Who is to stop us from checking our phone mid-dinner or having the TV on as background noise? But here's what I learned. Those little nudges to limit your own behavior have a more lasting effect, even in areas where you've never used them. They make you realize just how limited your attention is in reality—and how often you wave your own limitations off with a disdainful motion. Not only did that nagging software make me realize how desperately I was chained to my online self, but I began to notice how often my hand reached for my phone when I was walking down the street or sitting in the subway, how utterly unable I had become to just do what I was doing, be it walking or sitting or even reading a book, without trying to get in just a little bit more.
I did my best to resist. Now, something that was once thoughtless habit became a guilt-inducing twinge. I would force myself to replace the phone without checking it, to take off my headphones and look around, to resist the urge to place a call just because I was walking to an appointment and had a few minutes of spare time. It was hard. But it was worth it, if only for my enhanced perceptiveness, for the quickly growing pile of material that I wouldn’t have even noticed before, for the tangible improvements in thought and clarity that came with every deferred impulse. It’s not for nothing that study after study has shown the benefits of nature on our thinking: Being surrounded by the natural world makes us more reflective, more creative, sharper in our cognition. But if we’re too busy talking on the phone or sending a text, we won’t even notice that we've walked by a tree.
If we follow Holmes’ lead, if we take his admonition to not only see but also observe, and do so as a matter of course, we may not only find ourselves better able to rattle off the number of those proverbial steps in a second, but we may be surprised to discover that the benefits extend much further: We may even be happier as a result. Even brief exercises in mindfulness, for as little as five minutes a day, have been shown to shift brain activity in the frontal lobes toward a pattern associated with positive and approach-oriented emotional states. And the mind-wandering, multitasking alternative? It may do more than make us less attentive. It may also make us less happy.
As Daniel Gilbert discovered after tracking thousands of participants in real time, a mind that is wandering away from the present moment is a mind that isn't happy. He developed an iPhone app that would prompt subjects to answer questions on what they were currently doing and what they were thinking about at various points in the day. In 46.9 percent of samples Gilbert and his colleagues collected, people were not thinking about whatever it was they were doing—even if what they were doing was actually quite pleasant, like listening to music or playing a game. And their happiness? The more their minds wandered, the less happy they were—regardless of the activity. As Gilbert put it in a paper in Science, "The ability to think about what is not happening is a cognitive achievement that comes at an emotional cost."
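To make the method concrete, here is a minimal sketch of how experience-sampling responses of this kind can be tallied. The handful of records below are invented for illustration; Gilbert and his colleagues gathered thousands of real prompts through their app, and their analysis was considerably more sophisticated than this simple comparison of means.

```python
# Minimal sketch of tallying experience-sampling data.
# The records are invented for illustration only.
from statistics import mean

# Each sample: (activity, was the mind wandering?, happiness rating on a 0-100 scale)
samples = [
    ("listening to music", True, 62),
    ("listening to music", False, 78),
    ("commuting", True, 55),
    ("working", False, 70),
    ("playing a game", True, 60),
    ("playing a game", False, 81),
]

wandering = [s for s in samples if s[1]]
on_task = [s for s in samples if not s[1]]

print(f"Share of samples with mind-wandering: {len(wandering) / len(samples):.1%}")
print(f"Mean happiness while mind-wandering:  {mean(s[2] for s in wandering):.1f}")
print(f"Mean happiness while on task:         {mean(s[2] for s in on_task):.1f}")
```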
Thinking like Sherlock Holmes isn't just a way to enhance your cognitive powers. It is also a way to derive greater happiness and satisfaction from life.
Brain Compartments
If you have pondered how intelligent and educated people can, in the face of overwhelming contradictory evidence, believe that evolution is a myth, that global warming is a hoax, that vaccines cause autism and asthma, that 9/11 was orchestrated by the Bush administration, conjecture no more. The explanation is in what I call logic-tight compartments—modules in the brain analogous to watertight compartments in a ship.
The concept of compartmentalized brain functions acting either in concert or in conflict has been a core idea of evolutionary psychology since the early 1990s. According to University of Pennsylvania evolutionary psychologist Robert Kurzban in Why Everyone (Else) Is a Hypocrite (Princeton University Press, 2010), the brain evolved as a modular, multitasking problem-solving organ—a Swiss Army knife of practical tools in the old metaphor or an app-loaded iPhone in Kurzban's upgrade. There is no unified “self” that generates internally consistent and seamlessly coherent beliefs devoid of conflict. Instead we are a collection of distinct but interacting modules often at odds with one another. The module that leads us to crave sweet and fatty foods in the short term is in conflict with the module that monitors our body image and health in the long term. The module for cooperation is in conflict with the one for competition, as are the modules for altruism and avarice or the modules for truth telling and lying.
Compartmentalization is also at work when new scientific theories conflict with older and more naive beliefs. In the 2012 paper "Scientific Knowledge Suppresses but Does Not Supplant Earlier Intuitions" in the journal Cognition, Occidental College psychologists Andrew Shtulman and Joshua Valcarcel found that subjects more quickly verified the validity of scientific statements when those statements agreed with their prior naive beliefs. Contradictory scientific statements were processed more slowly and less accurately, suggesting that "naive theories survive the acquisition of a mutually incompatible scientific theory, coexisting with that theory for many years to follow."
Cognitive dissonance may also be at work in the compartmentalization of beliefs. In the 2010 article “When in Doubt, Shout!” in Psychological Science, Northwestern University researchers David Gal and Derek Rucker found that when subjects' closely held beliefs were shaken, they "engaged in more advocacy of their beliefs ... than did people whose confidence was not undermined." Further, they concluded that enthusiastic evangelists of a belief may in fact be "boiling over with doubt," and thus their persistent proselytizing may be a signal that the belief warrants skepticism.
In addition, our logic-tight compartments are influenced by our moral emotions, which lead us to bend and distort data and evidence through a process called motivated reasoning. The module housing our religious preferences, for example, motivates believers to seek and find facts that support, say, a biblical model of a young earth in which the overwhelming evidence of an old earth must be denied. The module containing our political predilections, if they are, say, of a conservative bent, may motivate procapitalists to believe that any attempt to curtail industrial pollution by way of the threat of global warming must be a liberal hoax.
What can be done to break down the walls separating our logic-tight compartments? In the 2012 paper "Misinformation and Its Correction: Continued Influence and Successful Debiasing" in Psychological Science in the Public Interest, University of Western Australia psychologist Stephan Lewandowsky and his colleagues suggest these strategies: "Consider what gaps in people's mental event models are created by debunking and fill them using an alternative explanation.... To avoid making people more familiar with misinformation..., emphasize the facts you wish to communicate rather than the myth. Provide an explicit warning before mentioning a myth, to ensure that people are cognitively on guard and less likely to be influenced by the misinformation.... Consider whether your content may be threatening to the worldview and values of your audience. If so, you risk a worldview backfire effect."
Debunking by itself is not enough. We must replace bad bunk with sound science.
Culture and Thinking
IN THE SUMMER of 1995, a young graduate student in anthropology at UCLA named Joe Henrich traveled to Peru to carry out some fieldwork among the Machiguenga, an indigenous people who live north of Machu Picchu in the Amazon basin. The Machiguenga had traditionally been horticulturalists who lived in single-family, thatch-roofed houses in small hamlets composed of clusters of extended families. For sustenance, they relied on local game and produce from small-scale farming. They shared with their kin but rarely traded with outside groups.
While the setting was fairly typical for an anthropologist, Henrich’s research was not. Rather than practice traditional ethnography, he decided to run a behavioral experiment that had been developed by economists. Henrich used a “game”—along the lines of the famous prisoner’s dilemma—to see whether isolated cultures shared with the West the same basic instinct for fairness. In doing so, Henrich expected to confirm one of the foundational assumptions underlying such experiments, and indeed underpinning the entire fields of economics and psychology: that humans all share the same cognitive machinery—the same evolved rational and psychological hardwiring.
The test that Henrich introduced to the Machiguenga was called the ultimatum game. The rules are simple: in each game there are two players who remain anonymous to each other. The first player is given an amount of money, say $100, and told that he has to offer some of the cash, in an amount of his choosing, to the other subject. The second player can accept or refuse the split. But there’s a hitch: players know that if the recipient refuses the offer, both leave empty-handed. North Americans, who are the most common subjects for such experiments, usually offer a 50-50 split when on the giving end. When on the receiving end, they show an eagerness to punish the other player for uneven splits at their own expense. In short, Americans show the tendency to be equitable with strangers—and to punish those who are not.
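Because the rules are so simple, a single round of the game fits in a few lines of code. The sketch below is only illustrative: the payoff logic follows the rules just described, but the offers and acceptance thresholds are invented stand-ins for the behavioral tendencies discussed here, not figures from Henrich’s fieldwork.

```python
# One round of the ultimatum game. Offers and thresholds below are invented
# stand-ins for the tendencies described in the text, not fieldwork data.

def play_ultimatum(pot, offer, min_acceptable):
    """The proposer offers `offer` out of `pot`; the responder accepts if the
    offer meets their threshold. If rejected, both players get nothing."""
    if offer >= min_acceptable:
        return pot - offer, offer   # (proposer payoff, responder payoff)
    return 0, 0

pot = 100

# A stylized North American pairing: a near-even offer, readily accepted.
print(play_ultimatum(pot, offer=50, min_acceptable=30))   # (50, 50)

# A low offer facing a responder willing to punish unfairness at a cost.
print(play_ultimatum(pot, offer=15, min_acceptable=30))   # (0, 0)

# A responder who, like the Machiguenga players described below, takes any
# positive amount rather than give up free money to punish the proposer.
print(play_ultimatum(pot, offer=15, min_acceptable=1))    # (85, 15)
```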
Among the Machiguenga, word quickly spread of the young, square-jawed visitor from America giving away money. The stakes Henrich used in the game with the Machiguenga were not insubstantial—roughly equivalent to the few days’ wages they sometimes earned from episodic work with logging or oil companies. So Henrich had no problem finding volunteers. What he had great difficulty with, however, was explaining the rules, as the game struck the Machiguenga as deeply odd.
When he began to run the game it became immediately clear that Machiguengan behavior was dramatically different from that of the average North American. To begin with, the offers from the first player were much lower. In addition, when on the receiving end of the game, the Machiguenga rarely refused even the lowest possible amount. “It just seemed ridiculous to the Machiguenga that you would reject an offer of free money,” says Henrich. “They just didn’t understand why anyone would sacrifice money to punish someone who had the good luck of getting to play the other role in the game.”
The potential implications of the unexpected results were quickly apparent to Henrich. He knew that a vast amount of scholarly literature in the social sciences—particularly in economics and psychology—relied on the ultimatum game and similar experiments. At the heart of most of that research was the implicit assumption that the results revealed evolved psychological traits common to all humans, never mind that the test subjects were nearly always from the industrialized West. Henrich realized that if the Machiguenga results stood up, and if similar differences could be measured across other populations, this assumption of universality would have to be challenged.
Henrich had thought he would be adding a small branch to an established tree of knowledge. It turned out he was sawing at the very trunk. He began to wonder: What other certainties about “human nature” in social science research would need to be reconsidered when tested across diverse populations?
Henrich soon landed a grant from the MacArthur Foundation to take his fairness games on the road. With the help of a dozen other colleagues he led a study of 14 other small-scale societies, in locales from Tanzania to Indonesia. Differences abounded in the behavior of both players in the ultimatum game. In no society did he find people who were purely selfish (that is, who always offered the lowest amount, and never refused a split), but average offers from place to place varied widely and, in some societies—ones where gift-giving is heavily used to curry favor or gain allegiance—the first player would often make overly generous offers in excess of 60 percent, and the second player would often reject them, behaviors almost never observed among Americans.
The research established Henrich as an up-and-coming scholar. In 2004, he was given the U.S. Presidential Early Career Award for young scientists at the White House. But his work also made him a controversial figure. When he presented his research to the anthropology department at the University of British Columbia during a job interview a year later, he recalls a hostile reception. Anthropology is the social science most interested in cultural differences, but the young scholar’s methods of using games and statistics to test and compare cultures with the West seemed heavy-handed and invasive to some. “Professors from the anthropology department suggested it was a bad thing that I was doing,” Henrich remembers. “The word ‘unethical’ came up.”
So instead of toeing the line, he switched teams. A few well-placed people at the University of British Columbia saw great promise in Henrich’s work and created a position for him, split between the economics department and the psychology department. It was in the psychology department that he found two kindred spirits in Steven Heine and Ara Norenzayan. Together the three set about writing a paper that they hoped would fundamentally challenge the way social scientists thought about human behavior, cognition, and culture.
A MODERN LIBERAL ARTS education gives lots of lip service to the idea of cultural diversity. It’s generally agreed that all of us see the world in ways that are sometimes socially and culturally constructed, that pluralism is good, and that ethnocentrism is bad. But beyond that the ideas get muddy. That we should welcome and celebrate people of all backgrounds seems obvious, but the implied corollary—that people from different ethno-cultural origins have particular attributes that add spice to the body politic—becomes more problematic. To avoid stereotyping, it is rarely stated bluntly just exactly what those culturally derived qualities might be. Challenge liberal arts graduates on their appreciation of cultural diversity and you’ll often find them retreating to the anodyne notion that under the skin everyone is really alike.
If you take a broad look at the social science curriculum of the last few decades, it becomes a little more clear why modern graduates are so unmoored. The last generation or two of undergraduates have largely been taught by a cohort of social scientists busily doing penance for the racism and Eurocentrism of their predecessors, albeit in different ways. Many anthropologists took to the navel gazing of postmodernism and swore off attempts at rationality and science, which were disparaged as weapons of cultural imperialism.
Economists and psychologists, for their part, did an end run around the issue with the convenient assumption that their job was to study the human mind stripped of culture. The human brain is genetically comparable around the globe, it was agreed, so human hardwiring for much behavior, perception, and cognition should be similarly universal. No need, in that case, to look beyond the convenient population of undergraduates for test subjects. A 2008 survey of the top six psychology journals dramatically shows how common that assumption was: more than 96 percent of the subjects tested in psychological studies from 2003 to 2007 were Westerners—with nearly 70 percent from the United States alone. Put another way: 96 percent of human subjects in these studies came from countries that represent only 12 percent of the world’s population.
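A back-of-envelope calculation using only the two percentages quoted above shows how lopsided that sampling is on a per-capita basis (this assumes, as a simplification, uniform sampling within each group):

```python
# Back-of-envelope calculation using the percentages quoted in the text.
subjects_western = 0.96      # share of study subjects from Western countries
population_western = 0.12    # share of world population living in those countries

# Per-capita representation of Westerners versus everyone else among subjects.
western_rate = subjects_western / population_western           # 8.0
rest_rate = (1 - subjects_western) / (1 - population_western)  # about 0.045
print(f"Relative over-representation: {western_rate / rest_rate:.0f}x")   # ~176x
```

In other words, under those figures, a person living in a Western country was roughly 176 times more likely to turn up as a psychology subject than a person living anywhere else.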
Henrich’s work with the ultimatum game was an example of a small but growing countertrend in the social sciences, one in which researchers look straight at the question of how deeply culture shapes human cognition. His new colleagues in the psychology department, Heine and Norenzayan, were also part of this trend. Heine focused on the different ways people in Western and Eastern cultures perceived the world, reasoned, and understood themselves in relationship to others. Norenzayan’s research focused on the ways religious belief influenced bonding and behavior. The three began to compile examples of cross-cultural research that, like Henrich’s work with the Machiguenga, challenged long-held assumptions of human psychological universality.
Some of that research went back a generation. It was in the 1960s, for instance, that researchers discovered that aspects of visual perception were different from place to place. One of the classics of the literature, the Müller-Lyer illusion, showed that where you grew up would determine to what degree you would fall prey to the illusion that two lines of identical length are different in length.
Researchers found that Americans perceive the line with the ends feathered outward as being longer than the line with the arrow tips. San foragers of the Kalahari, on the other hand, were more likely to see the lines as they are: equal in length. Subjects from more than a dozen cultures were tested, and Americans were at the far end of the distribution—seeing the illusion more dramatically than all others.
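The original illustration isn't reproduced here, so the short sketch below recreates the stimulus: two horizontal shafts of identical length, one ending in inward-pointing arrow tips, the other in outward-feathered fins. (The use of matplotlib, and all of the dimensions, are arbitrary choices for illustration.)

```python
# Recreates the Müller-Lyer stimulus described above: two shafts of equal
# length, one ending in arrow tips, the other in outward-feathered fins.
import matplotlib.pyplot as plt

def draw_line_with_fins(ax, y, fin_dx, label):
    ax.plot([0, 10], [y, y], color="black")            # the shaft: always length 10
    for x, inward in ((0, 1), (10, -1)):               # fins at both ends
        for dy in (0.6, -0.6):
            ax.plot([x, x + inward * fin_dx], [y, y + dy], color="black")
    ax.text(-2, y, label, ha="right", va="center")

fig, ax = plt.subplots(figsize=(7, 3))
draw_line_with_fins(ax, y=2, fin_dx=1.5, label="arrow tips")        # fins point inward
draw_line_with_fins(ax, y=0, fin_dx=-1.5, label="feathered ends")   # fins point outward
ax.set_xlim(-7, 13)
ax.set_ylim(-1.5, 3.5)
ax.axis("off")
plt.show()
```

Both shafts are drawn at exactly ten units, yet most Western viewers will see the feathered one as longer.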
More recently psychologists had challenged the universality of research done in the 1950s by pioneering social psychologist Solomon Asch. Asch had discovered that test subjects were often willing to make incorrect judgments on simple perception tests to conform with group pressure. When the test was performed across 17 societies, however, it turned out that group pressure had a range of influence. Americans were again at the far end of the scale, in this case showing the least tendency to conform to group belief.
As Heine, Norenzayan, and Henrich furthered their search, they began to find research suggesting wide cultural differences almost everywhere they looked: in spatial reasoning, the way we infer the motivations of others, categorization, moral reasoning, the boundaries between the self and others, and other arenas. These differences, they believed, were not genetic. The distinct ways Americans and Machiguengans played the ultimatum game, for instance, didn’t arise because they had differently evolved brains. Rather, Americans, without fully realizing it, were manifesting a psychological tendency shared with people in other industrialized countries that had been refined and handed down through thousands of generations in ever more complex market economies. When people are constantly doing business with strangers, it helps when they have the desire to go out of their way (with a lawsuit, a call to the Better Business Bureau, or a bad Yelp review) when they feel cheated. Because Machiguengan culture had a different history, their gut feeling about what was fair was distinctly their own. In the small-scale societies with a strong culture of gift-giving, yet another conception of fairness prevailed. There, generous financial offers were turned down because people’s minds had been shaped by a cultural norm that taught them that the acceptance of generous gifts brought burdensome obligations. Our economies hadn’t been shaped by our sense of fairness; it was the other way around.
The growing body of cross-cultural research that the three researchers were compiling suggested that the mind’s capacity to mold itself to cultural and environmental settings was far greater than had been assumed. The most interesting thing about cultures may not be in the observable things they do—the rituals, eating preferences, codes of behavior, and the like—but in the way they mold our most fundamental conscious and unconscious thinking and perception.
For instance, the different ways people perceive the Müller-Lyer illusion likely reflect lifetimes spent in different physical environments. American children, for the most part, grow up in box-shaped rooms of varying dimensions. Surrounded by carpentered corners, their visual perception adapts to this strange new environment (strange and new in terms of human history, that is) by learning to perceive converging lines in three dimensions.
When unconsciously translated into three dimensions, the line with the outward-feathered ends appears farther away, and the brain therefore judges it to be longer. The more time one spends in natural environments, where there are no carpentered corners, the less one sees the illusion.
As the three continued their work, they noticed something else that was remarkable: again and again one group of people appeared to be particularly unusual when compared to other populations—with perceptions, behaviors, and motivations that were almost always sliding down one end of the human bell curve.
In the end they titled their paper “The Weirdest People in the World?” By “weird” they meant both unusual and Western, Educated, Industrialized, Rich, and Democratic. It is not just our Western habits and cultural preferences that are different from the rest of the world, it appears. The very way we think about ourselves and others—and even the way we perceive reality—makes us distinct from other humans on the planet, not to mention from the vast majority of our ancestors. Among Westerners, the data showed that Americans were often the most unusual, leading the researchers to conclude that “American participants are exceptional even within the unusual population of Westerners—outliers among outliers.”
Given the data, they concluded that social scientists could not possibly have picked a worse population from which to draw broad generalizations. Researchers had been doing the equivalent of studying penguins while believing that they were learning insights applicable to all birds.
NOT LONG AGO I met Henrich, Heine, and Norenzayan for dinner at a small French restaurant in Vancouver, British Columbia, to hear about the reception of their weird paper, which was published in the prestigious journal Behavioral and Brain Sciences in 2010. The trio of researchers are young—as professors go—good-humored family men. They recalled that they were nervous as the publication time approached. The paper basically suggested that much of what social scientists thought they knew about fundamental aspects of human cognition was likely only true of one small slice of humanity. They were making such a broadside challenge to whole libraries of research that they steeled themselves to the possibility of becoming outcasts in their own fields.
“We were scared,” admitted Henrich. “We were warned that a lot of people were going to be upset.”
“We were told we were going to get spit on,” interjected Norenzayan.
“Yes,” Henrich said. “That we’d go to conferences and no one was going to sit next to us at lunchtime.”
Interestingly, they seemed much less concerned that they had used the pejorative acronym WEIRD to describe a significant slice of humanity, although they did admit that they could only have done so to describe their own group. “Really,” said Henrich, “the only people we could have called weird are represented right here at this table.”
Still, I had to wonder whether describing the Western mind, and the American mind in particular, as weird suggested that our cognition is not just different but somehow malformed or twisted. In their paper the trio pointed out cross-cultural studies that suggest that the “weird” Western mind is the most self-aggrandizing and egotistical on the planet: we are more likely to promote ourselves as individuals versus advancing as a group. WEIRD minds are also more analytic, possessing the tendency to telescope in on an object of interest rather than understanding that object in the context of what is around it.
The WEIRD mind also appears to be unique in terms of how it comes to understand and interact with the natural world. Studies show that Western urban children grow up so closed off in man-made environments that their brains never form a deep or complex connection to the natural world. Studying children from the U.S., researchers have proposed a developmental timeline for what is called “folkbiological reasoning.” These studies posit that it is not until children are around 7 years old that they stop projecting human qualities onto animals and begin to understand that humans are one animal among many. Compared with children in Yucatec Maya communities in Mexico, however, Western urban children appear to be developmentally delayed in this regard. Children who grow up constantly interacting with the natural world are much less likely to anthropomorphize other living things into late childhood.
Given that people living in WEIRD societies don’t routinely encounter or interact with animals other than humans or pets, it’s not surprising that they end up with a rather cartoonish understanding of the natural world. “Indeed,” the report concluded, “studying the cognitive development of folkbiology in urban children would seem the equivalent of studying ‘normal’ physical growth in malnourished children.”
During our dinner, I admitted to Heine, Henrich, and Norenzayan that the idea that I can only perceive reality through a distorted cultural lens was unnerving. For me the notion raised all sorts of metaphysical questions: Is my thinking so strange that I have little hope of understanding people from other cultures? Can I mold my own psyche or the psyches of my children to be less WEIRD and more able to think like the rest of the world? If I did, would I be happier?
Henrich reacted with mild concern that I was taking this research so personally. He had not intended, he told me, for his work to be read as postmodern self-help advice. “I think we’re really interested in these questions for the questions’ sake,” he said.
The three insisted that their goal was not to say that one culturally shaped psychology was better or worse than another—only that we’ll never truly understand human behavior and cognition until we expand the sample pool beyond its current small slice of humanity. Despite these assurances, however, I found it hard not to read a message between the lines of their research. When they write, for example, that weird children develop their understanding of the natural world in a “culturally and experientially impoverished environment” and that they are in this way the equivalent of “malnourished children,” it’s difficult to see this as a good thing.
THE TURN THAT HENRICH, Heine, and Norenzayan are asking social scientists to make is not an easy one: accounting for the influence of culture on cognition will be a herculean task. Cultures are not monolithic; they can be endlessly parsed. Ethnic backgrounds, religious beliefs, economic status, parenting styles, rural upbringing versus urban or suburban—there are hundreds of cultural differences that individually and in endless combinations influence our conceptions of fairness, how we categorize things, our method of judging and decision making, and our deeply held beliefs about the nature of the self, among other aspects of our psychological makeup.
We are just at the beginning of learning how these fine-grained cultural differences affect our thinking. Recent research has shown that people in “tight” cultures, those with strong norms and low tolerance for deviant behavior (think India, Malaysia, and Pakistan), develop higher impulse control and more self-monitoring abilities than those from other places. Men raised in the honor culture of the American South have been shown to experience much larger surges of testosterone after insults than do Northerners. Research published late last year suggested psychological differences at the city level too. Compared to San Franciscans, Bostonians’ internal sense of self-worth is more dependent on community status and financial and educational achievement. “A cultural difference doesn’t have to be big to be important,” Norenzayan said. “We’re not just talking about comparing New York yuppies to the Dani tribesmen of Papua New Guinea.”
As Norenzayan sees it, the last few generations of psychologists have suffered from “physics envy,” and they need to get over it. The job, experimental psychologists often assumed, was to push past the content of people’s thoughts and see the underlying universal hardware at work. “This is a deeply flawed way of studying human nature,” Norenzayan told me, “because the content of our thoughts and their process are intertwined.” In other words, if human cognition is shaped by cultural ideas and behavior, it can’t be studied without taking into account what those ideas and behaviors are and how they are different from place to place.
This new approach suggests the possibility of reverse-engineering psychological research: look at cultural content first; cognition and behavior second. Norenzayan’s recent work on religious belief is perhaps the best example of the intellectual landscape that is now open for study. When Norenzayan became a student of psychology in 1994, four years after his family had moved from Lebanon to America, he was excited to study the effect of religion on human psychology. “I remember opening textbook after textbook and turning to the index and looking for the word ‘religion,’ ” he told me. “Again and again the very word wouldn’t be listed. This was shocking. How could psychology be the science of human behavior and have nothing to say about religion? Where I grew up you’d have to be in a coma not to notice the importance of religion on how people perceive themselves and the world around them.”
Norenzayan became interested in how certain religious beliefs, handed down through generations, may have shaped human psychology to make possible the creation of large-scale societies. He has suggested that there may be a connection between the growth of religions that believe in “morally concerned deities”—that is, a god or gods who care if people are good or bad—and the evolution of large cities and nations. To be cooperative in large groups of relative strangers, in other words, might have required the shared belief that an all-powerful being was forever watching over your shoulder.
If religion was necessary in the development of large-scale societies, can large-scale societies survive without religion? Norenzayan points to parts of Scandinavia with atheist majorities that seem to be doing just fine. They may have climbed the ladder of religion and effectively kicked it away. Or perhaps, after a thousand years of religious belief, the idea of an unseen entity always watching your behavior remains in our culturally shaped thinking even after the belief in God dissipates or disappears.
Why, I asked Norenzayan, if religion might have been so central to human psychology, have researchers not delved into the topic? “Experimental psychologists are the weirdest of the weird,” said Norenzayan. “They are almost the least religious academics, next to biologists. And because academics mostly talk amongst themselves, they could look around and say, ‘No one who is important to me is religious, so this must not be very important.’” Indeed, almost every major theorist on human behavior in the last 100 years predicted that it was just a matter of time before religion was a vestige of the past. But the world persists in being a very religious place.
HENRICH, HEINE, AND NORENZAYAN’S FEAR of being ostracized after the publication of the WEIRD paper turned out to be misplaced. Response to the paper, both published and otherwise, has been nearly universally positive, with more than a few of their colleagues suggesting that the work will spark fundamental changes. “I have no doubt that this paper is going to change the social sciences,” said Richard Nisbett, an eminent psychologist at the University of Michigan. “It just puts it all in one place and makes such a bold statement.”
More remarkable still, after reading the paper, academics from other disciplines began to come forward with their own mea culpas. Commenting on the paper, two brain researchers from Northwestern University argued that the nascent field of neuroimaging had made the same mistake as psychologists, noting that 90 percent of neuroimaging studies were performed in Western countries. Researchers in motor development similarly suggested that their discipline’s body of research ignored how different child-rearing practices around the world can dramatically influence states of development. Two psycholinguistics professors suggested that their colleagues had also made the same mistake: blithely assuming human homogeneity while focusing their research primarily on one rather small slice of humanity.
At its heart, the challenge of the WEIRD paper is not simply to the field of experimental human research (do more cross-cultural studies!); it is a challenge to our Western conception of human nature. For some time now, the most widely accepted answer to the question of why humans, among all animals, have so successfully adapted to environments across the globe is that we have big brains with the ability to learn, improvise, and problem-solve.
Henrich has challenged this “cognitive niche” hypothesis with the “cultural niche” hypothesis. He notes that the amount of knowledge in any culture is far greater than the capacity of individuals to learn or figure it all out on their own. He suggests that individuals tap that cultural storehouse of knowledge simply by mimicking (often unconsciously) the behavior and ways of thinking of those around them. We shape a tool in a certain manner, adhere to a food taboo, or think about fairness in a particular way, not because we individually have figured out that behavior’s adaptive value, but because we instinctively trust our culture to show us the way. When Henrich asked Fijian women why they avoided certain potentially toxic fish during pregnancy and breastfeeding, he found that many didn’t know or had fanciful reasons. Regardless of their personal understanding, by mimicking this culturally adaptive behavior they were protecting their offspring. The unique trick of human psychology, these researchers suggest, might be this: our big brains are evolved to let local culture lead us in life’s dance.
The applications of this new way of looking at the human mind are still in the offing. Henrich suggests that his research about fairness might first be applied to anyone working in international relations or development. People are not “plug and play,” as he puts it, and you cannot expect to drop a Western court system or form of government into another culture and expect it to work as it does back home. Those trying to use economic incentives to encourage sustainable land use will similarly need to understand local notions of fairness to have any chance of influencing behavior in predictable ways.
Because of our peculiarly Western way of thinking of ourselves as independent of others, this idea of the culturally shaped mind doesn’t go down very easily. Perhaps the richest and most established vein of cultural psychology—that which compares Western and Eastern concepts of the self—goes to the heart of this problem. Heine has spent much of his career following the lead of a seminal paper published in 1991 by Hazel Rose Markus, of Stanford University, and Shinobu Kitayama, who is now at the University of Michigan. Markus and Kitayama suggested that different cultures foster strikingly different views of the self, particularly along one axis: some cultures regard the self as independent from others; others see the self as interdependent. The interdependent self—which is more the norm in East Asian countries, including Japan and China—connects itself with others in a social group and favors social harmony over self-expression. The independent self—which is most prominent in America—focuses on individual attributes and preferences and thinks of the self as existing apart from the group.
[Figure: the classic “rod and frame” task. Is the line in the center vertical?]
That we in the West develop brains that are wired to see ourselves as separate from others may also be connected to differences in how we reason, Heine argues. Unlike the vast majority of the world, Westerners (and Americans in particular) tend to reason analytically as opposed to holistically. That is, the American mind strives to figure out the world by taking it apart and examining its pieces. Show a Japanese person and an American the same cartoon of an aquarium, and the American will remember details mostly about the moving fish while the Japanese observer will likely later be able to describe the seaweed, the bubbles, and other objects in the background. Shown another way, in a different test, analytic Americans will do better on something called the “rod and frame” task, where one has to judge whether a line is vertical even though the frame around it is skewed. Americans see the line as apart from the frame, just as they see themselves as apart from the group.
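As with the Müller-Lyer figure, the rod-and-frame illustration isn't reproduced here, so below is a minimal sketch of the stimulus. The 15-degree tilt and the dimensions are arbitrary choices for illustration, not values from any particular experiment.

```python
# Rough sketch of a rod-and-frame stimulus: a truly vertical rod inside a
# square frame rotated by 15 degrees. Tilt and dimensions are arbitrary.
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from matplotlib.transforms import Affine2D

fig, ax = plt.subplots(figsize=(4, 4))

# The tilted frame: a square rotated about its center (the origin).
frame = Rectangle((-2, -2), 4, 4, fill=False, linewidth=2)
frame.set_transform(Affine2D().rotate_deg(15) + ax.transData)
ax.add_patch(frame)

# The rod: genuinely vertical, despite the tilted surround.
ax.plot([0, 0], [-1.2, 1.2], color="black", linewidth=3)

ax.set_xlim(-4, 4)
ax.set_ylim(-4, 4)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```

The task is simply to say whether the rod is vertical; the skewed frame is what makes the judgment hard.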
Heine and others suggest that such differences may be the echoes of cultural activities and trends going back thousands of years. Whether you think of yourself as interdependent or independent may depend on whether your distant ancestors farmed rice (which required a great deal of shared labor and group cooperation) or herded animals (which rewarded individualism and aggression). Heine points to Nisbett at Michigan, who has argued that the analytic/holistic dichotomy in reasoning styles can be clearly seen, respectively, in Greek and Chinese philosophical writing dating back 2,500 years. These psychological trends and tendencies may echo down generations, hundreds of years after the activity or situation that brought them into existence has disappeared or fundamentally changed.
And here is the rub: the culturally shaped analytic/individualistic mind-sets may partly explain why Western researchers have so dramatically failed to take into account the interplay between culture and cognition. In the end, the goal of boiling down human psychology to hardwiring is not surprising given the type of mind that has been designing the studies. Taking an object (in this case the human mind) out of its context is, after all, what distinguishes the analytic reasoning style prevalent in the West. Similarly, we may have underestimated the impact of culture because the very ideas of being subject to the will of larger historical currents and of unconsciously mimicking the cognition of those around us challenges our Western conception of the self as independent and self-determined. The historical missteps of Western researchers, in other words, have been the predictable consequences of the WEIRD mind doing the thinking.
We Respond To Individuals, Not Mass Humanity
Rob Portman, Republican senator from Ohio and one-time contender for Romney’s would-be VP slot, announced on Friday that he has reversed his very public stance against gay marriage. As the well-known conservative stated in an Op-Ed piece on Friday, he now believes that “if two people are prepared to make a lifetime commitment to love and care for each other in good times and in bad, the government shouldn’t deny them the opportunity to get married.”
What’s the reason behind this seemingly sudden change of heart? According to Portman, it can all be credited to his son, Will — an openly gay 21-year-old man who came out of the closet to his conservative father two years ago.
This seems very similar to other political idiosyncrasies that we’ve seen in the past when there are family members involved. Dick Cheney’s support of gay marriage within an otherwise conservative platform is largely due to his love for his lesbian daughter, Mary. And these two politicians’ positions on gay marriage are not all that different from the political views of Sarah Palin, the mother of a disabled child, who is opposed to all government spending — except, of course, when that spending is earmarked for programs benefiting disabled children.
These views have often been criticized by the media, given snarky names, and demeaned as narcissistic or self-centered. And sure, it is certainly possible that these politicians (and the many others like them) are consciously picking and choosing their political platforms in a selfish way to maximize self-interest. However, considering what we know about social psychology, it’s fairly short-sighted to assume that these political about-faces are always the conscious result of these intentionally selfish motives. More likely, they are actually the result of a common psychological phenomenon that impacts all of our decisions — the identifiable victim effect.
Broadly speaking, the identifiable victim effect states one thing: Individual stories will have a far greater sway on our attitudes, intentions, and behavior than any long list of numbers, statistics, and facts. For example, if you see an ad for Save the Children with a picture of a single, emaciated Malian child named Rokia, you will donate significantly more to the charity (about 50% more, on average) than if you see a message listing the statistics about how many people are starving throughout all of Africa.
So why do individual stories have so much greater a pull on us than statistics — especially when, rationally, learning about millions of people being affected by something should sway your attitudes and actions much more than hearing about just one?
First of all, these individual stories are vivid. Stories about people are graphic, full of individual details, and typically involve strong visual imagery. Similarly, our experiences with close loved ones are vivid; we know a lot about their lives and individual personalities, and we come into frequent contact with them. Decades of research has informed us that vivid information has a much stronger influence on what people think and believe than dull, boring statistics. Even if the facts themselves are supposed to be “shocking,” numbers on a page will never hit us at the same vivid level as a picture of a wounded puppy or a video of a crying little girl. Pure information will never really impact us in the same way that seeing something happen to our friends or loved ones will.
Secondly, in addition to being vivid and full of graphic details, individual accounts are emotional, and emotion is an invaluable component of persuasion. For example, men and women asked to donate money to support the charity March of Dimes would consistently donate more money if they were asked outside of a church as they walked in to confession (aka while they felt fairly guilty) than if they were asked when they were walking out of confession (aka when their guilt had already been resolved). We use emotions as a cue for what we should think and do. If you feel guilty? Do something good to resolve it. If you feel happy? Do something good to maintain that positive state. Without even realizing it, our emotions will sway our attitudes and actions — and no facts or numbers will manage to hit our emotions as strongly as an individual story of heartache and woe, or the thoughts, feelings, and lives of the people that we love and care about. In fact, as I’ve written about before, there are entire lines of research devoted to informing us about all of the ways in which our emotions impact our moral and political judgments. (Spoiler Alert: They impact them a lot.)
So what does a bunch of research on Mali, March of Dimes, and starving children have to do with Portman’s new attitude towards gay marriage? Well, it will all click together once we realize that the broad logic underlying the identifiable victim effect is not necessarily about the presence of “victims” themselves. Rather, the main point is that it’s harder to work up the empathy and the emotional connection to care about numbers and figures to the point where they will actually sway your opinions and political actions. Plenty of journalists have remarked recently that Portman is showing a lack of empathy because he couldn’t bring himself to care about other people’s children. Maybe so — and, certainly, there are plenty of people whose attitudes towards important political issues aren’t solely determined by the lives and interests of their friends and family. After all, you can certainly be in favor of legalizing gay marriage without being closely related to someone who is gay. But even so, the fact remains that it’s much easier to become emotionally invested in a cause when there’s a name and a face tied to it — especially when that name and that face belong to someone who is particularly close to you. The more vividly and emotionally you can picture a single person being affected by an issue, the more emotionally invested you become, and the more likely that person is to sway your attitudes.
What this means is that the tendency for people like Portman and Cheney to only care about gay marriage once they have children who are affected, or the tendency for people like Palin to support government spending on a cause that would impact her own son, is not out of the ordinary. In fact, it’s a core aspect of human cognitive biases. Of course issues that impact your own family members are going to have a greater pull on your beliefs and political attitudes. They are going to involve individual people, they are going to be more vivid, and they will be more emotional. It doesn’t have to be knowingly selfish, and it doesn’t have to involve conscious self-interest (although it could). But, to give these three (and the many others like them) the benefit of the doubt, it could simply be that they, like most others, don’t receive the emotional pull from numbers and figures that they do from close family members.
The point is, regardless of political affiliation, it’s not necessarily a sign of narcissism or selfishness if someone is susceptible to the effects of identifiable victims or individual stories. It’s just a cognitive bias that all of us face, which we need to be aware of if we wish to understand why people make certain exceptions to their political beliefs and how we can get people to care about certain political issues if they are not closely related to anyone being affected by them. It’s certainly not out of the ordinary for people to fall victim to identifiable victims. So, if you are a Republican and you wish to defend Portman from people claiming that he lacks empathy, it should comfort you to know that his empathic response is actually incredibly normal. And, if you are a Democrat and you are arguing that it angers you when politicians like Portman only hold empathic views for issues that personally impact them, you should know that it’s now your job, if you wish to be an effective persuader, to figure out how to create identifiable stories and vivid accounts for the issues that you care about, rather than relying on numbers and figures and wondering why they don’t evoke a more powerful reaction from politicians.
After all, the identifiable victim effect isn’t going anywhere anytime soon. Even Mother Teresa fell victim to it. As she put it, “If I look at the mass, I will never act. If I look at the one, I will.”
Of course, Stalin also noted that “the death of a single Russian soldier is a tragedy, [whereas] a million deaths is a statistic.” But, eh — let’s stick with quoting Mother Teresa.
Many people are wondering what this means for the future of same-sex marriage in the United States. Why exactly is this such a contentious issue, and why do Americans’ opinions seem to differ so greatly? When it comes to marriage equality, why can’t we all just get along?
Where Does a Same-Sex Marriage Attitude Come From?
The reason why only nine states in the USA have legalized same-sex marriage most likely has something to do with the large number of state legislators (and, presumably, American citizens) who are against it. But other than the obvious factors (like religion and age), what else might make someone especially likely to reject the idea of same-sex marriage?
We might find some answers by looking at empirical research on the psychological roots of political ideology. Conservative social attitudes (which typically include an opposition to same-sex marriage) are strongly related to preferences for stability, order, and certainty. In fact, research suggests that these attitudes may be part of a compensatory mental process motivated by anxiety; people who feel particularly threatened by uncertainty cope with it by placing great importance on norms, rules, and rigidity. As a result, people who are particularly intolerant of ambiguity, live in unstable circumstances, or simply have an innate need for order, structure, and closure are more likely to hold attitudes that promote rigidity and conventional social norms – meaning that they are most likely to be against same-sex marriage.
What does it mean to be intolerant of ambiguity? Well, would you rather see the world around you as clear and straightforward, or would you rather see everything as complicated and multidimensional? People who fall into the first category are much more likely to want everything in life (including gender roles, interpersonal relationships, and conceptualizations of marriage) to be dichotomous, rigid, and clear-cut. “Ambiguity-intolerant” people are also, understandably, more likely to construe ambiguous situations as particularly threatening. After all, if you are inherently not comfortable with the idea of a complicated, shades-of-gray world, any situation that presents you with this type of uncertainty will be seen as potentially dangerous. This is likely what’s happening when a conservative sees an ambiguous situation (e.g. a same-sex couple’s potential marriage) as a source of threat (e.g. to the sanctity of marriage).
Why Is Attitude Change So Hard?
After reading the section above, you can probably see that there’s a problem with how proponents and opponents of same-sex marriage view each other’s positions. The problem is not really that there’s one way to see the issue and the other side simply isn’t seeing it that way; it’s that the two sides are focusing on entirely different things.
Overall, liberal ideology paints society as inherently improvable, and liberals are therefore motivated by a desire for eventual societal equality; conservative ideology paints society as inherently hierarchical, and conservatives are therefore motivated by a desire to make the world as stable and safe as possible. So while the liberals are banging their heads against the wall wondering why conservatives are against human rights, the conservatives are sitting on the other side wondering why on earth the liberals would want to create chaos, disorder, and dangerous instability. It boils down to a focus on equality versus a focus on order. Without understanding that, no one’s ever going to understand what the other side wants to know and hear, and all sides’ arguments will fall on deaf ears.*
But there’s another mental process at play. When someone has a strong attitude about something (liberal or conservative), the mind works very hard to protect it. When faced with information about a given topic, people pay attention to (and remember) the arguments that strengthen their attitudes, and they ignore, forget, or misremember any arguments that go against them. Even when faced with evidence that flatly contradicts a given attitude, people will almost always react by simply becoming more polarized; they will leave the interaction even more sure that their attitude is correct than they were before. So even if each side understood how to frame their arguments – even if liberals pointed out the ways in which same-sex marriage rights would help stabilize the economy, or conservatives argued to liberals that they could provide equal rights through civil unions rather than through marriage – it’s still very unlikely that either side would successfully change anyone’s attitude about anything.
If Attitudes Are So Stubborn, How Have They Changed In The Past?
So how did it happen? As one specific example, how did New York end up legalizing same-sex marriage in June 2011 with a 33-29 vote?
I’d wager a guess that part of it had to do with the five other states that had legalized same-sex marriage by that point and seen their heterosexual marriages remain just as sacred as they ever were before. As same-sex marriage becomes more commonplace (and heterosexual marriages remain unaffected), it will also become less threatening; as it becomes less threatening, it will evoke less of a threat response from people who don’t deal well with ambiguity.
But I can offer another serious contender: Amendment S5857-2011.
This amendment, which states that religious institutions opposed to same-sex marriage do not have to perform them, was passed shortly before the same-sex marriage legalization bill. There’s a very powerful social norm at work in our interactions, and it shapes how we respond to people’s attempts at persuasion: When we feel like someone has conceded something to us, we feel pressured to concede something back. This is called a reciprocal concession.
Let’s say a Girl Scout comes to your door and asks if she can sell you ten boxes of cookies. You feel bad saying no, but your waistline doesn’t need the cookies and your wallet doesn’t need the expense. After you refuse the sale, she responds by asking if you’d like to purchase five boxes instead. You then change your mind and agree to buy five boxes; after all, if the Girl Scout was willing to concede those five boxes of cookies, you feel pressured to concede something in return – like some of your money. That’s the power of reciprocal concessions.
This, in my opinion, is a good contender to explain what happened in the New York State Senate back in 2011. Going into the final vote, the count had been dead even: 31 for, 31 against. Passing the Amendment was a concession from the pro-same-sex-marriage side, one that, according to the logic of reciprocal concessions, should have encouraged no-voting senators to reciprocate by conceding their votes. For two of them, it worked, turning that 31-31 split into the final 33-29 tally.
So now we’ve seen that personality, ideology, and attitudes can all play a role in our stances on same-sex marriage, and that votes might even swing because of techniques that we could have learned from our local Girl Scouts. This means that it’s absolutely essential for everyone involved in the debate this week to understand that we won’t all respond to the same types of arguments, reasoning, or pleas. Rather, it is imperative that we consider how a focus on equality or stability might shape what information we pay attention to, and what values we deem most important.
* I recognize that these are generalizations, and these descriptions do not accurately represent every liberal person and every conservative person. I also recognize that individual political attitudes are more complex than this distinction may make them out to be, and that religious ideology plays a very strong role as well. However, the focus on equality vs. stability is, at its core, the fundamental difference between liberal and conservative political ideology.
Repair Stroke Damage
Mice can recover from physically debilitating strokes that damage the primary motor cortex, the region of the brain that controls most movement in the body, if they are quickly subjected to physical conditioning that rapidly “rewires” a different part of the brain to take over the lost function, Johns Hopkins researchers have found. The key, they report, is treatment that is precise, intense and early.
The researchers first trained normal but hungry mice to reach for and grab food pellets in a precise way that avoided spilling them, rewarding the animals with the pellets. The mice reached maximum accuracy after seven to nine days of training.
Then the researchers induced small experimental strokes that left the mice with damage to the primary motor cortex. Predictably, the reaching and grasping precision disappeared, but a week of retraining, begun just 48 hours after the stroke, led the mice to perform the task again with a degree of precision comparable to before the stroke.
Subsequent brain studies showed that although many nerve cells in the primary motor cortex were permanently damaged by the stroke, a different part of the brain called the medial premotor cortex adapted to control reaching and grasping.
“The function of the medial premotor cortex is not well understood, but in this case it seemed to take over the functions associated with the reach-and-grab task in these experimental mice,” said study leader Steven R. Zeiler, M.D., Ph.D., an assistant professor of neurology at the Johns Hopkins University School of Medicine.
The researchers also report that otherwise healthy mice trained to reach and grasp pellets did not lose this ability after experiencing a stroke in the medial premotor cortex, which suggests that this part of the brain typically plays no role in those activities and that untapped reserves of brain plasticity exist that might be exploited to help human stroke victims.
Zeiler said another key finding in his research team’s mouse model was a reduction of the level of parvalbumin, a protein that marks the identity and activity of inhibitory neurons that keep the brain’s circuitry from overloading. With lower levels of parvalbumin in the medial premotor cortex, it appears the “brakes” are essentially off, allowing for the kind of activity required to reorganize and rewire the brain to take on new functions — in this case the ability to reach and grasp.
To prove that the learned functions had moved to the medial premotor cortex in the mice, the researchers induced strokes there. Again, the new skills were lost. And again, the mice could be retrained.
The research team’s next steps with their mouse model include evaluating the effect of drugs and timing of physical rehab on long-term recovery. The research could offer insight into whether humans should receive earlier and more aggressive rehab.
As many as 60 percent of stroke patients are currently left with diminished use of an arm or leg, and one-third need placement in a long-term care facility.
There's A Lot We Don't Know
In the early nineteen-nineties, David Poeppel, then a graduate student at M.I.T. (and a classmate of mine), discovered an astonishing thing. He was studying the neurophysiological basis of speech perception, and a new technique had just come into vogue, called positron emission tomography (PET). About half a dozen PET studies of speech perception had been published, all in top journals, and David tried to synthesize them, essentially by comparing which parts of the brain were said to be active during the processing of speech in each of the studies. What he found, shockingly, was that there was virtually no agreement. Every new study had been published with great fanfare, but collectively they were so inconsistent they seemed to add up to nothing. It was like six different witnesses describing a crime in six different ways.
This was terrible news for neuroscience—if six studies led to six different answers, why should anybody believe anything that neuroscientists had to say? Much hand-wringing followed. Was it because PET, which involves injecting a radioactive tracer into the brain, was unreliable? Were the studies themselves somehow sloppy? Nobody seemed to know.
And then, surprisingly, the field prospered. Brain imaging became more, not less, popular. The technique of PET was replaced with the more flexible technique of functional magnetic resonance imaging (fMRI), which allowed scientists to study people’s brains without the use of risky radioactive tracers, and to conduct longer studies that collected more data and yielded more reliable results. Experimental methods gradually became more careful. As fMRI machines became more widely available, and methods became more standardized and refined, researchers finally started to find a degree of consensus between labs.
Meanwhile, neuroscience started to go public, in a big way. Fancy color pictures of brains in action became a fixture in media accounts of the human mind and lulled people into a false sense of comprehension. (In a feature for the magazine titled “Duped,” Margaret Talbot described research at Yale showing that inserting neurotalk into papers made them more convincing.) Brain imaging, which was scarcely on the public’s radar in 1990, became the most prestigious way of understanding human mental life. The prefix “neuro” showed up everywhere: neurolaw, neuroeconomics, neuropolitics. Neuroethicists wondered whether you could alter someone’s prison sentence based on the size of their neocortex.
And then, boom! After two decades of almost complete dominance, a few bright souls started speaking up, asking: Are all these brain studies really telling us as much as we think they are? A terrific but unheralded book published last year, “Neuromania,” worried about our growing obsession with brain imaging. A second book, by Raymond Tallis, published this year, invoked the same term and made similar arguments. In the book “Out of Our Heads,” the philosopher Alva Noe wrote, “It is easy to overlook the fact that images… made by fMRI and PET are not actually pictures of the brain in action.” Instead, brain images are elaborate reconstructions that depend on complex mathematical assumptions that can, as one study earlier this year showed, sometimes yield slightly different results when analyzed on different types of computers.
Last week, worries like these, and those of thoughtful blogs like Neuroskeptic and The Neurocritic, finally hit the mainstream, in the form of a blunt New York Times op-ed, in which the journalist Alissa Quart declared, “I applaud the backlash against what is sometimes called brain porn, which raises important questions about this reductionist, sloppy thinking and our willingness to accept seemingly neuroscientific explanations for, well, nearly everything.”
Quart and the growing chorus of neuro-critics are half right: our early-twenty-first-century world truly is filled with brain porn, with sloppy reductionist thinking and an unseemly lust for neuroscientific explanations. But the right solution is not to abandon neuroscience altogether; it’s to better understand what neuroscience can and cannot tell us, and why.
The first and foremost reason why we shouldn’t simply disown neuroscience altogether is an obvious one: if we want to understand our minds, from which all of human nature springs, we must come to grips with the brain’s biology. The second is that neuroscience has already told us a lot, just not the sort of things we may think it has. What gets play in the daily newspaper is usually a study that shows some modest correlation between brain activity and a sexy aspect of human behavior, with headlines like “FEMALE BRAIN MAPPED IN 3D DURING ORGASM” and “THIS IS YOUR BRAIN ON POKER.”
But a lot of those reports are based on a false premise: that the neural tissue that lights up most in the brain is the only tissue involved in some cognitive function. The brain, though, rarely works that way. Most of the interesting things that the brain does involve many different pieces of tissue working together. Saying that emotion is in the amygdala, or that decision-making is in the prefrontal cortex, is at best a shorthand, and a misleading one at that. Different emotions, for example, rely on different combinations of neural substrates. The act of comprehending a sentence likely involves Broca’s area (the language-related spot on the left side of the brain that they may have told you about in college), but it also draws on the parts of the brain in the temporal lobe that analyze acoustic signals, and parts of the sensorimotor cortex and the basal ganglia become active as well. (In congenitally blind people, some of the visual cortex also plays a role.) It’s not one spot, it’s many, some of which may be less active but still vital, and what really matters is how vast networks of neural tissue work together.
The smallest element of a brain image that an fMRI can pick out is something called a voxel. But voxels are much larger than neurons, and, in the long run, the best way to understand the brain is probably not by asking which particular voxels are most active in a given process. It will instead come from asking how the many neurons work together within those voxels. And for that, fMRI may turn out not to be the best technique, despite its current convenience. It may ultimately serve instead as the magnifying glass that leads us to the microscope we really need. If most of the action in the brain lies at the level of neurons rather than voxels or brain regions (which themselves often contain hundreds or thousands of voxels), we may need new methods, like optogenetics or automated, robotically guided tools for studying individual neurons; my own best guess is that we will need many more insights from animal brains before we can fully grasp what happens in human brains. Scientists are also still struggling to construct theories about how arrays of individual neurons relate to complex behaviors, even in principle. Neuroscience has yet to find its Newton, let alone its Einstein.
But that’s no excuse for giving up. When Darwin wrote “The Origin of Species,” nobody knew what DNA was for, and nobody imagined that we would eventually be sequencing it.
The real problem with neuroscience today isn’t with the science—though plenty of methodological challenges still remain—it’s with the expectations. The brain is an incredibly complex ensemble, with billions of neurons coming into—and out of—play at any given moment. There will eventually be neuroscientific explanations for much of what we do; but those explanations will turn out to be incredibly complicated. For now, our ability to understand how all those parts relate is quite limited, sort of like trying to understand the political dynamics of Ohio from an airplane window above Cleveland.
Which may be why the best neuroscientists today may be among those who get the fewest headlines, like researchers studying the complex dynamics that enter into understanding a single word. As Poeppel says, what we need now is “the meticulous dissection of some elementary brain functions, not ambitious but vague notions like brain-based aesthetics, when we still don’t understand how the brain recognizes something as basic as a straight line.”
The sort of short, simple explanations of complex brain functions that often make for good headlines rarely turn out to be true. But that doesn’t mean that there aren’t explanations to be had; it just means that our brains didn’t evolve to be easily understood.
Robot Therapists For Autistic Kids
The number of children in the United States who are being diagnosed with autism is on the rise.
Traditional therapies to help autistic children develop their social skills can be time-consuming and expensive. Recently, however, researchers at Vanderbilt University in Nashville, in the US state of Tennessee, may have overcome that challenge with the help of a robot.
The pioneering work is still in its earliest stages, but the results appear to be promising.
One of the children involved with the study is three-year-old Aiden. Learning doesn’t come easily for him. As a result of his autism, Aiden often struggles with social interactions. He’s been working with a therapist to address his learning challenges, but he’s also getting help from an unlikely source – a robot named NAO.
NAO is a commercial robot developed in France. What makes this robot different is the “intelligent environment” researchers have built around it, consisting of web cameras and TVs that track Aiden’s head movements and analyse his emotional state.
The information is then transmitted to a computer that programs the robot to respond to Aiden’s needs and instruct him through verbal prompts.
When NAO says, “Look here, Aiden,” incredibly, the boy responds to the command from his robotic “therapist” almost every time. That’s not always the case when Aiden is prompted by a human.
Researchers are still not fully sure why this happens, but they seem convinced that what makes robot therapy so successful is NAO’s ability to communicate with autistic children in a way most humans cannot.
Nilanjan Sarkar, a professor of mechanical and computer engineering at Vanderbilt, led the study.
He told me, “If the robot determines that some of the gestures a child is not able to do, it can correct the gestures in a playful way, having them involved, not as a teacher or student, but as a playmate.”
Sarkar says this seems to be reassuring to autistic children who can quickly become disengaged from a teaching session if they believe they are not measuring up to the expectations of their human therapist.
Sarkar says he got the idea for creating the robot after visiting a relative in India whose child also suffers from autism. He realised there may be a need for robot “therapists” after observing how that child responded well to technology, but struggled with human social interactions.
Sarkar’s breakthrough work comes at a critical time. It’s hoped robots like NAO will play a crucial role in dealing with the increasing number of cases of autism in the US. Today, one in 88 children is diagnosed with autism.
Zachary Warren, director of the Vanderbilt Kennedy Center Treatment and Research Institute for Autism Spectrum Disorders, says Sarkar’s research is still new, but early results suggest a promising future for robot therapy.
“This model of thinking, of using definite tools, robots and computers at critical periods of development, might be one of the big contributions of this work. It might prime children for learning complex tasks. We might have something very impactful here,” he told me.
Warren cautions that robot therapy is not designed to replace human therapists, and never could. Still, he is hopeful it will become a valuable companion tool to more traditional therapies.
He says that as the number of autism cases continues to rise, budgets for treatment will become even more constrained.
Robot therapies, he hopes, will help offset some of those costs, and help autistic children everywhere gain the social skills they need.
Iron Man 3 Tech
(This is an extract from an article in Scientific American about the technology behind Iron Man 3.)
In real-life science, sensory feedback increases learning for brain-machine interfaces. In 2010, Aaron Suminski, Nicholas Hatsopoulos, and colleagues at the University of Chicago used a “sleeve” placed over a monkey’s arm to help the animal learn how to move a cursor on a computer screen driven by recorded activity in its motor cortex. Using visual and somatosensory feedback together, the monkeys learned how to control the cursor much faster and more accurately than without those sensations.
In 2011, a research team at the Duke University Center for Neuroengineering headed by Miguel Nicolelis, a pioneer and leader in the area of brain machine interface, trained two monkeys using brain activity to control and move a virtual hand.
The critical piece in this experiment - and a requirement for functional training with the fictional Iron Man exoskeleton - was that electrical activation in both the sensory and motor parts of the brain was used. Motor signals were used to drive the controller, and feedback was then given directly to the brain by stimulating the sensory cortex when the monkeys made accurate movements. This advance provides patterns of electrical stimulation to the brain that mimic the sensory inputs that accompany movement.
The full article on using exoskeletons and computer-controlled feedback is here.
Rituals
Think about the last time you were about to interview for a job, speak in front of an audience, or go on a first date. To quell your nerves, chances are you spent time preparing – reading up on the company, reviewing your slides, practicing your charming patter. People facing situations that induce anxiety typically take comfort in engaging in preparatory activities, inducing a feeling of being back in control and reducing uncertainty.
While a little extra preparation seems perfectly reasonable, people also engage in seemingly less logical behaviors in such situations. Here’s one person’s description from our research:
I pound my feet strongly on the ground several times, I take several deep breaths, and I "shake" my body to remove any negative energies. I do this often before going to work, going into meetings, and at the front door before entering my house after a long day.
While we wonder what this person’s co-workers and neighbors think of their shaky acquaintance, such rituals – the symbolic behaviors we perform before, during, and after meaningful events – are surprisingly ubiquitous, across cultures and time. Rituals take an extraordinary array of shapes and forms. At times performed in communal or religious settings, at times performed in solitude; at times involving fixed, repeated sequences of actions, at other times not. People engage in rituals with the intention of achieving a wide set of desired outcomes, from reducing their anxiety to boosting their confidence, alleviating their grief to performing well in a competition – or even making it rain.
Recent research suggests that rituals may be more rational than they appear. Why? Because even simple rituals can be extremely effective. Rituals performed after experiencing losses – from loved ones to lotteries – do alleviate grief, and rituals performed before high-pressure tasks – like singing in public – do in fact reduce anxiety and increase people’s confidence. What’s more, rituals appear to benefit even people who claim not to believe that rituals work. While anthropologists have documented rituals across cultures, this earlier research has been primarily observational. Recently, a series of investigations by psychologists have revealed intriguing new results demonstrating that rituals can have a causal impact on people’s thoughts, feelings, and behaviors.
Basketball superstar Michael Jordan wore his North Carolina shorts underneath his Chicago Bulls shorts in every game; Curtis Martin of the New York Jets reads Psalm 91 before every game. And Wade Boggs, former third baseman for the Boston Red Sox, woke up at the same time each day, ate chicken before each game, took exactly 117 ground balls in practice, took batting practice at 5:17, and ran sprints at 7:17. (Boggs also wrote the Hebrew word Chai (“living”) in the dirt before each at bat. Boggs was not Jewish.) Do rituals like these actually improve performance? In one recent experiment, people received either a “lucky golf ball” or an ordinary golf ball, and then performed a golf task; in another, people performed a motor dexterity task and were either asked to simply start the game or heard the researcher say “I’ll cross fingers for you” before starting the game. The superstitious rituals enhanced people’s confidence in their abilities, motivated greater effort – and improved subsequent performance. These findings are consistent with research in sport psychology demonstrating the performance benefits of pre-performance routines, from improving attention and execution to increasing emotional stability and confidence.
Humans feel uncertain and anxious in a host of situations beyond laboratory experiments and sports – like charting new terrain. In the early twentieth century, the anthropologist Bronislaw Malinowski lived among the inhabitants of islands in the South Pacific Ocean. When residents went fishing in the turbulent, shark-infested waters beyond the coral reef, they performed specific rituals to invoke magical powers for their safety and protection. When they fished in the calm waters of a lagoon, they treated the fishing trip as an ordinary event and did not perform any rituals. Malinowski suggested that people are more likely to turn to rituals when they face situations where the outcome is important, uncertain and beyond their control – as when sharks are present.
Rituals in the face of losses such as the death of a loved one or the end of a relationship (or loss of limb from shark bite) are ubiquitous. There is such a wide variety of known mourning rituals that they can even be contradictory: crying near the dying is viewed as disruptive by Tibetan Buddhists but as a sign of respect by Catholic Latinos; Hindu rituals encourage the removal of hair during mourning, while growing hair (in the form of a beard) is the preferred ritual for Jewish males.
People perform mourning rituals in an effort to alleviate their grief – but do they work? Our research suggests they do. In one of our experiments, we asked people to recall and write about the death of a loved one or the end of a close relationship. Some also wrote about a ritual they performed after experiencing the loss:
I used to play the song by Natalie Cole “I miss you like crazy” and cry every time I heard it and thought of my mom.
I looked for all the pictures we took together during the time we dated. I then destroyed them into small pieces (even the ones I really liked!), and then burnt them in the park where we first kissed.
We found that people who wrote about engaging in a ritual reported feeling less grief than did those who only wrote about the loss.
We next examined the power of rituals in alleviating disappointment in a more mundane context: losing a lottery. We invited people into the laboratory and told them they would be part of a random drawing in which they could win $200 on the spot and leave without completing the study. To make the pain of losing even worse, we asked them to think and write about all the ways they would use the money. After the random draw, the winner got to leave, and we divided the remaining “losers” into two groups. Some people were asked to engage in the following ritual:
Step 1. Draw how you currently feel on the piece of paper on your desk for two minutes.
Step 2. Please sprinkle a pinch of salt on the paper with your drawing.
Step 3. Please tear up the piece of paper.
Step 4. Count up to ten in your head five times.
Other people simply engaged in a task (drawing how they felt) for the same amount of time. Finally, everyone answered questions about their level of grief, such as “I can’t help feeling angry and upset about the fact that I did not win the $200.” The results? Those who performed a ritual after losing in the lottery reported feeling less grief. Our results suggest that engaging in rituals mitigates grief caused by both life-changing losses (such as the death of a loved one) and more mundane ones (losing a lottery).
Rituals appear to be effective, but, given the wide variety of rituals documented by social scientists, do we know which types of rituals work best? In a recent study conducted in Brazil, researchers studied people who perform simpatias: formulaic rituals that are used for solving problems such as quitting smoking, curing asthma, and warding off bad luck. People perceive simpatias to be more effective depending on the number of steps involved, the repetition of procedures, and whether the steps are performed at a specified time. While more research is needed, these intriguing results suggest that the specific nature of rituals may be crucial in understanding when they work – and when they do not.
Despite the absence of a direct causal connection between the ritual and the desired outcome, performing rituals with the intention of producing a certain result appears to be sufficient for that result to come true. While some rituals are unlikely to be effective – knocking on wood will not bring rain – many everyday rituals make a lot of sense and are surprisingly effective.
Brain Repair
When the hippocampus, the brain’s primary learning and memory center, is damaged, complex new neural circuits — often far from the damaged site — arise to compensate for the lost function, say life scientists from UCLA and Australia who have pinpointed the regions of the brain involved in creating those alternate pathways.
The researchers found that parts of the prefrontal cortex take over when the hippocampus is disabled. Their breakthrough discovery, the first demonstration of such neural-circuit plasticity, could potentially help scientists develop new treatments for Alzheimer’s disease, stroke, and other conditions involving damage to the brain.
In the research, UCLA's Michael Fanselow and Moriel Zelikowsky, in collaboration with Bryce Vissel, a group leader of the neuroscience research program at Sydney’s Garvan Institute of Medical Research, conducted laboratory experiments with rats showing that the rodents were able to learn new tasks even after damage to the hippocampus.
While the rats needed additional training, they nonetheless learned from their experiences — a surprising finding.
“I expect that the brain probably has to be trained through experience,” said Fanselow, a professor of psychology and member of the UCLA Brain Research Institute, who was the study’s senior author. “In this case, we gave animals a problem to solve.”
After discovering the rats could, in fact, learn to solve problems, Zelikowsky, a graduate student in Fanselow’s laboratory, traveled to Australia, where she worked with Vissel to analyze the anatomy of the changes that had taken place in the rats’ brains. Their analysis identified significant functional changes in two specific regions of the prefrontal cortex.
Compensating for damage from Alzheimer’s
“Interestingly, previous studies had shown that these prefrontal cortex regions also light up in the brains of Alzheimer’s patients, suggesting that similar compensatory circuits develop in people,” Vissel said. “While it’s probable that the brains of Alzheimer’s sufferers are already compensating for damage, this discovery has significant potential for extending that compensation and improving the lives of many.”
The hippocampus, a seahorse-shaped structure where memories are formed in the brain, plays critical roles in processing, storing and recalling information. The hippocampus is highly susceptible to damage through stroke or lack of oxygen and is critically involved in Alzheimer’s disease, Fanselow said.
“Until now, we’ve been trying to figure out how to stimulate repair within the hippocampus,” he said. “Now we can see other structures stepping in and whole new brain circuits coming into being.”
Zelikowsky said she found it interesting that sub-regions in the prefrontal cortex compensated in different ways, with one sub-region — the infralimbic cortex — silencing its activity and another sub-region — the prelimbic cortex — increasing its activity.
“If we’re going to harness this kind of plasticity to help stroke victims or people with Alzheimer’s,” she said, “we first have to understand exactly how to differentially enhance and silence function, either behaviorally or pharmacologically. It’s clearly important not to enhance all areas. The brain works by silencing and activating different populations of neurons. To form memories, you have to filter out what’s important and what’s not.”
Complex behavior always involves multiple parts of the brain communicating with one another, with one region’s message affecting how another region will respond, Fanselow noted. These molecular changes produce our memories, feelings and actions.
“The brain is heavily interconnected — you can get from any neuron in the brain to any other neuron via about six synaptic connections,” he said. “So there are many alternate pathways the brain can use, but it normally doesn’t use them unless it’s forced to. Once we understand how the brain makes these decisions, then we’re in a position to encourage pathways to take over when they need to, especially in the case of brain damage.
“Behavior creates molecular changes in the brain; if we know the molecular changes we want to bring about, then we can try to facilitate those changes to occur through behavior and drug therapy,” he added. “I think that’s the best alternative we have. Future treatments are not going to be all behavioral or all pharmacological, but a combination of both.”
Why Buddha Isn’t Dead–and Psychology Still Isn’t Really a Science
(John Horgan in Scientific American)
I’ve been mulling over how I should follow up my previous post, the one with the subtle headline “Crisis in Psychiatry!” My meta-theme is that science has failed to deliver a potent theory of and therapy for the human mind. I’ve made this same point previously, notably in my 1996 Scientific American article “Why Freud Isn’t Dead” and my 1999 book The Undiscovered Mind, which was originally also titled “Why Freud Isn’t Dead.”
I was faulted for being too critical in those works, but in retrospect I probably wasn’t critical enough. My “Freud isn’t dead” argument went as follows: In spite of vicious attacks since its inception, Freudian psychoanalysis endures as a theory of and therapy for the mind, but not because it has been scientifically validated. Far from it. Psychoanalysis is arguably analogous to phlogiston, the pseudo-stuff that early chemists once thought was the basis of combustion and other chemical phenomena.
Psychoanalysis endures because science has not produced an obviously superior paradigm to replace it. If psychoanalysis is phlogiston, so are all the supposedly new-and-improved mind-paradigms proposed over the past century, including behaviorism, cognitive science, behavioral genetics, evolutionary psychology and neuroscience.
An effective mind-paradigm should produce effective treatments for mental illness, right? Countless new psychotherapies have emerged since Freud’s heyday, but studies have shown that all “talking cures” are roughly as effective (or ineffective) as one another. This is the notorious Dodo effect. (Those of you who believe, like my Scientific American colleague Ferris Jabr, that cognitive behavioral therapy represents a genuine advance in psychotherapy should check out this new study, which concludes otherwise.) Antidepressants, neuroleptics and other drugs can provide short-term relief for some sufferers of mental illness, but on balance they may do more harm than good.
Here’s how bad things have gotten. Many prominent psychologists, such as Richard Davidson, are promoting meditation as a therapy for troubled minds, even though the evidence for meditation’s benefits is flimsy. Think about that a moment. In spite of all the supposed advances of modern science, some authorities believe that the best treatment for mental disorders might be the method that Buddha taught 2,500 years ago. That’s like chemists suddenly telling us that phlogiston theory - or something even older, like the ancient belief that all matter is made of earth, fire, air and water - was right after all.
I’m often accused of being too negative, of seeing the glass of mind-science as half empty instead of half full. Actually, even describing the glass as half empty is far too generous. We don’t have a genuine science of the mind yet. The question is when, if ever, will we?
Trick Qs and False Memories
Simply asking people whether they experienced an event can trick them into later believing that it did occur, according to a neat little study just out: Susceptibility to long-term misinformation effect outside of the laboratory.
Psychologists Miriam Lommen and colleagues studied 249 Dutch soldiers who were deployed for a four-month tour of duty in Afghanistan. As part of a study into PTSD, they were given an interview at the end of the deployment asking them about their exposure to various stressful events that had occurred. However, one of the things discussed was made up – a missile attack on their base on New Year’s Eve.
At the post-test, participants were provided new information about an event that did not take place during their deployment, that is, a (harmless) missile attack at the base on New Year’s Eve.
We provided a short description of the event including some sensory details (e.g., sound of explosion, sight of gravel after the explosion). After that, participants were asked if they had experienced it…
Eight of the soldiers reported remembering this event right there in the interview. The other 241 correctly said they didn’t recall it, but seven months later, when they completed a follow-up questionnaire about their experiences in the field, 26% said they did remember the non-existent New Year’s Eve bombardment (this question had been added to an existing PTSD scale).
Susceptibility to the misinformation was correlated with having a lower IQ, and with PTSD symptom severity.
False memory effects like this one have been widely studied, but generally only in laboratory conditions. I like this study because it used a clever design to take memory misinformation into the real world, by neatly piggybacking onto another piece of research.
Also, it’s interesting (and worrying) that the false information was presented in the context of a question, not a statement. It seems that merely being asked about something can, in some cases, lead to memories of having experienced that thing.
Famous Obsessives
The man could not stand dirt. When he built his company’s first factory in Fremont, Calif., in 1984, he frequently got down on his hands and knees and looked for specks of dust on the floor as well as on all the equipment. For Steve Jobs, who was rolling out the Macintosh computer, these extreme measures were a necessity. “If we didn’t have the discipline to keep that place spotless,” the Apple co-founder later recalled, “then we weren’t going to have the discipline to keep all these machines running.” This perfectionist also hated typos. As Pam Kerwin, the marketing director at Pixar during Jobs’ hiatus from Apple, told me, “He would carefully go over every document a million times and would pick up on punctuation errors such as misplaced commas.” And if anything wasn’t just right, Jobs could throw a fit. He was a difficult and argumentative boss who had trouble relating to others. But Jobs could focus intensely on exactly what he wanted—which was to design “insanely great products”—and he doggedly pursued this obsession until the day he died. Hard work and intelligence can take you only so far. To be super successful like Jobs, you also need that X-factor, that maniacal overdrive—which often comes from being a tad mad.
For decades, scholars have made the case that mental illness can be an asset for writers and artists. In her landmark work Touched With Fire: Manic-Depressive Illness and the Artistic Temperament, Johns Hopkins psychologist Kay Jamison documented the “fine madness” that gripped dozens of prominent novelists, poets, painters, and composers. As Lord Byron wrote of his fellow bards, “We of the craft are all crazy. Some are affected by gaiety, others by melancholy, but all are more or less touched.” For the author of Don Juan, as for many of the other artsy types profiled by Jamison, the disease in question is manic depression (or bipolar disorder), but depression is also common. Sylvia Plath’s signature works—The Bell Jar and Daddy—hinge on her suicidal despair. But while most Americans now acknowledge that many famous writers were unbalanced, few realize that the movers and shakers who have built this country—CEOs like Steve Jobs—also struggled with psychiatric maladies. This misunderstanding motivated me to write my latest book, America’s Obsessives. After discussing Jobs and other contemporary figures in the prologue, I cover seven icons, including Thomas Jefferson, marketing genius Henry J Heinz, librarian Melvil Dewey, aviator Charles Lindbergh, beauty tycoon Estée Lauder, and baseball slugger Ted Williams. (Like Jobs, the Red Sox Hall of Famer was a neatness nut who used to quiz the clubhouse attendant about why he used Tide on the team’s laundry.) By picking trailblazers who toiled in different arenas - from business and politics to information technology and sports - I wanted to show how a touch of madness is perhaps the secret to rising to the top in just about any line of work.
These men and women of action did have occasional bouts with depression, but they primarily suffered (or benefited) from another form of mental illness: obsessive-compulsive personality disorder. The key features of this superachiever’s disease include a love of order, lists, rules, schedules, details, and cleanliness; people with OCPD are addicted to work, and they are control freaks who must do everything “their way.” OCPD is not to be confused with its cousin, obsessive-compulsive disorder. Those with OCD are paralyzed by thoughts that just won’t go away, while people with OCPD are inspired by them. Steve Jobs couldn’t stop designing products—when hospitalized in the ICU, he once ripped off his oxygen mask, insisting that his doctors improve its design on the double. Estée Lauder couldn’t stop touching other women’s faces. Perfect strangers would do, including those she might bump into on an elevator or a street corner. Without her beauty biz as an alibi, she might have been arrested for assault with deadly lipstick or face powder. These dynamos are hard-pressed to carve out time for anything else but their compulsions. Spouses and children typically endure long stretches of neglect. In the early 1950s, with two boys at home (today both are billionaire philanthropists), Lauder was riding the rails all over the country half the year, hawking her wares.
Obsessives hate nothing so much as taking a break to relax or reflect, and they typically do so only when felled by illness. “Home. Not well. Busy about house. Always plenty to do. Cannot well be idle and believe will rather wear out than rust out,” wrote the 35-year-old Henry Heinz in his diary in 1880, four years after starting his eponymous processed food company. Heinz’s compulsions included measuring everything in sight—he never left home without his steel tape measure, which he used on many an unsuspecting doorway—and keeping track of meaningless numbers. When traveling across the Atlantic on a steamer in 1886, he jotted down in his diary its precise dimensions as well as the number of passengers who rode in steerage class. But this love of pseudo-quantification would produce in the early 1890s one of the sturdiest slogans in American advertising history—“57 Varieties.” At the time, his company actually produced more than 60 products, but this number fetishist felt that there was something magical about sevens. By his early 50s, Heinz had already driven himself close to a complete nervous collapse on numerous occasions, and he reluctantly passed the reins of the company to his heirs. For the last two decades of his life, his children insisted that the overbearing paterfamilias chill out in a German sanatorium every summer, either at Dr. Carl von Dapper’s outfit in Bad Kissingen or Dr. Franz Dengler’s in Baden-Baden.
Melvil Dewey, whose childhood fixation with the number 10 led him to devise the Dewey Decimal Classification system, also was forced into an early retirement by his feverish pace. Dewey published the first edition of his search engine—the Google of its day, which is still in use in libraries in nearly 150 countries—in 1876, when he was only 24. For the next quarter of a century, Dewey took on a series of demanding jobs, typically juggling two or three at a time, as a librarian, businessman, and editor. He became the head of the world’s first library school, at Columbia University in 1884. According to a running joke, Dewey had a habit of dictating notes to two stenographers at the same time. In the end, it was his sexual compulsions that did him in. He was a serial sexual harasser and in 1905 was ostracized from the American Library Association, the organization that he had helped found a generation earlier, when four prominent female members of the guild filed complaints against him.
The aviator Charles Lindbergh also was an order aficionado whose oversized libido created a mess. This demanding dad saw his five children only a couple of months a year. He ruled over them and his wife, the best-selling author Anne Morrow Lindbergh, not with an iron fist but with ironclad lists. He kept track of each child’s infractions, which included such innocuous activities as gum-chewing. And he insisted that Anne track all her household expenditures, including every 15 cents spent for rubber bands, in copious account books. After Lindbergh turned 50, feeding his sex addiction became his full-time job; for the rest of his life, he was constantly flying around the world to visit his three German “wives,” longtime mistresses with whom he fathered seven children, and to hook up with various other flings.
Remarkably, though these obsessive icons were all awash in neurotic tics, there has been no shortage of hagiographers who idealize their every move. Of Heinz’s penchant for collecting seemingly random numbers, one biographer has observed that he “enthusiastically wrote down in his diary the statistics that one must know and record on such an occasion.” Another saw in Heinz’s factoid-finding a reason to compare him to “a scientist such as Thomas Edison.” The author of the first biography of Dewey made the laughable claim that “there was no psycho-neurosis in [him].” Even today, some still agree with what New York Gov. Al Smith said about Lindbergh soon after his legendary flight to Paris: “He represents to us … all that we wish—a young American at his best.” We Americans like our heroes and do not easily let them go. By pointing out the character flaws in our superachievers, I do not intend to diminish the greatness of their achievements. Instead I aim to show exactly how they managed to pull them off. And more often than not, it was with a touch of madness.
Your Inner Chimp
The central idea, now familiar to most British Olympians, the stars of Liverpool Football Club and Ronnie O’Sullivan, the mercurial snooker player, is that there is a chimp in your brain. The chimp is not exactly you. It is your primitive self. It has emotions, reacts quickly and impulsively, and is not logical in its thinking. It jumps to conclusions and makes assumptions.
The key to success — in life as well as in sport — is to be able to recognise the behaviour of this chimp and to manage it with the logical part of the brain. As Dr Steve Peters, the psychiatrist who invented the model and has written a bestselling book on the subject, puts it: “You have to put the chimp back in its box.”
Peters is a very likeable Northerner. Within moments of meeting him at Google HQ in London, I can see why he has built such a strong rapport with Sir Chris Hoy, Craig Bellamy and the like. He has a bright face, a commonsense style and a way of making you think you are, for the time he is with you, at the centre of his universe.
Peters is also in demand. His chimp framework has made him one of the most influential people in sport. Peters is a psychiatrist — he trained as a doctor before taking up various hospital posts — but his principal work today is in the psychology of success. He wants people to maximise their potential and thinks that he can help them to do just that.
“Ronnie O’Sullivan has been very open about his work with me,” Peters says. “When he came to see me, the problem was that his chimp would try to sabotage him with anxious thoughts. This is what the chimp does. It frets about losing, or about not being able to pot the ball, and about how awful it would be not to win.
“But I explained to Ronnie that these thoughts were not coming from him. They were coming from his chimp. The key was for him to box his chimp. The rational part of his brain was perfectly able to recognise that losing doesn’t define him.
“Snooker is not the be-all and end-all. Once he recognised this, and talked the chimp down, he could play without the negative emotional baggage.”
Peters’s methods have had strong results. O’Sullivan has won the World Championship twice since starting to work with him in 2011. Peters has also been central to the success of the Great Britain track cycling team, Team Sky and other Olympic sports, such as taekwondo and canoeing. He has been working with Liverpool since 2012.
“The key is to understand that everyone is an individual,” Peters says. “We all have individual personalities and individual chimps. The way to box the chimp will vary from person to person, and from situation to situation. Sometimes a chimp needs to be reasoned with; sometimes it needs to be confronted. My job is to train people in the most effective method to use.”
Peters does not only help athletes with particular problems; he also helps athletes to use their psychological tools more effectively. “Some people are already very robust emotionally when they come to see me,” he says. “Chris Hoy, for example, was never unstable. There was a report in the press about him having panic attacks. I can tell you that never happened. He has never been on Valium in his life.
“What he did when he came to me was to say, ‘I am in a great place mentally and physically, but I can get an advantage in physical terms by working with a specialist in strength and conditioning. And I believe I can get an advantage in emotional terms by working with someone who can help me understand how my mind works.’ There was no dysfunction that I had to sort out; I was just adding to what he already had.”
Much of Peters’s work is to help athletes to gain a sense of perspective. The danger when walking out into an Olympic final is that an athlete will become overwhelmed with fear, or panic, or the yearning to run away. It is sometimes called the fight, flight, freeze response. The key to unlocking peak performance is to banish this “chimp-like” reaction by introducing rationality.
As Peters puts it: “It is vital to remember that sport is just sport. You won’t die if you lose. You will still be you. Victoria Pendleton once said to me, ‘All I do is go round and round in a circle.’ That is a great perspective to have because once you realise that sport is nonsense, you can give it everything without fear. I am not promoting being laid-back; I’m promoting perspective.”
Are there any downsides of perspective, I wonder. It may be very useful when one is about to go into competition and might otherwise be seized by panic, but what about at the beginning of an Olympic cycle? Why would someone want to commit to four years of professional sport when she has just convinced herself that it is “nonsense”?
“It is possible to want to win the medal, to commit to winning the medal, while still recognising that the sport is, in a sense, trivial,” Peters says. “Some people may find that a difficult balance to strike, but others do it very well. It is all about the individual.”
Occasionally, Peters has worked with athletes who just do not want to commit. The problem is not a reluctant chimp, keen to stay in a warm bed rather than do an early-morning run, but something more profound. You might call it a rational decision to say: “Sport isn’t worth it.”
“I have worked with two athletes who said, ‘I don’t want to do this,’ ” Peters says. “They were both great athletes who could have medalled. But they walked away, and I agreed with their decision. One wrote to me two years later and said, ‘Thank you for giving me my life back. I don’t want the medal and I don’t want the lifestyle. I have made no error of judgment.’ To me, that is fantastic. I don’t want to force people to do something they don’t want to.”
Spock vs Kirk
Good myths turn on simple pairs— God and Lucifer, Sun and Moon, Jerry and George—and so an author who makes a vital duo is rewarded with a long-lived audience. No one in 1900 would have thought it possible that a century later more people would read Conan Doyle’s Holmes and Watson stories than anything of George Meredith’s, but we do. And so Gene Roddenberry’s “Star Trek,” despite the silly plots and the cardboard-seeming sets, persists in its many versions because it captures a deep and abiding divide. Mr. Spock speaks for the rational, analytic self who assumes that the mind is a mechanism and that everything it does is logical, Captain Kirk for the belief that what governs our life is not only irrational but inexplicable, and the better for being so. The division has had new energy in our time: we care most about a person who is like a thinking machine at a moment when we have begun to have machines that think. Captain Kirk, meanwhile, is not only a Romantic, like so many other heroes, but a Romantic on a starship in a vacuum in deep space. When your entire body is every day dissolved, reënergized, and sent down to a new planet, and you still believe in the ineffable human spirit, you have really earned the right to be a soul man.
Writers on the brain and the mind tend to divide into Spocks and Kirks, either embracing the idea that consciousness can be located in a web of brain tissue or debunking it. For the past decade, at least, the Spocks have been running the Enterprise: there are books on your brain and music, books on your brain and storytelling, books that tell you why your brain makes you want to join the Army, and books that explain why you wish that Bar Refaeli were in the barracks with you. The neurological turn has become what the “cultural” turn was a few decades ago: the all-purpose non-explanation explanation of everything. Thirty years ago, you could feel loftily significant by attaching the word “culture” to anything you wanted to inspect: we didn’t live in a violent country, we lived in a “culture of violence”; we didn’t have sharp political differences, we lived in a “culture of complaint”; and so on. In those days, Time, taking up the American pursuit of pleasure, praised Christopher Lasch’s “The Culture of Narcissism”; now Time has a cover story on happiness and asks whether we are “hardwired” to pursue it.
Myths depend on balance, on preserving their eternal twoness, and so we have on our hands a sudden and severe Kirkist backlash. A series of new books all present watch-and-ward arguments designed to show that brain science promises much and delivers little. They include “A Skeptic’s Guide to the Mind” (St. Martin’s), by Robert A. Burton; “Brainwashed: The Seductive Appeal of Mindless Neuroscience” (Basic), by Sally Satel and Scott O. Lilienfeld; and “Neuro: The New Brain Sciences and the Management of the Mind” (Princeton), by the sociologist Nikolas Rose and the historian of science Joelle M. Abi-Rached.
“Bumpology” is what the skeptical wit Sydney Smith, writing in the eighteen-twenties, called phrenology, the belief that the shape of your skull was a map of your mind. His contemporary heirs rehearse, a little mordantly, failed bits of Bumpology that indeed seem more like phrenology than like real psychology. There was the left-right brain split, which insisted on a far neater break within our heads (Spock bits to the left, Kirk bits to the right) than is now believed to exist. The skeptics revisit the literature on “mirror neurons,” which become excited in the frontal lobes of macaque monkeys when the monkeys imitate researchers, and have been used to explain the origins of human empathy and sociability. There’s no proof that social-minded Homo sapiens has mirror neurons, while the monkeys who certainly do are not particularly social. (And, if those neurons are standard issue, then they can’t be very explanatory of what we mean by empathy: Bernie Madoff would have as many as Nelson Mandela.)
It turns out, in any case, that it’s very rare for any mental activity to be situated tidily in one network of neurons, much less one bit of the brain. When you think you’ve located a function in one part of the brain, you will soon find that it has skipped town, like a bail jumper. And all of the neuro-skeptics argue for the plasticity of our neural networks. We learn and shape our neurology as much as we inherit it. Our selves shape our brains at least as much as our brains our selves.
Each author, though, has a polemical project, something to put in place of mere Bumpology. (People who write books on indoor plumbing seldom feel obliged to rival Vitruvius as theorists of architecture, but it seems that no one can write about one neuron without explaining all thought.) “Brainwashed” is nervously libertarian; Satel is a scholar at the American Enterprise Institute, and she and Lilienfeld are worried that neuroscience will shift wrongdoing from the responsible individual to his irresponsible brain, allowing crooks to cite neuroscience in order to get away with crimes. This concern seems overwrought; copping a plea via neuroscience is not a significant social problem. Burton, a retired medical neurologist, seems anxious to prove himself a philosopher, and races through a series of arguments about free will and determinism to conclude that neuroscience doesn’t yet know enough and never will. Minds give us the illusion of existing as fixed, orderly causal devices, when in fact they aren’t. Looking at our minds with our minds is like writing a book about hallucinations while on LSD: you can’t tell the perceptual evidence from your own inner state. “The mind is and will always be a mystery,” Burton insists. Maybe so, and yet we’re perfectly capable of probing flawed equipment with flawed equipment: we know that our eyes have blind spots, even as we look at the evidence with them, and we understand all about the dog whistles we can’t hear. Since in the past twenty-five years alone we’ve learned a tremendous amount about minds, it’s hard to share the extent of his skepticism. Psychology is an imperfect science, but it’s a science.
In “Neuro,” Rose and Abi-Rached see the real problem: neuroscience can often answer the obvious questions but rarely the interesting ones. It can tell us how our minds are made to hear music, and how groups of notes provoke neural connections, but not why Mozart is more profound than Manilow. Courageously, they take on, and dismiss, the famous experiments by Benjamin Libet that seem to undermine the idea of free will. For a muscle movement, Libet showed, the brain begins “firing”—choosing, let’s say, the left joystick rather than the right—milliseconds before the subject knows any choice has been made, so that by the time we think we’re making a choice the brain has already made it. Rose and Abi-Rached are persuasively skeptical that “this tells us anything about the exercise of human will in any of the naturally occurring situations where individuals believe they have made a conscious choice—to take a holiday, choose a restaurant, apply for a job.” What we mean by “free will” in human social practice is just a different thing from what we might mean by it in a narrower neurological sense. We can’t find a disproof of free will in the indifference of our neurons, any more than we can find proof of it in the indeterminacy of the atoms they’re made of.
A core objection is that neuroscientific “explanations” of behavior often simply re-state what’s already obvious. Neuro-enthusiasts are always declaring that an MRI of the brain in action demonstrates that some mental state is not just happening but is really, truly, is-so happening. We’ll be informed, say, that when a teen-age boy leafs through the Sports Illustrated swimsuit issue, areas in his brain associated with sexual desire light up. Yet asserting that an emotion is really real because you can somehow see it happening in the brain adds nothing to our understanding. Any thought, from Kiss the baby! to Kill the Jews!, must have some route through the brain. If you couldn’t locate the emotion, or watch it light up in your brain, you’d still be feeling it. Just because you can’t see it doesn’t mean you don’t have it. Satel and Lilienfeld like the term “neuroredundancy” to “denote things we already knew without brain scanning,” mockingly citing a researcher who insists that “brain imaging tells us that post-traumatic stress disorder (PTSD) is a ‘real disorder.’ ” The brain scan, like the word “wired,” adds a false gloss of scientific certainty to what we already thought. As with the old invocation of “culture,” it’s intended simply as an intensifier of the obvious.
Phrenology, the original Bumpology, at least had the virtue of getting people to think about “cortical location,” imagining, for the first time, that the brain might indeed be mapped into areas. Bumpology brought a material order, however factitious, to a metaphysical subject. In the same way, even the neuro-skeptics seem to agree that modern Bumpology remains an important corrective to radical anti-Bumpology: to the kind of thinking that insists that brains don’t count at all and cultures construct everything; that, given the right circumstances, there could be a human group with six or seven distinct genders, each with its own sexuality; that there is a possible human society in which very old people would be regarded as attractive and nubile eighteen-year-olds not; and still another where adolescent children would be noted for their rigorous desire to finish recently commenced tasks. How impressive you find modern pop Bumpology depends in part on whether you believe that there are a lot of people who still think like that.
For all the exasperations of neurotautology, there’s a basic, arresting truth to neo-Bumpology. In a new, belligerently pro-neuro book, “The Anatomy of Violence: The Biological Roots of Crime” (Pantheon), Adrian Raine, a psychology professor at the University of Pennsylvania, discusses a well-studied case in which the stepfather of an adolescent girl, with no history of pedophilia, began to obsess over child pornography and then to molest his stepdaughter. He was arrested, arraigned, and convicted. Then it emerged that he had a tumor, pressing on the piece of the brain associated with social and sexual inhibitions. When it was removed, the wayward desires vanished. Months of normality ensued, until the tumor began to grow back and, with it, the urges.
Now, there probably is no precise connection between the bit of the brain the tumor pressed on and child lust. The same bit of meat-matter pressing on the same bit of brain in some other head might have produced some other transgression—in the head of a Lubavitcher, say, a mad desire to eat prosciutto. But it would still be true that what we think of as traditionally deep matters of guilt and temptation and taboo, the material of Sophocles and Freud, can be flicked on and off just by physical pressure. You have to respect the power of the meat to change the morals so neatly.
In one sense, this is more neuro-redundancy; in another, it is a demonstration of the power of the meat. Charting a path between these two truths is the philosopher Patricia S. Churchland’s project in “Touching a Nerve: The Self as Brain” (Norton), a limited defense of the centrality of neuro. She is rightly contemptuous of the invocation of “scientism” to dismiss the importance of neuroscience to philosophy, seeing that resistance as identical to the Inquisition’s resistance to Galileo, or the seventeenth century’s to Harvey’s discovery of the pumping heart:
This is the familiar strategy of let’s pretend. Let’s believe what we prefer to believe. But like the rejection of the discovery that Earth revolves around the sun, the let’s pretend strategy regarding the heart could not endure very long. . . . Students reading the history of this period may be as dumbfounded regarding our resistance to brain science as we are now regarding the seventeenth-century resistance to the discovery that the heart is a meat pump.
Humanism not only has survived each of these sequential demystifications; they have made it stronger by demonstrating the power of rational inquiry on which humanism depends. Every time the world becomes less mysterious, nature becomes less frightening, and the power of the mind to grasp reality more sure. A constant reduction of mystery to matter, a belief that we can name natural rules we didn’t make—that isn’t scientism. That’s science.
Yet Churchland also makes beautifully clear how complex and contingent the simplest brain business is. She discusses whether the hormone testosterone makes men angry. The answer even to that off-on question is anything but straightforward. Testosterone counts for a lot in making men mad, but so does the “stress” hormone cortisol, along with the “neuromodulator” serotonin, which affects whether the aggression is impulsive or premeditated, and the balance between all these things is affected by “other hormones, other neuromodulators, age and environment.”
So this question, like any other about neurology, turns out to be both simply mechanical and monstrously complex. Yes, a hormone does wash through men’s brains and makes them get mad. But there’s a lot more turning on than just the hormone. For a better analogy to the way your neurons and brain chemistry run your mind, you might think about the way the light switch runs the lights in your living room. It’s true that the light switch in the corner turns the lights on in the living room. Nor is that a trivial observation. How the light switch gets wired to the bulb, how the bulb got engineered to be luminous—all that is an almost miraculously complex consequence of human ingenuity. But at the same time the light switch on the living-room wall is merely the last stage in a long line of complex events that involve waterfalls and hydropower and surge protectors and thousands of miles of cables and power grids. To say the light switch turns on the living-room light is both true—vitally true, if you don’t want to bang your shins on the sofa sneaking home in the middle of the night—and wildly misleading.
It’s perfectly possible, in other words, to have an explanation that is at once trivial and profound, depending on what kind of question you’re asking. The strength of neuroscience, Churchland suggests, lies not so much in what it explains as in the older explanations it dissolves. She gives a lovely example of the panic that we feel in dreams when our legs refuse to move as we flee the monster. This turns out to be a straightforward neurological phenomenon: when we’re asleep, we turn off our motor controls, but when we dream we still send out signals to them. We really are trying to run, and can’t. If you feel this, and also have the not infrequent problem of being unable to distinguish waking and dreaming states, you might think that you have been paralyzed and kidnapped by aliens.
There are no aliens; there is not even a Freudian wave of guilt driving the monster. It’s just those neuromotor neurons, making the earth sticky. The best thing for people who have recurrent nightmares of this kind is to get more REM sleep. “Get more sleep,” Churchland remarks. “It works.” Neurology should provide us not with sudden explanatory power but with a sense of relief from either taking too much responsibility for, or being too passive about, what happens to us. Autism is a wiring problem, not a result of “refrigerator mothers.” Schizophrenia isn’t curable yet, but it looks more likely to be cured by getting the brain chemistry right than by finding out what traumatized Gregory Peck in his childhood. Neuroscience can’t rob us of responsibility for our actions, but it can relieve us of guilt for simply being human. We are in better shape in our mental breakdowns if we understand the brain breakdowns that help cause them. This is a point that Satel and Lilienfeld, in their eagerness to support a libertarian view of the self as a free chooser, get wrong. They observe of one “brilliant and tormented” alcoholic that she, not her heavy drinking, was responsible for her problems. But, if we could treat the brain circuitry that processes the heavy drinking, we might very well leave her just as brilliant and tormented as ever, only not a drunk. (A Band-Aid, as every parent knows, is an excellent cure whenever it’s possible to use one.)
The really curious thing about minds and brains is that the truth about them lies not somewhere in the middle but simultaneously on both extremes. We know already that the wet bits of the brain change the moods of the mind: that’s why a lot of champagne gets sold on Valentine’s Day. On the other hand, if the mind were not a high-level symbol-managing device, flower sales would not also rise on Valentine’s Day. Philosophy may someday dissolve into psychology and psychology into neurology, but since the lesson of neuro is that thoughts change brains as much as brains thoughts, the reduction may not reduce much that matters. As Montaigne wrote, we are always double in ourselves. Or, as they say on the Enterprise, it takes all kinds to run a starship.
A Logical Problem
The Monty Hall problem - where a contestant has to pick one of three boxes - left readers scratching their heads. Why does this probability scenario hurt everyone's brain so much, asks maths lecturer Dr John Moriarty.
Imagine Deal or No Deal with only three sealed red boxes.
The three cash prizes, one randomly inserted into each box, are 50p, £1 and £10,000. You pick a box, let's say box two, and the dreaded telephone rings.
The Banker tempts you with an offer but this one is unusual. Box three is opened in front of you revealing the £1 prize, and he offers you the chance to change your mind and choose box one. Does switching improve your chances of winning the £10,000?
Each year at my university we hold open days for hordes of keen A-level students. We want to sell them a place on our mathematics degree, and I unashamedly have an ulterior motive - to excite the best students about probability using this problem, usually referred to as the Monty Hall Problem.
This mind-melter was alluded to in an AL Kennedy piece on change this week and dates back to Steve Selvin in 1975 when it was published in the academic journal American Statistician.
It imagines a TV game show not unlike Deal or No Deal in which you choose one of three closed doors and win whatever is behind it.
One door conceals a Cadillac - behind the other two doors are goats. The game show host, Monty Hall (of Let's Make a Deal fame), knows where the Cadillac is and opens one of the doors that you did not choose. You are duly greeted by a goat, and then offered the chance to switch your choice to the other remaining door.
Most people will think that with two choices remaining and one Cadillac, the chances are 50-50.
The most eloquent reasoning I could find is from Emerson Kamarose of San Jose, California (from the Chicago Reader's Straight Dope column in 1991): "As any fool can plainly see, when the game-show host opens a door you did not pick and then gives you a chance to change your pick, he is starting a new game. It makes no difference whether you stay or switch, the odds are 50-50."
But the inconvenient truth here is that it's not 50-50 - in fact, switching doubles your chances of winning. Why?
Pink Cadillac and a goat
Let's not get confused by the assumptions. To be clear, Monty Hall knows the location of the prize, he always opens a different door from the one you chose, and he will only open a door that does not conceal the prize.
For the purists, we also assume that you prefer Cadillacs to goats. There is a beautiful logical point here and, as the peddler of probability, I really don't want you to miss it.
In the game you will either stick or switch. If you stick with your first choice, you will end up with the Caddy if and only if you initially picked the door concealing the car. If you switch, you will win that beautiful automobile if and only if you initially picked one of the two doors with goats behind them.
If you can accept this logic then you're home and dry, because working out the odds is now as easy as pie - sticking succeeds 1/3 of the time, while switching works 2/3 of the time.
Kamarose was wrong because he fell for the deception - after opening the door, the host is not starting a new 50-50 game. The actions of the host have already stacked the odds in favour of switching.
The mistake is to think that two choices always means a 50-50 chance. Still not convinced? You are in good company. The paradox of the Monty Hall Problem has been incredibly powerful, busting the brains of scientists since 1975.
In 1990 the problem and a solution were published in Parade magazine in the US, generating thousands of furious responses from readers, many with distinguished scientific credentials.
Part of the difficulty was that, as usual, there was fault on both sides as the published solution was arguably unclear in stating its assumptions. Subtly changing the assumptions can change the conclusion, and as a result this topic has attracted sustained interest from mathematicians and riddlers alike.
Even Paul Erdos, an eccentric and brilliant Hungarian mathematician and one-time guest lecturer at Manchester, was taken in.
So what happens on our university's open days? We do a Monty Hall flash mob. The students split into hosts and contestants and pair up. While the hosts set up the game, half the contestants are asked to stick and the other half to switch.
The switchers are normally roughly twice as successful. Last time we had 60 pairs: in 30 of them the contestants always stuck, and in the other 30 they always switched:
Among the 30 switcher contestants, the Cadillac was won 18 times out of 30 - a strike rate of 60%
Among the 30 sticker contestants, there were 11 successes out of 30 - a strike rate of about 37%
So switching proved to be nearly twice as successful in our rough and ready experiment and I breathed a sigh of relief.
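For readers who would rather run the experiment than trust the argument, here is a minimal simulation sketch in Python. The trial count and door labels are arbitrary choices of mine, not part of the original puzzle, and it assumes the rules stated above: the host knows where the prize is, always opens a door you did not pick, and never reveals the prize.

    import random

    def play(switch, trials=100_000):
        """Play the three-door game many times; return the fraction of wins."""
        wins = 0
        for _ in range(trials):
            prize = random.randrange(3)    # door hiding the Cadillac
            choice = random.randrange(3)   # contestant's first pick
            # Host opens a door that is neither the prize nor the pick.
            opened = random.choice([d for d in range(3) if d not in (prize, choice)])
            if switch:
                # Move to the one door that is neither picked nor opened.
                choice = next(d for d in range(3) if d not in (choice, opened))
            wins += (choice == prize)
        return wins / trials

    print("stick :", play(switch=False))   # comes out near 1/3
    print("switch:", play(switch=True))    # comes out near 2/3

Run it a few times and the stick/switch split settles close to the 1/3 and 2/3 that the logic predicts.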
Beliefs Always Trump Facts
Yale law school professor Dan Kahan's new research paper is called "Motivated Numeracy and Enlightened Self-Government," but for me a better title is the headline on science writer Chris Mooney's piece about it in Grist: "Science Confirms: Politics Wrecks Your Ability to Do Math."
Kahan conducted some ingenious experiments about the impact of political passion on people's ability to think clearly. His conclusion, in Mooney's words: partisanship "can even undermine our very basic reasoning skills.... [People] who are otherwise very good at math may totally flunk a problem that they would otherwise probably be able to solve, simply because giving the right answer goes against their political beliefs."
In other words, say goodnight to the dream that education, journalism, scientific evidence, media literacy or reason can provide the tools and information that people need in order to make good decisions. It turns out that in the public realm, a lack of information isn't the real problem. The hurdle is how our minds work, no matter how smart we think we are. We want to believe we're rational, but reason turns out to be the ex post facto way we rationalize what our emotions already want to believe.
For years my go-to source for downer studies of how our hard-wiring makes democracy hopeless has been Brendan Nyhan, an assistant professor of government at Dartmouth.
Nyhan and his collaborators have been running experiments trying to answer this terrifying question about American voters: Do facts matter?
The answer, basically, is no. When people are misinformed, giving them facts to correct those errors only makes them cling to their beliefs more tenaciously.
Here's some of what Nyhan found:
People who thought WMDs were found in Iraq believed that misinformation even more strongly when they were shown a news story correcting it.
People who thought George W. Bush banned all stem cell research kept thinking he did that even after they were shown an article saying that only some federally funded stem cell work was stopped.
People who said the economy was the most important issue to them, and who disapproved of Obama's economic record, were shown a graph of nonfarm employment over the prior year - a rising line, adding about a million jobs. They were asked whether the number of people with jobs had gone up, down or stayed about the same. Many, looking straight at the graph, said down.
But if, before they were shown the graph, they were asked to write a few sentences about an experience that made them feel good about themselves, a significant number of them changed their minds about the economy. If you spend a few minutes affirming your self-worth, you're more likely to say that the number of jobs increased.
In Kahan's experiment, some people were asked to interpret a table of numbers about whether a skin cream reduced rashes, and some people were asked to interpret a different table -- containing the same numbers -- about whether a law banning private citizens from carrying concealed handguns reduced crime.
Kahan found that when the numbers in the table conflicted with people's positions on gun control, they couldn't do the math right, though they could when the subject was skin cream. The bleakest finding was that the more advanced that people's math skills were, the more likely it was that their political views, whether liberal or conservative, made them less able to solve the math problem.
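To see why such a table is easy to misread, here is a small sketch in Python with invented counts (these are not Kahan's actual figures). The trap is to compare raw numbers across groups, when the task actually asks for the proportion of patients who improved in each group.

    # Invented 2x2 counts, chosen so the raw numbers point one way
    # and the proportions point the other.
    cream = {"improved": 200, "worse": 80}      # patients who used the skin cream
    no_cream = {"improved": 90, "worse": 30}    # patients who did not

    def improvement_rate(group):
        return group["improved"] / (group["improved"] + group["worse"])

    print(f"cream:    {improvement_rate(cream):.1%}")     # 71.4%
    print(f"no cream: {improvement_rate(no_cream):.1%}")  # 75.0%
    # More patients improved with the cream in absolute terms, yet the
    # improvement rate is lower -- the comparison the task requires.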
I hate what this implies -- not only about gun control, but also about other contentious issues, like climate change. I'm not completely ready to give up on the idea that disputes over facts can be resolved by evidence, but you have to admit that things aren't looking so good for reason. I keep hoping that one more photo of an iceberg the size of Manhattan calving off of Greenland, one more stretch of record-breaking heat and drought and fires, one more graph of how atmospheric carbon dioxide has risen in the past century, will do the trick. But what these studies of how our minds work suggest is that the political judgments we've already made are impervious to facts that contradict us.
Maybe climate change denial isn't the right term; it implies a psychological disorder. Denial is business-as-usual for our brains. More and better facts don't turn low-information voters into well-equipped citizens. They just make them more committed to their misperceptions. In the entire history of the universe, no Fox News viewers ever changed their minds because some new data upended their thinking. When there's a conflict between partisan beliefs and plain evidence, it's the beliefs that win. The power of emotion over reason isn't a bug in our human operating systems, it's a feature.
The Peanut Butter Test For Alzheimers
You may not have heard of "the peanut butter test," but it could become a fantastically low-cost and non-invasive way to test for Alzheimer's. After all, what's less invasive than asking someone to smell some delicious peanut butter?
"The ability to smell is associated with the first cranial nerve and is often one of the first things to be affected in cognitive decline," reads this release from the University of Florida, researchers from which conducted the experiment. But with Alzheimer's patients, the sense of smell is affected in a very particular way: The left nostril is significantly more impaired than the right. Weird! But true.
The experiment involved capping one nostril and measuring the distance at which the patient could detect about a tablespoon of peanut butter. In Alzheimer's patients, the left nostril was impaired so thoroughly that, on average, it had 10 centimeters less range than the right, in terms of odor detection. That's specific to Alzheimer's patients; neither control patients (those not suffering from cognitive decline) nor those with other types of cognitive impairment (such as other forms of dementia) demonstrated that nostril difference.
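As a rough sketch of the arithmetic involved, the screening comes down to comparing detection distances for the two nostrils. The measurements and the cut-off below are hypothetical, for illustration only; they are not clinical criteria taken from the study.

    def left_right_gap(left_cm, right_cm):
        """Difference in odor-detection distance, right nostril minus left."""
        return right_cm - left_cm

    # Hypothetical measurements, one nostril occluded at a time.
    left_cm, right_cm = 5.0, 17.0
    FLAG_GAP_CM = 10.0   # illustrative threshold only

    gap = left_right_gap(left_cm, right_cm)
    if gap >= FLAG_GAP_CM:
        print(f"Left nostril is {gap:.0f} cm weaker: flag for further assessment.")
    else:
        print("No marked left-right asymmetry detected.")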
Peanut butter was used because it's a so-called "pure odorant." Generally our sense of smell actually incorporates two distinct sensations: the olfactory sense, or smell, as well as a trigeminal sense, which is like a more physical burning or stinging sort of sense. Peanut butter has no trigeminal element; it's only olfactory, which makes it ideal for testing, as the link to Alzheimer's is specifically dealing with the olfactory sense.
This could be a great, inexpensive early-warning system for those with Alzheimer's; the illness is not easy to detect, requiring mental and neurological examinations carried out by a professional. The peanut butter test? Much easier.
Climate Change Persuasion
WHEN scholars of the future write the history of climate change, they may look to early 2008 as a pivotal moment. Al Gore's film An Inconvenient Truth was bringing the science to the masses. The economist Nicholas Stern had made the financial case for tackling the problem sooner rather than later. And the Intergovernmental Panel on Climate Change (IPCC) had just issued its most unequivocal report yet on the link between human activity and climatic change.
The scientific and economic cases were made. Surely with all those facts on the table, soaring public interest and ambitious political action were inevitable?
The exact opposite happened. Fast-forward to today, the eve of the IPCC's latest report on the state of climate science, and it is clear that public concern and political enthusiasm have not kept up with the science. Apathy, lack of interest and even outright denial are more widespread than they were in 2008.
How did the rational arguments of science and economics fail to win the day? There are many reasons, but an important one concerns human nature.
Through a growing body of psychological research, we know that scaring or shaming people into sustainable behaviour is likely to backfire. We know that it is difficult to overcome the psychological distance between the concept of climate change – not here, not now – and people's everyday lives. We know that beliefs about the climate are influenced by extreme and even daily weather.
One of the most striking findings is that concern about climate change is not only, or even mostly, a product of how much people know about science. Increased knowledge tends to harden existing opinions (Nature Climate Change, vol 2, p 732).
These findings, and many more, are increasingly available to campaigners and science communicators, but it is not clear that lessons are being learned. In particular, there is a great deal of resistance towards the idea that communicating climate change requires more than explaining the science.
The IPCC report, due out on 27 September, will provide communicators with plenty of factual ammunition. It will inevitably be attacked by climate deniers. In response, rebuttals, debunkings and counter-arguments will pour forth, as fighting denial has become a cottage industry in itself.
None of it will make any real difference. This is for the simple reason that the argument is not really about the science; it is about politics and values.
Consider, for example, the finding that people with politically conservative beliefs are more likely to doubt the reality or seriousness of climate change. Accurate information about climate change is no less readily available to these people than anybody else. But climate policies such as the regulation of industrial emissions often seem to clash with conservative political views. And people work backwards from their values, filtering the facts according to their pre-existing beliefs.
Research has shown that people who endorse free-market economic principles become less hostile when they are presented with policy responses which do not seem to be as threatening to their world view, such as geoengineering. Climate change communicators must understand that debates about the science are often simply a proxy for these more fundamental disagreements.
Some will argue that climate change discourse has become so polluted by politics that we can't see the scientific woods for the political trees. Why should science communicators get their hands dirty with politics? But the solution is not to scream ever louder at people that the woods are there if only they would look properly. A much better, and more empirically supported, answer is to start with those trees. The way to engage the public on climate change is to find ways of making it resonate more effectively with the values that people hold.
My colleagues and I argued in a recent report for the Climate Outreach and Information Network that there is no inherent contradiction between conservative values and engaging with climate change science. But hostility has grown because climate change has become associated with left-wing ideas and language.
If communicators were to start with ideas that resonated more powerfully with the right – the beauty of the local environment, or the need to enhance energy security – the conversation about climate change would likely flow much more easily.
Similarly, a recent report from the Understanding Risk group at Cardiff University in the UK showed there are some core values that underpin views about the country's energy system. Whether wind farms or nuclear power, the public judges energy technologies by a set of underlying values – including fairness, avoiding wastefulness and affordability. If a technology is seen as embodying these, it is likely to be approved of. Again, it is human values, more than science and technology, which shape public perceptions.
Accepting this is a challenge for those seeking to communicate climate science. Too often, they assume that the facts will speak for themselves – ignoring the research that reveals how real people respond. That is a pretty unscientific way of going about science communication.
The challenge when the IPCC report appears, then, is not to simply crank up the volume on the facts. Instead, we must use the report as the beginning of a series of conversations about climate change – conversations that start from people's values and work back from there to the science.
Alzheimers and T2 Diabetes
ALZHEIMER’S, the devastating neurological disease affecting 500,000 Britons, may actually be the late stages of type 2 (T2) diabetes, say scientists.
They have found that the extra insulin produced by those with T2 diabetes also gets into the brain, disrupting its chemistry.
Eventually it leads to the formation of toxic clumps of amyloid proteins that poison brain cells.
“The discovery could explain why people who develop T2 diabetes often show sharp declines in cognitive function, with an estimated 70% developing Alzheimer’s — far more than in the rest of the population,” said Ewan McNay, a Briton whose research at Albany University in New York State was co-sponsored by the American Diabetes Association.
“People who develop diabetes have to realise this is about more than controlling their weight or diet. It’s also the first step on the road to cognitive decline.
“At first they won’t be able to keep up with their kids playing games, but in 30 years’ time they may not even recognise them.”
In Britain about 2.5m people have T2 diabetes — the National Diabetes Audit, published in October, showed that about 80% were overweight or obese.
The sharply elevated risk of Alzheimer’s disease in T2 diabetics has been known for a long time. However, since relatively few obese people have tended to survive into old age, the effects have had less attention and are not widely known among the public and GPs.
Now, however, better treatments mean a sharp improvement in the survival rates of people with T2 diabetes — meaning there is likely to be a surge in Alzheimer’s cases.
McNay’s research was aimed at uncovering the mechanism by which T2 diabetes might cause Alzheimer’s.
He fed rats on a high-fat diet to induce T2 diabetes and then carried out memory tests, showing that the animals’ cognitive skills deteriorated rapidly as the disease progressed.
An examination of their brains showed clumps of amyloid protein had formed, of the kind found in the brains of Alzheimer’s patients. McNay confirmed that these clumps were linked with cognitive decline by injecting a second batch of diabetic rats with drugs that dissolved the amyloid clumps — whereupon they regained their lost function.
Why, though, do the amyloid clumps form at all? McNay suggests that, in people with T2 diabetes, the body becomes resistant to insulin, a hormone that controls blood-sugar levels — so the body produces more of it.
However, some of that insulin also makes its way into the brain, where its levels are meant to be controlled by the same enzyme that breaks down amyloid.
McNay, who presented his research at the recent Society for Neuroscience meeting in San Diego, said: “High levels of insulin swamp this enzyme so that it stops breaking down amyloid. The latter then accumulates until it forms toxic clumps that poison brain cells. It’s the same amyloid build-up to blame in both diseases — T2 diabetics really do have low-level Alzheimer’s.”
McNay’s research does, however, offer one cause for hope. It is known that people who develop T2 diabetes can get rid of it again by losing weight and taking exercise. McNay suggests that the same remedies might also serve to ward off Alzheimer’s, at least in the very early stages.
His research has changed his own life already. “I have cut down on chocolate and go to the gym more, and as for my children, they have to run around the green before I’ll give them any treats.”
Anti-Depressants
Depression strikes some 350 million people worldwide, according to the World Health Organization, contributing to lowered quality of life as well as an increased risk of heart disease and suicide. Treatments typically include psychotherapy, support groups and education as well as psychiatric medications. SSRIs, or selective serotonin reuptake inhibitors, currently are the most commonly prescribed category of antidepressant drugs in the U.S., and have become a household name in treating depression.
The action of these compounds is fairly familiar. SSRIs increase available levels of serotonin, sometimes referred to as the feel-good neurotransmitter, in our brains. Neurons communicate via neurotransmitters, chemicals which pass from one nerve cell to another. A transporter molecule recycles unused transmitter and carries it back to the pre-synaptic cell. For serotonin, that shuttle is called SERT (short for “serotonin transporter”). An SSRI binds to SERT and blocks its activity, allowing more serotonin to remain in the spaces between neurons. Yet, exactly how this biochemistry then works against depression remains a scientific mystery.
In fact, SSRIs fail to work for mild cases of depression, suggesting that regulating serotonin might be an indirect treatment only. “There’s really no evidence that depression is a serotonin-deficiency syndrome,” says Alan Gelenberg, a depression and psychiatric researcher at The Pennsylvania State University. “It’s like saying that a headache is an aspirin-deficiency syndrome.” SSRIs work insofar as they reduce the symptoms of depression, but “they’re pretty nonspecific,” he adds.
Now, research headed up by neuroscientists David Gurwitz and Noam Shomron of Tel Aviv University in Israel supports recent thinking that rather than a shortage of serotonin, a lack of synaptogenesis (the growth of new synapses, or nerve contacts) and neurogenesis (the generation and migration of new neurons) could cause depression. In this model lower serotonin levels would merely result when cells stopped making new connections among neurons or the brain stopped making new neurons. So, directly treating the cause of this diminished neuronal activity could prove to be a more effective therapy for depression than simply relying on drugs to increase serotonin levels.
Evidence for this line of thought came when their team found that cells in culture exposed to a 21-day course of the common SSRI paroxetine (Paxil is one of the brand names) expressed significantly more of the gene for an integrin protein called ITGB3 (integrin beta-3). Integrins are known to play a role in cell adhesion and connectivity and therefore are essential for synaptogenesis. The scientists think SSRIs might promote synaptogenesis and neurogenesis by turning on genes that make ITGB3 as well as other proteins that are involved in these processes. A microarray, which can house an entire genome on one laboratory slide, was used to pinpoint the involved genes. Of the 14 genes that showed increased activity in the paroxetine-treated cells, the gene that expresses ITGB3 showed the greatest increase in activity. That gene, ITGB3, is also crucial for the activity of SERT. Intriguingly, none of the 14 genes are related to serotonin signaling or metabolism, and, ITGB3 has never before been implicated in depression or an SSRI mode of action.
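As an illustration of the kind of comparison a microarray makes possible, here is a minimal sketch in Python with invented expression values (neither the numbers nor the GENE_A/GENE_B/GENE_C names come from the study). Genes are simply ranked by how much their measured expression rises in treated cells relative to untreated controls; the real analysis covers thousands of genes and requires proper normalisation and statistical testing.

    # Invented expression levels before and after a simulated 21-day
    # paroxetine exposure (arbitrary units).
    control = {"ITGB3": 1.0, "GENE_A": 2.0, "GENE_B": 0.5, "GENE_C": 3.0}
    treated = {"ITGB3": 2.6, "GENE_A": 2.4, "GENE_B": 0.6, "GENE_C": 3.1}

    fold_change = {gene: treated[gene] / control[gene] for gene in control}

    # List genes from the largest to the smallest increase in expression.
    for gene, fc in sorted(fold_change.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{gene:7s} fold change: {fc:.2f}")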
These results, published October 15 in Translational Psychiatry, suggest that SSRIs do indeed work by blocking SERT. But, the bigger picture lies in the fact that in order to make up for the lull in SERT, more ITGB3 is produced, which then goes to work in bolstering synaptogenesis and neurogenesis, the true culprits behind depression. “There are many studies proposing that antidepressants act by promoting synaptogenesis and neurogenesis,” Gurwitz says. “Our work takes one big step on the road for validating such suggestions.”
The research is weakened by its reliance on observations of cells in culture rather than in actual patients. The SSRI dose typically delivered to a patient’s brain is actually a fraction of what is swallowed in a pill. “Obvious next steps are showing that what we found here is indeed viewed in patients as well,” Shomron says.
The study turned up additional drug targets for treating depression—two microRNA molecules, miR-221 and miR-222. Essentially, microRNAs are small molecules that can turn a gene off by binding to its messenger RNA. The microarray results showed a significant decrease in the expression of miR-221 and miR-222, both of which are predicted to target ITGB3, when cells were exposed to paroxetine. So, a drug that could prevent those molecules from inhibiting the production of the ITGB3 protein would arguably enable the growth of more new neurons and synapses. And, if the neurogenesis and synaptogenesis hypothesis holds, a drug that specifically targeted miR-221 or miR-222 could bring sunnier days to those suffering from depression.
Dementia Care
“Before I was born I was a twin,” Richard Carr-Gomm wrote in his autobiography. It may be far-fetched to suppose that his lifelong concern for the lonely stemmed from his sibling’s early death, but that concern has touched the lives of thousands.
Carr-Gomm was the self-effacing but utterly determined founder of the Abbeyfield Society, which offers the elderly the care and company of a small army of volunteers.
The society originated in a house in Bermondsey that the “scrubbing major” bought with a gratuity on his retirement from the Coldstream Guards. Nearly six decades on, it owns or runs more than 500 houses and care homes in Britain and still more in affiliated societies from Australia to Mexico. Between them they provide a refuge for more than 8,000 people from the quiet tragedy of loneliness, and, for a growing number, relief from the still poorly understood traumas of dementia.
One Abbeyfield house on which we report today takes infinite pride in giving its residents a home to enjoy rather than an institution to endure. Its manager has found space for a sweet shop, a library and a farmyard’s worth of pets, which until recently included pigs called Winston and Churchill. More importantly, she and her staff find time — time to talk with those for whom a conversation is a treat; time to sit with a distressed dementia sufferer long enough to learn something about her mood swings; time to make family members welcome and help them cope when eyes that used to light up with recognition no longer seem to know who’s come to visit.
Carr-Gomm was, as a boy, considered “delicate”. That did not stop him marching up Juno Beach on D-Day, or devoting most of his life after the war to a cause that he championed with unbending inner strength. It is a cause that deserves all our support.
Fairness (Seth Godin)
Our society tolerates gross unfairness every day. It tolerates misogyny, racism and the callous indifference to those born without privilege.
But we manage to find endless umbrage for petty slights and small-time favoritism.
When a teacher gives one student a far better grade than he deserves, and does it without shame, we're outraged. When the flight attendant hands that last chicken meal to our seatmate, wow, that's a slight worth seething over for hours.
When Bull Connor directed fire hoses and attack dogs on innocent kids in Birmingham, it conflated the two, the collision of the large and the small. Viewers didn't witness the centuries of implicit and explicit racism, they saw a small, vivid act, moving in its obvious unfairness. It was the small act that focused our attention on the larger injustice.
I think that most of us are programmed to process the little stories, the emotional ones, things that touch people we can connect to. When it requires charts and graphs and multi-year studies, it's too easy to ignore.
We don't change markets, or populations, we change people. One person at a time, at a human level. And often, that change comes from small acts that move us, not from grand pronouncements.
Selective Vision
People intuitively think that their eyes move smoothly across a scene, continuously taking in what is there, like a video camera. But we actually take in a series of multiple snapshots that the brain stitches together into a seamless image. We only get clear detail from a very small part where our vision is focused - everything else is a blur until we fixate on the next bit.
The biggest problem with vision is the way our brain chooses to work. This affects areas such as airport luggage scanning and medical screening for tumours. When you constantly look at images where there is nothing suspicious to be found, the brain begins to expect that nothing will ever be found. Then, when something suspicious does show up, it fails to register, simply because it is unexpected.
This is surprisingly difficult to counter. It is not a matter of watchers failing to concentrate or being careless or lazy. It is a quirk of human brain processing, and it happens to everybody. The only way that researchers have been able to improve accuracy is by 'retraining' the brain by artificially increasing the number of hits (by putting fake occurrences into the data stream).
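A minimal sketch of that retraining idea, in Python, with hypothetical item names and an arbitrary injection rate (nothing below is taken from a real screening system): known fake threats are mixed into the stream the screener sees, so that hits stay frequent enough to remain expected.

    import random

    def inject_synthetic_hits(stream, fakes, rate=0.1, seed=0):
        """Return the screening stream with fake positives mixed in.

        `stream` is the real (mostly benign) sequence of images; `fakes` are
        synthetic threats the software can later reveal and discount.
        The 10% rate is illustrative only.
        """
        rng = random.Random(seed)
        mixed = []
        for item in stream:
            mixed.append(item)
            if rng.random() < rate:
                mixed.append(rng.choice(fakes))
        return mixed

    bags = ["bag_001", "bag_002", "bag_003", "bag_004"]    # hypothetical scans
    fakes = ["synthetic_knife", "synthetic_liquid"]        # hypothetical targets
    print(inject_synthetic_hits(bags, fakes))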
Mental Toughness
Forty seconds before round two, and I’m lying on my back trying to breathe. Pain all through me. Deep breath. Let it go. I won’t be able to lift my shoulder tomorrow, it won’t heal for over a year, but now it pulses, alive, and I feel the air vibrating around me, the stadium shaking with chants, in Mandarin, not for me. My teammates are kneeling above me, looking worried. They rub my arms, my shoulders, my legs. The bell rings. I hear my dad’s voice in the stands, ‘C’mon Josh!’ Gotta get up. I watch my opponent run to the center of the ring. He screams, pounds his chest. The fans explode. They call him Buffalo. Bigger than me, stronger, quick as a cat. But I can take him – if I make it to the middle of the ring without falling over. I have to dig deep, bring it up from somewhere right now. Our wrists touch, the bell rings, and he hits me like a Mack truck. — Joshua Waitzkin
In his book The Art of Learning, Joshua Waitzkin describes how he is able to compete, and win, against martial arts competitors much physically stronger than himself by putting his mind into the game. When I asked Waitzkin whether he thinks his mental game is a result of his high intelligence, he told me,
“I don’t think I have an extraordinary intelligence. Buffalo had cultivated his whole body his whole life, and he had that edge. I had cultivated my mind. My chance lay in making the mental game dominate a physical battle. At a high level of competition, success often hinges on who determines the field and tone of battle.”
“Mental toughness” is a phrase that is commonly used in sports to describe the superior mental qualities of the competitor. Most elite athletes report that at least 50% of superior athletic performance is the result of mental or psychological factors, and a whopping 83% of coaches rate mental toughness as the most important set of psychological characteristics for determining competitive success.
One of the first descriptions of mental toughness was made by sports psychologist James Loehr. Based on his extensive work with elite athletes and coaches, he proposed seven dimensions of mental toughness that he argued can be developed: self-confidence, attention control, minimizing negative energy, increasing positive energy, maintaining motivation levels, attitude control, and visual and imagery control.
Following up this work with a more systematic analysis in 2002, Graham Jones and colleagues interviewed ten international performers (seven males and three females) from a variety of sports. The elite performers were asked to define mental toughness in their own words and describe the central characteristics of mental toughness. The following definition naturally emerged from the interviews:
People who are mentally tough have a psychological edge that enables them to cope better than their opponents with the many demands that sports place on a performer, and they are also more consistent and better than their opponents in remaining determined, focused, confident, and in control under pressure.
The athletes identified 12 key attributes as key to mental toughness in sport, ranked in order of importance:
Unshakeable self-belief in your ability to achieve competition goals (“Mental toughness is about your self belief and not being shaken from your path. . . . It is producing the goods and having the self belief in your head to produce the goods”).
Ability to bounce back from performance set-backs as a result of an increased determination to succeed (“Yea, we all have them (setbacks), the mentally tough performer doesn’t let them affect him, he uses them”).
Unshakeable self-belief that you possess unique qualities and abilities that make you better than your opponents (“I am better than everyone else by a long way because I have something that sets me apart from other performers”).
Insatiable desire and internalized motives to succeed (“You’ve really got to want it, but you’ve also got to want to do it for yourself. Once you start doing it for anyone else . . . you’re in trouble. You’ve also got to really understand why you’re in it . . . and constantly reminding yourself is vital”).
Remaining fully focused on the task at hand in the face of competition-specific distractions (“There are inevitable distractions and you just have to be able to focus on what you need to focus on”).
Regaining psychological control following unexpected, uncontrollable events (competition-specific) (“It’s definitely about not getting unsettled by things you didn’t expect or can’t control. You’ve got to be able to switch back into control mode”).
Pushing back the boundaries of physical and emotional pain, while still maintaining technique and effort under distress during training and competition (“In my sport you have to deal with the physical pain from fatigue, dehydration, and tiredness . . . you are depleting your body of so many different things. It is a question of pushing yourself . . . it’s mind over matter, just trying to hold your technique and perform while under this distress and go beyond your limits”).
Accepting that competition anxiety is inevitable and knowing that you can cope with it. (“I accept that I’m going to get nervous, particularly when the pressure’s on, but keeping the lid on it and being in control is crucial”).
Not being adversely affected by others’ good and bad performances (“There have been cases where people have set world records and people have gone out 5 or 6 minutes later, and improved the world record again. The mentally tough performer uses others’ good performances as a spur rather than say ‘I can’t go that fast.’ They say ‘well, he is no better than me, so I’m going to go out there and beat that’ ”).
Thriving on the pressure of competition (“If you are going to achieve anything worthwhile, there is bound to be pressure. Mental toughness is being resilient to and using the competition pressure to get the best out of yourself”).
Remaining fully focused in the face of personal life distractions (“Once you’re in the competition, you cannot let your mind wander to other things”; and, “it doesn’t matter what has happened to you, you can’t bring the problem into the performance arena”).
Switching sport focus on and off as required (“You need to be able to switch it [i.e., focus] on and off, especially between games during a tournament. The mentally tough performer succeeds by having control of the on/off switch”).
In more recent years, a number of studies have attempted to further clarify mental toughness, its dimensions, and its development. In one large review, Daniel Gucciardi and colleagues argued that the dimensions that comprise mental toughness influence the way we approach and interpret both positive and negative events, which in turn influence performance.
Research also shows that mental toughness is an ongoing developing process. The attitudes, cognitions, emotions, and personal values that comprise mental toughness develop as a result of repeated exposure to a variety of experiences, challenges, and adversities. Once acquired, mental toughness is maintained by:
A desire and motivation to succeed that is insatiable and internalized
A perceived support network that includes sporting and non-sporting personnel
Effective use of basic and advanced psychological skills.
Do athletes have higher levels of mental toughness than non-athletes? In a very recent study, Félix Guillén and Sylvain Laborde compared levels of mental toughness between athletes and non-athletes. Based on the review by Gucciardi and colleagues, they distilled mental toughness down into four main dimensions:
Hope: The unshakeable self-belief in one’s ability to achieve competition goals (“I can think of many ways to get out of a jam”).
Optimism: A general expectancy that good things will happen (“In uncertain times, I usually expect the best”).
Perseverance: Consistency in achieving one’s goals and not giving up easily when facing adversity or difficulties (“I am often so determined that I continue working long after other people have given up”).
Resilience: The ability to adapt to challenges in the environment (“I do not dwell on things that I can’t do anything about”).
All four dimensions were significantly related to each other, forming a general factor of mental toughness. Athletes scored much higher than non-athletes on this general mental toughness factor, with a large effect size. What’s more, there was no difference between the type of sport (individual vs. team sports). This is consistent with prior research suggesting that mental toughness is more a function of environment than domains.
The researchers also found that mental toughness increased with age, also consistent with prior research showing that mental toughness develops through developmental experiences. Finally, the researchers found that athletes with higher levels of mental toughness practiced for longer, on average, than athletes with lower levels of mental toughness.
Mental toughness is not only important in sports. Markus Gerber and colleagues found that adolescents with higher mental toughness are more resilient against stress and depression. As Gucciardi and colleagues argue, mental toughness is important in any environment that requires performance setting, challenges, and adversities.
Beyond Mental Toughness
In Finland there is a phrase, dating back hundreds of years, which refers to extraordinary determination, courage, and resoluteness in the face of extreme adversity. It’s called Sisu.
Rising superstar Emilia Lahti, who is about to begin her doctoral studies relating to Sisu, has made a good case for why Sisu is distinguishable from other dimensions of mental toughness, such as perseverance, grit, and resilience. In one large-scale survey, which she conducted as a Masters student in the Masters of Positive Psychology Program at the University of Pennsylvania, Lahti found that 62% of people surveyed (Finns and Finnish Americans) viewed Sisu as a powerful psychological strength capacity, rather than the ability to be persistent and stick to a task (34%).
Lahti argues that Sisu contributes to an “action mindset”, a consistent and courageous approach toward challenges that enables individuals to see beyond their present limitations and into what might be. I think Joshua Waitzkin illustrates Sisu in his competition with Buffalo (described above), as he digs deep into the wellspring of possibility that is not evident from the surface.
Uploading Your Brain
Everything felt possible at Transhuman Visions 2014, a conference in February billed as a forum for visionaries to "describe our fast-approaching, brilliant, and bizarre future." Inside an old waterfront military depot in San Francisco's Fort Mason Center, young entrepreneurs hawked experimental smart drugs and coffee made with a special kind of butter they said provided cognitive enhancements. A woman offered online therapy sessions, and a middle-aged conventioneer wore an electrode array that displayed his brain waves on a monitor as multicolor patterns.
On stage, a speaker with a shaved head and a thick, black beard held forth on DIY sensory augmentation. A group called Science for the Masses, he said, was developing a pill that would soon allow humans to perceive the near-infrared spectrum. He personally had implanted tiny magnets into his outer ears so that he could listen to music converted into vibrations by a magnetic coil attached to his phone.
None of this seemed particularly ambitious, however, compared with the claim soon to follow. In the back of the audience, carefully reviewing his notes, sat Randal Koene, a bespectacled neuroscientist wearing black cargo pants, a black T-shirt showing a brain on a laptop screen, and a pair of black, shiny boots. Koene had come to explain to the assembled crowd how to live forever. "As a species, we really only inhabit a small sliver of time and space," Koene said when he took the stage. "We want a species that can be effective and influential and creative in a much larger sphere."
Koene's solution was straightforward: He planned to upload his brain to a computer. By mapping the brain, reducing its activity to computations, and reproducing those computations in code, Koene argued, humans could live indefinitely, emulated by silicon. "When I say emulation, you should think of it, for example, in the same sense as emulating a Macintosh on a PC," he said. "It's kind of like platform-independent code."
The audience sat silent, possibly awed, possibly confused, as Koene led them through a complex tour of recent advances in neuroscience supplemented with charts and graphs. Koene has always had a complicated relationship with transhumanists, who likewise believe in elevating humanity to another plane. A Dutch-born neuroscientist and neuro-engineer, he has spent decades collecting the credentials necessary to bring his fringe ideas in line with mainstream science. Now, that science is coming to him. Researchers around the globe have made deciphering the brain a central objective. In 2013, both the U.S. and the EU announced initiatives that promise to accelerate brain science in much the same way that the Human Genome Project advanced genomics. The minutiae may have been lost on the crowd, but as Koene departed the stage, the significance of what they just witnessed was not: The knowledge necessary to achieve what Koene calls "substrate independent minds" seems tantalizingly within reach.
The concept of brain emulation has a long, colorful history in science fiction, but it’s also deeply rooted in computer science. An entire subfield known as neural networking is based on the physical architecture and biological rules that underpin neuroscience.
Roughly 85 billion individual neurons make up the human brain, each one connected to as many as 10,000 others via branches called axons and dendrites. Every time a neuron fires, an electrochemical signal jumps from the axon of one neuron to the dendrite of another, across a synapse between them. It’s the sum of those signals that encodes information and enables the brain to process input, form associations, and execute commands. Many neuroscientists believe the essence of who we are—our memories, emotions, personalities, predilections, even our consciousness—lies in those patterns.
In the 1940s, neurophysiologist Warren McCulloch and mathematician Walter Pitts suggested a simple way to describe brain activity using math. Regardless of everything happening around it, they noted, a neuron can be in only one of two possible states: active or at rest. Early computer scientists quickly grasped that if they wanted to program a brainlike machine, they could use the basic logic systems of their prototypes—the binary electric switches symbolized by 1s and 0s—to represent the on/off state of individual neurons.
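That binary picture is simple enough to write down directly. The sketch below, in Python, is a McCulloch-Pitts-style threshold unit; the weights and threshold are my own illustrative choices, meant only to show the on/off logic the paragraph describes.

    def mcculloch_pitts(inputs, weights, threshold):
        """A binary threshold unit: fire (1) if the weighted sum of binary
        inputs reaches the threshold, otherwise rest (0)."""
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # An AND-like "neuron": it fires only when both inputs are active.
    print(mcculloch_pitts([1, 1], weights=[1, 1], threshold=2))  # prints 1
    print(mcculloch_pitts([1, 0], weights=[1, 1], threshold=2))  # prints 0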
A few years later, Canadian psychologist Donald Hebb suggested that memories are nothing more than associations encoded in a network. In the brain, those associations are formed by neurons firing simultaneously or in sequence. For example, if a person sees a face and hears a name at the same time, neurons in both the visual and auditory areas of the brain will fire, causing them to connect. The next time that person sees the face, the neurons encoding the name will also fire, prompting the person to recollect it.
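Hebb's idea can also be sketched in a few lines. Below is a toy associative memory in Python over made-up binary "face" and "name" patterns: connections are strengthened between units that fire together, and the stored face pattern later recalls its paired name. This is an illustration of the principle only, far simpler than anything in the brain or in modern neural networks.

    # Made-up binary activity patterns for two groups of "neurons".
    face = [1, 0, 1, 0]   # visual units active when the face is seen
    name = [0, 1, 1]      # auditory units active when the name is heard

    # Hebb's rule: strengthen the connection between every co-active pair.
    weights = [[f * n for n in name] for f in face]

    def recall(cue, weights, threshold=1):
        """Drive the name units from a face cue through the learned weights."""
        n_out = len(weights[0])
        sums = [sum(cue[i] * weights[i][j] for i in range(len(cue)))
                for j in range(n_out)]
        return [1 if s >= threshold else 0 for s in sums]

    print(recall(face, weights))  # recovers [0, 1, 1], the associated name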
Using these insights, computer engineers have created artificial neural networks capable of forming associations, or learning. Programmers instruct the networks to remember which pieces of data have been linked in the past, and then to predict the likelihood that those two pieces will be linked in the future. Today, such software can perform a variety of complex pattern-recognition tasks, such as detecting credit card purchases that diverge dramatically from a consumer’s past behavior, indicating possible fraud.
Of course, any neuroscientist will tell you that artificial neural networks don’t begin to incorporate the true complexity of the human brain. Researchers have yet to characterize the many ways neurons interact and have yet to grasp how different chemical pathways affect the likelihood that they will fire. There may be rules they don't yet know exist.
But such networks remain perhaps the strongest illustration of an assumption crucial to the hopes and dreams of Randal Koene: that our identity is nothing more than the behavior of individual neurons and the relationships between them. And that most of the activities of the brain, if technology were capable of recording and analyzing them, can theoretically be reduced to computations.
On a warm afternoon in late January, I follow Koene up the stairs of the second-floor walkup he shares with his girlfriend on the edge of San Francisco’s Potrero Hill. He leads me through a small living room crammed full of synthesizers and Legos and into a bedroom, where a standing desk represents his home office. It holds oversize computer screens and laptops arrayed like the electronics of a starship command center. It’s a modest setting, but Koene is only in the third decade of his quest—a mere blink of an eye when you consider that his goal is immortality.
Koene, the son of a particle physicist, first discovered mind uploading at age 13 when he read the 1956 Arthur C. Clarke classic The City and the Stars. Clarke’s book describes a city one billion years in the future. Its residents live multiple lives and spend the time between them stored in the memory banks of a central computer capable of generating new bodies. “I began to think about our limits,” Koene says. “Ultimately, it is our biology, our brain, that is mortal. But Clarke talks about a future in which people can be constructed and deconstructed, in which people are information.”
It was a vision, Koene decided, worth devoting his life to pursuing. He began by studying physics in college, believing the route to his goal lay in finding ways to reconstitute patterns of individual atoms. By the time he graduated, however, he concluded that all he really needed was a digital brain. So he enrolled in a master’s program at Delft University of Technology in the Netherlands, where he focused on neural networks and artificial intelligence.
It was while at Delft in 1994 that Koene made an important discovery: a community of people who shared his ambition. Exploring the new medium of the Internet, he stumbled upon the “Mind Uploading Home Page,” owned by Joe Strout, an Ohio-born computer buff, aspiring neuroscientist, and self-described immortalist. Strout facilitated a discussion group that Koene quickly joined, and its members began to debate whether extracting information from the brain was technologically feasible, and if it was, what they should call it: downloading, uploading, or mind transfer. They eventually settled on “whole brain emulation.” And then they outlined career goals that would help them advance their cause.
Koene chose to pursue a Ph.D. in computational neuroscience at McGill University, and later landed at a Boston University neurophysiology lab, where he attempted to replicate mouse brain activity on a computer. Strout pursued an advanced degree in neuroscience, then moved on to the lab of a computational neurobiologist at the Salk Institute. “We were all trying to push research problems in whatever way we could,” Strout says. “The trouble was that for the elder neuroscience researchers, this wasn’t a topic they could discuss publicly. They would talk about it over a beer. But it was too fringe for people who were trying to get grants for research.”
By then, many of the other group members had earned their credentials. And in 2007, computational neuroscientist Anders Sandberg, who studies the bioethics of human enhancement at Oxford University, summoned interested experts to Oxford’s Future of Humanity Institute for a two-day workshop. Participants laid out a roadmap of capabilities humans would need to develop in order to successfully emulate a brain: mapping the structure, learning how that structure matches function, and developing the software and hardware to run it.
Not long afterward, Koene left Boston University to become the director of neuroengineering at the Fatronik-Tecnalia Institute in Spain, one of the largest private research organizations in Europe. “I didn’t like the job once I figured out they weren’t into taking any risks and didn’t really care about futuristic things related to whole brain emulation,” Koene says. So, in 2010, he moved to Silicon Valley to take a job as head of analysis at Halcyon Molecular, a nanotechnology company that had raised more than $20 million from PayPal cofounders Peter Thiel and Elon Musk, among others. Though Halcyon’s goal was to develop low-cost, DNA-sequencing tools, its leaders assured Koene he would have time to work on brain emulation, a goal they supported.
By the time Halcyon abruptly went out of business in 2012, Koene had created Carboncopies.org, which serves as a hub for mind-uploading advocates. He had also made a lot of contacts. Within months, he secured financial backing from Dmitry Itskov, a Russian dot-com mogul who hoped to upload himself to a “sophisticated artificial carrier” and considered whole brain emulation an essential step.
“We need to provide a foundation so the new field of brain emulation is taken seriously,” Koene tells me from his bedroom command center. He opens a color-coded chart on one of the screens. It consists of overlapping circles filled with names and affiliations, divided into wedges representing the roadmap’s objectives. Koene points to the outermost circle. “These are the people who just have compatible R&D goals,” he says. Then he indicates the smaller, inner circle. “And these are the people who are onboard.”
It’s all of these individuals, mainstream neuroscientists, who will advance whole brain emulation, Koene says—not transhumanists, who, he observes, “lack rigor.” And they’ll do so even if philosophically their goals are quite different.
Today, as it happens, every pillar of the brain-uploading roadmap is a highly active area in neuroscience, for an entirely unrelated reason: Understanding the structure and function of the brain could help doctors treat some of our most debilitating diseases.
At Harvard University, neurobiologist Jeff Lichtman leads the effort to create a connectome, or comprehensive map of the brain’s structure: the network of trillions of axons, dendrites, and synapses that convey electrochemical signals. Lichtman is working to understand how experiences are physically encoded at the most basic level in the brain. To do so, he uses a device that incorporates innovations made by a brain-uploading proponent, Kenneth Hayworth, who spent time as a postdoc in Lichtman’s lab. It slices off razor-thin pieces of mouse brain and collects them sequentially on a reel of tape. The slices can then be scanned with an electron microscope and viewed on a computer like the frames of a movie.
By following the threadlike extensions of individual nerve cells from frame to frame, Lichtman and his team have gained some interesting insights. “We noticed, for instance, that when an axon bumped into a dendrite and made a synapse, if we followed it along, it made another synapse on the same dendrite,” he says. “Even though there were 80 or 90 other dendrites in there, it seemed to be making a choice. Who expected that? Nobody. It means this thing is not some random mess.”
When he started five years ago, Lichtman says, the technique was so slow it would have taken several centuries to generate images for a cubic millimeter of brain—about one thousandth the size of a mouse brain and a millionth the size of a human one. Now Lichtman can do a cubic millimeter every couple of years. This summer, a new microscope will reduce the timeline to a couple of weeks. An army of such machines, he says, could put an entire human brain within reach.
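A rough back-of-envelope calculation, using only the round figures above (a cubic millimeter as about a millionth of a human brain, and a couple of weeks of imaging per cubic millimeter), shows why Lichtman talks about an army of machines rather than one. The fleet size in the sketch is a hypothetical number chosen purely for illustration.

```python
# Back-of-envelope arithmetic from the figures above (all rough, illustrative):
# one cubic millimeter is about a millionth of a human brain, and the new
# microscope needs roughly two weeks per cubic millimeter.

weeks_per_mm3 = 2
human_brain_mm3 = 1_000_000          # "a millionth the size of a human one"

single_machine_years = weeks_per_mm3 * human_brain_mm3 / 52
print(f"One microscope: ~{single_machine_years:,.0f} years")   # roughly 38,000 years

machines = 10_000                    # a hypothetical "army" of such microscopes
print(f"{machines:,} microscopes: ~{single_machine_years / machines:.0f} years")  # a few years
```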
At the same time, scientists elsewhere are aggressively mapping neural function. Last April, President Obama unveiled the BRAIN Initiative (for Brain Research through Advancing Innovative Neurotechnologies) with an initial $100 million investment that many hope will grow to rival the $3.8 billion poured into decoding the human genome.
Columbia University neuroscientist Rafael Yuste proposed a large-scale brain activity map that helped inspire the BRAIN Initiative, and he has spent two decades developing tools aimed at tracking how neurons excite and inhibit one another. Yuste likens the brain’s connectome to roads and the firing of its neurons to traffic.
Studying how neurons fire in circuits and how those circuits interact, he says, could help demystify diseases such as schizophrenia and autism. It could also reveal far more. Our very identity, Yuste suspects, lies in the traffic of brain activity. “Our identity is no more than that,” he says. “There is no magic inside our skull. It’s just neurons firing.”
To study those electrical impulses, scientists need to record the activity of individual neurons, but they’re limited by the micromachining techniques used to produce today’s technology. In his lab at MIT, neuroengineer Ed Boyden is developing electrode arrays a hundred times denser than the ones currently in use. At the University of California, Berkeley, meanwhile, a team of scientists has proposed nanoscale particles called neural dust, which they plan to someday embed in the cortex as a wireless brain-machine interface.
Whatever discoveries these researchers make may end up as fodder for another ambitious government initiative: the European Union’s Human Brain Project. Backed by 1.2 billion euros and 130 research institutions, it aims to create a supercomputer simulation that incorporates everything currently known about how the human brain works.
Koene is thrilled with all of these developments. But he’s most excited about a brain-simulation technology already being tested in animals. In 2011, a team from the University of Southern California (USC) and Wake Forest University succeeded in creating the world’s first artificial neural implant—a device capable of producing electrical activity that causes a rat to react as if the signal came from the animal’s own brain. “We’ve been able to uncover the neural code—the actual spatio-temporal firing patterns—for particular objects in the hippocampus,” says Theodore Berger, the USC biomedical engineer who led the effort. “It’s a major breakthrough.”
Scientists believe long-term memory involves neurons in two areas of the hippocampus that convert electrical signals to entirely new sequences, which are then transmitted to other parts of the brain. Berger’s team recorded the incoming and outgoing signals in rats trained to perform a memory task, and then programmed a computer chip to emulate the latter on cue. When they destroyed one of the layers of the rats’ hippocampus, the animals couldn’t perform the task. After being outfitted with the neural implant, they could.
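To make the logic concrete, here is a deliberately cartoonish Python sketch of the general idea, not Berger’s actual device or model: during training, record what the intact circuit did for each input; afterward, replay the learned output for the closest matching input once the biological layer is out of the loop. The binary “spike” patterns and the nearest-match rule are invented for illustration only.

```python
# A cartoon of the prosthesis idea, not Berger's actual model: record which
# outgoing pattern the healthy circuit produced for each incoming pattern,
# then replay the learned output for the closest matching input when the
# damaged layer can no longer respond. Patterns are illustrative binary vectors.

def hamming(a, b):
    """Count the positions where two spike patterns differ."""
    return sum(x != y for x, y in zip(a, b))

class ToyNeuralProsthesis:
    def __init__(self):
        self.memory = []   # (input_pattern, output_pattern) pairs

    def record(self, incoming, outgoing):
        """Training phase: store what the intact circuit did."""
        self.memory.append((incoming, outgoing))

    def stimulate(self, incoming):
        """Replacement phase: emit the output learned for the nearest known input."""
        _, outgoing = min(self.memory, key=lambda pair: hamming(pair[0], incoming))
        return outgoing

chip = ToyNeuralProsthesis()
chip.record((1, 0, 1, 0), (0, 1, 1, 0))   # observed while the circuit was intact
chip.record((0, 1, 0, 1), (1, 0, 0, 1))

# With the biological layer out of action, a familiar (or similar) input
# still produces the appropriate downstream signal.
print(chip.stimulate((1, 0, 1, 1)))       # -> (0, 1, 1, 0)
```

The real implant models spatio-temporal firing patterns with far more sophisticated mathematics, but the replace-the-mapping logic is the part Koene finds so encouraging.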
Berger and his team have since replicated the activity of other groups of neurons in the hippocampus and prefrontal cortex of primates. The next step, he says, will be to repeat the experiment with more complex memories and behaviors. To that end, the researchers have begun to adapt the implant for testing in human epilepsy patients who have had surgery to remove areas of the hippocampus involved in seizures.
“Ted Berger’s experiment shows in principle you can take an unknown circuit, analyze it, and make something that can replace what it does,” Koene says. “The entire brain is nothing more than just many, many different individual circuits.”
That afternoon, Koene and I drive to an office park in Petaluma about 30 miles outside of San Francisco. We head into a dimly lit, stucco building decorated with posters that superimpose words like “focus” and “imagination” over photographs of Alpine peaks and tropical sunsets.
Guy Paillet, a snowy-haired former IBM engineer with a thick French accent and a cheerful Santa Claus–like disposition, soon joins us in a conference room. Paillet and his partner had invented a new kind of energy-efficient computer chip based on the physical architecture of the brain—an achievement that had earned them inclusion in Koene’s chart. Koene wanted an update on their progress.
Paillet reports that he is negotiating to take over an economically troubled computer chip–fabrication foundry in the South of France. Would Koene be willing, he asks, to serve as a scientific advisor and possibly a fund-raiser on a related project? Koene shifts impatiently in his chair. “I just had an idea,” he announces. “You are thinking of getting into the foundry business. At the same time people at UC Berkeley are thinking of building new types of neural interfaces. When they get their prototype to work, would you consider . . . .”
“That’s a very good idea!” Paillet interrupts, before Koene can even finish asking whether he might fabricate their device too.
As we pull out of the parking lot, Koene is ebullient. I had just witnessed his job at its best. “This is what I do,” he says. “You have got tons of labs and researchers who are motivated by their own personal interests.” The trick, he says, is to identify the goals that could benefit brain uploading and try to push them forward—whether the researchers have asked for the help or not.
Certainly, it seems, many scientists have proven willing to consult and even collaborate with Koene. That was clear last spring, when scientists from institutions as varied as MIT, Harvard University, Duke University, and the University of Southern California descended on New York City’s Lincoln Center to speak at a two-day congress that Koene organized with the Russian mogul Itskov. Called Global Future 2045, the congress set out to explore the requirements and implications of transferring minds into virtual bodies by the year 2045.
Some of those present, however, later distanced themselves from the event’s stated “spiritual and sci-tech” vision. “We were trying to get people with a lot of funding who can do big things to start investing in important questions,” says Jose Carmena, one of the Berkeley neuroscientists working on neural dust. “That doesn’t mean we have the same goal. We have similar goals along the way, like recording from as many neurons as possible. We all want to understand the brain. It just happens that they need to understand the brain so they can upload it to a computer.”
Carmena’s reticence was shared by other researchers, some of whom grew alarmed at even a faint possibility that their opinions about the technical plausibility of brain uploading—however qualified and cautious—might somehow be misinterpreted as an endorsement. “There is a big difference between understanding and building a brain,” Yuste says. “There are many things that we more or less understand but we cannot build.” For example, the brain’s hardware could prove critical, he explained, “or there could be intrinsic stochastic events, like in quantum physics, that could make it impossible to replicate.”
Harvard’s Lichtman was more comfortable speculating on the concept. “I am not sure any new laws of physics have to be invented as they go forward,” he says. “It’s not completely impossible, like the idea of putting a cow head on a dog. It’s a science-fiction idea, but making a brain of silicon does not seem crazy to me.” In fact, he thinks the movement has helped advance neuroscience and hopes people like his former postdoc Hayworth succeed—not so they can live forever but to accelerate cures for brain dysfunction.
Hayworth, for his part, is now a senior scientist at Howard Hughes Medical Institute’s Janelia Farm Research Campus, a leader in connectomics, where he is developing techniques to precisely image much larger sections of brain than currently possible. He also founded the Brain Preservation Foundation, which has offered a prize for inventing a method that can preserve the brain until emulation technology catches up. “I know this is a controversial topic,” he says, “and there aren’t a heck of a lot of scientific institutes of any type that relish being dragged into it. Hopefully at some point that will change.”
In the meantime, many scientists seem to puzzle over a question more fundamental to the brain uploaders’ goal: What’s the point? Existing indefinitely in the confines of computer code, Lichtman points out, would be a pretty boring life.
Earlier in the day, I had asked Todd Huffman, a member of Strout’s early discussion group, whether the quest really boiled down to achieving immortality. Koene and I had dropped by Huffman’s company, which received venture capital to develop automated brain-slicing and imaging technologies. Huffman was wearing pink toenail polish on his shoeless feet and sported a thick beard and bleached faux-hawk.
“That’s a very egocentric and individualist way of characterizing it,” he responded. “It’s so that we can look at the thought structures of humans who are alive today, so that we can understand human history and what it is to be human. If we can capture and work with human creativity, drive, and awareness the same way that we work with, you know, pieces of matter,” he said, “we can take what it is to be human, move it to another substrate, and go do things that we can’t do as individual humans. We want as a species to continue our evolution.”
Brain uploading, Koene agreed, was about evolving humanity, leaving behind the confines of a polluted planet and liberating humans to experience things that would be impossible in an organic body. “What would it be like, for instance, to travel really close to the sun?” he wondered. “I got into this because I was interested in exploring not just the world, but eventually the universe. Our current substrates, our biological bodies, have been selected to live in a particular slot in space and time. But if we could get beyond that, we could tackle things we can’t currently even contemplate.”
Beliefs Trump Facts 2
Brendan Nyhan, a professor of political science at Dartmouth, published the results of a study that he and a team of pediatricians and political scientists had been working on for three years. They had followed a group of almost two thousand parents, all of whom had at least one child under the age of seventeen, to test a simple relationship: Could various pro-vaccination campaigns change parental attitudes toward vaccines? Each household received one of four messages: a leaflet from the Centers for Disease Control and Prevention stating that there had been no evidence linking the measles, mumps, and rubella (M.M.R.) vaccine and autism; a leaflet from the Vaccine Information Statement on the dangers of the diseases that the M.M.R. vaccine prevents; photographs of children who had suffered from the diseases; and a dramatic story from the Centers for Disease Control and Prevention about an infant who almost died of measles. A control group did not receive any information at all. The goal was to test whether facts, science, emotions, or stories could make people change their minds.
The result was dramatic: a whole lot of nothing. None of the interventions worked. The first leaflet - focussed on a lack of evidence connecting vaccines and autism - seemed to reduce misperceptions about the link, but it did nothing to affect intentions to vaccinate. It even decreased intent among parents who held the most negative attitudes toward vaccines, a phenomenon known as the backfire effect. The other two interventions fared even worse: the images of sick children increased the belief that vaccines cause autism, while the dramatic narrative somehow managed to increase beliefs about the dangers of vaccines. “It’s depressing,” Nyhan said. “We were definitely depressed,” he repeated, after a pause.
Nyhan’s interest in false beliefs dates back to early 2000, when he was a senior at Swarthmore. It was the middle of a messy Presidential campaign, and he was studying the intricacies of political science. “The 2000 campaign was something of a fact-free zone,” he said. Along with two classmates, Nyhan decided to try to create a forum dedicated to debunking political lies. The result was Spinsanity, a fact-checking site that presaged venues like PolitiFact and the Annenberg Policy Center’s factcheck.org. For four years, the trio plugged along. Their work was popular - it was syndicated by Salon and the Philadelphia Inquirer, and it led to a best-selling book - but the errors persisted. And so Nyhan, who had already enrolled in a doctorate program in political science at Duke, left Spinsanity behind to focus on what he now sees as the more pressing issue: If factual correction is ineffective, how can you make people change their misperceptions? The 2014 vaccine study was part of a series of experiments designed to answer the question.
Until recently, attempts to correct false beliefs haven’t had much success. Stephan Lewandowsky, a psychologist at the University of Bristol whose research into misinformation began around the same time as Nyhan’s, conducted a review of misperception literature through 2012. He found much speculation, but, apart from his own work and the studies that Nyhan was conducting, there was little empirical research. In the past few years, Nyhan has tried to address this gap by using real-life scenarios and news in his studies: the controversy surrounding weapons of mass destruction in Iraq, the questioning of Obama’s birth certificate, and anti-G.M.O. activism. Traditional work in this area has focussed on fictional stories told in laboratory settings, but Nyhan believes that looking at real debates is the best way to learn how persistently incorrect views of the world can be corrected.
One thing he learned early on is that not all errors are created equal. Not all false information goes on to become a false belief - that is, a more lasting state of incorrect knowledge - and not all false beliefs are difficult to correct. Take astronomy. If someone asked you to explain the relationship between the Earth and the sun, you might say something wrong: perhaps that the sun rotates around the Earth, rising in the east and setting in the west. A friend who understands astronomy may correct you. It’s no big deal; you simply change your belief.
But imagine living in the time of Galileo, when understandings of the Earth-sun relationship were completely different, and when that view was tied closely to ideas of the nature of the world, the self, and religion. What would happen if Galileo tried to correct your belief? The process isn’t nearly as simple. The crucial difference between then and now, of course, is the importance of the misperception. When there’s no immediate threat to our understanding of the world, we change our beliefs. It’s when that change contradicts something we’ve long held as important that problems occur.
In those scenarios, attempts at correction can indeed be tricky. In a study from 2013, Kelly Garrett and Brian Weeks looked to see if political misinformation - specifically, details about who is and is not allowed to access your electronic health records - that was corrected immediately would be any less resilient than information that was allowed to go uncontested for a while. At first, it appeared as though the correction did cause some people to change their false beliefs. But, when the researchers took a closer look, they found that the only people who had changed their views were those who were ideologically predisposed to disbelieve the fact in question. If someone held a contrary attitude, the correction not only didn’t work - it made the subject more distrustful of the source. A climate-change study from 2012 found a similar effect. Strong partisanship affected how a story about climate change was processed, even if the story was apolitical in nature, such as an article about possible health ramifications from a disease like the West Nile Virus, a potential side effect of climate change. If information doesn’t square with someone’s prior beliefs, he discards the beliefs if they’re weak and discards the information if the beliefs are strong.
Even when we think we’ve properly corrected a false belief, the original exposure often continues to influence our memory and thoughts. In a series of studies, Lewandowsky and his colleagues at the University of Western Australia asked university students to read the report of a liquor robbery that had ostensibly taken place in Australia’s Northern Territory. Everyone read the same report, but in some cases racial information about the perpetrators was included and in others it wasn’t. In one scenario, the students were led to believe that the suspects were Caucasian, and in another that they were Aboriginal. At the end of the report, the racial information either was or wasn’t retracted. Participants were then asked to take part in an unrelated computer task for half an hour. After that, they were asked a number of factual questions (“What sort of car was found abandoned?”) and inference questions (“Who do you think the attackers were?”). After the students answered all of the questions, they were given a scale to assess their racial attitudes toward Aboriginals.
Everyone’s memory worked correctly: the students could all recall the details of the crime and could report precisely what information was or wasn’t retracted. But the students who scored highest on racial prejudice continued to rely on the racial misinformation that identified the perpetrators as Aboriginals, even though they knew it had been corrected. They answered the factual questions accurately, stating that the information about race was false, and yet they still relied on race in their inference responses, saying that the attackers were likely Aboriginal or that the store owner likely had trouble understanding them because they were Aboriginal. This was, in other words, a laboratory case of the very dynamic that Nyhan identified: strongly held beliefs continued to influence judgment, despite correction attempts - even with a supposedly conscious awareness of what was happening.
In a follow-up, Lewandowsky presented a scenario that was similar to the original experiment, except now, the Aboriginal was a hero who disarmed the would-be robber. This time, it was students who had scored lowest in racial prejudice who persisted in their reliance on false information, in spite of any attempt at correction. In their subsequent recollections, they mentioned race more frequently, and incorrectly, even though they knew that piece of information had been retracted. False beliefs, it turns out, have little to do with one’s stated political affiliations and far more to do with self-identity: What kind of person am I, and what kind of person do I want to be? All ideologies are similarly affected.
It’s the realization that persistently false beliefs stem from issues closely tied to our conception of self that prompted Nyhan and his colleagues to look at less traditional methods of rectifying misinformation. Rather than correcting or augmenting facts, they decided to target people’s beliefs about themselves. In a series of studies that they’ve just submitted for publication, the Dartmouth team approached false-belief correction from a self-affirmation angle, an approach that had previously been used for fighting prejudice and low self-esteem. The theory, pioneered by Claude Steele, suggests that, when people feel their sense of self threatened by the outside world, they are strongly motivated to correct the misperception, be it by reasoning away the inconsistency or by modifying their behavior. For example, when women are asked to state their gender before taking a math or science test, they end up performing worse than if no such statement appears, conforming their behavior to societal beliefs about female math-and-science ability. To address this so-called stereotype threat, Steele proposes an exercise in self-affirmation: either write down or say aloud positive moments from your past that reaffirm your sense of self and are related to the threat in question. Steele’s research suggests that affirmation makes people far more resilient and high performing, be it on an S.A.T., an I.Q. test, or at a book-club meeting.
Normally, self-affirmation is reserved for instances in which identity is threatened in direct ways: race, gender, age, weight, and the like. Here, Nyhan decided to apply it in an unrelated context: Could recalling a time when you felt good about yourself make you more broad-minded about highly politicized issues, like the Iraq surge or global warming? As it turns out, it could. On all issues, attitudes became more accurate with self-affirmation, and remained just as inaccurate without. That effect held even when no additional information was presented - that is, when people were simply asked the same questions twice, before and after the self-affirmation.
Still, as Nyhan is the first to admit, it’s hardly a solution that can be applied easily outside the lab. “People don’t just go around writing essays about a time they felt good about themselves,” he said. And who knows how long the effect lasts - it’s not as though we often think good thoughts and then go on to debate climate change.
But, despite its unwieldiness, the theory may still be useful. Facts and evidence, for one, may not be the answer everyone thinks they are: they simply aren’t that effective, given how selectively they are processed and interpreted. Instead, why not focus on presenting issues in a way that keeps broader notions of identity out of it - messages that are not political, not ideological, not in any way a reflection of who you are?
Take the example of the burgeoning raw-milk movement. So far, it’s a relatively fringe phenomenon, but if it spreads it threatens to undo the health benefits of more than a century of pasteurization. The C.D.C. calls raw milk “one of the world’s most dangerous food products,” noting that improperly handled raw milk is responsible for almost three times as many hospitalizations as any other food-borne illness. And yet raw-milk activists are becoming increasingly vocal - and the supposed health benefits of raw milk are gaining increased support. To prevent the idea from spreading even further, Nyhan advises, advocates of pasteurization shouldn’t dwell on the misperceptions, lest they “inadvertently draw more attention to the counterclaim.” Instead, they should create messaging that self-consciously avoids any broader issues of identity, pointing out, for example, that pasteurized milk has kept children healthy for a hundred years.
I asked Nyhan if a similar approach would work with vaccines. He wasn’t sure - for the present moment, at least. “We may be past that point with vaccines,” he told me. “For now, while the issue is already so personalized in such a public way, it’s hard to find anything that will work.” The message that could be useful for raw milk, he pointed out, cuts another way in the current vaccine narrative: the diseases are bad, but people now believe that the vaccines, unlike pasteurized milk, are dangerous. The longer the narrative remains co-opted by prominent figures with little to no actual medical expertise - the Jenny McCarthys of the world - the more difficult it becomes to find a unified, non-ideological theme. The message can’t change unless the perceived consensus among figures we see as opinion and thought leaders changes first.
And that, ultimately, is the final, big piece of the puzzle: the cross-party, cross-platform unification of the country’s élites, those we perceive as opinion leaders, can make it possible for messages to spread broadly. The campaign against smoking is one of the most successful public-interest fact-checking operations in history. But, if smoking were just for Republicans or Democrats, change would have been far less likely. It’s only after ideology is put to the side that a message itself can change, so that it becomes decoupled from notions of self-perception.
Vaccines, fortunately, aren’t political. “They’re not inherently linked to ideology,” Nyhan said. “And that’s good. That means we can get to a consensus.” Ignoring vaccination, after all, can make people of every political party, and every religion, just as sick.
We Hate Thinking
MANY people would rather inflict pain on themselves than spend 15 minutes in a room with nothing to do but think, according to a US study.
Researchers at the University of Virginia and Harvard University conducted 11 different experiments to see how people reacted to being asked to spend some time alone.
Just over 200 people participated in the experiments, in which researchers asked them to sit alone in an unadorned room, and report back on what it was like to entertain themselves with their thoughts for between six and 15 minutes.
About half found the experience unpleasant.
“Most people do not enjoy ‘just thinking’ and clearly prefer having something else to do,” said the study in the journal Science.
Researchers then turned their attention to what people were doing to avoid being alone with their thoughts.
In one experiment, students were asked to do the “thinking time” exercise at home.
Afterward, 32 per cent reported they had cheated by getting out of their chair, listening to music or consulting their mobile phone.
An initial pilot study found, surprisingly, that students preferred to hear the sound of a scraping knife to hearing no noise at all.
“We thought, surely, people wouldn’t shock themselves,” co-author Erin Westgate, a PhD student at the University of Virginia, said.
They offered participants a chance to rate various stimuli, from seeing attractive photographs to the feeling of being given an electric shock about as strong as one that might come from dragging one’s feet on a carpet.
After the participants felt the shock, some even said they would rather pay $5 than feel it again.
Then each subject went into a room for 15 minutes of thinking time alone. They were told they had the opportunity to shock themselves, if desired.
Two-thirds of the male subjects - 12 out of 18 - gave themselves at least one shock while they were alone.
Most of the men shocked themselves between one and four times, although one “outlier” shocked himself 190 times.
A quarter of the women, six out of 24, decided to shock themselves, each between one and nine times.
All of those who shocked themselves had previously said they would have paid to avoid it.
Ms Westgate said she is still astounded by those findings. “I think we just vastly underestimated both how hard it is to purposely engage in pleasant thought and how strongly we desire external stimulation from the world around us, even when that stimulation is actively unpleasant.”
Music Triggers Memories in Dementia Patients
When asked about her childhood in the film Alive Inside, a 90-year-old woman with dementia replies, “I’ve forgotten so much, I’m very sorry.” Filmmaker Michael Rossato-Bennett then plays music from her past for her. “That’s Louis Armstrong,” she says. “He’s singing ‘When the Saints Go Marching In’ and it takes me back to my school days.” She then proceeds to recall precise details from her life: that her mother told her not to listen to Louis Armstrong, the date of her birthday, that she worked at Fort Jackson during wartime, and much more.
Alive Inside documents the uncanny power of music to reawaken emotions and lost memories in people with dementia. Rossato-Bennett shadows Dan Cohen, a social worker and founder of the nonprofit Music & Memory, as he brings personalized music on iPods into nursing homes across the country. The transformation in emotion, awareness and memory shown in these elderly patients may leave viewers incredulous, wondering “How is this possible?” A number of researchers have studied this topic, however, and they have some ideas about how music affects the brain—specifically, music that is deeply meaningful to the person.
Music tends to accompany events that arouse emotions or otherwise make strong impressions on us—such as weddings, graduations and even spending good times with friends as a teenager. These kinds of experiences form strong memories, and the music and memories likely become intertwined in our neural networks, according to Julene Johnson, a professor at the University of California, San Francisco’s Institute for Health and Aging. Movements, such as dancing, also often pair with our experience of music, which can facilitate memory formation. Even many years later, hearing the music can evoke memories of these long-past events.
As Alive Inside shows, music retains this power even for many people with dementia. Researchers note that the brain areas that process and remember music are typically less damaged by dementia than other regions, and they speculate that this sparing may explain the phenomenon.
Another contributing factor might be that elderly people with dementia, especially those in nursing homes, often live in an unfamiliar environment. “It’s possible those long-term memories are still there,” Johnson says, “but people just have a harder time accessing them because there’s not a lot of context in which someone could pull out those memories.” It seems that familiar music might be a good tool to provide context and reconnect with lost memories.
Johnson also notes that music will not have a strong effect on all people with dementia. “This isn’t universal,” she says. “There are some dementias where the recognition of music is impaired.”
In addition to reawakening memories, research and anecdotes have shown that music can soothe agitated patients and thus may reduce the need for antipsychotic drugs to calm them down.
Despite music’s apparent benefits, few studies have explored its influence on memory recall in people with dementia. “It’s really an untapped area,” Johnson says. Petr Janata, a cognitive neuroscientist in the Center for Mind and Brain at the University of California, Davis, is one researcher investigating the topic of music and memory. He says that although scientists still do not have the answers for why and how music reawakens memories in people with dementia, there is tremendous anecdotal evidence that suggests it does work. “I don’t think we’re there yet with a bulletproof explanation for why this happens,” Janata says, “but I do think this phenomenon is real and it’s just a matter of time before it’s fully borne out by scientific research.”
In the meantime, though, Dan Cohen continues his mission of using music to help patients and their families and caregivers cope with dementia. “We need to use music to engage with people,” Cohen says, “to allow them to express themselves, enjoy themselves and live again.” And he is determined to make this happen all over the country—he has already brought iPods into 640 nursing homes across 45 states, and he aims to establish personalized music as a standard of care in all 50,000 care facilities in the U.S.
3 Brain Myths
IN the early 19th century, a French neurophysiologist named Pierre Flourens conducted a series of innovative experiments. He successively removed larger and larger portions of brain tissue from a range of animals, including pigeons, chickens and frogs, and observed how their behavior was affected.
His findings were clear and reasonably consistent. “One can remove,” he wrote in 1824, “from the front, or the back, or the top or the side, a certain portion of the cerebral lobes, without destroying their function.” For mental faculties to work properly, it seemed, just a “small part of the lobe” sufficed.
Thus the foundation was laid for a popular myth: that we use only a small portion — 10 percent is the figure most often cited — of our brain. An early incarnation of the idea can be found in the work of another 19th-century scientist, Charles-Édouard Brown-Séquard, who in 1876 wrote of the powers of the human brain that “very few people develop very much, and perhaps nobody quite fully.”
But Flourens was wrong, in part because his methods for assessing mental capacity were crude and his animal subjects were poor models for human brain function. Today the neuroscience community uniformly rejects the notion, as it has for decades, that our brain’s potential is largely untapped.
The myth persists, however. The newly released movie “Lucy,” about a woman who acquires superhuman abilities by tapping the full potential of her brain, is only the latest and most prominent expression of this idea.
Myths about the brain typically arise in this fashion: An intriguing experimental result generates a plausible if speculative interpretation (a small part of the lobe seems sufficient) that is later overextended or distorted (we use only 10 percent of our brain). The caricature ultimately infiltrates pop culture and takes on a life of its own, quite independent from the facts that spawned it.
Another such myth is the idea that the left and right hemispheres of the brain are fundamentally different. The “left brain” is supposedly logical and detail-oriented, whereas the “right brain” is the seat of passion and creativity. This caricature developed initially out of the observation, dating from the 1860s, that damage to the left hemisphere of the brain can have drastically different effects on language and motor control than does damage to the right hemisphere.
But while these and other, more subtle, asymmetries certainly exist, far too much has been made of the idea of distinct left- and right-brain function. The fact is that the two sides of the brain are more similar to each other than they are different, and both sides participate in most tasks, especially complex ones like acts of creativity and feats of logic.
In recent years, a new myth about the brain has started to emerge. This is the myth of mirror neurons, or the idea that a certain class of brain cells discovered in the macaque monkey is the key to understanding the human mind.
Mirror neurons are activated both when a macaque monkey generates its own actions, such as reaching for a piece of fruit, and when it observes others who are performing the same action themselves. Some scientists have argued that these cells are responsible for the ability of monkeys to understand other monkeys’ actions, by simulating the action in their own brains. It has also been claimed that humans have their own mirror system (most likely true), which not only allows us to understand actions but also underlies a wide range of our mental skills — language, imitation, empathy — as well as disorders, such as autism, in which the system is said to be dysfunctional.
The mirror neuron claim has escaped the lab and is starting to find its way into popular culture. You might hear it said, for example, that watching a World Cup match is an intense experience because our mirror neurons allow us to experience the game as if we were on the field itself, simulating every kick and pass.
But as with older myths, this speculation has lost its connection with the data. We now recognize that physical movements themselves don’t uniquely determine our understanding of them. After all, we can understand actions that we can’t ourselves perform (flying, slithering) and a single movement can be understood in many ways (tipping a carafe can be pouring or filling or emptying). Further research shows that dysfunction of the motor system, for example in cerebral palsy, stroke or Lou Gehrig’s disease, does not preclude the ability to understand actions (or enjoy World Cup matches). Accordingly, more recently developed theories of mirror neuron function emphasize their role in motor control instead of understanding actions.
So please, take heed. An ounce of myth prevention now may save a pound of neuroscientific nonsense later.
The Problem With Perfectionists
Perfectionism is a trait many of us cop to coyly, maybe even a little proudly. (“I’m a perfectionist,” for example, being the classic thing to say in a job interview when you’re asked to name your biggest flaw — one that you don’t really think is a flaw at all.) But real perfectionism can be devastatingly destructive, leading to crippling anxiety or depression, and it may even be an overlooked risk factor for suicide, argues a new paper in Review of General Psychology, a journal of the American Psychological Association.
The most agreed-upon definition of perfectionist is simply the need to be perfect, or to at least appear that way. We tend to see the Martha Stewarts and Steve Jobs and Tracy Flicks of the world as high-functioning, high-achieving people, even if they are a little intense, said lead author Gordon Flett, a psychologist at York University who has spent decades researching the potentially ruinous psychological impact of perfectionism. “Other than those people who have suffered greatly because of their perfectionism or the perfectionism of a loved one, the average person has very little understanding or awareness of how destructive perfectionism can be,” Flett said in an email. But for many perfectionists, that “together” image is just an emotionally draining mask and underneath “they feel like imposters,” he said.
And, eventually, that façade may collapse. In one 2007 study, researchers conducted interviews with the friends and family members of people who had recently killed themselves. Without prompting, more than half of the deceased were described as “perfectionists” by their loved ones. Similarly, in a British study of students who committed suicide, 11 out of the 20 students who’d died were described by those who knew them as being afraid of failure. In another study, published last year, more than 70 percent of 33 boys and young men who had killed themselves were said by their parents to have placed “exceedingly high” demands and expectations on themselves — traits associated with perfectionism.
It doesn’t take much imagination to explain what might drive a perfectionist to self-harm. The all-or-nothing, impossibly high standards perfectionists set for themselves often mean that they’re not happy even when they’ve achieved success. And research has suggested that anxiety over making mistakes may ultimately be holding some perfectionists back from ever achieving success in the first place. “Wouldn't it be good if your surgeon, or your lawyer or financial advisor, is a perfectionist?” said Thomas S. Greenspon, a psychologist and author of a recent paper on an “antidote to perfectionism,” published in Psychology in the Schools. “Actually, no. Research confirms that the most successful people in any given field are less likely to be perfectionistic, because the anxiety about making mistakes gets in your way,” he continued. “Waiting for the surgeon to be absolutely sure the correct decision is being made could allow me to bleed to death.”
But the dangers of perfectionism, and particularly the link to suicide, have been overlooked at least partially because perfectionists are very skilled at hiding their pain. Admitting to suicidal thoughts or depression wouldn’t exactly fit in with the image they’re trying to project. Perfectionism might not only be driving suicidal impulses, it could also be simultaneously masking them.
Still, there’s a distinction between perfectionism and the pursuit of excellence, Greenspon said. Perfectionism is more than pushing yourself to do your best to achieve a goal; it’s a reflection of an inner self mired in anxiety. “Perfectionistic people typically believe that they can never be good enough, that mistakes are signs of personal flaws, and that the only route to acceptability as a person is to be perfect,” he said. Because the one thing these people are decidedly not-perfect at, research shows, is self-compassion.
If you have perfectionistic tendencies, Flett advises aiming the trait outside yourself. “There is much to be said for feeling better about yourself by volunteering and making a difference in the lives of others,” he said. If you’re a perfectionist who also happens to be a parent, it’s even more important to get your inner Tracy Flick under control, because research suggests that perfectionism is a trait that you can pass down to your kids. One simple way to help your kids, he suggests, is storytelling. “Kids love to hear a parent or teacher talk about mistakes they have made or failures they have had to overcome,” he said. “This can reinforce the ‘nobody is perfect and you don't have to be either’ theme.”
It’s important to address perfectionism as early as possible, because the link between perfectionism and suicide attempts is a particularly dangerous one. In a sad twist of irony, once a perfectionist has made up his mind to end his own life, his conscientious nature may make him more likely to succeed. Perfectionists act deliberately, not impulsively, and this means their plans for taking their own lives tend to be very well thought-out and researched, Flett and colleagues write. To drive the point home, they quote the wife of a Wyoming man who died by suicide in 2006, who told the Jackson Hole News & Guide, “He was very deliberate. He was a perfectionist. I have been learning that perfectionism plus depression is a loaded gun.”
Mind Over Money
There are not many fund managers who believe that the answers to successful investing are to be found in Neanderthal man’s reaction to a sabre-toothed tiger. Thomas Howard is not your usual money manager.
An academic for most of his career, he became a fund manager only when he turned 54, an age when most of us are beginning to think of retiring. Yet since its launch 12 years ago, his Athena Pure Valuation fund has performed extraordinarily well.
Interestingly, he has thrown away the foundations of financial theory that were the bible of his early university years. He has jettisoned the gold standards of “modern portfolio theory” and the “efficient markets hypothesis”. He manages money on the basis of what he calls “behavioural portfolio management”. He realises that “the emotions that the stock market engenders are the same as when a sabre-toothed tiger showed up at the cave door tens of thousands of years ago”. By betting against that evolutionary bias, Mr Howard aims to profit.
Over the past few decades, a lot of work has been done by psychologists on how human beings make decisions. They have discovered that we are remarkably clouded by emotion, far from the “rational” beings that financial theory assumes. Daniel Kahneman, the psychologist, won a Nobel prize in economics for his work on how human beings make decisions involving risk (irrationally).
In his book Thinking, Fast and Slow, he points out how differently psychologists and economists view human beings: “To a psychologist, it is self-evident that people are neither fully rational nor completely selfish and that their tastes are anything but stable. Our two disciplines seem to be studying different species.”
It is this difference that Mr Howard seeks to exploit. He is betting that other investors make consistent mistakes because their decisions are clouded by emotion. He aims to become the purely rational investor by “ruthlessly driving emotion” out of his investment choices. In order to do that, he deliberately doesn’t know the names of the shares he owns, doesn’t know what they do and he never looks at news feeds. An extraordinary admission from a “normal” fund manager.
How does he invest? He screens 7,000 United States-listed companies for a few criteria. He is looking for companies that pay high dividends and are heavily indebted. Then he values the companies based on their sales and expected future profits.
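For illustration only, here is what a screen of that general shape might look like in Python with pandas. The tickers, column names and cut-offs are hypothetical; they are not Athena’s actual universe or rules, just a sketch of the filter-then-rank approach described above.

```python
import pandas as pd

# A minimal sketch of the kind of screen described above, not Athena's actual
# rules: filter a universe of stocks for high dividend yield and heavy debt,
# then rank the survivors by a simple sales-based valuation. All names,
# numbers and thresholds are hypothetical.

universe = pd.DataFrame({
    "ticker":         ["AAA", "BBB", "CCC", "DDD"],
    "dividend_yield": [0.06, 0.01, 0.05, 0.07],   # fraction of price paid out
    "debt_to_equity": [1.8,  0.3,  2.2,  1.5],
    "price_to_sales": [0.7,  4.0,  0.5,  1.1],
})

candidates = universe[
    (universe["dividend_yield"] >= 0.04) &   # generous dividend
    (universe["debt_to_equity"] >= 1.0)      # heavily indebted
]

# Cheapest relative to sales first: the stocks the crowd's fear has marked down most.
shortlist = candidates.sort_values("price_to_sales")
print(shortlist[["ticker", "dividend_yield", "debt_to_equity", "price_to_sales"]])
```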
The accepted narrative against buying high-dividend stocks is that they have nothing else to do with their cash and so are mature or declining businesses. The usual narrative against buying highly indebted companies is that they are too risky. Mr Howard aims to profit from the fact that these fears hurt share prices more than they should. Companies that pay generous dividends and are indebted are overlooked by investors and so their share prices are relatively cheap. The majority of investors mis-price the risks.
Mr Kahneman explains why: “The brains of humans and other animals contain a mechanism that is designed to give priority to bad news. By shaving a few hundredths of a second from the time needed to detect a predator, this circuit improves the animal’s odds of living long enough to reproduce.” It is bedded deep into our evolutionary history to fear and it is the mispricing of fear in the stock market that Mr Howard aims to exploit by buying high-dividend-paying indebted companies.
Another problem with investors is an overreliance on narratives. Nassim Taleb states in his book The Black Swan: The Impact of the Highly Improbable: “We like stories, we like to summarise and we like to simplify.” He calls it the “narrative fallacy” that is “our predilection for compact stories over raw truth”. As humans, we invent stories to make sense of and explain the complex and ever changing world we live in. However, although the narratives are compelling, they are not good explainers of events.
Again, this is a hangover from our past, when the ability to communicate and tell stories secured the success of our species. Telling your fellow hunter-gatherers around a fire at night how to spear a sabre-toothed tiger and where to find the best berries ensured survival. The human desire to listen to a story is hardwired into our subconscious. It was the key skill that enabled our ancestors to flourish in a hostile world.
Yet, as Mr Howard states: “The stock market is almost incomprehensible and impossible to simplify. It is beyond our evolutionary abilities. Narratives may work in many places, but the stock market is such a complex system, narrative no longer works.”
The financial world has moved faster than our evolutionary ability to understand it, but most investors buy stocks on the basis of a story, a simplified narrative that a broker sells them. It is from this that Mr Howard aims to profit; he buys and sells stocks purely on the basis of a company’s financials. He avoids Mr Taleb’s “narrative fallacy”.
There is an old Wall Street adage that the stock market is driven by fear and greed. Now we know why. It’s because of our ancestors’ reaction to a sabre-toothed tiger.
'Lay theory' and Free Will
If one thing’s for sure, it’s that I decided what breakfast cereal to eat this morning. I opened the cupboard, I perused the options, and when I ultimately chose the Honey Bunches of Oats over the Kashi Good Friends, it came from a place of considered judgment, free from external constraints and predetermined laws.
Or did it? This question—about how much people are in charge of their own actions—is among the most central to the human condition. Do we have free will? Are we in control of our destiny? Do we choose the proverbial Honey Bunches of Oats? Or does the cereal—or some other mysterious force in the vast and unknowable universe—choose us?
The Greek playwright Sophocles seemed convinced that people have no real control over their fortunes. The character Oedipus, for example, tries desperately to buck the prophecy that he will kill his father and marry his mother, only to end up doing just that. Shakespeare’s characters, on the other hand, attempt to seize control of their futures. Cassius encourages Brutus to assassinate Caesar by appealing to his sense of self-responsibility: “The fault, dear Brutus, is not in our stars, but in ourselves, that we are the underlings.”
Fortunately, though, for social scientists (and for readers of this column), the task of the experimental psychologist isn’t to settle once and for all whether we have free will, but rather to see whether people think they do. This is the study of “lay theory”—people’s convictions about the workings of the world.
The study of lay theory yields interesting insights about the factors that hold sway over our seemingly most deeply held beliefs. What if I were to tell you, for instance, that belief in free will is negatively correlated with the desire to urinate? Those are the implications of a new study published in the journal Consciousness and Cognition by Michael Ent and Roy Baumeister. They predicted—and found—that the more people felt they needed to pee, the less they believed that humans are in control of their destinies.
Whence comes such a seemingly bizarre theory about the relationship between something as mundane as bodily function and as lofty as human freedom? It’s based on a brand of psychological research known as “embodied cognition,” the primary lesson of which is that moment-to-moment states of our bodies influence how we think about the world around us. An example of this is work by Amy Cuddy showing that “power postures” change how we come across in job interviews. Ent and Baumeister turned this approach toward the question of free will, hypothesizing that body-states could affect even abstract philosophies. When a feature of physical experience reminds subjects they are constrained by the laws of nature, Ent and Baumeister reasoned, their belief in free will should diminish.
The researchers started by contacting people with epilepsy and panic disorder. Epileptics suffer uncontrollable seizures that can come without warning; panic disorder results in recurrent attacks of intense fear and anxiety. Ent and Baumeister reasoned that, due to these daily reminders of physical limitations, these patients would have less belief in free will. Sure enough, when they compared these people’s attitudes to those of a control population, they saw significantly more skepticism among epileptics.
But this data, though intriguing, doesn’t prove unequivocally that our deepest beliefs are influenced by the states of our bodies. Instead, epileptics’ opinions could just be reflections of the fact that they actually do have less control over their actions.
To address this concern, the researchers next demonstrated that even healthy subjects have less belief in free will when they’re subtly reminded of their own physical limitations. Ent and Baumeister had people respond to a battery of questions not just about free will, but also about their current corporeal desires. The desires that correlated most strongly (and negatively) with belief in freedom were: a) the desire to urinate, b) the desire to sleep, and c) the desire to have sex. You read that correctly: when people feel stymied in their desire to sleep or to procreate, among other things, it affects their opinions on one of the most hotly debated issues of all time.
Perhaps this finding does not come as a surprise. After all, small influences in our daily lives—a compliment from a friend, a criticism from a boss—have the power to make our entire world seem sunny or grey. But given the ramifications that belief or disbelief in free will would have on people’s life outlook (a “yea” vote gives a person carte blanche to proactively tackle life’s challenges, a “nay” encourages a considerably more laissez-faire approach), it’s interesting to know how malleable these beliefs really are.
William James said that belief in free will comprises “the whole sting and excitement of voluntary life.” That sting and excitement may be as fleeting and ephemeral as the last bite from a bowl of Rice Krispies.
DKE Errors
Reddit is running an Ask Me Anything with Dr. David Dunning, a Cornell psychologist who studies, among other things, how people evaluate their own abilities, and it's very much worth checking out. Dunning is probably best known for the so-called Dunning-Kruger effect, a bias he laid out in a paper he co-authored in 1999 (PDF), which argues, in short, that people without a lot of competence in a given area tend to overrate how good they are at the thing in question. That's because they suffer from a "dual burden": Not only are they bad at, say, bird-watching, but they also know so little about it that they lack the ability to discern that they can't tell a sparrow from a skylark.
In the AMA, he explains many of the practical day-to-day implications of this bias, which leads people to make a lot of easily preventable mistakes. This response about how to avoid the Dunning-Kruger effect, in particular, contains a lot of useful information:
Get competent. Always be learning.
Absent that, get mentors or a “kitchen cabinet” of people whose opinions you’ve found useful in the past.
Or, know when the problem is likely to be most common, such as when you are doing something new. For myself, for instance, I know how to give a lecture or a public talk. I do it all the time. However, just last month I had to buy a car, for only the fourth time in my life. Knowing this is an uncommon thing for me to do, I spent a lot of time [researching] cars … and also how to buy them.
Our most recent research also suggests one should be wary of quick and impulsive decisions … that those who get caught in DKE errors less are those who deliberate over them, at least a little. People who jump to conclusions are the most prone to overconfident error.
And they also do so in a particular way. I have found it useful to explicitly consider how I might be wrong or missing in a decision. What’s wrong with this car deal that seems so attractive? What have I left out in this response about avoiding the DKE?
I am often asked if being confident is fundamentally good or bad. I say it has to be both, in its proper place. A general on the day of battle needs to be confident so that his or her troops execute the battle plan with efficiency. Doing so saves lives. However, before that day, I want a cautious general who over-plans — one who wants more troops, more ordnance, better contingency plans — so that he or she is best prepared for the day of battle. Who wants an overconfident general who underestimates the number of troops and ordnance he or she will need to prevail?
I think that analogy works for athletes, too. They don’t use confidence to become complacent, but to use confidence to put in the extra effort and strategizing that will help them excel.
DKE Errors Long Version
For more than 20 years, I have researched people’s understanding of their own expertise—formally known as the study of metacognition, the processes by which human beings evaluate and regulate their knowledge, reasoning, and learning—and the results have been consistently sobering, occasionally comical, and never dull.
The American author and aphorist William Feather once wrote that being educated means “being able to differentiate between what you know and what you don’t.” As it turns out, this simple ideal is extremely hard to achieve. Although what we know is often perceptible to us, even the broad outlines of what we don’t know are all too often completely invisible. To a great degree, we fail to recognize the frequency and scope of our ignorance.
In 1999, in the Journal of Personality and Social Psychology, my then graduate student Justin Kruger and I published a paper that documented how, in many areas of life, incompetent people do not recognize - scratch that, cannot recognize - just how incompetent they are, a phenomenon that has come to be known as the Dunning-Kruger effect. Logic itself almost demands this lack of self-insight: For poor performers to recognize their ineptitude would require them to possess the very expertise they lack. To know how skilled or unskilled you are at using the rules of grammar, for instance, you must have a good working knowledge of those rules, an impossibility among the incompetent. Poor performers - and we are all poor performers at some things - fail to see the flaws in their thinking or the answers they lack.
What’s curious is that, in many cases, incompetence does not leave people disoriented, perplexed, or cautious. Instead, the incompetent are often blessed with an inappropriate confidence, buoyed by something that feels to them like knowledge.
This isn’t just an armchair theory. A whole battery of studies conducted by myself and others have confirmed that people who don’t know much about a given set of cognitive, technical, or social skills tend to grossly overestimate their prowess and performance, whether it’s grammar, emotional intelligence, logical reasoning, firearm care and safety, debating, or financial knowledge. College students who hand in exams that will earn them Ds and Fs tend to think their efforts will be worthy of far higher grades; low-performing chess players, bridge players, and medical students, and elderly people applying for a renewed driver’s license, similarly overestimate their competence by a long shot.
Occasionally, one can even see this tendency at work in the broad movements of history. Among its many causes, the 2008 financial meltdown was precipitated by the collapse of an epic housing bubble stoked by the machinations of financiers and the ignorance of consumers. And recent research suggests that many Americans’ financial ignorance is of the inappropriately confident variety. In 2012, the National Financial Capability Study, conducted by the Financial Industry Regulatory Authority (with the U.S. Treasury), asked roughly 25,000 respondents to rate their own financial knowledge, and then went on to measure their actual financial literacy.
The roughly 800 respondents who said they had filed bankruptcy within the previous two years performed fairly dismally on the test—in the 37th percentile, on average. But they rated their overall financial knowledge more, not less, positively than other respondents did. The difference was slight, but it was beyond a statistical doubt: 23 percent of the recently bankrupted respondents gave themselves the highest possible self-rating; among the rest, only 13 percent did so. Why the self-confidence? Like Jimmy Kimmel’s victims, bankrupted respondents were particularly allergic to saying “I don’t know.” Pointedly, when getting a question wrong, they were 67 percent more likely to endorse a falsehood than their peers were. Thus, with a head full of “knowledge,” they considered their financial literacy to be just fine.
Because it’s so easy to judge the idiocy of others, it may be sorely tempting to think this doesn’t apply to you. But the problem of unrecognized ignorance is one that visits us all. And over the years, I’ve become convinced of one key, overarching fact about the ignorant mind. One should not think of it as uninformed. Rather, one should think of it as misinformed.
An ignorant mind is precisely not a spotless, empty vessel, but one that’s filled with the clutter of irrelevant or misleading life experiences, theories, facts, intuitions, strategies, algorithms, heuristics, metaphors, and hunches that regrettably have the look and feel of useful and accurate knowledge. This clutter is an unfortunate by-product of one of our greatest strengths as a species. We are unbridled pattern recognizers and profligate theorizers. Often, our theories are good enough to get us through the day, or at least to an age when we can procreate. But our genius for creative storytelling, combined with our inability to detect our own ignorance, can sometimes lead to situations that are embarrassing, unfortunate, or downright dangerous—especially in a technologically advanced, complex democratic society that occasionally invests mistaken popular beliefs with immense destructive power (See: crisis, financial; war, Iraq). As the humorist Josh Billings once put it, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” (Ironically, one thing many people “know” about this quote is that it was first uttered by Mark Twain or Will Rogers—which just ain’t so.)
Because of the way we are built, and because of the way we learn from our environment, we are all engines of misbelief. And the better we understand how our wonderful yet kludge-ridden, Rube Goldberg engine works, the better we—as individuals and as a society—can harness it to navigate toward a more objective understanding of the truth.
BORN WRONG
Some of our deepest intuitions about the world go all the way back to our cradles. Before their second birthday, babies know that two solid objects cannot co-exist in the same space. They know that objects continue to exist when out of sight, and fall if left unsupported. They know that people can get up and move around as autonomous beings, but that the computer sitting on the desk cannot. But not all of our earliest intuitions are so sound.
Very young children also carry misbeliefs that they will harbor, to some degree, for the rest of their lives. Their thinking, for example, is marked by a strong tendency to falsely ascribe intentions, functions, and purposes to organisms. In a child’s mind, the most important biological aspect of a living thing is the role it plays in the realm of all life. Asked why tigers exist, children will emphasize that they were “made for being in a zoo.” Asked why trees produce oxygen, children say they do so to allow animals to breathe.
Any conventional biology or natural science education will attempt to curb this propensity for purpose-driven reasoning. But it never really leaves us. Adults with little formal education show a similar bias. And, when rushed, even professional scientists start making purpose-driven mistakes. The Boston University psychologist Deborah Kelemen and some colleagues demonstrated this in a study that involved asking 80 scientists—people with university jobs in geoscience, chemistry, and physics—to evaluate 100 different statements about “why things happen” in the natural world as true or false. Sprinkled among the explanations were false purpose-driven ones, such as “Moss forms around rocks in order to stop soil erosion” and “The Earth has an ozone layer in order to protect it from UV light.” Study participants were allowed either to work through the task at their own speed, or given only 3.2 seconds to respond to each item. Rushing the scientists caused them to double their endorsements of false purpose-driven explanations, from 15 to 29 percent.
This purpose-driven misconception wreaks particular havoc on attempts to teach one of the most important concepts in modern science: evolutionary theory. Even laypeople who endorse the theory often believe a false version of it. They ascribe a level of agency and organization to evolution that is just not there. If you ask many laypeople their understanding of why, say, cheetahs can run so fast, they will explain it’s because the cats surmised, almost as a group, that they could catch more prey if they could just run faster, and so they acquired the attribute and passed it along to their cubs. Evolution, in this view, is essentially a game of species-level strategy.
This idea of evolution misses the essential role played by individual differences and competition between members of a species in response to environmental pressures: Individual cheetahs who can run faster catch more prey, live longer, and reproduce more successfully; slower cheetahs lose out, and die out—leaving the species to drift toward becoming faster overall. Evolution is the result of random differences and natural selection, not agency or choice.
But belief in the “agency” model of evolution is hard to beat back. While educating people about evolution can indeed lead them from being uninformed to being well informed, in some stubborn instances it also moves them into the confidently misinformed category. In 2014, Tony Yates and Edmund Marek published a study that tracked the effect of high school biology classes on 536 Oklahoma high school students’ understanding of evolutionary theory. The students were rigorously quizzed on their knowledge of evolution before taking introductory biology, and then again just afterward. Not surprisingly, the students’ confidence in their knowledge of evolutionary theory shot up after instruction, and they endorsed a greater number of accurate statements. So far, so good.
The trouble is that the number of misconceptions the group endorsed also shot up. For example, instruction caused the percentage of students strongly agreeing with the true statement “Evolution cannot cause an organism’s traits to change during its lifetime” to rise from 17 to 20 percent—but it also caused those strongly disagreeing to rise from 16 to 19 percent. In response to the likewise true statement “Variation among individuals is important for evolution to occur,” exposure to instruction produced an increase in strong agreement from 11 to 22 percent, but strong disagreement also rose from nine to 12 percent. Tellingly, the only response that uniformly went down after instruction was “I don’t know.”
And it’s not just evolution that bedevils students. Again and again, research has found that conventional educational practices largely fail to eradicate a number of our cradle-born misbeliefs. Education fails to correct people who believe that vision is made possible only because the eye emits some energy or substance into the environment. It fails to correct common intuitions about the trajectory of falling objects. And it fails to disabuse students of the idea that light and heat act under the same laws as material substances. What education often does appear to do, however, is imbue us with confidence in the errors we retain.
MISAPPLIED RULES
Imagine a curved tube lying horizontally on a table, and picture the possible paths a ball might take after being shot through it.
In a 2013 study of intuitive physics, Elanor Williams, Justin Kruger, and I presented people with several variations on this curved-tube setup and asked them to identify, from three candidate trajectories labeled A, B, and C, the path a ball would take after it had traveled through each tube. Some people got perfect scores, and seemed to know it, being quite confident in their answers. Some people did a bit less well—and, again, seemed to know it, as their confidence was much more muted.
But something curious started happening as we began to look at the people who did extremely badly on our little quiz. By now, you may be able to predict it: These people expressed more, not less, confidence in their performance. In fact, people who got none of the items right often expressed confidence that matched that of the top performers. Indeed, this study produced the most dramatic example of the Dunning-Kruger effect we had ever seen: When looking only at the confidence of people getting 100 percent versus zero percent right, it was often impossible to tell who was in which group.
Why? Because both groups “knew something.” They knew there was a rigorous, consistent rule that a person should follow to predict the balls’ trajectories. One group knew the right Newtonian principle: that the ball would continue in the direction it was going the instant it left the tube - Path B. Freed of the tube’s constraint, it would just go straight.
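In case it helps to see that principle spelled out, here is the standard Newtonian statement of it (my notation, not anything taken from the study itself): once the ball clears the tube, no net force pushes it sideways, so its velocity stops changing.

\[
\mathbf{F}_{\text{net}} = m\,\mathbf{a} = \mathbf{0}
\;\;\Longrightarrow\;\;
\frac{d\mathbf{v}}{dt} = \mathbf{0}
\;\;\Longrightarrow\;\;
\mathbf{r}(t) = \mathbf{r}_0 + \mathbf{v}_0\,t
\]

Here \(\mathbf{r}_0\) and \(\mathbf{v}_0\) stand for the ball’s position and velocity at the instant it exits; with nothing to bend its path, it moves in a straight line along the exit direction, which is Path B.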
People who got every item wrong typically answered that the ball would follow Path A. Essentially, their rule was that the tube would impart some curving impetus to the trajectory of the ball, which it would continue to follow upon its exit. This answer is demonstrably incorrect - but a plurality of people endorse it.
These people are in good company. In 1500 A.D., Path A would have been the accepted answer among sophisticates with an interest in physics. Both Leonardo da Vinci and French philosopher Jean Buridan endorsed it. And it does make some sense. A theory of curved impetus would explain common, everyday puzzles, such as why wheels continue to rotate even after someone stops pushing the cart, or why the planets continue their tight and regular orbits around the sun. With those problems “explained,” it’s an easy step to transfer this explanation to other problems like those involving tubes.
What this study illustrates is another general way - in addition to our cradle-born errors - in which humans frequently generate misbeliefs: We import knowledge from appropriate settings into ones where it is inappropriate.
Doctors, too, are quite familiar with the problem of inappropriately transferred knowledge in their dealings with patients. Often, it’s not the medical condition itself that a physician needs to defeat as much as patient misconceptions that protect it. Elderly patients, for example, frequently refuse to follow a doctor’s advice to exercise to alleviate pain—one of the most effective strategies available—because the physical soreness and discomfort they feel when they exercise is something they associate with injury and deterioration. Research by the behavioral economist Sendhil Mullainathan has found that mothers in India often withhold water from infants with diarrhea because they mistakenly conceive of their children as leaky buckets—rather than as increasingly dehydrated creatures in desperate need of water.
MOTIVATED REASONING
Some of our most stubborn misbeliefs arise not from primitive childlike intuitions or careless category errors, but from the very values and philosophies that define who we are as individuals. Each of us possesses certain foundational beliefs—narratives about the self, ideas about the social order—that essentially cannot be violated: To contradict them would call into question our very self-worth. As such, these views demand fealty from other opinions. And any information that we glean from the world is amended, distorted, diminished, or forgotten in order to make sure that these sacrosanct beliefs remain whole and unharmed.
One very commonly held sacrosanct belief, for example, goes something like this: I am a capable, good, and caring person. Any information that contradicts this premise is liable to meet serious mental resistance. Political and ideological beliefs, too, often cross over into the realm of the sacrosanct. The anthropological theory of cultural cognition suggests that people everywhere tend to sort ideologically into cultural worldviews diverging along a couple of axes: They are either individualist (favoring autonomy, freedom, and self-reliance) or communitarian (giving more weight to benefits and costs borne by the entire community); and they are either hierarchist (favoring the distribution of social duties and resources along a fixed ranking of status) or egalitarian (dismissing the very idea of ranking people according to status). According to the theory of cultural cognition, humans process information in a way that not only reflects these organizing principles, but also reinforces them. These ideological anchor points can have a profound and wide-ranging impact on what people believe, and even on what they “know” to be true.
It is perhaps not so surprising to hear that facts, logic, and knowledge can be bent to accord with a person’s subjective worldview; after all, we accuse our political opponents of this kind of “motivated reasoning” all the time. But the extent of this bending can be remarkable. In ongoing work with the political scientist Peter Enns, my lab has found that a person’s politics can warp other sets of logical or factual beliefs so much that they come into direct contradiction with one another. In a survey of roughly 500 Americans conducted in late 2010, we found that over a quarter of liberals (but only six percent of conservatives) endorsed both the statement “President Obama’s policies have already created a strong revival in the economy” and “Statutes and regulations enacted by the previous Republican presidential administration have made a strong economic recovery impossible.” Both statements are pleasing to the liberal eye and honor a liberal ideology, but how can Obama have already created a strong recovery that Republican policies have rendered impossible? Among conservatives, 27 percent (relative to just 10 percent of liberals) agreed both that “President Obama’s rhetorical skills are elegant but are insufficient to influence major international issues” and that “President Obama has not done enough to use his rhetorical skills to effect regime change in Iraq.” But if Obama’s skills are insufficient, why should he be criticized for not using them to influence the Iraqi government?
Sacrosanct ideological commitments can also drive us to develop quick, intense opinions on topics we know virtually nothing about—topics that, on their face, have nothing to do with ideology. Consider the emerging field of nanotechnology. Nanotech, loosely defined, involves the fabrication of products at the atomic or molecular level that have applications in medicine, energy production, biomaterials, and electronics. Like pretty much any new technology, nanotech carries the promise of great benefit (antibacterial food containers!) and the risk of serious downsides (nano-surveillance technology!).
In 2006, Daniel Kahan, a professor at Yale Law School, performed a study together with some colleagues on public perceptions of nanotechnology. They found, as other surveys had before, that most people knew little to nothing about the field. They also found that ignorance didn’t stop people from opining about whether nanotechnology’s risks outweighed its benefits.
When Kahan surveyed uninformed respondents, their opinions were all over the map. But when he gave another group of respondents a very brief, meticulously balanced description of the promises and perils of nanotech, the remarkable gravitational pull of deeply held sacrosanct beliefs became apparent. With just two paragraphs of scant (though accurate) information to go on, people’s views of nanotechnology split markedly—and aligned with their overall worldviews. Hierarchists/individualists found themselves viewing nanotechnology more favorably. Egalitarians/collectivists took the opposite stance, insisting that nanotechnology has more potential for harm than good.
Why would this be so? Because of underlying beliefs. Hierarchists, who are favorably disposed to people in authority, may respect industry and scientific leaders who trumpet the unproven promise of nanotechnology. Egalitarians, on the other hand, may fear that the new technology could confer an advantage on only a few people. And collectivists might worry that nanotechnology firms will pay insufficient heed to their industry’s effects on the environment and public health. Kahan’s conclusion: If two paragraphs of text are enough to send people on a glide path to polarization, simply giving members of the public more information probably won’t help them arrive at a shared, neutral understanding of the facts; it will just reinforce their biased views.
One might think that opinions about an esoteric technology would be hard to come by. Surely, to know whether nanotech is a boon to humankind or a step toward doomsday would require some sort of knowledge about materials science, engineering, industry structure, regulatory issues, organic chemistry, surface science, semiconductor physics, microfabrication, and molecular biology. Every day, however, people rely on the cognitive clutter in their minds - whether it’s an ideological reflex, a misapplied theory, or a cradle-born intuition - to answer technical, political, and social questions they have little or no direct expertise in. We are never all that far from Tonya and the Hardings.
SEEING THROUGH THE CLUTTER
Unfortunately for all of us, policies and decisions that are founded on ignorance have a strong tendency, sooner or later, to blow up in one’s face. So how can policymakers, teachers, and the rest of us cut through all the counterfeit knowledge - our own and our neighbors’ - that stands in the way of our ability to make truly informed judgments?
The way we traditionally conceive of ignorance - as an absence of knowledge - leads us to think of education as its natural antidote. But education, even when done skillfully, can produce illusory confidence. Here’s a particularly frightful example: Driver’s education courses, particularly those aimed at handling emergency maneuvers, tend to increase, rather than decrease, accident rates. They do so because training people to handle, say, snow and ice leaves them with the lasting impression that they’re permanent experts on the subject. In fact, their skills usually erode rapidly after they leave the course. And so, months or even decades later, they have confidence but little leftover competence when their wheels begin to spin.
In cases like this, the most enlightened approach, as proposed by Swedish researcher Nils Petter Gregersen, may be to avoid teaching such skills at all. Instead of training drivers how to negotiate icy conditions, Gregersen suggests, perhaps classes should just convey their inherent danger - they should scare inexperienced students away from driving in winter conditions in the first place, and leave it at that.
But, of course, guarding people from their own ignorance by sheltering them from the risks of life is seldom an option. Actually getting people to part with their misbeliefs is a far trickier, far more important task. Luckily, a science is emerging, led by such scholars as Stephan Lewandowsky at the University of Bristol and Ullrich Ecker of the University of Western Australia, that could help.
In the classroom, some of the best techniques for disarming misconceptions are essentially variations on the Socratic method. To eliminate the most common misbeliefs, the instructor can open a lesson with them—and then show students the explanatory gaps those misbeliefs leave yawning or the implausible conclusions they lead to. For example, an instructor might start a discussion of evolution by laying out the purpose-driven evolutionary fallacy, prompting the class to question it. (How do species just magically know what advantages they should develop to confer on their offspring? How do they manage to decide to work as a group?) Such an approach can make the correct theory more memorable when it’s unveiled, and can prompt general improvements in analytical skills.
Then, of course, there is the problem of rampant misinformation in places that, unlike classrooms, are hard to control - like the Internet and news media. In these Wild West settings, it’s best not to repeat common misbeliefs at all. Telling people that Barack Obama is not a Muslim fails to change many people’s minds, because they frequently remember everything that was said - except for the crucial qualifier “not.” Rather, to successfully eradicate a misbelief requires not only removing the misbelief, but filling the void left behind (“Obama was baptized in 1988 as a member of the United Church of Christ”). If repeating the misbelief is absolutely necessary, researchers have found it helps to provide clear and repeated warnings that the misbelief is false. I repeat, false.
The most difficult misconceptions to dispel, of course, are those that reflect sacrosanct beliefs. And the truth is that often these notions can’t be changed. Calling a sacrosanct belief into question calls the entire self into question, and people will actively defend views they hold dear. This kind of threat to a core belief, however, can sometimes be alleviated by giving people the chance to shore up their identity elsewhere. Researchers have found that asking people to describe aspects of themselves that make them proud, or report on values they hold dear, can make any incoming threat seem, well, less threatening.
For example, in a study conducted by Geoffrey Cohen, David Sherman, and other colleagues, self-described American patriots were more receptive to the claims of a report critical of U.S. foreign policy if, beforehand, they wrote an essay about an important aspect of themselves, such as their creativity, sense of humor, or family, and explained why this aspect was particularly meaningful to them. In a second study, in which pro-choice college students negotiated over what federal abortion policy should look like, participants made more concessions to restrictions on abortion after writing similar self-affirmative essays.
Sometimes, too, researchers have found that sacrosanct beliefs themselves can be harnessed to persuade a subject to reconsider a set of facts with less prejudice. For example, conservatives tend not to endorse policies that preserve the environment as much as liberals do. But conservatives do care about issues that involve “purity” in thought, deed, and reality. Casting environmental protection as a chance to preserve the purity of the Earth causes conservatives to favor those policies much more, as research by Matthew Feinberg and Robb Willer of Stanford University suggests. In a similar vein, liberals can be persuaded to raise military spending if such a policy is linked to progressive values like fairness and equity beforehand—by, for instance, noting that the military offers recruits a way out of poverty, or that military promotion standards apply equally to all.
But here is the real challenge: How can we learn to recognize our own ignorance and misbeliefs? To begin with, imagine that you are part of a small group that needs to make a decision about some matter of importance. Behavioral scientists often recommend that small groups appoint someone to serve as a devil’s advocate—a person whose job is to question and criticize the group’s logic. While this approach can prolong group discussions, irritate the group, and be uncomfortable, the decisions that groups ultimately reach are usually more accurate and more solidly grounded than they otherwise would be.
For individuals, the trick is to be your own devil’s advocate: to think through how your favored conclusions might be misguided; to ask yourself how you might be wrong, or how things might turn out differently from what you expect. It helps to try practicing what the psychologist Charles Lord calls “considering the opposite.” To do this, I often imagine myself in a future in which I have turned out to be wrong in a decision, and then consider what the likeliest path was that led to my failure. And lastly: Seek advice. Other people may have their own misbeliefs, but a discussion can often be sufficient to rid a serious person of his or her most egregious misconceptions.
Hearing Voices
In a Reddit AMA held yesterday, neuroscientists and psychologists from Durham University's Hearing the Voice project, an initiative to study auditory hallucinations and how they affect people in different cultural contexts, answered questions about their work. Among the things they discussed was the misconception that hearing voices is always a negative experience for people. David Smailes, a postdoctoral research associate in psychology, explained that for many, it's quite the opposite:
Hearing voices can be a really distressing experience, but for some people, they aren't. E.g., a Dutch study reported that in a sample of voice-hearers who were not receiving any psychiatric help, 71% reported only positive or neutral voices, 25% heard positive and negative voices, and only 4% heard only negative voices. So, at least in some people, hearing voices can be a relatively positive experience.
For one thing, the research team further explained, the writers and storytellers they've interviewed will sometimes talk about the way they can "hear" the voices of the characters they're creating. And many religious people have experienced hearing the voice of God, offering them comfort in times of distress.
At one point, a Reddit user chimed in with a story about his own positive experience with auditory hallucinations:
I've heard voices for most of my life and I used to think it was just the way thoughts work. I've described it like a council where different people weigh in on an issue with various perspectives. It really helped me mentally poke at a lot of confusing subjects when I was younger.
We humans, including all of the extra voices some of us are carrying around, are pretty fascinating.
The Fallibility of Memory
Take a moment to think of a cherished childhood memory. Try to recall it in detail. Think of where you were, who you were with, the sights, the smells, the tastes. Recall the sounds, like the wind in the trees, and how you felt. Were you happy? Anxious? Laughing? Crying?
We would all like to think that our memory is like a camera that records a scene, tucks it away in a corner of our brain, and retrieves it for playback when we want to relive that birthday ice cream or feel a long lost summer breeze on our cheeks. In a large sense we are what we remember, so memories are an integral part of who we are.
Unfortunately memory isn't even remotely like a record/playback device. As neurologist and renowned skeptic Dr. Steven Novella puts it: When someone looks at me and earnestly says, "I know what I saw," I am fond of replying, "No you don't." You have a distorted and constructed memory of a distorted and constructed perception, both of which are subservient to whatever narrative your brain is operating under. As I like to say, we are a story our brain tells itself, and our brains are motivated, skilled, pathological liars.
Let's take a look at memory, get a rough idea of how it works, and learn when and why we need to be cautious about trusting it. Functionally, the three parts of memory are encoding, storage, and retrieval.
As Dr. Novella points out, the problems begin with encoding, even before a memory has been stored. Our brain is constantly filtering information and constructing its own reality. We are surrounded by detail. Take a moment right now to be aware of every distant sound around you, of all the leaves on the trees, fibers in the carpet, your breathing, and the sensations on your skin — all of it. Imagine dealing with all of that all of the time! Our brains evolved to construct a narrative of what's going on, lending attention to what matters most. That thing over there that might be a predator is a more pressing matter than the sensation of every individual blade of grass you're standing on. But just as things get lost, distorted, or added when your favorite book becomes a movie, the running story your brain puts together isn't a faithful rendition. In fact, sometimes the circuitry in your brain that distinguishes what's currently happening from a memory gets confused. This is the most likely explanation for déjà vu. It's a glitch in your own brain's matrix.
And the first thing your brain does with most information is forget it. In 1968 Atkinson and Shiffrin proposed their multi-store model of human memory. While it's come under some criticism, it has stimulated a lot of memory research and makes a handy top-level guide that suits our purposes here. This is where we got the terms short-term memory and long-term memory. The lesser known mode is called sensory memory. This is where information from our senses about the environment is briefly stored. Auditory information may last three or four seconds, and visuals less than a second.
Once we've attended to that brief burst of information, some moves on to our short-term memory. Sometimes called "active memory", this is the information we're currently dealing with and thinking about. Most of the information here is kept for maybe twenty to thirty seconds, and then forgotten. Some information will then move on to long-term memory. That's information that's not in your consciousness, but is available for recall. You could almost think of it as the data on your computer's hard drive.
Except that your memory is less like data on a hard drive than like a cryptic shorthand on a chalk board. Memories can be smudged, overwritten, and just plain fade away. Imagine using a computer that changed the contents of its files every time you opened them. You can already see how what the brain has stored is compressed, distorted data. Even more errors come into play when you try to retrieve it. Put very simply, we have four basic kinds of retrieval:
Recall — This is accessing information without being cued. For example, answering a fill-in-the-blank test.
Recollection — This is when your brain reconstructs a memory using logical structures, partial memories, narratives, and other clues.
Recognition — When you scan the options in a multiple-choice exam and trigger a memory of what the right answer would be, that's an example of recognition.
Relearning — This is one way you can strengthen a memory, namely by learning the information again and storing it.
Many memory errors happen upon retrieval. We've all experienced that "tip of the tongue" retrieval error when we know we know something, but just can't recall it. I know it happens to me with names all the time.
Now, keep in mind that a comprehensive and in-depth review of the science about how memory works is outside the scope of a single Skeptoid episode. What I've presented so far is clearly a simplification. Science is still working out how memory, and other brain functions, actually work. What is well understood, though, are the kinds of errors our memory system makes, and the consequences of those errors.
Remember how I had you recall that cherished childhood memory? Personally, and on behalf of Skeptoid, I hereby formally apologize for forever altering it. If it now has wind involved somehow and didn't before, that's probably my fault.
Psychologist Elizabeth Loftus wrote: Misinformation has the potential for invading our memories when we talk to other people, when we are suggestively interrogated or when we read or view media coverage about some event that we may have experienced ourselves. After more than two decades of exploring the power of misinformation, researchers have learned a great deal about the conditions that make people susceptible to memory modification. Memories are more easily modified, for instance, when the passage of time allows the original memory to fade. Every time we access a long term memory, it gets rewritten, with new errors, back into memory. The mere act of remembering something from long ago actually changes that memory. This is a common source of one of the most interesting errors, the false memory.
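To build an intuition for why repeated remembering matters, here is a deliberately crude simulation of that re-storage idea. It is my own toy analogy, not anything from Loftus's research: one remembered detail is treated as a single number, and a small random error is added every time it is "retrieved" and written back, so the remembered value gradually drifts away from the original.

import random

# Toy analogy for reconsolidation (illustrative only): each retrieval
# re-stores the memory with a small random error, so the remembered
# value drifts further from the original over time.

def retrieve_and_restore(memory, error_scale=0.05):
    """Return the re-stored memory after one retrieval, perturbed by a small random error."""
    return memory + random.gauss(0.0, error_scale * abs(memory))

random.seed(42)
original = 100.0   # the "true" value of some remembered detail
memory = original

for retrieval in range(1, 101):
    memory = retrieve_and_restore(memory)
    if retrieval in (1, 10, 50, 100):
        drift = 100.0 * abs(memory - original) / original
        print(f"after {retrieval:3d} retrievals: remembered {memory:6.1f} "
              f"({drift:.1f}% off the original)")

Run it with different random seeds and the drift takes a different path each time, but the remembered value wanders rather than snapping back to the original, which is the point of the analogy.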
Skeptoid listener Patrick Moore is an instructor in Dallas, Texas. It was actually his email which prompted this episode. He shared a favorite childhood memory of being taken by his dad to an Arkansas Razorbacks football game.
Before the game a local radio station played several minutes of the sounds of actual hogs snorting. Every Arkansas fan in the stadium tuned their transistor radios to that station at top volume. It was deafening: 60,000 radios tuned to hogs snorting and grunting.
I must have told that story at least a hundred times since then. Then one day at work I told it to a Razorback fan I met. He was a little puzzled. Turns out he was a season ticket holder during the years before and after I would have been there, and never missed a game. The problem was that he'd never heard all the radios tuned to the station playing hog sounds.
Since then I haven't found anybody who remembers the hog sounds. Not even my father. Now, I still have that memory. I can hear it, see it, smell it, feel it. It's a memory I can, and do, still enjoy even though I now recognize it as "self-constructed."
Patrick's experience is hardly unique. Everybody likely has false memories about something, and the more we remember them the falser they get. It's important to note that having a false memory doesn't make you dishonest. People who "remember" paranormal sightings, visits from extraterrestrials, and such are likely being perfectly sincere. That really is what they remember.
Stressful situations are very likely to produce false and distorted memories. I have a very clear one of watching the 9/11 attacks on television, and seeing the top of one tower sit at a precarious angle for several seconds before the building collapsed. Watching the videos since then, I know it never happened that way, but I would have sworn in court that's what I witnessed. This is an example of what's called flashbulb memory, which has also been tested and found to be "dramatically inconsistent".
More troubling are implanted memories. Elizabeth Loftus famously got people to believe they saw Bugs Bunny at Disneyland, got lost in a mall as children, or became sick on hard boiled eggs. Remember how our brain continually constructs the version of reality we experience? Recalled memories get the same kind of "enhancements". Loftus planted false memories of wounded animals outside a terrorist attack, and witnesses later embellished the memories with all kinds of details. Loftus calls these "rich false memories because they are so detailed and so big."
When pseudoscience meets planted memories, the results can be tragic. So-called "recovered memories" have been called "the most dangerous idea in mental health". Fathers have been separated from their children and placed on public registers as child abusers, all based on false, planted memories. In the 1980s through the early 1990s, there was a panic about day care child abuse, even Satanic ritual abuse, which led to many innocent lives being ruined. The real abuse turned out to be that of the children by the authorities and putative therapists who implanted horrifying, false memories.
Our malleable memories, combined with confirmation bias, are a key factor in the Dunning-Kruger effect, the inability to perceive one's own incompetence in a given area. Eyewitness testimony, so convincing to juries, can be fairly accurate but very often is not.
Nobody is immune from memory errors. Recently, no less than Neil deGrasse Tyson conflated two different statements by President George W. Bush and ended up criticizing him for something he never said. At least most of our memory conflations don't happen that publicly.
Well. This is all a bit disheartening, isn't it? I mean, if we can't trust our own memories, what can we trust? First, we can take a clear-eyed look at our own shortcomings and come up with ways to compensate. If something important has just happened, write it down while the memory is fresh. Take a photograph or video. Stay humble. When you catch yourself saying, "I know what I saw!" step back and realize that maybe you don't. See if you can corroborate it with other witnesses or records. And, as always, be skeptical — especially about things you are just sure you remember.
Expectations
Obviously, when those of us in New York City woke up this morning, we learned our blizzard had turned out to be a bust (though Long Island did get about two feet of snow). It's an odd thing, to be disappointed that we didn't get some meteorological catastrophe. But those who were lucky enough to easily be able to stay home today had been starting to view the impending storm as a kind of bonus winter holiday, and the reality turned out to be a pretty big letdown.
Expectations have a funny effect on our happiness, as many psychologists have discovered using a variety of creative experiments over the years. For example, a clever study by behavioral scientist Dan Ariely (which we wrote about last month) surveyed partygoers before and after the highly anticipated New Year's Eve parties of 1999, and found those with the highest expectations about their millennium celebrations were more likely to report subsequent disappointment with the night's actual events. Ariely and his team suggested that the people with the highest expectations were the most likely to closely monitor whether those expectations were indeed being met, which — when they weren't — would ultimately lead to greater disappointment.
The same thing happens when a friend hypes a book before you have the chance to read it; one study of reader reviews of novels that had been short-listed for prestigious awards from 2007 to 2011 found that readers' opinions of the books fell after the honor was announced, even as sales of those books increased. Unrealistically high expectations may even set us up for marital disappointment, according to research led by Ohio State University psychologist James McNulty, who told the American Psychological Association, "A therapist should point out [a couple's] problems and help couples deal with them, but also consider helping them to limit their unrealistic expectations."
The bait-and-switch of not getting something you were hoping for is rough even at the neurological level, as research on dopamine, the neurotransmitter associated with pleasure, suggests. As Talia Lerner explained on Stanford University's Neuroblog, "dopamine neurons, which fire at some background rate, fire more in response to unpredicted, but not predicted, rewards. Additionally, if you're expecting a reward and don't get it, the dopamine neurons fire less." Unexpected pleasure is far better than expected pleasure that never materializes.
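The pattern Lerner describes is often summarized as a "reward prediction error": the signal tracks the difference between the reward you actually received and the reward you expected. The short sketch below is simply that textbook formula applied to this article's blizzard, with numbers and names I made up purely for illustration.

# Textbook reward-prediction error: delta = reward received - reward expected.
# Positive delta  -> dopamine neurons fire above their background rate.
# Delta near zero -> firing stays near baseline (the reward was predicted).
# Negative delta  -> firing dips below baseline (an expected reward never came).

def prediction_error(expected, received):
    return received - expected

cases = {
    "surprise snow day nobody forecast": (0.0, 1.0),
    "predicted blizzard that arrives on cue": (1.0, 1.0),
    "hyped blizzard that turns out to be a bust": (1.0, 0.0),
}

for label, (expected, received) in cases.items():
    print(f"{label}: delta = {prediction_error(expected, received):+.1f}")

A surprise snow day produces a positive error (better than expected), a storm that arrives as promised produces roughly none, and a hyped storm that fizzles produces a negative one: the neural flavor of a letdown.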
One defense mechanism to protect against the crushing disappointment that often follows overly high expectations is, naturally, to set your expectations lower — some might simply call this being realistic. The practical-minded Dutch authors of a 2003 paper on this subject (titled "Blessed are those who expect nothing") phrased it this way:
People perceive unexpected positive outcomes as more attractive than expected positive outcomes and unexpected negative outcomes as more repulsive than expected negative outcomes. Thus, irrespective of whether an outcome is favorable or unfavorable, the lower one's initial expectations, the greater one's satisfaction or the less intense one's disappointment with the actual outcome. Hence, when people are faced with uncertainty regarding the occurrence of a desirable outcome, they may attempt to protect themselves from the experience of disappointment by underestimating their chances of obtaining the outcome in question.
This strategy is sometimes called defensive pessimism, which is essentially a term to neatly describe the old saying, "Expect the best, prepare for the worst" — although that still doesn't justify preparing for the snow with all that Whole Foods kale.
Mental Stimulation of Complex TV Dramas
Couch potatoes who enjoy TV crime dramas may be taking a vital form of exercise: following a well-produced mystery or thriller is good for your brain, a neuroscientist has claimed.
The twists and the suspense as Gillian Anderson closes in on a killer in The Fall, or as events take a dark turn in Happy Valley, or as the Belgian detective solves a complex series of clues in Poirot, make gripping television, but they are also beneficial to several areas of the brain.
Amanda Ellison, a scientist in the cognitive neuroscience research unit at Durham University, said that well-written crime dramas, with complex plots, red herrings and a cast of richly textured characters, provided the sort of stimulation the brain craved.
“These hooks keep the TV critics happy, but they’re also scientifically important,” Dr Ellison said. “The best TV crime dramas build suspense over a number of episodes. They challenge viewers to pay attention to complicated stories, including red herrings, and to remember them from week to week.”
Almost all of the visual regions in the brain were activated, starting with the primary visual cortex in the occipital lobe, Dr Ellison said. These areas of the brain, for example, receive and process the different locations and their importance in a series such as The Bridge, the Scandinavian serial.
The inferior temporal lobe recognises objects, while the parietal lobe focuses on spatial attention, separating the important bits of the image from the background.
The brain is also engaged in processing intricate dialogue and trying to remember characters and clues which are developed over several episodes. A compelling, suspenseful show could stimulate the release of serotonin and dopamine, improving brain chemistry, Dr Ellison said, adding that, in contrast, brain activity could be depressed when watching a show in which the plot or characters were uninteresting. It was also important that viewers gave a compelling drama their full attention: breaking off to check texts, Twitter or social networks reduced the healthy brain activity, Dr Ellison said.
That is not to say that watching hours of crime drama will be healthy in other ways, though: research has shown that sitting down for too long leads to higher rates of obesity, diabetes and other conditions. The average adult in Britain watches television for more than four hours a day.
Challenger and Why We Remember Wrong
R. T. first heard about the Challenger explosion as she and her roommate sat watching television in their Emory University dorm room. A news flash came across the screen, shocking them both. R. T., visibly upset, raced upstairs to tell another friend the news. Then she called her parents. Two and a half years after the event, she remembered it as if it were yesterday: the TV, the terrible news, the call home. She could say with absolute certainty that that’s precisely how it happened. Except, it turns out, none of what she remembered was accurate.
R. T. was a student in a class taught by Ulric Neisser, a cognitive psychologist who had begun studying memory in the seventies. Early in his career, Neisser became fascinated by the concept of flashbulb memories—the times when a shocking, emotional event seems to leave a particularly vivid imprint on the mind. William James had described such impressions, in 1890, as “so exciting emotionally as almost to leave a scar upon the cerebral tissues.”
The day following the explosion of the Challenger, in January, 1986, Neisser, then a professor of cognitive psychology at Emory, and his assistant, Nicole Harsch, handed out a questionnaire about the event to the hundred and six students in their ten o’clock psychology 101 class, “Personality Development.” Where were the students when they heard the news? Whom were they with? What were they doing? The professor and his assistant carefully filed the responses away.
In the fall of 1988, two and a half years later, the questionnaire was given a second time to the same students. It was then that R. T. recalled, with absolute confidence, her dorm-room experience. But when Neisser and Harsch compared the two sets of answers, they found barely any similarities. According to R. T.’s first recounting, she’d been in her religion class when she heard some students begin to talk about an explosion. She didn’t know any details of what had happened, “except that it had exploded and the schoolteacher’s students had all been watching, which I thought was sad.” After class, she went to her room, where she watched the news on TV, by herself, and learned more about the tragedy.
R. T. was far from alone in her misplaced confidence. When the psychologists rated the accuracy of the students’ recollections for things like where they were and what they were doing, the average student scored less than three on a scale of seven. A quarter scored zero. But when the students were asked about their confidence levels, with five being the highest, they averaged 4.17. Their memories were vivid, clear—and wrong. There was no relationship at all between confidence and accuracy.
At the time of the Challenger explosion, Elizabeth Phelps was a graduate student at Princeton University. After learning about the Challenger study, and other work on emotional memories, she decided to focus her career on examining the questions raised by Neisser’s findings. Over the past several decades, Phelps has combined Neisser’s experiential approach with the neuroscience of emotional memory to explore how such memories work, and why they work the way they do. She has been, for instance, one of the lead collaborators of an ongoing longitudinal study of memories from the attacks of 9/11, where confidence and accuracy judgments have, over the years, been complemented by a neuroscientific study of the subjects’ brains as they make their memory determinations. Her hope is to understand how, exactly, emotional memories behave at all stages of the remembering process: how we encode them, how we consolidate and store them, how we retrieve them. When we met recently in her New York University lab to discuss her latest study, she told me that she has concluded that memories of emotional events do indeed differ substantially from regular memories. When it comes to the central details of the event, like that the Challenger exploded, they are clearer and more accurate. But when it comes to peripheral details, they are worse. And our confidence in them, while almost always strong, is often misplaced.
Within the brain, memories are formed and consolidated largely due to the help of a small seahorse-like structure called the hippocampus; damage the hippocampus, and you damage the ability to form lasting recollections. The hippocampus is located next to a small almond-shaped structure that is central to the encoding of emotion, the amygdala. Damage that, and basic responses such as fear, arousal, and excitement disappear or become muted.
A key element of emotional-memory formation is the direct line of communication between the amygdala and the visual cortex. That close connection, Phelps has shown, helps the amygdala, in a sense, tell our eyes to pay closer attention at moments of heightened emotion. So we look carefully, we study, and we stare—giving the hippocampus a richer set of inputs to work with. At these moments of arousal, the amygdala may also signal to the hippocampus that it needs to pay special attention to encoding this particular moment. These three parts of the brain work together to insure that we firmly encode memories at times of heightened arousal, which is why emotional memories are stronger and more precise than other, less striking ones. We don’t really remember an uneventful day the way that we remember a fight or a first kiss. In one study, Phelps tested this notion in her lab, showing people a series of images, some provoking negative emotions, and some neutral. An hour later, she and her colleagues tested their recall for each scene. Memory for the emotional scenes was significantly higher, and the vividness of the recollection was significantly greater.
When we met, Phelps had just published her latest work, an investigation into how we retrieve emotional memories, which involved collaboration with fellow N.Y.U. neuroscientist Lila Davachi and post-doctoral student Joseph Dunsmoor. In the experiment, the results of which appeared in Nature in late January, a group of students was shown a series of sixty images that they had to classify as either animals or tools. All of the images—ladders, kangaroos, saws, horses—were simple and unlikely to arouse any emotion. After a short break, the students were shown a different sequence of animals and tools. This time, however, some of the pictures were paired with an electric shock to the wrist: two out of every three times you saw a tool, for instance, you would be shocked. Next, each student saw a third set of animals and tools, this time without any shocks. Finally, each student received a surprise memory test. Some got the test immediately after the third set of images, some, six hours later, and some, a day later.
What Dunsmoor, Phelps, and Davachi found came as a surprise: it wasn’t just the memory of the “emotional” images (those paired with shocks) that received a boost. It was also the memory of all similar images—even those that had been presented in the beginning. That is, if you were shocked when you saw animals, your memory of the earlier animals was also enhanced. And, more important, the effect only emerged after six or twenty-four hours: the memory needed time to consolidate. “It turns out that emotion retroactively enhances memory,” Davachi said. “Your mind selectively reaches back in time for other, similar things.” That would mean, for instance, that after the Challenger explosion people would have had better memory for all space-related news in the prior weeks.
The finding was surprising, but also understandable. Davachi gave me an example from everyday life. A new guy starts working at your company. A week goes by, and you have a few uninteresting interactions. He seems nice enough, but you’re busy and not paying particularly close attention. On Friday, in the elevator, he asks you out. Suddenly, the details of all of your prior encounters resurface and consolidate in your memory. They have retroactively gone from unremarkable to important, and your brain has adjusted accordingly. Or, in a more negative guise, if you’re bitten by a dog in a new neighborhood, your memory of all the dogs that you had seen since moving there might improve.
So, if memory for events is strengthened at emotional times, why were so many people wrong about what they were doing when the Challenger exploded? While the memory of the event itself is enhanced, Phelps explains, the vividness of the memory of the central event tends to come at the expense of the peripheral details. We experience a sort of tunnel vision, discarding all the details that seem incidental to the central event.
In the same 2011 study in which Phelps showed people either emotionally negative or neutral images, she also included a second element: each scene was presented within a frame, and, from scene to scene, the color of the frame would change. When it came to the emotional images, memory for the color of the frame ended up being significantly worse than it was for the neutral scenes. Absent the pull of a central, important event, the students took in more peripheral details. When aroused, they blocked the minor details out.
The strength of the central memory seems to make us confident of all of the details when we should only be confident of a few. Because the shock or other negative emotion helps us to remember the animal (or the explosion), we think we also remember the color (or the call to our parents). “You just feel you know it better,” Phelps says. “And even when we tell them they’re mistaken people still don’t buy it.”
Our misplaced confidence in recalling dramatic events is troubling when we need to rely on a memory for something important—evidence in court, for instance. For now, juries tend to trust the confident witness: she knows what she saw. But that may be changing. Phelps was recently asked to sit on a National Academy of Sciences committee making recommendations about eyewitness testimony in trials. After reviewing the evidence, the committee suggested several concrete changes to current procedures: "blinded" eyewitness identification (that is, the person showing potential suspects to the witness shouldn't know which suspect the witness is looking at at any given moment, to avoid giving subconscious cues); standardized instructions to witnesses, along with extensive police training in vision and memory research as it relates to eyewitness testimony; videotaped identification; expert testimony early in trials about the issues surrounding eyewitness reliability; and early, clear jury instruction on any prior identifications (when and how prior suspects were identified, how confident the witness was at first, and the like). If the committee's conclusions are taken up, the way memory is treated may, over time, change from something unshakeable to something much less valuable to a case. "Something that is incredibly adaptive normally may not be adaptive somewhere like the courtroom," Davachi says. "The goal of memory isn't to keep the details. It's to be able to generalize from what you know so that you are more confident in acting on it." You run away from the dog that looks like the one that bit you, rather than standing around questioning how accurate your recall is.
“The implications for trusting our memories, and getting others to trust them, are huge,” Phelps says. “The more we learn about emotional memory, the more we realize that we can never say what someone will or won’t remember given a particular set of circumstances.” The best we can do, she says, is to err on the side of caution: unless we are talking about the most central part of the recollection, assume that our confidence is misplaced. More often than not, it is.
Decision-making with multiple variables
The funny thing about decisions is that they seem to work in the opposite way that we'd like them to. The more good choices you have, the harder it is to make the best decision, something author and psychologist Barry Schwartz called "the paradox of choice" in his book published a decade ago. And that's a good thing to know, but that knowledge alone doesn't exactly help when you're facing approximately one zillion choices that all seem kind of okay, like when you're choosing a health-care plan or looking for a new apartment. To address this, Tibor Besedes at the Georgia Institute of Technology led a study — published recently in The Review of Economics and Statistics — that pitted three decision-making strategies against each other, and the best strategy was the one that treated the process like a tournament.
The news release for the new paper describes the study setup this way:
The study subjects were asked to choose one option that would provide the best payoff from among 16 choices, and were rewarded by as much as $25 if they made optimal selections. ... The respondents were asked to choose from a group of cards that had different probabilities of payoff. This generic choice scenario was used to eliminate personal biases that might have arisen in choosing between real-world options such as insurance plans.
Some of the study volunteers were told to make their decision by studying all 16 different options at once; others worked their way through all 16 using a time-consuming elimination process. But the group that made the best decisions did so by using the tournament strategy, which worked like this (a short code sketch of the same procedure follows the steps):
1. Divide the options into piles of four
2. Choose the best option from each pile
3. Put the winners from the first round into a new finalist pile
4. Choose the best option from winners of the earlier four selections
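If it helps to see that procedure spelled out, here is a minimal sketch of the same tournament logic in Python, under the assumption that you can put a score on each option. Everything here (the tournament_choice function, the score callback, the toy "cards" with payoff probabilities) is invented for illustration; in the actual study, people, not programs, did the judging.

```python
import random

def tournament_choice(options, score, group_size=4):
    """Pick an option tournament-style: split the options into groups,
    keep the best of each group, then pick the best of those finalists."""
    groups = [options[i:i + group_size] for i in range(0, len(options), group_size)]
    finalists = [max(group, key=score) for group in groups]
    return max(finalists, key=score)

# Hypothetical stand-in for the study's 16 cards, each with a payoff probability.
cards = [{"id": i, "payoff_prob": random.random()} for i in range(16)]
best_card = tournament_choice(cards, score=lambda card: card["payoff_prob"])
print(best_card)
```

The point isn't the code; it's that judging four things at a time, and then judging the four winners, asks far less of you than trying to rank all 16 options at once.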
Besedes believes anyone can adopt this strategy when trying to choose between an overwhelming number of options. But that's your decision.
How To Give Yourself Advice
Perhaps the very last person you should turn to for advice is yourself, according to a new post from the Association for Psychological Science, which references research published last year in Psychological Science. We tend to make wiser decisions when thinking about someone else's problems than when thinking about our own issues, researchers from the University of Waterloo and the University of Michigan found, but there's a way around this. Think through your own decisions from a third-person perspective, suggest the researchers, led by Igor Grossmann at the University of Waterloo.
First, Grossmann and his team asked about 100 people, all of whom were in a long-term relationship, to imagine either that they'd been cheated on or that their best friend had been cheated on. They were then asked to think through what the person in the scenario should do, and they answered a questionnaire designed to measure their "wise reasoning" skills — things like considering multiple perspectives and multiple possible outcomes, or seeking out a compromise. As the researchers expected, the people who were thinking about what their friend should do tended to answer in ways that demonstrated more wisdom than those who were thinking about themselves.
But is there a way we can manipulate this human quirk to make better decisions for ourselves? To find out, Grossmann and the rest of the researchers conducted a second, similar experiment, asking another set of study participants the same question: You (or your best friend) just got cheated on. What do you do? Some of the study volunteers were told to answer the wise-reasoning questionnaire from a first-person perspective, using words like I and me. ("Put yourself in this situation. Ask yourself, Why am I feeling this way?") The rest of the participants were instructed to think about the problem from a third-person perspective. ("Put yourself in this situation. Ask yourself, why is he/she feeling this way?")
As it turned out, the people who were looking at the situation from the third-person vantage point showed better judgment, considering the issue from multiple perspectives and imagining many potential outcomes, regardless of whether they were imagining themselves or a friend in the infidelity scenario. The best way to figure out what to do next may indeed be to imagine how you'd advise a friend in the same situation.
Evading Inconvenient Facts
As public debate rages about issues like immunization, Obamacare, and same-sex marriage, many people try to use science to bolster their arguments. And since it’s becoming easier to test and establish facts - whether in physics, psychology, or policy - many have wondered why bias and polarization have not been defeated. When people are confronted with facts, such as the well-established safety of immunization, why do these facts seem to have so little effect?
Our new research, recently published in the Journal of Personality and Social Psychology, examined a slippery way in which people get away from facts that contradict their beliefs. Of course, sometimes people just dispute the validity of specific facts. But we find that people sometimes go one step further and reframe an issue in untestable ways. This makes potentially important facts and science ultimately irrelevant to the issue.
Let’s consider the issue of same-sex marriage. Facts could be relevant to whether it should be legal—for example, if data showed that children raised by same-sex parents are worse off than—or just as well off as—children raised by opposite-sex parents. But what if those facts contradict one’s views?
We presented 174 American participants who supported or opposed same-sex marriage with (supposed) scientific facts that supported or disputed their position. When the facts opposed their views, our participants—on both sides of the issue—were more likely to state that same-sex marriage isn’t actually about facts, it’s more a question of moral opinion. But, when the facts were on their side, they more often stated that their opinions were fact-based and much less about morals. In other words, we observed something beyond the denial of particular facts: We observed a denial of the relevance of facts.
In a similar study with 117 religious participants, we had some read an article critical of religion and others read a neutral article. Compared with those who read the neutral article, believers who were especially high (but not low) in religiosity were more likely to give untestable “blind faith” arguments, rather than arguments based on factual evidence, as reasons for their beliefs.
These experiments show that when people’s beliefs are threatened, they often take flight to a land where facts do not matter. In scientific terms, their beliefs become less “falsifiable” because they can no longer be tested scientifically for verification or refutation.
For instance, sometimes people dispute government policies on the grounds that they don’t work. Yet, if facts suggest that the policies do work, the same person might remain opposed anyway, now on the basis of principle. We can see this on both sides of the political spectrum, whether it’s conservatives and Obamacare or liberals and the Iraq surge of 2007.
One would hope that objective facts could allow people to reach consensus more easily, but American politics are more polarized than ever. Could this polarization be a consequence of feeling free of facts?
While it is difficult to objectively test that idea, we can experimentally assess a fundamental question: When people are made to see their important beliefs as relatively less rather than more testable, does it increase polarization and commitment to desired beliefs? Two experiments we conducted suggest so.
In an experiment with 179 Americans, we reminded roughly half of the participants that much of President Obama’s policy performance was empirically testable, and did not remind the other half. Then participants rated President Obama’s performance in five domains (e.g., job creation). Comparing opponents and supporters of Obama, we found that the reminder of testability reduced the average polarization in assessments of President Obama’s performance by about 40%.
To further test the hypothesis that people strengthen their desired beliefs when those beliefs are free of facts, we looked at a sample of 103 participants who varied from moderately to highly religious. We found that when the highly (but not the more moderately) religious participants were told that God’s existence will always be untestable, they afterwards reported stronger desirable religious beliefs (e.g., the belief that God was looking out for them), relative to when they were told that one day science might be able to investigate God’s existence.
Together these findings show that, at least in some cases, when testable facts are less a part of the discussion, people dig deeper into the beliefs they wish to have—such as viewing a politician in a certain way or believing that God is constantly there to provide support. These results resemble those of many studies finding that when facts are fuzzier, people tend to exaggerate desired beliefs.
So after examining the power of untestable beliefs, what have we learned about dealing with human psychology? We’ve learned that bias is a disease and to fight it we need a healthy treatment of facts and education. We find that when facts are injected into the conversation, the symptoms of bias become less severe. But, unfortunately, we’ve also learned that facts can only do so much. To avoid coming to undesirable conclusions, people can fly from the facts and use other tools in their deep belief protecting toolbox.
With the disease of bias, then, societal immunity is better achieved when we encourage people to accept ambiguity, engage in critical thinking, and reject strict ideology. That kind of society is something the new Common Core education standards and, at times, The Daily Show are at least in theory attempting to help create. We will never eradicate bias—not from others, not from ourselves, and not from society. But we can become a people more free of ideology and less free of facts.
Products To Help Dementia Patients
A FRIENDLY robotic seal, a wearable airbag and a clock that announces “now it’s Sunday morning” are among a range of new products aimed at dementia sufferers.
Argos, the catalogue store, and the Lloyds pharmacy chain are showing interest in the range catering to Britain’s 850,000 dementia sufferers, which will be launched in June on a website stocking 1,000 products.
The venture, which will have commercial backing from the Alzheimer’s Society, is the brainchild of a young entrepreneur who devoted his twenties to caring for his mother after she fell victim to dementia at the shockingly early age of 54.
James Ashwell, 34, founder of Unforgettable.org, said: “Dealing with dementia is not sexy or interesting — it’s grim, which is why there’s such a gap for products that can make life more bearable. I decided that if nobody else was doing it, I needed to do it.”
Ashwell first realised his mother, Fay, had a problem when she terrified him and a school friend by driving home to Sutton Coldfield, Birmingham, on the wrong side of the road when they were passengers in the car.
His father took responsibility for her care, but when he died suddenly of a rare disease, Ashwell gave up his job as a consultant in the oil and gas industry. At 24 he became her main carer, although he was helped for long periods by his brother and sister, until she died in 2011.
Meal times would often become an ordeal lasting two hours as his mother struggled to cope with conventional cutlery and chased food around the plate. She also had trouble swallowing. However painful the process, Ashwell knew that if he did not help her to eat enough she would lose weight and weaken.
The experience helped convince him of the benefits of a matching set of curved cutlery and crockery.
A wearable airbag attached to a belt is designed to prevent hip breaks, one of the main causes of hospital admissions for people with dementia. The complex electronics detect the difference between walking down steps and falling.
Many other products that could help dementia sufferers have so far been sold only in small volumes.
A meeting with Zoltan Bozoky, the government’s chief strategy officer for tackling dementia, convinced Ashwell that the time was right to create a mass-market venture.
He received initial funding from the charitable arm of Bridges Ventures and in 2013 he got the backing of Jeremy Hughes, chief executive of the Alzheimer’s Society. Unforgettable.org now has £2m funding, with the prospect of a further £5m.
Suppliers to the website are offering smart insoles, which fit any shoe and track dementia sufferers via GPS, and home monitoring systems that send alerts if the person does not follow their usual routine.
A clock that dispenses with potentially confusing digital displays, and simply states the time of day (“now it’s Sunday morning”) is already a big seller. A robotic seal provides companionship, making friendly noises when stroked, and a robotic puppet, called Casper, is able to issue gentle instructions and reminders.
Other new products include a simple tablet computer and rip-proof Velcro clothing.
Hughes said: “Making these products more easily accessible will mean more people with dementia will be able to retain their independence for longer.”
Video For Dementia Patients
A Hollywood-inspired experiment to see whether simple video recordings can help people with dementia to wake up more calmly each day has begun in New York.
Residents of a care home in the city are being woken up by videos recorded by a relative in the hope that they will lift the morning fog of forgetfulness.
It is an idea borrowed from the 2004 film 50 First Dates, in which a woman with a brain injury, played by Drew Barrymore, loses her memory every day. A suitor, played by Adam Sandler, uses videos to remind her of him.
Charlotte Dell, director of social services at the Hebrew Home in Riverdale, described the film as “fluff” but said it had made her consider whether there might be a benefit for her residents.
“We’re looking to see if we can set a positive tone for the day,” she said. “What better way to start the day than to see the face and hear the voice of someone you love wishing you a wonderful morning?”
People with dementia often feel disorientated and anxious in the morning, wondering where they are and if they are safe. Just as in the film, the video becomes part of the daily routine.
Relatives record a good morning message, use memory-triggering personal anecdotes and remind the residents that carers will soon help them to get dressed and ready for the day.
The programme is designed for care home residents in the early and moderate stages of dementia who are likely to recognise the people in the video and understand what they say.
“Do we know for sure that they know, this is my daughter, this is my son? No,” Ms Dell said. “But they recognise them as somebody they care about and love.”
She plans to evaluate the programme next month and may expand it to more residents. But Ms Dell said anecdotal evidence from staff was “very positive”.
Mary Matthews, an entrepreneur who is creating a smartphone app to assist people with mild dementia, said the combination of visual with aural prompts could be very effective.
“Clinical staff tell me disembodied voices, even if they are familiar, can be confusing if you cannot see the face of the speaker so this sounds like a great idea,” Ms Matthews said.
Her app, Memrica Prompt, reminds people of names, events, places and tasks. It will allow users with early-stage dementia to set up a database of pictures of items they might otherwise forget when they leave the house, or to help put names to faces.
“In early dementia, reading and writing can be the first things to go, whereas visual recall is deeper and more embedded,” Ms Matthews said.
“There is lots of evidence that, even if someone with dementia no longer recognises a person, they still get an emotional pleasurable reaction when they hear and see a loved one.”
How to Combat Distrust of Science
Acceptance of science has become increasingly polarized in the United States. Indeed, a recent Pew poll shows that there is a substantial and growing amount of public disagreement about basic scientific facts, including human evolution, the safety of vaccines and whether or not human-caused climate change is real and happening. What is causing this, you might ask?
People often interpret the same information very differently. As psychologists, we are more than familiar with the finding that our brains selectively attend to, process and recall information. One consequence of this is “confirmation bias,” a strong tendency to automatically favor information that supports our prior expectations. When we consider issues that we feel strongly about (e.g., global warming), confirmation bias reaches a new height: it transitions into “motivated reasoning.” Motivated reasoning is the additional tendency to defensively reject information that contradicts deeply held worldviews and opinions. One example of this is the “motivated rejection of science”; if you are personally convinced that global warming is a hoax, you are likely to reject any scientific information to the contrary – regardless of its accuracy.
Yet, if our personal values, beliefs and worldviews really dictate our reality, then aren’t science communicators just blowing in the wind? Not necessarily so. Although some research has indeed shown that factors such as “scientific literacy” are not always associated with, say, more concern for climate change, we have investigated a different, social type of fact: “expert consensus.” Our research shows that highlighting how many experts agree on a controversial issue has a far-reaching psychological influence. In particular, it has the surprising ability to “neutralize” polarizing worldviews and can lead to greater science acceptance.
A recent study by one of us showed that perceived scientific consensus functions as an important “gateway belief.” In the experiment, we asked a national sample of the US population to participate in a public opinion poll about popular topics (participants did not know that the study was really about climate change). Participants were first asked to estimate what percentage of scientists they thought agree that human-caused climate change is happening (0 to 100 percent). We then exposed participants to a number of different experimental treatments that all contained the same basic message, that “97% of climate scientists have concluded that human-caused climate change is happening.” After several quizzes and bogus distractions, we finally asked participants again about their perception of the scientific consensus.
You might expect that given the contested and politicized nature of the climate change problem, such a simple message would have little effect or could even backfire. Indeed, some research has shown that disagreements between parties can become more extreme when exposed to the same evidence. Yet, contrary to the motivated-reasoning hypothesis, our results showed that on average, participants who were exposed to one of the consensus-messages increased their estimate of the consensus by about 13% (up to as much as 20% in some conditions). Moreover, we found that when respondents’ perception of the level of scientific agreement increased, this led to significant changes in other key beliefs about the issue, such as the belief that climate change is happening, human-caused and a worrisome problem. In turn, changes in these beliefs propelled an increase in support for public action. Thus, we found that people’s perception of the degree of scientific consensus seems to act as a “gateway” to other key attitudes about the issue.
What’s even more interesting is that we found the same effect for two differentially motivated audiences: Democrats and Republicans. In fact, the change was significantly more pronounced among Republican respondents, who normally tend to be the most skeptical about the reality of human-caused climate change. These findings are quite remarkable, if not surprising, given that we exposed participants only once, to a single and simple message.
Nonetheless, these new results are consistent with two previous Nature studies. Some years ago, our colleagues showed that people’s perception of the level of scientific agreement was associated with belief in climate change and policy support for the issue. A subsequent experimental study by one of us revealed a causal link between highlighting expert consensus and increased science acceptance. In that study, too, information about the degree of consensus “neutralized” the effect of ideological worldviews.
Since then, numerous studies have reported similar results. One study showed that even a small amount of scientific dissent can undermine support for (environmental) policy. A new paper published just this month reported that respondents across the political spectrum responded positively to information about the scientific consensus on climate change.
Why is “consensus-information” so far-reaching, psychologically speaking?
One feature that clearly distinguishes “consensus” from other types of information is its normative nature. That is, consensus is a powerful descriptive social fact: it tells us about the number of people who agree on important issues (i.e., the norm within a community). Humans evolved living in social groups and much psychological research has shown that people are particularly receptive to social information. Indeed, consensus decision-making is widespread in human and non-human animals. Because decision-strategies that require widespread agreement lie at the very basis of the evolution of human cooperation, people may be biologically wired to pay attention to consensus-data.
In the case of experts, it describes how many scientists agree on important issues and as such, implicitly embodies an authoritatively rich amount of information. Imagine reading a road sign that informs you that 97% of engineers have concluded that the bridge in front of you is unsafe to cross. You would likely base your decision to cross or avoid that bridge on the expert consensus, irrespective of your personal convictions. Few people would get out of their car and spend the rest of the afternoon personally assessing the structural condition of the bridge (even if you were an expert). Similarly, not everyone can afford the luxury of carving out a decade or so to study geophysics and learn how to interpret complex climatological data. Thus, it makes perfect sense for people to use expert consensus as a decision-heuristic to guide their beliefs and behavior. Society has evolved to a point where we routinely defer to others for advice—from our family doctors to car mechanics; we rely on experts to keep our lives safe and productive. Most of us are constrained by limited time and resources and reliance on consensus efficiently reduces the cost of individual learning.
Back to facts. A recent study showed that people are more likely to cling onto their personal ideologies in the absence of “facts.” This suggests that in order to increase acceptance of science, we need more “facts.” We agree but suggest that this is particularly true for an underleveraged but psychologically powerful type of fact — expert consensus.
The consensus on human-caused climate change is among the strongest observed in the sciences—about as strong as the consensus surrounding the link between smoking and lung cancer. Yet, as Harvard science historian Naomi Oreskes has documented, vested-interest groups have long understood the fact that people make (or fail to make) decisions based on consensus-information. Accordingly, so-called “merchants of doubt” have orchestrated influential misinformation campaigns, including denials of the links between smoking and cancer, and between CO2 emissions and climate change. If polarization on science is to be reduced, we need to harness the psychological power of consensus for the purpose it evolved to serve: the public good.
About Belief
THE day I sat down to write this article the news was rather like any other day. A teenager had been found guilty of plotting to behead a British soldier. Fighting had broken out again in Ukraine. Greece was accusing its creditors of being motivated by ideology rather than economic reality. Some English football fans were filmed racially abusing a man on the Paris subway. Admittedly, all of that day's stories were unique in themselves. But at the root they were all about the same thing: the powerful and very human attribute we call belief.
Beliefs define how we see the world and act within it; without them, there would be no plots to behead soldiers, no war, no economic crises and no racism. There would also be no cathedrals, no nature reserves, no science and no art. Whatever beliefs you hold, it's hard to imagine life without them. Beliefs, more than anything else, are what make us human. They also come so naturally that we rarely stop to think how bizarre belief is.
In 1921, philosopher Bertrand Russell put it succinctly when he described belief as "the central problem in the analysis of mind". Believing, he said, is "the most 'mental' thing we do" – by which he meant the most removed from the "mere matter" that our brains are made of. How can a physical object like a human brain believe things? Philosophy has made little progress on Russell's central problem. But increasingly, scientists are stepping in.
"We once thought that human beliefs were too complex to be amenable to science," says Frank Krueger, a neuroscientist at George Mason University in Fairfax, Virginia. "But that era has passed." What is emerging is a picture of belief that is quite different from common-sense assumptions of it – one that has the potential to change some widely held beliefs about ourselves. Beliefs are fundamental to our lives, but when it comes to what we believe and why, it turns out we have a lot less control than you might think.
Our beliefs come in many shapes and sizes, from the trivial and the easily verified – I believe it will rain today – to profound leaps of faith – I believe in God. Taken together they form a personal guidebook to reality, telling us not just what is factually correct but also what is right and good, and hence how to behave towards one another and the natural world. This makes them arguably not just the most mental thing our brains do but also the most important. "The prime directive of the brain is to extract meaning. Everything else is a slave system," says psychologist Peter Halligan at Cardiff University, UK.
Yet, despite their importance, one of the long-standing problems with studying beliefs is identifying exactly what it is you are trying to understand. "Everyone knows what belief is until you ask them to define it," says Halligan. What is generally agreed is that belief is a bit like knowledge, but more personal. Knowing something is true is different from believing it to be true; knowledge is objective, but belief is subjective. It is this leap-of-faith aspect that gives belief its singular, and troublesome, character.
Philosophers have long argued about the relationship between knowing and believing. In the 17th century, René Descartes and Baruch Spinoza clashed over this issue while trying to explain how we arrive at our beliefs. Descartes thought understanding must come first; only once you have understood something can you weigh it up and decide whether to believe it or not. Spinoza didn't agree. He claimed that to know something is to automatically believe it; only once you have believed something can you un-believe it. The difference may seem trivial but it has major implications for how belief works.
If you were designing a belief-acquisition system from scratch it would probably look like the Cartesian one. Spinoza's view, on the other hand, seems implausible. If the default state of the human brain is to unthinkingly accept what we learn as true, then our common-sense understanding of beliefs as something we reason our way to goes out of the window. Yet, strangely, the evidence seems to support Spinoza. For example, young children are extremely credulous, suggesting that the ability to doubt and reject requires more mental resources than acceptance. Similarly, fatigued or distracted people are more susceptible to persuasion. And when neuroscientists joined the party, their findings added weight to Spinoza's view.
Your credulous brain
The neuroscientific investigation of belief began in 2008, when Sam Harris at the University of California, Los Angeles, put people into a brain scanner and asked them whether they believed in various written statements. Some were simple factual propositions, such as "California is larger than Rhode Island"; others were matters of personal belief, such as "There is probably no God". Harris found that statements people believed to be true produced little characteristic brain activity – just a few brief flickers in regions associated with reasoning and emotional reward. In contrast, disbelief produced longer and stronger activation in regions associated with deliberation and decision-making, as if the brain had to work harder to reach a state of disbelief. Statements the volunteers did not believe also activated regions associated with emotion, but in this case pain and disgust.
Harris's results were widely interpreted as further confirmation that the default state of the human brain is to accept. Belief comes easily; doubt takes effort. While this doesn't seem like a smart strategy for navigating the world, it makes sense in the light of evolution. If the sophisticated cognitive systems that underpin belief evolved from more primitive perceptual ones, they would retain many of the basic features of these simpler systems. One of these is the uncritical acceptance of incoming information. This is a good rule when it comes to sensory perception as our senses usually provide reliable information. But it has saddled us with a non-optimal system for assessing more abstract stimuli such as ideas.
Further evidence that this is the case has come from studying how and why belief goes wrong. "When you consider brain damage or psychiatric disorders that produce delusions, you can begin to understand where belief starts," says Halligan. Such delusions include beliefs that seem bizarre to outsiders but completely natural to the person concerned. For example, people sometimes believe that they are dead, that loved ones have been replaced by imposters, or that their thoughts and actions are being controlled by aliens. And, tellingly, such delusions are often accompanied by disorders of perception, emotional processing or "internal monitoring" – knowing, for example, whether you initiated a specific thought or action.
These deficits are where the delusions start, suggests Robyn Langdon of Macquarie University in Sydney, Australia. People with delusions of alien control, for example, often have faulty motor monitoring, so fail to register actions they have initiated as their own. Likewise, people with the delusion known as "mirror-self misidentification" fail to recognise their own reflection. They often also have a sensory deficit called mirror agnosia: they don't "get" reflective surfaces. A mirror looks like a window and if asked to retrieve an object reflected in one they will try to reach into the mirror or around it. Their senses are telling them that the person in the mirror isn't them, and so they believe this to be true. Again, we trust the evidence of our senses, and if they tell us that black is white, we generally do well to believe them.
You may think that you would never be taken in like that but, says Langdon, "we all default to such believing, at least initially". Consider the experience of watching a magic show. Even though you know it's all an illusion, your instinctive reaction is that the magician has altered the laws of physics.
Misperceptions are not delusions, of course. Witnessing someone being sawn in half and put back together doesn't mean we then believe that people can be safely sawn in half. What's more, sensory deficits do not always lead to delusional beliefs. So what else is required? Harris's brain imaging studies delivered an important clue: belief involves both reasoning and emotion.
The feeling of rightness
The formation of delusional belief probably also requires the emotional weighing-up process to be disrupted in some way. It may be that brain injury destroys it altogether, causing people to simply accept the evidence of their senses. Or perhaps it just weakens it, lowering the evidence threshold required to accept a delusional belief.
For example, somebody with a brain injury that disrupts their emotional processing of faces may think "the person who came to see me yesterday looked like my wife, but didn't feel like her, maybe it was an impostor. I will reserve judgement until she comes back." The next meeting feels similarly disconnected, and so the hypothesis is confirmed and the delusion starts to grow.
According to Langdon and others, this is similar to what goes on in the normal process of belief formation. Both involve incoming information together with unconscious reflection on that information until a "feeling of rightness" arrives, and a belief is formed.
This two-stage process could help explain why people without brain damage are also surprisingly susceptible to strange beliefs. Our natural credulity is one thing, and is particularly likely to lead us astray when we are faced with claims based on ideas that are difficult to verify with our senses – "9/11 was an inside job", for example. The second problem is with the "feeling of rightness", which would appear to be extremely fallible (see "What's your delusion?").
So where does the feeling of rightness come from? The evidence suggests that it has three main sources – our evolved psychology, personal biological differences and the society we keep.
The importance of evolved psychology is illuminated by perhaps the most important belief system of all: religion. Although the specifics vary widely, religious belief per se is remarkably similar across the board. Most religions feature a familiar cast of characters: supernatural agents, life after death, moral directives and answers to existential questions. Why do so many people believe such things so effortlessly?
According to the cognitive by-product theory of religion, their intuitive rightness springs from basic features of human cognition that evolved for other reasons. In particular, we tend to assume that agents cause events. A rustle in the undergrowth could be a predator or it could just be the wind, but it pays to err on the side of caution; our ancestors who assumed agency would have survived longer and had more offspring. Likewise, our psychology has evolved to seek out patterns because this was a useful survival strategy. During the dry season, for example, animals are likely to congregate by a water hole, so that's where you should go hunting. Again, it pays for this system to be overactive.
This potent combination of hypersensitive "agenticity" and "patternicity" has produced a human brain that is primed to see agency and purpose everywhere. And agency and purpose are two of religion's most important features – particularly the idea of an omnipotent but invisible agent that makes things happen and gives meaning to otherwise random events. In this way, humans are naturally receptive to religious claims, and when we first encounter them – typically as children – we unquestioningly accept them. There is a "feeling of rightness" about them that originates deep in our cognitive architecture.
According to Krueger, all beliefs are acquired in a similar way. "Beliefs are on a spectrum but they all have the same quality. A belief is a belief."
Our agent-seeking and pattern-seeking brain usually serves us well, but it also makes us susceptible to a wide range of weird and irrational beliefs, from the paranormal and supernatural to conspiracy theories, superstitions, extremism and magical thinking. And our evolved psychology underpins other beliefs too, including dualism – viewing the mind and body as separate entities – and a natural tendency to believe that the group we belong to is superior to others.
A second source of rightness is more personal. When it comes to something like political belief, the assumption has been that we reason our way to a particular stance. But, over the past decade or so, it has become clear that political belief is rooted in our basic biology. Conservatives, for example, generally react more fearfully than liberals to threatening images, scoring higher on measures of arousal such as skin conductance and eye-blink rate. This suggests they perceive the world as a more dangerous place and perhaps goes some way to explaining their stance on issues like law and order and national security.
Another biological reflex that has been implicated in political belief is disgust. As a general rule, conservatives are more easily disgusted by stimuli like fart smells and rubbish. And disgust tends to make people of all political persuasions more averse to morally suspect behaviour, though the response is stronger in conservatives. This has been proposed as an explanation for differences of opinion over important issues such as gay marriage and illegal immigration. Conservatives often feel strong revulsion at these violations of the status quo and so judge them to be morally unacceptable. Liberals are less easily disgusted and less likely to judge them so harshly.
Different realities
These instinctive responses are so influential that people with different political beliefs literally come to inhabit different realities. Many studies have found that people's beliefs about controversial issues align with their moral positions on them. Supporters of capital punishment, for example, often claim that it deters crime and rarely leads to the execution of innocent people; opponents say the opposite.
That might simply be because we reason our way to our moral positions, weighing up the facts at our disposal before reaching a conclusion. But there is a large and growing body of evidence to suggest that belief works the other way. First we stake out our moral positions, and then mould the facts to fit.
So if our moral positions guide our factual beliefs, where do morals come from? The short answer: not your brain.
According to Jonathan Haidt at the University of Virginia, our moral judgements are usually rapid and intuitive; people jump to conclusions and only later come up with reasons to justify their decision. To see this in action, try confronting someone with a situation that is offensive but harmless, such as using their national flag to clean a toilet. Most will insist this is wrong but fail to come up with a rationale, and fall back on statements like "I can't explain it, I just know it's wrong".
This becomes clear when you ask people questions that include both a moral and a factual element, such as: "Is forceful interrogation of terrorist suspects morally wrong, even when it produces useful information?" or "Is distributing condoms as part of a sex-education programme morally wrong, even when it reduces rates of teenage pregnancy and STDs?" People who answer "yes" to such questions are also likely to dispute the facts, or produce their own alternative facts to support their belief. Opponents of condom distribution, for example, often state that condoms don't work, so distributing them won't do any good anyway.
What feels right to believe is also powerfully shaped by the culture we grow up in. Many of our fundamental beliefs are formed during childhood. According to Krueger, the process begins as soon as we are born, based initially on sensory perception – that objects fall downwards, for example – and later expands to more abstract ideas and propositions. Not surprisingly, the outcome depends on the beliefs you encounter. "We are social beings. Beliefs are learned from the people you are closest to," says Krueger. It couldn't be any other way. If we all had to construct a belief system from scratch based on direct experience, we wouldn't get very far.
This isn't simply about proximity; it is also about belonging. Our social nature means that we adopt beliefs as badges of cultural identity. This is often seen with hot-potato issues, where belonging to the right tribe can be more important than being on the right side of the evidence. Acceptance of climate change, for example, has become a shibboleth in the US – conservatives on one side, liberals on the other. Evolution, vaccination and others are similarly divisive issues.
So, what we come to believe is shaped to a large extent by our culture, biology and psychology. By the time we reach adulthood, we tend to have a relatively coherent and resilient set of beliefs that stay with us for the rest of our lives (see "Your five core beliefs"). These form an interconnected belief system with a relatively high level of internal consistency. But the idea that this is the product of rational, conscious choices is highly debatable. "If I'm totally honest I didn't really choose my beliefs: I discover I have them," says Halligan. "I sometimes reflect upon them, but I struggle to look back and say, what was the genesis of this belief?"
Forget the facts
The upshot of all this is that our personal guidebook of beliefs is both built on sand and also highly resistant to change. "If you hear a new thing, you try to fit it in with your current beliefs," says Halligan. That often means going to great lengths to reject something that contradicts your position, or seeking out further information to confirm what you already believe.
That's not to say that people's beliefs cannot change. Presented with enough contradictory information, we can and do change our minds. Many atheists, for example, reason their way to irreligion. Often, though, rationality doesn't even triumph here. Instead, we are more likely to change our beliefs in response to a compelling moral argument – and when we do, we reshape the facts to fit with our new belief. More often than not, though, we simply cling to our beliefs.
All told, the uncomfortable conclusion is that some if not all of our fundamental beliefs about the world are based not on facts and reason – or even misinformation – but on gut feelings that arise from our evolved psychology, basic biology and culture. The results of this are plain to see: political deadlock, religious strife, evidence-free policy-making and a bottomless pit of mumbo jumbo. Even worse, the deep roots of our troubles are largely invisible to us. "If you hold a belief, by definition you hold it to be true," says Halligan. "Can you step outside your beliefs? I'm not sure you'd be capable."
The world would be a boring place if we all believed the same things. But it would surely be a better one if we all stopped believing in our beliefs quite so strongly.
Mindfulness - The Plus Side
When 2500 billionaire philanthropists, mega-brand CEOs and world leaders got together at the World Economic Forum in Davos in January, the buzz wasn’t all cyber-terrorism, EU-bailouts and the future of Ukraine. At least one session revelled in lengthy, soothing silences with the occasional ting of a chime. A jam-packed crowd of 100 power-players met for a panel called Leading Mindfully, to meditate and discuss why a 2500-year-old practice is good for business. The gathering was led by Jon Kabat-Zinn, the molecular biologist turned father of mindfulness meditation in the West, alongside Arianna Huffington of the eponymous Post and a board member of Goldman Sachs, among others. If we had any doubts, the Davos meeting nixed them: mindfulness is indisputably one of the hottest trends in business.
While you might expect hippy-founded tech businesses of Silicon Valley – think Apple – would fall for an ancient Buddhist practice promising increased focus and well-being, the range of companies and industries using mindfulness in the workplace is expanding. In the US, thousands of staff at one giant insurance company have taken up the opportunity; Wall Street bankers meditate before high-pressure meetings; fashion designer Eileen Fisher runs meetings in circles with optional om-ing; legal firms offer yoga and meditation spaces. In the hugely lucrative world of professional basketball, Phil Jackson used mindfulness meditation to help coach the Chicago Bulls and Los Angeles Lakers to a combined 11 NBA championships.
In the UK, an all-party parliamentary group was set up to investigate using mindfulness practice in schools, the health service, prisons and Parliament itself. The broadly favourable interim report was released in January to a packed room of media and parliamentarians; it began with a 10-minute guided meditation and included a young Conservative MP relating how mindfulness helped her deal with depression. The Bank of England offers taster meditation sessions; BP has a meditation room; Rob Symes, chief executive of the Outside View, a London-based digital analytics business, uses mindfulness as part of his company’s “health, wealth and happiness programme”, believing that meditation “can improve business decisions and avoid expensive mistakes”.
What exactly is it that makes so many big organisations willing to invest their time and money? At its simplest, mindfulness asks that you sit quietly, focused on the present, grounded in your body, taking note of your senses as you breathe. If you’re distracted with thoughts and feelings, as you will be, you shouldn’t be disheartened, but return to being conscious of the breath. Do this for as little as 10 minutes, preferably daily, and you will have started to build your own mindfulness practice.
Many people use apps or guided meditations to help them focus. Headspace is one company producing simple meditation guides that are being used by more than a million people in 150 countries. The company, co-founded by former Buddhist monk Andy Puddicombe, has worked with more than 100 businesses, including Credit Suisse and KPMG. Puddicombe told Business Reporter: “The requests from these companies vary. Sometimes it’s for focus, sometimes productivity. Increasingly we’re seeing a trend that employers are genuinely interested in the health and well-being of their employees.”
Meditation devotees say it brings them benefits such as a sharper mind and a greater sense of compassion for themselves and others. But shrouded in Buddhist philosophy, it was unlikely to make a huge impact in the West. In the 60s, hippies adopted the practice, but it wasn’t until 1979, when Kabat-Zinn started his eight-week Mindfulness-Based Stress Reduction (MBSR) course at the University of Massachusetts, that it started to have an impact – though it took more than 10 years and a network TV special for it to really catch on. Now more than 720 clinics around the world teach MBSR. In addition, many people have had success with Mindfulness-Based Cognitive Therapy (MBCT), a form of MBSR that deals with depression by linking thinking and its resulting influence on feelings.
The effects of this simple technique are now being studied by neuroscientists, doctors and psychologists all over the world. Although some of their results may be more robust than others, there is no doubt that they are seeing brain changes. Using fMRI scans to compare the brains of meditators with non-meditators, researchers have found that the prefrontal cortex, the part of the organ that deals with judgment, decision-making and planning, is more active in those who meditate. In another study, meditation increased grey matter in brain regions involved in learning and memory processes, emotion regulation and perspective-taking.
There is also evidence that as little as eight weeks of meditation can reduce the physical and mental effects of stress. When we are under stress, our “fight or flight” response is activated, meaning our adrenal glands release cortisol. If the stress is long term, the health effects range from high blood pressure and depression to lowered immunity, digestive problems and weight gain. But when meditation is consistently practised, grey-matter density in the amygdala, which controls our fight or flight response, is actually reduced.
As expected with something that has become trendy, a lot of buzz surrounds mindfulness, but – neuroscience experiments notwithstanding – not a lot of hard data. David Gelles set out to change that. He has been a meditator since age 18, when he found his mother’s book about Buddhism in their Northern California home. He spent a year in India studying Buddhism, travelling the country and practising meditation for weeks at a time.
When he returned to the US, Gelles became a journalist specialising in the cut-throat world of mergers and acquisitions. It was while sniffing out such stories for his then employer, the Financial Times, that he came upon a local news item: General Mills, the $30 billion food and beverage conglomerate responsible for manufacturing such delights as Wheaties and Betty Crocker cake mix, was using mindfulness meditation to reduce stress and increase focus at its vast factory complex in Minneapolis. He flew to the Midwest to investigate. His conversation with the company’s deputy general counsel, Janice Marturano, about why she set up a mindful leadership course taken by hundreds of employees went viral on the Financial Times website. He knew he was onto something.
Mindful Work: How Meditation is Changing Business from the Inside Out is the result of Gelles’ year of criss-crossing the US, tracking the upsurge of interest in mindfulness among businesses. Gelles is an apostle for the cause: in his view mindfulness can have a profound effect on how we view work and do work – if we would only try it. But he’s also a journalist, prepared to ask the hard questions. His book is filled with case studies and scientific research into businesses to determine whether the staff have benefited from integrating meditation into their buzzing, fizzing, overstretched lives.
“I know there are plenty of sceptics about it – of course there are,” he tells the Listener. “I wanted to find out what people were doing, and use their stories. The research isn’t always that convincing, but people’s experience can be profound and moving.”
Giant US health insurer Aetna is one of his favourites for fending off the naysayers. The company’s health-care costs were reduced 7% after it introduced mindfulness and yoga courses to a third of its staff. “If thousands of employees are less stressed, then they are taking fewer sick days, doing their jobs better – it’s good for them and it’s good for the business,” says Gelles.
It wasn’t easy to get the course going: when new chief executive Mark Bertolini suggested introducing meditation and yoga to help the 50,000-strong workforce deal with stress, his chief medical officer snapped back: “Because you’re doing yoga, everyone has to do yoga?” But Bertolini had used yoga and meditation to help him recover from a near-fatal skiing accident (broken neck, five snapped vertebrae, split shoulder blade) and knew better. “Let’s measure heart-rate variability. Let’s measure cortisol levels if you want to. But let’s see how stressed our people are and look at the results.”
Rigorously planned and analysed, the Aetna pilot programme included viniyoga, which focused on breathing techniques, and mindfulness meditation, which focused on improving work-related stress, work-life balance and self-care. After 12 weeks, the results were impressive: those who completed the course experienced a significant reduction in stress and sleep difficulties. It was estimated that the annual financial benefit to the company was US$2000 ($2700) per employee.
The programme was then extended to more than a third of the workforce, and self-reported stress fell by a third. Employees also became more efficient: according to Bertolini, they are productive for an extra 69 minutes a year because they lose less time between tasks.
If that weren’t enough, they are less overweight too, following the introduction of a mindfulness eating programme. Gelles tells the story of Tandon Bunch, a nurse co-ordinator who worked for Aetna in Arlington, Texas. Bunch, a former college cheerleader who gave up exercise in her thirties due to injury, saw her weight rise to 80kg in middle age. After following the educational support programme, she learnt to listen to her body, to gauge her level of fullness, rather than eat out of habit, and subsequently lost 15kg. These types of changes have led Aetna to estimate it has made an 11 to one return on its investment in mindfulness.
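Taken together, the reported figures allow a rough back-of-the-envelope check. The sketch below is purely illustrative: the per-employee benefit and the 11-to-one ratio come from the article, while the implied programme cost is back-solved from them and is not a number Aetna has published.

```python
# Rough arithmetic check of the reported Aetna figures (illustrative only).
benefit_per_employee_usd = 2000   # reported annual benefit per participant
claimed_roi = 11                  # reported 11-to-one return on investment

# Assuming the ratio is read as benefit divided by cost for the same participants:
implied_cost_per_employee = benefit_per_employee_usd / claimed_roi
print(f"Implied programme cost per employee: ~${implied_cost_per_employee:.0f} a year")
# prints roughly $182 a year, a modest outlay relative to the claimed benefit
```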
THE ANTI-ASSHOLE EFFECT
Some people might bridle at the thought of corporate bosses citing a precise 69-minute productivity gain or getting involved in workers’ diet and health regimes. Is this a business wanting to have healthier, happier employees or simply a way of boosting the bottom line? Will it be used to wring every last drop of work out of people, rather than reduce their stress or build their resilience?
“I don’t think the profit motive is driving all of this,” says Gelles. “For one thing, you tend to see mindfulness programmes where senior executives have some sense of their workers’ well-being. And you’re certainly not seeing it everywhere that capitalism thrives. No manufacturers of mass warfare that I know of. I think business leaders are coming to mindfulness, at least in part, from a place of responsibility to their workers.”
He agrees that mindfulness is not for everyone and is concerned that charlatans will wade into mindfulness teaching. “There is a risk it will go the way of yoga – you know, hot yoga, cold yoga, cartoon yoga, sex yoga, just people coming up with more brands to make money. I’d be in favour of setting up a national organisation to agree on best practices, but this hasn’t happened yet.”
For Gelles, it’s a core part of life, benefiting him in numerous ways. The most important? He laughs: “It stops me from being an asshole. There are so many opportunities to be an asshole in the 21st century – to snap, get frustrated, take it out on ourselves and others. It may sound corny, but I’m a nicer person because I meditate. To my colleagues, to my wife, to strangers and to myself. And that’s a pretty good reason.”
Mindfulness meditation: do it yourself
• Google “guided meditation” and you’ll find a variety to help you get started, or look for a meditation app.
• Carve out a small amount of time to practise mindfulness meditation before work to set yourself up for the day.
• Aim to be mindful all day long. Don’t do more than one task at a time; finish one, start another. Paying one thing due attention, without distractions, is a major part of mindfulness.
• Try to slow down your response time so you’re less emotionally reactive to requests or demands.
• Don’t work through lunch. Take time to stop and eat, preferably away from your desk, paying attention to what you’re eating and staying in the moment.
• Make an effort to unplug from technology occasionally. Allow yourself a little time not to be distracted by emails and other 21st-century chatter.
Mindfulness: Some Caveats
Meditation is said to bring serenity, but there is a dark side to the new middle-class trend.
Over the past decade, a secular creed has stolen across the western world: meditation and mindfulness, stripped of their origins in south Asian religions, have become endemic among the British middle classes.
Actresses, rappers, politicians and chief executives have all been lured by the promise of spiritual revival, sharper focus or just a sound night’s sleep. What, though, if the price of mindfulness were madness?
The hidden risks of these apparently innocuous pastimes include mania, depression, hallucinations and psychosis, two psychologists have warned.
Miguel Farias, head of the brain, belief and behaviour research group at Coventry University, has been studying transcendental meditation and other contemplative techniques for almost two decades. In The Buddha Pill: Can Meditation Change You?, published yesterday, he and Catherine Wikholm, a researcher in clinical psychology at the University of Surrey, examine the scientific evidence and describe seven common “myths” about the practice.
Chief among these is the belief that it is harmless. One US study found that 63 per cent of people who had been on meditation retreats had suffered at least one side effect, ranging from confusion to panic and depression. One in 14 had experienced “profoundly adverse effects”.
Other researchers found that practising mindfulness for 20 minutes a day raised levels of cortisol, the stress hormone, even though the meditators said that they felt less stressed.
Scientific literature is rich in case studies of patients who appear to have been driven into breakdowns or “dark nights” of mental torment during programmes of meditation, but there have been few experiments involving more than a handful of people. Dr Farias said that the shortage of rigorous statistical studies into the negative repercussions of meditation was a “scandal”.
“The assumption of the majority of both TM [transcendental meditation] and mindfulness researchers is that meditation can only do one good,” he said. “This shows a rather narrow-minded view. How can a technique that allows you to look within and change your perception or reality of yourself be without potential adverse effects? The answer is that it can’t, and all meditation studies should assess not only positive but negative effects.”
Since the Beatles popularised the transcendental meditation movement founded by Maharishi Mahesh Yogi after their journey to India in 1968, millions of Britons have dabbled in various forms of the discipline. Mindfulness, a meditative method derived from Buddhism, has become a craze in recent years, with adherents said to include the Harry Potter actress Emma Watson, Arianna Huffington, the media baron, Liz Truss, the environment secretary, and Andy Burnham, the Labour leadership candidate.
People could do with casting a more critical eye over meditation, Ms Wikholm said. “It is hard to have a balanced view when the media is full of articles attesting to the benefits of meditation and mindfulness,” she said. “We need to be aware that reports of benefits are often inflated . . . whereas studies that do not discover significant benefits rarely pick up media interest, and negative effects are seldom talked about.”
The Buddha Pill details a randomised control trial looking at what yoga and meditation classes can do for British prisoners. Over ten weeks, inmates at seven prisons in the Midlands took 90-minute classes once a week and completed tests to measure their higher cognitive functions. “Yoga and meditation significantly improved the prisoners’ mood, and reduced their stress and psychological distress,” Ms Wikholm said. The results suggested that the prisoners taking the classes were more disciplined — but no less aggressive.
Autistics need predictable, structured environment
Autistic brain is hyper-functional — needs predictable, paced environments, study finds.
A new open-access study shows that social and sensory overstimulation drives autistic behaviors and supports the unconventional view that the autistic brain is actually hyper-functional. The research offers new hope, with therapeutic emphasis on paced and non-surprising environments tailored to the individual’s sensitivity.
For decades, autism has been viewed as a form of mental retardation, a brain disease that destroys children’s ability to learn, feel and empathize, thus leaving them disconnected from our complex and ever-changing social and sensory surroundings. From this perspective, the main kind of therapeutic intervention in autism to date aims at strongly engaging the child to revive brain functions believed dormant.
Predictability is key
Now researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) have completed a study that turns this traditional view of autism completely around. The study, conducted on rats exposed to a known risk factor in humans, demonstrates that unpredictable environmental stimulation drives autistic symptoms at least as much as an impoverished environment does.
It also shows that predictable stimulation can prevent these symptoms.
The study is also evidence for a drastic shift in the clinical approach to autism, away from the idea of a damaged brain that demands extensive stimulation. Instead, autistic brains may be hyper-functional and thus require enriched environments that are non-surprising, structured, safe, and tailored to a particular individual’s sensitivity.
“The valproate rat model used is highly relevant for understanding autism, because children exposed to valproate in the womb have an increased chance of presenting autism after birth,” says Prof. Henry Markram, co-author of the study and father of a child with autism. He notes that the rats exposed to valproate in early embryonic development demonstrate behavioral, anatomical and neurochemical abnormalities that are comparable to characteristics of human autism.
The scientists here show that if these rats are reared in a home environment that is calm, safe, and highly predictable with little surprise — while still rich in sensory and social engagement — they do not develop symptoms of emotional over-reactivity such as fear and anxiety, nor social withdrawal or sensory abnormalities.
“We were amazed to see that environments lacking predictability, even if enriched, favored the development of hyper-emotionality in rats exposed to the prenatal autism risk factor,” says Markram.
The study critically shows that in certain individuals, non-predictable environments lead to the development of a wider range of negative symptoms, including social withdrawal and sensory abnormalities. Such symptoms normally prevent individuals from fully benefiting from and contributing to their surroundings, and are thus the typical targets of therapy.
The study identifies drastically opposite behavioral outcomes depending on the level of predictability in the enriched environment, and suggests that the autistic brain is unusually sensitive to the predictability of its rearing environment, though to a different extent in different individuals.
Hyper-functional brain microcircuits
The study is strong evidence for the Intense World Theory of Autism, proposed in 2007 by neuroscientists Kamila Markram and Henry Markram, both co-authors on the present study. This theory is based on recent research suggesting that the autistic brain, in both humans and animal models, reacts differently to stimuli.
It proposes that an interaction between an individual’s genetic background and biologically toxic events early in embryonic development triggers a cascade of abnormalities that create hyper-functional brain microcircuits, the functional units of the brain.
Once activated, these hyper-functional circuits could become autonomous and affect further functional connectivity and development in the brain. This would lead to an experience of the world as intense, fragmented and overwhelming, while differences in severity between people with autism would stem from which systems are affected and the timing of the effect.
Stable, structured environment
Rather than intensive stimulation, a stable, structured environment rich in stimuli could help children with autism, by providing a safe haven from an overload of sensory and emotional stimuli, the authors suggest.
This study has immediate implications for clinical and research settings. It suggests that if brain hyper-function can be diagnosed soon after birth, at least some of the debilitating effects of a supercharged brain can be prevented by highly specialized environmental stimulation that is safe, consistent, controlled, announced and only changed very gradually at the pace determined by each child.
The research supports the work of Temple Grandin, PhD, an author and professor of animal science at Colorado State University. One of the therapeutic methods she developed (and used herself) was the “hug machine” (AKA “squeeze machine”), a deep-pressure device designed to calm hypersensitive persons. The device is featured in an award-winning biographical film, Temple Grandin.
Pixar's Inside Out
Ever since Pixar began showing previews of Inside Out, critics have been raving about its stunning animation, beautiful soundtrack, and creative portrayal of protagonist Riley's inner world. Daphna Shohamy, a researcher at Columbia University’s Zuckerman Mind Brain Behavior Institute, appreciates Pixar’s latest movie for a different reason: She believes it has potential as an educational tool.
“I’m excited that people will learn from this movie about how malleable memories are,” she said. Shohamy was part of a group of scientists the filmmakers consulted during the making of the movie, and she explained a few things Inside Out can teach us about memory and the brain.
Memories are susceptible to change.
In the movie, if Sadness touches one of Riley’s happy recollections, it can become tainted by sadness. That's a scary proposition, and one that isn't far from the truth. “They took a concept that is absolutely true in terms of how memories work,” Dr. Shohamy said. “When we retrieve a memory, we bring it back to life, and that will change the way it’s re-stored. It’s a complicated thing to grasp — it’s not like you take a file out and put it back exactly the way it was. They took that idea and used it in a way I thought was beautiful and accurate and incredibly helpful, from an educational standpoint. They made it seem so intuitive — when you bring a memory back from storage and something from the present touches it, that can change the memory.”
Researchers have demonstrated the malleability of memory in different contexts. Elizabeth Phelps, a psychologist at New York University, has conducted experiments on how memories can be manipulated. (Her research could have implications for the treatment of PTSD.) In one 2009 experiment, Phelps showed people colored squares just before administering an electric shock to their wrists; afterwards, seeing the colored square made them sweat and feel fearful. But if the researchers brought them back to the lab and showed them the color without giving them a painful shock, the fear response could be eliminated in both the short term (the next day) and the long term (a year later).
False memories can also be implanted: In one study from 2001, psychologist Elizabeth Loftus had people who’d visited Disneyland read ads for the theme park that featured Bugs Bunny; one-third then claimed to remember meeting Bugs Bunny (a Warner Bros. character) on their trip to Disneyland. In another recent experiment, Lawrence Patihis, a researcher at the University of California, Irvine, asked people with “highly superior autobiographical memory” — people who could remember details like the date on which an Iraqi journalist threw a shoe at George W. Bush — to recall video footage of United Airlines Flight 93 crashing in Pennsylvania on 9/11. Twenty percent of his subjects began reminiscing about watching this footage, which does not exist.
Some memories are disproportionately important to our sense of self.
With the concept of “core memories,” the writers have “really captured something essential, which is that not all memories are created equal,” said Dr. Shohamy. “Not all experiences lead to the creation of long-lasting, robust memories.” Experiences are more likely to turn into long-term memories if they’re emotional or otherwise important.
The representation of memories as individual balls might be misleading.
“A lot of the research treats memories as discrete events, mini-stories: when something started, when it ended,” Dr. Shohamy said. “The ball captured that definition of memory, but newer theories focus on how those kinds of different memories influence each other and how we integrate them.” A child learning about a dog, for example, “might have a separate memory of each dog” he’s seen, “but can integrate these memories and extract information so those memories influence each other and come together.”
Memories are constantly exerting influence on our behavior in the present.
Riley’s memories are fundamental to her identity; when they get “lost” in her brain, her personality begins to change. As Dr. Shohamy explained, “Memories in the movie were very obviously not just something to reminisce about. They were very alive, very influential in the protagonist’s behavior. People often don’t understand how important memory is all the time: It’s not just a record of the past. The movie shows, moment by moment, how memories create the structure of who we are.”
Smartphones Show Depression
Doctors could monitor people’s mobile phone use for signs of depression, according to a study that has shown it to be a more accurate indicator of the condition than a daily happiness questionnaire.
The data, which was collected by smartphones, shows a marked difference between the lives of people who are depressed versus those who do not suffer from the condition.
Scientists found that people who had been diagnosed with depression spent four times longer using their smartphones each day than people who were not depressed. Sohrob Saeb, a computer scientist at Northwestern University near Chicago, devised a programme called Purple Robot, which kept tabs on the mobile phone use — excluding phone calls — of 40 people in the study, 14 of whom had been diagnosed with depression.
Over two weeks, the study measured how long they spent at home, how many other places they had visited and how long their phones were active. Those with depression used their mobile phone on average 68 minutes a day, compared with just 17 minutes for those without the condition. The programme picked out people with depressive symptoms with an accuracy of 87 per cent — better than a daily questionnaire that asked participants to rate how unhappy they felt.
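The article does not describe how Purple Robot's classification works internally, but the features it mentions (daily minutes of phone use, time spent at home, number of places visited) lend themselves to a very simple model. The toy sketch below, with made-up numbers and an off-the-shelf logistic regression, is only meant to illustrate that general approach; it is not the study's actual method.

```python
# Toy illustration only: a simple classifier over the kinds of phone-usage
# features described in the article. The numbers are invented and the model
# choice (logistic regression) is an assumption, not the study's method.
from sklearn.linear_model import LogisticRegression

# Each row: [phone-use minutes/day, hours at home/day, distinct places visited/day]
features = [
    [68, 20, 1],  # hypothetical pattern resembling the "depressed" group averages
    [75, 22, 1],
    [60, 19, 2],
    [17, 12, 5],  # hypothetical pattern resembling the "not depressed" group averages
    [15, 10, 6],
    [20, 11, 4],
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = screened positive for depressive symptoms

model = LogisticRegression().fit(features, labels)
print(model.predict([[70, 21, 1]]))  # classify a new two-week usage summary -> [1]
```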
David Mohr, also of Northwestern, said the way people used mobile phones was not only a good indicator of whether they suffered from depression but could also show how severe it was.
Psychologists had suggested that compulsive mobile phone use and “technostress” could be linked to depression, but most earlier studies were based on the amount of time people had said they spent on their phones rather than any objective measure.
Stephen Schueller, assistant professor in preventative medicine at the university, said that many people used their phones to try to rid themselves of negative emotions such as boredom or anxiety. “Phones are excellent sources of distraction,” he said. “I imagine that people are probably not using their phones to reach out and call people when they’re depressed.”
Dr Saeb said that his team would now look at exactly how depressed people used their mobile phones.
The study, published in the Journal of Medical Internet Research, did not distinguish between text messages, emails and less social uses. “We are planning to perform another study to look into more details, to see if there are specific usage behaviours that are more predictive of depressive symptoms than others,” he said.
The effect may also be reverse engineered, Dr Saeb suggested. He plans to investigate whether it is possible to ease some aspects of depression by encouraging those with the condition to correct the abnormal behaviour picked up by their phones.
“We will see if we can reduce symptoms of depression by encouraging people to visit more locations throughout the day, have a more regular routine, spend more time in a variety of places or reduce phone use,” he said.
Do You Notice Your Senior Moments?
Losing track of your senior moments could be a warning sign two or three years before dementia is diagnosed, according to a study.
Scientists have called on friends and relatives to alert doctors when people with faltering memories start to stop being aware that they have a problem.
Robert Wilson, a senior neuropsychologist at the Rush Alzheimer’s Disease Center in Chicago, said that so long as people remained aware that they were struggling to remember events in the right order they were in less immediate danger of dementia.
The doctor and his colleagues studied the records of 2,000 Americans with an average age of 79 at the start of the analysis.
The study is said to be the first to show that ebbing “memory awareness” is an almost inevitable part of the disease. “Our data suggest that the problem begins to develop about two and a half years before most people are diagnosed, but that doesn’t mean that it’s really recognisable,” Dr Wilson said. “We think that it probably becomes clinically evident around the time most people are clinically diagnosed. If your family member clearly has a problem with memory, the point where they fail to recognise it might be the time to seek medical attention.
“There’s an old clinical saw that says as long as people are still worried about their memory problems, they are probably OK, and this research seems to give a bit more credibility to that idea.”
Dr Wilson and his team found that episodic memory — the ability to recall events from the past in a structured way — began to decline steeply from about two or three years before the dementia diagnosis, although the speed and scale varied significantly from patient to patient. “Episodic memory is the classic problem in dementia, and Alzheimer’s disease in particular, and indeed you would think that awareness of your ongoing memory problems is really a test of episodic memory,” Dr Wilson said.
Post-mortem examinations on 627 of the participants in the study suggested that even when they had not had dementia formally diagnosed, a decline in memory awareness was strongly linked to signs of the brain damage characteristic of dementia, and particularly to the telltale “tangles” that are associated with Alzheimer’s.
“Among persons who died and underwent a neuropathologic examination, decline in memory awareness was associated with multiple dementia-related pathologies, and no change in memory awareness was observed after controlling for these pathologies,” the authors wrote in the journal Neurology. “These observations suggest that unawareness of amnestic dysfunction [memory loss] is part of the natural history of late-life dementia and is driven by accumulation of dementia-related pathologies.”
The researchers believe that their results suggest that dementia patients suffer from depression less than might be expected because they are less conscious of their faulty memories than previously thought.
Laura Phipps, of Alzheimer’s Research UK, said the study might help doctors and relatives to track patients’ mental decline before they were diagnosed: “The findings show it’s common for people to lose the ability to recognise the memory difficulties they’re experiencing in the lead-up to a diagnosis and suggest that this is driven by underlying damage in the brain.
“The findings highlight the importance of testimony from relatives and close friends at the point of diagnosis to help doctors gain a clearer picture of someone’s memory problems.”
Thinking Magical or Rational
I loved magic shows when I was a kid. I remember being absolutely fascinated by mysterious events and the possibility that some of us might possess supernatural powers such as the ability to read minds, get a glimpse of the future, or, perhaps, suddenly teleport into another dimension. The human mind is a curious one. Although it is well-known that children have a lively imagination, what about adults? You might be surprised to learn that a recent national poll found that over 71% of Americans believe in “miracles”, 42% of Americans believe that “ghosts” exist, 41% think that “extrasensory perception” (e.g., telepathy) is possible and 29% believe in astrology.
Other recent polls have indicated that public belief in things like conspiracy theories and other pseudo-scientific phenomena is equally prevalent. For example, 21% of Americans think the government is hiding aliens, 28% of Americans believe that a mysterious, secret elite power is plotting a New World Order (NWO) and 14% of Americans believe in Bigfoot. Recent psychological research has found a surprising relationship between these types of personal convictions: espousal of conspiracy theories, belief in pseudo-science and belief in the paranormal all turn out to be highly correlated with one another. What could explain these findings?
While belief in, say, lizard people and belief in astrology may seem relatively unrelated on the surface, such “magical thinking” may well share a common underlying “cognitive style” — that is, a characteristic way of thinking about and making sense of the world. In fact, a new study explored this very question and suggests that the answer may indeed lie in the way we think about things, or, more precisely, the way in which we fail to think about things.
Two researchers at the University of Toulouse in France set out to investigate to what extent “cognitive thinking styles” are predictive of believing in the paranormal after experiencing an “uncanny” event. The research team designed a number of clever experiments to test their hypothesis. In the first study, the researchers invited students on campus to participate in an experiment that investigated astrological signs as a predictor of one’s personality. After providing their date of birth, participants received a personality description that matched their astral theme. In reality, each person was given the same 10 “Barnum” statements. These are statements that could ring true for nearly anyone (e.g., “you have a need for people to like you” or “at times you have serious doubts about whether you have made the right decision”). Participants were then asked to evaluate how accurate they thought this description was. Before starting the experiment, participants were also asked to complete a Cognitive Reflection Test (CRT) as well as a “Paranormal Belief” questionnaire. The cognitive reflection test is a very short three-item test that essentially measures whether you are more of an intuitive or reflective thinker. Consider the following example: if a bat and a ball together cost $1.10, and the bat costs $1 more than the ball, how much does the ball cost? The quick and intuitive answer that comes to mind for most people is simply $0.10. Yet, this is also the wrong answer. More reflective thinkers tend to suppress this automatic and intuitive answer and are more suspicious of the first thing that comes to mind. (If you’re curious, the correct answer is $0.05.)
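For readers who want to check the arithmetic behind that answer: call the ball's price x, so the bat costs x + $1.00, together they cost 2x + $1.00 = $1.10, and therefore x = $0.05. A minimal sketch of the same check, working in cents to avoid rounding:

```python
# Bat-and-ball check: ball = x, bat = x + 100 cents, bat + ball = 110 cents.
total_cents = 110
difference_cents = 100
ball = (total_cents - difference_cents) // 2   # 2x + 100 = 110  ->  x = 5
bat = ball + difference_cents
print(f"ball = ${ball / 100:.2f}, bat = ${bat / 100:.2f}")  # ball = $0.05, bat = $1.05
assert ball + bat == total_cents and bat - ball == difference_cents
```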
The researchers found that although both intuitive and reflective thinkers somewhat recognized the statements as being descriptive of their personality, reflective thinkers were much less likely to recognize the Barnum statements as correct. This relationship persisted after controlling for any prior differences in paranormal beliefs. The authors speculated that in contrast to reflective minds, intuitive thinkers might be more likely to accept their “uncanny” experience as proof for the existence of supernatural phenomena.
Best Learning Techniques
What is the best way to learn foreign vocabulary? Should you create a mnemonic? Should you start with the easy words, then work your way up? Or maybe you should just kick things off by staring at a nice calming waterfall.
Well, we are about to find out. An international competition between some of the finest memory scientists in the world is set to find out the ultimate memory techniques, and they are looking for volunteers to help them.
The online contest is run by Ed Cooke, a British memory grandmaster, who has offered $10,000 to the team that comes up with the best online method of teaching volunteers 80 foreign words in an hour, so that they are able to recall them a week later.
An initial field of 20, from universities around the world, was first asked to teach 80 Lithuanian words. The best managed to do twice as well as a baseline, in which people were given the words without guidance. Now the top five teams remain and the competition organisers want 10,000 people to sign up to test their methods. This time the participants have not been told what the language will be.
“While science has done incredible things in identifying ways of learning things, no one has really asked the question, ‘What is the best way of combining these tools together?’” said Mr Cooke. “So we decided to set up an applied cognitive science competition, and approached all the top memory science labs in the world.”
There have been similarities in their approaches. “All make good use of repetition and testing,” said Mr Cooke. Research has shown that constantly testing people as they learn does not improve recall directly after a learning session, but significantly improves it a week later. There have also been differences. “Some use spatial memory techniques, associating words with places. Others get people to form visual images of word association,” he said.
Then there are less conventional ones. One of the teams first shows people the foreign word without its translation and asks them to guess a meaning for it. “Because you are made to guess, you seem to pay greater attention, then remember both your first guess and the difference,” said Mr Cooke. Another starts by showing people a calming waterfall for three minutes.
The only British team still in the competition, a collaboration between UCL and Oxford, uses an algorithm to determine which words are the easiest, then begins with those and tests people continuously — introducing new words only when the old ones are learnt.
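Cooke has not published the team's actual algorithm, so the sketch below is only a schematic of the approach as described: rank the words by an assumed difficulty score, drill a small working set with continuous testing, and introduce a new word only once an old one has been answered correctly a couple of times in a row. The word list, difficulty scores, set size and mastery threshold are all invented for illustration.

```python
import random

# Schematic "easiest-first, test continuously" vocabulary drill. The word list,
# difficulty scores and thresholds below are illustrative, not the UCL/Oxford
# team's actual parameters.
WORDS = {"namas": "house", "duona": "bread", "vanduo": "water", "knyga": "book"}
DIFFICULTY = {"namas": 0.2, "duona": 0.3, "vanduo": 0.5, "knyga": 0.8}
MASTERY = 2          # correct answers in a row before a word counts as learnt
ACTIVE_SET_SIZE = 2  # how many words are drilled at once

def drill(rounds: int = 20) -> None:
    queue = sorted(WORDS, key=DIFFICULTY.get)           # easiest words first
    active = [queue.pop(0) for _ in range(ACTIVE_SET_SIZE)]
    streak = {word: 0 for word in WORDS}
    for _ in range(rounds):
        word = random.choice(active)
        answer = input(f"What does '{word}' mean? ").strip().lower()
        if answer == WORDS[word]:
            streak[word] += 1
            if streak[word] >= MASTERY and queue:       # learnt: swap in a new word
                active.remove(word)
                active.append(queue.pop(0))
        else:
            streak[word] = 0
            print(f"  It means '{WORDS[word]}'.")       # immediate feedback, retest later

if __name__ == "__main__":
    drill()
```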
Hannah Tickle, a PhD student at UCL, said she hoped the competition would produce a useful learning tool. “One of the things that we are keen on is the idea that you can tailor learning to an individual and make it generalisable.”
The winner will not necessarily be the team whose technique gets the best recall — the judges will also consider dropout rate. There is no use in devising a technique that no one wants to use.
“I want to run this for the next ten years, and every year people will be able to take as a baseline the winner from the previous year and make improvements,” Mr Cooke said. “Something similar happened in the World Memory Championship. The world record for memorising a pack of cards has gone from 150 seconds to 21 seconds. That’s cool, but it’s an arcane niche activity. This could be general knowledge available to humanity about how to learn three to four times faster.”
Should Alzheimer’s Patients Be Allowed to Have Sex?
Can an Alzheimer’s patient with dementia so severe she can’t remember her daughters’ names or how to eat a hamburger consent to have sex with her husband? That’s the stark question raised by the case of Henry Rayhons, the former Iowa state legislator who, as the New York Times reported yesterday, has been charged with third-degree felony sexual assault for allegedly raping his wife, Donna Lou Rayhons, in her nursing home.
Rayhons says the sex was consensual. Clinicians at the nursing home where she resided — she has since died — say that they had determined she didn’t have the capacity to consent, and that they had informed Rayhons of this. Charges of this sort are “possibly unprecedented,” as the Times put it, but the underlying questions “will become only more pressing as the population ages and rates of dementia rise.”
At first, the idea of a patient with dementia agreeing to engage in sex would seem to run counter to established notions of consent. If someone no longer even has legal control over their own care or certain aspects of their day-to-day life, how can they make a decision that requires two fully consenting partners?
But a growing number of advocates for the elderly and the cognitively impaired argue that the only humane approach is to have guidelines that do allow for intimate and sexual relationships involving members of these populations, at least in certain cases. “There’s nothing about being cognitively impaired that means that you wouldn’t necessarily appreciate being connected with other people through both nonsexual means and sexual means,” said Dr. Tia Powell, who directs the Montefiore Einstein Center for Bioethics at the Albert Einstein College of Medicine. It’s a view that’s gaining traction, even among larger organizations like the U.K.’s Alzheimer’s Society and the U.S.-based Alzheimer’s Association.
To John Portmann, a religious studies professor at the University of Virginia and the author of The Ethics of Sex and Alzheimer’s, this debate is part of a much broader array of difficult conversations society is going to have to have now that reaching old age — and, as a result, possibly developing dementia — is a normal occurrence in a large swath of the world.
“Alzheimer’s is really forcing us to think about sex and fidelity in a very new way,” said Portmann. “People didn’t live very long in the ancient world, so this problem never arose. And now people are living longer and longer, and until we find a cure for Alzheimer's this problem is just going to get more and more urgent.” He raised the example of former Supreme Court Justice Sandra Day O’Connor’s husband, an Alzheimer’s sufferer who became close with another woman, who also had Alzheimer’s, in the nursing home where he resided. “Do we want to call that adultery?” asked Portmann.
“Everybody is talking about the gays and the lesbians and how they’re changing the morals in America,” he said. “I think actually a more profound kind of rebellion against traditional values is happening in the Alzheimer’s community.”
So what does this strange new landscape mean for the concept of consent? Everyone agrees that people with dementia need to be protected from predators, of course. But Powell said that outside these cases, she’d use a few criteria to determine whether a given activity is acceptable: She’d encourage facility administrators to allow any activity that doesn’t raise any flags for abuse, that seems to bring comfort or enjoyment to the individual with dementia, and that isn’t causing significant harm to others.
In theory, it sounds almost straightforward — a utilitarian approach to Alzheimer’s sex. But in practice, say Powell and others, a great deal of stigma and institutional lead-footedness are making this issue more complicated than it needs to be. “People don’t like to think of impaired people having sex or wanting sex,” said Powell — especially when the impaired person in question is their mom or dad. So when a nurse calls a resident’s daughter and says, “Hey, your mom has a new boyfriend in the nursing home. They seem to want to spend the night together — what do you think?” as Powell put it, the daughter might blanch — and not as a result of a clear weighing of the pros and cons, but because she’s simply weirded out by the idea.
On top of that, many institutions might take a more conservative, or even punitive, approach to these issues simply as an excuse to not venture into such a fraught landscape. “The institution may be worried in some cases more about its own liability than promoting the autonomy and values and preferences of the person in their care,” said Powell.
As a result of all this messiness, there’s “mass confusion” about what should and shouldn’t be seen as acceptable behavior among facility administrators, said Daniel Kuhn, a licensed clinical social worker who has conducted trainings on dementia and sex for nursing homes. “Very, very few nursing homes have delved into this topic because it is so darn complicated,” he said. “It touches on the ethical and moral and legal areas, and there’s no hard and fast tools available to make a determination.”
Part of the problem in developing clearer guidelines, he said, is that no one has data on whether and to what extent workers in these settings agree with Powell (and Kuhn) that Alzheimer’s patients should be granted some level of agency with regard to sex. “I proposed doing a large-scale survey of attitudes long ago but it got shot down,” he said in a follow-up email. Instead, he administers his survey — it asks participants to rank their level of agreement with statements like “Residents who have dementia are not capable of making sound decisions regarding participation in sexual relationships” — to staff at the nursing homes where he does his trainings.
“I've been talking and writing about these issues for 20-plus years,” he said, “and nothing has been done by any professional organization or government entity to offer any help to people at the local level who are involved in this difficult work.” Powell concurred: while she said she saw encouraging signs of some facilities taking these issues more seriously, “There’s a lot of work to be done.”
Kuhn said he saw the rape charges, whatever the outcome of the trial, as a tragic outcome, and clear evidence that society needs to overcome its squeamishness on this issue. “Most [facilities] have just sort of turned a blind eye until there is some kind of a crisis, and then they scurry around figuring out what to do, hoping it all goes away,” he said. “Except in this case it didn’t go away — it blew up.”
HOW is it that people can believe something that they know is not true?
For example, Kansas City Royals fans, sitting in front of their television sets in Kansas City, surely know that there is no possible connection between their lucky hats (or socks, or jerseys) and the outcome of a World Series game at Citi Field in New York, 1,200 miles away. Yet it would be impossible to persuade many of them to watch the game without those lucky charms.
It’s not that people don’t understand that it’s scientifically impossible for their lucky hats to help their team hit a home run or turn a double play — all but the most superstitious would acknowledge that. It’s that they have a powerful intuition and, despite its utter implausibility, they just can’t shake it.
Consider a 1986 study conducted by the psychologist Paul Rozin and his colleagues at the University of Pennsylvania. The participants were asked to put labels on two identical bowls of sugar. The labels read “sucrose” and “sodium cyanide (poison).” Even though the participants were free to choose which label to affix to which bowl, they were nevertheless reluctant, after labeling the bowls, to use sugar from the one that they had just labeled poison. Their intuition was so powerful that it guided their behavior even when they recognized that it was irrational.
Psychologists who study decision making and its shortcomings often rely on the idea, popularized by the psychologist Daniel Kahneman in his book “Thinking, Fast and Slow,” that there are two modes of processing information. There is a “fast system” that is intuitive and quickly generates impressions and judgments, and a “slow system” that operates in a deliberate and effortful manner, and is responsible for overriding the output of the fast system when the slow system detects an error.
Much of the time, the fast system is good enough. When you’re deciding whether to grab your umbrella when leaving the house, you can glance up at the sky to see how gray and cloudy it is. You’re using a shortcut based on similarity (does it look like it is going to rain?) as a substitute for thinking about probability — and generally, this is a good rule of thumb.
But the fast system is also prone to systematic biases and errors. If a gray sky makes you think it will rain and you don’t take into account that you’re visiting San Diego (rather than Seattle), then your judgment is likely to be biased. (Technically speaking, you’re neglecting the base rate that is necessary for a sound probability judgment.)
This is when the slow system can step in. If someone points out that rain in San Diego is very rare, even when the sky looks gray, you might revise your guess and leave your umbrella at home. Your slow system detects an error — and corrects it.
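To make the base-rate point concrete, Bayes' rule does the bookkeeping. The numbers in the sketch below are assumptions chosen for illustration, not data from the article: suppose rain falls on about 10 per cent of San Diego days and 40 per cent of Seattle days, and that a gray morning sky precedes rain 80 per cent of the time but also shows up on 30 per cent of dry days.

```python
# Illustrative Bayes' rule check of the umbrella example. Every probability
# here is an assumed, made-up value used only to show how base rates matter.
def p_rain_given_gray(p_rain: float, p_gray_if_rain: float, p_gray_if_dry: float) -> float:
    """P(rain | gray sky) = P(gray | rain) * P(rain) / P(gray)."""
    p_gray = p_gray_if_rain * p_rain + p_gray_if_dry * (1 - p_rain)
    return p_gray_if_rain * p_rain / p_gray

# Same gray sky, very different conclusions once the base rate is included:
print(f"San Diego: {p_rain_given_gray(0.10, 0.80, 0.30):.2f}")  # about 0.23
print(f"Seattle:   {p_rain_given_gray(0.40, 0.80, 0.30):.2f}")  # about 0.64
```

Under these assumed numbers, the same gray sky leaves rain unlikely in San Diego but probable in Seattle, which is exactly the kind of correction the slow system is supposed to supply.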
But as one of us, Professor Risen, discusses in a paper just published in Psychological Review, many instances of superstition and magical thinking indicate that the slow system doesn’t always behave this way. When people pause to reflect on the fact that their superstitious intuitions are irrational, the slow system, which is supposed to fix things, very often doesn’t do so. People can simultaneously recognize that, rationally, their superstitious belief is impossible, but persist in their belief, and their behavior, regardless. Detecting an error does not necessarily lead people to correct it.
This cognitive quirk is particularly easy to identify in the context of superstition, but it isn’t restricted to it. If, for example, the manager of a baseball team calls for an ill-advised sacrifice bunt, it is easy to assume that he doesn’t know that the odds indicate his strategy is likely to cost his team runs. But the manager may have all the right information; he may just choose not to use it, based on his intuition in that specific situation.
In fact, sometimes the slow system can exacerbate the problem rather than fix it. Instead of making the manager’s decision more rational, the slow system may double down by trying to rationalize the intuition, generating reasons that it is correct to bunt, at least in this particular case.
Once we realize that detecting an error does not necessarily result in correcting that error — they are two separate processes, not one process, as most “dual system” models assume — then we are in a better position to fix those errors. For example, rather than pointing out to the baseball manager that calling for a sacrifice bunt is irrational (as if he didn’t know that already), you might have him devise a policy ahead of time for what he should do in all such situations, and encourage him to stick to it. It is easy to rationalize a powerful but misguided intuition in a specific situation. But it is much harder to concoct such rationalizations when setting a policy for, say, a whole baseball season.
When the manager of a baseball team chooses a strategy despite knowing that, statistically, it will cost his team runs, he’s not being superstitious. He may even be able to rationalize his decision, convincing himself that his decision is correct. But what he’s doing is pretty similar, psychologically, to the fan who wears a lucky hat or the fielder who won’t step on the foul line: He’s got a powerful intuition and he just can’t shake it.
Bias
Almost no one who thinks about bias — what forms it takes, how it trips up effective decision-making, and so on — does so more often or more carefully than behavioral economists do. So it's always interesting to hear them talk about the subject. Back in July, for example, Melissa Dahl flagged a conversation between Danny Kahneman and The Guardian's David Shariatmadari in which Kahneman explained that if he could rid the world of one human bias, it's overconfidence.
In an interview with Richard Thaler, another behavioral-econ godfather (can you have more than one godfather?) and the author of Misbehaving: The Making of Behavioral Economics, Katherine Milkman of the Wharton School of Business mentioned that exchange and asked Thaler if he agreed that overconfidence would be the best bias to axe.
His response is worth excerpting at length:
It’s never a good idea to disagree with Danny. I think that would also be at the top of my list. Let’s add some related biases that contribute to overconfidence, like the confirmation bias. One of the reasons we’re overconfident is that we actively seek evidence that supports our views. That’s true of everybody, that’s part of human nature, so that’s one reason we’re overconfident; we’re out there looking for support that we’re right. We rarely go out of our way to seek evidence that would contradict us. If people want to make a New Year’s resolution, it would be to test their strong beliefs by asking what would convince them that they were wrong, then looking around and seeing whether they might find some evidence for that.
The other one would be a hindsight bias, a notion that was first introduced by Baruch Fischhoff, who was a graduate student of [University of Minnesota psychology professor] Paul E. Meehl. Hindsight bias is the [inclination to believe] that after the fact we all think things were obvious. Now, if you ask people, “What did they think 10 years ago was the prospect that we would have an African-American president before we would have a woman president?” People would say, “Oh, yeah, well, that could have happened. All you needed was the right guy to come along at the right time.” Or some people, of course, will say the wrong guy, but in any case…. In truth, no one thought that back then. The evidence for hindsight bias is overwhelming, and this has huge managerial implications because when managers evaluate the decisions of their employees, they do so with hindsight.
So some project failed and after the fact it’s obvious why it failed and it’s obvious that the employee should have thought of it. Whereas, before the fact, it wasn’t obvious to anybody; otherwise, we wouldn’t have done it. The advice I always give my students in dealing with hindsight bias is before big decisions, get everybody to write stuff down — including the boss — and agree on what the criteria are for good and bad decisions. That will help at least a little bit — after the fact — when things blow up. We’ll have it on record that nobody anticipated the fact that our competitor was going to introduce a better version of our great idea two months before the launch, and we had no way of knowing that was going to happen.
This is all very useful information, of course. But what's most interesting, to me at least, is Thaler's suggested resolution: testing your strong beliefs by asking what would convince you that you were wrong. Think about some belief you're positive of. Try to come up with a list of the pieces of evidence that could convince you you're wrong about it. It's hard, isn't it? And it gets harder the more closely held the belief, the more it feels like an important part of who you are and how you see the world. For a lot of people and a lot of subjects, an honest answer would be "Well, there's nothing that would convince me I'm wrong about this." Which is fine! We're human. And no new piece of evidence is going to come along to prove that, say, murdering an innocent person is wrong or that rain is actually produced by the earth and flies up into the sky, where it forms clouds.
But: Even as we feel our most closely held beliefs couldn't possibly be disproven, we know that human history is nothing but closely held beliefs being disproven. It takes pioneers to shoot down these ideas, and it's always pioneers who have specific sorts of cognitive tendencies that allow them to see through widely embraced illusions. I wonder whether and to what extent we can inculcate these tendencies in ourselves. I wonder if — and I honestly can't point to any studies I'm aware of that can support or debunk this idea — an exercise like the one Thaler is describing might, in the long run and if done consistently and in a rigorous manner, help our brains help us see the world more accurately.
So maybe, as Thaler suggests, this is all pointing to yet another New Year's resolution: In 2016, sit down and think really hard about why you might be wrong.
Early Warning Test for Alzheimer’s
A cheap spatial memory test could give patients up to two years’ warning of the full onset of Alzheimer’s disease, a landmark study has found.
Scientists believe that they could use it to catch patients at a “sweet spot”, where there is still time to stave off the symptoms of dementia. The disease afflicts 800,000 people in Britain with severe memory loss and cognitive decline. This is projected to rise to a million by 2025.
At present there are no drugs to treat Alzheimer’s, although two clinical trials have shown promising early results and are expected to conclude next year. If they succeed, dementia experts predict that the first drugs could be prescribed within a decade. One of the biggest problems is that the disease is usually diagnosed at an advanced stage, making it much harder for doctors to manage the brain damage.
Researchers have found that a cheap spatial memory test invented in Britain a decade ago can not only reliably diagnose Alzheimer’s but also give months or even years of warning before dementia becomes evident. Early data suggest that it is 93 per cent accurate.
Experts say that the breakthrough could give doctors a precious window of opportunity to prescribe their patients mental exercises or even next-generation dementia drugs while they could still be effective.
Dennis Chan, a clinical neuroscience lecturer at the University of Cambridge, said that there was an urgent need for a way of screening the millions of middle-aged people who went to their GPs with mild cognitive impairment.
The memory tests for dementia used in doctors’ surgeries are no more predictive than a flip of a coin, although some specialist clinics have achieved better results. The only other options in the NHS are a surgical procedure called a lumbar puncture, which costs £700 and involves draining off a sample of the patient’s spinal fluid to look for the telltale proteins that mark the disease, or a brain scan that costs £1,500.
Known as the “Four Mountains” test (4MT), the new method involves showing patients a picture of a mountain landscape and asking them to identify it among a selection of four landscapes, one of which is the same one seen from a different angle.
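The article describes only the format of the test: a target landscape, then a four-way choice in which the correct option is the same place viewed from a different angle. The snippet below is a toy illustration of how such forced-choice trials might be scored; the image names and trial data are invented, and this is not the actual 4MT app.

```python
# Toy scoring of four-alternative forced-choice trials in the style the article
# describes. Trial content is made up; this is not the real 4MT.
trials = [
    {"options": ["hill_C", "hill_A_new_view", "hill_D", "hill_B"], "correct": 1},
    {"options": ["hill_E_new_view", "hill_F", "hill_G", "hill_H"], "correct": 0},
]

def score(responses):
    """Fraction of trials on which the participant picked the re-viewed target."""
    hits = sum(1 for trial, choice in zip(trials, responses) if choice == trial["correct"])
    return hits / len(trials)

print(score([1, 2]))  # 0.5: first trial answered correctly, second incorrectly
```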
Dr Chan’s team carried out a pilot study involving 15 British patients with mild cognitive impairment. The 4MT identified the patients in whom Alzheimer’s was diagnosed over two years with as much accuracy as the surgical technique, and more than twice as much as conventional memory tests. “The caveat is that we have only proven the principle and the real test is in the work being done now,” Dr Chan said.
The test is available as an iPad app that costs £40. Spread over the hundreds of patients who consult each GP practice about memory problems every year, it would cost a few pence a time. The group will publish their findings in a scientific journal this year, and Dr Chan will speak about the test at next month’s Cambridge Science Festival.
Nikolai Axmacher, a neuroscientist at the University of Bonn, said: “It may be possible in the near future to stop further spread of Alzheimer’s pathology . . . A [way of selecting] patients who are most likely to benefit from these new treatments will thus be very important.”
Negative Thoughts
We often define ourselves by the mental chatter that goes on constantly inside our heads. By our thoughts, we have ideas of who we are and what everything around us means.
And the only way that you can begin to recondition your subconscious mind for success is by detaching yourself from the idea that you are your thoughts.
Feelings of success, mindfulness and happiness come from the realization that thoughts come and go of their own accord—that you are not your thoughts. You can watch as your thoughts appear in your mind, almost from thin air, and watch again as they disappear, like a soap bubble bursting. Your thoughts come and they go, and ultimately, you have a choice about whether to act on them or not.
If you have been operating on autopilot for a while, you’ve probably settled into a nice groove, and it takes some serious effort to change that. It's sort of like how you feel on a cold winter morning, while you're nestled in between your warm sheets, when the thought of getting out of bed is uncomfortable, and when doing so requires motivation and willpower.
But whenever you fully grasp the idea that you are something far greater than your thoughts—beyond words at all—you begin to understand that you have the power to choose which thoughts you will think.
Try it out:
Think of a purple banana... Got it?
Think of a flying elephant... Got it?
Think of a green bicycle... Got it?
You were able to conjure up images of these incredibly silly ideas because you have control over your mind. Your mind will do whatever you tell it to do, so altering your subconscious mind, and therefore your life, is no more difficult a task than telling your mind to do new things—new things, like complete belief in yourself and your abilities, regardless of what anyone else thinks or says.
A simple way to grasp this whole idea is to compare your mind and your thoughts to a computer. Your subconscious mind can be compared to the hard drive, the actual machine itself; your conscious mind can be compared to the programs that are loaded on the machine; and you can be compared to the programmer, who chooses the programs that are installed on the computer.
Your thoughts and beliefs are nothing but programs that are installed on your hard drive, and since they determine the course of your life, it would be wise to install the most beneficial programs you can find. All it takes to reprogram your mind is a sincere desire to do so and an indomitable persistence to stick with it day after day.
When you catch yourself thinking the following negative thoughts, replace them with these positive thoughts.
When you think this… → Say this instead…
• Think about the future! → Enjoy the moment you’re in right now.
• Don’t do something you’ll regret! → In the end, we only regret the things we didn’t get to do.
• I wish I hadn’t done that! → I can’t change the past—learn and move on.
• Will things ever work out? → What’s meant to be will happen.
• Why did they do that? → I can’t control others but I can control how I react to it.
• Will I ever find happiness? → What can I be happy about right now?
• What’s wrong with me? → I am perfect exactly the way I am in this moment.
You can flip your self-talk in this way and begin training your thoughts toward empowering, positive statements. Changing the way you think is vital to your success. Negative thinking can stop you before you even get started.
Wise Psychological Interventions
Throwing the right switch in your brain can solve even the biggest problems. “If you think, ‘hey, intelligence and skill can develop’, your whole attitude changes”
DURING the second world war, the US government found itself wrestling with a meaty problem. It was trying to encourage citizens to eat offal so that better cuts of meat could be shipped to the troops abroad. But the message wasn’t getting through.
So the government recruited some serious brainpower: renowned anthropologist Margaret Mead and the father of social psychology, Kurt Lewin. Instead of telling people that eating offal was a patriotic duty, Mead and Lewin tried to understand their psychological resistance to eating it in the first place. They found that offal was stigmatised as the food of the poor, and also that people were unsure how to cook it. And so they launched a new campaign to rebrand offal “variety meat” and teach the public how to prepare it. As more people experimented with it, offal lost its stigma and became a dietary mainstay.
It may sound like a straightforward marketing campaign, but for today’s psychologists the initiative has gained near-legendary status. Many cite it as a forerunner to something they call “wise psychological interventions” – apparently simple actions that produce long-lasting changes in behaviour.
Psychologists now believe that WPIs could be the solution to all sorts of problems, from educational underachievement to obesity. Over the past few years they have been quietly assembling a toolkit, and could soon be trying them out on us all.
At the heart of WPIs is the idea of “mental unblocking” – removing psychological barriers that keep people stuck in damaging patterns of behaviour. Simplistic though this may seem, it is actually surprisingly hard to achieve. “Some people think that if it’s just about psychology, people should be able to do it for themselves,” says Greg Walton, a psychologist at Stanford University in California. “But it’s not that easy.” Just because it would be beneficial for you to unthink something doesn’t mean you can just do it, he says. That is where wise interventions come in.
The use of psychology to make us better people may sound familiar. Superficially WPIs are a lot like “nudges” – external interventions designed to guide people towards better choices (New Scientist, 22 June 2013, p 32). That might mean placing fruit at eye level in a canteen, for example, or making people opt out of a pension scheme rather than opt in.
Lasting change
However, wise interventions are different in a number of ways. Nudges are usually specific to a given choice at a given time, whereas WPIs aim to alter behaviour in a lasting way. More significantly, nudges tend to rely on environmental cues, whereas WPIs are rooted in theories about basic human psychology.
Another early demonstration of their potential was provided by Timothy Wilson of the University of Virginia in Charlottesville. Back in 1982, he was trying to find a way to help new college students cope better with worries about their academic performance. Wilson’s solution was inspired by attribution theory, which describes how people account for events – say, whether they blame failures and setbacks on enduring facts about themselves, or on external factors.
When people look inward for the causes of their problems, it can puncture self-esteem and create a barrier to solving them. Wilson wondered whether getting students to attribute their struggles to their current situation, rather than facts about themselves, would unblock them. So he presented them with statistics showing that the majority of new students start with disappointing grades but do better over time. He also showed them videos of older students talking about their improving academic performance. Wilson found that the group’s grades got better more quickly than those of students who did not receive these messages. They were also less likely to have dropped out by the end of the second year.
Laying the foundations
For a long time, this remained an isolated success. “Tim did this amazing study in the early 80s, then everybody forgot about it,” says Dave Yeager of the University of Texas at Austin. “No one was doing field experiments.” Instead, researchers focused on the basic psychological processes that govern our behaviour – work which laid the foundation for today’s WPI research.
Some of the most influential work was done by Stanford psychologist Carol Dweck. Since the 1970s, she has been studying what drives people to persist in the face of difficulties. She found that much depends on whether people have what she calls a “fixed” or a “growth” mindset – that is, whether they see their abilities and personality as set in stone, or malleable. When people with a fixed mindset encounter challenges such as a difficult maths puzzle, they often conclude that they have reached the limit of their abilities and give up. “But if you think, ‘hey, intelligence and skill can develop’, then your whole attitude changes,” says Dweck. “You want to take on the challenges that help you grow.”
In other words, a fixed mindset is a mental block that stops us from achieving something. And it can be reinforced or removed. Dweck’s work also showed that praising successful children for being bright or talented nurtures the fixed mindset, whereas focusing on their hard work and perseverance fosters a growth mindset.
During the 2000s, Dweck began to explore whether promoting a growth mindset might help kids in school. In an influential 2007 study, she tested this idea among low-achieving 12- and 13-year-olds. Half of them were told about how the brain changes and learns, and how intelligence can be boosted; the rest learned about the brain, too, but with the emphasis on memory.
It worked. The “growth” group showed increased motivation in class and got better test scores. Significantly, those who endorsed a fixed mindset most strongly beforehand benefited the most.
Fixed and growth mindsets are now a common starting point for WPIs. For example, Yeager has applied them to bullying – not so much to stop the bullies, but to help victims cope better.
Understandably, bullied kids often retaliate aggressively. In studies of students aged 10 to 14, Yeager showed that an intervention similar to Dweck’s, in which kids learned about how the brain and personality change over time, reduced aggressive retaliation. “By teaching teenagers that people can change, it makes them feel less like they need to escalate things if they’re bullied,” says Yeager.
Another type of WPI has been pioneered by Stanford psychologist Geoffrey Cohen, this time aimed at reducing the achievement gap between white and black university students. Many social and economic factors underlie this gap, but there is also a powerful psychological driver: the stereotype that black people are less academically able than their white peers. For black students this can become a self-fulfilling prophecy: they often do worse on maths tests when surrounded by white students. This has been attributed to “stereotype threat”, which creates anxiety and harms performance. (White students are at risk too, often underperforming in the presence of East Asians, who are often stereotyped as maths whizzes.)
Cohen set out to design an intervention to close the gap. One proven strategy against stereotype threat is to get people to write about values that are important to them, a process called self-affirmation. When Cohen asked middle-school students to do this, he found that even a short session improved the grades of black students relative to controls, closing the achievement gap by 40 per cent. And two years later, after a few top-up sessions, the intervention was still having a clear effect. Cohen has since applied the same approach to the achievement gap between men and women in university science courses.
Yet another kind of intervention boosts the sense of social belonging. When people go through big transitions in life – going to university, say, or moving to a new city – there’s often a period when they are not sure they fit in. Members of minority groups are especially vulnerable.
Cohen and Walton got first-year students to read a report summarising a survey of older students’ experiences at university. The report described how they felt out of place at first, and how these feelings passed as they settled in and made new friends. Reading it not only improved the grades of black students, halving the racial achievement gap, but also increased their self-reported happiness and health. Remarkably, these effects persisted three years on, and much larger studies have replicated them.
All of this is evidence that WPIs offer a new and powerful way to approach difficult social problems, Walton says. “We typically approach such problems with the assumption that there’s a lack of capacity, and we try to bolster that capacity. So we might think, schools are failing, we need to invest more in schools. But in many situations we actually have adequate capacity. And yet that capacity goes unrealised, as people are psychologically not in a position to take advantage.”
Although many WPIs focus on academic performance, there have been experiments in applying them to criminality, teenage pregnancy, relationship problems – even international conflict. Eran Halperin of the Interdisciplinary Center in Herzliya, Israel, has been developing WPIs to reduce tensions in the Israeli-Palestinian conflict. He has shown that nurturing a growth mindset makes people on both sides more open to listening, more willing to compromise for peace, and more likely to forgive.
Not surprisingly, WPIs are attracting attention outside academia. In the UK, the Behavioural Insights Team (BIT) – a partly government-owned firm sometimes dubbed the “Nudge Unit” – is exploring their potential. “Nudges have been very successful in a number of areas,” says Jessica Barnes, a senior adviser at BIT, “but we recognise there are a lot of complex issues that nudges are not necessarily going to address, so we’re also interested in more intensive psychological interventions.”
In September last year, President Obama launched the US Social and Behavioral Sciences Team, which is exploring ways to use nudges and WPIs. Similar units have been set up in Germany, Australia, Singapore, Finland and the Netherlands.
So when can you expect to be wised up? Even advocates of intervention admit that some questions need to be answered before WPIs can be widely rolled out. For starters, we need to know how easy they are to scale up so that it’s not just a select few that can benefit. Early research suggests that WPIs delivered as online modules can reach a mass audience, but it’s early days yet.
Researchers are also keen to avoid the hype and controversy that have surrounded nudges. They are at pains to point out that WPIs are not magic, and cannot help all the people all the time. “They address specific psychological sticking points, and if a person isn’t stuck, then the intervention isn’t necessary,” says Yeager.
These caveats aside, psychologists are increasingly optimistic that WPIs can tackle any problem with a psychological component – in other words, nearly every significant social or personal challenge you can think of. “There are many problems that people have struggled with for generations,” says Walton. “This is a new way to approach them.”
Hallucinations show our brain’s workings
Far from being flights of fancy, hallucinations reveal the true nature of our reality.
AVINASH AUJAYEB was alone, trekking across a vast white glacier in the Karakoram, a mountain range on the edge of the Himalayan plateau known as the roof of the world. Although he had been walking for hours, his silent surroundings gave little hint that he was making progress. Then suddenly, his world was atilt. A massive icy boulder loomed close one moment, but was desperately far away the next. As the world continued to pulse around him, he began to wonder if he could believe his eyes. He wasn’t entirely sure he was still alive.
A doctor, Aujayeb checked his vitals. Everything seemed fine: he wasn’t dehydrated, nor did he have altitude sickness. Yet the icy expanse continued to warp and shift. Until he came upon a companion, he couldn’t shake the notion that he was dead.
In recent years it has become clear that hallucinations are much more than a rare symptom of mental illness or the result of mind-altering drugs. Their appearance in those of sound mind has led to a better understanding of how the brain can create a world that doesn’t really exist. More surprising, perhaps, is the role they may play in our perceptions of the real world. As researchers explore what is happening in the brain, they are beginning to wonder: do hallucinations make up the very fabric of our reality?
Hallucinations are sensations that appear real but are not elicited by anything in our external environment. They aren’t only visual – they can be sounds, smells, even experiences of touch. It’s difficult to imagine just how real they seem unless you’ve experienced one. As Sylvia, a woman who has had musical hallucinations for years, explains, it’s not like imagining a tune in your head – more like “listening to the radio”.
There is evidence to support the sensation that these experiences are authentic. In 1998, researchers at King’s College London scanned the brains of people having visual hallucinations. They found that the brain areas active during a hallucination are also active when people view a real version of the hallucinated image. Those who hallucinated faces, for example, activated areas of the fusiform gyrus, known to contain specialised cells that respond when we look at real faces. The same was true for hallucinations of colour and written words. It was the first objective evidence that hallucinations are less like imagination and more like real perception.
Their convincing nature helps explain why hallucinations have been given such meaning – even considered messages from gods. But as it became clear that they can be symptoms of mental illnesses such as schizophrenia, they were viewed with increasing suspicion.
We now know that hallucinations occur in people with perfectly sound mental health. The likelihood of experiencing them increases in your 60s; 5 per cent of us will experience one or more hallucinations in our life.
Many people hallucinate sounds or shapes before they drift off to sleep, or just on waking. People experiencing extreme grief have also been known to hallucinate in the weeks after their loss – often visions of their loved one. But the hallucinations that may reveal the most about how our brain works are those that crop up in people who have recently lost a sense.
I have personal experience of this. At 87, my grandmother began to hallucinate after her already poor sight got worse due to cataracts. Her first visitors were women in Victorian dress, then young children. She was experiencing what is known as Charles Bonnet syndrome. Bonnet, a Swiss scientist who lived in the early 1700s, first described the condition in his grandfather, who had begun to lose his vision. One day the older man was sitting talking to his granddaughters when two men appeared, wearing majestic cloaks of red and grey. When he asked why no one had told him they would be coming, the elder Bonnet discovered only he could see them.
It’s a similar story with Sylvia. After an ear infection caused severe hearing loss, she began to hallucinate a sound that was like a cross between a wooden flute and a bell. At first it was a couple of notes that repeated over and over. Later, there were whole tunes. “You’d expect to hear a sound that you recognise, maybe a piano or a trumpet, but it’s not like anything I know in real life,” she says.
Max Livesey was in his 70s when Parkinson’s disease destroyed the nerves that send information from the nose to the brain. Despite his olfactory loss, one day he suddenly noticed the smell of burning leaves. The odours intensified over time, ranging from burnt wood to a horrible onion-like stench. “When they’re at their most intense they can smell like excrement,” he says. They were so powerful they made his eyes water.
Sensory loss doesn’t have to be permanent to bring on such hallucinations. Aujayeb was in fine health, trekking across the glacier. “I felt very tall – the ground appeared far from my eyes. It was like I was seeing the world from over my shoulder,” he explained. His hallucinations continued for 9 hours, but after a good night’s sleep, they were gone.
When our senses are diminished, all of us have the potential to hallucinate. It can take just 30 to 45 minutes for people to experience hallucinations if they try a simple visual deprivation technique (see “Ping-pong perception”, below). In a study run by Jiří Wackermann at the Institute for Frontier Areas of Psychology and Mental Health in Freiburg, Germany, one volunteer saw a jumping horse. Another saw an eerily detailed mannequin.
Yet why should a diminished sense trigger a sight, sound or smell that doesn’t really exist? “The brain doesn’t seem to tolerate inactivity,” said the late neurologist Oliver Sacks when I spoke to him about this in 2014. “The brain seems to respond to diminished sensory input by creating autonomous sensations of its own choosing.” This was noted soon after the second world war, he said, when it was discovered that high-flying aviators in featureless skies and truck drivers on long, empty roads were prone to hallucinations.
Now researchers believe these unreal experiences provide a glimpse into the way our brains stitch together our perception of reality. Although bombarded by thousands of sensations every second, the brain rarely stops providing you with a steady stream of consciousness. When you blink, your world doesn’t disappear. Nor do you notice the hum of traffic outside or the tightness of your socks. Well, you didn’t until they were mentioned. Processing all of those things all the time would be a very inefficient way to run a brain. Instead, it takes a few shortcuts.
Let’s use sound as an example. Sound waves enter the ear and are transmitted to the brain’s primary auditory cortex, which processes the rawest elements, such as patterns and pitch. From here, signals get passed on to higher brain regions that process more complex features, such as melody and key changes.
Instead of relaying every detail up the chain, the brain combines the noisy signals coming in with prior experiences to generate a prediction of what’s happening. If you hear the opening notes of a familiar tune, you expect the rest of the song to follow. That prediction passes back to lower regions, where it is compared to the actual input, and to the frontal lobes, which perform a kind of reality check, before it pops up into our consciousness. Only if a prediction is wrong does a signal get passed back to higher areas, which adjust subsequent predictions.
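For readers who like to see the logic spelled out, here is a minimal toy sketch, in Python, of the kind of loop the article describes: a running prediction is compared with the noisy incoming signal, and only a mismatch large enough to count as a surprise is passed back to adjust later predictions. The function name, numbers and threshold are all invented for illustration; this is not code from the research described.

import random

def run_predictive_loop(signal, learning_rate=0.3, error_threshold=0.5):
    """Track a noisy signal by updating a prediction only when the
    prediction error exceeds a threshold (a crude stand-in for the
    'reality check' described above)."""
    prediction = 0.0
    history = []
    for actual in signal:
        error = actual - prediction              # compare prediction with input
        if abs(error) > error_threshold:         # only big surprises propagate
            prediction += learning_rate * error  # adjust subsequent predictions
        history.append((actual, prediction, error))
    return history

if __name__ == "__main__":
    # A familiar "tune": a steady value with a little noise, then a sudden change.
    tune = [1.0 + random.uniform(-0.1, 0.1) for _ in range(10)] + \
           [3.0 + random.uniform(-0.1, 0.1) for _ in range(10)]
    for actual, predicted, error in run_predictive_loop(tune):
        print(f"heard={actual:.2f}  predicted={predicted:.2f}  error={error:+.2f}")

In this toy run the prediction settles once the “tune” becomes familiar and is revised only when the input changes sharply, mirroring the idea that stable, expected input quietens the error signal.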
You can test this for yourself. Anil Seth, a cognitive and computational neuroscientist at the University of Sussex, UK, suggests listening to sine-wave speech, basically a degraded version of a speech recording. The first time all you’ll hear is a jumble of beeps and whistles. But if you listen to the original recording and then switch back to the degraded version, you will suddenly be able to make out what is being said. All that has changed is your brain’s expectations of the input. It means it now has better information on which to base its prediction. “Our reality,” says Seth, “is merely a controlled hallucination, reined in by our senses.”
This idea is consistent with what was happening to Sylvia. Although she was mostly deaf, she could still make out some sound – and she discovered that listening to familiar Bach concertos suppressed her hallucinations. Timothy Griffiths, a cognitive neurologist at Newcastle University, UK, scanned Sylvia’s brain before, during and after she listened to Bach, and had her rate the intensity of her hallucinations throughout. They were at their quietest just after the real music was played, gradually increasing in volume until the next excerpt.
The brain scans showed that during her hallucinations, the higher regions that process melodies and sequences of tones were talking to one another. Yet, because Sylvia is severely deaf, they were not constrained by the real sounds entering her ears. Her hallucinations are her brain’s best guess at what is out there.
The notion of hallucinations as errant predictions has also been put to the test in completely silent rooms known as anechoic chambers. The quietest place on earth is one such chamber at Orfield Laboratories in Minneapolis, Minnesota. Once inside, you can hear your eyeballs moving. People generally start to hallucinate within 20 minutes of the door closing. But what’s the trigger?
There are two possibilities. One is that sensory regions of the brain sometimes show spontaneous activity that is usually suppressed and corrected by real sensory data coming in from the world. In the deathly silence of an anechoic chamber, the brain may make predictions based on this spontaneous activity. The second possibility is that the brain misinterprets internally generated sounds, says Oliver Mason at University College London. The sound of blood flowing through your ears isn’t familiar, so it could be misattributed as coming from outside you. “Once a sound is given significance, you’ve got a seed,” says Mason, “a starting point on which a hallucination can be built.”
Not everyone reacts the same way inside an anechoic chamber. Some people don’t hallucinate at all. Others do, but realise it was their mind playing tricks. “Some people come out and say ‘I’m convinced you were playing noises in there’,” says Mason.
Understanding why people react differently to a diminished sensory environment could reveal why some are more prone to the delusions and hallucinations associated with mental illness. We know that electrical messages passed across the brain are either excitatory or inhibitory – meaning they either promote or impede activity in neighbouring neurons. In recent experiments, Mason’s team scanned the brains of volunteers as they sat in an anechoic chamber for 25 minutes. Those who had more hallucinatory sensations had lower levels of inhibitory activity across their brain. Perhaps, says Mason, weaker inhibition makes it more likely that irrelevant signals suddenly become meaningful.
People with schizophrenia often have overactivity in their sensory cortices, but poor connectivity from these areas to their frontal lobes. So the brain makes lots of predictions that are not given a reality check before they pass into conscious awareness, says Flavie Waters, a clinical neuroscientist at the University of Western Australia in Perth. In conditions like Charles Bonnet syndrome, it is underactivity in the sensory cortices that triggers the brain to start filling in the gaps, and there is no actual sensory input to help it correct course. In both cases, says Waters, the brain starts listening in on itself, instead of tuning into the outside world. Something similar seems to be true of hallucinations associated with some recreational drug use (see “Under the influence”, below).
As these insights help us to solve the puzzle of perception, they are also providing strategies for treating hallucinations. People with drug-resistant schizophrenia can sometimes reduce their hallucinatory symptoms by learning how to monitor their thoughts, understand the triggers and reframe their hallucinations so that they see them in a more positive, and less distressing, light. “You’re increasing their insight and their ability to follow their thoughts through to more logical conclusions,” says Waters. This seems to give them more control over the influence of their internal world.
This kind of research is also helping people like Livesey reconnect with the external world. If his phantosmia, or smell hallucinations, are driven by a lack of reliable information, then real smells should help him to suppress the hallucinations. He has been trialling sniffing three different scents, three times a day. “Maybe it’s just wishful thinking,” he says, “but it seems to be helping.”
The knowledge that hallucinations can be a byproduct of how we construct reality might change how we experience them. In his later years, Sacks experienced hallucinations after his eyesight began to fail. When he played the piano, he would occasionally see showers of flat symbols when he was looking carefully at musical scores. “I have long since learned to ignore my hallucinations, and occasionally enjoy them,” said Sacks. “I like seeing what my brain is up to when it is at play.”
PING-PONG PERCEPTION
While none of us would want to experience hallucinations as intrusive as those associated with schizophrenia, you might be intrigued to sample a little of what your brain can get up to when running riot. To dip your toes into the waters of altered sensory perception, try the Ganzfeld procedure. All you need is a table tennis ball, some headphones and a bit of tape. Cut the ball in half and tape each segment over your eyes. Make sure you’re sitting in a room that’s evenly lit, and find some white noise to listen to over your headphones. Sit back and wait for the weirdness to start.
UNDER THE INFLUENCE
Hallucinations can be difficult to study, given their subjective nature and the fact that they often occur as a result of medical conditions. However, you can study one kind of hallucination very easily, as long as you can convince people to take drugs in the name of science. As it turns out, it’s not that difficult.
It’s been known for centuries that certain drugs can induce hallucinations, either spontaneously or after chronic use. But only recently have we discovered how these experiences are produced. David Nutt and his colleagues at Imperial College London gave 20 people LSD or a placebo and then scanned their brains. The scans showed that the volunteers’ hallucinations arose from the combined activity of brain regions that don’t normally communicate with each other. Regions responsible for vision, attention, movement and hearing became far more connected, while networks thought to give us an appreciation of the self became less so. That may be why people who take LSD often say they feel their sense of self disintegrating, instead becoming more at one with the world around them.
So are LSD-induced hallucinations similar to those that appear in psychosis or after the loss of a sense? It seems that all hallucinations involve some disruption of networks that usually perform a kind of reality check (see main story). Regardless of the cause, all hallucinations result partly from the brain relying too heavily on its internally generated sensations, misclassifying these as coming from the outside world.
The memories we rehearse
The memories we rehearse are the ones we live with
A million things happened to you today. The second bite of your lunch. The red light on the third block of your commute...
Tomorrow, you'll remember almost none of them.
And the concept that you'd remember something that happened to you when you were twelve is ludicrous.
What actually happened was this: after it happened (whatever that thing you remember is), you started telling yourself a story about that event. You began to develop a narrative about this turning point, about the relationship with your dad or with school or with cars.
Lots of people have had similar experiences, but none of them are telling themselves quite the same story about it as you are.
Over time, the story is rehearsed. Over time, the story becomes completely different from what a videotape would show us, but it doesn't matter, because the rehearsed story is far more vivid than the video ever could be.
And so the story becomes our memory, the story gets rehearsed ever more, and the story becomes the thing we tell ourselves the next time we need to make a choice.
If your story isn't helping you, work to rehearse a new story instead.
Because it's our narrative that determines who we will become.
EQ
When emotional intelligence (EQ) first appeared to the masses, it served as the missing link in a peculiar finding: people with average IQs outperform those with the highest IQs 70 percent of the time. This anomaly threw a massive wrench into the broadly held assumption that IQ was the sole source of success.
Decades of research now point to emotional intelligence as being the critical factor that sets star performers apart from the rest of the pack. The connection is so strong that 90 percent of top performers have high emotional intelligence.
“No doubt emotional intelligence is more rare than book smarts, but my experience says it is actually more important in the making of a leader. You just can’t ignore it.” – Jack Welch
Emotional intelligence is the “something” in each of us that is a bit intangible. It affects how we manage behavior, navigate social complexities, and make personal decisions to achieve positive results.
Despite the significance of EQ, its intangible nature makes it very difficult to know how much you have and what you can do to improve if you’re lacking. You can always take a scientifically validated test, such as the one that comes with the Emotional Intelligence 2.0 book.
Unfortunately, quality (scientifically valid) EQ tests aren’t free. So, I’ve analyzed the data from the million-plus people TalentSmart has tested in order to identify the behaviors that are the hallmarks of a low EQ. These are the behaviors that you want to eliminate from your repertoire.
1. You get stressed easily.
When you stuff your feelings, they quickly build into the uncomfortable sensations of tension, stress, and anxiety. Unaddressed emotions strain the mind and body. Your emotional intelligence skills help make stress more manageable by enabling you to spot and tackle tough situations before things escalate.
People who fail to use their emotional intelligence skills are more likely to turn to other, less effective means of managing their mood. They are twice as likely to experience anxiety, depression, substance abuse, and even thoughts of suicide.
2. You have difficulty asserting yourself.
People with high EQs balance good manners, empathy, and kindness with the ability to assert themselves and establish boundaries. This tactful combination is ideal for handling conflict. When most people are crossed, they default to passive or aggressive behavior. Emotionally intelligent people remain balanced and assertive by steering themselves away from unfiltered emotional reactions. This enables them to neutralize difficult and toxic people without creating enemies.
3. You have a limited emotional vocabulary.
All people experience emotions, but it is a select few who can accurately identify them as they occur. Our research shows that only 36 percent of people can do this, which is problematic because unlabeled emotions often go misunderstood, which leads to irrational choices and counterproductive actions. People with high EQs master their emotions because they understand them, and they use an extensive vocabulary of feelings to do so. While many people might describe themselves as simply feeling “bad,” emotionally intelligent people can pinpoint whether they feel “irritable,” “frustrated,” “downtrodden,” or “anxious.” The more specific your word choice, the better insight you have into exactly how you are feeling, what caused it, and what you should do about it.
4. You make assumptions quickly and defend them vehemently.
People who lack EQ form an opinion quickly and then succumb to confirmation bias, meaning they gather evidence that supports their opinion and ignore any evidence to the contrary. More often than not, they argue, ad nauseam, to support it. This is especially dangerous for leaders, as their under-thought-out ideas become the entire team’s strategy. Emotionally intelligent people let their thoughts marinate, because they know that initial reactions are driven by emotions. They give their thoughts time to develop and consider the possible consequences and counter-arguments. Then, they communicate their developed idea in the most effective way possible, taking into account the needs and opinions of their audience.
5. You hold grudges.
The negative emotions that come with holding on to a grudge are actually a stress response. Just thinking about the event sends your body into fight-or-flight mode, a survival mechanism that forces you to stand up and fight or run for the hills when faced with a threat. When a threat is imminent, this reaction is essential to your survival, but when a threat is ancient history, holding on to that stress wreaks havoc on your body and can have devastating health consequences over time. In fact, researchers at Emory University have shown that holding on to stress contributes to high blood pressure and heart disease. Holding on to a grudge means you’re holding on to stress, and emotionally intelligent people know to avoid this at all costs. Letting go of a grudge not only makes you feel better now but can also improve your health.
6. You don’t let go of mistakes.
Emotionally intelligent people distance themselves from their mistakes, but they do so without forgetting them. By keeping their mistakes at a safe distance, yet still handy enough to refer to, they are able to adapt and adjust for future success. It takes refined self-awareness to walk this tightrope between dwelling and remembering. Dwelling too long on your mistakes makes you anxious and gun shy, while forgetting about them completely makes you bound to repeat them. The key to balance lies in your ability to transform failures into nuggets of improvement. This creates the tendency to get right back up every time you fall down.
7. You often feel misunderstood.
When you lack emotional intelligence, it’s hard to understand how you come across to others. You feel misunderstood because you don’t deliver your message in a way that people can understand. Even with practice, emotionally intelligent people know that they don’t communicate every idea perfectly. They catch on when people don’t understand what they are saying, adjust their approach, and re-communicate their idea in a way that can be understood.
8. You don’t know your triggers.
Everyone has triggers—situations and people that push their buttons and cause them to act impulsively. Emotionally intelligent people study their triggers and use this knowledge to sidestep situations and people before they get the best of them.
9. You don’t get angry.
Emotional intelligence is not about being nice; it’s about managing your emotions to achieve the best possible outcomes. Sometimes this means showing people that you’re upset, sad, or frustrated. Constantly masking your emotions with happiness and positivity isn’t genuine or productive. Emotionally intelligent people employ negative and positive emotions intentionally in the appropriate situations.
10. You blame other people for how they make you feel.
Emotions come from within. It’s tempting to attribute how you feel to the actions of others, but you must take responsibility for your emotions. No one can make you feel anything that you don’t want to. Thinking otherwise only holds you back.
11. You’re easily offended.
If you have a firm grasp of who you are, it’s difficult for someone to say or do something that gets your goat. Emotionally intelligent people are self-confident and open-minded, which creates a pretty thick skin. You may even poke fun at yourself or let other people make jokes about you because you are able to mentally draw the line between humor and degradation.
Bringing It All Together
Unlike your IQ, your EQ is highly malleable. As you train your brain by repeatedly practicing new emotionally intelligent behaviors, it builds the pathways needed to make them into habits. As your brain reinforces the use of these new behaviors, the connections supporting old, destructive behaviors die off. Before long, you begin responding to your surroundings with emotional intelligence without even having to think about it.
MAGA Fear of Being Wrong
As the January 6 hearings restarted today after the long weekend, I was thinking about the weird, psychotic fear that has overtaken millions of Americans. I include in those millions people who are near and dear to me, friends I have known for years who now seem to speak a different language, a kind of Fox-infused, Gish Galloping, “what-about” patois that makes no sense even if you slow it down or add punctuation.
Such conversations are just part of life in divided America now. We live in a democracy, and there’s no law (nor should there be) against the willing suffocation of one’s own brain cells with television and the internet. But living in an alternate reality is unhealthy—and dangerous, as I realized yet again while watching the January 6 committee hearings and listening to the stories of Republicans, such as Arizona House Speaker Rusty Bowers and others, describing the threats and harassment they have received for doing their duty to the Constitution.
And the threats don’t stop with political figures; families are now in the crosshairs. Representative Adam Kinzinger, for example, tweeted Monday about a letter he received in which the writer threatened not only to kill him, but to kill his wife and infant son.
There have always been unstable people in America, and they have always done frightening things. But there seem to be a lot more of them now. Some of them are genuinely dangerous, but many more are just rage-drunk nihilists who will threaten any public figures targeted by their preferred television hosts or websites, regardless of party or policy.
The more I think about it—and I spent years researching such problems while writing a book about democracy—the more I think that such people are less angry than they are terrified.
Many of you will respond: Of course they’re terrified. They’re scared of demographic change, of cultural shifts, of being looked down upon for being older and uneducated in an increasingly young and educated world.
All true. But I think there’s more to it.
I think the Trump superfans are terrified of being wrong. I suspect they know that for many years they’ve made a terrible mistake—that Trump and his coterie took them to the cleaners and the cognitive dissonance is now rising to ear-splitting, chest-constricting levels. And so they will literally threaten to kill people like Kinzinger (among others) if that’s what it takes to silence the last feeble voice of reason inside themselves.
We know from studies (and from experience as human beings) that being wrong makes us feel uncomfortable. It’s an actual physiological sensation, and when compounded by humiliation, it becomes intolerable. The ego cries out for either silence or assent. In the modern media environment, this fear expresses itself as a demand for the comfort of massive doses of self-justifying rage delivered through the Fox or Newsmax or OAN electronic EpiPen that stills the allergic reaction to truth and reason.
These outlets are eager to oblige. It’s not you, the hosts assure the viewers. It’s them. You made the right decisions years ago and no matter how much it now seems that you were fooled and conned, you are on the side of right and justice.
This therapy works for as long as the patient is glued to the television or computer screen. The moment someone like Bowers or Kinzinger or Liz Cheney appears and attacks the lie, the anxiety and embarrassment rise like reflux in the throat, and it must be stopped, even if it means threatening to kill the messenger.
No one who truly believes they are right threatens to hurt anyone for expressing a contrary view. The snarling threat of violence never comes from people who calmly believe they are in the right. It is always the instant resort of the bully who feels the hot flush of shame rising in the cheeks and the cold rock of fear dropping in the pit of the stomach.
In the film adaptation of the Cold War epic Tinker Tailor Soldier Spy, John le Carré’s fictional British intelligence officer George Smiley describes his opposite number, the Soviet spymaster Karla. Smiley knows Karla can be beaten, he says, because Karla “is a fanatic. And the fanatic is always concealing a secret doubt.”