It seemed as if it would be a perfectly ordinary occasion, that hot August day in 1959. Three generations of a large Oklahoma family gathered at a studio in nearby Perryton, Tex., to have a photo taken of the elders, 14 siblings ranging in age from 29 to 52. Afterward, everyone went to a nearby park for a picnic.
Among the group were two cousins, Doug Whitney, who was 10, and Gary Reiswig, who was 19. Doug’s mother and Gary’s father were brother and sister. Doug does not remember any details of that day, but Gary says he can never forget it. His father, and some of his aunts and uncles, just did not seem right. They stared blankly. They were confused, smiling and nodding, even though it seemed as if they weren’t really following the conversation.
Seeing them like that reminded Gary of what his grandfather had been like years before. In 1936, at the age of 53, his grandfather was driving with his grandmother and inexplicably steered into the path of a train. He survived, but his wife did not. Over the next decade, he grew more and more confused. By the time he died at 63, he was unable to speak, unable to care for himself, unable to find his way around his house. Now here were the first signs of what looked like the same condition in several of his children.
“We were looking at the grimness face to face,” Gary says. “After that, we gradually stopped getting together.”
It was the start of a long decline for Gary’s father and his siblings. Their memories became worse, their judgment faltered, they were disoriented. Then one day in 1963, Gary, who was living in Illinois at the time, went with his mother to take his father to a doctor in Oklahoma City. The doctor had recently examined his father’s brother, and after administering some simple memory tests and hearing about the rest of the family, concluded that he probably had Alzheimer’s disease. Gary and his mother took his father in for the same exam, and the doctor confirmed Gary’s fears.
Gary’s mother wanted to keep his father’s condition a secret and asked Gary to tell no one. But his uncle’s wife, Aunt Ester May, wanted to let everyone in the extended family know. Most reacted the way Gary’s mother had — they wanted to keep the information to themselves.
When Doug first heard the news, he hoped his mother, Mildred Whitney, might escape the terrible illness, and for a few years she seemed fine. But on Thanksgiving Day 1971, Mildred, who was then 50 and never used recipes, could not remember how to make her famous pumpkin pie.
That was the beginning of her precipitous fall. Five years later, after she lost her ability to walk, or speak, or recognize her own children, she died. In the end, 10 of those 14 brothers and sisters developed Alzheimer’s, showing symptoms, on average, at around age 50. The family, once close, soon scattered, each descendant of the 14 privately finding a way to live with the possibility that he or she could be next.
More than five decades later, many of these relatives have come together to be part of a large international study of families who carry an Alzheimer’s gene. The study, known as DIAN (for Dominantly Inherited Alzheimer Network), involves more than 260 people in the United States, Britain and Australia and includes at least 10 members of Doug and Gary’s family. Since 2008, researchers have been monitoring the brains of subjects who have mutations in any of three genes that cause Alzheimer’s to see how the disease develops before symptoms occur. By early next year, DIAN researchers plan to begin a new phase. Subjects will receive one of three experimental drugs that the researchers hope will slow or stop the disease in people otherwise destined to get it. (A similar study is expected to start around the same time in Colombia, testing one drug in a large extended family that carries a mutation in one gene that causes Alzheimer’s.)
Though as much as 99 percent of all Alzheimer’s cases are not a result of a known genetic mutation, researchers have determined that the best place to find a treatment or cure for the disease is to study those who possess a mutation that causes it. It’s a method that has worked for other diseases. Statins, the drugs that are broadly prescribed to block the body’s cholesterol synthesis, were first found effective in studies of people who inherited a rare gene that led to severe and early heart disease.
Alzheimer’s is the sixth leading cause of death in this country, and is the only disease among the 10 deadliest that cannot be prevented, slowed or cured. But DIAN investigators say that within a decade there could be a drug that staves off brain destruction and death.
This sense of optimism has been a long time coming. In 1901, a German psychiatrist, Alois Alzheimer, first noted the disease when he described the case of a 51-year-old woman named Auguste Deter. “She sits on the bed with a helpless expression,” Alzheimer wrote. “What is your name? Auguste. Your husband? Ah, my husband. She looks as if she didn’t understand the question.”
Five years later, when Auguste Deter died, Alzheimer examined her brain. It was the color of sandpaper and the texture of tofu, like every other brain. But there the similarities ended. Deter’s brain was shriveled and flecked with tiny particles that stuck to it like barnacles. No one had ever seen such a thing before in any brain.
Pathologists now recognize that the particles are deposits of a protein fragment, beta amyloid, that accumulates in brains with Alzheimer’s and is a hallmark of the disease. Alzheimer also noticed something else in Deter’s brain. Inside her ruined brain cells were tangles: grotesquely twisted ropes of a protein now known as tau. They are not unique to Alzheimer’s — they show up in the course of aging and in other degenerative brain diseases, including Parkinson’s and Pick’s disease, a rare form of dementia whose distinguishing symptoms include erratic and inappropriate behavior. Alzheimer speculated that the tangles in the brain cells were grim signs of the brain’s destruction. But what caused that destruction was a mystery. “All in all we have to face a peculiar disease process,” Alzheimer wrote.
There matters stood until the latter part of the 20th century. A leading Alzheimer’s researcher, Paul Aisen of the University of California, San Diego, told me that when he was in medical school in the late 1970s, his instructors never talked about Alzheimer’s. There was little to say other than that it was a degenerative brain disease with no known cause and no effective treatment. Scientists just did not have the tools to figure out what was going wrong in the brains of these people, or why.
All anyone knew was that the disease followed a relentless path, starting with symptoms so subtle they could be dismissed as normal carelessness or inattentiveness. A person would forget what was just said, or miss an appointment, or maybe become confused driving home one day. Gradually those small memory lapses would progress until the person, now wearing a blank stare, would no longer recognize family members and would be unable to eat or use a bathroom. At autopsy, the brain would be ruined, shrunken and peppered with plaques.
Rudolph Tanzi, a professor of neurology and an Alzheimer’s researcher at Harvard University, explained what it was like for researchers back then to look at an Alzheimer’s brain and try to figure out what caused the devastation. Imagine, he says, that you are an alien from another planet who has never heard of football. You go into a stadium at 5 o’clock, after a game has been played, and see trash in the stands, a littered field, torn turf. How, he asks, could you figure out that it was all caused by a football game? “For decades, that was where we were in trying to figure out the cause of Alzheimer’s disease,” Tanzi says.
But as molecular biology advanced, scientists realized that if they could study large families in whom the disease seemed to be inherited, they might be able to hunt down a gene that caused Alzheimer’s and understand what it did. The difficulty was finding these families and persuading them to participate in the research. A breakthrough came in the late 1980s when a woman who lived in Nottingham, England, contacted a team of Alzheimer’s researchers at St. Mary’s Hospital in London, led by John Hardy, and asked if they wanted to study her family. Alzheimer’s had appeared in three generations, she said, and her father was one of 10 children, 5 of whom developed the disease.
In the English family, the pattern of inheritance seemed clear — the child of someone with the disease had a fifty-fifty chance of developing Alzheimer’s — which meant that it was very likely that a gene was causing the disease. By comparing the DNA sequences of family members who developed Alzheimer’s to the sequences of those who did not develop the disease, the researchers discovered that the family’s disease was caused by a mutated gene on chromosome 21. Everyone in the family who had Alzheimer’s had that mutated gene. No one who escaped the disease had the mutation. And all who inherited the mutated gene eventually got Alzheimer’s. There were no exceptions.
“Sometimes in science, you generate the information and the data gradually,” Alison Goate, who was a young geneticist in the research group, told me. “This was like, boom, a eureka moment.” She says she remembers thinking, “I am the first person to see a cause of Alzheimer’s disease.”
During those years of slow scientific progress on Alzheimer’s, Gary Reiswig made a series of decisions that reflected his fears. He’d been trained as a minister in a conservative arm of the Christian Church (Disciples of Christ), but after his father died at 56, Gary, who was then 27, began questioning his calling. If he was going to get Alzheimer’s in 10 or 20 years, was this the way he wanted to spend his remaining time?
He left the ministry, deeply upsetting his extended family. “Here was our golden boy, rejecting the faith,” Gary says, referring to the way his family responded. “It was hard to go back to my hometown.”
In 1970, he and his wife divorced, and in 1973 he remarried and faced another difficult decision. His new wife, Rita, wanted children. She knew when she married Gary that there was Alzheimer’s disease in his family. “But somehow, it didn’t seem exactly real until we started talking about having a child,” Gary says. “There is a tremendous life force that drives people to love, make love and have children. You just can’t overcome it.” And because the risk to a hypothetical child was so far in the future, they were able to convince themselves that it wasn’t truly real.
Their son was born in 1977. Meanwhile, Alzheimer’s continued to cut a swath through Gary’s family. His older sister lived on a farm in Oklahoma, and he and Rita visited her a couple of times a year. On one trip, when his sister was 43, Gary realized she was starting to show the same unmistakable symptoms of the disease he had seen in his father.
Gary was about to turn 40 in 1979 and was working as a city planner in Pittsburgh. He knew he could not continue in that job if he had Alzheimer’s, so one day he said to Rita, “Let’s get ourselves in a position where if this disease hits me, I can be helpful.”
He found what he was hoping for when he saw an advertisement for an inn for sale in East Hampton, N.Y. He could be an innkeeper, Gary thought, transitioning to simple maintenance work if his memory began to fail. So he quit his job, and he and Rita bought the inn and moved to Long Island in June 1979. “I cast myself loose from dependence on bosses in case I began to lose my mental capacities,” Gary told me.
Though the actual work was more complicated than Gary had anticipated, he found he knew the basics. He had learned to make business decisions by helping his father with the family farm, and he was good at dealing with people from working as a city planner. But all the while, as he managed the inn, Gary had his eye to a future when nothing would be easy, when “my duties could be shifted from complex to simple, mental to merely manual, if the situation demanded it.”
Then, one day in 1986, he got a call from his aunt Ester May, who had made some life-changing decisions of her own. After watching her husband die, Ester May had made it a mission to find someone who might help the family. Eventually, her quest led her to Thomas Bird, who is currently a professor of neurology, medicine and medical genetics at the University of Washington in Seattle and a research neurologist at the Seattle V.A. hospital. Like Alison Goate in England, Bird was looking for large families with a hereditary form of Alzheimer’s disease to provide blood samples that could be analyzed in an attempt to isolate other genetic culprits. For Bird and others searching for Alzheimer’s genes, there were still some fundamental questions that needed to be answered: What were these genes and what did they do to cause the disease? Was there just one gene that causes Alzheimer’s in these families, or were there several? If there were several, there might be many paths to the disease. If there was one — or several that when mutated all had the same effect — the task of finding a cure might be easier.
As soon as Ester May spoke to Bird, she got to work, calling family members and cajoling them to join the study. The consent forms said all data would be kept private, and as is typical in research, even if a gene were found, the participants would not be told if they had it. By taking part in the study they would be contributing to science. They would be doing it to benefit others in the future, not themselves.
Gary agreed to participate, and he went to his internist’s office in East Hampton to have blood drawn and sent to Bird. He’s not sure how many of his cousins also gave blood, but he estimates, from asking around, that about 30 did. Of his father’s generation, 5 out of 14 gave blood — the rest were already dead from the disease.
Gary says he didn’t need to persuade his brother and sister to participate. “By the time Dr. Bird’s study began, my sister was already having symptoms,” he says.
Then Gary put the study out of his mind while he continued on the path he had already set for himself — making use of the limited time he had to live his life before he might be overcome by the disease.
Doug approached the possibility of Alzheimer’s differently, spending his life away from the family tragedies, only distantly aware of what was unfolding. At 18, Doug left home to join the Navy. He stayed in the military for 20 years, and for most of that time, he and his wife, Ione, were stationed around the world, visiting immediate family members a couple of times a year on all-too-brief road trips. When he retired from the Navy in 1988, they settled in Port Orchard, Wash., where Doug had a job with a contractor, scheduling maintenance for ships. Because he’d been out of the country for so long, he didn’t participate in Bird’s study.
Doug is a taciturn man, not one to spill his emotions. Ione is the talker, ebullient and friendly, speaking for Doug in interviews, answering e-mails. She told me that the most difficult time for Doug was when Roger, the oldest of Doug’s seven siblings, started showing signs of the disease when he was 48. (None of the others seem to have symptoms.) In 2001, Roger was deteriorating badly in a nursing home in Grove, Okla., and Doug flew there to be with him one last time. “It had been at least six months since Roger recognized anyone,” Ione says. Doug spent the afternoon and evening with him. The next day, Roger died. He was 55 and left behind three children, one of whom was just a few weeks younger than Doug and Ione’s son, Brian.
In 1995, four years after Alison Goate and her colleagues found the first Alzheimer’s gene, two more genes were discovered. One was found by Bird’s team using the blood from several families, including Gary and Doug’s. Other research groups studying other families made similar discoveries. The three genes are on different chromosomes, and different families have different mutations in the genes, but in every case, the mutated gene leads to the same result: the brake that normally slows down the accumulation of beta amyloid, a toxic protein that forms plaques, no longer works. Beta amyloid piles up and sets the inexorable disease process in motion.
In the years since, researchers have theorized that when the brain makes too much beta amyloid, it creates a toxic environment — “a bad neighborhood,” as some investigators put it. The beta amyloid clumps into hard plaques that form outside cells. Once brain cells are living in that bad neighborhood, the abnormal tangled strands of tau proteins show up inside, killing the cells from within.
The researchers have tended to focus on stopping beta amyloid from accumulating rather than stopping tau. Most beta amyloid drugs either stymie the enzymes that produce it or clear away the amyloid after it’s made. But drug development is hard, and it has taken years for companies to find promising compounds and take them through the phases of preclinical testing.
Several years ago, the first large studies of these new drugs were carried out using people who already had Alzheimer’s. Most of those initial studies are still under way, but a few have been completed, with disappointing results — despite the drugs, the disease continues unabated in these Alzheimer’s patients.
Randall J. Bateman, director of the DIAN Therapeutic Trials Unit at Washington University School of Medicine in St. Louis, says it is far too soon to admit defeat. He notes that the history of medicine is replete with stories of drugs that were almost abandoned because they were initially studied in the wrong group or were administered in the wrong dose or at the wrong time in the course of a disease. Even penicillin was a failure at first. It was initially tested by dabbing it on skin infections, Bateman says. But the way the drug was applied to the infections and its low dose made it impossible for the drug to cure even an infection that would otherwise respond to it. Finally, when the drug was tested at the right dose in the right patients, it cured eye infections and also pneumonia in people who were certain to have died without it.
“Even something as effective as penicillin can fail unless it is administered properly,” Bateman says. He predicts that in the future it will become clear that for Alzheimer’s drugs to be effective, they would have to be given earlier.
“In Alzheimer’s, we are coming to realize that it’s more difficult to treat after there are symptoms,” Bateman says. By then “extensive neuronal death has occurred.” Tau has been destroying brain cells, and “the adult brain does not replace those lost neurons.”
Other diseases work the same way. In Parkinson’s, for example, the substantia nigra — a small, black, crescent-shape group of brain cells that control movement — starts to die. But there are no symptoms until 70 to 90 percent of the substantia nigra is gone. No one has yet found a way to restore those missing cells.
In order to address this, Bateman says that the DIAN researchers will try to use drugs to stop the accretion of amyloid in people with the Alzheimer’s gene who haven’t yet shown symptoms. The study is building on others that followed middle-aged subjects for years, watching for early signs in the brains of those who eventually develop Alzheimer’s.
One study in particular has been helpful. It’s called ADNI (Alzheimer’s Disease Neuroimaging Initiative), and it began in October 2004. ADNI includes 200 people whose memories are normal, 400 with mild memory problems that might be harbingers of Alzheimer’s disease and 200 with Alzheimer’s disease. Researchers regularly give these subjects memory tests and do brain imaging and other tests to watch for the progress of Alzheimer’s. The study found that characteristic brain changes — shrinkage of the memory center, beta amyloid plaques, excessive synthesis of beta amyloid and tau — arise more than a decade before a person has symptoms.
The first phase of the DIAN study also looks at the progression of Alzheimer’s in the brain, but using only subjects who are members of families with Alzheimer’s genes. When these people join DIAN, Bateman and his colleagues test their memory and reasoning as well as administer spinal taps and scans to monitor changes in their brains. The researchers test the subjects every one to three years, and they have found that they can see troubling brain changes in people with the gene as many as 20 years before they would be expected to show symptoms based on their parent’s age when the disease was first diagnosed. Given the results from DIAN and other studies, Bateman concluded that the ideal time to give an experimental drug is within 15 years of the suspected onset.
Before they could begin testing drugs on people with an Alzheimer’s gene, though, the researchers had to solve a delicate problem. DIAN participants are aware that they have a fifty-fifty chance of possessing an Alzheimer’s gene, and they know they can be tested and find out if they inherited it — but almost no one wants to know. The researchers can give the drugs only to people who have the gene, however. (You don’t want to give a drug that affects the brain to healthy people.) If the study took only people with the gene, all those who were accepted would know that they had it. In order to avoid this problem, the DIAN researchers are inviting members of families with one of the mutated genes to join, regardless of whether the individuals know they possess the gene. Subjects won’t know which group they are in, but the researchers will know, and they will assign those who don’t have an Alzheimer’s gene to the placebo group. The participants with the gene will be randomly assigned to receive one of three experimental drugs or a placebo. The researchers say that within two years, they will have an indication about whether any of the drugs are working.
Bateman explained that the next step in Alzheimer’s research would be to study people who do not have the gene. The idea would be to look at, say, 70-year-olds who seem cognitively normal but who are at an age where Alzheimer’s is increasingly likely. Those subjects would be given scans and other tests to see whether, despite the absence of symptoms, their brains showed changes consistent with the beginning of Alzheimer’s. They would then be enrolled in a drug study. If the drug were to prevent the disease in these people, researchers predict that tests for beta amalyoid plaques might become a recommended preventive medical procedure. People might be tested at age 50 and periodically afterward. Anyone getting plaques would take the drug to prevent Alzheimer’s disease.
In 1995, the same year that Bird discovered Gary’s family’s Alzheimer’s gene, Gary made a discovery of his own. That August, his younger brother and his sister-in-law were visiting, and it was clear that his brother had Alzheimer’s. He would become confused by the simplest things. That first morning, he tried to open a latched door, gave up, then tried to open a window, thinking it was a door. Gary was desolate seeing his brother’s condition and could not help thinking that he could be next.
On the day that Gary’s brother and his wife departed, Gary picked up The New York Times. “There was this headline,” he told me. “ ‘Third Gene Tied to Early Onset Alzheimer’s.’ ” The article described a discovery by the Seattle group, in collaboration with other researchers, that was being published that day in Science magazine. Gary was pretty sure it was his family whose gene had been found.
He got a copy of Science and turned to the article, which included a family tree with members who had the gene represented by black diamonds. Those who did not have the gene were represented by white diamonds.
It was scary even to look. Gary knew every person in that diagram, and he knew he was there too. Would he be a black diamond or a white one? He followed his family line, from his grandfather’s generation to his father’s — there were the 14 siblings — to his own. He saw his older sister, who had been given a diagnosis of Alzheimer’s and was represented by a black diamond. He saw his younger brother, a black diamond. Bracketed between them was Gary. His diamond was white. He had prepared all his adult life for that gene. And by an incredible stroke of luck, he did not have it.
His first sensation, he told me, was “lightness, like a weight, a burden, had been lifted off my shoulders.” For several hours he floated, elated by the news. Now his children did not have to worry that they would get it. His wife would not have to worry that she would be caring for Gary as he spiraled down into the chasm of the disease. He had spent his life preparing for an inheritance he had escaped.
Soon though, he moved from joy to sadness. “My feelings of happiness for myself and my children seemed to make light of what my siblings and family faced,” Gary said.
A decade ago, Gary and Doug spoke briefly at a family reunion in Oklahoma City. It was the first time they had seen each other since that fateful picnic four decades earlier. Then in 2009, when Gary was in Seattle, meeting with Bird for a book he was writing about his family, “The Thousand Mile Stare,” he decided to look up Doug and Ione. They talked, and last year Doug joined Phase 1 of the DIAN study, after learning about it from Gary. His testing took place at Washington University in St. Louis over three days in March.
First Doug was given a cognitive endurance course. The idea was to wear the brain out by taxing it with progressively harder tasks in order to see its limits. It’s like giving someone a heart stress test, Bateman says, in which a person must run on a treadmill until exhaustion sets in. The goal is to get a base line reading. New studies are indicating that one of the first symptoms of Alzheimer’s is progressively poorer performances on challenging cognitive and memory tests.
Some tasks were simple — name as many animals as you can in one minute. Others were harder. One was a test for working memory, in which the subject is shown simple arithmetic problems, like 7+5 = 12. In some, the answer is correct; in others, it is not. The subject presses a key on a computer to indicate whether the answer is right or wrong. As soon as one problem is completed, another pops up. After three or four problems, the subject is asked to type, from memory, the second number of each problem.
Doug found it exhausting. That afternoon, the testing continued with standard memory tests and questions for Ione about whether Doug has changed in his ability to handle finances or deal with daily events in his life. (The answer was no.) Then there was a test in which Ione was asked to recall something that happened in the prior week and something that happened in the prior month, in great detail. She was sent out of the room and Doug was called in and asked to recall the same event. (He performed well.) At the end of the first day, Doug was given an M.R.I., the first he ever had, to look for shrinkage of his hippocampus, a telltale sign of Alzheimer’s.
The next morning, Bateman gave Doug a spinal tap to collect the fluid that bathes Doug’s brain and spinal cord. After 10 minutes, Bateman held up a tube filled halfway with a clear, beige-tinged liquid. In it were proteins, including beta amyloid, that can reveal if Alzheimer’s is on its way. The spinal tap was followed by more brain scans the next day, and then Doug and Ione went home.
After they returned to Port Orchard, Doug decided he wanted to know whether he carried the Alzheimer’s gene. He and Ione thought he would be safe, Ione told me. They thought the cognitive tests had gone well, and Doug was in his early 60s. Most of his family members who had Alzheimer’s got it when they were in their 50s.
Last year, on May 31, his 62nd birthday, Doug went to a lab to get his blood drawn. When the results came back in June, they were the last thing Doug and Ione expected: Doug had the mutated gene.
“The first reaction was shock,” Ione said. The couple had gone through a tense period when Doug was in his late 40s and early 50s, and they kept waiting for him to start showing symptoms of the disease. Ione still remembers a couple of occasions when Doug lost his way on familiar routes.
“I thought: Oh, my gosh. This is it,” she says. “It is so easy to get sucked into that constant fear.” But as the years went by, they put the fear behind them.
Now it is back. “It’s kind of like we went through this once already,” Ione said. The fear is compounded by thoughts of their two children. Brian, their son, is 40, and is married with a 2-year-old daughter. Karen, their daughter, is 38 and unmarried. Like Doug, Karen decided she had to know and arranged to be tested. She does not have the gene.
That, Ione says, is the one bright spot in all this. Hearing the news about Karen made her realize how worried she was. “You feel like a rock was lifted from your chest. You didn’t know the rock was there but now it’s gone.”
The first thing Brian did was buy additional life insurance, just in case. Though he initially said he wanted to be tested, so far he has not gone through with it. He plans to join Bateman’s study. If he does, he will, of course, have a gene test but will not be told the result.
Doug says little about how the devastating news affects him. He’s continuing to work, planning to retire when he is 65. Then he figures he will do a lot of fishing and household repairs.
He also wants to join the drug phase of DIAN. It is his one hope of staving off the inevitable, assuming he is placed in a group that is randomly assigned to take one of the experimental drugs.
But even if a drug ultimately proves effective, it will no doubt take time for Bateman and his team to figure out when best to give it and at what dose. It is quite unlikely that a cure will be found in the next few years.
As for Brian, if he does have the gene, perhaps science will come up with the right drug at the right time before his symptoms set in. And if his young daughter were to have it, too, researchers imagine that there will be a cure by the time she faces her own dire future. That is what they cling to, Ione says. “I’d never even heard the word ‘Alzheimer’s’ until I was pregnant with Brian,” she said. “And there was no hope at that point. If you had the gene, that was it.” Meanwhile, she and Doug are going on with their lives. “We’re just hanging in there. Life can be cruel.”
The Hazards of Confidence
By DANIEL KAHNEMAN
Many decades ago I spent what seemed like a great deal of time under a scorching sun, watching groups of sweaty soldiers as they solved a problem. I was doing my national service in the Israeli Army at the time. I had completed an undergraduate degree in psychology, and after a year as an infantry officer, I was assigned to the army's Psychology Branch, where one of my occasional duties was to help evaluate candidates for officer training. We used methods that were developed by the British Army in World War II.
One test, called the leaderless group challenge, was conducted on an obstacle field. Eight candidates, strangers to one another, with all insignia of rank removed and only numbered tags to identify them, were instructed to lift a long log from the ground and haul it to a wall about six feet high. There, they were told that the entire group had to get to the other side of the wall without the log touching either the ground or the wall, and without anyone touching the wall. If any of these things happened, they were to acknowledge it and start again.
A common solution was for several men to reach the other side by crawling along the log as the other men held it up at an angle, like a giant fishing rod. Then one man would climb onto another's shoulder and tip the log to the far side. The last two men would then have to jump up at the log, now suspended from the other side by those who had made it over, shinny their way along its length and then leap down safely once they crossed the wall. Failure was common at this point, which required starting over.
As a colleague and I monitored the exercise, we made note of who took charge, who tried to lead but was rebuffed, how much each soldier contributed to the group effort. We saw who seemed to be stubborn, submissive, arrogant, patient, hot-tempered, persistent or a quitter. We sometimes saw competitive spite when someone whose idea had been rejected by the group no longer worked very hard. And we saw reactions to crisis: who berated a comrade whose mistake caused the whole group to fail, who stepped forward to lead when the exhausted team had to start over. Under the stress of the event, we felt, each man's true nature revealed itself in sharp relief.
After watching the candidates go through several such tests, we had to summarize our impressions of the soldiers' leadership abilities with a grade and determine who would be eligible for officer training. We spent some time discussing each case and reviewing our impressions. The task was not difficult, because we had already seen each of these soldiers' leadership skills. Some of the men looked like strong leaders, others seemed like wimps or arrogant fools, others mediocre but not hopeless. Quite a few appeared to be so weak that we ruled them out as officer candidates. When our multiple observations of each candidate converged on a coherent picture, we were completely confident in our evaluations and believed that what we saw pointed directly to the future. The soldier who took over when the group was in trouble and led the team over the wall was a leader at that moment. The obvious best guess about how he would do in training, or in combat, was that he would be as effective as he had been at the wall. Any other prediction seemed inconsistent with what we saw.
Because our impressions of how well each soldier performed were generally coherent and clear, our formal predictions were just as definite. We rarely experienced doubt or conflicting impressions. We were quite willing to declare: "This one will never make it," "That fellow is rather mediocre, but should do O.K." or "He will be a star." We felt no need to question our forecasts, moderate them or equivocate. If challenged, however, we were fully prepared to admit, "But of course anything could happen."
We were willing to make that admission because, as it turned out, despite our certainty about the potential of individual candidates, our forecasts were largely useless. The evidence was overwhelming. Every few months we had a feedback session in which we could compare our evaluations of future cadets with the judgments of their commanders at the officer-training school. The story was always the same: our ability to predict performance at the school was negligible. Our forecasts were better than blind guesses, but not by much.
We were downcast for a while after receiving the discouraging news. But this was the army. Useful or not, there was a routine to be followed, and there were orders to be obeyed. Another batch of candidates would arrive the next day. We took them to the obstacle field, we faced them with the wall, they lifted the log and within a few minutes we saw their true natures revealed, as clearly as ever. The dismal truth about the quality of our predictions had no effect whatsoever on how we evaluated new candidates and very little effect on the confidence we had in our judgments and predictions.
I thought that what was happening to us was remarkable. The statistical evidence of our failure should have shaken our confidence in our judgments of particular candidates, but it did not. It should also have caused us to moderate our predictions, but it did not. We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each particular prediction was valid. I was reminded of visual illusions, which remain compelling even when you know that what you see is false. I was so struck by the analogy that I coined a term for our experience: the illusion of validity.
I had discovered my first cognitive fallacy.
Decades later, I can see many of the central themes of my thinking about judgment in that old experience. One of these themes is that people who face a difficult question often answer an easier one instead, without realizing it. We were required to predict a soldier's performance in officer training and in combat, but we did so by evaluating his behavior over one hour in an artificial situation. This was a perfect instance of a general rule that I call WYSIATI, "What you see is all there is." We had made up a story from the little we knew but had no way to allow for what we did not know about the individual's future, which was almost everything that would actually matter. When you know as little as we did, you should not make extreme predictions like "He will be a star." The stars we saw on the obstacle field were most likely accidental flickers, in which a coincidence of random events - like who was near the wall - largely determined who became a leader. Other events - some of them also random - would determine later success in training and combat.
You may be surprised by our failure: it is natural to expect the same leadership ability to manifest itself in various situations. But the exaggerated expectation of consistency is a common error. We are prone to think that the world is more regular and predictable than it really is, because our memory automatically and continuously maintains a story about what is going on, and because the rules of memory tend to make that story as coherent as possible and to suppress alternatives. Fast thinking is not prone to doubt.
The confidence we experience as we make a judgment is not a reasoned evaluation of the probability that it is right. Confidence is a feeling, one determined mostly by the coherence of the story and by the ease with which it comes to mind, even when the evidence for the story is sparse and unreliable. The bias toward coherence favors overconfidence. An individual who expresses high confidence probably has a good story, which may or may not be true.
I coined the term 'illusion of validity' because the confidence we had in judgments about individual soldiers was not affected by a statistical fact we knew to be true - that our predictions were unrelated to the truth. This is not an isolated observation. When a compelling impression of a particular event clashes with general knowledge, the impression commonly prevails. And this goes for you, too. The confidence you will experience in your future judgments will not be diminished by what you just read, even if you believe every word.
I first visited a Wall Street firm in 1984. I was there with my longtime collaborator Amos Tversky, who died in 1996, and our friend Richard Thaler, now a guru of behavioral economics. Our host, a senior investment manager, had invited us to discuss the role of judgment biases in investing. I knew so little about finance at the time that I had no idea what to ask him, but I remember one exchange. "When you sell a stock," I asked him, "who buys it?" He answered with a wave in the vague direction of the window, indicating that he expected the buyer to be someone else very much like him. That was odd: because most buyers and sellers know that they have the same information as one another, what made one person buy and the other sell? Buyers think the price is too low and likely to rise; sellers think the price is high and likely to drop. The puzzle is why buyers and sellers alike think that the current price is wrong.
Most people in the investment business have read Burton Malkiel's wonderful book "A Random Walk Down Wall Street." Malkiel's central idea is that a stock's price incorporates all the available knowledge about the value of the company and the best predictions about the future of the stock. If some people believe that the price of a stock will be higher tomorrow, they will buy more of it today. This, in turn, will cause its price to rise. If all assets in a market are correctly priced, no one can expect either to gain or to lose by trading.
We now know, however, that the theory is not quite right. Many individual investors lose consistently by trading, an achievement that a dart-throwing chimp could not match. The first demonstration of this startling conclusion was put forward by Terry Odean, a former student of mine who is now a finance professor at the University of California, Berkeley.
Odean analyzed the trading records of 10,000 brokerage accounts of individual investors over a seven-year period, allowing him to identify all instances in which an investor sold one stock and soon afterward bought another stock. By these actions the investor revealed that he (most of the investors were men) had a definite idea about the future of two stocks: he expected the stock that he bought to do better than the one he sold.
To determine whether those appraisals were well founded, Odean compared the returns of the two stocks over the following year. The results were unequivocally bad. On average, the shares investors sold did better than those they bought, by a very substantial margin: 3.3 percentage points per year, in addition to the significant costs of executing the trades. Some individuals did much better, others did much worse, but the large majority of individual investors would have done better by taking a nap rather than by acting on their ideas. In a paper titled "Trading Is Hazardous to Your Wealth," Odean and his colleague Brad Barber showed that, on average, the most active traders had the poorest results, while those who traded the least earned the highest returns. In another paper, "Boys Will Be Boys," they reported that men act on their useless ideas significantly more often than women do, and that as a result women achieve better investment results than men.
Of course, there is always someone on the other side of a transaction; in general, it's a financial institution or professional investor, ready to take advantage of the mistakes that individual traders make. Further research by Barber and Odean has shed light on these mistakes. Individual investors like to lock in their gains; they sell 'winners,' stocks whose prices have gone up, and they hang on to their losers. Unfortunately for them, in the short run going forward recent winners tend to do better than recent losers, so individuals sell the wrong stocks. They also buy the wrong stocks. Individual investors predictably flock to stocks in companies that are in the news. Professional investors are more selective in responding to news. These findings provide some justification for the label of 'smart money' that finance professionals apply to themselves.
Although professionals are able to extract a considerable amount of wealth from amateurs, few stock pickers, if any, have the skill needed to beat the market consistently, year after year. The diagnostic for the existence of any skill is the consistency of individual differences in achievement. The logic is simple: if individual differences in any one year are due entirely to luck, the ranking of investors and funds will vary erratically and the year-to-year correlation will be zero. Where there is skill, however, the rankings will be more stable. The persistence of individual differences is the measure by which we confirm the existence of skill among golfers, orthodontists or speedy toll collectors on the turnpike.
Mutual funds are run by highly experienced and hard-working professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from more than 50 years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. At least two out of every three mutual funds underperform the overall market in any given year.
More important, the year-to-year correlation among the outcomes of mutual funds is very small, barely different from zero. The funds that were successful in any given year were mostly lucky; they had a good roll of the dice. There is general agreement among researchers that this is true for nearly all stock pickers, whether they know it or not - and most do not. The subjective experience of traders is that they are making sensible, educated guesses in a situation of great uncertainty. In highly efficient markets, however, educated guesses are not more accurate than blind guesses.
Some years after my introduction to the world of finance, I had an unusual opportunity to examine the illusion of skill up close. I was invited to speak to a group of investment advisers in a firm that provided financial advice and other services to very wealthy clients. I asked for some data to prepare my presentation and was granted a small treasure: a spreadsheet summarizing the investment outcomes of some 25 anonymous wealth advisers, for eight consecutive years. The advisers' scores for each year were the main determinant of their year-end bonuses. It was a simple matter to rank the advisers by their performance and to answer a question: Did the same advisers consistently achieve better returns for their clients year after year? Did some advisers consistently display more skill than others?
To find the answer, I computed the correlations between the rankings of advisers in different years, comparing Year 1 with Year 2, Year 1 with Year 3 and so on up through Year 7 with Year 8. That yielded 28 correlations, one for each pair of years. While I was prepared to find little year-to-year consistency, I was still surprised to find that the average of the 28 correlations was .01. In other words, zero. The stability that would indicate differences in skill was not to be found. The results resembled what you would expect from a dice-rolling contest, not a game of skill.
No one in the firm seemed to be aware of the nature of the game that its stock pickers were playing. The advisers themselves felt they were competent professionals performing a task that was difficult but not impossible, and their superiors agreed. On the evening before the seminar, Richard Thaler and I had dinner with some of the top executives of the firm, the people who decide on the size of bonuses. We asked them to guess the year-to-year correlation in the rankings of individual advisers. They thought they knew what was coming and smiled as they said, "not very high" or "performance certainly fluctuates." It quickly became clear, however, that no one expected the average correlation to be zero.
What we told the directors of the firm was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill. This should have been shocking news to them, but it was not. There was no sign that they disbelieved us. How could they? After all, we had analyzed their own results, and they were certainly sophisticated enough to appreciate their implications, which we politely refrained from spelling out. We all went on calmly with our dinner, and I am quite sure that both our findings and their implications were quickly swept under the rug and that life in the firm went on just as before. The illusion of skill is not only an individual aberration; it is deeply ingrained in the culture of the industry. Facts that challenge such basic assumptions - and thereby threaten people's livelihood and self-esteem - are simply not absorbed. The mind does not digest them. This is particularly true of statistical studies of performance, which provide general facts that people will ignore if they conflict with their personal experience.
The next morning, we reported the findings to the advisers, and their response was equally bland. Their personal experience of exercising careful professional judgment on complex problems was far more compelling to them than an obscure statistical result. When we were done, one executive I dined with the previous evening drove me to the airport. He told me, with a trace of defensiveness, "I have done very well for the firm, and no one can take that away from me." I smiled and said nothing. But I thought, privately: Well, I took it away from you this morning. If your success was due mostly to chance, how much credit are you entitled to take for it?
We often interact with professionals who exercise their judgment with evident confidence, sometimes priding themselves on the power of their intuition. In a world rife with illusions of validity and skill, can we trust them? How do we distinguish the justified confidence of experts from the sincere overconfidence of professionals who do not know they are out of their depth? We can believe an expert who admits uncertainty but cannot take expressions of high confidence at face value. As I first learned on the obstacle field, people come up with coherent stories and confident predictions even when they know little or nothing. Overconfidence arises because people are often blind to their own blindness.
True intuitive expertise is learned from prolonged experience with good feedback on mistakes. You are probably an expert in guessing your spouse's mood from one word on the telephone; chess players find a strong move in a single glance at a complex position; and true legends of instant diagnoses are common among physicians. To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals' experience and on the quality and speed with which they discover their mistakes. Anesthesiologists have a better chance to develop intuitions than radiologists do. Many of the professionals we encounter easily pass both tests, and their off-the-cuff judgments deserve to be taken seriously. In general, however, you should not take assertive and confident people at their own evaluation unless you have independent reason to believe that they know what they are talking about. Unfortunately, this advice is difficult to follow: overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion.
(See also the article below)
Psychologist and Nobel Prize winner Daniel Kahneman says that, given a choice, we will usually make the wrong one.
Daniel Kahneman thinks he won the Nobel Prize for being a fool. Over lunch I judge that there is something about him that makes it unwise for me to tell him that this is not very likely. And anyway, if global prestige, the leadership of an entire field of economics and a worldwide bestselling book haven’t persuaded him, it’s unlikely that I will.
What Kahneman will accept, I think, is that he is not the only fool. I am a fool too. We’re pretty much all fools.
The Princeton professor has changed our understanding of ourselves and rocked economics to its foundations. If social scientists believe that in the past 30 years they have got much nearer to the truth, then Kahneman is one of the reasons why. If being a fool makes me an equal of Kahneman, I accept my status with equanimity.
Let’s start at another lunch. Let’s start in 1969 in the Cafe Rimon in Jerusalem. It’s the favourite haunt of junior faculty members from the Hebrew University. It’s Friday noon. The place is filling up as it usually did at that time. And a revolution is about to start.
On one side of the table is Kahneman, a psychologist with a statistical bent, with time served in the Israeli military telling the top brass what they didn’t want to hear — that their favoured method of choosing officers was hopeless, because the test results and the achievement of selected candidates weren’t correlated. And finding out that they ignored the evidence and ploughed on anyway.
On the other side is a slightly younger man. He’s Amos Tversky, who’s been working away in Michigan on the science of decision-making. The two men had come fresh from an argument. But over lunch, says Kahneman, “we just had a grand time”. The argument, a friendly intellectual affair, was concerned with whether most people were good instinctive statisticians. Tversky was an optimist; he thought we weren’t too bad at numbers. Kahneman disagreed. He told Tversky of his own experience. “One of my lines of research wasn’t working at all. I had adopted a rule that I would never be satisfied with one study and I would have to do the study again and get the same results before I would be sure ... I was fairly inconsistent and never got the same results.” Eventually, he realised why. His sample sizes were too small.
“I was teaching statistics. This was material that should have been transparent to me. But it wasn’t.” Was he the lone fool? Or, as he suggested to Tversky, were most people poor as intuitive statisticians?
It didn’t take long for Tversky to become convinced. And the two embarked on studies that showed that Kahneman was right. People trust information garnered from ridiculously small samples, they confuse correlation (two facts are related) with causation (one fact causes the other) and they are for ever seeing patterns in events and numbers that are, in fact, random.
It was — this paper that Kahneman now calls “a joke, a serious joke” — just the start. The beginning of a revolution against standard economic thinking. In paper after paper, following this first one, Kahneman and Tversky revealed the inadequacy of the most basic assumption made by economists — that man is rational. Ultimately, this work created a new strand of economic thinking — behavioural economics — and earned Kahneman the Nobel Prize for Economics in 2002, even though he is not an economist.
Did it take economists too long to see the point? He says that Tversky always joked that economists didn’t really believe in rationality since they thought it was true of people in general, but not of their spouse or their dean. And then he adds that 30 years between the first paper and the Nobel Prize is “very, very fast”.
Let me give you an example of the departure the work represents. You are offered a bet; a 50:50 gamble, a coin toss. Heads and you lose $100, tails and you win $150. Classic economic theory is clear about what you will do. You’ll take the bet, because the expected value is positive. But in reality? People don’t. They are so averse to losing something they already have that even a much bigger potential gain doesn’t compensate them for the risk.
Here’s another example. We react differently to the same question framed in a different way. Let’s say a doctor is asked to make a decision about two treatments for lung cancer: surgery or radiation. The five-year survival rates favour surgery, but there are short-term risks. When told that the one-month survival rate after surgery is 90 per cent, as many as 84 per cent of the doctors chose the surgical option. When the same point was put in another way — there is 10 per cent mortality in the first month — only half the doctors chose surgery.
And this is just one of dozens of ways we behave irrationally.
We are, for instance, prone to something called the halo effect. “If you like the President’s politics,” Kahneman has written, “you probably like his voice and appearance as well.” And we package our opinions up to make neat narratives and help us form an identity even when the logical link isn’t there. “There is,” Kahneman told me, “a very high correlation in the US between attitudes to marriage and beliefs about global warming.”
We tend to use information that comes quickly to mind in order to form judgments, producing a predictable bias. This explains how Robbie Williams came sixth in a poll to identify the most influential musicians of the past millennium, just ahead of Mozart.
The list of such biases is a long one. We have, Kahneman argues, two types of thought processes. System one: quick, intuitive, automatic, but prone to being fooled by its own mental shortcuts. And system two: more contemplative, deeper and harder to deploy. This can correct for error, but more often acts as a lawyer and lobbyist for our emotions.
And things get worse. Kahneman doesn’t really think we can do much about them. Even knowing that they are there doesn’t help you overcome them. Strangely enough, if he had done he wouldn’t have written his new book in the first place.
One of our biases is that we can ignore the lessons of experience. A group of people compiling a report will estimate they can do it in a year, even though every other similar report has taken comparable groups five years. Kahneman knew this, yet still wrote Thinking, Fast and Slow.
“When I started the book I told Richard Thaler [the author of Nudge] that I had 18 months to finish it. He laughed hysterically and said, ‘You have written about that, haven’t you? It’s not going to work the way you expect.’ ” How long did it take you, I ask. “Four years, and it was very painful. It’s not yet clear to me that it was a good idea to write the book in spite of its being quite successful.” I assure him, having read it, that it was indeed a good idea. “For you it’s easy,” he replies.
The book is dedicated to Tversky and many of the ideas in it are his, but tragically he isn’t here to enjoy its reception. He died of cancer in 1996. Over our lunch his friend talks of him often. Indeed, Kahneman finds it hard to accept the praise and recognition he gets because Tversky isn’t around to share it. “For me,” he says, “winning the Nobel Prize has been a much smaller psychological event than for most other people because I always felt that I was part of a winning team and by myself I would never have won it.”
I point out that, despite this, winning the Nobel Prize is quite cool. “Well, yeah, it’s quite good,” he eventually accepts. “For reasons that people don’t appreciate, by the way, and which took me completely by surprise. What makes it very good is the pleasure that it gives other people. Everybody who knows you is thrilled.” Yes, I say, I told my mum I was having lunch with a Nobel Prize winner. “You know, people who wouldn’t come to your funeral nevertheless are absolutely thrilled.” I promise to come if my schedule allows.
His downbeat attitude extends to academic life. “I discouraged my daughter and son-in-law from entering academic life.” Why did you do that? “Two things. You shouldn’t be in academic life if you have a thin skin, and the other one is that you absolutely have to have the ability to exaggerate the importance of what you are doing. If you can’t do that, you can’t be an academic, because a very small problem has to look big to you, otherwise you can’t mobilise yourself to spend so much time and effort on it.”
But when I put it to him that the financial crisis vindicated his own work by showing up the irrational behaviour of bankers, he replies: “Oddly enough, not very much. Standard economics explains that very well, what happened.” The bankers were acting rationally in their own interests rather than that of their banks. “So my sense is that it is undoubtedly true that behavioural economics have gained greatly in credibility from the crisis, but I am not sure that this is for the right reason.”
All this modesty, all of it becoming, and none of it false. Yet it shouldn’t be mistaken for self-doubt. Kahneman knows what he has done and stands by his work. When I present some academic criticisms of behavioural economics — for instance that the effects are not very large — he is quick to call the point “not very serious”. He knows, too, the impact he has had.
It’s just I think that it’s hard to spend your life studying human foibles, to conclude that they are ineradicable and then take yourself too seriously. Daniel Kahneman certainly doesn’t.
Alzheimer's Risk Factors
Alzheimer's disease cases could potentially be prevented through lifestyle changes and treatment or prevention of chronic medical conditions, according to a study led by Deborah Barnes, PhD, a mental health researcher at the San Francisco VA Medical Center.
Analyzing data from studies around the world involving hundreds of thousands of participants, Barnes concluded that worldwide, the biggest modifiable risk factors for Alzheimer's disease are, in descending order of magnitude, low education, smoking, physical inactivity, depression, mid-life hypertension, diabetes and mid-life obesity.
In the United States, Barnes found that the biggest modifiable risk factors are physical inactivity, depression, smoking, mid-life hypertension, mid-life obesity, low education and diabetes.
Together, these risk factors are associated with up to 51 percent of Alzheimer's cases worldwide (17.2 million cases) and up to 54 percent of Alzheimer's cases in the United States (2.9 million cases), according to Barnes.
"What's exciting is that this suggests that some very simple lifestyle changes, such as increasing physical activity and quitting smoking, could have a tremendous impact on preventing Alzheimer's and other dementias in the United States and worldwide," said Barnes, who is also an associate professor of psychiatry at the University of California, San Francisco.
The study results were presented at the 2011 meeting of the Alzheimer's Association International Conference on Alzheimer's Disease in Paris, France, and published online on July 19, 2011 in Lancet Neurology.
Barnes cautioned that her conclusions are based on the assumption that there is a causal association between each risk factor and Alzheimer's disease. "We are assuming that when you change the risk factor, then you change the risk," Barnes said. "What we need to do now is figure out whether that assumption is correct."
Senior investigator Kristine Yaffe, MD, chief of geriatric psychiatry at SFVAMC, noted that the number of people with Alzheimer's disease is expected to triple over the next 40 years. "It would be extremely significant if we could find out how to prevent even some of those cases," said Yaffe, who is also a professor of psychiatry, neurology and epidemiology at UCSF.
The mental fallout from the Sept. 11 attacks has taught psychologists far more about their field's limitations than about their potential to shape and predict behavior, a wide-ranging review has found.
The report, a collection of articles due to be published next month in a special issue of the journal American Psychologist, relates a succession of humbling missteps after the attacks.
Experts greatly overestimated the number of people in New York who would suffer lasting emotional distress.
Therapists rushed in to soothe victims using methods that later proved to be harmful to some.
And they fell to arguing over whether watching an event on television could produce the same kind of traumatic reaction as actually being there.
These and other stumbles have changed the way mental health workers respond to traumatic events, said Roxane Cohen Silver, a psychologist at the University of California, Irvine, who oversaw the special issue along with editors at the journal. "You have to understand," she said, "that before 9/11 we didn't have any good way to estimate the response to something like this other than - well, estimates" based on earthquakes and other trauma.
Chaos reigned in the New York area after the twin towers fell, both on the streets and in the minds of many mental health professionals who felt compelled to help but were unsure how. Therapists by the dozens volunteered their services, eager to relieve the suffering of anyone who looked stricken. Freudian analysts installed themselves at fire stations, unbidden and unpaid, to help devastated firefighters. Employee assistance programs offered free therapy, warning of the consequences of letting people grieve on their own.
Some given treatment undoubtedly benefited, researchers say, but others became annoyed or more upset. At least one commentator referred to therapists' response as "trauma tourism."
"We did a case study in New York and couldn't really tell if people had been helped by the providers - but the providers felt great about it," said Patricia Watson, a co-author of one of the articles and associate director of the terrorism and disaster programs at the National Center for Child Traumatic Stress. "It makes sense; we know that altruism makes people feel better."
But researchers later discovered that the standard approach at the time, in which the therapist urges a distressed person to talk through the experience and emotions, backfires for many people. They plunge even deeper into anxiety and depression when forced to relive the mayhem.
Crisis response teams now take a much less intense approach called psychological first aid, teaching basic coping skills and having victims recount experiences only if it seems helpful.
One of the biggest lessons of Sept. 11, said Richard McNally, a psychologist at Harvard who did not contribute to the new report, was that it "brought attention to the limitations of this debriefing."
Another, he said, was that it drove home the fact that people are far more resilient than experts thought. No one disputes that thousands of Americans who lost loved ones or fled from the collapsing skyscrapers are still living with deep emotional wounds. Yet estimates after the attack projected epidemic levels of post-traumatic stress, afflicting perhaps 100,000 people, or 35 percent of those exposed to the attack in one way or another.
Later studies found rates closer to 10 percent for first responders, and lower for other New Yorkers. (The prevalence in children was slightly higher.)"Some of us were making this case about resilience well before 9/11, but what the attack did was bring a lot more attention to it," said George A. Bonanno, a psychology professor at Columbia.
It also stirred a debate that may soon change the definition of post-traumatic stress. In the breathless weeks and months after the attack, experts and news articles warned that people who had no direct connection to the tragedy would also develop diagnosable symptoms merely from seeing the images on a television screen.
Dr. Silver, who was among the first to question overestimates of trauma, has found evidence for such effects in her own studies. "The distress spilled over the outside communities, mostly to people who saw the images and had pre-existing psychological problems," she said. "The numbers are low, but I think the data is convincing."
Dr. McNally, among others, disagrees. "The notion that TV caused P.T.S.D. seems absurd," he said in an e-mail.
The editors of the Diagnostic and Statistical Manual, the so-called encyclopedia of mental disorders compiled by the American Psychiatric Association, are debating whether to change the criteria for post-traumatic stress to exclude such at-a-distance cases.
The new report reviewed hundreds of other types of 9/11 studies, political and social. Americans on average became more prejudiced toward Arabs after the attack, as well as more likely to contribute to charities and more supportive of aggressive government action against suspected terrorists.
But these and other findings were not new; studies after previous attacks in other countries found similar things. For all their fury and devastation, the attacks gave rise to no new theories of behavior, no new therapies.
Instead, some authors said, the chief effect on the social sciences was to caution against applying theories so readily to real life. Another author in the new collection, Philip E. Tetlock, a psychologist at the Wharton School at the University of Pennsylvania, notes that intelligence agencies employ scientists to try to predict the behavior of foreign leaders and terrorists - and that their track record has been mixed.
"The closer scientists come to applying their favorite abstractions to real-world problems," the article concludes, "the harder it becomes to keep track of the inevitably numerous variables and to resist premature closure on desired conclusions."
Why Do We Choke Under Pressure?
Whether it's missing a golf putt, scoring poorly on a big test, or blowing a job interview or sales presentation, you've likely had some first-hand experience with choking under pressure. Performing below your abilities in a stress-filled situation happens in the workplace and at school, in sports and in the arts - and it's not simply that your nerves get the better of you.
There are two main theories about why people choke: One is that thoughts and worries distract your attention from the task at hand, and you don't access your talents. A second explanation suggests that pressure causes individuals to think too much about all the skills involved and this messes up their execution.
Psychologists are hoping to understand when and why some people are more likely to succeed in high-stakes settings while others fail. But people usually think all high-pressure situations have the same effects on performance, says Marci DeCaro, an assistant professor in the department of psychological and brain sciences at the University of Louisville in Louisville, Kentucky.
DeCaro and a team of researchers recently published a study in the Journal of Experimental Psychology that found not all high-pressure situations are the same, and they looked at how different types of pressure influenced performance.
They compared "monitoring pressure" - being watched by others, whether it's a teacher, audience, or video camera - to "outcome pressure" -- seeking a high test score, prize money, scholarship, or title -- to lower-key situations.
In one experiment, scientists tracked 130 undergraduate students ability to complete two sets of tasks on a computer in which they were asked to correctly categorize shapes and symbols. One-third of the group was in a pressure-monitoring condition (they were told their performance was being videotaped), another group was in an outcome-pressure situation (they were told their accuracy on the first task had been determined, and they were offered a financial incentive to perform 20 percent better), and a third group was a low-pressure control.
Researchers found that tempting students with money hurt their performance by distracting them from an attention-demanding task, perhaps because they worried more and relied less on their working memory. Believing you're being watched caused students to focus their attention on the skills needed to complete a proceduralized task and less on the outcome, and their performance suffered.
Pressure itself isn't always bad, DeCaro says, it depends on the task and type of pressure encountered. "Pressure hurts performance if it leads you to pay attention in a way that is bad for the particular task you're doing," says DeCaro. Some skills are better performed when you devote a lot of attention to them, like solving math problems, she explains, while others (a well-learned sports skill like your golf putt) are performed better without thinking too closely about the steps you're taking.
Knowing what kinds of pressure situations lead you to focus too much or not enough, might help you find ways to overcome the problem.
How to tell when someone's lying
Professor of psychology R. Edward Geiselman at the University of California, Los Angeles, has been studying for years how to effectively detect deception to ensure public safety, particularly in the wake of renewed threats against the U.S. following the killing of Osama bin Laden.
Geiselman and his colleagues have identified several indicators that a person is being deceptive. The more reliable red flags that indicate deceit, Geiselman said, include:
When questioned, deceptive people generally want to say as little as possible. Geiselman initially thought they would tell an elaborate story, but the vast majority give only the bare-bones. Studies with college students, as well as prisoners, show this. Geiselman's investigative interviewing techniques are designed to get people to talk.
Although deceptive people do not say much, they tend to spontaneously give a justification for what little they are saying, without being prompted.
They tend to repeat questions before answering them, perhaps to give themselves time to concoct an answer.
They often monitor the listener's reaction to what they are saying. "They try to read you to see if you are buying their story," Geiselman said.
They often initially slow down their speech because they have to create their story and monitor your reaction, and when they have it straight "will spew it out faster," Geiselman said. Truthful people are not bothered if they speak slowly, but deceptive people often think slowing their speech down may look suspicious. "Truthful people will not dramatically alter their speech rate within a single sentence," he said.
They tend to use sentence fragments more frequently than truthful people; often, they will start an answer, back up and not complete the sentence.
They are more likely to press their lips when asked a sensitive question and are more likely to play with their hair or engage in other 'grooming' behaviors. Gesturing toward one's self with the hands tends to be a sign of deception; gesturing outwardly is not.
Truthful people, if challenged about details, will often deny that they are lying and explain even more, while deceptive people generally will not provide more specifics.
When asked a difficult question, truthful people will often look away because the question requires concentration, while dishonest people will look away only briefly, if at all, unless it is a question that should require intense concentration.
If dishonest people try to mask these normal reactions to lying, they would be even more obvious, Geiselman said. Among the techniques he teaches to enable detectives to tell the truth from lies are:
Have people tell their story backwards, starting at the end and systematically working their way back. Instruct them to be as complete and detailed as they can. This technique, part of a "cognitive interview" Geiselman co-developed with Ronald Fisher, a former UCLA psychologist now at Florida International University, "increases the cognitive load to push them over the edge." A deceptive person, even a 'professional liar,' is "under a heavy cognitive load" as he tries to stick to his story while monitoring your reaction.
Ask open-ended questions to get them to provide as many details and as much complete information as possible ("Can you tell me more about ...?" "Tell me exactly..."). First ask general questions, and only then get more specific.
Don't interrupt, let them talk and use silent pauses to encourage them to talk.
Humans have a perplexing tendency to fear rare threats such as shark attacks while blithely ignoring far greater risks like unsafe sex and an unhealthy diet. Those illusions are not just silly - they make the world a more dangerous place.
Last march, as the world watched the aftermath of the Japanese earthquake/tsunami/nuclear near-meltdown, a curious thing began happening in West Coast pharmacies. Bottles of potassium iodide pills used to treat certain thyroid conditions were flying off the shelves, creating a run on an otherwise obscure nutritional supplement. Online, prices jumped from $10 a bottle to upwards of $200. Some residents in California, unable to get the iodide pills, began bingeing on seaweed, which is known to have high iodine levels.
The Fukushima disaster was practically an infomercial for iodide therapy. The chemical is administered after nuclear exposure because it helps protect the thyroid from radioactive iodine, one of the most dangerous elements of nuclear fallout. Typically, iodide treatment is recommended for residents within a 10-mile radius of a radiation leak. But people in the United States who were popping pills were at least 5,000 miles away from the Japanese reactors. Experts at the Environmental Protection Agency estimated that the dose of radiation that reached the western United States was equivalent to 1/100,000 the exposure one would get from a round-trip international flight.
Although spending $200 on iodide pills for an almost nonexistent threat seems ridiculous (and could even be harmful - side effects include skin rashes, nausea, and possible allergic reactions), 40 years of research into the way people perceive risk shows that it is par for the course. Earthquakes? Tsunamis? Those things seem inevitable, accepted as acts of God. But an invisible, man-made threat associated with Godzilla and three-eyed fish? Now that's something to keep you up at night. "There's a lot of emotion that comes from the radiation in Japan," says cognitive psychologist Paul Slovic, an expert on decision making and risk assessment at the University of Oregon. "Even though the earthquake and tsunami took all the lives, all of our attention was focused on the radiation."
We like to think that humans are supremely logical, making decisions on the basis of hard data and not on whim. For a good part of the 19th and 20th centuries, economists and social scientists assumed this was true too. The public, they believed, would make rational decisions if only it had the right pie chart or statistical table.
But in the late 1960s and early 1970s, that vision of homo economicus - a person who acts in his or her best interest when given accurate information - was knee-capped by researchers investigating the emerging field of risk perception. What they found, and what they have continued teasing out since the early 1970s, is that humans have a hell of a time accurately gauging risk. Not only do we have two different systems - logic and instinct, or the head and the gut - that sometimes give us conflicting advice, but we are also at the mercy of deep-seated emotional associations and mental shortcuts.
People are likely to react with little fear to certain types of objectively dangerous risk that evolution has not prepared them for, such as guns, hamburgers, automobiles, smoking, and unsafe sex, even when they recognize the threat at a cognitive level.
Even if a risk has an objectively measurable probability - like the chances of dying in a fire, which are 1 in 1,177 - people will assess the risk subjectively, mentally calibrating the risk based on dozens of subconscious calculations. If you have been watching news coverage of wildfires in Texas nonstop, chances are you will assess the risk of dying in a fire higher than will someone who has been floating in a pool all day. If the day is cold and snowy, you are less likely to think global warming is a threat.
Our hardwired gut reactions developed in a world full of hungry beasts and warring clans, where they served important functions. Letting the amygdala (part of the brain's emotional core) take over at the first sign of danger, milliseconds before the neocortex (the thinking part of the brain) was aware a spear was headed for our chest, was probably a very useful adaptation. Even today those nano-pauses and gut responses save us from getting flattened by buses or dropping a brick on our toes. But in a world where risks are presented in parts-per-billion statistics or as clicks on a Geiger counter, our amygdala is out of its depth.
A risk-perception apparatus permanently tuned for avoiding mountain lions makes it unlikely that we will ever run screaming from a plate of fatty mac 'n' cheese. "People are likely to react with little fear to certain types of objectively dangerous risk that evolution has not prepared them for, such as guns, hamburgers, automobiles, smoking, and unsafe sex, even when they recognize the threat at a cognitive level," says Carnegie Mellon University researcher George Loewenstein, whose seminal 2001 paper, "Risk as Feelings," debunked theories that decision making in the face of risk or uncertainty relies largely on reason. "Types of stimuli that people are evolutionarily prepared to fear, such as caged spiders, snakes, or heights, evoke a visceral response even when, at a cognitive level, they are recognized to be harmless," he says.
Even Charles Darwin failed to break the amygdala's iron grip on risk perception. As an experiment, he placed his face up against the puff adder enclosure at the London Zoo and tried to keep himself from flinching when the snake struck the plate glass. He failed.
The result is that we focus on the one-in-a-million bogeyman while virtually ignoring the true risks that inhabit our world. News coverage of a shark attack can clear beaches all over the country, even though sharks kill a grand total of about one American annually, on average. That is less than the death count from cattle, which gore or stomp 20 Americans per year. Drowning, on the other hand, takes 3,400 lives a year, without a single frenzied call for mandatory life vests to stop the carnage. A whole industry has boomed around conquering the fear of flying, but while we down beta-blockers in coach, praying not to be one of the 48 average annual airline casualties, we typically give little thought to driving to the grocery store, even though there are more than 30,000 automobile fatalities each year.
In short, our risk perception is often at direct odds with reality. All those people bidding up the cost of iodide? They would have been better off spending $10 on a radon testing kit. The colorless, odorless, radioactive gas, which forms as a by-product of natural uranium decay in rocks, builds up in homes, causing lung cancer. According to the Environmental Protection Agency, radon exposure kills 21,000 Americans annually.
David Ropeik, a consultant in risk communication and the author of How Risky Is It, Really? Why Our Fears Don't Always Match the Facts, has dubbed this disconnect the perception gap. "Even perfect information perfectly provided that addresses people's concerns will not convince everyone that vaccines don't cause autism, or that global warming is real, or that fluoride in the drinking water is not a Commie plot," he says. "Risk communication can't totally close the perception gap, the difference between our fears and the facts."
In the early 1970s, psychologists Daniel Kahneman, now at Princeton University, and Amos Tversky, who passed away in 1996, began investigating the way people make decisions, identifying a number of biases and mental shortcuts, or heuristics, on which the brain relies to make choices. Later, Paul Slovic and his colleagues Baruch Fischhoff, now a professor of social sciences at Carnegie Mellon University, and psychologist Sarah Lichtenstein began investigating how these leaps of logic come into play when people face risk. They developed a tool, called the psychometric paradigm, that describes all the little tricks our brain uses when staring down a bear or deciding to finish the 18th hole in a lighting storm.
Many of our personal biases are unsurprising. For instance, the optimism bias gives us a rosier view of the future than current facts might suggest. We assume we will be richer 10 years from now, so it is fine to blow our savings on a boat - we'll pay it off then.
Confirmation bias leads us to prefer information that backs up our current opinions and feelings and to discount information contradictory to those opinions. We also have tendencies to conform our opinions to those of the groups we identify with, to fear man-made risks more than we fear natural ones, and to believe that events causing dread - the technical term for risks that could result in particularly painful or gruesome deaths, like plane crashes and radiation burns - are inherently more risky than other events.
But it is heuristics - the subtle mental strategies that often give rise to such biases - that do much of the heavy lifting in risk perception. The 'availability' heuristic says that the easier a scenario is to conjure, the more common it must be. It is easy to imagine a tornado ripping through a house; that is a scene we see every spring on the news, and all the time on reality TV and in movies. Now try imagining someone dying of heart disease. You probably cannot conjure many breaking-news images for that one, and the drawn-out process of athero-sclerosis will most likely never be the subject of a summer thriller.
The effect? Twisters feel like an immediate threat, although we have only a 1-in-46,000 chance of being killed by a cataclysmic storm. Even a terrible tornado season like the one last spring typically yields fewer than 500 tornado fatalities. Heart disease, on the other hand, which eventually kills 1 in every 6 people in this country, and 800,000 annually, hardly even rates with our gut.
The 'representative' heuristic makes us think something is probable if it is part of a known set of characteristics. John wears glasses, is quiet, and carries a calculator. John is therefore a mathematician? An engineer? His attributes taken together seem to fit the common stereotype.
But of all the mental rules of thumb and biases banging around in our brain, the most influential in assessing risk is the 'affect' heuristic. Slovic calls affect a "faint whisper of emotion" that creeps into our decisions. Simply put, positive feelings associated with a choice tend to make us think it has more benefits. Negative correlations make us think an action is riskier.
One study by Slovic showed that when people decide to start smoking despite years of exposure to antismoking campaigns, they hardly ever think about the risks. Instead, it's all about the short-term "hedonic" pleasure. The good outweighs the bad, which they never fully expect to experience.
Our fixation on illusory threats at the expense of real ones influences more than just our personal lifestyle choices. Public policy and mass action are also at stake. The Office of National Drug Control Policy reports that prescription drug overdoses have killed more people than crack and heroin combined did in the 1970s and 1980s. Law enforcement and the media were obsessed with crack, yet it was only recently that prescription drug abuse merited even an after-school special.
Despite the many obviously irrational ways we behave, social scientists have only just begun to systematically document and understand this central aspect of our nature. In the 1960s and 1970s, many still clung to the homo economicus model. They argued that releasing detailed information about nuclear power and pesticides would convince the public that these industries were safe. But the information drop was an epic backfire and helped spawn opposition groups that exist to this day. Part of the resistance stemmed from a reasonable mistrust of industry spin. Horrific incidents like those at Love Canal and Three Mile Island did not help. Yet one of the biggest obstacles was that industry tried to frame risk purely in terms of data, without addressing the fear that is an instinctual reaction to their technologies.
The strategy persists even today. In the aftermath of Japan's nuclear crisis, many nuclear-energy boosters were quick to cite a study commissioned by the Boston-based nonprofit Clean Air Task Force. The study showed that pollution from coal plants is responsible for 13,000 premature deaths and 20,000 heart attacks in the United States each year, while nuclear power has never been implicated in a single death in this country.
True as that may be, numbers alone cannot explain away the cold dread caused by the specter of radiation. Just think of all those alarming images of workers clad in radiation suits waving Geiger counters over the anxious citizens of Japan. Seaweed, anyone?
At least a few technology promoters have become much more savvy in understanding the way the public perceives risk. The nanotechnology world in particular has taken a keen interest in this process, since even in its infancy it has faced high-profile fears. Nanotech, a field so broad that even its backers have trouble defining it, deals with materials and devices whose components are often smaller than 1/100,000,000,000 of a meter. In the late 1980s, the book Engines of Creation by the nanotechnologist K. Eric Drexler put forth the terrifying idea of nanoscale self-replicating robots that grow into clouds of 'gray goo' and devour the world. Soon gray goo was turning up in video games, magazine stories, and delightfully bad Hollywood action flicks (see, for instance, the last G.I. Joe movie).
The odds of nano-technology's killing off humanity are extremely remote, but the science is obviously not without real risks. In 2008 a study led by researchers at the University of Edinburgh suggested that carbon nanotubes, a promising material that could be used in everything from bicycles to electrical circuits, might interact with the body the same way asbestos does. In another study, scientists at the University of Utah found that nanoscopic particles of silver used as an antimicrobial in hundreds of products, including jeans, baby bottles, and washing machines, can deform fish embryos.
The nanotech community is eager to put such risks in perspective. "In Europe, people made decisions about genetically modified food irrespective of the technology," says Andrew Maynard, director of the Risk Science Center at the University of Michigan and an editor of the International Handbook on Regulating Nanotechnologies. "People felt they were being bullied into the technology by big corporations, and they didn't like it. There have been very small hints of that in nanotechnology." He points to incidents in which sunblock makers did not inform the public they were including zinc oxide nanoparticles in their products, stoking the skepticism and fears of some consumers.
For Maynard and his colleagues, influencing public perception has been an uphill battle. A 2007 study conducted by the Cultural Cognition Project at Yale Law School and coauthored by Paul Slovic surveyed 1,850 people about the risks and benefits of nanotech. Even though 81 percent of participants knew nothing or very little about nanotechnology before starting the survey, 89 percent of all respondents said they had an opinion on whether nanotech's benefits outweighed its risks.
In other words, people made a risk judgment based on factors that had little to do with any knowledge about the technology itself. And as with public reaction to nuclear power, more information did little to unite opinions. "Because people with different values are predisposed to draw different factual conclusions from the same information, it cannot be assumed that simply supplying accurate information will allow members of the public to reach a consensus on nanotechnology risks, much less a consensus that promotes their common welfare," the study concluded.
It should come as no surprise that nanotech hits many of the fear buttons in the psychometric paradigm: It is a man-made risk; much of it is difficult to see or imagine; and the only available images we can associate with it are frightening movie scenes, such as a cloud of robots eating the Eiffel Tower. "In many ways, this has been a grand experiment in how to introduce a product to the market in a new way," Maynard says. "Whether all the up-front effort has gotten us to a place where we can have a better conversation remains to be seen."
That job will be immeasurably more difficult if the media - in particular cable news - ever decide to make nanotech their fear du jour. In the summer of 2001, if you switched on the television or picked up a news magazine, you might think the ocean's top predators had banded together to take on humanity. After 8-year-old Jessie Arbogast's arm was severed by a seven-foot bull shark on Fourth of July weekend while the child was playing in the surf of Santa Rosa Island, near Pensacola, Florida, cable news put all its muscle behind the story. Ten days later, a surfer was bitten just six miles from the beach where Jessie had been mauled. Then a lifeguard in New York claimed he had been attacked. There was almost round-the-clock coverage of the "Summer of the Shark," as it came to be known. By August, according to an analysis by historian April Eisman of Iowa State University, it was the third-most-covered story of the summer until the September 11 attacks knocked sharks off the cable news channels.
All that media created a sort of feedback loop. Because people were seeing so many sharks on television and reading about them, the 'availability' heuristic was screaming at them that sharks were an imminent threat.
While we down beta-blockers in coach, praying not to be one of the 48 average annual airline casualties, we typically give little thought to driving to the grocery store, even though there are more than 30,000 automobile fatalities per year in the United States.
"Certainly anytime we have a situation like that where there's such overwhelming media attention, it's going to leave a memory in the population," says George Burgess, curator of the International Shark Attack File at the Florida Museum of Natural History, who fielded 30 to 40 media calls a day that summer. "Perception problems have always been there with sharks, and there's a continued media interest in vilifying them. It makes a situation where the risk perceptions of the populace have to be continually worked on to break down stereotypes. Anytime there's a big shark event, you take a couple steps backward, which requires scientists and conservationists to get the real word out."
Then again, getting out the real word comes with its own risks - like the risk of getting the real word wrong. Misinformation is especially toxic to risk perception because it can reinforce generalized confirmation biases and erode public trust in scientific data. As scientists studying the societal impact of the Chernobyl meltdown have learned, doubt is difficult to undo. In 2006, 20 years after reactor number 4 at the Chernobyl nuclear power plant was encased in cement, the World Health Organization (WHO) and the International Atomic Energy Agency released a report compiled by a panel of 100 scientists on the long-term health effects of the level 7 nuclear disaster and future risks for those exposed. Among the 600,000 recovery workers and local residents who received a significant dose of radiation, the WHO estimates that up to 4,000 of them, or 0.7 percent, will develop a fatal cancer related to Chernobyl. For the 5 million people living in less contaminated areas of Ukraine, Russia, and Belarus, radiation from the meltdown is expected to increase cancer rates less than 1 percent.
Even though the percentages are low, the numbers are little comfort for the people living in the shadow of the reactor's cement sarcophagus who are literally worrying themselves sick. In the same report, the WHO states that "the mental health impact of Chernobyl is the largest problem unleashed by the accident to date," pointing out that fear of contamination and uncertainty about the future has led to widespread anxiety, depression, hypochondria, alcoholism, a sense of victimhood, and a fatalistic outlook that is extreme even by Russian standards. A recent study in the journal Radiology concludes that "the Chernobyl accident showed that overestimating radiation risks could be more detrimental than underestimating them. Misinformation partially led to traumatic evacuations of about 200,000 individuals, an estimated 1,250 suicides, and between 100,000 and 200,000 elective abortions."
It is hard to fault the Chernobyl survivors for worrying, especially when it took 20 years for the scientific community to get a grip on the aftereffects of the disaster, and even those numbers are disputed. An analysis commissioned by Greenpeace in response to the WHO report predicts that the Chernobyl disaster will result in about 270,000 cancers and 93,000 fatal cases.
Chernobyl is far from the only chilling illustration of what can happen when we get risk wrong. During the year following the September 11 attacks, millions of Americans opted out of air travel and slipped behind the wheel instead. While they crisscrossed the country, listening to breathless news coverage of anthrax attacks, extremists, and Homeland Security, they faced a much more concrete risk. All those extra cars on the road increased traffic fatalities by nearly 1,600. Airlines, on the other hand, recorded no fatalities.
It is unlikely that our intellect can ever paper over our gut reactions to risk. But a fuller understanding of the science is beginning to percolate into society. Earlier this year, David Ropeik and others hosted a conference on risk in Washington, D.C., bringing together scientists, policy makers, and others to discuss how risk perception and communication impact society. "Risk perception is not emotion and reason, or facts and feelings. It's both, inescapably, down at the very wiring of our brain," says Ropeik. "We can't undo this. What I heard at that meeting was people beginning to accept this and to realize that society needs to think more holistically about what risk means."
Ropeik says policy makers need to stop issuing reams of statistics and start making policies that manipulate our risk perception system instead of trying to reason with it. Cass Sunstein, a Harvard law professor who is now the administrator of the White House Office of Information and Regulatory Affairs, suggests a few ways to do this in his book Nudge: Improving Decisions About Health, Wealth, and Happiness, published in 2008. He points to the organ donor crisis in which thousands of people die each year because others are too fearful or uncertain to donate organs. People tend to believe that doctors won't work as hard to save them, or that they won't be able to have an open- casket funeral (both false). And the gory mental images of organs being harvested from a body give a definite negative affect to the exchange. As a result, too few people focus on the lives that could be saved. Sunstein suggests - controversially - 'mandated choice,' in which people must check 'yes' or 'no' to organ donation on their driver's license application. Those with strong feelings can decline. Some lawmakers propose going one step further and presuming that people want to donate their organs unless they opt out.
In the end, Sunstein argues, by normalizing organ donation as a routine medical practice instead of a rare, important, and gruesome event, the policy would short-circuit our fear reactions and nudge us toward a positive societal goal. It is this type of policy that Ropeik is trying to get the administration to think about, and that is the next step in risk perception and risk communication. "Our risk perception is flawed enough to create harm," he says, "but it's something society can do something about."
Conscious and Unconscious
Neuroscientist David Eagleman explores the processes and skills of the subconscious mind, which our conscious selves rarely consider.
Only a tiny fraction of the brain is dedicated to conscious behavior. The rest works feverishly behind the scenes regulating everything from breathing to mate selection. In fact, neuroscientist David Eagleman of Baylor College of Medicine argues that the unconscious workings of the brain are so crucial to everyday functioning that their influence often trumps conscious thought. To prove it, he explores little-known historical episodes, the latest psychological research, and enduring medical mysteries, revealing the bizarre and often inexplicable mechanisms underlying daily life.
Eagleman's theory is epitomized by the deathbed confession of the 19th-century mathematician James Clerk Maxwell, who developed fundamental equations unifying electricity and magnetism. Maxwell declared that 'something within him' had made the discoveries; he actually had no idea how he'd achieved his great insights. It is easy to take credit after an idea strikes you, but in fact, neurons in your brain secretly perform an enormous amount of work before inspiration hits. The brain, Eagleman argues, runs its show incognito. Or, as Pink Floyd put it, 'There's someone in my head, but it's not me.'
There is a looming chasm between what your brain knows and what your mind is capable of accessing. Consider the simple act of changing lanes while driving a car. Try this: Close your eyes, grip an imaginary steering wheel, and go through the motions of a lane change. Imagine that you are driving in the left lane and you would like to move over to the right lane. Before reading on, actually try it. I'll give you 100 points if you can do it correctly.
It's a fairly easy task, right? I'm guessing that you held the steering wheel straight, then banked it over to the right for a moment, and then straightened it out again. No problem.
Like almost everyone else, you got it completely wrong. The motion of turning the wheel rightward for a bit, then straightening it out again would steer you off the road: you just piloted a course from the left lane onto the sidewalk. The correct motion for changing lanes is banking the wheel to the right, then back through the center, and continuing to turn the wheel just as far to the left side, and only then straightening out. Don't believe it? Verify it for yourself when you're next in the car. It's such a simple motor task that you have no problem accomplishing it in your daily driving. But when forced to access it consciously, you're flummoxed.
The lane-changing example is one of a thousand. You are not consciously aware of the vast majority of your brain's ongoing activities, nor would you want to be - it would interfere with the brain's well-oiled processes. The best way to mess up your piano piece is to concentrate on your fingers; the best way to get out of breath is to think about your breathing; the best way to miss the golf ball is to analyze your swing. This wisdom is apparent even to children, and we find it immortalized in poems such as 'The Puzzled Centipede':
A centipede was happy quite,
Until a frog in fun
Said, "Pray tell which leg comes after which?"
This raised her mind to such a pitch,
She lay distracted in the ditch
Not knowing how to run.
The ability to remember motor acts like changing lanes is called procedural memory, and it is a type of implicit memory - meaning that your brain holds knowledge of something that your mind cannot explicitly access. Riding a bike, tying your shoes, typing on a keyboard, and steering your car into a parking space while speaking on your cell phone are examples of this. You execute these actions easily but without knowing the details of how you do it. You would be totally unable to describe the perfectly timed choreography with which your muscles contract and relax as you navigate around other people in a cafeteria while holding a tray, yet you have no trouble doing it. This is the gap between what your brain can do and what you can tap into consciously.
The concept of implicit memory has a rich, if little-known, tradition. By the early 1600s, Rene Descartes had already begun to suspect that although experience with the world is stored in memory, not all memory is accessible. The concept was rekindled in the late 1800s by the psychologist Hermann Ebbinghaus, who wrote that "most of these experiences remain concealed from consciousness and yet produce an effect which is significant and which authenticates their previous existence."
To the extent that consciousness is useful, it is useful in small quantities, and for very particular kinds of tasks. It's easy to understand why you would not want to be consciously aware of the intricacies of your muscle movement, but this can be less intuitive when applied to your perceptions, thoughts, and beliefs, which are also final products of the activity of billions of nerve cells. We turn to these now.
Chicken Sexers and Plane Spotters
When chicken hatchlings are born, large commercial hatcheries usually set about dividing them into males and females, and the practice of distinguishing gender is known as chick sexing. Sexing is necessary because the two genders receive different feeding programs: one for the females, which will eventually produce eggs, and another for the males, which are typically destined to be disposed of because of their uselessness in the commerce of producing eggs; only a few males are kept and fattened for meat. So the job of the chick sexer is to pick up each hatchling and quickly determine its sex in order to choose the correct bin to put it in. The problem is that the task is famously difficult: male and female chicks look exactly alike.
Well, almost exactly. The Japanese invented a method of sexing chicks known as vent sexing, by which experts could rapidly ascertain the sex of one-day-old hatchlings. Beginning in the 1930s, poultry breeders from around the world traveled to the Zen-Nippon Chick Sexing School in Japan to learn the technique.
The mystery was that no one could explain exactly how it was done. It was somehow based on very subtle visual cues, but the professional sexers could not say what those cues were. They would look at the chick's rear (where the vent is) and simply seem to know the correct bin to throw it in.
And this is how the professionals taught the student sexers. The master would stand over the apprentice and watch. The student would pick up a chick, examine its rear, and toss it into one bin or the other. The master would give feedback: yes or no. After weeks on end of this activity, the student's brain was trained to a masterful - albeit unconscious - level.
Meanwhile, a similar story was unfolding oceans away. During World War II, under constant threat of bombings, the British had a great need to distinguish incoming aircraft quickly and accurately. Which aircraft were British planes coming home and which were German planes coming to bomb? Several airplane enthusiasts had proved to be excellent spotters, so the military eagerly employed their services. These spotters were so valuable that the government quickly tried to enlist more spotters - but they turned out to be rare and difficult to find. The government therefore tasked the spotters with training others.
It was a grim attempt. The spotters tried to explain their strategies but failed. No one got it, not even the spotters themselves. Like the chicken sexers, the spotters had little idea how they did what they did - they simply saw the right answer.
With a little ingenuity, the British finally figured out how to successfully train new spotters: by trial-and-error feedback. A novice would hazard a guess and an expert would say yes or no. Eventually the novices became, like their mentors, vessels of the mysterious, ineffable expertise.
The Knowledge Gap
There can be a large gap between knowledge and awareness. When we examine skills that are not amenable to introspection, the first surprise is that implicit memory is completely separable from explicit memory: You can damage one without hurting the other.
Consider patients with anterograde amnesia, who cannot consciously recall new experiences in their lives. If you spend an afternoon trying to teach them the video game Tetris, they will tell you the next day that they have no recollection of the experience, that they have never seen this game before - and, most likely, that they have no idea who you are, either. But if you look at their performance on the game the next day, you'll find that they have improved exactly as much as nonamnesiacs. Implicitly their brains have learned the game: The knowledge is simply not accessible to their consciousness. (Interestingly, if you wake up an amnesic patient during the night after he has played Tetris, he'll report that he was dreaming of colorful falling blocks but will have no idea why.)
Of course, it's not just sexers and spotters and amnesiacs who enjoy unconscious learning. Essentially everything about your interaction with the world rests on this process. You may have a difficult time putting into words the characteristics of your father's walk, or the shape of his nose, or the way he laughs - but when you see someone who walks, looks, or laughs the way he does, you know it immediately.
One of the most impressive features of brains - and especially human brains - is the flexibility to learn almost any kind of task that comes their way. Give an apprentice the desire to impress his master in a chicken-sexing task and his brain devotes its massive resources to distinguishing males from females. Give an unemployed aviation enthusiast a chance to be a national hero and his brain learns to distinguish enemy aircraft from local flyboys. This flexibility of learning accounts for a large part of what we consider human intelligence. While many animals are properly called intelligent, humans distinguish themselves in that they are so flexibly intelligent, fashioning their neural circuits to match the task at hand. It is for this reason that we can colonize every region on the planet, learn the local language we're born into, and master skills as diverse as playing the violin, high-jumping, and operating space shuttle cockpits.
The Liar in Your Head
On December 31, 1974, Supreme Court Justice William O. Douglas was debilitated by a stroke that paralyzed his left side and confined him to a wheelchair. But Justice Douglas demanded to be checked out of the hospital on the grounds that he was fine. He declared that reports of his paralysis were 'a myth.' When reporters expressed skepticism, he invited them to join him for a hike, a move interpreted as absurd. He even claimed to be kicking football field goals with his paralyzed leg. As a result of this apparently delusional behavior, Douglas was dismissed from his seat on the Supreme Court.
What Douglas experienced is called anosognosia. This term describes a total lack of awareness about an impairment. It's not that Justice Douglas was lying - his brain actually believed that he could move just fine. But shouldn't the contradicting evidence alert those with anosognosia to a problem? It turns out that alerting the system to contradictions relies on particular brain regions, especially one called the anterior cingulate cortex. Because of these conflict-monitoring regions, incompatible ideas will result in one side or another's winning: The brain either constructs a story that makes them compatible or ignores one side of the debate. In special circumstances of brain damage, this arbitration system can be damaged, and then conflict can cause no trouble to the conscious mind.
Now Batting: Your Subconscious
On August 20, 1974, in a game between the California Angels and the Detroit Tigers, The Guinness Book of Records clocked Nolan Ryan's fastball at 100.9 miles per hour. If you work the numbers, you'll see that Ryan's pitch departs the mound and crosses home plate - 60 feet, 6 inches away - in four-tenths of a second. This gives just enough time for light signals from the baseball to hit the batter's eye, work through the circuitry of the retina, activate successions of cells along the loopy superhighways of the visual system at the back of the head, cross vast territories to the motor areas, and modify the contraction of the muscles swinging the bat. Amazingly, this entire sequence is possible in less than four-tenths of a second; otherwise no one would ever hit a fastball. But even more surprising is that conscious awareness takes longer than that: about half a second. So the ball travels too rapidly for batters to be consciously aware of it.
One does not need to be consciously aware to perform sophisticated motor acts. You can notice this when you begin to duck from a snapping tree branch before you are aware that it's coming toward you, or when you're already jumping up when you first become aware of a phone's ring.
For both men and women, wearing revealing attire causes them to be seen as more sensitive but less competent, says a new study by University of Maryland psychologist Kurt Gray and colleagues from Yale and Northeastern University.
In an article published Nov. 10 in the Journal of Personality and Social Psychology, the researchers write that it would be absurd to think people's mental capacities fundamentally change when they remove clothing. "In six studies, however, we show that taking off a sweater-or otherwise revealing flesh-can significantly change the way a mind is perceived."
Past research, feminist theory and parental admonishments all have long suggested that when men see a woman wearing little or nothing, they focus on her body and think less of her mind. The new findings by Gray, et al. both expand and change our understanding of how paying attention to someone's body can alter how both men and women view both women and men.
"An important thing about our study is that, unlike much previous research, ours applies to both sexes. It also calls into question the nature of objectification because people without clothes are not seen as mindless objects, but they are instead attributed a different kind of mind," says UMD's Gray.
"We also show that this effect can happen even without the removal of clothes. Simply focusing on someone's attractiveness, in essence concentrating on their body rather than their mind, makes you see him or her as less of an agent [someone who acts and plans] and more of an experiencer."
Objectification vs. Two Kinds of Mind
Traditional research and theories on objectification suggest that we see the mind of others on a continuum between the full mind of a normal human and the mindlessness of an inanimate object. The idea of objectification is that looking at someone in a sexual context-such as in pornography-leads people to focus on physical characteristics, turning them into an object without a mind or moral status.
However, recent findings indicate that rather than looking at others on a continuum from object to human, we see others as having two aspects of mind: agency and experience. Agency is the capacity to act, plan and exert self-control, whereas experience is the capacity to feel pain, pleasure and emotions. Various factors -- including the amount of skin shown -- can shift which type of mind we see in another person.
In multiple experiments, the researchers found further support for the two kinds of mind view. When men and women in the study focused on someone's body, perceptions of agency (self-control and action) were reduced, and perceptions of experience (emotion and sensation) were increased. Gray and colleagues suggest that this effect occurs because people unconsciously think of minds and bodies as distinct, or even opposite, with the capacity to act and plan tied to the "mind" and the ability to experience or feel tied to the body.
According to Gray, their findings indicate that the change in perception that results from showing skin is not all bad. "A focus on the body, and the increased perception of sensitivity and emotion it elicits might be good for lovers in the bedroom," he says.
Their study also found that a body focus can actually increase moral standing. Although those wearing little or no clothes -- or otherwise represented as a body -- were seen to be less morally responsible, they also were seen to be more sensitive to harm and hence deserving of more protection. "Others appear to be less inclined to harm people with bare skin and more inclined to protect them. In one experiment, for example, people viewing male subjects with their shirts off were less inclined to give those subjects uncomfortable electric shocks than when the men had their shirts on." Gray says.
However, Gray and his coauthors note that in work or academic contexts, where people are primarily evaluated on their capacity to plan and act, a body focus clearly has negative effects. Seeing someone as a body strips him or her of competence and leadership, potentially impacting job evaluations. "Even more than robbing someone of agency, the increased experience that may accompany body perceptions may lead those who are characterized in terms of their bodies to be seen as more reactive and emotional, traits that may also serve to work against career advancement," they write.
Even the positive aspects of a body focus, such as an increased desire to protect from harm, can be ultimately harmful, the authors say, pointing to the "benevolent sexism" common in the United States in the 1950s, in which men oppressed women under the guise of protecting them.
When faced with making a complicated decision, our automatic instinct to avoid misfortune can result in missing out on rewards, and could even contribute to depression, according to new research.
The results of a new study, published in the journal PLoS Computational Biology, suggest that our brains subconsciously use a simplistic strategy in order to filter out options when faced with a complex decision. However, the research also highlights how this strategy can lead to poor choices, and could possibly contribute to depression -- a condition characterised by impaired decision-making.
In the study, researchers at UCL looked at how people make chains of several decisions, where each step depends on the previous one. Often, the total number of possible choices is far too large to consider them each individually. One way to simplify the problem is to avoid considering any plan where the first step has a seriously negative association -- no matter what the overall outcome would be. This 'pruning' decision-making bias, which was demonstrated in this paper for the first time, can result in poor decisions.
Lead author Dr Quentin Huys from the UCL Gatsby Computational Neuroscience Unit, explained: "Imagine planning a holiday -- you could not possibly consider every destination in the world. To reduce the number of options, you might instinctively avoid considering going to any countries that are more than 5 hours away by plane because you don't enjoy flying.
"This strategy simplifies the planning process and guarantees that you won't have to endure an uncomfortable long-haul flight. However, it also means that you might miss out on an amazing trip to an exotic destination."
In the study, the researchers asked a group of 46 volunteers with no known psychiatric disorders to plan chains of decisions in which they moved around a maze -- on each step they either gained or lost money. The volunteers instinctively avoided paths starting with large losses, even if those decisions would have won them the most money overall. Interestingly, the amount of pruning the volunteers showed was related to the extent to which they reported experiencing depressive symptoms, though none were actually clinically depressed.
Neir Eshel, co-author of the paper, formerly at the UCL Institute for Cognitive Neuroscience and now at Harvard Medical School, said: "The reflex to prune the number of possible options is a double-edged sword. Although necessary to simplify complicated decisions, it could also lead to poor choices."
The researchers link the surprising association with depressive symptoms to the brain chemical serotonin, which is known to be involved in both avoidance and depression, and may also contribute to the optimism bias. However, this role for serotonin in pruning needs to be confirmed in further studies.
A Neuro Surgeon's View
A new exhibition reminds the celebrated neurosurgeon Henry Marsh that we still don’t know our own minds
The preserved, dead human brain is not a thing of beauty — it is a cold, grey, slimy mass, smelling horribly of formaldehyde. As one looks at this object (admittedly bottled and hence odourless) in the Wellcome Trust’s new exhibition Brains: The Mind as Matter, it is not easy to consider that this thought, which feels so light, and our sense of self and being, which feel so familiar, are made of something as alien and gross as this.
The living brain, however, is not without beauty, though it is a sight that only the staff working in neurosurgical operating theatres — and a few privileged patients — ever get to see. Seen through the optics of an operating microscope, the brain’s surface glitters with cerebro-spinal fluid and the red arteries and blue veins branch across it like intricate river estuaries seen from space. (The brain has a mass of blood vessels supplying it because thinking is an energy-intensive process. One quarter of the blood pumped by the heart goes to the brain — and a large part of brain surgery is negotiating one’s way around these vessels.) As I often operate on the brain under local anaesthetic, with the patient wide awake, some of my patients can see the same view, shown on a video monitor. When I operate on tumours in the visual cortex, the part of the brain at the back of the head responsible for vision, I have had patients using their visual cortex to look at itself on the video monitor.
One feels that there should be some kind of philosophical equivalent of acoustic feedback as my patient’s brain sees itself but, of course, there is not. When I pointed out to one patient the part of his brain that was responsible for speech, he replied (or rather the part of his brain known as Broca’s area replied): “It’s crazy.” The more one thinks about the brain, the more difficult it becomes.
Although I know that if I damage certain parts of the brain during an operation I will be faced by a disabled patient afterwards, I still feel that mind and matter are separate entities. Entire schools of philosophy, and countless books, have been devoted to this conviction, the so-called mind-body problem. Many theories have been suggested by philosophers, starting with Descartes, whose dualism had mind and matter as entirely separate. He proposed that the physical brain communicated with the immaterial mind in the pineal gland, a small pea-shaped structure in the centre of the brain.
The pineal gland, I might add, can occasionally give rise to tumours that I will cheerfully remove (along with the gland) without apparently interfering with the ability of the patient’s brain to communicate with his mind. Since Descartes, philosophers have proposed many theories, such as parallelism (that mental and brain events do not influence each other but are nevertheless in harmony) and epiphenomenalism (that mental events are a by-product of brain events in the same way that the ticking of a clock is a by-product of the clock’s machinery), to name but a few. Finally, there is materialism — the view that the mind-body problem is not really a problem at all and that “mind” is a physical phenomenon.
It is difficult for brain surgeons not to be materialists, though I suppose a few might manage it. The complicated philosophical theories cannot survive the crude reality of brain surgery, and there are several sets of neurosurgical instruments in the exhibition that convey this crudity. The delicacy and complexity of the brain — 100 billion nerve cells linked by an even greater number of electro-chemical connections — is not matched, alas, by the tools that we surgeons use.
The identity of mind and matter is most apparent for neurosurgeons when we see patients who have suffered damage to the frontal lobes of their brain. It is easy enough to believe that mind and matter are separate entities when brain damage produces “physical” disability such as paralysis or loss of vision. It is much more difficult when somebody’s personality is changed (almost invariably for the worse). A kind and thoughtful husband can become coarse and violent, even though his intellect is perfectly preserved. He is no longer the person that he was. If the lives of head-injured patients with frontal brain damage have been saved by emergency brain surgery, their enthusiastic young surgeons usually see this as a triumph. But all too often it becomes apparent as time passes that their social and moral nature has been irreversibly damaged — that they have been left “a bit frontal” as doctors say. It is a depressing experience to sit in one’s outpatient clinic and listen to the sad litany of marital breakdown, unemployment and depression that is an all too common outcome from head injury.
The exhibition traces the way in which the analogies used to “understand” the brain have changed as our ability to examine it deepened from the 17th century onwards and as technology advanced. Aristotle thought its purpose was to cool the blood. Descartes saw it as a system of hydraulic tubes. In the 19th century it was compared to a telephone exchange and in the modern era, inevitably, to a computer. But the fact is that we have never met anything like a brain before and it is by no means certain that we will ever have the analogies or the language with which to understand it. The great physicist Richard Feynman famously observed that we cannot understand quantum mechanics, and all we can do is accept the mathematics and experimental evidence, even though the behaviour of particles at an atomic level is so utterly at odds with our everyday experience. But even if, in principle, we can understand our brains, there are certain practical difficulties.
First, science is based on experimentation, and we can perform only limited experiments on our own brains. Experiments on animals tell us only about animals, and ultimately what really interests us is ourselves. “Experiments of nature”, such as strokes and head injuries, which have provided the foundations of neuroscience, can only take us a limited distance. Second, as the exhibition tries to show, the sheer complexity and minuscule scale on which the brain works might defy any attempt to unravel it.
Even if we can one day produce a wiring diagram of the human brain — the most complex object in the universe, as cliché has it — it is by no means certain that we will be any closer to understanding how it actually works. Modern technology and scanning have advanced our understanding of the brain immensely but, relative to the brain’s complexity, it is still like looking at a starry sky through cheap binoculars. Nor can we know for certain that new technologies will be developed in future that will take us beyond what is currently available. We can now see planets circling distant stars, but we will never reach them. We may be equally frustrated in our attempts to look inwards into our own brains.
Finally, there is the most profound and fascinating question of all — the question of consciousness. How does matter give rise to awareness? Does each brain cell contain a hundred billionth part of consciousness and, if not, how does consciousness arise when brain cells are linked together? The trouble with consciousness is that it cannot be observed, it can only be experienced. It is a difficult, if not impossible, subject for scientific study. Recent functional MRI brain scanning, for instance, of patients in a persistent vegetative state has shown that, despite their complete immobility and unresponsiveness, some of them have appropriate activity in their brains in response to being spoken to — but there is no way of knowing if they are conscious.
My work as a neurosurgeon means that I have little choice but to accept that thought is a physical phenomenon, that mind is matter. Certain conclusions follow from this that many will find unpalatable — that animals are conscious and can suffer as much as we do, that there is no human soul and that an afterlife is most unlikely. Most religions fail when faced by this central tenet of neuroscience. Some difficult questions result as well, to which I do not know the answer. Did the murderer commit murder because there was an imbalance of chemicals in his brain, and if so, was that imbalance his fault? It is important, however, to realise that to accept that mind is matter does not diminish us in any way — instead it elevates matter and tells us how little we understand it and how little we understand ourselves. The inner sense of being and consciousness within each one of us is as great and wonderful a mystery as the great mystery of the universe around us.
Henry Marsh is senior consultant neurosurgeon on the Atkinson Morley Wing at St George’s Hospital in London and has been the subject of two documentaries, the most recent of which is the award-winning The English Surgeon (2007) about his work in the Ukraine. Brains: The Mind as Matter, Wellcome Collection, London NW1 (020-7611 2222), March 29 to June 17 2012
Making Big Decisions About Money
We're bad it. And marketers know this.
Consider: you're buying a $30,000 car and you have the option of upgrading the stereo to the 18 speaker, 100 watt version for just $500 more. Should you?
Or perhaps you're considering two jobs, one that you love and one that pays $2,000 more. Which to choose?
Or...You are lucky enough to be able to choose between two colleges. One, the one with the nice campus and slightly more famous name, will cost your parents (and your long-term debt) about $200,000 for four years, and the other ("lesser" school) has offered you a full scholarship.Which should you take?
In a surprisingly large number of cases, we take the stereo, even though we'd never buy a nice stereo at home, or we choose to "go with our heart because college is so important" and pick the expensive college. (This is, of course, a good choice to have to make, as most people can't possibly find the money).
Here's one reason we mess up: Money is just a number.
Comparing dreams of a great stereo (four years of driving long distances, listening to great music!) compared with the daily reminder of our cheapness makes picking the better stereo feel easier. After all, we're not giving up anything but a number.
The college case is even more clear. $200,000 is a number that's big, sure, but it doesn't have much substance. It's not a number we play with or encounter very often. The feeling about the story of compromise involving something tied up in our self-esteem, though, that feeling is something we deal with daily.
Here's how to undo the self-marketing. Stop using numbers.
You can have the stereo if you give up going to Starbucks every workday for the next year and a half. Worth it?
If you go to the free school, you can drive there in a brand new Mini convertible, and every summer you can spend $25,000 on a top-of-the-line internship/experience, and you can create a jazz series and pay your favorite musicians to come to campus to play for you and your fifty coolest friends, and you can have Herbie Hancock give you piano lessons and you can still have enough money left over to live without debt for a year after you graduate while you look for the perfect gig...
Suddenly, you're not comparing "this is my dream," with a number that means very little. You're comparing one version of your dream with another version.
Many of the bestselling business books of the past decade, such as “Freakonomics” and “The Undercover Economist”, started with an implicit, fundamental premise: "If it can't be quantified or calculated, it can't be true."
These books often reduced baffling and complex scenarios - everything from global warming to why there are so many Starbucks stores in your neighborhood - to simple explanations supported by basic economic thinking. Sometimes these explanations contained charts, graphs and little diagrams that made the world appear neat, tidy and orderly. A decade ago, in fact, Google made news when they hired UC Berkeley economics professor Hal Varian as their first in-house economist. Varian was charged with modeling consumer behaviors and consulting on corporate strategy. The announcement further projected the belief that, in short, economics was the key to market success.
Today, Google should be looking for a prize-winning neuroscientist.
The new generation of business thinking combines a more nuanced understanding that many actions can neither be quantified nor calculated. Often, these insights are culled from the cutting edge of neuroscience. This new "what you didn't know about your brain can help you"-genre most likely started with science writer Jonah Lehrer's wonderful ”Proust was a Neuroscientist.”. In the book, Lehrer explains how many of the underpinnings of modern neuroscience were actually discovered by the likes of Proust, Stravinsky and Escoffier.
A slew of other titles have followed, each of them offering unique insights into the workings of the human brain. Along the way, we've been told how the human brain decides what to buy, why traditional brainstorming approaches don't work as well as they should, how changing the default settings can change the final outcome and why companies need to understand and cultivate the habits of their customers.
We are, as a society, experiencing a profound reappraisal of traditional economics and its shortcomings. The world is suddenly a lot more irrational than we ever thought, full of black swans. In Economics 101, we're taught that economic models are able to predict the behavior of coldly rational decision-makers. Charts and graphs follow a simple mathematical beauty. When we lower interest rates, we expect a certain reaction. When we devise incentives for customers, we expect them to react in a certain way. When we provide customers with a menu of choices, we expect them to answer in a certain way.
The only problem, of course, is that humans are not always rational.
Not surprisingly, some of the most popular business titles of the past few years have combined research findings at the cutting edge of economic theory. Perhaps the best example is Daniel Kahneman's ”Thinking, Fast and Slow,” which is now edging up the bestseller lists. Kahneman is a Nobel Prize-winning economist who has helped to popularize the latest in economics thinking, including loss aversion. Interestingly enough, Kahneman refers to himself as a psychologist, rather than an economist.
This new thinking about the way the human brain works is starting to impact everything -- how supermarkets stock their shelves, when coupon offers are sent out to consumers, and how to devise the perfect title that will get you to click on a news article (wait, did you think that your reading this was an accident?). A retail store such as Target now knows that you're pregnant before your parents do, thanks to the wonders of understanding customer purchase habits. On the Web, understanding human behavior is everything, given that the best and brightest of our generation are now engaged in an elaborate game of getting people to click on a specific button, text link or banner ad.
A decade ago, if you asked top business leaders whether they'd ever consider reading a book on neuroscience, they probably would have looked askance at you while tapping away at their BlackBerry. Today, they realize that profits lie in understanding how the human brain works, how people make decisions, and what influences the final purchase. What they may not realize, however, is that this understanding is moreso in a neuroscientist's wheelhouse than an economist's.
One of the foundations of modern psychology is that human personality can be described in terms of five different forms of behavior. These are:
1. Agreeableness--being helpful, cooperative and sympathetic towards others
2. Conscientiousness--being disciplined, organized and achievement-oriented
3. Extraversion--having a higher degree of sociability, assertiveness and talkativeness
4. Neuroticism--the degree of emotional stability, impulse control and anxiety
5. Openness--having a strong intellectual curiosity and a preference for novelty and variety
Psychologists have spent much time and many years developing tests that can classify people according to these criteria.
Today, Shuotian Bai at the Graduate University of Chinese Academy of Sciences in Beijing and a couple of buddies say they have developed an online version of the test that can determine an individual's personality traits from their behavior on a social network such as Facebook or Renren, an increasingly popular Chinese competitor.
Their method is relatively simple. These guys asked just over 200 Chinese students with Renren accounts to complete online, a standard personality test called the Big Five Inventory, which was developed at the University of California, Berkeley during the 1990s.
At the same time, these guys analyzed the Renren pages of each student, recording their age and sex and various aspects of their online behavior such as the frequency of their blog posts as well as the emotional content of the posts such as whether angry, funny or surprised and so on.
Finally, they used various number crunching techniques to reveal correlations between the results of the personality tests and the online behavior.
It turns out, they say, that various online behaviors are a good indicator of personality type. For example, conscientious people are more likely to post asking for help such as a location or e-mail address; a sign of extroversion is an increased use of emoticons; the frequency of status updates correlates with openness; and a measure of neuroticism is the rate at which blog posts attract angry comments.
Based on these correlations, these guys say they can automatically predict personality type simply by looking at an individual's social network statistics.
That could be extremely useful for social networks. Shuotian and comapny point out that a network might use this to recommend specific services. They give the rather naive example of an outgoing user who may prefer international news and like to make friends with others.
Other scenarios are at least as likely. For example, such an approach might help to improve recommender systems in general. Perhaps people who share similar personality characteristics are more likely to share similar tastes in books, films or each other.
There is also the obvious prospect that social networks would use this data for commercial gain; to target specific adverts to users for example. And finally there is the worry that such a technique could be used to identify vulnerable individuals who might be most susceptible to nefarious persuasion.
Ethics aside, there are also certain questions marks over the result. One important caveat is how people's response to psychology studies online differs from those done at other times. That could clearly introduce some bias. Then there are the more general questions of how online and offline behaviours differs and how these tests vary across cultures. These are things that Shuotian and Co. want to study in the future.
In the meantime, it is becoming increasingly clear that the data associated with our online behavior is a rich and valuable source of information about our innermost natures.
Why We Don't Believe In Science
Last week, Gallup announced the results of their latest survey on Americans and evolution. The numbers were a stark blow to high-school science teachers everywhere: forty-six per cent of adults said they believed that “God created humans in their present form within the last 10,000 years.” Only fifteen per cent agreed with the statement that humans had evolved without the guidance of a divine power.
What’s most remarkable about these numbers is their stability: these percentages have remained virtually unchanged since Gallup began asking the question, thirty years ago. In 1982, forty-four per cent of Americans held strictly creationist views, a statistically insignificant difference from 2012. Furthermore, the percentage of Americans that believe in biological evolution has only increased by four percentage points over the last twenty years.
Such poll data raises questions: Why are some scientific ideas hard to believe in? What makes the human mind so resistant to certain kinds of facts, even when these facts are buttressed by vast amounts of evidence?
A new study in Cognition, led by Andrew Shtulman at Occidental College, helps explain the stubbornness of our ignorance. As Shtulman notes, people are not blank slates, eager to assimilate the latest experiments into their world view. Rather, we come equipped with all sorts of naïve intuitions about the world, many of which are untrue. For instance, people naturally believe that heat is a kind of substance, and that the sun revolves around the earth. And then there’s the irony of evolution: our views about our own development don’t seem to be evolving.
This means that science education is not simply a matter of learning new theories. Rather, it also requires that students unlearn their instincts, shedding false beliefs the way a snake sheds its old skin.
To document the tension between new scientific concepts and our pre-scientific hunches, Shtulman invented a simple test. He asked a hundred and fifty college undergraduates who had taken multiple college-level science and math classes to read several hundred scientific statements. The students were asked to assess the truth of these statements as quickly as possible.
To make things interesting, Shtulman gave the students statements that were both intuitively and factually true (“The moon revolves around the Earth”) and statements whose scientific truth contradicts our intuitions (“The Earth revolves around the sun”).
As expected, it took students much longer to assess the veracity of true scientific statements that cut against our instincts. In every scientific category, from evolution to astronomy to thermodynamics, students paused before agreeing that the earth revolves around the sun, or that pressure produces heat, or that air is composed of matter. Although we know these things are true, we have to push back against our instincts, which leads to a measurable delay.
What’s surprising about these results is that even after we internalize a scientific concept—the vast majority of adults now acknowledge the Copernican truth that the earth is not the center of the universe—that primal belief lingers in the mind. We never fully unlearn our mistaken intuitions about the world. We just learn to ignore them.
Shtulman and colleagues summarize their findings:
When students learn scientific theories that conflict with earlier, naïve theories, what happens to the earlier theories? Our findings suggest that naïve theories are suppressed by scientific theories but not supplanted by them.
While this new paper provides a compelling explanation for why Americans are so resistant to particular scientific concepts—the theory of evolution, for instance, contradicts both our naïve intuitions and our religious beliefs—it also builds upon previous research documenting the learning process inside the head. Until we understand why some people believe in science we will never understand why most people don’t.
In a 2003 study, Kevin Dunbar, a psychologist at the University of Maryland, showed undergraduates a few short videos of two different-sized balls falling. The first clip showed the two balls falling at the same rate. The second clip showed the larger ball falling at a faster rate. The footage was a reconstruction of the famous (and probably apocryphal) experiment performed by Galileo, in which he dropped cannonballs of different sizes from the Tower of Pisa. Galileo’s metal balls all landed at the exact same time—a refutation of Aristotle, who claimed that heavier objects fell faster.
While the students were watching the footage, Dunbar asked them to select the more accurate representation of gravity. Not surprisingly, undergraduates without a physics background disagreed with Galileo. They found the two balls falling at the same rate to be deeply unrealistic. (Intuitively, we’re all Aristotelians.) Furthermore, when Dunbar monitored the subjects in an fMRI machine, he found that showing non-physics majors the correct video triggered a particular pattern of brain activation: there was a squirt of blood to the anterior cingulate cortex, a collar of tissue located in the center of the brain. The A.C.C. is typically associated with the perception of errors and contradictions—neuroscientists often refer to it as part of the “Oh shit!” circuit—so it makes sense that it would be turned on when we watch a video of something that seems wrong, even if it’s right.
This data isn’t shocking; we already know that most undergrads lack a basic understanding of science. But Dunbar also conducted the experiment with physics majors. As expected, their education enabled them to identify the error; they knew Galileo’s version was correct.
But it turned out that something interesting was happening inside their brains that allowed them to hold this belief. When they saw the scientifically correct video, blood flow increased to a part of the brain called the dorsolateral prefrontal cortex, or D.L.P.F.C. The D.L.P.F.C. is located just behind the forehead and is one of the last brain areas to develop in young adults. It plays a crucial role in suppressing so-called unwanted representations, getting rid of those thoughts that aren’t helpful or useful. If you don’t want to think about the ice cream in the freezer, or need to focus on some tedious task, your D.L.P.F.C. is probably hard at work.
According to Dunbar, the reason the physics majors had to recruit the D.L.P.F.C. is because they were busy suppressing their intuitions, resisting the allure of Aristotle’s error. It would be so much more convenient if the laws of physics lined up with our naïve beliefs—or if evolution was wrong and living things didn’t evolve through random mutation. But reality is not a mirror; science is full of awkward facts. And this is why believing in the right version of things takes work.
Of course, that extra mental labor isn’t always pleasant. (There’s a reason they call it “cognitive dissonance.”) It took a few hundred years for the Copernican revolution to go mainstream. At the present rate, the Darwinian revolution, at least in America, will take just as long.
In the late nineteen-sixties, Carolyn Weisz, a four-year-old with long brown hair, was invited into a “game room” at the Bing Nursery School, on the campus of Stanford University. The room was little more than a large closet, containing a desk and a chair. Carolyn was asked to sit down in the chair and pick a treat from a tray of marshmallows, cookies, and pretzel sticks. Carolyn chose the marshmallow. Although she’s now forty-four, Carolyn still has a weakness for those air-puffed balls of corn syrup and gelatine. “I know I shouldn’t like them,” she says. “But they’re just so delicious!” A researcher then made Carolyn an offer: she could either eat one marshmallow right away or, if she was willing to wait while he stepped out for a few minutes, she could have two marshmallows when he returned. He said that if she rang a bell on the desk while he was away he would come running back, and she could eat one marshmallow but would forfeit the second. Then he left the room.
Although Carolyn has no direct memory of the experiment, and the scientists would not release any information about the subjects, she strongly suspects that she was able to delay gratification. “I’ve always been really good at waiting,” Carolyn told me. “If you give me a challenge or a task, then I’m going to find a way to do it, even if it means not eating my favorite food.” Her mother, Karen Sortino, is still more certain: “Even as a young kid, Carolyn was very patient. I’m sure she would have waited.” But her brother Craig, who also took part in the experiment, displayed less fortitude. Craig, a year older than Carolyn, still remembers the torment of trying to wait. “At a certain point, it must have occurred to me that I was all by myself,” he recalls. “And so I just started taking all the candy.” According to Craig, he was also tested with little plastic toys—he could have a second one if he held out—and he broke into the desk, where he figured there would be additional toys. “I took everything I could,” he says. “I cleaned them out. After that, I noticed the teachers encouraged me to not go into the experiment room anymore.”
Footage of these experiments, which were conducted over several years, is poignant, as the kids struggle to delay gratification for just a little bit longer. Some cover their eyes with their hands or turn around so that they can’t see the tray. Others start kicking the desk, or tug on their pigtails, or stroke the marshmallow as if it were a tiny stuffed animal. One child, a boy with neatly parted hair, looks carefully around the room to make sure that nobody can see him. Then he picks up an Oreo, delicately twists it apart, and licks off the white cream filling before returning the cookie to the tray, a satisfied look on his face.
Most of the children were like Craig. They struggled to resist the treat and held out for an average of less than three minutes. “A few kids ate the marshmallow right away,” Walter Mischel, the Stanford professor of psychology in charge of the experiment, remembers. “They didn’t even bother ringing the bell. Other kids would stare directly at the marshmallow and then ring the bell thirty seconds later.” About thirty per cent of the children, however, were like Carolyn. They successfully delayed gratification until the researcher returned, some fifteen minutes later. These kids wrestled with temptation but found a way to resist.
The initial goal of the experiment was to identify the mental processes that allowed some people to delay gratification while others simply surrendered. After publishing a few papers on the Bing studies in the early seventies, Mischel moved on to other areas of personality research. “There are only so many things you can do with kids trying not to eat marshmallows.”
But occasionally Mischel would ask his three daughters, all of whom attended the Bing, about their friends from nursery school. “It was really just idle dinnertime conversation,” he says. “I’d ask them, ‘How’s Jane? How’s Eric? How are they doing in school?’ ” Mischel began to notice a link between the children’s academic performance as teen-agers and their ability to wait for the second marshmallow. He asked his daughters to assess their friends academically on a scale of zero to five. Comparing these ratings with the original data set, he saw a correlation. “That’s when I realized I had to do this seriously,” he says. Starting in 1981, Mischel sent out a questionnaire to all the reachable parents, teachers, and academic advisers of the six hundred and fifty-three subjects who had participated in the marshmallow task, who were by then in high school. He asked about every trait he could think of, from their capacity to plan and think ahead to their ability to “cope well with problems” and get along with their peers. He also requested their S.A.T. scores.
Once Mischel began analyzing the results, he noticed that low delayers, the children who rang the bell quickly, seemed more likely to have behavioral problems, both in school and at home. They got lower S.A.T. scores. They struggled in stressful situations, often had trouble paying attention, and found it difficult to maintain friendships. The child who could wait fifteen minutes had an S.A.T. score that was, on average, two hundred and ten points higher than that of the kid who could wait only thirty seconds.
Carolyn Weisz is a textbook example of a high delayer. She attended Stanford as an undergraduate, and got her Ph.D. in social psychology at Princeton. She’s now an associate psychology professor at the University of Puget Sound. Craig, meanwhile, moved to Los Angeles and has spent his career doing “all kinds of things” in the entertainment industry, mostly in production. He’s currently helping to write and produce a film. “Sure, I wish I had been a more patient person,” Craig says. “Looking back, there are definitely moments when it would have helped me make better career choices and stuff.”
Mischel and his colleagues continued to track the subjects into their late thirties - Ozlem Ayduk, an assistant professor of psychology at the University of California at Berkeley, found that low-delaying adults have a significantly higher body-mass index and are more likely to have had problems with drugs—but it was frustrating to have to rely on self-reports. “There’s often a gap between what people are willing to tell you and how they behave in the real world,” he explains. And so, last year, Mischel, who is now a professor at Columbia, and a team of collaborators began asking the original Bing subjects to travel to Stanford for a few days of experiments in an fMRI machine. Carolyn says she will be participating in the scanning experiments later this summer; Craig completed a survey several years ago, but has yet to be invited to Palo Alto. The scientists are hoping to identify the particular brain regions that allow some people to delay gratification and control their temper. They’re also conducting a variety of genetic tests, as they search for the hereditary characteristics that influence the ability to wait for a second marshmallow.
If Mischel and his team succeed, they will have outlined the neural circuitry of self-control. For decades, psychologists have focussed on raw intelligence as the most important variable when it comes to predicting success in life. Mischel argues that intelligence is largely at the mercy of self-control: even the smartest kids still need to do their homework. “What we’re really measuring with the marshmallows isn’t will power or self-control,” Mischel says. “It’s much more important than that. This task forces kids to find a way to make the situation work for them. They want the second marshmallow, but how can they get it? We can’t control the world, but we can control how we think about it.”
Walter Mischel is a slight, elegant man with a shaved head and a face of deep creases. He talks with a Brooklyn bluster and he tends to act out his sentences, so that when he describes the marshmallow task he takes on the body language of an impatient four-year-old. “If you want to know why some kids can wait and others can’t, then you’ve got to think like they think,” Mischel says.
Mischel was born in Vienna, in 1930. His father was a modestly successful businessman with a fondness for café society and Esperanto, while his mother spent many of her days lying on the couch with an ice pack on her forehead, trying to soothe her frail nerves. The family considered itself fully assimilated, but after the Nazi annexation of Austria, in 1938, Mischel remembers being taunted in school by the Hitler Youth and watching as his father, hobbled by childhood polio, was forced to limp through the streets in his pajamas. A few weeks after the takeover, while the family was burning evidence of their Jewish ancestry in the fireplace, Walter found a long-forgotten certificate of U.S. citizenship issued to his maternal grandfather decades earlier, thus saving his family.
The family settled in Brooklyn, where Mischel’s parents opened up a five-and-dime. Mischel attended New York University, studying poetry under Delmore Schwartz and Allen Tate, and taking studio-art classes with Philip Guston. He also became fascinated by psychoanalysis and new measures of personality, such as the Rorschach test. “At the time, it seemed like a mental X-ray machine,” he says. “You could solve a person by showing them a picture.” Although he was pressured to join his uncle’s umbrella business, he ended up pursuing a Ph.D. in clinical psychology at Ohio State.
But Mischel noticed that academic theories had limited application, and he was struck by the futility of most personality science. He still flinches at the naïveté of graduate students who based their diagnoses on a battery of meaningless tests. In 1955, Mischel was offered an opportunity to study the “spirit possession” ceremonies of the Orisha faith in Trinidad, and he leapt at the chance. Although his research was supposed to involve the use of Rorschach tests to explore the connections between the unconscious and the behavior of people when possessed, Mischel soon grew interested in a different project. He lived in a part of the island that was evenly split between people of East Indian and of African descent; he noticed that each group defined the other in broad stereotypes. “The East Indians would describe the Africans as impulsive hedonists, who were always living for the moment and never thought about the future,” he says. “The Africans, meanwhile, would say that the East Indians didn’t know how to live and would stuff money in their mattress and never enjoy themselves.”
Mischel took young children from both ethnic groups and offered them a simple choice: they could have a miniature chocolate bar right away or, if they waited a few days, they could get a much bigger chocolate bar. Mischel’s results failed to justify the stereotypes—other variables, such as whether or not the children lived with their father, turned out to be much more important—but they did get him interested in the question of delayed gratification. Why did some children wait and not others? What made waiting possible? Unlike the broad traits supposedly assessed by personality tests, self-control struck Mischel as potentially measurable.
In 1958, Mischel became an assistant professor in the Department of Social Relations at Harvard. One of his first tasks was to develop a survey course on “personality assessment,” but Mischel quickly concluded that, while prevailing theories held personality traits to be broadly consistent, the available data didn’t back up this assumption. Personality, at least as it was then conceived, couldn’t be reliably assessed at all. A few years later, he was hired as a consultant on a personality assessment initiated by the Peace Corps. Early Peace Corps volunteers had sparked several embarrassing international incidents—one mailed a postcard on which she expressed disgust at the sanitary habits of her host country—so the Kennedy Administration wanted a screening process to eliminate people unsuited for foreign assignments. Volunteers were tested for standard personality traits, and Mischel compared the results with ratings of how well the volunteers performed in the field. He found no correlation; the time-consuming tests predicted nothing. At this point, Mischel realized that the problem wasn’t the tests—it was their premise. Psychologists had spent decades searching for traits that exist independently of circumstance, but what if personality can’t be separated from context? “It went against the way we’d been thinking about personality since the four humors and the ancient Greeks,” he says.
While Mischel was beginning to dismantle the methods of his field, the Harvard psychology department was in tumult. In 1960, the personality psychologist Timothy Leary helped start the Harvard Psilocybin Project, which consisted mostly of self-experimentation. Mischel remembers graduate students’ desks giving way to mattresses, and large packages from Ciba chemicals, in Switzerland, arriving in the mail. Mischel had nothing against hippies, but he wanted modern psychology to be rigorous and empirical. And so, in 1962, Walter Mischel moved to Palo Alto and went to work at Stanford.
There is something deeply contradictory about Walter Mischel—a psychologist who spent decades critiquing the validity of personality tests—inventing the marshmallow task, a simple test with impressive predictive power. Mischel, however, insists there is no contradiction. “I’ve always believed there are consistencies in a person that can be looked at,” he says. “We just have to look in the right way.” One of Mischel’s classic studies documented the aggressive behavior of children in a variety of situations at a summer camp in New Hampshire. Most psychologists assumed that aggression was a stable trait, but Mischel found that children’s responses depended on the details of the interaction. The same child might consistently lash out when teased by a peer, but readily submit to adult punishment. Another might react badly to a warning from a counsellor, but play well with his bunkmates. Aggression was best assessed in terms of what Mischel called “if-then patterns.” If a certain child was teased by a peer, then he would be aggressive.
One of Mischel’s favorite metaphors for this model of personality, known as interactionism, concerns a car making a screeching noise. How does a mechanic solve the problem? He begins by trying to identify the specific conditions that trigger the noise. Is there a screech when the car is accelerating, or when it’s shifting gears, or turning at slow speeds? Unless the mechanic can give the screech a context, he’ll never find the broken part. Mischel wanted psychologists to think like mechanics, and look at people’s responses under particular conditions. The challenge was devising a test that accurately simulated something relevant to the behavior being predicted. The search for a meaningful test of personality led Mischel to revisit, in 1968, the protocol he’d used on young children in Trinidad nearly a decade earlier. The experiment seemed especially relevant now that he had three young daughters of his own. “Young kids are pure id,” Mischel says. “They start off unable to wait for anything—whatever they want they need. But then, as I watched my own kids, I marvelled at how they gradually learned how to delay and how that made so many other things possible.”
A few years earlier, in 1966, the Stanford psychology department had established the Bing Nursery School. The classrooms were designed as working laboratories, with large one-way mirrors that allowed researchers to observe the children. In February, Jennifer Winters, the assistant director of the school, showed me around the building. While the Bing is still an active center of research—the children quickly learn to ignore the students scribbling in notebooks—Winters isn’t sure that Mischel’s marshmallow task could be replicated today. “We recently tried to do a version of it, and the kids were very excited about having food in the game room,” she says. “There are so many allergies and peculiar diets today that we don’t do many things with food.”
Mischel perfected his protocol by testing his daughters at the kitchen table. “When you’re investigating will power in a four-year-old, little things make a big difference,” he says. “How big should the marshmallows be? What kind of cookies work best?” After several months of patient tinkering, Mischel came up with an experimental design that closely simulated the difficulty of delayed gratification. In the spring of 1968, he conducted the first trials of his experiment at the Bing. “I knew we’d designed it well when a few kids wanted to quit as soon as we explained the conditions to them,” he says. “They knew this was going to be very difficult.”
At the time, psychologists assumed that children’s ability to wait depended on how badly they wanted the marshmallow. But it soon became obvious that every child craved the extra treat. What, then, determined self-control? Mischel’s conclusion, based on hundreds of hours of observation, was that the crucial skill was the “strategic allocation of attention.” Instead of getting obsessed with the marshmallow—the “hot stimulus”—the patient children distracted themselves by covering their eyes, pretending to play hide-and-seek underneath the desk, or singing songs from “Sesame Street.” Their desire wasn’t defeated—it was merely forgotten. “If you’re thinking about the marshmallow and how delicious it is, then you’re going to eat it,” Mischel says. “The key is to avoid thinking about it in the first place.”
In adults, this skill is often referred to as metacognition, or thinking about thinking, and it’s what allows people to outsmart their shortcomings. (When Odysseus had himself tied to the ship’s mast, he was using some of the skills of metacognition: knowing he wouldn’t be able to resist the Sirens’ song, he made it impossible to give in.) Mischel’s large data set from various studies allowed him to see that children with a more accurate understanding of the workings of self-control were better able to delay gratification. “What’s interesting about four-year-olds is that they’re just figuring out the rules of thinking,” Mischel says. “The kids who couldn’t delay would often have the rules backwards. They would think that the best way to resist the marshmallow is to stare right at it, to keep a close eye on the goal. But that’s a terrible idea. If you do that, you’re going to ring the bell before I leave the room.”
According to Mischel, this view of will power also helps explain why the marshmallow task is such a powerfully predictive test. “If you can deal with hot emotions, then you can study for the S.A.T. instead of watching television,” Mischel says. “And you can save more money for retirement. It’s not just about marshmallows.”
Subsequent work by Mischel and his colleagues found that these differences were observable in subjects as young as nineteen months. Looking at how toddlers responded when briefly separated from their mothers, they found that some immediately burst into tears, or clung to the door, but others were able to overcome their anxiety by distracting themselves, often by playing with toys. When the scientists set the same children the marshmallow task at the age of five, they found that the kids who had cried also struggled to resist the tempting treat.
The early appearance of the ability to delay suggests that it has a genetic origin, an example of personality at its most predetermined. Mischel resists such an easy conclusion. “In general, trying to separate nature and nurture makes about as much sense as trying to separate personality and situation,” he says. “The two influences are completely interrelated.” For instance, when Mischel gave delay-of-gratification tasks to children from low-income families in the Bronx, he noticed that their ability to delay was below average, at least compared with that of children in Palo Alto. “When you grow up poor, you might not practice delay as much,” he says. “And if you don’t practice then you’ll never figure out how to distract yourself. You won’t develop the best delay strategies, and those strategies won’t become second nature.” In other words, people learn how to use their mind just as they learn how to use a computer: through trial and error.
But Mischel has found a shortcut. When he and his colleagues taught children a simple set of mental tricks—such as pretending that the candy is only a picture, surrounded by an imaginary frame—he dramatically improved their self-control. The kids who hadn’t been able to wait sixty seconds could now wait fifteen minutes. “All I’ve done is given them some tips from their mental user manual,” Mischel says. “Once you realize that will power is just a matter of learning how to control your attention and thoughts, you can really begin to increase it.”
Marc Berman, a lanky graduate student with an easy grin, speaks about his research with the infectious enthusiasm of a freshman taking his first philosophy class. Berman works in the lab of John Jonides, a psychologist and neuroscientist at the University of Michigan, who is in charge of the brain-scanning experiments on the original Bing subjects. He knows that testing forty-year-olds for self-control isn’t a straightforward proposition. “We can’t give these people marshmallows,” Berman says. “They know they’re part of a long-term study that looks at delay of gratification, so if you give them an obvious delay task they’ll do their best to resist. You’ll get a bunch of people who refuse to touch their marshmallow.”
This meant that Jonides and his team had to find a way to measure will power indirectly. Operating on the premise that the ability to delay eating the marshmallow had depended on a child’s ability to banish thoughts of it, they decided on a series of tasks that measure the ability of subjects to control the contents of working memory—the relatively limited amount of information we’re able to consciously consider at any given moment. According to Jonides, this is how self-control “cashes out” in the real world: as an ability to direct the spotlight of attention so that our decisions aren’t determined by the wrong thoughts.
Last summer, the scientists chose fifty-five subjects, equally split between high delayers and low delayers, and sent each one a laptop computer loaded with working-memory experiments. Two of the experiments were of particular interest. The first is a straightforward exercise known as the “suppression task.” Subjects are given four random words, two printed in blue and two in red. After reading the words, they’re told to forget the blue words and remember the red words. Then the scientists provide a stream of “probe words” and ask the subjects whether the probes are the words they were asked to remember. Though the task doesn’t seem to involve delayed gratification, it tests the same basic mechanism. Interestingly, the scientists found that high delayers were significantly better at the suppression task: they were less likely to think that a word they’d been asked to forget was something they should remember.
In the second, known as the Go/No Go task, subjects are flashed a set of faces with various expressions. At first, they are told to press the space bar whenever they see a smile. This takes little effort, since smiling faces automatically trigger what’s known as “approach behavior.” After a few minutes, however, subjects are told to press the space bar when they see frowning faces. They are now being forced to act against an impulse. Results show that high delayers are more successful at not pressing the button in response to a smiling face.
When I first started talking to the scientists about these tasks last summer, they were clearly worried that they wouldn’t find any behavioral differences between high and low delayers. It wasn’t until early January that they had enough data to begin their analysis (not surprisingly, it took much longer to get the laptops back from the low delayers), but it soon became obvious that there were provocative differences between the two groups. A graph of the data shows that as the delay time of the four-year-olds decreases, the number of mistakes made by the adults sharply rises.
The big remaining question for the scientists is whether these behavioral differences are detectable in an fMRI machine. Although the scanning has just begun—Jonides and his team are still working out the kinks—the scientists sound confident. “These tasks have been studied so many times that we pretty much know where to look and what we’re going to find,” Jonides says. He rattles off a short list of relevant brain regions, which his lab has already identified as being responsible for working-memory exercises. For the most part, the regions are in the frontal cortex—the overhang of brain behind the eyes—and include the dorsolateral prefrontal cortex, the anterior prefrontal cortex, the anterior cingulate, and the right and left inferior frontal gyri. While these cortical folds have long been associated with self-control, they’re also essential for working memory and directed attention. According to the scientists, that’s not an accident. “These are powerful instincts telling us to reach for the marshmallow or press the space bar,” Jonides says. “The only way to defeat them is to avoid them, and that means paying attention to something else. We call that will power, but it’s got nothing to do with the will.”
The behavioral and genetic aspects of the project are overseen by Yuichi Shoda, a professor of psychology at the University of Washington, who was one of Mischel’s graduate students. He’s been following these “marshmallow subjects” for more than thirty years: he knows everything about them from their academic records and their social graces to their ability to deal with frustration and stress. The prognosis for the genetic research remains uncertain. Although many studies have searched for the underpinnings of personality since the completion of the Human Genome Project, in 2003, many of the relevant genes remain in question. “We’re incredibly complicated creatures,” Shoda says. “Even the simplest aspects of personality are driven by dozens and dozens of different genes.” The scientists have decided to focus on genes in the dopamine pathways, since those neurotransmitters are believed to regulate both motivation and attention. However, even if minor coding differences influence delay ability—and that’s a likely possibility—Shoda doesn’t expect to discover these differences: the sample size is simply too small.
In recent years, researchers have begun making house visits to many of the original subjects, including Carolyn Weisz, as they try to better understand the familial contexts that shape self-control. “They turned my kitchen into a lab,” Carolyn told me. “They set up a little tent where they tested my oldest daughter on the delay task with some cookies. I remember thinking, I really hope she can wait.”
While Mischel closely follows the steady accumulation of data from the laptops and the brain scans, he’s most excited by what comes next. “I’m not interested in looking at the brain just so we can use a fancy machine,” he says. “The real question is what can we do with this fMRI data that we couldn’t do before?” Mischel is applying for an N.I.H. grant to investigate various mental illnesses, like obsessive-compulsive disorder and attention-deficit disorder, in terms of the ability to control and direct attention. Mischel and his team hope to identify crucial neural circuits that cut across a wide variety of ailments. If there is such a circuit, then the same cognitive tricks that increase delay time in a four-year-old might help adults deal with their symptoms. Mischel is particularly excited by the example of the substantial subset of people who failed the marshmallow task as four-year-olds but ended up becoming high-delaying adults. “This is the group I’m most interested in,” he says. “They have substantially improved their lives.”
Mischel is also preparing a large-scale study involving hundreds of schoolchildren in Philadelphia, Seattle, and New York City to see if self-control skills can be taught. Although he previously showed that children did much better on the marshmallow task after being taught a few simple “mental transformations,” such as pretending the marshmallow was a cloud, it remains unclear if these new skills persist over the long term. In other words, do the tricks work only during the experiment or do the children learn to apply them at home, when deciding between homework and television?
Angela Lee Duckworth, an assistant professor of psychology at the University of Pennsylvania, is leading the program. She first grew interested in the subject after working as a high-school math teacher. “For the most part, it was an incredibly frustrating experience,” she says. “I gradually became convinced that trying to teach a teen-ager algebra when they don’t have self-control is a pretty futile exercise.” And so, at the age of thirty-two, Duckworth decided to become a psychologist. One of her main research projects looked at the relationship between self-control and grade-point average. She found that the ability to delay gratification—eighth graders were given a choice between a dollar right away or two dollars the following week—was a far better predictor of academic performance than I.Q. She said that her study shows that “intelligence is really important, but it’s still not as important as self-control.”
Last year, Duckworth and Mischel were approached by David Levin, the co-founder of KIPP, an organization of sixty-six public charter schools across the country. KIPP schools are known for their long workday—students are in class from 7:25 A.M. to 5 P.M.—and for dramatic improvement of inner-city students’ test scores. (More than eighty per cent of eighth graders at the KIPP academy in the South Bronx scored at or above grade level in reading and math, which was nearly twice the New York City average.) “The core feature of the KIPP approach is that character matters for success,” Levin says. “Educators like to talk about character skills when kids are in kindergarten—we send young kids home with a report card about ‘working well with others’ or ‘not talking out of turn.’ But then, just when these skills start to matter, we stop trying to improve them. We just throw up our hands and complain.”
Self-control is one of the fundamental “character strengths” emphasized by KIPP—the KIPP academy in Philadelphia, for instance, gives its students a shirt emblazoned with the slogan “Don’t Eat the Marshmallow.” Levin, however, remained unsure about how well the program was working—“We know how to teach math skills, but it’s harder to measure character strengths,” he says—so he contacted Duckworth and Mischel, promising them unfettered access to KIPP students. Levin also helped bring together additional schools willing to take part in the experiment, including Riverdale Country School, a private school in the Bronx; the Evergreen School for gifted children, in Shoreline, Washington; and the Mastery Charter Schools, in Philadelphia.
For the past few months, the researchers have been conducting pilot studies in the classroom as they try to figure out the most effective way to introduce complex psychological concepts to young children. Because the study will focus on students between the ages of four and eight, the classroom lessons will rely heavily on peer modelling, such as showing kindergartners a video of a child successfully distracting herself during the marshmallow task. The scientists have some encouraging preliminary results—after just a few sessions, students show significant improvements in the ability to deal with hot emotional states—but they are cautious about predicting the outcome of the long-term study. “When you do these large-scale educational studies, there are ninety-nine uninteresting reasons the study could fail,” Duckworth says. “Maybe a teacher doesn’t show the video, or maybe there’s a field trip on the day of the testing. This is what keeps me up at night.”
Mischel’s main worry is that, even if his lesson plan proves to be effective, it might still be overwhelmed by variables the scientists can’t control, such as the home environment. He knows that it’s not enough just to teach kids mental tricks—the real challenge is turning those tricks into habits, and that requires years of diligent practice. “This is where your parents are important,” Mischel says. “Have they established rituals that force you to delay on a daily basis? Do they encourage you to wait? And do they make waiting worthwhile?” According to Mischel, even the most mundane routines of childhood—such as not snacking before dinner, or saving up your allowance, or holding out until Christmas morning—are really sly exercises in cognitive training: we’re teaching ourselves how to think so that we can outsmart our desires. But Mischel isn’t satisfied with such an informal approach. “We should give marshmallows to every kindergartner,” he says. “We should say, ‘You see this marshmallow? You don’t have to eat it. You can wait. Here’s how.’ ”
Exercise and the Brain
Exercise clears the mind. It gets the blood pumping and more oxygen is delivered to the brain. But Dartmouth’s David Bucci thinks there is much more going on.
“In the last several years there have been data suggesting that neurobiological changes are happening — [there are] very brain-specific mechanisms at work here,” says Bucci, an associate professor in the Department of Psychological and Brain Sciences.
From his studies, Bucci and his collaborators have revealed important new findings:
The effects of exercise are different on memory as well as on the brain, depending on whether the exerciser is an adolescent or an adult.
A gene has been identified which seems to mediate the degree to which exercise has a beneficial effect. This has implications for the potential use of exercise as an intervention for mental illness.
Bucci began his pursuit of the link between exercise and memory with attention deficit hyperactivity disorder (ADHD), one of the most common childhood psychological disorders. Bucci is concerned that the treatment of choice seems to be medication.
“The notion of pumping children full of psycho-stimulants at an early age is troublesome,” Bucci cautions. “We frankly don’t know the long-term effects of administering drugs at an early age—drugs that affect the brain—so looking for alternative therapies is clearly important.”
Anecdotal evidence from colleagues at the University of Vermont started Bucci down the track of ADHD. Based on observations of ADHD children in Vermont summer camps, athletes or team sports players were found to respond better to behavioral interventions than more sedentary children. While systematic empirical data is lacking, this association of exercise with a reduction of characteristic ADHD behaviors was persuasive enough for Bucci.
Coupled with his interest in learning and memory and their underlying brain functions, Bucci and teams of graduate and undergraduate students embarked upon a project of scientific inquiry, investigating the potential connection between exercise and brain function. They published papers documenting their results, with the most recent now available in the online version of the journal Neuroscience.
Bucci is quick to point out that “the teams of both graduate and undergraduates are responsible for all this work, certainly not just me.” Michael Hopkins, a graduate student at the time, is first author on the papers.
Early on, laboratory rats that exhibit ADHD-like behavior demonstrated that exercise was able to reduce the extent of these behaviors. The researchers also found that exercise was more beneficial for female rats than males, similar to how it differentially affects male and female children with ADHD.
Moving forward, they investigated a mechanism through which exercise seems to improve learning and memory. This is “brain derived neurotrophic factor” (BDNF) and it is involved in growth of the developing brain. The degree of BDNF expression in exercising rats correlated positively with improved memory, and exercising as an adolescent had longer lasting effects compared to the same duration of exercise, but done as an adult.
“The implication is that exercising during development, as your brain is growing, is changing the brain in concert with normal developmental changes, resulting in your having more permanent wiring of the brain in support of things like learning and memory,” says Bucci. “It seems important to [exercise] early in life.”
Bucci’s latest paper was a move to take the studies of exercise and memory in rats and apply them to humans. The subjects in this new study were Dartmouth undergraduates and individuals recruited from the Hanover community.
Bucci says that, “the really interesting finding was that, depending on the person’s genotype for that trophic factor [BDNF], they either did or did not reap the benefits of exercise on learning and memory. This could mean that you may be able to predict which ADHD child, if we genotype them and look at their DNA, would respond to exercise as a treatment and which ones wouldn’t.”
Bucci concludes that the notion that exercise is good for health including mental health is not a huge surprise. “The interesting question in terms of mental health and cognitive function is how exercise affects mental function and the brain.” This is the question Bucci, his colleagues, and students continue to pursue.
Why Smart People Are Stupid
Here’s a simple arithmetic question: A bat and ball cost a dollar and ten cents. The bat costs a dollar more than the ball. How much does the ball cost?
The vast majority of people respond quickly and confidently, insisting the ball costs ten cents. This answer is both obvious and wrong. (The correct answer is five cents for the ball and a dollar and five cents for the bat.)
For more than five decades, Daniel Kahneman, a Nobel Laureate and professor of psychology at Princeton, has been asking questions like this and analyzing our answers. His disarmingly simple experiments have profoundly changed the way we think about thinking. While philosophers, economists, and social scientists had assumed for centuries that human beings are rational agents—reason was our Promethean gift—Kahneman, the late Amos Tversky, and others, including Shane Frederick (who developed the bat-and-ball question), demonstrated that we’re not nearly as rational as we like to believe.
When people face an uncertain situation, they don’t carefully evaluate the information or look up relevant statistics. Instead, their decisions depend on a long list of mental shortcuts, which often lead them to make foolish decisions. These shortcuts aren’t a faster way of doing the math; they’re a way of skipping the math altogether. Asked about the bat and the ball, we forget our arithmetic lessons and instead default to the answer that requires the least mental effort.
Although Kahneman is now widely recognized as one of the most influential psychologists of the twentieth century, his work was dismissed for years. Kahneman recounts how one eminent American philosopher, after hearing about his research, quickly turned away, saying, “I am not interested in the psychology of stupidity.”
The philosopher, it turns out, got it backward. A new study in the Journal of Personality and Social Psychology led by Richard West at James Madison University and Keith Stanovich at the University of Toronto suggests that, in many instances, smarter people are more vulnerable to these thinking errors. Although we assume that intelligence is a buffer against bias—that’s why those with higher S.A.T. scores think they are less prone to these universal thinking mistakes—it can actually be a subtle curse.
West and his colleagues began by giving four hundred and eighty-two undergraduates a questionnaire featuring a variety of classic bias problems. Here’s a example:In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?Your first response is probably to take a shortcut, and to divide the final answer by half. That leads you to twenty-four days. But that’s wrong. The correct solution is forty-seven days.
West also gave a puzzle that measured subjects’ vulnerability to something called “anchoring bias,” which Kahneman and Tversky had demonstrated in the nineteen-seventies. Subjects were first asked if the tallest redwood tree in the world was more than X feet, with X ranging from eighty-five to a thousand feet. Then the students were asked to estimate the height of the tallest redwood tree in the world. Students exposed to a small “anchor”—like eighty-five feet—guessed, on average, that the tallest tree in the world was only a hundred and eighteen feet. Given an anchor of a thousand feet, their estimates increased seven-fold.
But West and colleagues weren’t simply interested in reconfirming the known biases of the human mind. Rather, they wanted to understand how these biases correlated with human intelligence. As a result, they interspersed their tests of bias with various cognitive measurements, including the S.A.T. and the Need for Cognition Scale, which measures “the tendency for an individual to engage in and enjoy thinking.”
The results were quite disturbing. For one thing, self-awareness was not particularly useful: as the scientists note, “people who were aware of their own biases were not better able to overcome them.” This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and Slow” that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—a tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes.
Perhaps our most dangerous bias is that we naturally assume that everyone else is more susceptible to thinking errors, a tendency known as the “bias blind spot.” This “meta-bias” is rooted in our ability to spot systematic mistakes in the decisions of others—we excel at noticing the flaws of friends—and inability to spot those same mistakes in ourselves. Although the bias blind spot itself isn’t a new concept, West’s latest paper demonstrates that it applies to every single bias under consideration, from anchoring to so-called “framing effects.” In each instance, we readily forgive our own minds but look harshly upon the minds of other people.
And here’s the upsetting punch line: intelligence seems to make things worse. The scientists gave the students four measures of “cognitive sophistication.” As they report in the paper, all four of the measures showed positive correlations, “indicating that more cognitively sophisticated participants showed larger bias blind spots.” This trend held for many of the specific biases, indicating that smarter people (at least as measured by S.A.T. scores) and those more likely to engage in deliberation were slightly more vulnerable to common mental mistakes. Education also isn’t a savior; as Kahneman and Shane Frederick first noted many years ago, more than fifty per cent of students at Harvard, Princeton, and M.I.T. gave the incorrect answer to the bat-and-ball question.
What explains this result? One provocative hypothesis is that the bias blind spot arises because of a mismatch between how we evaluate others and how we evaluate ourselves. When considering the irrational choices of a stranger, for instance, we are forced to rely on behavioral information; we see their biases from the outside, which allows us to glimpse their systematic thinking errors. However, when assessing our own bad choices, we tend to engage in elaborate introspection. We scrutinize our motivations and search for relevant reasons; we lament our mistakes to therapists and ruminate on the beliefs that led us astray.
The problem with this introspective approach is that the driving forces behind biases—the root causes of our irrationality—are largely unconscious, which means they remain invisible to self-analysis and impermeable to intelligence. In fact, introspection can actually compound the error, blinding us to those primal processes responsible for many of our everyday failings. We spin eloquent stories, but these stories miss the point. The more we attempt to know ourselves, the less we actually understand.
Your Mind On Magic
PINCH a coin at its edge between the thumb and first fingers of your right hand and begin to place it in your left palm, without letting go. Begin to close the fingers of the left hand. The instant the coin is out of sight, extend the last three digits of your right hand and secretly retract the coin. Make a fist with your left — as if holding the coin — as your right hand palms the coin and drops to the side.
You’ve just performed what magicians call a retention vanish: a false transfer that exploits a lag in the brain’s perception of motion, called persistence of vision. When done right, the spectator will actually see the coin in the left palm for a split second after the hands separate.
This bizarre afterimage results from the fact that visual neurons don’t stop firing once a given stimulus (here, the coin) is no longer present. As a result, our perception of reality lags behind reality by about one one-hundredth of a second.
Magicians have long used such cognitive biases to their advantage, and in recent years scientists have been following in their footsteps, borrowing techniques from the conjurer’s playbook in an effort not to mystify people but to study them. Magic may seem an unlikely tool, but it’s already yielded several widely cited results. Consider the work on choice blindness — people’s lack of awareness when evaluating the results of their decisions.
In one study, shoppers in a blind taste test of two types of jam were asked to choose the one they preferred. They were then given a second taste from the jar they picked. Unbeknown to them, the researchers swapped the flavors before the second spoonful. The containers were two-way jars, lidded at both ends and rigged with a secret compartment that held the other jam on the opposite side — a principle that’s been used to bisect countless showgirls. This seems like the sort of thing that wouldn’t scan, yet most people failed to notice that they were tasting the wrong jam, even when the two flavors were fairly dissimilar, like grapefruit and cinnamon-apple.
In a related experiment, volunteers were shown a pair of female faces and asked which they found more attractive. Then they were given a closer look at their putative selection. In fact, the researchers swapped the selection for the “less attractive” face. Again, this bit of fraud flew by most people. Not only that, when pressed to justify their choices, the duped victims concocted remarkably detailed post hoc justifications.
Such tricks suggest that we are often blind to the results of our own decisions. Once a choice is made, our minds tend to rewrite history in a way that flatters our volition, a fact magicians have exploited for centuries. “If you are given a choice, you believe you have acted freely,” says Teller, one half of the study-in-contrasts duo Penn and Teller. “This is one of the darkest of all psychological secrets.”
Another dark psychological secret magicians routinely take advantage of is known as change blindness — the failure to detect changes in consecutive scenes. One of the most beautiful demonstrations is an experiment conducted by the psychologist Daniel Simons in which he had an experimenter stop random strangers on the street and ask for directions.
Midway through the conversation, a pair of confederates walked between them and blocked the stranger’s view, and the experimenter switched places with one of the stooges. Moments later, the stranger was talking to a completely different person — yet strange as it may sound, most didn’t notice.
What are the neural correlates of these cognitive hiccups? One possible answer comes from studies of the so-called face test, in which a volunteer is shown two faces in quick succession. Normally, just about anyone can distinguish the faces provided they’re shown within about half a second. But if the person is distracted by a task like counting, or by a flashing light, the faces start to look the same.
Here’s where it gets interesting, though. Scientists have found a way to induce change blindness, with a machine called a transcranial magnetic stimulator, which uses a magnetic field to disrupt localized brain regions. In one experiment, a T.M.S. was used to scramble the parietal cortex, which controls attention. Subjects were then given the face test. With the machine turned off, they did fine. But when the T.M.S. was on, most failed the test. Conclusion? Misdirection paralyzes part of your cortex.
Such blind spots confirm what many philosophers have long suspected: reality and our perception of it are incommensurate to a far greater degree than is often believed. For all its apparent fidelity, the movie in our heads is a “Rashomon” narrative pieced together from inconsistent and unreliable bits of information. It is, to a certain extent, an illusion.
Overdiagnosis can turn healthy individuals into anxious patients, with one GP suggesting that modern psychiatry is medicalising normality.
Professional ethical guidelines forbid doctors from endorsing pharmaceutical products, but I hope the General Medical Council will make an exception on this occasion, as the drug I am about to promote is truly amazing.
May I introduce Havidol (avafynetyme) — “when more is not enough”. It’s the first, and only, treatment for dysphoric social attention consumption deficit anxiety disorder (DSACDAD). Not only will it help you to achieve everything you desire and deserve, but the side effects are pretty tasty too: shiny teeth, glowing skin and, in men, hair growth and delayed sexual climax.
OK, so there is no such condition and no such panacea, but DSACDAD and Havidol have featured in recent debate between doctors on the medicalisation of normality, known as medical creep, where overdiagnosis turns healthy individuals into anxious patients.
The British Medical Journal (BMJ) has carried a couple of interesting articles on the subject recently. The first, by Des Spence, a GP, questions diagnostic criteria used by psychiatrists — criteria which suggest that a quarter of the population of the United States has a “mental illness”, one in 30 boys is on the autistic spectrum, and one in six children has ADHD (attention deficit hyperactivity disorder).
Those behind the criteria cite better awareness and diagnosis to explain these statistics, but Spence’s conclusion is more sinister — that modern psychiatry is medicalising normality, an unwelcome move that may benefit only psychiatrists and the pharmaceutical industry, “for which mental ill health is the profit nirvana of lifelong multiple medications”.
It is not just psychiatry that is being criticised. Last week the BMJ carried an article by Australian researchers entitled Preventing overdiagnosis: how to stop harming the healthy, which highlighted growing evidence that modern medicine is causing harm “through ever earlier detection and ever wider definition of disease”.
Examples given range from the introduction of new conditions in which common difficulties have been reclassified as diseases — such as female sexual dysfunction (low libido, difficulty achieving orgasm, pain on intercourse) — to incidentalomas (innocent lumps and bumps picked up by increasingly sensitive scanning technology that lead to unnecessary further investigation or treatment).
I know how worrying incidentalomas can be. Five years ago I submitted myself to a total body scan for an article I was writing, only to be told that I had three nodules on my lungs, one of which was worryingly large. The consultant suggested repeat scans every three months to see if the nodules were growing, in which case I would need to have them removed.
I was eventually given the all-clear, but it is not an experience that I would care to repeat and, even if the nodules had turned out to be cancerous, it is unclear whether the early diagnosis would have saved my life or just meant I lived with the disease for longer.
Screening for other cancers has been implicated in overdiagnosis too. Recent controversy around the National Breast Cancer Screening Programme has centred on concerns that mammography can overdiagnose some types of early cancerous change, putting a significant minority of women through unnecessary anxiety and treatment.
The PSA blood test for prostate cancer is another example. In some men it is a lifesaver, but in others it detects a problem that may never have otherwise come to light. But, as with all tests, once you have found an abnormality, both patient and doctor feel bound to do something about it, and the resulting treatment — such as radical surgery or X-ray treatment — can cause more problems than it solves.
Medical creep is an issue in less sinister conditions too. A Canadian study highlighted in the BMJ article suggests that a third of people receiving a diagnosis of asthma may not have it, and two-thirds of them probably don’t need the medicines they are prescribed. And there is evidence that we are overtreating and overdiagnosing lots of other problems, from high blood pressure and raised cholesterol levels to chronic kidney disease and osteoporosis.
Ironically, while doctors are debating what to do about this overdiagnosis, patients remain more worried about underdiagnosis, with the general public often struggling to understand why more testing, and more screening, can be anything other than a good thing. PSA being a case in point.
Don’t get me wrong, most medical advances, including new drugs, are welcome and needed, but a generous dollop of scepticism is healthy.
Doctor doesn’t always know best.
For more detailed information on Havidol, and guidance on how to persuade your doctor to prescribe it, visit the spoof website havidol.com. Des Spence's article on the advance of modern psychiatry is in the May 2 edition of the BMJ, and Preventing overdiagnosis is in the May 30 edition.
Narcissism and Humility
SOME experts claim there is an epidemic of narcissism. Arrogant young people are the usual suspects — the sneering, supercilious student or the selfish, dismissive intern. But pompous, compliment-demanding executives are also held to suffer from an excess of self-love.
The narcissism disease probably has its roots in the self-esteem movement that began in America. Teachers were worried about (mostly minority) students who did poorly at school. They noticed that the pupils’ bad results led them to feel bad about themselves and rebel.
The teachers argued, with more passion than evidence, that the way to get them to do better at school was to work on their self-esteem rather than their maths or grammar. The naive belief took hold that if you felt better about yourself, you could discover and release your natural abilities.
This folly was fuelled by the “multiple intelligence” gurus and their claim that any human capability was a type of intelligence. So dancing became an intelligence. This is why there are daft degrees in trivial activities, with students expressing great offence — all part of clinical narcissism — at any hint that they might be doing a pointless degree at a bogus university.
The theorists were right about the relationship between self-esteem and (academic) success but wrong about the direction of causality. Doing well is the cause, not the result, of high self-esteem. Nurturing self-esteem in the hope that it brings success just feeds narcissism. And narcissism is a hungry and fragile plant.
Healthy self-regard comes from finding strengths, working on them and building a skills base. It involves dedication and resolve. And from that investment flows self-esteem.
So what happened to humility? Nearly all religions condemn arrogance and praise humility. There are stories and parables that warn against arrogance and cite pride as one of the seven deadly sins. There are strictures on selfishness and ignoring one’s fellow man, and on the foolishness of chasing materialism rather than spiritualism.
Note the charm of the Amish and the strength of the Quakers. Amish adolescents can look sheltered, naive and vulnerable; they seem throwbacks to another age. But who would you choose to teach or spend time with — a class of Amish or Quakers, or a gang of feral inner-city children who trumpet their rights?
Humility begets kindness. It is as attractive as hubris can be repulsive. But there are two other types of humility. First there is British false humility — the kind that always foxes Americans. It can be spotted in how people talk about success. So you say “I was fortunate enough” to be selected for Oxbridge, the Olympics or promotion to the board. The idea is that you invoke luck to explain success — not talent, hard work or family privilege. The understatement continues when describing an occupation. The answer “I sell vacuum cleaners” or “I dabble in art” could mean that you are sitting next to Sir James Dyson or Charles Saatchi.
It is a trick, of course, but it fulfils some important social rules. Arrogance, self-importance and being a show-off are a “pretty poor show”. But believing in yourself and your ability is absolutely fundamental. You have to be sufficiently strong to show weakness. You have to be sufficiently confident to be humble.
There are cases where a sort of bumbling humility is not thought of so highly by the British. This may be nicely illustrated by the famous comment that Winston Churchill made about one of his adversaries. Clement Attlee is a humble man, said Churchill, but then he has a lot to be humble about.
This is indeed very different from the second form of humility, which is the debilitating and dangerous kind. It is the belief that putting yourself forward or first in any situation is morally wrong. The psychiatrists may call this “dependent personality disorder”. These humble people are frequently exploited by their selfish colleagues.
Often this humility is driven by psychological problems around inadequacy. Religions reinforce this. Consider the Prayer of Humble Access — “We are not worthy so much as to gather up the crumbs under thy table” — or the many prayers of penitence. Clearly, religions can go too far in encouraging the believers to feel worthless in the sight of the Almighty.
It may be best to try the humility of those non-sacramental groups like the Quakers or the Salvation Army — people who remind themselves how fortunate they are and how many are less so, and set about doing something to help them.
So, it may be that the excessively humble do not inherit the earth, but those who think about the plight of others do.
Your Brain On Love
Love may not be blind, but it does make you dumb, according to brain scans of people in the early days of a romance.
MRI images show that when people gaze at pictures of their loved ones, the rational parts of their brains shut down, allowing the heart to rule the head. As a result, would-be suitors, their critical faculties dulled, are more likely to overlook niggling personality traits.
Robin Dunbar, an evolutionary psychologist at the University of Oxford, believes that the “rose-tinted spectacles” effect encourages people to take greater risks.
“What seems to be happening is that you have subconsciously made up your mind that you are interested in the person and the rational bit of the brain — the bit that would normally say ‘hang on a minute’ — gets switched off,” he said. “The more emotional parts of the brain are given a free ride. It looks very much like the rose-tinted spectacles kicking in.”
Professor Dunbar’s theory emerged after he analysed findings of brain scan experiments carried out a decade ago at University College London. The research by Semir Zeki and Andreas Bartels used functional Magnetic Resonance Imaging to look at the brain activity of 17 volunteers as they were shown pictures of their boyfriends or girlfriends. The volunteers were recruited to the experiment because all professed to be “truly, madly and deeply in love”. As they lay in the MRI scanner, they were shown three images of friends and one of their partner. “What struck me looking at the data was that parts of the frontal lobe, which is the region of the brain that does the heavy rational work, were deactivated when they looked at pictures of their beloved, compared to pictures of their friends,” said Professor Dunbar, who discussed the science of falling of love at The Times Cheltenham Science Festival last night.
The brain regions affected by “rose-tinted spectacle syndrome” are the dorsolateral prefrontal cortex and the orbitofrontal cortex. These are important in theory of mind, or the ability to see the world from someone else’s perspective, and in rational thought and reasoning.
“In a relationship you are in a trade-off between caution and just going for it,” Professor Dunbar said. “There is a view that emotion exists to get you off the fence. A purely rational organism would sit on the fence all the time to avoid being hurt. But if you don’t engage, you won’t form relationships. If the prefrontal cortex is shut down, that protective and cautious element goes.”
The effect is seen in both men and women. However, women appear to be the driving force in keeping relationships going.
Earlier this year, a study of mobile phone records by Professor Dunbar showed that men call their romantic partners more than any other person in the first seven years of a relationship. But after seven years, their focus shifts to other friends.
Women, by contrast, continue to phone their partners more than anyone else for the first 14 years of a relationship. Only then do they tend to shift their attention to friends.
Professor Dunbar, whose book The Science of Love and Betrayal was published this year, rejected the idea that falling in love was merely a cultural phenomenon. “If you look at poetry from all over the world, and at poems going back 5,000 years, you see the same thing — people describing falling in love,” he said. “It doesn’t mean everyone experiences it. It is just that it is widespread and it long predates Mills and Boon.”
The Power of Situation
CLEARLY, a person’s decisions are determined by circumstances. Just how closely they are determined, however, has only recently become apparent. Experiments conducted over the past few years have revealed that giving someone an icy drink at a party leads him to believe he is getting the cold shoulder from fellow guests, that handing over a warm drink gives people a sense of warmth from others, and—most astonishingly—that putting potential voters in chairs which lean slightly to the left causes them to become more agreeable towards policies associated with the left of the political spectrum.
The latest of these studies also looks at the effect of furniture. It suggests that something as trivial as the stability of chairs and tables has an effect on perceptions and desires.
The study was conducted by David Kille, Amanda Forest and Joanne Wood at the University of Waterloo, in Canada, and will be published soon in Psychological Science. Mr Kille and his collaborators asked half of their volunteers (47 romantically unattached undergraduate students) to sit in a slightly wobbly chair next to a slightly wobbly table while engaged in the task assigned. The others were asked to sit in chairs next to tables that looked physically identical, but were not wobbly.
Once in their chairs, participants were asked to judge the stability of the relationships of four celebrity couples: Barack and Michelle Obama, David and Victoria Beckham, Jay-Z and Beyoncé, and Johnny Depp and Vanessa Paradis. They did this by rating how likely they thought it was, on a scale of one to seven, that a couple would break up in the next five years. A score of one meant “extremely unlikely to dissolve”. A score of seven meant “extremely likely to dissolve”.
After they had done this, they were asked to rate their preferences for various traits in a potential romantic partner. Traits on offer included some which a pilot study indicated people associate with a sense of psychological stability (such as being trustworthy or reliable), some that are associated with psychological instability (being spontaneous or adventurous, for example) and some with no real relevance to instability or stability (like being loving or funny). Participants rated the various traits on another one-to-seven scale, with one indicating “not at all desirable” and seven meaning “extremely desirable”.
The results reveal that just as cold drinks lead to perceptions of social conditions being cold, tinkering with feelings of physical stability leads to perceptions of social instability. Participants who sat in wobbly chairs at wobbly tables gave the celebrity couples an average stability score of 3.2 while those whose furniture did not wobble gave them 2.5.
What was particularly intriguing, though, was that those sitting at wonky furniture not only saw instability in the relationships of others but also said that they valued stability in their own relationships more highly. They gave stability-promoting traits in potential romantic partners an average desirability score of 5.0, whereas those whose tables and chairs were stable gave these same traits a score of 4.5. The difference is not huge, but it is statistically significant. Even a small amount of environmental wobbliness seems to promote a desire for an emotional rock to cling to.
Extroverts and Introverts
Are introversion and shyness the same thing? "Though in popular media they're often viewed as the same, we know in the scientific community that, conceptually or empirically, they're unrelated," Schmidt says.
The two get confused because they both are related to socializing-but lack of interest in socializing is very clearly not the same as fearing it. Schmidt and Arnold H. Buss of the University of Texas wrote a chapter titled "Understanding Shyness" for the upcoming book The Development of Shyness and Social Withdrawal. There they write, "Sociability refers to the motive, strong or weak, of wanting to be with others, whereas shyness refers to behavior when with others, inhibited or uninhibited, as well as feelings of tension and discomfort." This differentiation between motivation and behavior is consistent with the ability many of us have to behave like extroverts when we choose, whereas shy people cannot turn their shyness off and on.
Seven things extroverts should know about their introverted friends:
1) We don’t need alone time because we don’t like you. We need alone time because we need alone time. Don’t take it personally.
2) We aren’t judging anyone when we sit quietly. We're just sitting quietly, probably enjoying watching extroverts in action.
3) If we say we’re having fun, we’re having fun, even though it might not look that way to you.
4) If we leave early, it’s not because we’re party poopers. We’re just pooped. Socializing takes a lot out of us.
5) If you want to hear what we have to say, give us time to say it. We don’t fight to be heard over other people. We just clam up.
6) We’re not lonely, we’re choosy. And we’re loyal to friends who don’t try to make us over into extroverts.
7) Anything but the telephone.
Seven things introverts should know about their extroverted friends:
1) Extroverts don’t understand introversion unless someone explains it.
2) Extroverts who try to get you to loosen up usually aren’t doing it to annoy you. They mean well.
3) Extroverts produce a lot of words but quantity does not preclude quality. There's often plenty of good stuff in there for those with the patience to listen.
4) Extroverts can teach us plenty about glad-handing and small talking. These are useful skills, whether or not you enjoy them.
5) Extroverts can’t read your mind and they’re not big on catching hints. Say what you want.
6) At parties, think of extroverted friends as a glider tow plane. They pull you in and get you started, but eventually you have to sail on your own.
7) Extroverts come in all different styles, just like introverts. Keep a lookout for extroverts with a quiet side, who make dandy friends.
Ego Depletion and Self Control
Do you have what it takes to resist temptation? Or do you find yourself indulging too often in a decadent dessert, using company time to check Facebook, or foregoing morning exercise in favor of sleep? We do not need a science experiment to understand the universality of cravings, desires and longings, or to understand how human desire serves as a double-edged sword. Urges motivate us in positive and important ways - to seek food, find shelter, make friends, get sleep, procreate. But left unchecked, our urges and desires can lead to a myriad of negative consequences, from obesity and poor health to reduced productivity, overspending, damaged relationships, substance abuse, and violence.
If your willpower is weak, a little divine intervention may help. In a series of studies, Kevin Rounding and colleagues tested participants' self-control by asking them to endure discomfort to earn a reward, or to delay immediate payment to obtain a larger stipend. Before the test of self-control, half of the participants were exposed to words with religious themes (e.g., divine, spirit, God) in a puzzle-solving task, and half completed the same task without the religious primes. Those who saw the primes were willing to endure greater discomfort and delay gratification longer than those who did not. Additional studies showed that religious primes also fortified self-control after the fact. In these studies, participants first attempted to resist temptation, and afterward half of the participants viewed religious primes while the other half did not. Finally, all participants were faced with an additional task involving self-restraint. Exposure to the religious words refueled resolve, as participants who saw the religious primes were able to persist at a frustrating task far longer than those who did not.
Resisting temptation can be difficult, especially if it involves repeated self-denial. Indeed, entire industries have evolved to provide support for those who have trouble saying "no" (consider weight loss and smoking cessation programs). Research by Roy Baumeister, Kathleen Vohs, and Dianne Tice sheds light on why self-control can be so elusive. According to Baumeister and colleagues, self-control operates in many ways like a muscle: It depends on a limited energy source that can be depleted. Thus with overexertion, particularly in a short time frame, self-control will fatigue and ultimately fail.
Support for the notion that self-control taxes a limited resource, and that depletion of this resource will lead to lapses in resistance, comes from studies that measure individuals' ability to resist temptation on consecutive tasks. In these studies, some participants first performed a self-control task (e.g., passing up chocolate chip cookies and instead eating a healthier alternative), while others performed a task that allowed them to indulge (e.g., eating the cookies). The critical question is how the experience of resisting temptation affected self-control when individuals were then immediately given another self-control challenge (e.g., solving a difficult puzzle without getting frustrated). Although researchers have varied both the initial temptation and the subsequent self-control challenge across studies (including physical, intellectual, and emotional enticements), the pattern of findings has been the same: People who successfully deny an urge or desire are less likely to regulate their behavior if faced with another test of self-control shortly thereafter.
This ego-depletion, as Baumeister and colleagues call it, occurs not only in the lab but in everyday experience as well. In a recent study, adults carried smart phones for a week, and were queried about their cravings at seven random times every day from early morning until late at night. When signaled, participants were to report whether or not they had experienced a desire within the last 30 minutes, and to indicate the nature of the desire (e.g., eating, coffee, sex, sleep, alcohol, social media, tobacco, spending, etc.). They also indicated the strength of the desire, whether it conflicted with other goals, whether they attempted to resist the desire, and whether they fulfilled the desire. When individuals repeatedly denied their impulses in a given day, the likelihood that they would give in to future temptations that day increased. This heightened vulnerability to temptation occurred even when the urges varied over the day, suggesting that the simple act of self-denial, regardless of what we are denying, weakens our global resolve.
Fortunately, there may to be ways to fortify our self-control beyond reminders of the divine. One obvious short-term step is to indulge a little in our cravings, particularly if we know we have to face a strong temptation or desire later in the day. If for example you are trying to watch what you eat and you plan to eat dinner out with friends, fulfilling other small urges earlier in the day (e.g., sleeping an extra 10 minutes or leaving work 30 minutes early) may improve your chances of skipping the chocolate cake at dessert.
In keeping with the muscle model of willpower, research suggests that you can also increase your self-control through regular exertion over time. Although repeated self-denial drains resolve in an immediate sense, it is possible to build endurance through the routine practice of self-control over time. When people engage in daily exercises of self-control, or focus efforts to enhance willpower in one area (e.g., spending), they show gradual improvements in their ability to resist impulses, and these benefits extend to tasks that are unrelated (e.g., studying or household chores).
Lacking the discipline to start your own self-control regimen? There is still hope. For those seeking a small but simple boost in willpower, other studies show that a cool glass of lemonade (with sugar) can replenish glucose in the bloodstream and (at least temporarily) rejuvenate one's resolve. Other quick fixes include a dose of laughter, monetary incentives, and an emphasis on social goals.
In a world where temptations seem to lurk around every corner, it may be prudent to take a converging methods approach to maintaining and improving self-control, with daily practice, a good sense of humor, the occasional financial incentive, and, if the spirit moves you, a divine reminder. And don't forget to indulge in a chocolate chip cookie every once in a while - that small indulgence may be just what you need to prevent a big misstep.
As both the midget in the country of Brobdingnag and the giant on the island of Lilliput, Lemuel Gulliver—the protagonist of Jonathan Swift's Gulliver's Travels—experienced firsthand that size is relative. As we cast a neuroscientific light on this classic book, it seems clear to us that Swift, a satirist, essayist and poet, knew a few things about the mind, too. Absolute size is meaningless to our brain: we gauge size by context. The same medium-sized circle will appear smaller when surrounded by large circles and bigger when surrounded by tiny ones, a phenomenon discovered by German psychologist Hermann Ebbinghaus. Social and psychological context also causes us to misperceive size. Recent research shows that spiders appear larger to people who suffer from arachnophobia than to those who are unafraid of bugs and that men holding weapons seem taller and stronger than men who are holding tools. In this article, we present a collection of illusions that will expand your horizons and shrink your confidence in what is real. Try them out for size!
Do you see tiny objects photographed with a macro lens? Look again. This remarkable illusion combines tilt-shift photography—in which the photographer uses selective focus and a special lens or tilted shot angle to make regular objects look toy-sized—with the strategic placement of a giant coin. Art designers Theo Tveterås and Lars Marcus Vedeler, from the Skrekkøgle group, created the enormous 50-cent euro coin from painted and lacquered wood at a 20:1 scale.
Barbie Trashes Her Dream House
At first sight, they look like real-life scenes from the television show Hoarders, precleanup. In reality, they are photographs of 1:6 scale dioramas by St. Louis–born artist Carrie M. Becker. She makes the cardboard boxes, garbage bags and other trash herself. The furniture and tiny objects are from Barbie's dream house and a Japanese miniatures company called Re-Ment. Becker filths up the rooms with actual dirt collected from the filter of a DustBuster, using the occasional Re-Ment meatball to simulate dog poop on the floor. When she photographs the scenes without an external reference, our brain relies on our everyday experience and assumes that the minuscule objects are life size. Only in proximity to an extraneous, actual-size object does the illusion fail.
You can look 10 pounds thinner with a well-known slimming trick: vertical lines elongate your shape and give you a more svelte appearance, right? Wrong! Vision scientists Peter Thompson and Kyriaki Mikellidou of the University of York in England say instead that it is time to ditch your vertical-striped wardrobe and invest in some horizontal-striped outfits. They found that vertical stripes on clothing make the wearer appear fatter and shorter than horizontal stripes do. Notice that the vertical-striped lady seems to have wider hips than the horizontal-striped model in the accompanying cartoons. The phenomenon is based on the Helmholtz illusion, in which a square made up of horizontal lines appears to be taller and narrower than an identical square made of vertical lines. The original report from 1867 of this illusion contained the intriguing reflection that ladies' frocks with horizontal stripes make the figure look taller. Because the remark ran counter to contemporary popular belief, the York researchers decided to put it to the test, finding that 19th-century German physicist and physician Hermann von Helmholtz did indeed have a great eye for fashion.
The full moon rising on the horizon appears to be massive. Hours later, when the moon is high overhead, it looks much smaller. Yet the disk that falls on your retina is not smaller for the overhead moon than it is for the rising moon. So why does the overhead moon seem smaller? One answer is that your brain infers the larger size of the rising moon because you see it next to trees, hills or other objects on the horizon. Your brain literally enlarges the moon to fit the context. Look for this effect the next time you see the moon in real life.
Objects project smaller images on our retinas as they move away from us, which can make it hard to decide if an item is truly small or just far away (as we see in this photograph). Forced perspective photography uses this ambiguity to great effect, while eliminating many of the habitual strategies that our brain uses to distinguish size from distance, such as stereopsis (our visual system can calculate the depth in a scene from the slight differences between our left and right retinal images) and motion parallax (as we move, objects closer to us move farther across our field of view than distant objects do).
Tall and Venti
Is your cuppa joe half empty or half full? It depends on your outlook—and on a little twist on the Jastrow illusion, named after Polish-born American psychologist Joseph Jastrow. In this classic illusion, two identical arches positioned in a certain configuration appear to have very different lengths. Magician Greg Wilson and writer and producer David Gripenwaldt realized that Starbucks coffee sleeves have the perfect shape for an impromptu demonstration of the Jastrow illusion, so now you can amaze your office mates at your next coffee break. All you need to do is align the coffee sleeves as in the accompanying photograph and—presto!—your tall cup sleeve is now venti-sized! Your brain compares the upper arch's lower right corner with the lower arch's upper right corner and concludes, incorrectly, that the upper sleeve is shorter than the lower sleeve. We would like to thank magician Victoria Skye for her demonstration of the Jastrow illusion with Starbucks coffee sleeves.
Card Magic Tricks
Think of a playing card. Got one in mind?
Although it may have felt like a free choice, think again: Most people choose one of only four cards, out of a deck of 52. For now, remember your card — we’ll return to it later.
For thousands of years, magicians have amazed audiences by developing and applying intuitions about the mind. Skilled magicians can manipulate memories, control attention, and influence choices. But magicians rarely know why these principles work. Studying magic could reveal the mechanisms of the mind that enable these principles, to uncover the why rather than just the how.
Some of these principles, such as illusions and misdirection, have recently lead to interesting discoveries. For example, in one study, a magician threw a ball into the air a few times. On the third throw, however, he only pretended to throw it. Two thirds of the participants reported seeing the ball vanish in mid-air, even though it never left his hand. The participants saw something amazing — something that never actually happened.
Another example is misdirection, where the magician hides the secret by manipulating what the audience perceives or thinks. One study tracked participants’ eye movements while showing them a vanishing cigarette trick. Even if participants looked directly at the secret move, they did not notice it if their attention was directed elsewhere. They looked, but did not see, thanks to the magician’s misdirection.
Other principles of magic involve card tricks. Magicians can often influence people to choose a particular card from a deck, or even know which card people will choose when asked to think of one. Studying these phenomena could help us learn about the mind, as did the study of illusions and misdirection.
But before we can understand card magic, we have to understand exactly how people perceive the cards themselves. To do this, I teamed up with another researcher and magician, Alym Amlani, as well as professor Ronald Rensink at the University of British Columbia. We applied well-known techniques from vision science to measure how well people see, remember, like, and choose each of the 52 cards in a standard deck. For example, people saw cards quickly presented one after another on a computer while they searched for a target card; their accuracy indicated the card’s visibility. To measure choice, we asked over a thousand people to either name or visualize a card, then recorded their selections.
Measuring these factors allowed us to test magicians’ intuitions about different cards. Our results confirmed several of these intuitions. For example, magicians believe that people treat the Ace of Spades and Queen of Hearts differently from other cards. Sure enough, accuracy for detecting and remembering was highest for the Ace of Spades, and both cards were among the most liked and most often chosen. Other cards chosen frequently were Sevens and Threes, consistent with other studies on how people choose digits.
Magicians also believe they know which cards people are least likely to choose. Now consider: Which card do you think people will name the least often?
Many magicians believe the answer is a mid-valued Club, like the Six of Clubs. Others appear to share that belief; hecklers sometimes end up choosing the Six during magic tricks. In fact, during pilot testing, when asked to name a card several people smugly asserted, “The Six of Clubs!”, perhaps trying to act unpredictably. But by doing so, they in fact acted more predictably. As it turned out, however, it was the black Nines that were chosen the least. Of the 1150 selections people made in our experiment, these cards were only chosen four times.
Several other common beliefs were also disproven. For example, magicians often say that when asked to name a card, women choose the Queen of Hearts more than men do. In our sample, we found the opposite: men chose the Queen of Hearts more than women did, and women chose the King of Hearts more than men did.
Other results appeared to be completely new. For example, people detected most cards equally well, except for the Six of Hearts and Diamonds, which seemed to be misreported more than any other cards. In other words, people saw red Sixes that were not there. Also, women seemed to prefer lower number cards, and men preferred higher ones. We don’t know why.
A final interesting result was that the exact wording of the question seemed to influence which cards people chose. When asked to name a card, over half of the people chose one of four cards: the Ace of Spades (25%), or the Queen (14%), Ace (6%), or King (6%) of Hearts. If you’re like most people, you may have chosen one of these cards when asked at the beginning of this article. (A full list of cards and their frequencies is also available.)
But when asked to visualize a card, people seemed to choose the Ace of Hearts more often. In our sample, they chose it almost twice as often when asked to visualize (11%) rather than name (6%) a card. Perhaps something about the visualization process makes people more likely to think of this particular card.
Systematic studies such as these can help form the basis of a psychology of card magic. Magicians can improve their tricks by knowing which cards people like the best or choose the most. Meanwhile, psychologists can follow up on unexpected findings to understand why people may misreport seeing red Sixes or why the wording of a question may bring different cards to mind.
And this is only the beginning. Applying these results, we can uncover the mechanisms behind the principles of card magic. If magicians can influence the audience’s decisions, what factors enable this influence? Why do people still feel like they have a free choice? Answers to these questions could provide new insights into persuasion, marketing, and decision making. Ultimately, we hope to develop a science of magic, where almost any trick can be understood in terms of its underlying psychological mechanisms. Such a science can keep the secrets of magic, while revealing the secrets of the mind.
The Reverse Psychology of Temptation
If you want to help someone stick to a decision, try tempting him out of it.
Published on August 6, 2012 by Peter Bregman in How We Work
"Oh this is delicious, Peter. The ice cream is homemade, the perfect consistency, And this lemon cookie on top, mmmmm. Are you sure you don't want some?"
Tom* smiled devilishly as he reached across the table to hand me a spoon. Tom is my client, the CEO of a $900 million company. I was in San Francisco to run a two-day offsite meeting for him and his leadership team. We've worked together for almost a decade and he's become a close, trusted friend.
We were at Greens in San Francisco, a vegetarian restaurant Tom had chosen because he had seen their cookbooks on my shelf in New York and knew I would love it.
Tom was teasing me because earlier in the meal I told him I was off sugary desserts. There's no medical reason or necessity for me to avoid sugar; I simply feel better when I'm not eating it. But he's seen me eat large quantities of sugary treats in the past and knows my willpower can be weak.
"It does look good and I'm glad you're enjoying it," I said, "but you're on your own. There's no chance I'm eating any."
"C'mon Peter, these desserts are healthy, and all we've eaten is vegetables anyway. It would be a real missed opportunity if you didn't at least taste the desserts at Greens; it's your favorite kind of food."
He took a bite from a second dessert he had ordered just to tantalize me — a berry pie — and rolled his eyes in mock ecstasy, "Ooh, this is good. And it's basically just fruit. Go ahead, have just a bite." As he edged it closer to my side of the table, the red caramelized berries dripped juice over the side of the plate.
The reasons to taste the desserts were compelling. Even putting aside the fact that Tom is a client and there's always some pressure to please clients, his rationalizations were the same rationalizations that were floating inside my head.
But here's the interesting thing: the more he pressured me to eat dessert, the stronger my resolve not to eat dessert grew.
My reaction caught me off guard and offered me a surprising strategy for helping people sustain change: if you want to help someone stick to a decision, try tempting him out of it. In other words, enticing someone to break a commitment can be a great tool to help him maintain his commitment.
Here's why: Going into the dinner, I had one reason I didn't want to eat dessert. But Tom's taunting gave me another reason: I was embarrassed to break my commitment in the face of his teasing. I didn't want to be the guy who caves in to peer pressure.
Maybe it's just my rebellious nature, but when my wife Eleanor reminds me that I don't really want to eat that cookie in my hand, I quickly try to stuff it in my mouth before she can stop me. Even though I've asked her to help me, my feeling is, "I'll eat whatever I want to eat!" It becomes a fun game, a challenge. Somehow, when she's helping me, I become a little less accountable.
But when Tom was egging me on, the tables were turned. I was fully responsible for my own actions. I knew I was on my own. And I also knew that the stakes were high; If I ate the dessert I would never live it down. The brilliance of the psychology is that Tom made it more fun — and free-spirited — to not eat dessert. And successfully withstanding his pressure built my confidence in my commitment.
This approach has broad application. Do you have a colleague who wants to speak less in meetings? Try egging her on. Someone who wants to leave work at a decent time? Prod him at 5pm with his incomplete to-do list. A spouse who's trying to stay off email at night? Dangle her BlackBerry in front of her at bedtime.
There are two conditions necessary to make this an effective strategy and keep it good-natured: The commitment the person wants to make needs to be self-motivated and the person doing the ribbing needs to be a trusted friend who doesn't abuse positional power.
What happens when the prodding is over? It turns out that the motivating impact of that dinner has lasted long after dinner was done. Usually, offsite meetings are particularly dangerous for me as far as sugar consumption is concerned. But this time I didn't eat any sugar during the meeting and I haven't eaten any since. It's been a month since I stopped eating sugar — a month that included a week-long vacation with my wife Eleanor in France — a month filled with opportunities to eat delicious-looking sugary treats.
But each time I'm tempted, I pause, remembering that dinner with Tom, and I think "if I didn't eat dessert then — with all that pressure and temptation and lots of good reasons to eat dessert — why would I eat it now?"
A picture inflates the perceived truth of true and false claims
Trusting research over their guts, scientists in New Zealand and Canada examined the phenomenon that Stephen Colbert, comedian and news satirist, calls “truthiness” — the feeling that something is true.
In four different experiments they discovered that people believe claims are true, regardless of whether they actually are true, when a decorative photograph appears alongside the claim.
“We wanted to examine how the kinds of photos people see every day — the ones that decorate newspaper or TV headlines, for example — might produce “truthiness,” said lead investigator Eryn J. Newman of Victoria University of Wellington, New Zealand. “We were really surprised by what we found.”
In a series of four experiments in both New Zealand and Canada, Newman and colleagues showed people a series of claims such as, “The liquid metal inside a thermometer is magnesium” and asked them to agree or disagree that each claim was true. In some cases, the claim appeared with a decorative photograph that didn’t reveal if the claim was actually true — such as a thermometer. Other claims appeared alone.
When a decorative photograph appeared with the claim, people were more likely to agree that the claim was true, regardless of whether it was actually true.
Across all the experiments, the findings fit with the idea that photos might help people conjure up images and ideas about the claim more easily than if the claim appeared by itself. “We know that when it’s easy for people to bring information to mind, it ‘feels’ right,” said Newman.
The research has important implications for situations in which people encounter decorative photos, such as in the media or in education. “Decorative photos grab people’s attention,” Newman said. “Our research suggests that these photos might have unintended consequences, leading people to accept information because of their feelings rather than the facts.”
We added a photo to make you believe this post — Ed.
Every three years, the world’s greatest illusionists gather to compete at the ‘Magic Olympics’. Here they face the toughest challenge of their lives . . . fooling an audience comprised of other magicians.
On a dank and unseasonably chilly weekday evening, few pleasure seekers are walking the streets of Blackpool. Britain’s most popular seaside resort feels dilapidated and unloved. But at the Ruskin Hotel there’s a crowd spilling out on to the pavement; inside they are standing four deep at the bar, squeezed into tight, sweaty clusters. All around, there are people laughing, clapping and shaking their heads in disbelief. It is close to midnight in the most magical place in the world.
One of those holding court is Garrett Thomas, a fast-talking New Yorker with a goatee and an earring. He shows me three cards, one of which is a red queen. He places face down in my hand, but when he turns it over again it has changed. “The problem is you have two eyes and I have three cards,” he says, as the three become six.
Pulling a Rubik’s Cube from his pocket, he starts solving it with one hand. He tosses it into the air and it appears to complete itself before landing back in his hand. He turns the sides of the puzzle again and then mutters about solving it “in the blink of an eye”. I must have blinked because there it is, complete again. I gape like the untutored layman that I am, but the crowd packed into the pub is also impressed. And that is striking, because they are the toughest audience in magic — they are all magicians.
Last week Blackpool hosted the World Championships of Magic or, as the rabbit-out-of-hats fraternity refers to it, the “Olympics of Magic”. The event, with delegates from 65 countries, is held every three years, and this is the first time it has been staged in Britain. Unlike the sporting Olympics, this is not an opportunity for the general public to see the greatest showmen on earth. As befits the secretive world of magic, the audience at the Winter Gardens (and afterwards in the pubs and clubs) is composed of professionals who already know every trick in the book.
Alex Stone, the magician and author, says the championships are “like the Roman Colosseum for magicians”, and those who triumph at them are revered as royalty by this mysterious subculture, even if their names are largely unfamiliar to the general public.
Stone, who performed at the 2006 games in Stockholm, has caused uproar with his new book, Fooling Houdini, about his journey deep into the magic kingdom. The book exposes some of the methodology behind popular routines and is published in Britain with perfect conjuror’s timing, just as his peers are gathering by the seaside.
Ricky Jay, a well-known magician, deplored the “gratuitous exposures” in the book. Stone has been excommunicated from the Society of American Magicians and has been shunned by some illusionists. “There is a small but vocal minority of magicians who get real mad when you expose secrets,” Stone says. “I have yet to wake up to find a severed rabbit head at the foot of my mattress,” The book is not a how-to guide, but it delves into the psychology and cognitive science behind magic. For example, he explores how “inattentional blindness” allows a magician to produce an eight of clubs and a nine of spades from a deck of cards without audience members noticing that the cards he had shown them earlier were the nine of clubs and the eight of spades. Aspiring pickpockets will enjoy his explanation of how to misdirect someone’s attention while removing their watch.
“I think that extreme secrecy policies, wherein no discussion of methodology behind magic tricks is allowed, is harmful to magic in the long run,” Stone says during a telephone call from New York. “It prevents the audience from appreciating the often jaw-dropping skill that goes into magic and doesn’t allow the spectators to distinguish between a trick that is truly original and skilful and one that is a 100-year-old easy trick.”
Some magicians, he adds, “take themselves a little bit too seriously in mythologising themselves as members of a secret cult”.
The championships, run by the Fédération Internationale des Sociétés Magiques, are split into two categories: stage magic and close-up magic. Some of the stage illusions are elaborate and sophisticated, but in an age of computer-generated special effects all the dim lighting and dry ice seems less impressive than the sleights of hand and ingenious methods for misdirecting the audience’s attention displayed by the close-up magicians.
This was the category that Stone entered, and he starts the book with a squirm-inducing description of his routine. Obie O’Brien, who is running the close-up competition in Blackpool this week, chuckles at the memory of it. “Everybody in the audience was laughing when he went below the table to produce the cards,” recalls O’Brien, who was the foreman of the jury that day. “Once, maybe. But twice?” He shakes his head.
“I haven’t read the book yet,” he says gravely. “There shouldn’t be any exposure in magic. Exposure is a bad thing for magic. A guy may have paid $15,000 for an illusion and someone exposes the illusion?” He grimaces.
Danny Hunt, who is a judge in the stage magic competition, agrees. “As a magician the last thing you want is people telling secrets to the lay public,” he says. “You protect your secrets. Some are hundreds of years old. I think it’s a shame. But the good thing is that magic evolves.”
As amazing as many of the performances are, even an ignorant punter like me is not taken in by everything in Blackpool. In one of the gala performances, a flame-haired woman sitting on a sofa is apparently cut in half behind a sheet and her top half moves to the other end of the seat while her legs stay put. Unfortunately the use of a doppelgänger is revealed when a second redhead pops up inadvertently from behind the sofa and then hastily ducks out of sight.
There are mutterings in the audience, but they are nothing compared with the pantomime booing that greets German magician Topas when he appears at a show dressed as The Masked Magician, the illusionist who exposed trade secrets in his 1990s TV shows. Topas performs a seemingly old trick of throwing a sheet over a woman on a chair and making her disappear, but having already turned the chair round to reveal the cavity behind the chair in which his glamorous assistant can hide, he somehow pulls off the trick with the hole facing the audience.
It’s combinations of entertainment and technique that the judges are looking for, O’Brien says. “If you can fool the judges you have a good chance of a gold, silver or bronze medal.” A fooled judge is one who cannot spot the method used to carry off the trick. “I might work out my method but it might not be his method,” O’Brien says. “David Copperfield made the Statue of Liberty disappear. I might have a method to do that, but it is not the same as his. When the audience clap and stand you know you have a hell of a performance.”
One who gets a standing ovation is Francisco Sanchez, an 18-year-old Spanish card magician whose hands move with such elegance and deftness to make cards vanish from one side of the table and re-appear on the other that even hardened veterans are nodding their heads approvingly. “Three years’ work for a ten-minute show,” he says backstage afterwards.
Marc Oberon won the parlour magic prize in the close-up category at the 2009 tournament in Beijing with a routine that included turning an edible apple into one covered in gold leaf. He has since performed on Penn and Teller’s TV show and is in demand around the world. He tells me that the apple trick took three years from the original vision through design, choreography and practice to performance. “People have no idea the amount of time it takes to be able to do a good piece of magic that lasts a few seconds,” he says.
This year, the overall Grand Prix winners were Yu Ho Jin (stage) and Yann Frisch (close-up). Hunt says they deserve to be recognised alongside the other Olympians seeking glory this summer. “It’s the Olympics of Magic! When you see a guy get a standing ovation, as much work has gone into that as an athlete has put into doing what he does. It is years of dedication and having no other life. It’s physically and mentally demanding and there’s pressure when you perform in front of 3,000 people who think they know what you are going to do.”
This dedication to innovation is not without its monetary rewards. Many of the performers sell their tricks at stands in the exhibition hall. For £3,000 I could learn how to make a woman levitate, or buy a “gimmicked” strait jacket for £200. Some of the technology on display would make you think again about the mentalists who miraculously know the number you have written on a notepad on the other side of the stage.
A charming Irish magician, Pat Fallon, asks: “What is the most beautiful magic you can imagine?” While I’m fumbling for an answer he says: “I’ll show you.” He takes what appear to be a handful of paper scraps cut from magazines, folds the pile in half and before my eyes they become a wad of £20 notes.
Simple, but effective. Like the tricks of Thomas in the pub. “Magic is a religion for me,” he says. “There is never a time when I am not a magician. I’m not an actor, I’m a magician who happens to be doing an act. “The appeal to the public is that it shows us the world is how you want to perceive it. Magicians just like to remind people of that.” He hands me his business card and signs it with his first name, Garrett. But when I turn it the other way up the signature is his surname, Thomas. He raises an eyebrow. I turn to show the photographer with me and when I turn back Garrett Thomas has vanished.
Unconscious Decision Making
We humans think we make all our decisions to act consciously and willfully. We all feel we are wonderfully unified, coherent mental machines and that our underlying brain structure must reflect this overpowering sense. It doesn’t. No command center keeps all other brain systems hopping to the instructions of a five-star general. The brain has millions of local processors making important decisions. There is no one boss in the brain. You are certainly not the boss of your brain. Have you ever succeeded in telling your brain to shut up already and go to sleep?
Even though we know that the organization of the brain is made up of a gazillion decision centers, that neural activities going on at one level of organization are inexplicable at another level, and that there seems to be no boss, our conviction that we have a “self” making all the decisions is not dampened. It is a powerful illusion that is almost impossible to shake. In fact, there is little or no reason to shake it, for it has served us well as a species. There is, however, a reason to try to understand how it all comes about. If we understand why we feel in charge, we will understand why and how we make errors of thought and perception.
When I was a kid, I spent a lot of time in the desert of Southern California—out in the desert scrub and dry bunchgrass, surrounded by purple mountains, creosote bush, coyotes, and rattlesnakes. The reason I am still here today is because I have nonconscious processes that were honed by evolution.
I jumped out of the way of many a rattlesnake, but that is not all. I also jumped out of the way of grass that rustled in the wind. I jumped, that is, before I was consciously aware that it was the wind that rustled the grass, rather than a rattler. If I had had only my conscious processes to depend on, I probably would have jumped less but been bitten on more than one occasion.
Conscious processes are slow, as are conscious decisions. As a person is walking, sensory inputs from the visual and auditory systems go to the thalamus, a type of brain relay station. Then the impulses are sent to the processing areas in the cortex, next relayed to the frontal cortex. There they are integrated with other higher mental processes, and perhaps the information makes it into the stream of consciousness, which is when a person becomes consciously aware of the information (there is a snake!). In the case of the rattler, memory then kicks in the information that rattlesnakes are poisonous and what the consequences of a rattlesnake bite are. I make a decision (I don’t want it to bite me), quickly calculate how close I am to the snake, and answer a question: Do I need to change my current direction and speed? Yes, I should move back. A command is sent to put the muscles into gear, and they then do it.
All this processing takes a long time, up to a second or two. Luckily, all that doesn’t have to occur. The brain also takes a nonconscious shortcut through the amygdala, which sits under the thalamus and keeps track of everything. If a pattern associated with danger in the past is recognized by the amygdala, it sends an impulse along a direct connection to the brain stem, which activates the fight-or-flight response and rings the alarm. I automatically jump back before I realize why.
If you were to have asked me why I had jumped, I would have replied that I thought I’d seen a snake. The reality, however, is that I jumped way before I was conscious of the snake. My explanation is from post hoc information I have in my conscious system. When I answered that question, I was, in a sense, confabulating—giving a fictitious account of a past event, believing it to be true.
I confabulated because our human brains are driven to infer causality. They are driven to make sense out of scattered facts. The facts that my conscious brain had to work with were that I saw a snake, and I jumped. It did not register that I jumped before I was consciously aware of it.
In truth, when we set out to explain our actions, they are all post hoc explanations using post hoc observations with no access to nonconscious processing. Not only that, our left brain fudges things a bit to fit into a makes-sense story. Explanations are all based on what makes it into our consciousness, but actions and the feelings happen before we are consciously aware of them—and most of them are the results of nonconscious processes, which will never make it into the explanations. The reality is, listening to people’s explanations of their actions is interesting—and in the case of politicians, entertaining—but often a waste of time.
With so many systems going on subconsciously, why do we feel unified? I believe the answer to this question resides in the left hemisphere and one of its modules that we happened upon during our years of research, particularly while studying split-brain patients.
Some people with intractable epilepsy undergo split-brain surgery. In this procedure, the large tract of nerves that connects the two hemispheres, the corpus callosum, is severed to prevent the spread of electrical impulses. Afterward, the patients appear completely normal and seem entirely unaware of any changes in their mental process. But we discovered that after the surgery, any visual, tactile, proprioceptive, auditory, or olfactory information that was presented to one hemisphere was processed in that half of the brain alone, without any awareness on the part of the other half. Because tracts carrying sensory information cross over the midline inside the brain, the right hemisphere processes data from the left half of the world, and the left hemisphere handles the right.
The left hemisphere specializes in speech, language, and intelligent behavior, and a split-brain patient’s left hemisphere and language center has no access to sensory information if it is fed only to the right brain. In the case of vision, the optic nerves leading from each eye meet inside the brain at what is called the optic chiasm. Here, each nerve splits in half; the medial half (the inside track) of each crosses the optic chiasm into the opposite side of the brain, and the lateral half (that on the outside) stays on the same side. The parts of both eyes that attend to the right visual field send information to the left hemisphere and information from the left visual field goes to and is processed by the right hemisphere.
More than a few years into our experiments, we were working with a group of split-brain patients on the East Coast. We wondered what they would do if we sneaked information into their right hemisphere and told the left hand to do something.
We showed a split-brain patient two pictures: To his right visual field, a chicken claw, so the left hemisphere saw only the claw picture, and to the left visual field, a snow scene, so the right hemisphere saw only that. He was then asked to choose a picture from an array placed in full view in front of him, which both hemispheres could see. His left hand pointed to a shovel (which was the most appropriate answer for the snow scene) and his right hand pointed to a chicken (the most appropriate answer for the chicken claw).
We asked why he chose those items. His left-hemisphere speech center replied, “Oh, that’s simple. The chicken claw goes with the chicken,” easily explaining what it knew. It had seen the chicken claw. Then, looking down at his left hand pointing to the shovel, without missing a beat, he said, “And you need a shovel to clean out the chicken shed.” Immediately, the left brain, observing the left hand’s response without the knowledge of why it had picked that item, put it into a context that would explain it. It knew nothing about the snow scene, but it had to explain the shovel in front of his left hand. Well, chickens do make a mess, and you have to clean it up. Ah, that’s it! Makes sense.
What was interesting was that the left hemisphere did not say, “I don’t know,” which was the correct answer. It made up a post hoc answer that fit the situation. It confabulated, taking cues from what it knew and putting them together in an answer that made sense.
We called this left-hemisphere process the interpreter. It is the left hemisphere that engages in the human tendency to find order in chaos, that tries to fit everything into a story and put it into a context. It seems driven to hypothesize about the structure of the world even in the face of evidence that no pattern exists.
Our interpreter does this not only with objects but with events as well. In one experiment, we showed a series of about 40 pictures that told a story of a man waking up in the morning, putting on his clothes, eating breakfast, and going to work. Then, after a bit of time, we tested each viewer. He was presented with another series of pictures. Some of them were the originals, interspersed with some that were new but could easily fit the same story. We also included some distracter pictures that had nothing to do with the story, such as the same man out playing golf or at the zoo. What you and I would do is incorporate both the actual pictures and the new, related pictures and reject the distracter pictures. In split-brain patients, this is also how the left hemisphere responds. It gets the gist of the story and accepts anything that fits in.
The right hemisphere, however, does not do this. It is totally veridical and identifies only the original pictures. The right brain is very literal and doesn’t include anything that wasn’t there originally. And this is why your three-year-old, embarrassingly, will contradict you as you embellish a story. The child’s left-hemisphere interpreter, which is satisfied with the gist, is not yet fully in gear.
The interpreter is an extremely busy system. We found that it is even active in the emotional sphere, trying to explain mood shifts. In one of our patients, we triggered a negative mood in her right hemisphere by showing a scary fire safety video about a guy getting pushed into a fire. When asked what she had seen, she said, “I don’t really know what I saw. I think just a white flash.” But when asked if it made her feel any emotion, she said, “I don’t really know why, but I’m kind of scared. I feel jumpy, I think maybe I don’t like this room, or maybe it’s you.” She then turned to one of the research assistants and said, “I know I like Dr. Gazzaniga, but right now I’m scared of him for some reason.” She felt the emotional response to the video but had no idea what caused it.
The left-brain interpreter had to explain why she felt scared. The information it received from the environment was that I was in the room asking questions and that nothing else was wrong. The first makes-sense explanation it arrived at was that I was scaring her. We tried again with another emotion and another patient. We flashed a picture of a pinup girl to her right hemisphere, and she snickered. She said that she saw nothing, but when we asked her why she was laughing, she told us we had a funny machine. This is what our brain does all day long. It takes input from other areas of our brain and from the environment and synthesizes it into a story. Facts are great but not necessary. The left brain ad-libs the rest.
The view in neuroscience today is that consciousness does not constitute a single, generalized process. It involves a multitude of widely distributed specialized systems and disunited processes, the products of which are integrated by the interpreter module. Consciousness is an emergent property. From moment to moment, different modules or systems compete for attention, and the winner emerges as the neural system underlying that moment’s conscious experience. Our conscious experience is assembled on the fly as our brains respond to constantly changing inputs, calculate potential courses of action, and execute responses like a streetwise kid.
But we do not experience a thousand chattering voices. Consciousness flows easily and naturally from one moment to the next with a single, unified, coherent narrative. The action of an interpretive system becomes observable only when the system can be tricked into making obvious errors by forcing it to work with an impoverished set of inputs, most obviously in the split-brain patients.
Our subjective awareness arises out of our dominant left hemisphere’s unrelenting quest to explain the bits and pieces that pop into consciousness.
What does it mean that we build our theories about ourselves after the fact? How much of the time are we confabulating, giving a fictitious account of a past event, believing it to be true? When thinking about these big questions, one must always remember that all these modules are mental systems selected for over the course of evolution. The individuals who possessed them made choices that resulted in survival and reproduction. They became our ancestors.
Bacteria and Mind Control
The thought of parasites preying on your body or brain very likely sends shivers down your spine. Perhaps you imagine insectoid creatures bursting from stomachs or a malevolent force controlling your actions. These visions are not just the night terrors of science-fiction writers—the natural world is replete with such examples.
Take Toxoplasma gondii, the single-celled parasite. When mice are infected by it, they suffer the grave misfortune of becoming attracted to cats. Once a cat inevitably consumes the doomed creature, the parasite can complete its life cycle inside its new host. Or consider Cordyceps, the parasitic fungus that can grow into the brain of an insect. The fungus can force an ant to climb a plant before consuming its brain entirely. After the insect dies, a mushroom sprouts from its head, allowing the fungus to disperse its spores as widely as possible.
Gut bacteria may influence thoughts and behaviour
The human gut contains a diverse community of bacteria that colonize the large intestine in the days following birth and vastly outnumber our own cells. These so-called gut microbiota constitute a virtual organ within an organ, and influence many bodily functions. Among other things, they aid in the uptake and metabolism of nutrients, modulate the inflammatory response to infection, and protect the gut from other, harmful micro-organisms. A study by researchers at McMaster University in Hamilton, Ontario now suggests that gut bacteria may also influence behaviour and cognitive processes such as memory by exerting an effect on gene activity during brain development.
Jane Foster and her colleagues compared the performance of germ-free mice, which lack gut bacteria, with normal animals on the elevated plus maze, which is used to test anxiety-like behaviours. This consists of a plus-shaped apparatus with two open and two closed arms, with an open roof and raised up off the floor. Ordinarily, mice will avoid open spaces to minimize the risk of being seen by predators, and spend far more time in the closed than in the open arms when placed in the elevated plus maze.
This is exactly what the researchers found when they placed the normal mice into the apparatus. The animals spent far more time in the closed arms of the maze and rarely ventured into the open ones. The germ-free mice, on the other hand, behaved quite differently – they entered the open arms more often, and continued to explore them throughout the duration of the test, spending significantly more time there than in the closed arms.
The researchers then examined the animals' brains, and found that these differences in behaviour were accompanied by alterations in the expression levels of several genes in the germ-free mice. Brain-derived neurotrophic factor (BDNF) was significantly up-regulated, and the 5HT1A serotonin receptor sub-type down-regulated, in the dentate gyrus of the hippocampus. The gene encoding the NR2B subunit of the NMDA receptor was also down-regulated in the amygdala.
All three genes have previously been implicated in emotion and anxiety-like behaviours. BDNF is a growth factor that is essential for proper brain development, and a recent study showed that deleting the BDNF receptor TrkB alters the way in which newborn neurons integrate into hippocampal circuitry and increases anxiety-like behaviours in mice. Serotonin receptors, which are distributed widely throughout the brain, are well known to be involved in mood, and compounds that activate the 5HT1A subtype also produce anxiety-like behaviours.
The finding that the NR2B subunit of the NMDA receptor down-regulated in the amygdala is particularly interesting. NMDA receptors are composed of multiple subunits, but those made up of only NR2B subunits are known to be critical for the development and function of the amygdala, which has a well established role in fear and other emotions, and in learning and memory. Drugs that block these receptors have been shown to block the formation of fearful memories and to reduce the anxiety associated with alcohol withdrawal in rodents.
The idea of cross-talk between the brain and the gut is not new. For example, irritable bowel syndrome (IBS) is associated with psychiatric illness, and also involves changes in the composition of the bacterial population in the gut. But this is the first study to show that the absence of gut bacteria is associated with altered behaviour. Bacteria colonize the gut in the days following birth, during a sensitive period of brain development, and apparently influence behaviour by inducing changes in the expression of certain genes.
"One of the things our data point to is that gut microbiota are very important in the first four weeks of a mouse's life, and I think the processes are translatable [to humans]," says Foster. "I'm getting a lot of attention from paediatricians who want to collaborate to test some of these connections in kids with early onset IBS. Their microbiota profile is wrong, and our results suggest that we have a window up until puberty, during which we can potentially fix this."
Exactly how gut bacteria influence gene expression in the brain is unclear, but one possible line of communication is the autonomic branch of the peripheral nervous system, which controls functions such as digestion, breathing and heart rate. A better understanding of cross-talk within this so-called 'brain-gut axis' could lead to new approaches for dealing with the psychiatric symptoms that sometimes accompany gastrointestinal disorders such as IBS, and may also show that gut bacteria affect function of the mature brain.
More evidence that gut bacteria can influence neuronal signalling has emerged in the past few months. In June, Cryan's group reported that germ-free mice have significantly elevated levels of serotonin in the hippocampus compared to animals reared normally. This was also associated with reduced anxiety, but was reversed when the gut bacteria were restored. And at the General Meeting of the American Society for Microbiology, also in June, researchers from the Baylor College of Medicine in Texas described experiments showing that one bacterial species found in the gut, Bifidobacteria dentium, synthesizes large amounts of the inhibitory neurotransmitter GABA.
SSRIs, the class of antidepressants that includes Prozac, prevent neurons from mopping up serotonin once it has been released, thus maintaining high levels of the transmitter at synapses. And benzodiazepines, a class of anti-anxiety drugs that includes diazepam, mimic the effects of GABA by binding to a distinct site on the GABA-A receptor.
All of this suggests that probiotic formulations that are enriched in specific strains of gut bacteria could one day be used to treat psychiatric disorders. "There's definitely potential on numerous levels, but I do think studies need to be done in a proper, robust manner in representative samples," says Cryan. "Even as an adjunctive therapy for anti-depressants, this could be really important, but first we'll have to figure out which species are going to be beneficial, and how they're doing it."
Microbiota researcher Rob Knight of the University of Colorado, Boulder, agrees that probiotics could potentially be useful. "I find the mouse data convincing but there's not yet direct evidence in humans," he says. "What's needed is longitudinal studies of at-risk individuals to determine whether there are systematic changes in the microbiota that correlate with psychiatric conditions, and double-blind randomized clinical trials. Research-supported, FDA-approved and effective products are likely at minimum 5-10 years off, but given the lax regulation of probiotics, I'm sure that products could be on the shelf tomorrow."
How much do evolutionary stories reveal about the mind?
When Rudyard Kipling first published his fables about how the camel got his hump and the rhinoceros his wrinkly folds of skin, he explained that they would lull his daughter to sleep only if they were always told “just so,” with no new variations. The “Just So Stories” have become a byword for seductively simple myths, though one of Kipling’s turns out to be half true.
The Leopard and the Ethiopian were hungry, the story goes, because the Giraffe and the Zebra had moved to a dense forest and were impossible to catch. So the Ethiopian changed his skin to a blackish brown, which allowed him to creep up on them. He also used his inky fingers to make spots on the Leopard’s coat, so that his friend could hunt stealthily, too—which now seems to be about right, minus the Ethiopian. A recent article in a biology journal approvingly quotes Kipling on the places “full of trees and bushes and stripy, speckly, patchy-blatchy shadows” where cats have patterned coats. The study matched the coloring of thirty-five species to their habitats and habits, which, together with other clues, is hard evidence that cats’ flank patterns mostly evolved through natural selection as camouflage. There are some puzzles—cheetahs have spots, though they prefer open hunting grounds—but that’s to be expected, since the footsteps of evolution can be as hard to retrace as those of a speckly leopard in the forest.
The idea of natural selection itself began as a just-so story, more than two millennia before Darwin. Darwin belatedly learned this when, a few years after the publication of “On the Origin of Species,” in 1859, a town clerk in Surrey sent him some lines of Aristotle, reporting an apparently crazy tale from Empedocles. According to Empedocles, most of the parts of animals had originally been thrown together at random: “Here sprang up many faces without necks, arms wandered without shoulders . . . and eyes strayed alone, in need of foreheads.” Yet whenever a set of parts turned out to be useful the creatures that were lucky enough to have them “survived, being organised spontaneously in a fitting way, whereas those which grew otherwise perished.” In later editions of “Origin,” Darwin added a footnote about the tale, remarking, “We here see the principle of natural selection shadowed forth.”
Today’s biologists tend to be cautious about labelling any trait an evolutionary adaptation—that is, one that spread through a population because it provided a reproductive advantage. It’s a concept that is easily abused, and often “invoked to resolve problems that do not exist,” the late George Williams, an influential evolutionary biologist, warned. When it comes to studying ourselves, though, such admonitions are hard to heed. So strong is the temptation to explain our minds by evolutionary “Just So Stories,” Stephen Jay Gould argued in 1978, that a lack of hard evidence for them is frequently overlooked (his may well have been the first pejorative use of Kipling’s term). Gould, a Harvard paleontologist and a popular-science writer, who died in 2002, was taking aim mainly at the rising ambitions of sociobiology. He had no argument with its work on bees, wasps, and ants, he said. But linking the behavior of humans to their evolutionary past was fraught with perils, not least because of the difficulty of disentangling culture and biology. Gould saw no prospect that sociobiology would achieve its grandest aim: a “reduction” of the human sciences to Darwinian theory.
This was no straw man. The previous year, Robert Trivers, a founder of the discipline, told Time that, “sooner or later, political science, law, economics, psychology, psychiatry, and anthropology will all be branches of sociobiology.” The sociobiologists believed that the concept of natural selection was a key that would unlock all the sciences of man, by revealing the evolutionary origins of behavior.
The dream has not died. “Homo Mysterious: Evolutionary Puzzles of Human Nature” (Oxford), a new book by David Barash, a professor of psychology and biology at the University of Washington, Seattle, inadvertently illustrates how just-so stories about humanity remain strikingly oversold. As Barash works through the common evolutionary speculations about our sexual behavior, mental abilities, religion, and art, he shows how far we still are from knowing how to talk about the evolution of the mind.
Evolutionary psychologists are not as imperialist in their ambitions as their sociobiologist forebears of the nineteen-seventies, but they tend to be no less hubristic in their claims. An evolutionary perspective “has profound implications for applied disciplines such as law, medicine, business and education,” Douglas Kenrick, of Arizona State University, writes in his recent book “Sex, Murder and the Meaning of Life.” The latest edition of a leading textbook, “Evolutionary Psychology: The New Science of the Mind,” by David Buss, of the University of Texas at Austin, announces that an evolutionary approach can integrate the disparate branches of psychology, and is “beginning to transform” the study of the arts, religion, economics, and sociology.
There are plenty of factions in this newish science of the mind. The most influential sprang up in the nineteen-eighties at the University of California, Santa Barbara, was popularized in books by Steven Pinker and others in the nineteen-nineties, and has largely won over science reporters. It focusses on the challenges our ancestors faced when they were hunter-gatherers on the African savanna in the Pleistocene era (between approximately 1.7 million and ten thousand years ago), and it has a snappy slogan: “Our modern skulls house a Stone Age mind.” This mind is regarded as a set of software modules that were written by natural selection and now constitute a universal human nature. We are, in short, all running apps from Fred Flintstone’s not-very-smartphone. Work out what those apps are—so the theory goes—and you will see what the mind was designed to do.
Designed? The coup of natural selection was to explain how nature appears to be designed when in fact it is not, so that a leopard does not need an Ethiopian (or a God) to get his spots. Mostly, it doesn’t matter when biologists speak figuratively of design in nature, or the “purpose” for which something evolved. This is useful shorthand, as long as it’s understood that no forward planning or blueprints are involved. But that caveat is often forgotten when we’re talking about the “design” of our minds or our behavior.
Barash writes that “the brain’s purpose is to direct our internal organs and our external behavior in a way that maximizes our evolutionary success.” That sounds straightforward enough. The trouble is that evolution has to make compromises, since it must work with the materials at hand, often while trying to solve several challenges at once. Any trait or organ may therefore be something of a botch, from the perspective of natural selection, even if the creature as a whole was the best job that could be done in the circumstances. If nature always stuck to simple plans, it would be easier to track the paths of evolution, but nature does not have that luxury.
In theory, if you did manage to trace how the brain was shaped by natural selection, you might shed some light on how the mind works. But you don’t have to know about the evolution of an organ in order to understand it. The heart is just as much a product of evolution as the brain, yet William Harvey figured out how it works two centuries before natural selection was discovered. Neither of the most solid post-Darwinian accounts of mental mechanisms—Noam Chomsky’s work on language and David Marr’s on vision—drew on evolutionary stories.
Going by what Barash has to say about religion, Darwinian thinking isn’t likely to transform our understanding of it anytime soon. We do not even know why we are relatively hairless or why we walk on two legs, so finding the origin of religious belief is a tall order. Undaunted, Barash explores various ways in which religion might have been advantageous for early man, or a consequence of some other advantageous trait. It might, for example, have been a by-product of our curiosity about the causes of natural phenomena, or of our desire for social connection. Or maybe religious beliefs and practices helped people coördinate with others and become less selfish, or less lonely and more fulfilled. Although he does not endorse any of these ideas—how could he, given that there’s no possible way to know after all this time?—Barash concludes that it is “highly likely” that religion owes its origin to natural selection. (He does not explain why; this conclusion seems to be an article of faith.) He also thinks that natural selection is probably responsible for religion’s “perseverance,” which suggests that his knowledge of the subject is a century out of date. Historians and social scientists have found quite a lot to say about why faith thrives in some places and periods but not in others—why, for the first time in human history, there are now hundreds of millions of unbelievers, and why religion is little more than vestigial in countries like Denmark and Sweden. It is hard to see what could be added to these accounts by evolutionary stories, even if they were known to be true.
One problem with trying to reconstruct the growth of the mind from Pleistocene materials is that you would need to know what varieties of mental equipment Stone Age minds already possessed. Even if a plausible-sounding story can be told about how some piece of behavior would have helped early hunter-gatherers survive and reproduce, it may well have become established earlier and for different reasons. Darwin underlined the temptations here when he wrote about the unfused bone in the heads of newborn humans and other mammals, which makes their skulls conveniently elastic. One might conclude that this trait evolved to ease their passage through a narrow birth canal, but it seems to result from the way vertebrate skeletons develop. Birds and reptiles hatch from eggs, yet they, too, have these sutures.
Textbooks in evolutionary psychology have proposed the hypothesis that the fear of spiders is an adaptation shaped by the mortal threat posed by their bites. In other words, we are descended from hominid wusses who thrived because they kept away from spiders. The idea is prompted by evidence that people may be innately primed to notice and be wary of spiders (as we seem to be of snakes). Yet there is no reason to think that spiders in the Stone Age were a greater threat to man than they are now—which is to say, hardly any threat at all. Scientists who study phobias and dislikes have come up with several features of spiders that may be more relevant than their bites, including their unpredictable, darting movements. Natural selection would have played some role in the development of any such general aversions, which may have their origins in distant species, somewhere far back down the line that leads to us. But that’s another story, one that evolutionary psychologists have less interest in telling, because they like tales about early man.
It would be good to know why some people love spiders—there is, inevitably, a Facebook group—while others have a paralyzing phobia, and most of us fall somewhere in between. But, with one large exception, evolutionary psychology has little to say about the differences among people; it’s concerned mainly with human universals, not human variations. Perhaps this is why most psychologists, who tend to relish unusual cases, aren’t yet rushing to have their specialties “integrated” by an evolutionary approach.
The exception is the differences between men and women: evolutionary psychologists are greatly concerned with sex, and with women’s bodies. Barash speculates at length on why women don’t have something similar to chimps’ bright-pink sexual swellings to advertise their most fertile time of the month. There are several ways, he thinks, in which female hominids could have boosted their reproductive success by concealing their time of ovulation. Perhaps it was a game of “keep him guessing to keep him close”: if a male could not tell when his mate was fertile, he would have to stick around for more of the month to insure that any offspring were his and thereby, perhaps, provide better parental care. Among the other possibilities considered—some rejected, many not—are that concealed ovulation gave females more freedom in their choice of mates, perhaps by reducing the frenzy of male competition.
This is all quite entertaining—almost as entertaining as Barash’s romp through eleven evolutionary theories about the “biological pay-off” of the human female orgasm, which unfittingly comes to no gratifying conclusion. But “concealed” ovulation seems to be an example of what George Williams called a nonexistent problem. Barash dismisses, on flimsy grounds, the idea that it is the florid advertisements of chimps that need explaining, and not our lack of them. Yet chimps are the exceptional ones in our family of the great apes, and there’s reason to think that the most recent common ancestor of chimps and humans displayed, at most, only slight swellings around the time of ovulation.
The simplest theory is that these swellings dwindled to nothing after our ancestors began to walk upright, because the costs of advertising ovulation in this way came to outweigh any benefits. Swellings could have made it harder to walk for several days each month, could have required more energy and a greater intake of water, and would be of less use as a signal when you were no longer clambering up trees with your bottom in males’ faces.
A larger difficulty vexes evolutionary psychologists’ sexual speculations in general. Especially on this topic, work in psychology can unwittingly accommodate itself to the folk wisdom and stereotypes of the day.
Darwin built the prejudices of Victorian gentlemen into his account of the evolution of the sexes. He wrote that man reaches “a higher eminence, in whatever he takes up, than woman can attain—whether requiring deep thought, reason, or imagination, or merely the use of the senses and hands,” and he looked to the struggle for mates and the struggle for survival to explain why. He also noted that some of the faculties that are strongest in women “are characteristic of the lower races, and therefore of a past and lower state of civilization.”
These days, what evolutionary psychologists have mainly noted about the sexes is that they look for different things in a mate. The evolutionary psychologists have spent decades administering questionnaires to college students in an effort to confirm their ideas about what sort of partner was desirable in bed before there were beds. “Men value youth and physical attractiveness very highly, while women value wealth and status (though they don’t mind physical attractiveness too),” Dario Maestripieri, a behavioral biologist at the University of Chicago, bluntly summarizes in his new book, “Games Primates Play.” It is also said that men are much more interested in casual sex; that sexual jealousy works differently for men and women (men are more concerned with sexual fidelity, and women with emotional fidelity); and that all these differences, and more, can be explained as the traces of behavior that would have enabled our distant ancestors to leave more descendants. Many such explanations arise from the idea that males have more to gain than females do by seeking a large number of mates—a notion that is ultimately based on experiments with fruit flies in 1948.
It’s not inconceivable that in a hundred and fifty years today’s folk wisdom about the sexes will sound as ridiculous as Darwin’s. It will surely look a bit quaint. Sexual mores can shift quickly: American women reared during the nineteen-sixties were nearly ten times as likely as those reared earlier to have had sex with five or more partners before the age of twenty, according to a 1994 study. As for women’s supposedly inborn preference for wealth and status in a mate, one wonders how much can be inferred from behavior in a world that seems always to have been run by and for men. Although it is, in some places, now easier than ever for a woman to acquire power without marrying it, economic inequality has not disappeared. Even in the most egalitarian countries, in Scandinavia, the average earnings of male full-time workers are more than ten per cent higher than those of their female counterparts; and more than ninety per cent of the top earners in America’s largest companies are men.
A study of attitudes toward casual sex, based on surveys in forty-eight countries, by David Schmitt, a psychologist at Bradley University, in Peoria, Illinois, found that the differences between the sexes varied widely, and shrank in places where women had more freedom. The sexes never quite converged, though: Schmitt found persistent differences, and thinks those are best explained as evolutionary adaptations. But he admits that his findings have limited value, because they rely entirely on self-reports, which are notoriously unreliable about sex, and did not examine a true cross-section of humanity. All of his respondents were from modern nation-states—there were no hunter-gatherers, or people from other small-scale societies—and most were college students.
Indeed, the guilty secret of psychology and of behavioral economics is that their experiments and surveys are conducted almost entirely with people from Western, industrialized countries, mostly of college age, and very often students of psychology at colleges in the United States. This is particularly unfortunate for evolutionary psychologists, who are trying to find universal features of our species. American college kids, whatever their charms, are a laughable proxy for Homo sapiens. The relatively few experiments conducted in non-Western cultures suggest that the minds of American students are highly unusual in many respects, including their spatial cognition, responses to optical illusions, styles of reasoning, coöperative behavior, ideas of fairness, and risk-taking strategies. Joseph Henrich and his colleagues at the University of British Columbia concluded recently that U.S. college kids are “one of the worst subpopulations one could study” when it comes to generalizing about human psychology. Their main appeal to evolutionary psychologists is that they’re readily available. Man’s closest relatives are all long extinct; breeding experiments on humans aren’t allowed (they would take far too long, anyway); and the mental life of our ancestors left few fossils.
Perhaps it shouldn’t matter whether evolutionary psychologists can prove that some trait got incorporated into human nature because it was useful on the African savanna. If they were really in the history business, they wouldn’t spend so much time playing Hot or Not with undergraduates. A review of the methods of evolutionary psychology, published last summer in a biology journal, underlined a point so simple that its implications are easily missed. To confirm any story about how the mind has been shaped, you need (among other things) to determine how people today actually think and behave, and to test rival accounts of how these traits function. Once you have done that, you will, in effect, have finished the job of explaining how the mind works. What life was really like in the Stone Age no longer matters. It doesn’t make any practical difference exactly how our traits became established. All that matters is that they are there.
Then why do enthusiasts for evolutionary psychology insist that politicians and social scientists should pay attention to the evolutionary roots of behavior? In theory, historical conjectures might point to useful patterns that hadn’t been noticed before, though convincing examples are hard to come by.
One much discussed study, from the early nineteen-eighties, by the Canadian psychologists Martin Daly and Margo Wilson, suggests that parents are more likely to abuse stepchildren than to abuse their own offspring. They reasoned that our distant ancestors would have left more descendants by focussing their care on their own children, with the result that people today would on the whole feel less love for stepchildren than for biological ones. Daly and Wilson found, by analyzing child-abuse data, that men are indeed much more likely to murder their stepchildren than to murder their natural children. After thirty years, this rare gem is still advertised as a triumph for evolutionary psychology.
“Hamlet” and “David Copperfield” notwithstanding, wicked stepmothers are more common in folklore and literature than wicked stepfathers, so perhaps it did come as news that the latter can be villains in real life. (This is one up for Rossini, who presciently switched the roles in his version of “Cinderella” and gave her a wicked stepfather instead.) But whether these findings are useful for detecting or preventing violent abuse is another question, even putting aside the issue of whether the evolutionary explanation is right. Most children don’t have stepfathers, most stepfathers don’t abuse anyone, and many more children suffer at the hands of their natural fathers. Studies that assess a large number of the risk factors for violent abuse or neglect—as a study at Columbia University did in 1998—consistently find that the presence of a stepfather isn’t a significant marker of risk. (The presence of a stepfather is a significant marker for the sexual abuse of girls. But Daly and Wilson’s theory makes no prediction about this, and it’s a well-known phenomenon.)
Evolutionary psychologists point to other studies that they claim have practical significance. Mating strategies are thought to help explain why young men are much more violent than old women, which has led researchers to chart the ages of killers around the world. (The theory is that young men in ancestral environments would have got the best reproductive results by taking dangerous risks to compete for mates and status.) A knowledge of these patterns may be useful one day. Still, when a youth is knifed outside a night club, no cop needs evening classes in evolutionary psychology to realize the folly of rounding up grannies. It has also been claimed, in an academic journal, that books of tips by pickup artists show how the insights of evolutionary psychology can pay off in real life, or at least in bars. Field research into this is no doubt ongoing.
Barash muses, at the end of his book, on the fact that our minds have a stubborn fondness for simple-sounding explanations that may be false. That’s true enough, and not only at bedtime. It complements a fondness for thinking that one has found the key to everything. Perhaps there’s an evolutionary explanation for such proclivities.
Brain Exercises Don't Work
Market researcher SharpBrains has predicted that the brain fitness industry will range anywhere from $2 billion to $8 billion in revenues by 2015.
That’s a wide swath, but the companies that sell brain-tuning software could conceivably hit at least the low end of their sales target by then.
The question that persists is whether any of these games and exercises actually enhance the way your brain works, whether it be memory, problem solving or the speed with which you execute a mental task. True, study participants often get better at doing an exercise that is supposedly related to a given facet of cognition. But the ability to master a game or ace a psych test often doesn’t translate into better cognition when specific measures of intelligence are assayed later.
One area of research that has shown some promise relates to a method of boosting the mental scratchpad of working memory— keeping in your head a telephone number long enough to dial, for instance. Some studies have demonstrated that a particular technique to energize working memory betters the reasoning and problem-solving abilities known as fluid intelligence.
Yet two new studies have now called into question the earlier research on working memory. A recent online publication in the Journal of Experimental Psychology led by a group at the Georgia Institute of Technology showed that 20 sessions on a working memory task did not did not result in a later acing of tests of cognitive ability. Similarly, a group at Case Western Reserve University tried the same “dual n-back test” and published a report in the journal Intellgence that found that better scores did not produce higher tallies for working memory and fluid intelligence. An n-back test requires keeping track of a number, letter or image “n” places back. A dual n-back demands the simultaneous remembering of both a visual and auditory cue perceived a certain number of places back.
What does this all mean, that the best means to boost smarts may not work? For the moment, it means a continuing academic debate because of all the excitement previously generated about prospects for upping intelligence, not just for self-help types and gamers but for students and those in need of cognitive rehabilitation. The study groups in this recent research were small—in the first study, 24 people who trained working memory and 49 others in two control groups—so the proverbial “more research” mantra will probably be invoked. But other work, including a meta-analysis of other studies, has also cast similar doubts.
So should you keep doing Web-based brain training? Only if you like it. Just don’t expect enormous leaps on your score for Raven’s Progressive Matrices or some other test of fluid intelligence. If you’re doing these tests as part of a personal self-improvement program, maybe consider the piano, Spanish lessons or even Grand Theft Auto 3 - The Ultimate Tribute to Liberty. Any of these pose less threat of the monotony that could ultimately undermine the persistence needed for mastery of any new pastime. Seems like a no brainer, in fact.
Toxiplasma and the Human Brain
Feeling sociable or reckless? You might have toxoplasmosis, an infection caused by the microscopic parasite Toxoplasma gondii, which the CDC estimates has infected about 22.5 percent of Americans older than 12 years old. Researchers tested participants for T. gondii infection and had them complete a personality questionnaire. They found that both men and women infected with T. gondii were more extroverted and less conscientious than the infection-free participants. These changes are thought to result from the parasite's influence on brain chemicals, the scientists write in the May/June issue of the European Journal of Personality.
“Toxoplasma manipulates the behavior of its animal host by increasing the concentration of dopamine and by changing levels of certain hormones,” says study author Jaroslav Flegr of Charles University in Prague, Czech Republic.
Although humans can carry the parasite, its life cycle must play out in cats and rodents. Infected mice and rats lose their fear of cats, increasing the chance they will be eaten, so that the parasite can then reproduce in a cat's body and spread through its feces [see “Protozoa Could Be Controlling Your Brain,” by Christof Koch, Consciousness Redux; Scientific American Mind, May/June 2011].
In humans, T. gondii's effects are more subtle; the infected population has a slightly higher rate of traffic accidents, studies have shown, and people with schizophrenia have higher rates of infection - but until recent years, the parasite was not thought to affect most people's daily lives.
In the new study, a pattern appeared in infected men: the longer they had been infected, the less conscientious they were. This correlation supports the researchers' hypothe-sis that the personality changes are a result of the parasite, rather than personality influencing the risk of infection. Past studies that used outdated personality surveys also found that toxoplasmosis-related personality changes increased with the length of infection.
T. gondii is most commonly contracted through exposure to undercooked contaminated meat (the rates of infection in France are much higher than in the U.S.), unwashed fruits or vegetables from contaminated soil, and tainted cat litter. The parasite is the reason pregnant women are advised not to clean litter boxes: T. gondii can do much more damage to the fetal brain than the personality tweak it inflicts on adults.
Botox Fights Depression
A common complaint about wrinkle-masking Botox is that recipients have difficulty displaying emotions on their faces. That side effect might be a good thing, however, for people with treatment-resistant depression.
In the first randomized, controlled study on the effect of botulinum toxin—known commercially as Botox—on depression, researchers investigated whether it might aid patients with major depressive disorder who had not responded to antidepressant medications. Participants in the treatment group were given a single dose (consisting of five injections) of botulinum toxin in the area of the face between and just above the eyebrows, whereas the control group was given placebo injections. Depressive symptoms in the treatment group decreased 47 percent after six weeks, an improvement that remained through the 16-week study period. The placebo group had a 9 percent reduction in symptoms. The findings appeared in May in the Journal of Psychiatric Research.
Study author M. Axel Wollmer, a psychiatrist at the University of Basel in Switzerland, believes the treatment “interrupts feedback from the facial musculature to the brain, which may be involved in the development and maintenance of negative emotions.” Past studies have shown that Botox impairs people's ability to identify others' feelings, and the new finding adds more evidence: the muscles of the face are instrumental for identifying and experiencing emotions, not just communicating them.
Changing False Beliefs
A recurring red herring in the current presidential campaign is the verity of President Barack Obama's birth certificate. Although the president has made this document public, and records of his 1961 birth in Honolulu have been corroborated by newspaper announcements, a vocal segment of the population continues to insist that Obama's birth certificate proving U.S. citizenship is a fraud, making him legally ineligible to be president. A Politico survey found that a majority of voters in the 2011 Republican primary shared this clearly false belief.
Scientific issues can be just as vulnerable to misinformation campaigns. Plenty of people still believe that vaccines cause autism and that human-caused climate change is a hoax. Science has thoroughly debunked these myths, but the misinformation persists in the face of overwhelming evidence. Straightforward efforts to combat the lies may backfire as well. A paper published on September 18 in Psychological Science in the Public Interest (PSPI) says that efforts to fight the problem frequently have the opposite effect.
"You have to be careful when you correct misinformation that you don't inadvertently strengthen it," says Stephan Lewandowsky, a psychologist at the University of Western Australia in Perth and one of the paper's authors. "If the issues go to the heart of people's deeply held world views, they become more entrenched in their opinions if you try to update their thinking."
Psychologists call this reaction belief perseverance: maintaining your original opinions in the face of overwhelming data that contradicts your beliefs. Everyone does it, but we are especially vulnerable when invalidated beliefs form a key part of how we narrate our lives. Researchers have found that stereotypes, religious faiths and even our self-concept are especially vulnerable to belief perseverance. A 2008 study in the Journal of Experimental Social Psychology found that people are more likely to continue believing incorrect information if it makes them look good (enhances self-image). For example, if an individual has become known in her community for purporting that vaccines cause autism, she might build her self-identity as someone who helps prevent autism by helping other parents avoid vaccination. Admitting that the original study linking autism to the MMR (measles–mumps–rubella) vaccine was ultimately deemed fraudulent would make her look bad (diminish her self-concept).
In this circumstance, it is easier to continue believing that autism and vaccines are linked, according to Dartmouth College political science researcher Brendan Nyhan. "It's threatening to admit that you're wrong," he says. "It's threatening to your self-concept and your worldview." It's why, Nyhan says, so many examples of misinformation are from issues that dramatically affect our lives and how we live.
Ironically, these issues are also the hardest to counteract. Part of the problem, researchers have found, is how people determine whether a particular statement is true. We are more likely to believe a statement if it confirms our preexisting beliefs, a phenomenon known as confirmation bias. Accepting a statement also requires less cognitive effort than rejecting it. Even simple traits such as language can affect acceptance: Studies have found that the way a statement is printed or voiced (or even the accent) can make those statements more believable. Misinformation is a human problem, not a liberal or conservative one, Nyhan says.
Misinformation is even more likely to travel and be amplified by the ongoing diversification of news sources and the rapid news cycle. Today, publishing news is as simple as clicking "send." This, combined with people's tendency to seek out information that confirms their beliefs, tends to magnify the effects of misinformation. Nyhan says that although a good dose of skepticism doesn't hurt while reading news stories, the onus to prevent misinformation should be on political pundits and journalists rather than readers. "If we all had to research every factual claim we were exposed to, we'd do nothing else," Nyhan says. "We have to address the supply side of misinformation, not just the demand side."
Correcting misinformation, however, isn't as simple as presenting people with true facts. When someone reads views from the other side, they will create counterarguments that support their initial viewpoint, bolstering their belief of the misinformation. Retracting information does not appear to be very effective either. Lewandowsky and colleagues published two papers in 2011 that showed a retraction, at best, halved the number of individuals who believed misinformation.
Combating misinformation has proved to be especially difficult in certain scientific areas such as climate science. Despite countless findings to the contrary, a large portion of the population doesn't believe that scientists agree on the existence of human-caused climate change, which affects their willingness to seek a solution to the problem, according to a 2011 study in Nature Climate Change.
"Misinformation is inhibiting public engagement in climate change in a major way," says Edward Maibach, director of the Center for Climate Change Communication at George Mason University and author of the Nature article, as well as a commentary that accompanied the recent article in PSPI by Lewandowsky and colleagues. Although virtually all climate scientists agree that human actions are changing the climate and that immediate action must be taken, roughly 60 percent of Americans believe that no scientific consensus on climate change exists.
"This is not a random event," Maibach says. Rather, it is the result of a concerted effort by a small number of politicians and industry leaders to instill doubt in the public. They repeat the message that climate scientists don't agree that global warming is real, is caused by people or is harmful. Thus, the message concludes, it would be premature for the government to take action and increase regulations.
To counter this effort, Maibach and others are using the same strategies employed by climate change deniers. They are gathering a group of trusted experts on climate and encouraging them to repeat simple, basic messages. It's difficult for many scientists, who feel that such simple explanations are dumbing down the science or portraying it inaccurately. And researchers have been trained to focus on the newest research, Maibach notes, which can make it difficult to get them to restate older information. Another way to combat misinformation is to create a compelling narrative that incorporates the correct information, and focuses on the facts rather than dispelling myths—a technique called "de-biasing."
Although campaigns to counteract misinformation can be difficult to execute, they can be remarkably effective if done correctly. A 2009 study found that an anti-prejudice campaign in Rwanda aired on the country's radio stations successfully altered people's perceptions of social norms and behaviors in the aftermath of the 1994 tribally based genocide of an estimated 800,000 minority Tutsi. Perhaps the most successful de-biasing campaign, Maibach notes, is the current near-universal agreement that tobacco smoking is addictive and can cause cancer. In the 1950s smoking was considered a largely safe lifestyle choice—so safe that it was allowed almost everywhere and physicians appeared in ads to promote it. The tobacco industry carried out a misinformation campaign for decades, reassuring smokers that it was okay to light up. Over time opinions began to shift as overwhelming evidence of ill effects was made public by more and more scientists and health administrators.
The most effective way to fight misinformation, ultimately, is to focus on people's behaviors, Lewandowsky says. Changing behaviors will foster new attitudes and beliefs.
DAVID PIZARRO can change the way you think, and all he needs is a small vial of liquid. You simply have to smell it. The psychologist spent many weeks tracking down the perfect aroma. It had to be just right. "Not too powerful," he explains. "And it had to smell of real farts."
It's no joke. Pizarro needed a suitable fart spray for an experiment to investigate whether a whiff of something disgusting can influence people's judgements.
His experiment, together with a growing body of research, has revealed the profound power of disgust, showing that this emotion is a much more potent trigger for our behaviour and choices than we ever thought. The results play out in all sorts of unexpected areas, such as politics, the judicial system and our spending habits. The triggers also affect some people far more than others, and often without their knowledge. Disgust, once dubbed "the forgotten emotion of psychiatry", is showing its true colours.
Disgust is experienced by all humans, typically accompanied by a puckered-lipped facial expression. It is well established that it evolved to protect us from illness and death. "Before we had developed any theory of disease, disgust prevented us from contagion," says Pizarro, based at Cornell University in Ithaca, New York. The sense of revulsion makes us shy away from biologically harmful things like vomit, faeces, rotting meat and, to a certain extent, insects.
Disgust's remit broadened when we became a supersocial species. After all, other humans are all potential disease-carriers, says Valerie Curtis, director of the Hygiene Centre at the London School of Hygiene and Tropical Medicine. "We've got to be very careful about our contact with others; we've got to mitigate those disease-transfer risks," she says. Disgust is the mechanism for doing this - causing us to shun people who violate the social conventions linked to disgust, or those we think, rightly or wrongly, are carriers of disease. As such, disgust is probably an essential characteristic for thriving on a cooperative, crowded planet.
Yet the idea that disgust plays a deeper role in people's everyday behaviour emerged only recently. It began when researchers decided to investigate the interplay between disgust and morality. One of the first was psychologist Jonathan Haidt at the University of Virginia in Charlottesville, who in 2001 published a landmark paper proposing that instinctive gut feelings, rather than logical reasoning, govern our judgements of right and wrong.
Haidt and colleagues went on to demonstrate that a subliminal sense of disgust - induced by hypnosis - increased the severity of people's moral judgements about shoplifting or political bribery, for example (Psychological Science, vol 16, p 780). Since then, a number of studies have illustrated the unexpected ways in which disgust can influence our notions of right and wrong.
In 2008, Simone Schnall, now at the University of Cambridge, showed that placing people in a room with an unacknowledged aroma of fart spray and a filthy desk increased the severity of their moral judgements about, say, whether it's OK to eat your dead pet dog (Personality and Social Psychology Bulletin, vol 34, p 1096) "One would think that one makes decisions about whether a behaviour is right or wrong by considering the pros and cons and arriving at a balanced judgement. We showed this wasn't the case," says Schnall.
Perhaps it's no surprise, then, to find that the more "disgustable" you are, the more likely you are to be politically conservative, says Pizarro, who has studied this correlation. Similarly, the more conservative that people are, the harsher their moral judgements become in the presence of disgust stimuli.
Together, these findings raise all sorts of interesting, and troubling, questions about people's prejudices, and the ways in which they might be influenced or even deliberately manipulated. Humanity already has a track record of using disgust as a weapon against "outsiders" - lower castes, immigrants and homosexuals. Nazi propaganda notoriously depicted Jewish people as filthy rats.
Now there is empirical evidence that inducing disgust can cause people to shun certain minority groups - at least temporarily. That's what Pizarro acquired his fart spray to explore. Along with Yoel Inbar of Tilburg University in the Netherlands and colleagues, he primed a room with the foul-smelling spray, then invited people in to complete a questionnaire, asking them to rate their feelings of warmth towards various social groups, such as the elderly or homosexuals. The researchers didn't mention the pong to the participants, who were a mix of heterosexual male and female US college students.
Reeking of prejudice
While the whiff did not influence people's feelings towards many social groups, one effect was stark: those in the smelly room, on average, felt less warmth towards homosexual men compared to participants in a non-smelly room. The effect was of equal strength among political liberals and conservatives (Emotion, vol 12, p 23). This finding is consistent with previous studies showing that a stronger susceptibility to disgust is linked with disapproval of gay people.
In another experiment, making western people feel more vulnerable to disease - by showing pictures of different pathogens - made them view foreign groups, such as Nigerian immigrants, less favourably (Group Processes & Intergroup Relations, vol 7, p 333).
"It's not that I think we could change liberals to conservatives by grossing them out, but sometimes all you need is a temporary little boost," says Pizarro. He points out that if there happened to be disgust triggers in or around a polling station, for example, it could in principle sway undecided voters to a more conservative decision. "Subtle influences in places where you're voting might actually have an effect."
To an extent, many politicians have already come to the same conclusions about disgust's ability to sway the views of their electorates. In April this year, Republicans made hay of a story about President Barack Obama eating dog meat as a boy, which was recounted in his memoir. The criticism of Obama might have seemed like the typical, if surreal, electioneering you would expect in the run-up to a presidential election, but the psychology of disgust suggests that it would have struck deeper with many voters than the Democrats might have realised.
Other politicians have gone further when employing disgust to win votes. Ahead of the primaries for the 2010 gubernatorial election in New York state, candidate Carl Paladino of the Tea Party sent out thousands of flyers impregnated with the smell of rotten garbage, with a message to "get rid of the stink" alongside pictures of his rivals. While Paladino didn't manage to beat his Democrat opponent in the race to be governor, some political analysts believe his bold tactics and smelly flyers helped him thrash rivals to win the Republican nomination against the odds.
At the same time as the role that disgust plays in politics was emerging, others were searching for its effects in yet more realms of life. Given that disgust influences judgements of right and wrong, it made sense to look to the legal system.
Sometimes disgust is arguably among the main reasons that a society chooses to deem an act illegal - necrophilia, some forms of pornography, or sex between men, for example. In court, disgusting crimes can attract harsher penalties. For example, in some US states, the death penalty is sought for murders with an "outrageously or wantonly vile" element.
Research led by Sophieke Russell at the University of Kent in Canterbury, UK, holds important lessons about how juries arrive at decisions of guilt and sentencing - and possible pointers for achieving genuine justice in courts. She showed that once people feel a sense of disgust, it is difficult for them to take into account mitigating factors important in the process of law, such as the intentions of the people involved in a case. Disgust also clouds a juror's judgement more than feelings of anger.
It is for these reasons that philosopher Martha Nussbaum at the University of Chicago Law School has argued strongly to stop using the "politics of disgust" as a basis for legal judgements. She argues instead for John Stuart Mill's principle of harm, whereby crimes are judged solely on the basis of the harm they cause. It is a contentious view. Others, such as Dan Kahan of Yale Law School, argue that "it would certainly be a mistake - a horrible one - to accept the guidance of disgust uncritically. But it would be just as big an error to discount it in all contexts." Besides, disgust could never be eliminated from trials, because this would mean never exposing the jury to descriptions of crimes or pictures of crime scenes.
Beyond the courtroom, psychologists searching for disgust's influence have found it in various everyday scenarios. Take financial transactions. It's possible that a particularly unhygienic car dealer, for instance, could make a difference to the price for which you agree to sell your vehicle. Jennifer Lerner and colleagues at Carnegie Mellon University showed that a feeling of disgust can cause people to sell their property at knock-down prices. After watching a scene from the film Trainspotting, in which a character reaches into the bowl of an indescribably filthy toilet, they sold a pack of pens for an average of $2.74, compared with a price of $4.58 for participants shown a neutral clip of coral reefs. Curiously, the disgusted participants denied being influenced by the Trainspotting clip, and instead justified their actions with more rational reasons.
Lerner, now at Harvard, calls it the "disgust-disposal" effect, in which the yuck factor causes you to expel objects in close proximity, regardless of whether they are the cause of your disgust. She also found that people were less likely to buy something when feeling disgust. Perhaps this is why, aside from public health campaigns, there is little evidence of product advertisers using disgust as part of their marketing strategies.
So, armed with all this knowledge about the psychology of disgust, is it possible to spot and overcome the subtle triggers that influence behaviour? And would we want to?
Some would argue that instead of trying to overcome our sense of disgust, we should listen to our gut feelings and be guided by them. The physician Leon Kass, who was chairman of George W. Bush's bioethics council from 2001 to 2005, has made the case for the "wisdom of repugnance". "Repugnance is the emotional expression of deep wisdom, beyond reason's power to fully articulate it," he wrote in his 2002 book Life, Liberty and the Defense of Dignity.
Still, is it really desirable for, say, bad smells to encourage xenophobia or homophobia? "I think it's very possible to override disgust. That's my hope, in fact," says Pizarro. "Even though we might have very strong disgust reactions, we should be tasked with coming up with reasons independent of this reflexive gut reaction."
For those seeking to avoid disgust's influence, it's first worth noting that some people are more likely to be grossed out than others, and that the triggers vary according to culture (see "Cheese and culture"). In general, women tend to be more easily disgusted than men, and are far more likely to be disgusted about sex. Women are also particularly sensitive to disgust in the early stages of pregnancy or just after ovulation - both times when their immune system is dampened.
The young are more likely to be influenced by the yuck factor, and we tend to become less easily disgusted as we grow old. This could boil down to the fact that our senses become less acute with age, or perhaps it is simply that older people have had more life experience and take a more rational view of potential threats.
If they so choose, it is possible for anybody to become desensitised to disgusting things by continued exposure over time. For example, while faeces is the most potent disgust trigger, it's amazing how easy it is to overcome it when you have to deal with your own offspring's bowel movements. And psychologists have shown that after spending months dissecting bodies, medical students become less sensitive to disgust relating to death and bodily deformity.
Pizarro suspects that there may also be shortcuts to overriding disgust - even if the tips he has found so far may not be especially practical for day-to-day life. One of his most recent experiments shows that if you can prevent people from making that snarled-lip expression when they experience disgust - by simply asking them to hold a pencil between their lips - you can reduce their feeling of disgust when they are made to view revolting images. This, in turn, makes their judgement of moral transgressions less severe.
Happily, our lives are already a triumph over disgust. If we let it rule us completely, we'd never leave the house in the morning. As Paul Rozin, often called the "father of the psychology of disgust", has pointed out, we live in a world where the air we breathe comes from the lungs of other people, and contains molecules of animal and human faeces.
It would be wise not to think about that too much. It really is quite disgusting.
Cheese and culture
On a recent summer's day, a stench filled New Scientist's London office. It smelled like sweaty feet bathed in vomit, or something long past its sell-by date. Soon its source became clear: someone had returned from Paris with a selection of France's finest soft cheeses. How can something that smells revolting be so delicious?
For a start, no matter how potent, smells can be ambiguous. We need more information to tell us whether something really is revolting or not.
"With smell, the meaning is based on context much more so than with vision," says smell researcher Rachel Herz, author of the book That's Disgusting. In other words, a vomit smell in an alley beside a bar will immediately conjure up a mental picture of a disgusting source, but exactly the same aroma would evoke deliciousness in a fine restaurant, she says.
The stinky cheese also illustrates the power of culture over our minds. Westerners have learned that cheese is a good thing to eat - a badge of cultural distinction, even. This explains why rotten shark meat is a delicacy in Iceland, says Herz, and the liquor chicha, made from chewed and spat-out maize, is a popular drink in parts of South America. Food choices mark out who is part of our group - hence the strong religious taboos about pork which have endured long past the time when consuming it carried a serious risk of food poisoning.
The influence of culture on disgust isn't limited to food. Kissing is public is seen as distasteful in India, whereas Brits are more revulsed by mistreatment of animals. Christian participants in one study even experienced a sense of disgust when reading a passage from Richard Dawkins's atheist manifesto The God Delusion. As Herz says: "To a large extent, what is disgusting or not is in the mind of the beholder."
Many things probably transcend cultural influence, however. Using a selection of disgusting images, Valerie Curtis at the London School of Hygiene and Tropical Medicine discovered a universal disgust towards faeces, with vomit, pus, spit and a variety of insects following close behind in the revulsion stakes. Delicious, these are not.
Possessed By Demons
In his new book, novelist and former psychologist Frank Tallis explores the psychology behind demonic possession.
Your latest novel is about a man who is possessed. Did any of your patients have that belief?
Once a patient came in and said: "I am possessed by a demon." This guy wasn't insane, he wasn't schizophrenic - he just had this particular belief. In my day we called it "monosymptomatic delusion", but now it would be called something like "delusional disorder". That's when you're completely sound and reasonable in every respect except you have one belief that is absolutely bonkers.
Why would an otherwise well person believe something like that?
He was misattributing certain symptoms he had to a demonic presence. When you're possessed, you're supposed to get headaches, and he was getting loads of headaches.
I can't imagine making that assumption myself...
You have to have an openness to it. Lots of people are open to all kinds of spiritual and magical beliefs. An individual could have a perfectly harmless interest in the supernatural but then something happens that triggers this delusion and they get stuck with it, reinforcing it by piling up one misinterpretation after another. If you go out looking for evidence, you will find it.
What kind of evidence?
In my patient's case, he wanted to know the demon's name, so he got a Ouija board out. This shows he had a willingness to go down a particular path. When you think about the way that brains work, our natural inclination is to look for causes.
Could anyone end up with delusions like these?
Theoretically, yes, in the right circumstances. Maybe we all get such episodes in our lives. It's not that unusual for people to think they are seriously ill without much evidence. Who hasn't had a health scare for no good reason? That's taking a symptom and extrapolating, then finding more evidence that supports the belief.
Are there any other examples?
The big one is people suspecting that their spouse is cheating on them. Morbid obsessions about infidelity are relatively common and produce spectacular behaviours, often in individuals who otherwise are OK. In a way, falling in love is kind of monosymptomatic delusion. Even though you're a rational person, you can engage in all kinds of irrational behaviour because you are fixated on a particular individual.
Can these delusions be treated?
In the past they were treated with lots of medication or were perceived as untreatable. But these days, not just monosymptomatic delusions but all forms of psychotic illness are increasingly treated with cognitive behavioural therapy. You cultivate a sort of scientific attitude in the patient, getting them to test their beliefs. It is probably the most important new advance in psychotherapy.
Morality and mental Processes
When it really comes down to it—when the chips are down and the lights are off—are we naturally good? That is, are we predisposed to act cooperatively, to help others even when it costs us? Or are we, in our hearts, selfish creatures?
This fundamental question about human nature has long provided fodder for discussion. Augustine’s doctrine of original sin proclaimed that all people were born broken and selfish, saved only through the power of divine intervention. Hobbes, too, argued that humans were savagely self-centered; however, he held that salvation came not through the divine, but through the social contract of civil law. On the other hand, philosophers such as Rousseau argued that people were born good, instinctively concerned with the welfare of others. More recently, these questions about human nature—selfishness and cooperation, defection and collaboration—have been brought to the public eye by game shows such as Survivor and the UK’s Golden Balls, which test the balance between selfishness and cooperation by pitting the strength of interpersonal bonds against the desire for large sums of money.
But even the most compelling televised collisions between selfishness and cooperation provide nothing but anecdotal evidence. And even the most eloquent philosophical arguments mean noting without empirical data.
A new set of studies provides compelling data allowing us to analyze human nature not through a philosopher’s kaleidoscope or a TV producer’s camera, but through the clear lens of science. These studies were carried out by a diverse group of researchers from Harvard and Yale—a developmental psychologist with a background in evolutionary game theory, a moral philosopher-turned-psychologist, and a biologist-cum-mathematician—interested in the same essential question: whether our automatic impulse—our first instinct—is to act selfishly or cooperatively.
This focus on first instincts stems from the dual process framework of decision-making, which explains decisions (and behavior) in terms of two mechanisms: intuition and reflection. Intuition is often automatic and effortless, leading to actions that occur without insight into the reasons behind them. Reflection, on the other hand, is all about conscious thought—identifying possible behaviors, weighing the costs and benefits of likely outcomes, and rationally deciding on a course of action. With this dual process framework in mind, we can boil the complexities of basic human nature down to a simple question: which behavior—selfishness or cooperation—is intuitive, and which is the product of rational reflection? In other words, do we cooperate when we overcome our intuitive selfishness with rational self-control, or do we act selfishly when we override our intuitive cooperative impulses with rational self-interest?
To answer this question, the researchers first took advantage of a reliable difference between intuition and reflection: intuitive processes operate quickly, whereas reflective processes operate relatively slowly. Whichever behavioral tendency—selfishness or cooperation—predominates when people act quickly is likely to be the intuitive response; it is the response most likely to be aligned with basic human nature.
The experimenters first examined potential links between processing speed, selfishness, and cooperation by using 2 experimental paradigms (the “prisoner’s dilemma” and a “public goods game”), 5 studies, and a tot al of 834 participants gathered from both undergraduate campuses and a nationwide sample. Each paradigm consisted of group-based financial decision-making tasks and required participants to choose between acting selfishly—opting to maximize individual benefits at the cost of the group—or cooperatively—opting to maximize group benefits at the cost of the individual. The results were striking: in every single study, faster—that is, more intuitive—decisions were associated with higher levels of cooperation, whereas slower—that is, more reflective—decisions were associated with higher levels of selfishness. These results suggest that our first impulse is to cooperate—that Augustine and Hobbes were wrong, and that we are fundamentally “good” creatures after all.
The researchers followed up these correlational studies with a set of experiments in which they directly manipulated both this apparent influence on the tendency to cooperate—processing speed—and the cognitive mechanism thought to be associated with this influence—intuitive, as opposed to reflective, decision-making. In the first of these studies, researchers gathered 891 participants (211 undergraduates and 680 participants from a nationwide sample) and had them play a public goods game with one key twist: these participants were forced to make their decisions either quickly (within 10 seconds) or slowly (after at least 10 seconds had passed). In the second, researchers had 343 participants from a nationwide sample play a public goods game after they had been primed to use either intuitive or reflective reasoning. Both studies showed the same pattern—whether people were forced to use intuition (by acting under time constraints) or simply encouraged to do so (through priming), they gave significantly more money to the common good than did participants who relied on reflection to make their choices. This again suggests that our intuitive impulse is to cooperate with others.
Taken together, these studies—7 total experiments, using a whopping 2,068 participants—suggest that we are not intuitively selfish creatures. But does this mean that we our naturally cooperative? Or could it be that cooperation is our first instinct simply because it is rewarded? After all, we live in a world where it pays to play well with others: cooperating helps us make friends, gain social capital, and find social success in a wide range of domains. As one way of addressing this possibility, the experimenters carried out yet another study. In this study, they asked 341 participants from a nationwide sample about their daily interactions—specifically, whether or not these interactions were mainly cooperative; they found that the relationship between processing speed (that is, intuition) and cooperation only existed for those who reported having primarily cooperative interactions in daily life. This suggests that cooperation is the intuitive response only for those who routinely engage in interactions where this behavior is rewarded - that human 'goodness' may result from the acquisition of a regularly rewarded trait.
Throughout the ages, people have wondered about the basic state of human nature - whether we are good or bad, cooperative or selfish. This question—one that is central to who we are - has been tackled by theologians and philosophers, presented to the public eye by television programs, and dominated the sleepless nights of both guilt-stricken villains and bewildered victims; now, it has also been addressed by scientific research. Although no single set of studies can provide a definitive answer - no matter how many experiments were conducted or participants were involved - this research suggests that our intuitive responses, or first instincts, tend to lead to cooperation rather than selfishness.
Although this evidence does not definitely solve the puzzle of human nature, it does give us evidence we may use to solve this puzzle for ourselves—and our solutions will likely vary according to how we define 'human nature.' If human nature is something we must be born with, then we may be neither good nor bad, cooperative nor selfish. But if human nature is simply the way we tend to act based on our intuitive and automatic impulses, then it seems that we are an overwhelmingly cooperative species, willing to give for the good of the group even when it comes at our own personal expense.
I do not think like Sherlock Holmes. Not in the least. That was the rather disheartening conclusion I reached while researching a book on the detective’s mental prowess. I’d hoped to discover that I had the secret to Sherlockian thought. What I found instead was that it would be hard work indeed to even begin to approximate the essence of the detective’s approach to the world: his ever-mindful mindset and his relentless mental energy. Holmes was a man eternally on, who relished that on-ness and floundered in its absence. It would be exhausting to think like Sherlock. And would it really be worth it in the end?
It all began with those pesky steps, the stairs leading up to the legendary residence that Sherlock Holmes shares with Dr. Watson, 221B Baker Street. Why couldn't Watson recall the number of steps? "I believe my eyes are as good as yours," Watson tells his new flatmate - as, in fact, they are. But the competence of the eyes isn't the issue. Instead, the distinction lies in how those eyes are deployed. "You see, but you do not observe," Holmes tells his companion. And Holmes? "Now, I know there are seventeen steps," he continues, "“because I have both seen and observed."
To both see and observe: Therein lies the secret. When I first heard the words as a child, I sat up with recognition. Like Watson, I didn't have a clue. Some 20 years later, I read the passage a second time in an attempt to decipher the psychology behind its impact. I realized I was no better at observing than I had been at the tender age of 7. Worse, even. With my constant companion Sir Smartphone and my newfound love of Lady Twitter, my devotion to Count Facebook, and that itch my fingers got whenever I hadn't checked my email for, what, 10 minutes already? OK, five - but it seemed a lifetime. Those Baker Street steps would always be a mystery.
The confluence of seeing and observing is central to the concept of mindfulness, a mental alertness that takes in the present moment to the fullest, that is able to concentrate on its immediate landscape and free itself of any distractions.
Mindfulness allows Holmes to observe those details that most of us don’t even realize we don't see. It’s not just the steps. It’s the facial expressions, the sartorial details, the seemingly irrelevant minutiae of the people he encounters. It’s the sizing up of the occupants of a house by looking at a single room. It's the ability to distinguish the crucial from the merely incidental in any person, any scene, any situation. And, as it turns out, all of these abilities aren't just the handy fictional work of Arthur Conan Doyle. They have some real science behind them. After all, Holmes was born of Dr. Joseph Bell, Conan Doyle's mentor at the University of Edinburgh, not some, well, more fictional inspiration. Bell was a scientist and physician with a sharp mind, a keen eye, and a notable prowess at pinpointing both his patients' disease and their personal details. Conan Doyle once wrote to him, "Round the centre of deduction and inference and observation which I have heard you inculcate, I have tried to build up a man who pushed the thing as far as it would go."
Over the past several decades, researchers have discovered that mindfulness can lead to improvements in physiological well-being and emotional regulation. It can also strengthen connectivity in the brain, specifically in a network of the posterior cingulate cortex, the adjacent precuneus, and the medial prefrontal cortex that maintains activity when the brain is resting. Mindfulness can even enhance our levels of wisdom, both in terms of dialectism (being cognizant of change and contradictions in the world) and intellectual humility (knowing your own limitations). What’s more, mindfulness can lead to improved problem solving, enhanced imagination, and better decision making. It can even be a weapon against one of the most disturbing limitations that our attention is up against: inattentional blindness.
When inattentional blindness (sometimes referred to as attentional blindness) strikes, our focus on one particular element in a scene or situation or problem causes the other elements to literally disappear. Images that hit our retina are not then processed by our brain but instead dissolve into the who-knows-where, so that we have no conscious experience of having ever been exposed to them to begin with. The phenomenon was made famous by Daniel Simons and Christopher Chabris: In their provocative study, students repeatedly failed to see a person in a gorilla suit who walked onto a basketball court midgame, pounded his chest, and walked off. But the phenomenon actually dates to research conducted by Ulric Neisser, the father of cognitive psychology, in the 1960s and 1970s.
One evening, Neisser noticed that when he looked out the window at twilight, he had the ability to see either the twilight or the reflection of the room on the glass. Focusing on the one made the other vanish. No matter what he did, he couldn’t pay active attention to both. He termed this phenomenon 'selective looking' and went on to study its effects in study after study of competing attentional demands. Show a person two superimposed videos, and he fails to notice when card players suddenly stop their game, stand up, and start shaking hands - or fails to realize that someone spoke to him in one ear while he’s been listening to a conversation with the other. In a real-world illustration of the innate inability to split attention in any meaningful way, a road construction crew once paved over a dead deer in the road. They simply did not see it, so busy were they ensuring that their assignment was properly carried out.
Inattentional blindness, more than anything else, illustrates the limitations of our attentional abilities. Try as we might, we can never see both twilight and reflection. We can't ever multitask the way we think we can. Each time we try, either the room or the world outside it will disappear from conscious processing. That's why Holmes is so careful about where and when he deploys that famed keenness of observation. Were he to spread himself too thin—imagine modern-day Holmes, be it Benedict Cumberbatch or Jonny Lee Miller, pulling out his cell to check his email as he walks down the street and has a conversation at the same time, something you'll never see either of these current incarnations actually doing - he'd be unable to deploy his observation as he otherwise would. Enter the email, exit the Baker Street steps - and then some.
It's not an easy task, that constant cognitive vigilance, the eternal awareness of our own limitations and the resulting strategic allocation of attention. Even Holmes, I'm willing to bet, couldn't reach that level of mindfulness and deliberate thought all at once. It came with years of motivation and practice. To think like Holmes, we have to both want to think like him and practice doing so over and over and over, even when the effort becomes exhausting and seems a pointless waste of energy. Mindfulness takes discipline.
Even after I discovered my propensity for sneaking over to email or Twitter when I wasn't quite sure what to write next, the discovery alone wasn't enough to curb my less-than-ideal work habits. I thought it would be. And I tried, I really did. But somehow, up that browser window popped, seemingly of its own volition. What, me? Attempt to multitask while writing my book? Never.
And so, I took the Odyssean approach: I tied myself to the mast to resist the sirens' call of the Internet. I downloaded Freedom, a program that blocked my access completely for a specified amount of time, and got to writing. The results shocked me. I was woefully bad at maintaining my concentration for large chunks of time. Over and over, my fingers made their way to that habitual key-press combination that would switch the window from my manuscript to my online world - only to discover that that world was off-limits for another - how long is left? Has it really been only 20 minutes?
Over time, the impulse became less frequent. And what's more, I found that my writing - and my thinking, it bears note - was improving with every day of Internet-less interludes. I could think more fluidly. My brain worked more conscientiously. In those breaks when, before, there would be a quick check of email or a surreptitious run to my Twitter feed, there would be a self-reflecting concentration that quickly rummaged through my brain attic. (You can't write about Holmes without mentioning his analogy for the human mind at least once.) I came up with multiple ways of moving forward where before I would find myself stuck. Pieces that had taken hours to write suddenly were completed in a fraction of the time.
Until that concrete evidence of effectiveness, I had never quite believed that focused attention would make such a big difference. As much research as I’d read, as much science as I'd examined, it never quite hit home. It had taken Freedom, but I was finally taking Sherlock Holmes at his word. I was learning the benefits of both seeing and observing—and I was no longer trading in the one for the other without quite realizing what I was doing.
Self-binding software, of course, is not always an option to keep our brains mindfully on track. Who is to stop us from checking our phone mid-dinner or having the TV on as background noise? But here's what I learned. Those little nudges to limit your own behavior have a more lasting effect, even in areas where you've never used them. They make you realize just how limited your attention is in reality - —and how often we wave our own limitations off with a disdainful motion. Not only did that nagging software make me realize how desperately I was chained to my online self, but I began to notice how often my hand reached for my phone when I was walking down the street or sitting in the subway, how utterly unable I had become to just do what I was doing, be it walking or sitting or even reading a book, without trying to get in just a little bit more.
I did my best to resist. Now, something that was once thoughtless habit became a guilt-inducing twinge. I would force myself to replace the phone without checking it, to take off my headphones and look around, to resist the urge to place a call just because I was walking to an appointment and had a few minutes of spare time. It was hard. But it was worth it, if only for my enhanced perceptiveness, for the quickly growing pile of material that I wouldn’t have even noticed before, for the tangible improvements in thought and clarity that came with every deferred impulse. It’s not for nothing that study after study has shown the benefits of nature on our thinking: Being surrounded by the natural world makes us more reflective, more creative, sharper in our cognition. But if we’re too busy talking on the phone or sending a text, we won’t even notice that we've walked by a tree.
If we follow Holmes’ lead, if we take his admonition to not only see but also observe, and do so as a matter of course, we may not only find ourselves better able to rattle off the number of those proverbial steps in a second, but we may be surprised to discover that the benefits extend much further: We may even be happier as a result. Even brief exercises in mindfulness, for as little as five minutes a day, have been shown to shift brain activity in the frontal lobes toward a pattern associated with positive and approach-oriented emotional states. And the mind-wandering, multitasking alternative? It may do more than make us less attentive. It may also make us less happy.
As Daniel Gilbert discovered after tracking thousands of participants in real time, a mind that is wandering away from the present moment is a mind that isn't happy. He developed an iPhone app that would prompt subjects to answer questions on what they were currently doing and what they were thinking about at various points in the day. In 46.9 percent of samples Gilbert and his colleagues collected, people were not thinking about whatever it was they were doing—even if what they were doing was actually quite pleasant, like listening to music or playing a game. And their happiness? The more their minds wandered, the less happy they were—regardless of the activity. As Gilbert put it in a paper in Science, "The ability to think about what is not happening is a cognitive achievement that comes at an emotional cost."
Thinking like Sherlock Holmes isn't just a way to enhance your cognitive powers. It is also a way to derive greater happiness and satisfaction from life.
If you have pondered how intelligent and educated people can, in the face of overwhelming contradictory evidence, believe that evolution is a myth, that global warming is a hoax, that vaccines cause autism and asthma, that 9/11 was orchestrated by the Bush administration, conjecture no more. The explanation is in what I call logic-tight compartments—modules in the brain analogous to watertight compartments in a ship.
The concept of compartmentalized brain functions acting either in concert or in conflict has been a core idea of evolutionary psychology since the early 1990s. According to University of Pennsylvania evolutionary psychologist Robert Kurzban in Why Everyone (Else) Is a Hypocrite (Princeton University Press, 2010), the brain evolved as a modular, multitasking problem-solving organ—a Swiss Army knife of practical tools in the old metaphor or an app-loaded iPhone in Kurzban's upgrade. There is no unified “self” that generates internally consistent and seamlessly coherent beliefs devoid of conflict. Instead we are a collection of distinct but interacting modules often at odds with one another. The module that leads us to crave sweet and fatty foods in the short term is in conflict with the module that monitors our body image and health in the long term. The module for cooperation is in conflict with the one for competition, as are the modules for altruism and avarice or the modules for truth telling and lying.
Compartmentalization is also at work when new scientific theories conflict with older and more naive beliefs. In the 2012 paper 'Scientific Knowledge Suppresses but Does Not Supplant Earlier Intuitions' in the journal Cognition, Occidental College psychologists Andrew Shtulman and Joshua Valcarcel found that subjects more quickly verified the validity of scientific statements when those statements agreed with their prior naive beliefs. Contradictory scientific statements were processed more slowly and less accurately, suggesting that "naive theories survive the acquisition of a mutually incompatible scientific theory, coexisting with that theory for many years to follow."
Cognitive dissonance may also be at work in the compartmentalization of beliefs. In the 2010 article “When in Doubt, Shout!” in Psychological Science, Northwestern University researchers David Gal and Derek Rucker found that when subjects' closely held beliefs were shaken, they "engaged in more advocacy of their beliefs ... than did people whose confidence was not undermined." Further, they concluded that enthusiastic evangelists of a belief may in fact be "boiling over with doubt," and thus their persistent proselytizing may be a signal that the belief warrants skepticism.
In addition, our logic-tight compartments are influenced by our moral emotions, which lead us to bend and distort data and evidence through a process called motivated reasoning. The module housing our religious preferences, for example, motivates believers to seek and find facts that support, say, a biblical model of a young earth in which the overwhelming evidence of an old earth must be denied. The module containing our political predilections, if they are, say, of a conservative bent, may motivate procapitalists to believe that any attempt to curtail industrial pollution by way of the threat of global warming must be a liberal hoax.
What can be done to break down the walls separating our logic-tight compartments? In the 2012 paper 'Misinformation and Its Correction: Continued Influence and Successful Debiasing' in Psychological Science in the Public Interest, University of Western Australia psychologist Stephan Lewandowsky and his colleagues suggest these strategies: "Consider what gaps in people's mental event models are created by debunking and fill them using an alternative explanation.... To avoid making people more familiar with misinformation..., emphasize the facts you wish to communicate rather than the myth. Provide an explicit warning before mentioning a myth, to ensure that people are cognitively on guard and less likely to be influenced by the misinformation.... Consider whether your content may be threatening to the worldview and values of your audience. If so, you risk a worldview backfire effect."
Debunking by itself is not enough. We must replace bad bunk with sound science.
Culture and Thinking
IN THE SUMMER of 1995, a young graduate student in anthropology at UCLA named Joe Henrich traveled to Peru to carry out some fieldwork among the Machiguenga, an indigenous people who live north of Machu Picchu in the Amazon basin. The Machiguenga had traditionally been horticulturalists who lived in single-family, thatch-roofed houses in small hamlets composed of clusters of extended families. For sustenance, they relied on local game and produce from small-scale farming. They shared with their kin but rarely traded with outside groups.
While the setting was fairly typical for an anthropologist, Henrich’s research was not. Rather than practice traditional ethnography, he decided to run a behavioral experiment that had been developed by economists. Henrich used a “game”—along the lines of the famous prisoner’s dilemma—to see whether isolated cultures shared with the West the same basic instinct for fairness. In doing so, Henrich expected to confirm one of the foundational assumptions underlying such experiments, and indeed underpinning the entire fields of economics and psychology: that humans all share the same cognitive machinery—the same evolved rational and psychological hardwiring.
The test that Henrich introduced to the Machiguenga was called the ultimatum game. The rules are simple: in each game there are two players who remain anonymous to each other. The first player is given an amount of money, say $100, and told that he has to offer some of the cash, in an amount of his choosing, to the other subject. The second player can accept or refuse the split. But there’s a hitch: players know that if the recipient refuses the offer, both leave empty-handed. North Americans, who are the most common subjects for such experiments, usually offer a 50-50 split when on the giving end. When on the receiving end, they show an eagerness to punish the other player for uneven splits at their own expense. In short, Americans show the tendency to be equitable with strangers—and to punish those who are not.
Among the Machiguenga, word quickly spread of the young, square-jawed visitor from America giving away money. The stakes Henrich used in the game with the Machiguenga were not insubstantial—roughly equivalent to the few days’ wages they sometimes earned from episodic work with logging or oil companies. So Henrich had no problem finding volunteers. What he had great difficulty with, however, was explaining the rules, as the game struck the Machiguenga as deeply odd.
When he began to run the game it became immediately clear that Machiguengan behavior was dramatically different from that of the average North American. To begin with, the offers from the first player were much lower. In addition, when on the receiving end of the game, the Machiguenga rarely refused even the lowest possible amount. “It just seemed ridiculous to the Machiguenga that you would reject an offer of free money,” says Henrich. “They just didn’t understand why anyone would sacrifice money to punish someone who had the good luck of getting to play the other role in the game.”
The potential implications of the unexpected results were quickly apparent to Henrich. He knew that a vast amount of scholarly literature in the social sciences—particularly in economics and psychology—relied on the ultimatum game and similar experiments. At the heart of most of that research was the implicit assumption that the results revealed evolved psychological traits common to all humans, never mind that the test subjects were nearly always from the industrialized West. Henrich realized that if the Machiguenga results stood up, and if similar differences could be measured across other populations, this assumption of universality would have to be challenged.
Henrich had thought he would be adding a small branch to an established tree of knowledge. It turned out he was sawing at the very trunk. He began to wonder: What other certainties about “human nature” in social science research would need to be reconsidered when tested across diverse populations?
Henrich soon landed a grant from the MacArthur Foundation to take his fairness games on the road. With the help of a dozen other colleagues he led a study of 14 other small-scale societies, in locales from Tanzania to Indonesia. Differences abounded in the behavior of both players in the ultimatum game. In no society did he find people who were purely selfish (that is, who always offered the lowest amount, and never refused a split), but average offers from place to place varied widely and, in some societies—ones where gift-giving is heavily used to curry favor or gain allegiance—the first player would often make overly generous offers in excess of 60 percent, and the second player would often reject them, behaviors almost never observed among Americans.
The research established Henrich as an up-and-coming scholar. In 2004, he was given the U.S. Presidential Early Career Award for young scientists at the White House. But his work also made him a controversial figure. When he presented his research to the anthropology department at the University of British Columbia during a job interview a year later, he recalls a hostile reception. Anthropology is the social science most interested in cultural differences, but the young scholar’s methods of using games and statistics to test and compare cultures with the West seemed heavy-handed and invasive to some. “Professors from the anthropology department suggested it was a bad thing that I was doing,” Henrich remembers. “The word ‘unethical’ came up.”
So instead of toeing the line, he switched teams. A few well-placed people at the University of British Columbia saw great promise in Henrich’s work and created a position for him, split between the economics department and the psychology department. It was in the psychology department that he found two kindred spirits in Steven Heine and Ara Norenzayan. Together the three set about writing a paper that they hoped would fundamentally challenge the way social scientists thought about human behavior, cognition, and culture.
A MODERN LIBERAL ARTS education gives lots of lip service to the idea of cultural diversity. It’s generally agreed that all of us see the world in ways that are sometimes socially and culturally constructed, that pluralism is good, and that ethnocentrism is bad. But beyond that the ideas get muddy. That we should welcome and celebrate people of all backgrounds seems obvious, but the implied corollary—that people from different ethno-cultural origins have particular attributes that add spice to the body politic—becomes more problematic. To avoid stereotyping, it is rarely stated bluntly just exactly what those culturally derived qualities might be. Challenge liberal arts graduates on their appreciation of cultural diversity and you’ll often find them retreating to the anodyne notion that under the skin everyone is really alike.
If you take a broad look at the social science curriculum of the last few decades, it becomes a little more clear why modern graduates are so unmoored. The last generation or two of undergraduates have largely been taught by a cohort of social scientists busily doing penance for the racism and Eurocentrism of their predecessors, albeit in different ways. Many anthropologists took to the navel gazing of postmodernism and swore off attempts at rationality and science, which were disparaged as weapons of cultural imperialism.
Economists and psychologists, for their part, did an end run around the issue with the convenient assumption that their job was to study the human mind stripped of culture. The human brain is genetically comparable around the globe, it was agreed, so human hardwiring for much behavior, perception, and cognition should be similarly universal. No need, in that case, to look beyond the convenient population of undergraduates for test subjects. A 2008 survey of the top six psychology journals dramatically shows how common that assumption was: more than 96 percent of the subjects tested in psychological studies from 2003 to 2007 were Westerners—with nearly 70 percent from the United States alone. Put another way: 96 percent of human subjects in these studies came from countries that represent only 12 percent of the world’s population.
Henrich’s work with the ultimatum game was an example of a small but growing countertrend in the social sciences, one in which researchers look straight at the question of how deeply culture shapes human cognition. His new colleagues in the psychology department, Heine and Norenzayan, were also part of this trend. Heine focused on the different ways people in Western and Eastern cultures perceived the world, reasoned, and understood themselves in relationship to others. Norenzayan’s research focused on the ways religious belief influenced bonding and behavior. The three began to compile examples of cross-cultural research that, like Henrich’s work with the Machiguenga, challenged long-held assumptions of human psychological universality.
Some of that research went back a generation. It was in the 1960s, for instance, that researchers discovered that aspects of visual perception were different from place to place. One of the classics of the literature, the Müller-Lyer illusion, showed that where you grew up would determine to what degree you would fall prey to the illusion that these two lines are different in length.
Researchers found that Americans perceive the line with the ends feathered outward (B) as being longer than the line with the arrow tips (A). San foragers of the Kalahari, on the other hand, were more likely to see the lines as they are: equal in length. Subjects from more than a dozen cultures were tested, and Americans were at the far end of the distribution—seeing the illusion more dramatically than all others.
More recently psychologists had challenged the universality of research done in the 1950s by pioneering social psychologist Solomon Asch. Asch had discovered that test subjects were often willing to make incorrect judgments on simple perception tests to conform with group pressure. When the test was performed across 17 societies, however, it turned out that group pressure had a range of influence. Americans were again at the far end of the scale, in this case showing the least tendency to conform to group belief.
As Heine, Norenzayan, and Henrich furthered their search, they began to find research suggesting wide cultural differences almost everywhere they looked: in spatial reasoning, the way we infer the motivations of others, categorization, moral reasoning, the boundaries between the self and others, and other arenas. These differences, they believed, were not genetic. The distinct ways Americans and Machiguengans played the ultimatum game, for instance, wasn’t because they had differently evolved brains. Rather, Americans, without fully realizing it, were manifesting a psychological tendency shared with people in other industrialized countries that had been refined and handed down through thousands of generations in ever more complex market economies. When people are constantly doing business with strangers, it helps when they have the desire to go out of their way (with a lawsuit, a call to the Better Business Bureau, or a bad Yelp review) when they feel cheated. Because Machiguengan culture had a different history, their gut feeling about what was fair was distinctly their own. In the small-scale societies with a strong culture of gift-giving, yet another conception of fairness prevailed. There, generous financial offers were turned down because people’s minds had been shaped by a cultural norm that taught them that the acceptance of generous gifts brought burdensome obligations. Our economies hadn’t been shaped by our sense of fairness; it was the other way around.
The growing body of cross-cultural research that the three researchers were compiling suggested that the mind’s capacity to mold itself to cultural and environmental settings was far greater than had been assumed. The most interesting thing about cultures may not be in the observable things they do—the rituals, eating preferences, codes of behavior, and the like—but in the way they mold our most fundamental conscious and unconscious thinking and perception.
For instance, the different ways people perceive the Müller-Lyer illusion likely reflects lifetimes spent in different physical environments. American children, for the most part, grow up in box-shaped rooms of varying dimensions. Surrounded by carpentered corners, visual perception adapts to this strange new environment (strange and new in terms of human history, that is) by learning to perceive converging lines in three dimensions.
When unconsciously translated in three dimensions, the line with the outward-feathered ends (C) appears farther away and the brain therefore judges it to be longer. The more time one spends in natural environments, where there are no carpentered corners, the less one sees the illusion.
As the three continued their work, they noticed something else that was remarkable: again and again one group of people appeared to be particularly unusual when compared to other populations—with perceptions, behaviors, and motivations that were almost always sliding down one end of the human bell curve.
In the end they titled their paper “The Weirdest People in the World?” (pdf) By “weird” they meant both unusual and Western, Educated, Industrialized, Rich, and Democratic. It is not just our Western habits and cultural preferences that are different from the rest of the world, it appears. The very way we think about ourselves and others—and even the way we perceive reality—makes us distinct from other humans on the planet, not to mention from the vast majority of our ancestors. Among Westerners, the data showed that Americans were often the most unusual, leading the researchers to conclude that “American participants are exceptional even within the unusual population of Westerners—outliers among outliers.”
Given the data, they concluded that social scientists could not possibly have picked a worse population from which to draw broad generalizations. Researchers had been doing the equivalent of studying penguins while believing that they were learning insights applicable to all birds.
NOT LONG AGO I met Henrich, Heine, and Norenzayan for dinner at a small French restaurant in Vancouver, British Columbia, to hear about the reception of their weird paper, which was published in the prestigious journal Behavioral and Brain Sciences in 2010. The trio of researchers are young—as professors go—good-humored family men. They recalled that they were nervous as the publication time approached. The paper basically suggested that much of what social scientists thought they knew about fundamental aspects of human cognition was likely only true of one small slice of humanity. They were making such a broadside challenge to whole libraries of research that they steeled themselves to the possibility of becoming outcasts in their own fields.
“We were scared,” admitted Henrich. “We were warned that a lot of people were going to be upset.”
“We were told we were going to get spit on,” interjected Norenzayan.
“Yes,” Henrich said. “That we’d go to conferences and no one was going to sit next to us at lunchtime.”
Interestingly, they seemed much less concerned that they had used the pejorative acronym WEIRD to describe a significant slice of humanity, although they did admit that they could only have done so to describe their own group. “Really,” said Henrich, “the only people we could have called weird are represented right here at this table.”
Still, I had to wonder whether describing the Western mind, and the American mind in particular, as weird suggested that our cognition is not just different but somehow malformed or twisted. In their paper the trio pointed out cross-cultural studies that suggest that the “weird” Western mind is the most self-aggrandizing and egotistical on the planet: we are more likely to promote ourselves as individuals versus advancing as a group. WEIRD minds are also more analytic, possessing the tendency to telescope in on an object of interest rather than understanding that object in the context of what is around it.
The WEIRD mind also appears to be unique in terms of how it comes to understand and interact with the natural world. Studies show that Western urban children grow up so closed off in man-made environments that their brains never form a deep or complex connection to the natural world. While studying children from the U.S., researchers have suggested a developmental timeline for what is called “folkbiological reasoning.” These studies posit that it is not until children are around 7 years old that they stop projecting human qualities onto animals and begin to understand that humans are one animal among many. Compared to Yucatec Maya communities in Mexico, however, Western urban children appear to be developmentally delayed in this regard. Children who grow up constantly interacting with the natural world are much less likely to anthropomorphize other living things into late childhood.
Given that people living in WEIRD societies don’t routinely encounter or interact with animals other than humans or pets, it’s not surprising that they end up with a rather cartoonish understanding of the natural world. “Indeed,” the report concluded, “studying the cognitive development of folkbiology in urban children would seem the equivalent of studying ‘normal’ physical growth in malnourished children.”
During our dinner, I admitted to Heine, Henrich, and Norenzayan that the idea that I can only perceive reality through a distorted cultural lens was unnerving. For me the notion raised all sorts of metaphysical questions: Is my thinking so strange that I have little hope of understanding people from other cultures? Can I mold my own psyche or the psyches of my children to be less WEIRD and more able to think like the rest of the world? If I did, would I be happier?
Henrich reacted with mild concern that I was taking this research so personally. He had not intended, he told me, for his work to be read as postmodern self-help advice. “I think we’re really interested in these questions for the questions’ sake,” he said.
The three insisted that their goal was not to say that one culturally shaped psychology was better or worse than another—only that we’ll never truly understand human behavior and cognition until we expand the sample pool beyond its current small slice of humanity. Despite these assurances, however, I found it hard not to read a message between the lines of their research. When they write, for example, that weird children develop their understanding of the natural world in a “culturally and experientially impoverished environment” and that they are in this way the equivalent of “malnourished children,” it’s difficult to see this as a good thing.
THE TURN THAT HENRICH, Heine, and Norenzayan are asking social scientists to make is not an easy one: accounting for the influence of culture on cognition will be a herculean task. Cultures are not monolithic; they can be endlessly parsed. Ethnic backgrounds, religious beliefs, economic status, parenting styles, rural upbringing versus urban or suburban—there are hundreds of cultural differences that individually and in endless combinations influence our conceptions of fairness, how we categorize things, our method of judging and decision making, and our deeply held beliefs about the nature of the self, among other aspects of our psychological makeup.
We are just at the beginning of learning how these fine-grained cultural differences affect our thinking. Recent research has shown that people in “tight” cultures, those with strong norms and low tolerance for deviant behavior (think India, Malaysia, and Pakistan), develop higher impulse control and more self-monitoring abilities than those from other places. Men raised in the honor culture of the American South have been shown to experience much larger surges of testosterone after insults than do Northerners. Research published late last year suggested psychological differences at the city level too. Compared to San Franciscans, Bostonians’ internal sense of self-worth is more dependent on community status and financial and educational achievement. “A cultural difference doesn’t have to be big to be important,” Norenzayan said. “We’re not just talking about comparing New York yuppies to the Dani tribesmen of Papua New Guinea.”
As Norenzayan sees it, the last few generations of psychologists have suffered from “physics envy,” and they need to get over it. The job, experimental psychologists often assumed, was to push past the content of people’s thoughts and see the underlying universal hardware at work. “This is a deeply flawed way of studying human nature,” Norenzayan told me, “because the content of our thoughts and their process are intertwined.” In other words, if human cognition is shaped by cultural ideas and behavior, it can’t be studied without taking into account what those ideas and behaviors are and how they are different from place to place.
This new approach suggests the possibility of reverse-engineering psychological research: look at cultural content first; cognition and behavior second. Norenzayan’s recent work on religious belief is perhaps the best example of the intellectual landscape that is now open for study. When Norenzayan became a student of psychology in 1994, four years after his family had moved from Lebanon to America, he was excited to study the effect of religion on human psychology. “I remember opening textbook after textbook and turning to the index and looking for the word ‘religion,’ ” he told me, “Again and again the very word wouldn’t be listed. This was shocking. How could psychology be the science of human behavior and have nothing to say about religion? Where I grew up you’d have to be in a coma not to notice the importance of religion on how people perceive themselves and the world around them.”
Norenzayan became interested in how certain religious beliefs, handed down through generations, may have shaped human psychology to make possible the creation of large-scale societies. He has suggested that there may be a connection between the growth of religions that believe in “morally concerned deities”—that is, a god or gods who care if people are good or bad—and the evolution of large cities and nations. To be cooperative in large groups of relative strangers, in other words, might have required the shared belief that an all-powerful being was forever watching over your shoulder.
If religion was necessary in the development of large-scale societies, can large-scale societies survive without religion? Norenzayan points to parts of Scandinavia with atheist majorities that seem to be doing just fine. They may have climbed the ladder of religion and effectively kicked it away. Or perhaps, after a thousand years of religious belief, the idea of an unseen entity always watching your behavior remains in our culturally shaped thinking even after the belief in God dissipates or disappears.
Why, I asked Norenzayan, if religion might have been so central to human psychology, have researchers not delved into the topic? “Experimental psychologists are the weirdest of the weird,” said Norenzayan. “They are almost the least religious academics, next to biologists. And because academics mostly talk amongst themselves, they could look around and say, ‘No one who is important to me is religious, so this must not be very important.’” Indeed, almost every major theorist on human behavior in the last 100 years predicted that it was just a matter of time before religion was a vestige of the past. But the world persists in being a very religious place.
HENRICH, HEINE, AND NORENZAYAN’S FEAR of being ostracized after the publication of the WEIRD paper turned out to be misplaced. Response to the paper, both published and otherwise, has been nearly universally positive, with more than a few of their colleagues suggesting that the work will spark fundamental changes. “I have no doubt that this paper is going to change the social sciences,” said Richard Nisbett, an eminent psychologist at the University of Michigan. “It just puts it all in one place and makes such a bold statement.”
More remarkable still, after reading the paper, academics from other disciplines began to come forward with their own mea culpas. Commenting on the paper, two brain researchers from Northwestern University argued (pdf) that the nascent field of neuroimaging had made the same mistake as psychologists, noting that 90 percent of neuroimaging studies were performed in Western countries. Researchers in motor development similarly suggested that their discipline’s body of research ignored how different child-rearing practices around the world can dramatically influence states of development. Two psycholinguistics professors suggested that their colleagues had also made the same mistake: blithely assuming human homogeneity while focusing their research primarily on one rather small slice of humanity.
At its heart, the challenge of the WEIRD paper is not simply to the field of experimental human research (do more cross-cultural studies!); it is a challenge to our Western conception of human nature. For some time now, the most widely accepted answer to the question of why humans, among all animals, have so successfully adapted to environments across the globe is that we have big brains with the ability to learn, improvise, and problem-solve.
Henrich has challenged this “cognitive niche” hypothesis with the “cultural niche” hypothesis. He notes that the amount of knowledge in any culture is far greater than the capacity of individuals to learn or figure it all out on their own. He suggests that individuals tap that cultural storehouse of knowledge simply by mimicking (often unconsciously) the behavior and ways of thinking of those around them. We shape a tool in a certain manner, adhere to a food taboo, or think about fairness in a particular way, not because we individually have figured out that behavior’s adaptive value, but because we instinctively trust our culture to show us the way. When Henrich asked Fijian women why they avoided certain potentially toxic fish during pregnancy and breastfeeding, he found that many didn’t know or had fanciful reasons. Regardless of their personal understanding, by mimicking this culturally adaptive behavior they were protecting their offspring. The unique trick of human psychology, these researchers suggest, might be this: our big brains are evolved to let local culture lead us in life’s dance.
The applications of this new way of looking at the human mind are still in the offing. Henrich suggests that his research about fairness might first be applied to anyone working in international relations or development. People are not “plug and play,” as he puts it, and you cannot expect to drop a Western court system or form of government into another culture and expect it to work as it does back home. Those trying to use economic incentives to encourage sustainable land use will similarly need to understand local notions of fairness to have any chance of influencing behavior in predictable ways.
Because of our peculiarly Western way of thinking of ourselves as independent of others, this idea of the culturally shaped mind doesn’t go down very easily. Perhaps the richest and most established vein of cultural psychology—that which compares Western and Eastern concepts of the self—goes to the heart of this problem. Heine has spent much of his career following the lead of a seminal paper published in 1991 by Hazel Rose Markus, of Stanford University, and Shinobu Kitayama, who is now at the University of Michigan. Markus and Kitayama suggested that different cultures foster strikingly different views of the self, particularly along one axis: some cultures regard the self as independent from others; others see the self as interdependent. The interdependent self—which is more the norm in East Asian countries, including Japan and China—connects itself with others in a social group and favors social harmony over self-expression. The independent self—which is most prominent in America—focuses on individual attributes and preferences and thinks of the self as existing apart from the group.
The classic “rod and frame” task: Is the line in the center vertical?
That we in the West develop brains that are wired to see ourselves as separate from others may also be connected to differences in how we reason, Heine argues. Unlike the vast majority of the world, Westerners (and Americans in particular) tend to reason analytically as opposed to holistically. That is, the American mind strives to figure out the world by taking it apart and examining its pieces. Show a Japanese and an American the same cartoon of an aquarium, and the American will remember details mostly about the moving fish while the Japanese observer will likely later be able to describe the seaweed, the bubbles, and other objects in the background. Shown another way, in a different test analytic Americans will do better on something called the “rod and frame” task, where one has to judge whether a line is vertical even though the frame around it is skewed. Americans see the line as apart from the frame, just as they see themselves as apart from the group.
Heine and others suggest that such differences may be the echoes of cultural activities and trends going back thousands of years. Whether you think of yourself as interdependent or independent may depend on whether your distant ancestors farmed rice (which required a great deal of shared labor and group cooperation) or herded animals (which rewarded individualism and aggression). Heine points to Nisbett at Michigan, who has argued (pdf) that the analytic/holistic dichotomy in reasoning styles can be clearly seen, respectively, in Greek and Chinese philosophical writing dating back 2,500 years. These psychological trends and tendencies may echo down generations, hundreds of years after the activity or situation that brought them into existence has disappeared or fundamentally changed.
And here is the rub: the culturally shaped analytic/individualistic mind-sets may partly explain why Western researchers have so dramatically failed to take into account the interplay between culture and cognition. In the end, the goal of boiling down human psychology to hardwiring is not surprising given the type of mind that has been designing the studies. Taking an object (in this case the human mind) out of its context is, after all, what distinguishes the analytic reasoning style prevalent in the West. Similarly, we may have underestimated the impact of culture because the very ideas of being subject to the will of larger historical currents and of unconsciously mimicking the cognition of those around us challenges our Western conception of the self as independent and self-determined. The historical missteps of Western researchers, in other words, have been the predictable consequences of the WEIRD mind doing the thinking.
We Respond To Individuals, Not Mass Humanity
Rob Portman, Republican senator from Ohio and one-time contender for Romney’s would-be VP slot, announced on Friday that he has reversed his very public stance against gay marriage. As the well-known conservative stated in an Op-Ed piece on Friday, he now believes that “if two people are prepared to make a lifetime commitment to love and care for each other in good times and in bad, the government shouldn’t deny them the opportunity to get married.”
What’s the reason behind this seemingly sudden change of heart? According to Portman, it can all be credited to his son, Will — an openly gay 21-year-old man who came out of the closet to his conservative father two years ago.
This seems very similar to other political idiosyncrasies that we’ve seen in the past when there are family members involved. Dick Cheney’s support of gay marriage within an otherwise conservative platform is largely due to his love for his lesbian daughter, Mary. And these two politicians’ positions on gay marriage are not all that different from the political views of Sarah Palin, mother of a disabled child who is opposed to all government spending — except, of course, when that spending is earmarked for programs benefiting disabled children.
These views have often been criticized by the media, given snarky names, and demeaned as narcissistic or self-centered. And sure, it is certainly possible that these politicians (and the many others like them) are consciously picking and choosing their political platforms in a selfish way to maximize self-interest. However, considering what we know about social psychology, it’s fairly short-sighted to assume that these political about-faces are always the conscious result of these intentionally selfish motives. More likely, they are actually the result of a common psychological phenomenon that impacts all of our decisions — the identifiable victim effect.
Broadly speaking, the identifiable victim effect states one thing: Individual stories will have a far greater sway on our attitudes, intentions, and behavior than any long list of numbers, statistics, and facts. For example, if you see an ad for Save the Children with a picture of a single, emaciated Malian child named Rokia, you will donate significantly more to the charity (about 50% more, on average) than if you see a message listing the statistics about how many people are starving throughout all of Africa.
So why do individual stories have such a greater pull on us than statistics — especially when, rationally, learning about millions of people being impacted by something should impact your attitudes and actions much more than hearing about just one?
First of all, these individual stories are vivid. Stories about people are graphic, full of individual details, and typically involve strong visual imagery. Similarly, our experiences with close loved ones are vivid; we know a lot about their lives and individual personalities, and we come into frequent contact with them. Decades of research has informed us that vivid information has a much stronger influence on what people think and believe than dull, boring statistics. Even if the facts themselves are supposed to be “shocking,” numbers on a page will never hit us at the same vivid level as a picture of a wounded puppy or a video of a crying little girl. Pure information will never really impact us in the same way that seeing something happen to our friends or loved ones will.
Secondly, in addition to being vivid and full of graphic details, individual accounts are emotional, and emotion is an invaluable component of persuasion. For example, men and women asked to donate money to support the charity March of Dimes would consistently donate more money if they were asked outside of a church as they walked in to confession (aka while they felt fairly guilty) than if they were asked when they were walking out of confession (aka when their guilt had already been resolved). We use emotions as a cue for what we should think and do. If you feel guilty? Do something good to resolve it. If you feel happy? Do something good to maintain that positive state. Without even realizing it, our emotions will sway our attitudes and actions — and no facts or numbers will manage to hit our emotions as strongly as an individual story of heartache and woe, or the thoughts, feelings, and lives of the people that we love and care about. In fact, as I’ve written about before, there are entire lines of research devoted to informing us about all of the ways in which our emotions impact our moral and political judgments. (Spoiler Alert: They impact them a lot.)
So what does a bunch of research on Mali, March of Dimes, and starving children have to do with Portman’s new attitude towards gay marriage? Well, it will all click together once we realize that the broad logic underlying the identifiable victim effect is not necessarily about the presence of “victims” themselves. Rather, the main point is that it’s harder to work up the empathy and the emotional connection to care about numbers and figures to the point where they will actually sway your opinions and political actions. Plenty of journalists have remarked recently that Portman is showing a lack of empathy because he couldn’t bring himself to care about other people’s children. Maybe so — and, certainly, there are plenty of people whose attitudes towards important political issues aren’t solely determined by the lives and interests of their friends and family. After all, you can certainly be in favor of legalizing gay marriage without being closely related to someone who is gay. But even so, the fact remains that it’s much easier to become emotionally invested in a cause when there’s a name and a face tied to it — especially when that name and that face belong to someone who is particularly close to you. The more you’re emotionally invested in something, the more you can identify a single person being impacted by the issue in vivid, emotional detail — and the more likely that person is to sway your attitudes.
What this means is that the tendency for people like Portman and Cheney to only care about gay marriage once they have children who are affected, or the tendency for people like Palin to support government spending on a cause that would impact her own son, is not out of the ordinary. In fact, it’s a core aspect of human cognitive biases. Of course issues that impact your own family members are going to have a greater pull on your beliefs and political attitudes. They are going to involve individual people, they are going to be more vivid, and they will be more emotional. It doesn’t have to be knowingly selfish, and it doesn’t have to involve conscious self-interest (although it could). But, to give these three (and the many others like them) the benefit of the doubt, it could simply be that they, like most others, don’t receive the emotional pull from numbers and figures that they do from close family members.
The point is, regardless of political affiliation, it’s not necessarily a sign of narcissism or selfishness if someone is susceptible to the effects of identifiable victims or individual stories. It’s just a cognitive bias that all of us face, which we need to be aware of if we wish to understand why people make certain exceptions to their political beliefs and how we can get people to care about certain political issues if they are not closely related to anyone being affected by them. It’s certainly not out of the ordinary for people to fall victim to identifiable victims. So, if you are a Republican and you wish to defend Portman from people claiming that he lacks empathy, it should comfort you to know that his empathic response is actually incredibly normal. And, if you are a Democrat and you are arguing that it angers you when politicians like Portman only hold empathic views for issues that personally impact them, you should know that it’s now your job, if you wish to be an effective persuader, to figure out how to create identifiable stories and vivid accounts for the issues that you care about, rather than relying on numbers and figures and wondering why they don’t evoke a more powerful reaction from politicians.
After all, the identifiable victim effect isn’t going anywhere anytime soon. Even Mother Teresa fell victim to it. As she put it, “If I look at the mass, I will never act. If I look at the one, I will.”
Of course, Stalin also noted that “the death of a single Russian soldier is a tragedy, [whereas] a million deaths is a statistic.” But, eh — let’s stick with quoting Mother Teresa.
Many people are wondering what this means for the future of same-sex marriage in the United States. Why exactly is this such a contentious issue, and why do Americans’ opinions seem to differ so greatly? When it comes to marriage equality, why can’t we all just get along?
Where Does a Same-Sex Marriage Attitude Come From?
The reason why only nine states in the USA have legalized same-sex marriage most likely has something to do with the large number of senators (and, presumably, American citizens) who are against it. But other than the obvious factors (like religion and age), what else might make someone especially likely to reject the idea of same-sex marriage?
We might find some answers by looking at empirical research on the psychological roots of political ideology. Conservative social attitudes (which typically include an opposition to same-sex marriage) are strongly related to preferences for stability, order, and certainty. In fact, research suggests that these attitudes may be part of a compensatory mental process motivated by anxiety; people who feel particularly threatened by uncertainty cope with it by placing great importance on norms, rules, and rigidity. As a result, people who are particularly intolerant of ambiguity, live in unstable circumstances, or simply have an innate need for order, structure, and closure are more likely to hold attitudes that promote rigidity and conventional social norms – meaning that they are most likely to be against same-sex marriage.
What does it mean to be intolerant of ambiguity? Well, would you rather see the world around you as clear and straightforward, or would you rather see everything as complicated and multidimensional? People who fall into the first category are much more likely to want everything in life (including gender roles, interpersonal relationships, and conceptualizations of marriage) to be dichotomous, rigid, and clear-cut. “Ambiguity-intolerant” people are also, understandably, more likely to construe ambiguous situations as particularly threatening. After all, if you are inherently not comfortable with the idea of a complicated, shades-of-gray world, any situation that presents you with this type of uncertainty will be seen as potentially dangerous. This is likely what’s happening when a conservative sees an ambiguous situation (e.g. a same-sex couple’s potential marriage) as a source of threat (e.g. to the sanctity of marriage).
Why Is Attitude Change So Hard?
After reading the section above, it should be fairly clear that there’s a problem with how both pro- and anti-same-sex-marriage proponents are viewing the other side’s point of view. The issue is not really that there’s one way to see the issue, and the other side simply isn’t seeing it that way; the issue is that both sides are focusing on entirely different things.
Overall, liberal ideology paints society as inherently improvable, and liberals are therefore motivated by a desire for eventual societal equality; conservative ideology paints society as inherently hierarchical, and conservatives are therefore motivated by a desire to make the world as stable and safe as possible. So while the liberals are banging their heads against the wall wondering why conservatives are against human rights, the conservatives are sitting on the other side wondering why on earth the liberals would want to create chaos, disorder, and dangerous instability. It boils down to a focus on equality versus a focus on order. Without understanding that, no one’s ever going to understand what the other side wants to know and hear, and all sides’ arguments will fall on deaf ears.*
But there’s another mental process at play. When someone has a strong attitude about something (liberal or conservative), the mind works very hard to protect it. When faced with information about a given topic, people pay attention to (and remember) the arguments that strengthen their attitudes, and they ignore, forget, or misremember any arguments that go against them. Even if faced with evidence that proves how a given attitude is undeniably wrong, people will almost always react by simply becoming more polarized; they will leave the interaction even more sure that their attitude is correct than they were before. So even if each side understood how to frame their arguments – even if liberals pointed out the ways in which same-sex marriage rights would help stabilize the economy, or conservatives argued to liberals that they could provide equal rights through civil unions rather than through marriage – it’s still very unlikely that either side would successfully change anyone’s attitude about anything.
If Attitudes Are So Stubborn, How Have They Changed In The Past?
So how did it happen? As one specific example, how did New York end up legalizing same-sex marriage in June 2011 with a 33-29 vote?
I’d wager a guess that part of it had to do with the five other states that had legalized same-sex marriage by that point and seen their heterosexual marriages remain just as sacred as they ever were before. As same-sex marriage becomes more commonplace (and heterosexual marriages remain unaffected), it will also become less threatening; as it becomes less threatening, it will evoke less of a threat response from people who don’t deal well with ambiguity.
But I can offer another serious contender: Amendment S5857-2011.
This amendment, which states that religious institutions opposed to same-sex marriage do not have to perform them, was passed shortly before the same-sex marriage legalization bill. There’s a very powerful social norm at work in our interactions, and it shapes how we respond to people’s attempts at persuasion: When we feel like someone has conceded something to us, we feel pressured to concede something back. This is called a reciprocal concession.
Let’s say a Girl Scout comes to your door and asks if she can sell you ten boxes of cookies. You feel bad saying no, but your waistline doesn’t need the cookies and your wallet doesn’t need the expense. After you refuse the sale, she responds by asking if you’d like to purchase five boxes instead. You then change your mind and agree to buy five boxes; after all, if the girl scout was willing to concede those five boxes of cookies, you feel pressured to concede something in return – like some of your money. That’s the power of reciprocal concessions.
This, in my opinion, is a good contender to explain what happened in the New York State Senate back in 2011. The vote was dead even: 31 for, 31 against. When the Senate passed the Amendment, this was a concession from the pro-same-sex-marriage side, which, according to the logic of reciprocal concessions, should have encouraged no-voting senators to reciprocate by conceding their votes. For two of them, it worked.
So now, we’ve seen that personality, ideology, and attitudes can all play a role in our attitudes towards same-sex marriage, and that votes might even swing because of techniques that we could have learned from our local Girl Scouts. This means is that it’s absolutely essential for everyone involved in the debate this week to understand that we won’t all respond to the same types of arguments, reasoning, or pleas. Rather, it is imperative that we consider how a focus on equality or stability might shape what information we pay attention to, and what values we deem most important.
1 I recognize that these are generalizations, and these descriptions do not accurately represent every liberal person and every conservative person. I also recognize that individual political attitudes are more complex than this distinction may make them out to be, and that religious ideology plays a very strong role as well. However, the focus on equality vs. stability is, at its core, the fundamental difference between liberal and conservative political ideology.
Repair Stroke Damage
Mice can recover from physically debilitating strokes that damage the primary motor cortex, the region of the brain that controls most movement in the body if the mice are quickly subjected to physical conditioning that rapidly “rewires” a different part of the brain to take over lost function, Johns Hopkins researchers have found. The research uses precise, intense and early treatment.
The researchers first trained normal but hungry mice to reach for and grab pellets of food in a precise way that avoided spilling the pellets and gave them the pellets as a reward. They reached maximum accuracy after seven to nine training days.
Then the researchers created experimental small strokes that left the mice with damage to the primary motor cortex. Predictably, the reaching and grasping precision disappeared, but a week of retraining, begun just 48 hours after the stroke, led the mice to again successfully perform the task with a degree of precision comparable to before the stroke.
Subsequent brain studies showed that although many nerve cells in the primary motor cortex were permanently damaged by the stroke, a different part of the brain called the medial premotor cortex adapted to control reaching and grasping.
“The function of the medial premotor cortex is not well-understood, but in this case it seemed to take over the functions associated with the reach-and-grab task in his experimental mice,” said study leader Steven R. Zeiler, M.D., Ph.D., an assistant professor of neurology at the Johns Hopkins University School of Medicine.
The researchers also report that otherwise healthy mice trained to reach and grasp pellets did not lose this ability after experiencing a stroke in the medial premotor cortex, which suggests that this part of the brain typically plays no role in those activities, and the existence of untapped levels of brain plasticity might be exploited to help human stroke victims.
Zeiler said another key finding in his research team’s mouse model was a reduction of the level of parvalbumin, a protein that marks the identity and activity of inhibitory neurons that keep the brain’s circuitry from overloading. With lower levels of parvalbumin in the medial premotor cortex, it appears the “brakes” are essentially off, allowing for the kind of activity required to reorganize and rewire the brain to take on new functions — in this case the ability to reach and grasp.
To prove that the learned functions had moved to the medial premotor cortex in the mice, the researchers induced strokes there. Again, the new skills were lost. And again, the mice could be retrained.
The research team’s next steps with their mouse model include evaluating the effect of drugs and timing of physical rehab on long-term recovery. The research could offer insight into whether humans should receive earlier and more aggressive rehab.
As many as 60 percent of stroke patients are currently left with diminished use of an arm or leg, and one-third need placement in a long-term care facility.
There's A Lot We Don't Know
In the early nineteen-nineties, David Poeppel, then a graduate student at M.I.T. (and a classmate of mine)—discovered an astonishing thing. He was studying the neurophysiological basis of speech perception, and a new technique had just come into vogue, called positron emission tomography (PET). About half a dozen PET studies of speech perception had been published, all in top journals, and David tried to synthesize them, essentially by comparing which parts of the brain were said to be active during the processing of speech in each of the studies. What he found, shockingly, was that there was virtually no agreement. Every new study had published with great fanfare, but collectively they were so inconsistent they seemed to add up to nothing. It was like six different witnesses describing a crime in six different ways.
This was terrible news for neuroscience—if six studies led to six different answers, why should anybody believe anything that neuroscientists had to say? Much hand-wringing followed. Was it because PET, which involves injecting a radioactive tracer into the brain, was unreliable? Were the studies themselves somehow sloppy? Nobody seemed to know.
And then, surprisingly, the field prospered. Brain imaging became more, not less, popular. The technique of PET was replaced with the more flexible technique of functional magnetic resonance imaging (fMRI), which allowed scientists to study people’s brains without the use of the risky radioactive tracers, and to conduct longer studies that collected more data and yielded more reliable results. Experimental methods gradually become more careful. As fMRI machines become more widely available, and methods became more standardized and refined, researchers finally started to find a degree of consensus between labs.
Meanwhile, neuroscience started to go public, in a big way. Fancy color pictures of brains in action became a fixture in media accounts of the human mind and lulled people into a false sense of comprehension. (In a feature for the magazine titled “Duped,” Margaret Talbot described research at Yale that showed that inserting neurotalk into a papers made them more convincing.) Brain imaging, which was scarcely on the public’s radar in 1990, became the most prestigious way of understanding human mental life. The prefix “neuro” showed up everywhere: neurolaw, neuroeconomics, neuropolitics. Neuroethicists wondered about whether you could alter someone’s prison sentence based on the size of their neocortex.
And then, boom! After two decades of almost complete dominance, a few bright souls started speaking up, asking: Are all these brain studies really telling us much as we think they are? A terrific but unheralded book published last year, “Neuromania,” worried about our growing obsession with brain imaging. A second book, by Raymond Tallis, published this year, invoked the same term and made similar arguments. In the book “Out of our Heads,” the philosopher Alva Noe wrote, ”It is easy to overlook the fact that images… made by fMRI and PET are not actually pictures of the brain in action.” Instead, brain images are elaborate reconstructions that depend on complex mathematical assumptions that can, as one study earlier this year showed, sometimes yield slightly different results when analyzed on different types of computers.
Last week, worries like these, and those of thoughtful blogs like Neuroskeptic and The Neurocritic, finally hit the mainstream, in the form of a blunt New York Times op-ed, in which the journalist Alissa Quart declared, “I applaud the backlash against what is sometimes called brain porn, which raises important questions about this reductionist, sloppy thinking and our willingness to accept seemingly neuroscientific explanations for, well, nearly everything.”
Quart and the growing chorus of neuro-critics are half right: our early-twenty-first-century world truly is filled with brain porn, with sloppy reductionist thinking and an unseemly lust for neuroscientific explanations. But the right solution is not to abandon neuroscience altogether, it’s to better understand what neuroscience can and cannot tell us, and why.
The first and foremost reason why we shouldn’t simply disown neuroscience altogether is an obvious one: if we want to understand our minds, from which all of human nature springs, we must come to grips with the brain’s biology. The second is that neuroscience has already told us lot, just not the sort of things we may think it has. What gets play in the daily newspaper is usually a study that shows some modest correlation between a sexy aspect of human behavior, with headlines like “FEMALE BRAIN MAPPED IN 3D DURING ORGASM” and “THIS IS YOUR BRAIN ON POKER”
But a lot of those reports are based on a false premise: that neural tissue that lights up most in the brain is the only tissue involved in some cognitive function. The brain, though, rarely works that way. Most of the interesting things that the brain does involve many different pieces of tissue working together. Saying that emotion is in the amygdala, or that decision-making is the prefrontal cortex, is at best a shorthand, and a misleading one at that. Different emotions, for example, rely on different combinations of neural substrates. The act of comprehending a sentence likely involves Broca’s area (the language-related spot on the left side of the brain that they may have told you about in college), but it also draws on the parts of the brain in the temporal lobe that analyze acoustic signals, and part of sensorimotor cortex and the basal ganglia become active as well. (In congenitally blind people, some of the visual cortex also plays a role.) It’s not one spot, it’s many, some of which may be less active but still vital, and what really matters is how vast networks of neural tissue work together.
The smallest element of a brain image that an fMRI can pick out is something called a voxel. But voxels are much larger than neurons, and, in the long run, the best way to understand the brain is probably not by asking which particular voxels are most active in a given process. It will instead come from asking how the many neurons work together within those voxels. And for that, fMRI may turn not out not to be the best technique, despite its current convenience. It may ultimately serve instead as the magnifying glass that leads us to the microscope we really need. If most of the action in the brain lies at the level of neurons rather than voxels or brain regions (which themselves often contain hundreds or thousands of voxels), we may need new methods, like optogenetics or automated, robotically guided tools for studying individual neurons; my own best guess is that we will need many more insights from animal brains before we can fully grasp what happens in human brains. Scientists are also still struggling to construct theories about how arrays of individual neurons relate complex behaviors, even in principle. Neuroscience has yet find its Newton, let alone its Einstein.
But that’s no excuse for giving up. When Darwin wrote “The Origin of Species,” nobody knew what DNA was for, and nobody imagined that we would eventually be sequencing it.
The real problem with neuroscience today isn’t with the science—though plenty of methodological challenges still remain—it’s with the expectations. The brain is an incredibly complex ensemble, with billions of neurons coming into—and out of—play at any given moment. There will eventually be neuroscientific explanations for much of what we do; but those explanations will turn out to be incredibly complicated. For now, our ability to understand how all those parts relate is quite limited, sort of like trying to understand the political dynamics of Ohio from an airplane window above Cleveland.
Which may be why the best neuroscientists today may be among those who get the fewest headlines, like researchers studying the complex dynamics that enter into understanding a single word. As Poeppel says, what we need now is “the meticulous dissection of some elementary brain functions, not ambitious but vague notions like brain-based aesthetics, when we still don’t understand how the brain recognizes something as basic as a straight line.”
The sort of short, simple explanations of complex brain functions that often make for good headlines rarely turn out to be true. But that doesn’t mean that there aren’t explanations to be had, it just means that evolution didn’t evolve our brains to be easily understood.
Robot Therapists For Autistic Kids
The number of children in the United States who are being diagnosed with autism is on the rise.
Traditional therapies to help autistic children develop their social skills can be time consuming and expensive. Recently, however, scientific researchers at Vanderbilt University, in Nashville, in the US state of Tennessee, may have overcome that challenge with the help of a robot.
The pioneering work is still in its earliest stages, but the results appear to be promising.
One of the children involved with the study is three-year-old Aiden. Learning doesn’t come easily for him. As a result of his autism, Aiden often struggles with social interactions. He’s been working with a therapist to address his learning challenges, but he’s also getting help from an unlikely source – a robot named NAO.
NAO is a commercial robot developed in France. What makes the robot different is the “intelligent environment” researchers have built. This aspect of his machinery consists of web cameras and TVs to track Aiden’s head movements and analyse his emotional state.
The information is then transmitted to a computer that programs the robot to respond to Aiden’s needs and instruct him through verbal prompts.
When NAO says, “Look here, Aiden,” incredibly, the boy responds to the command from his robotic “therapist” almost every time. That’s not always the case when Aiden is prompted by a human.
Researchers are still not fully sure why this happens, but seem convinced what makes robot therapy so successful is NAO’s ability to communicate with autistic children in a way most humans cannot.
Nilanjan Sarkar, a professor of mechanical and computer engineering at Vanderbilt, lead the study.
He told me, “If the robot determines that some of the gestures a child is not able to do, it can correct the gestures in a playful way, having them involved, not as a teacher or student, but as a playmate.“
Sarkar says this seems to be reassuring to autistic children who can quickly become disengaged from a teaching session if they believe they are not measuring up to the expectations of their human therapist.
Sarkar says he got the idea for creating the robot after visiting a relative in India whose child also suffers from autism. He realised there may be a need for robot “therapists” after observing how that child responded well to technology, but struggled with human social interactions.
Sarkar’s breakthrough work comes at a critical time. It’s hoped robots like NAO will play a crucial role in dealing with the increasing number of cases of autism in the US. Today, one-in-88 children is being diagnosed with autism.
Zachary Warren, director of the Vanderbilt Kennedy Center Treatment and Research Institute for Autism Spectrum Disorders, says Sarkar’s research is still new, but early results suggest a promising future for robot therapy.
“This model of thinking, of using definite tools, robots and computers at critical periods of development, might be one of the big contributions of this work. It might prime children for learning complex tasks. We might have something very impactful here,” he told me.
Warren cautions robot therapy is not designed, and could never replace human, therapists. Still, he is hopeful it will become a valuable companion tool to more traditional therapies.
He says as the number of autism cases continues to rise, budgets for treatment, will become even more constrained.
Robot therapies, he hopes, will help offset some of those costs, and help autistic children everywhere gain the social skills they need.
(This is an extract from an article in Scientific American about Iron Man 3 Tech: Iron Man 3
In real life science sensory feedback increases learning for brain-machine interface. In 2010, Aaron Suminski, Nicholas Hatsopoulos, and colleagues at the University of Chicago used a “sleeve” placed over a monkey’s arm to help improve learn how to move a cursor on a computer screen driven by recording activity in motor cortex. Using visual and somatosensory feedback together the monkeys learned how to control the cursor much faster and more accurately than without those sensations.
In 2011, a research team at the Duke University Center for Neuroengineering headed by Miguel Nicolelis, a pioneer and leader in the area of brain machine interface, trained two monkeys using brain activity to control and move a virtual hand.
The critical piece in this experiment - and a requirement for functional training with the fictional Iron Man exoskeleton - was that electrical activation in both the sensory and motor parts of the brains were used. Motor signals were used to drive the controller and then feedback was given directly into the brain by stimulating the sensory cortex when the monkeys made accurate movements. This advance actually provides patterns of electrical stimulation to the brain that mimic sensory inputs in movement.
The full article on using exoskeletons and computer controlled feedback, is here
Think about the last time you were about to interview for a job, speak in front of an audience, or go on a first date. To quell your nerves, chances are you spent time preparing – reading up on the company, reviewing your slides, practicing your charming patter. People facing situations that induce anxiety typically take comfort in engaging in preparatory activities, inducing a feeling of being back in control and reducing uncertainty.
While a little extra preparation seems perfectly reasonable, people also engage in seemingly less logical behaviors in such situations. Here’s one person’s description from our research:
I pound my feet strongly on the ground several times, I take several deep breaths, and I "shake" my body to remove any negative energies. I do this often before going to work, going into meetings, and at the front door before entering my house after a long day.
While we wonder what this person’s co-workers and neighbors think of their shaky acquaintance, such rituals – the symbolic behaviors we perform before, during, and after meaningful event – are surprisingly ubiquitous, across culture and time. Rituals take an extraordinary array of shapes and forms. At times performed in communal or religious settings, at times performed in solitude; at times involving fixed, repeated sequences of actions, at other times not. People engage in rituals with the intention of achieving a wide set of desired outcomes, from reducing their anxiety to boosting their confidence, alleviating their grief to performing well in a competition – or even making it rain.
Recent research suggests that rituals may be more rational than they appear. Why? Because even simple rituals can be extremely effective. Rituals performed after experiencing losses – from loved ones to lotteries – do alleviate grief, and rituals performed before high-pressure tasks – like singing in public – do in fact reduce anxiety and increase people’s confidence. What’s more, rituals appear to benefit even people who claim not to believe that rituals work. While anthropologists have documented rituals across cultures, this earlier research has been primarily observational. Recently, a series of investigations by psychologists have revealed intriguing new results demonstrating that rituals can have a causal impact on people’s thoughts, feelings, and behaviors.
Basketball superstar Michael Jordan wore his North Carolina shorts underneath his Chicago Bulls shorts in every game; Curtis Martin of the New York Jets reads Psalm 91 before every game. And Wade Boggs, former third baseman for the Boston Red Sox, woke up at the same time each day, ate chicken before each game, took exactly 117 ground balls in practice, took batting practice at 5:17, and ran sprints at 7:17. (Boggs also wrote the Hebrew word Chai (“living”) in the dirt before each at bat. Boggs was not Jewish.) Do rituals like these actually improve performance? In one recent experiment, people received either a “lucky golf ball” or an ordinary golf ball, and then performed a golf task; in another, people performed a motor dexterity task and were either asked to simply start the game or heard the researcher say “I’ll cross fingers for you” before starting the game. The superstitious rituals enhanced people’s confidence in their abilities, motivated greater effort – and improved subsequent performance. These findings are consistent with research in sport psychology demonstrating the performance benefits of pre-performance routines, from improving attention and execution to increasing emotional stability and confidence.
Humans feel uncertain and anxious in a host of situations beyond laboratory experiments and sports – like charting new terrain. In the late 1940s, anthropologist Bronislaw Malinowski lived among the inhabitants of islands in the South Pacific Ocean. When residents went fishing in the turbulent, shark-infested waters beyond the coral reef, they performed specific rituals to invoke magical powers for their safety and protection. When they fished in the calm waters of a lagoon, they treated the fishing trip as an ordinary event and did not perform any rituals. Malinowski suggested that people are more likely to turn to rituals when they face situations where the outcome is important and uncertain and beyond their control – as when sharks are present.
Rituals in the face of losses such as the death of a loved one or the end of a relationship (or loss of limb from shark bite) are ubiquitous. There is such a wide variety of known mourning rituals that they can even be contradictory: crying near the dying is viewed as disruptive by Tibetan Buddhists but as a sign of respect by Catholic Latinos; Hindu rituals encourage the removal of hair during mourning, while growing hair (in the form of a beard) is the preferred ritual for Jewish males.
People perform mourning rituals in an effort to alleviate their grief – but do they work? Our research suggests they do. In one of our experiments, we asked people to recall and write about the death of a loved one or the end of a close relationship. Some also wrote about a ritual they performed after experiencing the loss:
I used to play the song by Natalie Cole “I miss you like crazy” and cry every time I heard it and thought of my mom.
I looked for all the pictures we took together during the time we dated. I then destroyed them into small pieces (even the ones I really liked!), and then burnt them in the park where we first kissed.
We found that people who wrote about engaging in a ritual reported feeling less grief than did those who only wrote about the loss.
We next examined the power of rituals in alleviating disappointment in a more mundane context: losing a lottery. We invited people into the laboratory and told them they would be part of a random drawing in which they could win $200 on the spot and leave without completing the study. To make the pain of losing even worse, we even asked them to think and write about all the ways they would use the money. After the random draw, the winner got to leave, and we divided the remaining “losers” into two groups. Some people were asked to engage in the following ritual:
Step 1. Draw how you currently feel on the piece of paper onyour desk for two minutes.
Step 2. Please sprinkle a pinch of salt on the paper with your drawing.
Step 3. Please tear up the piece of paper.
Step 4. Count up to ten in your head five times.
Other people simply engaged in a task (drawing how they felt) for the same amount of time. Finally, everyone answered questions about their level of grief, such as “I can’t help feeling angry and upset about the fact that I did not win the $200.” The results? Those who performed a ritual after losing in the lottery reported feeling less grief. Our results suggest that engaging in rituals mitigates grief caused by both life-changing losses (such as the death of a loved one) and more mundane ones (losing a lottery).
Rituals appear to be effective, but, given the wide variety of rituals documented by social scientists, do we know which types of rituals work best? In a recent study conducted in Brazil, researchers studied people who perform simpatias: formulaic rituals that are used for solving problems such as quitting smoking, curing asthma, and warding off bad luck. People perceive simpatias to be more effective depending on the number of steps involved, the repetition of procedures, and whether the steps are performed at a specified time. While more research is needed, these intriguing results suggest that the specific nature of rituals may be crucial in understanding when they work – and when they do not.
Despite the absence of a direct causal connection between the ritual and the desired outcome, performing rituals with the intention of producing a certain result appears to be sufficient for that result to come true. While some rituals are unlikely to be effective – knocking on wood will not bring rain – many everyday rituals make a lot of sense and are surprisingly effective.
When the hippocampus, the brain’s primary learning and memory center, is damaged, complex new neural circuits — often far from the damaged site — arise to compensate for the lost function, say life scientists from UCLA and Australia who have pinpointed the regions of the brain involved in creating those alternate pathways.
The researchers found that parts of the prefrontal cortex take over when the hippocampus is disabled. Their breakthrough discovery, the first demonstration of such neural-circuit plasticity, could potentially help scientists develop new treatments for Alzheimer’s disease, stroke, and other conditions involving damage to the brain.
In the research, UCLA‘s Michael Fanselow and Moriel Zelikowsky in collaboration with Bryce Vissel, a group leader of the neuroscience research program at Sydney’s Garvan Institute of Medical Research, conducted laboratory experiments with rats showing that the rodents were able to learn new tasks even after damage to the hippocampus.
While the rats needed additional training, they nonetheless learned from their experiences — a surprising finding.
“I expect that the brain probably has to be trained through experience,” said Fanselow, a professor of psychology and member of the UCLA Brain Research Institute, who was the study’s senior author. “In this case, we gave animals a problem to solve.”
After discovering the rats could, in fact, learn to solve problems, Zelikowsky, a graduate student in Fanselow’s laboratory, traveled to Australia, where she worked with Vissel to analyze the anatomy of the changes that had taken place in the rats’ brains. Their analysis identified significant functional changes in two specific regions of the prefrontal cortex.
Compensating for damage from Alzheimer’s
“Interestingly, previous studies had shown that these prefrontal cortex regions also light up in the brains of Alzheimer’s patients, suggesting that similar compensatory circuits develop in people,” Vissel said. “While it’s probable that the brains of Alzheimer’s sufferers are already compensating for damage, this discovery has significant potential for extending that compensation and improving the lives of many.”
The hippocampus, a seahorse-shaped structure where memories are formed in the brain, plays critical roles in processing, storing and recalling information. The hippocampus is highly susceptible to damage through stroke or lack of oxygen and is critically involved in Alzheimer’s disease, Fanselow said.
“Until now, we’ve been trying to figure out how to stimulate repair within the hippocampus,” he said. “Now we can see other structures stepping in and whole new brain circuits coming into being.”
Zelikowsky said she found it interesting that sub-regions in the prefrontal cortex compensated in different ways, with one sub-region — the infralimbic cortex — silencing its activity and another sub-region — the prelimbic cortex — increasing its activity.
“If we’re going to harness this kind of plasticity to help stroke victims or people with Alzheimer’s,” she said, “we first have to understand exactly how to differentially enhance and silence function, either behaviorally or pharmacologically. It’s clearly important not to enhance all areas. The brain works by silencing and activating different populations of neurons. To form memories, you have to filter out what’s important and what’s not.”
Complex behavior always involves multiple parts of the brain communicating with one another, with one region’s message affecting how another region will respond, Fanselow noted. These molecular changes produce our memories, feelings and actions.
“The brain is heavily interconnected — you can get from any neuron in the brain to any other neuron via about six synaptic connections,” he said. “So there are many alternate pathways the brain can use, but it normally doesn’t use them unless it’s forced to. Once we understand how the brain makes these decisions, then we’re in a position to encourage pathways to take over when they need to, especially in the case of brain damage.
“Behavior creates molecular changes in the brain; if we know the molecular changes we want to bring about, then we can try to facilitate those changes to occur through behavior and drug therapy,” he added. I think that’s the best alternative we have. Future treatments are not going to be all behavioral or all pharmacological, but a combination of both.”
Why Buddha Isn’t Dead–and Psychology Still Isn’t Really a Science
(John Horgan in Scientific American)
I’ve been mulling over how I should follow up my previous post, the one with the subtle headline “Crisis in Psychiatry!” My meta-theme is that science has failed to deliver a potent theory of and therapy for the human mind. I’ve made this same point previously, notably in my 1996 Scientific American article “Why Freud Isn’t Dead” and my 1999 book The Undiscovered Mind, which was originally also titled “Why Freud Isn’t Dead.”
I was faulted for being too critical in those works, but in retrospect I probably wasn’t critical enough. My “Freud isn’t dead” argument went as follows: In spite of enduring vicious attacks since its inception, Freudian psychoanalysis endures as a theory of and therapy of the mind not because it has been scientifically validated. Far from it. Psychoanalysis is arguably analogous to phlogiston, the pseudo-stuff that alchemists once thought was the basis of combustion and other chemical phenomena.
Psychoanalysis endures because science has not produced an obviously superior paradigm to replace it. If psychoanalysis is phlogiston, so are all the supposedly new-and-improved mind-paradigms proposed over the past century, including behaviorism, cognitive science, behavioral genetics, evolutionary psychology and neuroscience.
An effective mind-paradigm should produce effective treatments for mental illness, right? Countless new psychotherapies have emerged since Freud’s heyday, but studies have shown that all “talking cures” are roughly as effective as each other, or ineffective. This is the notorious Dodo effect. (Those of you who believe, like my Scientific American colleague Ferris Jabr, that cognitive behavioral therapy represents a genuine advance in psychotherapy should check out this new study, which concludes otherwise.) Antidepressants, neuroleptics and other drugs can provide short-term relief for some sufferers from mental illness, but on balance they may do more harm than good.
Here’s how bad things have gotten. Many prominent psychologists, such as Richard Davidson, are promoting meditation as a therapy for troubled minds, even though the evidence for meditation’s benefits is flimsy. Think about that a moment. In spite of all the supposed advances of modern science, some authorities believe that the best treatment for mental disorders might be the method that Buddha taught 2,500 years ago. That’s like chemists suddenly telling us that phlogiston theory - or something even older, like the ancient belief that all matter is made of earth, fire, air and water - was right after all.
I’m often accused of of being too negative, of seeing the glass of mind-science as half empty instead of half full. Actually, even describing the glass as half empty is far too generous. We don’t have a genuine science of the mind yet. The question is when, if ever, will we?
Trick Qs and False Memories
Simply asking people whether they experienced an event can trick them into later believing that it did occur, according to a neat little study just out: Susceptibility to long-term misinformation effect outside of the laboratory.
Psychologists Miriam Lommen and colleagues studied 249 Dutch soldiers were deployed for a four month tour of duty in Afghanistan. As part of a study into PTSD, they were given an interview at the end of the deployment asking them about their exposure to various stressful events that had occurred. However, one of the things discussed was made up – a missile attack on their base on New Year’s Eve.
At the post-test, participants were provided new information about an event that did not take place during their deployment, that is, a (harmless) missile attack at the base on New Year’s Eve.
We provided a short description of the event including some sensory details (e.g., sound of explosion, sight of gravel after the explosion). After that, participants were asked if they had experienced it…
Eight of the soldiers reported remembering this event right there in the interview. The other 241 correctly said they didn’t recall it, but seven months later, when they did a follow-up questionnaire about their experiences in the field, 26% said they did remember the non-existent New Year’s Eve bombardment (this question had been added to an existing PTSD scale.)
Susceptibility to the misinformation was correlated with having a lower IQ, and with PTSD symptom severity.
False memory effects like this one have been widely studied, but generally only in laboratory conditions. I like this study because it used a clever design to take memory misinformation into the real world, by neatly piggybacking onto another piece of research.
Also, it’s interesting (and worrying) that the false information was presented in the context of a question, not a statement. It seems that merely being asked about something can, in some cases, lead to memories of having experienced that thing.
The man could not stand dirt. When he built his company’s first factory in Fremont, Calif., in 1984, he frequently got down on his hands and knees and looked for specks of dust on the floor as well as on all the equipment. For Steve Jobs, who was rolling out the Macintosh computer, these extreme measures were a necessity. “If we didn’t have the discipline to keep that place spotless,” the Apple co-founder later recalled, “then we weren’t going to have the discipline to keep all these machines running.” This perfectionist also hated typos. As Pam Kerwin, the marketing director at Pixar during Jobs’ hiatus from Apple, told me, “He would carefully go over every document a million times and would pick up on punctuation errors such as misplaced commas.” And if anything wasn’t just right, Jobs could throw a fit. He was a difficult and argumentative boss who had trouble relating to others. But Jobs could focus intensely on exactly what he wanted—which was to design “insanely great products”—and he doggedly pursued this obsession until the day he died. Hard work and intelligence can take you only so far. To be super successful like Jobs, you also need that X-factor, that maniacal overdrive—which often comes from being a tad mad.
For decades, scholars have made the case that mental illness can be an asset for writers and artists. In her landmark work Touched With Fire: Manic-Depressive Illness and the Artistic Temperament, Johns Hopkins psychologist Kay Jamison documented the “fine madness” that gripped dozens of prominent novelists, poets, painters, and composers. As Lord Byron wrote of his fellow bards, “We of the craft are all crazy. Some are affected by gaiety, others by melancholy, but all are more or less touched.” For the author of Don Juan, as for many of the other artsy types profiled by Jamison, the disease in question is manic depression (or bipolar disorder), but depression is also common. Sylvia Plath’s signature works—The Bell Jar and Daddy—hinge on her suicidal despair. But while most Americans now acknowledge that many famous writers were unbalanced, few realize that the movers and shakers who have built this country—CEOs like Steve Jobs—also struggled with psychiatric maladies. This misunderstanding motivated me to write my latest book, America’s Obsessives. After discussing Jobs and other contemporary figures in the prologue, I cover seven icons, including Thomas Jefferson, marketing genius Henry J Heinz, librarian Melvil Dewey, aviator Charles Lindbergh, beauty tycoon Estée Lauder, and baseball slugger Ted Williams. (Like Jobs, the Red Sox Hall of Famer was a neatness nut who used to quiz the clubhouse attendant about why he used Tide on the team’s laundry.) By picking trailblazers who toiled in different arenas - from business and politics to information technology and sports - I wanted to show how a touch of madness is perhaps the secret to rising to the top in just about any line of work.
These men and women of action did have occasional bouts with depression, but they primarily suffered (or benefited) from another form of mental illness: obsessive-compulsive personality disorder. The key features of this superachiever’s disease include a love of order, lists, rules, schedules, details, and cleanliness; people with OCPD are addicted to work, and they are control freaks who must do everything “their way.” OCPD is not to be confused with its cousin, obsessive-compulsive disorder. Those with OCD are paralyzed by thoughts that just won’t go away, while people with OCPD are inspired by them. Steve Jobs couldn’t stop designing products—when hospitalized in the ICU, he once ripped off his oxygen mask, insisting that his doctors improve its design on the double. Estée Lauder couldn’t stop touching other women’s faces. Perfect strangers would do, including those she might bump into on an elevator or a street corner. Without her beauty biz as an alibi, she might have been arrested for assault with deadly lipstick or face powder. These dynamos are hard-pressed to carve out time for anything else but their compulsions. Spouses and children typically endure long stretches of neglect. In the early 1950s, with two boys at home (today both are billionaire philanthropists), Lauder was riding the rails all over the country half the year, hawking her wares.
Obsessives hate nothing so much as taking a break to relax or reflect, and they typically do so only when felled by illness. “Home. Not well. Busy about house. Always plenty to do. Cannot well be idle and believe will rather wear out than rust out,” wrote the 35-year-old Henry Heinz in his diary in 1880, four years after starting his eponymous processed food company. Heinz’s compulsions included measuring everything in sight—he never left home without his steel tape measure, which he used on many an unsuspecting doorway—and keeping track of meaningless numbers. When traveling across the Atlantic on a steamer in 1886, he jotted down in his diary its precise dimensions as well as the number of passengers who rode in steerage class. But this love of pseudo-quantification would produce in the early 1890s one of the sturdiest slogans in American advertising history—“57 Varieties.” At the time, his company actually produced more than 60 products, but this number fetishist felt that there was something magical about sevens. By his early 50s, Heinz had already driven himself close to a complete nervous collapse on numerous occasions, and he reluctantly passed the reins of the company to his heirs. For the last two decades of his life, his children insisted that the overbearing paterfamilias chill out in a German sanatorium every summer, either at Dr. Carl von Dapper’s outfit in Bad Kissingen or Dr. Franz Dengler’s in Baden-Baden.
Melvil Dewey, whose childhood fixation with the number 10 led him to devise the Dewey Decimal Classification system, also was forced into an early retirement by his feverish pace. Dewey published the first edition of his search engine—the Google of its day, which is still in use in libraries in nearly 150 countries—in 1876, when he was only 24. For the next quarter of a century, Dewey took on a series of demanding jobs, typically juggling two or three at a time, as a librarian, businessman, and editor. He became the head of the world’s first library school, at Columbia University in 1884. According to a running joke, Dewey had a habit of dictating notes to two stenographers at the same time. In the end, it was his sexual compulsions that did him in. He was a serial sexual harasser and in 1905 was ostracized from the American Library Association, the organization that he had helped found a generation earlier, when four prominent female members of the guild filed complaints against him.
The aviator Charles Lindbergh also was an order aficionado whose oversized libido created a mess. This demanding dad saw his five children only a couple of months a year. He ruled over them and his wife, the best-selling author Anne Morrow Lindbergh, not with an iron fist but with ironclad lists. He kept track of each child’s infractions, which included such innocuous activities as gum-chewing. And he insisted that Anne track all her household expenditures, including every 15 cents spent for rubber bands, in copious account books. After Lindbergh turned 50, feeding his sex addiction became his full-time job; for the rest of his life, he was constantly flying around the world to visit his three German “wives,” longtime mistresses with whom he fathered seven children, and to hook up with various other flings.
Remarkably, though these obsessive icons were all awash in neurotic tics, there has been no shortage of hagiographers who idealize their every move. Of Heinz’s penchant for collecting seemingly random numbers, one biographer has observed that he “enthusiastically wrote down in his diary the statistics that one must know and record on such an occasion.” Another saw in Heinz’s factoid-finding a reason to compare him to “a scientist such as Thomas Edison.” The author of the first biography of Dewey made the laughable claim that “there was no psycho-neurosis in [him].” Even today, some still agree with what New York Gov. Al Smith said about Lindbergh soon after his legendary flight to Paris: “He represents to us … all that we wish—a young American at his best.” We Americans like our heroes and do not easily let them go. By pointing out the character flaws in our superachievers, I do not intend to diminish the greatness of their achievements. Instead I aim to show exactly how they managed to pull them off. And more often than not, it was with a touch of madness.
Your Inner Chimp
The central idea now familiar to most British Olympians, the stars of Liverpool Football Club and Ronnie O’Sullivan, the mercurial snooker player, is that there is a chimp in your brain. The chimp is not exactly you. It is your primitive self. It has emotions, reacts quickly and impulsively, and is not logical in its thinking. It jumps to conclusions and makes assumptions.
The key to success — in life as well as in sport — is to be able to recognise the behaviour of this chimp and to manage it with the logical part of the brain. As Dr Steve Peters, the psychiatrist who invented the model and has written a bestselling book on the subject, puts it: “You have to put the chimp back in its box.”
Peters is a very likeable Northerner. Within moments of meeting him at Google HQ in London, I can see why he has built such a strong rapport with Sir Chris Hoy, Craig Bellamy and the like. He has a bright face, a commonsense style and a way of making you think you are, for the time he is with you, at the centre of his universe.
Peters is also in demand. His chimp framework has made him one of the most influential people in sport. Peters is a psychiatrist — he trained as a doctor before taking up various hospital posts — but his principal work today is in the psychology of success. He wants people to maximise their potential and thinks that he can help them to do just that.
“Ronnie O’Sullivan has been very open about his work with me,” Peters says. “When he came to see me, the problem was that his chimp would try to sabotage him with anxious thoughts. This is what the chimp does. It frets about losing, or about not being able to pot the ball, and about how awful it would be not to win.
“But I explained to Ronnie that these thoughts were not coming from him. They were coming from his chimp. The key was for him to box his chimp. The rational part of his brain was perfectly able to recognise that losing doesn’t define him.
“Snooker is not the be-all and end-all. Once he recognised this, and talked the chimp down, he could play without the negative emotional baggage.”
Peters’s methods have had strong results. O’Sullivan has won the World Championship twice since starting to work with him in 2011. Peters has also been central to the success of the Great Britain track cycling team, Team Sky and other Olympic sports, such as taekwondo and canoeing. He has been working with Liverpool since 2012.
“The key is to understand that everyone is an individual,” Peters says. “We all have individual personalities and individual chimps. The way to box the chimp will vary from person to person, and from situation to situation. Sometimes a chimp needs to be reasoned with; sometimes it needs to be confronted. My job is to train people in the most effective method to use.”
Peters does not only help athletes with particular problems; he also helps athletes to use their psychological tools more effectively. “Some people are already very robust emotionally when they come to see me,” he says. “Chris Hoy, for example, was never unstable. There was a report in the press about him having panic attacks. I can tell you that never happened. He has never been on Valium in his life.
“What he did when he came to me was to say, ‘I am in a great place mentally and physically, but I can get an advantage in physical terms by working with a specialist in strength and conditioning. And I believe I can get an advantage in emotional terms by working with someone who can help me understand how my mind works.’ There was no dysfunction that I had to sort out; I was just adding to what he already had.”
Much of Peters’s work is to help athletes to gain a sense of perspective. The danger when walking out into an Olympic final is that an athlete will become overwhelmed with fear, or panic, or the yearning to run away. It is sometimes called the fight, flight, freeze response. The key to unlocking peak performance is to banish this “chimp-like” reaction by introducing rationality.
As Peters puts it: “It is vital to remember that sport is just sport. You won’t die if you lose. You will still be you. Victoria Pendleton once said to me, ‘All I do is go round and round in a circle.’ That is a great perspective to have because once you realise that sport is nonsense, you can give it everything without fear. I am not promoting being laid-back; I’m promoting perspective.”
Are there any downsides of perspective, I wonder. It may be very useful when one is about to go into competition and might otherwise be seized by panic, but what about at the beginning of an Olympic cycle? Why would someone want to commit to four years of professional sport when she has just convinced herself that it is “nonsense”?
“It is possible to want to win the medal, to commit to winning the medal, while still recognising that the sport is, in a sense, trivial,” Peters says. “Some people may find that a difficult balance to strike, but others do it very well. It is all about the individual.”
Occasionally, Peters has worked with athletes who just do not want to commit. The problem is not a reluctant chimp, keen to stay in a warm bed rather than do an early-morning run, but something more profound. You might call it a rational decision to say: “Sport isn’t worth it.”
“I have worked with two athletes who said, ‘I don’t want to do this,’ ” Peters says. “They were both great athletes who could have medalled. But they walked away, and I agreed with their decision. One wrote to me two years later and said, ‘Thank you for giving me my life back. I don’t want the medal and I don’t want the lifestyle. I have made no error of judgment.’ To me, that is fantastic. I don’t want to force people to do something they don’t want to.”
Spock vs Kirk
Good myths turn on simple pairs— God and Lucifer, Sun and Moon, Jerry and George—and so an author who makes a vital duo is rewarded with a long-lived audience. No one in 1900 would have thought it possible that a century later more people would read Conan Doyle’s Holmes and Watson stories than anything of George Meredith’s, but we do. And so Gene Roddenberry’s “Star Trek,” despite the silly plots and the cardboard-seeming sets, persists in its many versions because it captures a deep and abiding divide. Mr. Spock speaks for the rational, analytic self who assumes that the mind is a mechanism and that everything it does is logical, Captain Kirk for the belief that what governs our life is not only irrational but inexplicable, and the better for being so. The division has had new energy in our time: we care most about a person who is like a thinking machine at a moment when we have begun to have machines that think. Captain Kirk, meanwhile, is not only a Romantic, like so many other heroes, but a Romantic on a starship in a vacuum in deep space. When your entire body is every day dissolved, reënergized, and sent down to a new planet, and you still believe in the ineffable human spirit, you have really earned the right to be a soul man.
Writers on the brain and the mind tend to divide into Spocks and Kirks, either embracing the idea that consciousness can be located in a web of brain tissue or debunking it. For the past decade, at least, the Spocks have been running the Enterprise: there are books on your brain and music, books on your brain and storytelling, books that tell you why your brain makes you want to join the Army, and books that explain why you wish that Bar Refaeli were in the barracks with you. The neurological turn has become what the “cultural” turn was a few decades ago: the all-purpose non-explanation explanation of everything. Thirty years ago, you could feel loftily significant by attaching the word “culture” to anything you wanted to inspect: we didn’t live in a violent country, we lived in a “culture of violence”; we didn’t have sharp political differences, we lived in a “culture of complaint”; and so on. In those days, Time, taking up the American pursuit of pleasure, praised Christopher Lasch’s “The Culture of Narcissism”; now Time has a cover story on happiness and asks whether we are “hardwired” to pursue it.
Myths depend on balance, on preserving their eternal twoness, and so we have on our hands a sudden and severe Kirkist backlash. A series of new books all present watch-and-ward arguments designed to show that brain science promises much and delivers little. They include “A Skeptic’s Guide to the Mind” (St. Martin’s), by Robert A. Burton; “Brainwashed: The Seductive Appeal of Mindless Neuro-Science” (Basic), by Sally Satel and Scott O. Lilienfeld; and “Neuro: The New Brain Sciences and the Management of the Mind” (Princeton), by a pair of cognitive scientists, Nikolas Rose and Joelle M. Abi-Rached.
“Bumpology” is what the skeptical wit Sydney Smith, writing in the eighteen-twenties, called phrenology, the belief that the shape of your skull was a map of your mind. His contemporary heirs rehearse, a little mordantly, failed bits of Bumpology that indeed seem more like phrenology than like real psychology. There was the left-right brain split, which insisted on a far neater break within our heads (Spock bits to the left, Kirk bits to the right) than is now believed to exist. The skeptics revisit the literature on “mirror neurons,” which become excited in the frontal lobes of macaque monkeys when the monkeys imitate researchers, and have been used to explain the origins of human empathy and sociability. There’s no proof that social-minded Homo sapiens has mirror neurons, while the monkeys who certainly do are not particularly social. (And, if those neurons are standard issue, then they can’t be very explanatory of what we mean by empathy: Bernie Madoff would have as many as Nelson Mandela.)
It turns out, in any case, that it’s very rare for any mental activity to be situated tidily in one network of neurons, much less one bit of the brain. When you think you’ve located a function in one part of the brain, you will soon find that it has skipped town, like a bail jumper. And all of the neuro-skeptics argue for the plasticity of our neural networks. We learn and shape our neurology as much as we inherit it. Our selves shape our brains at least as much as our brains our selves.
Each author, though, has a polemical project, something to put in place of mere Bumpology. (People who write books on indoor plumbing seldom feel obliged to rival Vitruvius as theorists of architecture, but it seems that no one can write about one neuron without explaining all thought.) “Brainwashed” is nervously libertarian; Satel is a scholar at the American Enterprise Institute, and she and Lilienfeld are worried that neuroscience will shift wrongdoing from the responsible individual to his irresponsible brain, allowing crooks to cite neuroscience in order to get away with crimes. This concern seems overwrought, copping a plea via neuroscience not being a significant social problem. Burton, a retired medical neurologist, seems anxious to prove himself a philosopher, and races through a series of arguments about free will and determinism to conclude that neuroscience doesn’t yet know enough and never will. Minds give us the illusion of existing as fixed, orderly causal devices, when in fact they aren’t. Looking at our minds with our minds is like writing a book about hallucinations while on LSD: you can’t tell the perceptual evidence from your own inner state. “The mind is and will always be a mystery,” Burton insists. Maybe so, and yet we’re perfectly capable of probing flawed equipment with flawed equipment: we know that our eyes have blind spots, even as we look at the evidence with them, and we understand all about the dog whistles we can’t hear. Since in the past twenty-five years alone we’ve learned a tremendous amount about minds, it’s hard to share the extent of his skepticism. Psychology is an imperfect science, but it’s a science.
In “Neuro,” Rose and Abi-Rached see the real problem: neuroscience can often answer the obvious questions but rarely the interesting ones. It can tell us how our minds are made to hear music, and how groups of notes provoke neural connections, but not why Mozart is more profound than Manilow. Courageously, they take on, and dismiss, the famous experiments by Benjamin Libet that seem to undermine the idea of free will. For a muscle movement, Libet showed, the brain begins “firing”—choosing, let’s say, the left joystick rather than the right—milliseconds before the subject knows any choice has been made, so that by the time we think we’re making a choice the brain has already made it. Rose and Abi-Rached are persuasively skeptical that “this tells us anything about the exercise of human will in any of the naturally occurring situations where individuals believe they have made a conscious choice—to take a holiday, choose a restaurant, apply for a job.” What we mean by “free will” in human social practice is just a different thing from what we might mean by it in a narrower neurological sense. We can’t find a disproof of free will in the indifference of our neurons, any more than we can find proof of it in the indeterminacy of the atoms they’re made of.
A core objection is that neuroscientific “explanations” of behavior often simply re-state what’s already obvious. Neuro-enthusiasts are always declaring that an MRI of the brain in action demonstrates that some mental state is not just happening but is really, truly, is-so happening. We’ll be informed, say, that when a teen-age boy leafs through the Sports Illustrated swimsuit issue areas in his brain associated with sexual desire light up. Yet asserting that an emotion is really real because you can somehow see it happening in the brain adds nothing to our understanding. Any thought, from Kiss the baby! to Kill the Jews!, must have some route around the brain. If you couldn’t locate the emotion, or watch it light up in your brain, you’d still be feeling it. Just because you can’t see it doesn’t mean you don’t have it. Satel and Lilienfeld like the term “neuroredundancy” to “denote things we already knew without brain scanning,” mockingly citing a researcher who insists that “brain imaging tells us that post-traumatic stress disorder (PTSD) is a ‘real disorder.’ ” The brain scan, like the word “wired,” adds a false gloss of scientific certainty to what we already thought. As with the old invocation of “culture,” it’s intended simply as an intensifier of the obvious.
Phrenology, the original Bumpology, at least had the virtue of getting people to think about “cortical location,” imagining, for the first time, that the brain might indeed be mapped into areas. Bumpology brought a material order, however factitious, to a metaphysical subject. In the same way, even the neuro-skeptics seem to agree that modern Bumpology remains an important corrective to radical anti-Bumpology: to the kind of thinking that insists that brains don’t count at all and cultures construct everything; that, given the right circumstances, there could be a human group with six or seven distinct genders, each with its own sexuality; that there is a possible human society in which very old people would be regarded as attractive and nubile eighteen-year-olds not; and still another where adolescent children would be noted for their rigorous desire to finish recently commenced tasks. How impressive you find modern pop Bumpology depends in part on whether you believe that there are a lot of people who still think like that.
For all the exasperations of neurotautology, there’s a basic, arresting truth to neo-Bumpology. In a new, belligerently pro-neuro book, “The Anatomy of Violence: The Biological Roots of Crime” (Pantheon), Adrian Raine, a psychology professor at the University of Pennsylvania, discusses a well-studied case in which the stepfather of an adolescent girl, with no history of pedophilia, began to obsess over child pornography and then to molest his stepdaughter. He was arrested, arraigned, and convicted. Then it emerged that he had a tumor, pressing on the piece of the brain associated with social and sexual inhibitions. When it was removed, the wayward desires vanished. Months of normality ensued, until the tumor began to grow back and, with it, the urges.
Now, there probably is no precise connection between the bit of the brain the tumor pressed on and child lust. The same bit of meat-matter pressing on the same bit of brain in some other head might have produced some other transgression—in the head of a Lubavitcher, say, a mad desire to eat prosciutto. But it would still be true that what we think of as traditionally deep matters of guilt and temptation and taboo, the material of Sophocles and Freud, can be flicked on and off just by physical pressure. You have to respect the power of the meat to change the morals so neatly.
In one sense, this is more neuro-redundancy. Charting a path between these two truths is the philosopher Patricia S. Churchland’s project in “Touching a Nerve: The Self as Brain” (Norton), a limited defense of the centrality of neuro. She is rightly contemptuous of the invocation of “scientism” to dismiss the importance of neuroscience to philosophy, seeing that resistance as identical to the Inquisition’s resistance to Galileo, or the seventeenth century’s to Harvey’s discovery of the pumping heart:
This is the familiar strategy of let’s pretend. Let’s believe what we prefer to believe. But like the rejection of the discovery that Earth revolves around the sun, the let’s pretend strategy regarding the heart could not endure very long. . . . Students reading the history of this period may be as dumbfounded regarding our resistance to brain science as we are now regarding the seventeenth-century resistance to the discovery that the heart is a meat pump.
Humanism not only has survived each of these sequential demystifications; they have made it stronger by demonstrating the power of rational inquiry on which humanism depends. Every time the world becomes less mysterious, nature becomes less frightening, and the power of the mind to grasp reality more sure. A constant reduction of mystery to matter, a belief that we can name natural rules we didn’t make—that isn’t scientism. That’s science.
Yet Churchland also makes beautifully clear how complex and contingent the simplest brain business is. She discusses whether the hormone testosterone makes men angry. The answer even to that off-on question is anything but straightforward. Testosterone counts for a lot in making men mad, but so does the “stress” hormone cortisol, along with the “neuromodulator” serotonin, which affects whether the aggression is impulsive or premeditated, and the balance between all these things is affected by “other hormones, other neuromodulators, age and environment.”
So this question, like any other about neurology, turns out to be both simply mechanical and monstrously complex. Yes, a hormone does wash through men’s brains and makes them get mad. But there’s a lot more turning on than just the hormone. For a better analogy to the way your neurons and brain chemistry run your mind, you might think about the way the light switch runs the lights in your living room. It’s true that the light switch in the corner turns the lights on in the living room. Nor is that a trivial observation. How the light switch gets wired to the bulb, how the bulb got engineered to be luminous—all that is an almost miraculously complex consequence of human ingenuity. But at the same time the light switch on the living-room wall is merely the last stage in a long line of complex events that involve waterfalls and hydropower and surge protectors and thousands of miles of cables and power grids. To say the light switch turns on the living-room light is both true—vitally true, if you don’t want to bang your shins on the sofa sneaking home in the middle of the night—and wildly misleading.
It’s perfectly possible, in other words, to have an explanation that is at once trivial and profound, depending on what kind of question you’re asking. The strength of neuroscience, Churchland suggests, lies not so much in what it explains as in the older explanations it dissolves. She gives a lovely example of the panic that we feel in dreams when our legs refuse to move as we flee the monster. This turns out to be a straightforward neurological phenomenon: when we’re asleep, we turn off our motor controls, but when we dream we still send out signals to them. We really are trying to run, and can’t. If you feel this, and also have the not infrequent problem of being unable to distinguish waking and dreaming states, you might think that you have been paralyzed and kidnapped by aliens.
There are no aliens; there is not even a Freudian wave of guilt driving the monster. It’s just those neuromotor neurons, making the earth sticky. The best thing for people who have recurrent nightmares of this kind is to get more rem rest. “Get more sleep,” Churchland remarks. “It works.” Neurology should provide us not with sudden explanatory power but with a sense of relief from either taking too much responsibility for, or being too passive about, what happens to us. Autism is a wiring problem, not a result of “refrigerator mothers.” Schizophrenia isn’t curable yet, but it looks more likely to be cured by getting the brain chemistry right than by finding out what traumatized Gregory Peck in his childhood. Neuroscience can’t rob us of responsibility for our actions, but it can relieve us of guilt for simply being human. We are in better shape in our mental breakdowns if we understand the brain breakdowns that help cause them. This is a point that Satel and Lilienfeld, in their eagerness to support a libertarian view of the self as a free chooser, get wrong. They observe of one “brilliant and tormented” alcoholic that she, not her heavy drinking, was responsible for her problems. But, if we could treat the brain circuitry that processes the heavy drinking, we might very well leave her just as brilliant and tormented as ever, only not a drunk. (A Band-Aid, as every parent knows, is an excellent cure whenever it’s possible to use one.)
The really curious thing about minds and brains is that the truth about them lies not somewhere in the middle but simultaneously on both extremes. We know already that the wet bits of the brain change the moods of the mind: that’s why a lot of champagne gets sold on Valentine’s Day. On the other hand, if the mind were not a high-level symbol-managing device, flower sales would not rise on Valentine’s Day, too. Philosophy may someday dissolve into psychology and psychology into neurology, but since the lesson of neuro is that thoughts change brains as much as brains thoughts, the reduction may not reduce much that matters. As Montaigne wrote, we are always double in ourselves. Or, as they say on the Enterprise, it takes all kinds to run a starship
A Logical Problem
The Monty Hall problem - where a contestant has to pick one of three boxes - left readers scratching their heads. Why does this probability scenario hurt everyone's brain so much, asks maths lecturer Dr John Moriarty.
Imagine Deal or No Deal with only three sealed red boxes.
The three cash prizes, one randomly inserted into each box, are 50p, £1 and £10,000. You pick a box, let's say box two, and the dreaded telephone rings.
The Banker tempts you with an offer but this one is unusual. Box three is opened in front of you revealing the £1 prize, and he offers you the chance to change your mind and choose box one. Does switching improve your chances of winning the £10,000?
Each year at my university we hold open days for hordes of keen A-level students. We want to sell them a place on our mathematics degree, and I unashamedly have an ulterior motive - to excite the best students about probability using this problem, usually referred to as the Monty Hall Problem.
This mind-melter was alluded to in an AL Kennedy piece on change this week and dates back to Steve Selvin in 1975 when it was published in the academic journal American Statistician.
It imagines a TV game show not unlike Deal or No Deal in which you choose one of three closed doors and win whatever is behind it.
One door conceals a Cadillac - behind the other two doors are goats. The game show host, Monty Hall (of Let's Make a Deal fame), knows where the Cadillac is and opens one of the doors that you did not choose. You are duly greeted by a goat, and then offered the chance to switch your choice to the other remaining door.
Most people will think that with two choices remaining and one Cadillac, the chances are 50-50.
The most eloquent reasoning I could find is from Emerson Kamarose of San Jose, California (from the Chicago Reader's Straight Dope column in 1991): "As any fool can plainly see, when the game-show host opens a door you did not pick and then gives you a chance to change your pick, he is starting a new game. It makes no difference whether you stay or switch, the odds are 50-50."
But the inconvenient truth here is that it's not 50-50 - in fact, switching doubles your chances of winning. Why?
Pink Cadillac and a goat
Let's not get confused by the assumptions. To be clear, Monty Hall knows the location of the prize, he always opens a different door from the one you chose, and he will only open a door that does not conceal the prize.
For the purists, we also assume that you prefer Cadillacs to goats. There is a beautiful logical point here and, as the peddler of probability, I really don't want you to miss it.
Switching doubles your chances of winning”
In the game you will either stick or switch. If you stick with your first choice, you will end up with the Caddy if and only if you initially picked the door concealing the car. If you switch, you will win that beautiful automobile if and only if you initially picked one of the two doors with goats behind them.
If you can accept this logic then you're home and dry, because working out the odds is now as easy as pie - sticking succeeds 1/3 of the time, while switching works 2/3 of the time.
Kamarose was wrong because he fell for the deception - after opening the door, the host is not starting a new 50-50 game. The actions of the host have already stacked the odds in favour of switching.
The mistake is to think that two choices always means a 50-50 chance. Still not convinced? You are in good company. The paradox of the Monty Hall Problem has been incredibly powerful, busting the brains of scientists since 1975.
In 1990 the problem and a solution were published in Parade magazine in the US, generating thousands of furious responses from readers, many with distinguished scientific credentials.
Part of the difficulty was that, as usual, there was fault on both sides as the published solution was arguably unclear in stating its assumptions. Subtly changing the assumptions can change the conclusion, and as a result this topic has attracted sustained interest from mathematicians and riddlers alike.
Even Paul Erdos, an eccentric and brilliant Hungarian mathematician and one-time guest lecturer at Manchester, was taken in.
So what happens on our university's open days? We do a Monty Hall flash mob. The students split into hosts and contestants and pair up. While the hosts set up the game, half the contestants are asked to stick and the other half to switch.
The switchers are normally roughly twice as successful. Last time we had 60 pairs in 30 of which the contestants were always stickers and in the other 30 pairs always switchers:
Among the 30 switcher contestants, the Cadillac was won 18 times out of 30 - a strike rate of 60%
Among the 30 sticker contestants, there were 11 successes out of 30, a strike rate of about 36%
So switching proved to be nearly twice as successful in our rough and ready experiment and I breathed a sigh of relief.
Beliefs Always Trump Facts
Yale law school professor Dan Kahan's new research paper is called "Motivated Numeracy and Enlightened Self-Government," but for me a better title is the headline on science writer Chris Mooney's piece about it in Grist: "Science Confirms: Politics Wrecks Your Ability to Do Math."
Kahan conducted some ingenious experiments about the impact of political passion on people's ability to think clearly. His conclusion, in Mooney's words: partisanship "can even undermine our very basic reasoning skills.... [People] who are otherwise very good at math may totally flunk a problem that they would otherwise probably be able to solve, simply because giving the right answer goes against their political beliefs."
In other words, say goodnight to the dream that education, journalism, scientific evidence, media literacy or reason can provide the tools and information that people need in order to make good decisions. It turns out that in the public realm, a lack of information isn't the real problem. The hurdle is how our minds work, no matter how smart we think we are. We want to believe we're rational, but reason turns out to be the ex post facto way we rationalize what our emotions already want to believe.
For years my go-to source for downer studies of how our hard-wiring makes democracy hopeless has been Brendan Nyhan, an assistant professor of government at Dartmouth.
Nyan and his collaborators have been running experiments trying to answer this terrifying question about American voters: Do facts matter?
The answer, basically, is no. When people are misinformed, giving them facts to correct those errors only makes them cling to their beliefs more tenaciously.
Here's some of what Nyhan found:
People who thought WMDs were found in Iraq believed that misinformation even more strongly when they were shown a news story correcting it.
People who thought George W. Bush banned all stem cell research kept thinking he did that even after they were shown an article saying that only some federally funded stem cell work was stopped.
People who said the economy was the most important issue to them, and who disapproved of Obama's economic record, were shown a graph of nonfarm employment over the prior year - a rising line, adding about a million jobs. They were asked whether the number of people with jobs had gone up, down or stayed about the same. Many, looking straight at the graph, said down.
But if, before they were shown the graph, they were asked to write a few sentences about an experience that made them feel good about themselves, a significant number of them changed their minds about the economy. If you spend a few minutes affirming your self-worth, you're more likely to say that the number of jobs increased.
In Kahan's experiment, some people were asked to interpret a table of numbers about whether a skin cream reduced rashes, and some people were asked to interpret a different table -- containing the same numbers -- about whether a law banning private citizens from carrying concealed handguns reduced crime.
Kahan found that when the numbers in the table conflicted with people's positions on gun control, they couldn't do the math right, though they could when the subject was skin cream. The bleakest finding was that the more advanced that people's math skills were, the more likely it was that their political views, whether liberal or conservative, made them less able to solve the math problem.
I hate what this implies -- not only about gun control, but also about other contentious issues, like climate change. I'm not completely ready to give up on the idea that disputes over facts can be resolved by evidence, but you have to admit that things aren't looking so good for reason. I keep hoping that one more photo of an iceberg the size of Manhattan calving off of Greenland, one more stretch of record-breaking heat and drought and fires, one more graph of how atmospheric carbon dioxide has risen in the past century, will do the trick. But what these studies of how our minds work suggest is that the political judgments we've already made are impervious to facts that contradict us.
Maybe climate change denial isn't the right term; it implies a psychological disorder. Denial is business-as-usual for our brains. More and better facts don't turn low-information voters into well-equipped citizens. It just makes them more committed to their misperceptions. In the entire history of the universe, no Fox News viewers ever changed their minds because some new data upended their thinking. When there's a conflict between partisan beliefs and plain evidence, it's the beliefs that win. The power of emotion over reason isn't a bug in our human operating systems, it's a feature.
The Peanut Butter Test For Alzheimers
You may not have heard of "the peanut butter test," but it could become a fantastically low-cost and non-invasive way to test for Alzheimer's. After all, what's less invasive than asking someone to smell some delicious peanut butter?
"The ability to smell is associated with the first cranial nerve and is often one of the first things to be affected in cognitive decline," reads this release from the University of Florida, researchers from which conducted the experiment. But with Alzheimer's patients, the sense of smell is affected in a very particular way: The left nostril is significantly more impaired than the right. Weird! But true.
The experiment involved capping one nostril and measuring the distance at which the patient could detect about a tablespoon of peanut butter. In Alzheimer's patients, the left nostril was impaired so thoroughly that, on average, it had 10 centimeters less range than the right, in terms of odor detection. That's specific to Alzheimer's patients; neither control patients (those not suffering from cognitive decline) nor those with other types of cognitive impairment (like dementia) demonstrated that nostril difference.
Peanut butter was used because it's a so-called "pure odorant." Generally our sense of smell actually incorporates two distinct sensations: the olfactory sense, or smell, as well as a trigeminal sense, which is like a more physical burning or stinging sort of sense. Peanut butter has no trigeminal element; it's only olfactory, which makes it ideal for testing, as the link to Alzheimer's is specifically dealing with the olfactory sense.
This could be a great, inexpensive, early warning system for those with Alzheimer's; the illness is not easy to detect, requiring neurological examination as well as mental, and has to be carried out by a professional. The peanut butter test? Much easier.
Climate Change Persuasion
WHEN scholars of the future write the history of climate change, they may look to early 2008 as a pivotal moment. Al Gore's film An Inconvenient Truth was bringing the science to the masses. The economist Nicholas SternMovie Camera had made the financial case for tackling the problem sooner rather than later. And the Intergovernmental Panel on Climate Change (IPCC) had just issued its most unequivocal report yet on the link between human activity and climatic change.
The scientific and economic cases were made. Surely with all those facts on the table, soaring public interest and ambitious political action were inevitable?
The exact opposite happened. Fast-forward to today, the eve of the IPCC's latest report on the state of climate science, and it is clear that public concern and political enthusiasm have not kept up with the science. Apathy, lack of interest and even outright denial are more widespread than they were in 2008.
How did the rational arguments of science and economics fail to win the day? There are many reasons, but an important one concerns human nature.
Through a growing body of psychological research, we know that scaring or shaming people into sustainable behaviour is likely to backfire. We know that it is difficult to overcome the psychological distance between the concept of climate change – not here, not now – and people's everyday lives. We know that beliefs about the climate are influenced by extreme and even daily weather.
One of the most striking findings is that concern about climate change is not only, or even mostly, a product of how much people know about science. Increased knowledge tends to harden existing opinions (Nature Climate Change, vol 2, p 732).
These findings, and many more, are increasingly available to campaigners and science communicators, but it is not clear that lessons are being learned. In particular, there is a great deal of resistance towards the idea that communicating climate change requires more than explaining the science.
The IPCC report, due out on 27 September, will provide communicators with plenty of factual ammunition. It will inevitably be attacked by climate deniers. In response, rebuttals, debunkings and counter-arguments will pour forth, as fighting denial has become a cottage industry in itself.
None of it will make any real difference. This is for the simple reason that the argument is not really about the science; it is about politics and values.
Consider, for example, the finding that people with politically conservative beliefs are more likely to doubt the reality or seriousness of climate change. Accurate information about climate change is no less readily available to these people than anybody else. But climate policies such as the regulation of industrial emissions often seem to clash with conservative political views. And people work backwards from their values, filtering the facts according to their pre-existing beliefs.
Research has shown that people who endorse free-market economic principles become less hostile when they are presented with policy responses which do not seem to be as threatening to their world view, such as geoengineering. Climate change communicators must understand that debates about the science are often simply a proxy for these more fundamental disagreements.
Some will argue that climate change discourse has become so polluted by politics that we can't see the scientific woods for the political trees. Why should science communicators get their hands dirty with politics? But the solution is not to scream ever louder at people that the woods are there if only they would look properly. A much better, and more empirically supported, answer is to start with those trees. The way to engage the public on climate change is to find ways of making it resonate more effectively with the values that people hold.
My colleagues and I argued in a recent report for the Climate Outreach and Information Network that there is no inherent contradiction between conservative values and engaging with climate change science. But hostility has grown because climate change has become associated with left-wing ideas and language.
If communicators were to start with ideas that resonated more powerfully with the right – the beauty of the local environment, or the need to enhance energy security – the conversation about climate change would likely flow much more easily.
Similarly, a recent report from the Understanding Risk group at Cardif University in the UK showed there are some core values that underpin views about the country's energy system. Whether wind farms or nuclear power, the public judges energy technologies by a set of underlying values – including fairness, avoiding wastefulness and affordability. If a technology is seen as embodying these, it is likely to be approved of. Again, it is human values, more than science and technology, which shape public perceptions.
Accepting this is a challenge for those seeking to communicate climate science. Too often, they assume that the facts will speak for themselves – ignoring the research that reveals how real people respond. That is a pretty unscientific way of going about science communication.
The challenge when the IPCC report appears, then, is not to simply crank up the volume on the facts. Instead, we must use the report as the beginning of a series of conversations about climate change – conversations that start from people's values and work back from there to the science.
Alzheimers and T2 Diabetes
ALZHEIMER’S, the devastating neurological disease affecting 500,000 Britons, may actually be the late stages of type 2 (T2) diabetes, say scientists.
They have found that the extra insulin produced by those with T2 diabetes also gets into the brain, disrupting its chemistry.
Eventually it leads to the formation of toxic clumps of amyloid proteins that poison brain cells.
“The discovery could explain why people who develop T2 diabetes often show sharp declines in cognitive function, with an estimated 70% developing Alzheimer’s — far more than in the rest of the population,” said Ewan McNay, a Briton whose research at Albany University in New York State was co-sponsored by the American Diabetes Association
“People who develop diabetes have to realise this is about more than controlling their weight or diet. It’s also the first step on the road to cognitive decline.
“At first they won’t be able to keep up with their kids playing games, but in 30 years’ time they may not even recognise them.”
In Britain about 2.5m people have T2 diabetes — the National Diabetes Audit, published in October, showed that about 80% were overweight or obese.
The sharply elevated risk of Alzheimer’s disease in T2 diabetics has been known for a long time. However, since relatively few obese people have tended to survive into old age, the effects have had less attention and are not widely known among the public and GPs.
Now, however, better treatments mean a sharp improvement in the survival rates of people with T2 diabetes — meaning there is likely to be a surge in Alzheimer’s cases.
McNay’s research was aimed at uncovering the mechanism by which T2 diabetes might cause Alzheimer’s.
He fed rats on a high-fat diet to induce T2 diabetes and then carried out memory tests, showing that the animals’ cognitive skills deteriorated rapidly as the disease progressed.
An examination of their brains showed clumps of amyloid protein had formed, of the kind found in the brains of Alzheimer’s patients. McNay confirmed that these clumps were linked with cognitive decline by injecting a second batch of diabetic rats with drugs that dissolved the amyloid clumps — whereupon they regained their lost function.
Why, though, do the amyloid clumps form at all? McNay suggests that, in people with T2 diabetes, the body becomes resistant to insulin, a hormone that controls blood-sugar levels — so the body produces more of it.
However, some of that insulin also makes its way into the brain, where its levels are meant to be controlled by the same enzyme that breaks down amyloid.
McNay, who presented his research at the recent Society for Neuroscience meeting in San Diego, said: “High levels of insulin swamp this enzyme so that it stops breaking down amyloid. The latter then accumulates until it forms toxic clumps that poison brain cells. It’s the same amyloid build-up to blame in both diseases — T2 diabetics really do have low-level Alzheimer’s.”
McNay’s research does, however, offer one cause for hope. It is known that people who develop T2 diabetes can get rid of it again by losing weight and taking exercise. McNay suggests that the same remedies might also serve to ward off Alzheimer’s, at least in the very early stages.
His research has changed his own life already. “I have cut down on chocolate and go to the gym more, and as for my children, they have to run around the green before I’ll give them any treats.”
Depression strikes some 35 million people worldwide, according to the World Health Organization, contributing to lowered quality of life as well as an increased risk of heart disease and suicide. Treatments typically include psychotherapy, support groups and education as well as psychiatric medications. SSRIs, or selective serotonin reuptake inhibitors, currently are the most commonly prescribed category of antidepressant drugs in the U.S., and have become a household name in treating depression.
The action of these compounds is fairly familiar. SSRIs increase available levels of serotonin, sometimes referred to as the feel-good neurotransmitter, in our brains. Neurons communicate via neurotransmitters, chemicals which pass from one nerve cell to another. A transporter molecule recycles unused transmitter and carries it back to the pre-synaptic cell. For serotonin, that shuttle is called SERT (short for “serotonin transporter”). An SSRI binds to SERT and blocks its activity, allowing more serotonin to remain in the spaces between neurons. Yet, exactly how this biochemistry then works against depression remains a scientific mystery.
In fact, SSRIs fail to work for mild cases of depression, suggesting that regulating serotonin might be an indirect treatment only. “There’s really no evidence that depression is a serotonin-deficiency syndrome,” says Alan Gelenberg, a depression and psychiatric researcher at The Pennsylvania State University. “It’s like saying that a headache is an aspirin-deficiency syndrome.” SSRIs work insofar as they reduce the symptoms of depression, but “they’re pretty nonspecific,” he adds.
Now, research headed up by neuroscientists David Gurwitz and Noam Shomron of Tel Aviv University in Israel supports recent thinking that rather than a shortage of serotonin, a lack of synaptogenesis (the growth of new synapses, or nerve contacts) and neurogenesis (the generation and migration of new neurons) could cause depression. In this model lower serotonin levels would merely result when cells stopped making new connections among neurons or the brain stopped making new neurons. So, directly treating the cause of this diminished neuronal activity could prove to be a more effective therapy for depression than simply relying on drugs to increase serotonin levels.
Evidence for this line of thought came when their team found that cells in culture exposed to a 21-day course of the common SSRI paroxetine (Paxil is one of the brand names) expressed significantly more of the gene for an integrin protein called ITGB3 (integrin beta-3). Integrins are known to play a role in cell adhesion and connectivity and therefore are essential for synaptogenesis. The scientists think SSRIs might promote synaptogenesis and neurogenesis by turning on genes that make ITGB3 as well as other proteins that are involved in these processes. A microarray, which can house an entire genome on one laboratory slide, was used to pinpoint the involved genes. Of the 14 genes that showed increased activity in the paroxetine-treated cells, the gene that expresses ITGB3 showed the greatest increase in activity. That gene, ITGB3, is also crucial for the activity of SERT. Intriguingly, none of the 14 genes are related to serotonin signaling or metabolism, and, ITGB3 has never before been implicated in depression or an SSRI mode of action.
These results, published October 15 in Translational Psychiatry, suggest that SSRIs do indeed work by blocking SERT. But, the bigger picture lies in the fact that in order to make up for the lull in SERT, more ITGB3 is produced, which then goes to work in bolstering synaptogenesis and neurogenesis, the true culprits behind depression. “There are many studies proposing that antidepressants act by promoting synaptogenesis and neurogenesis,” Gurwitz says. “Our work takes one big step on the road for validating such suggestions.”
The research is weakened by its reliance on observations of cells in culture rather than in actual patients. The SSRI dose typically delivered to a patient’s brain is actually a fraction of what is swallowed in a pill. “Obvious next steps are showing that what we found here is indeed viewed in patients as well,” Shomron says.
The study turned up additional drug targets for treating depression—two microRNA molecules, miR-221 and miR-222. Essentially, microRNAs are small molecules that can turn a gene off by binding to it. The microarray results showed a significant decrease in the expression of miR-221 and miR-222, both of which are predicted to target ITGB3, when cells were exposed to paroxetine. So, a drug that could prevent those molecules from inhibiting the production of the ITGB3 protein would arguably enable the growth of more new neurons and synapses. And, if the neurogenesis and synaptogenesis hypothesis holds, a drug that specifically targeted miR-221 or miR-222 could bring sunnier days to those suffering from depression.
“Before I was born I was a twin,” Richard Carr-Gomm wrote in his autobiography. It may be far-fetched to suppose that his lifelong concern for the lonely stemmed from his sibling’s early death, but that concern has touched the lives of thousands.
Carr-Gomm was the self-effacing but utterly determined founder of the Abbeyfield Society, which offers the elderly the care and company of a small army of volunteers.
The society originated in a house in Bermondsey that the “scrubbing major” bought with a gratuity on his retirement from the Coldstream Guards. Nearly six decades on, it owns or runs more than 500 houses and care homes in Britain and still more in affiliated societies from Australia to Mexico. Between them they provide a refuge for more than 8,000 people from the quiet tragedy of loneliness, and, for a growing number, relief from the still poorly understood traumas of dementia.
One Abbeyfield house on which we report today takes infinite pride in giving its residents a home to enjoy rather than an institution to endure. Its manager has found space for a sweet shop, a library and a farmyard’s worth of pets, which until recently included pigs called Winston and Churchill. More importantly, she and her staff find time — time to talk with those for whom a conversation is a treat; time to sit with a distressed dementia sufferer long enough to learn something about her mood swings; time to make family members welcome and help them cope when eyes that used to light up with recognition no longer seem to know who’s come to visit.
Carr-Gomm was, as a boy, considered “delicate”. That did not stop him marching up Juno Beach on D-Day, or devoting most of his life after the war to a cause that he championed with unbending inner strength. It is a cause that deserves all our support.
Fairness (Seth Godin)
Our society tolerates gross unfairness every day. It tolerates misogyny, racism and the callous indifference to those born without privilege.
But we manage to find endless umbrage for petty slights and small-time favoritism.
When a teacher gives one student a far better grade than he deserves, and does it without shame, we're outraged. When the flight attendant hands that last chicken meal to our seatmate, wow, that's a slight worth seething over for hours.
When Bull Connor directed fire hoses and attack dogs on innocent kinds in Birmingham, it conflated the two, the collision of the large and the small. Viewers didn't witness the centuries of implicit and explicit racism, they saw a small, vivid act, moving in its obvious unfairness. It was the small act that focused our attention on the larger injustice.
I think that most of us are programmed to process the little stories, the emotional ones, things that touch people we can connect to. When it requires charts and graphs and multi-year studies, it's too easy to ignore.
We don't change markets, or populations, we change people. One person at a time, at a human level. And often, that change comes from small acts that move us, not from grand pronouncements.
People intuitively think that their eyes move smoothly across a scene, continuously taking in what is there, like a video camera. But we actually take in a series of multiple snapshots that the brain stitches together into a seamless image. We only get clear detail from a very small part where our vision is focused - everything else is a blur until we fixate on the next bit.
The biggest problem with vision is the way our brain chooses to work. This affects areas such as airport luggage scanning and doctors looking for tumours. When you constantly look at pics where there is nothing suspicious to be found, the brain begins to expect nothing to ever be found. Then, when something suspicious does show up, it fails to register, simply because it is unexpected.
This is surprisingly difficult to counter. It is not a matter of watchers failing to concentrate or being careless or lazy. It is a quirk of human brain processing, and it happens to everybody. The only way that researchers have been able to improve accuracy is by 'retraining' the brain by artificially increasing the number of hits (by putting fake occurrences into the data stream.)
Forty seconds before round two, and I’m lying on my back trying to breathe. Pain all through me. Deep breath. Let it go. I won’t be able to lift my shoulder tomorrow, it won’t heal for over a year, but now it pulses, alive, and I feel the air vibrating around me, the stadium shaking with chants, in Mandarin, not for me. My teammates are kneeling above me, looking worried. They rub my arms, my shoulders, my legs. The bell rings. I hear my dad’s voice in the stands, ‘C’mon Josh!’ Gotta get up. I watch my opponent run to the center of the ring. He screams, pounds his chest. The fans explode. They call him Buffalo. Bigger than me, stronger, quick as a cat. But I can take him – if I make it to the middle of the ring without falling over. I have to dig deep, bring it up from somewhere right now. Our wrists touch, the bell rings, and he hits me like a Mack truck. — Joshua Waitzkin
In his book The Art of Learning, Joshua Waitzkin describes how he is able to compete, and win, against martial arts competitors much physically stronger than himself by putting his mind into the game. When I asked Waitzkin whether he thinks his mental game is a result of his high intelligence, he told me,
“I don’t think I have an extraordinary intelligence. Buffalo had cultivated his whole body his whole life, and he had that edge. I had cultivated my mind. My chance lay in making the mental game dominate a physical battle. At a high level of competition, success often hinges on who determines the field and tone of battle.
“Mental toughness” is a phrase that is commonly used in sports to describe the superior mental qualities of the competitor. Most elite athletes report that at least 50% of superior athletic performance is the result of mental or psychological factors, and a whopping 83% of coaches rate mental toughness as the most important set of psychological characteristics for determining competitive success.
One of the first descriptions of mental toughness was made by sports psychologist James Loeher. Based on his extensive work with elite athletes and coaches, he proposed seven dimensions of mental toughness that he argued are developed: self-confidence, attention control, minimizing negative energy, increasing positive energy, maintaining motivation levels, attitude control, and visual and imagery control.
Following up this work with a more systematic analysis in 2002, Graham Jones and colleagues interviewed ten international performers (seven males and three females) from a variety of sports. The elite performers were asked to define mental toughness in their own words and describe the central characteristics of mental toughness. The following definition naturally emerged from the interviews:
People who are mentally tough have a psychological edge that enables them to cope better than their opponents with the many demands that sports place on a performer, and they are also more consistent and better than their opponents in remaining determined, focused, confident, and in control under pressure.
The athletes identified 12 key attributes as key to mental toughness in sport, ranked in order of importance:
Unshakeable self-belief in your ability to achieve competition goals (“Mental toughness is about your self belief and not being shaken from your path. . . . It is producing the goods and having the self belief in your head to produce the goods”).
Ability to bounce back from performance set-backs as a result of an increased determination to succeed (“Yea, we all have them (setbacks), the mentally tough performer doesn’t let them affect him, he uses them”).
Unshakeable self-belief that you possess unique qualities and abilities that make you better than your opponents (“I am better than everyone else by a long way because I have something that sets me apart from other performers”).
Insatiable desire and internalized motives to succeed (“You’ve really got to want it, but you’ve also got to want to do it for yourself. Once you start doing it for anyone else . . . you’re in trouble. You’ve also got to really understand why you’re in it . . . and constantly reminding yourself is vital”).
Remaining fully focused on the task at hand in the face of competition-specific distractions (“There are inevitable distractions and you just have to be able to focus on what you need to focus on”).
Regaining psychological control following unexpected, uncontrollable events (comeptition-specific) (“It’s definitely about not getting unsettled by things you didn’t expect or can’t control. You’ve got to be able to switch back into control mode”).
Pushing back the boundaries of physical and emotional pain, while still maintaining technique and effort under distress during training and competition (“In my sport you have to deal with the physical pain from fatigue, dehydration, and tiredness . . . you are depleting your body of so many different things. It is a question of pushing yourself . . . it’s mind over matter, just trying to hold your technique and perform while under this distress and go beyond your limits”).
Accepting that competition anxiety is inevitable and knowing that you can cope with it. (“I accept that I’m going to get nervous, particularly when the pressure’s on, but keeping the lid on it and being in control is crucial”).
Not being adversely affected by other’s good and bad performances (“There have been cases where people have set world records and people have gone out 5 or 6 minutes later, and improved the world record again. The mentally tough performer uses others ‘good performances as a spur rather than say “I can’t go that fast.” They say “well, he is no better than me, so I’m going to go out there and beat that”).
Thriving on the pressure of competition (“If you are going to achieve anything worthwhile, there is bound to be pressure. Mental toughness is being resilient to and using the competition pressure to get the best out of yourself”).
Remaining fully focused in the face of personal life distractions (“Once you’re in the competition, you cannot let you mind wander to other things”; and, “it doesn’t matter what has happened to you, you can’t bring the problem into the performance arena”).
Switching sport focus on and off as required (“You need to be able to switch it [i.e., focus] on and off, especially between games during a tournament. The mentally tough performer succeeds by having control of the on/off switch”).
In more recent years, a number of studies have attempted to further clarify mental toughness, its dimensions, and its development. In one large review, Daniel Gucciardi and colleagues argued that the dimensions that comprise mental toughness influence the way we approach and interpret both positive and negative events, which in turn influence performance.
Research also shows that mental toughness is an ongoing developing process. The attitudes, cognitions, emotions, and personal values that comprise mental toughness develop as a result of repeated exposure to a variety of experiences, challenges, and adversities. Once acquired, mental toughness is maintained by:
A desire and motivation to succeed that is insatiable and internalized
A perceived support network that includes sporting and non-sporting personnel
Effective use of basic and advanced psychological skills.
Do athletes have higher levels of mental toughness than non-athletes? In a very recent study, Félix Guillén and Sylvain Laborde compared levels of mental toughness between athletes and non-athletes. Based on the review by Gucciardi and colleagues, they distilled mental toughness down into four main dimensions:
Hope: The unshakeable self-belief in one’s ability to achieve competition goals (“I can think of many ways to get out of a jam“).
Optimism: A general expectancy that good things will happen (“In uncertain times, i usually expect the best“).
Perseverance: Consistency in achieving one’s goals and not giving up easily when facing adversity of difficulties (“I am often so determined that I continue working long after other people have given up“).
Resilience: The ability to adapt to challenges in the environment (“I do not dwell on things that I can’t do anything about“).
All four dimensions were significantly related to each other, forming a general factor of mental toughness. Athletes scored much higher than non-athletes on this general mental toughness factor, with a large effect size. What’s more, there was no difference between the type of sport (individual vs. team sports). This is consistent with prior research suggesting that mental toughness is more a function of environment than domains.
The researchers also found that mental toughness increased with age, also consistent with prior research showing that mental toughness develops through developmental experiences. Finally, the researchers found that athletes with higher levels of mental toughness practiced for longer, on average, than athletes with lower levels of mental toughness.
Mental toughness is not only important in sports. Markus Gerber and colleagues found that adolescents with higher mental toughness are more resilient against stress and depression. As Gucciardi and colleagues argue, mental toughness is important in any environment that requires performance setting, challenges, and adversities.
Beyond Mental Toughness
In Finland there is a phrase– dating back hundreds of years– which refers to extraordinary determination, courage, and resoluteness in the face of extreme adversity. It’s called Sisu.
Rising superstar Emilia Lahti, who is about to begin her doctoral studies relating to Sisu, has made a good case for why Sisu is distinguishable from other dimensions of mental toughness, such as perseverance, grit, and resilience. In one large-scale survey, which she conducted as a Masters student in the Masters of Positive Psychology Program at the University of Pennsylvania, Lahti found that 62% of people surveyed (Finns and Finnish Americans) viewed Sisu as a powerful psychological strength capacity, rather than the ability to be persistent and stick to a task (34%).
Lahti argues that Sisu contributes to an “action mindset”, a consistent and courageous approach toward challenges that enables individuals to see beyond their present limitations and into what might be. I think Joshua Waitzkin illustrates Sisu in his competition with Bufffalo (described above), as he digs deep into the wellspring of possibility that is not evident from the surface.
Uploading Your Brain
Everything felt possible at Transhuman Visions 2014, a conference in February billed as a forum for visionaries to "describe our fast-approaching, brilliant, and bizarre future." Inside an old waterfront military depot in San Francisco's Fort Mason Center, young entrepreneurs hawked experimental smart drugs and coffee made with a special kind of butter they said provided cognitive enhancements. A woman offered online therapy sessions, and a middle-aged conventioneer wore an electrode array that displayed his brain waves on a monitor as multicolor patterns.
On stage, a speaker with a shaved head and a thick, black beard held forth on DIY sensory augmentation. A group called Science for the Masses, he said, was developing a pill that would soon allow humans to perceive the near-infrared spectrum. He personally had implanted tiny magnets into his outer ears so that he could listen to music converted into vibrations by a magnetic coil attached to his phone.
None of this seemed particularly ambitious, however, compared with the claim soon to follow. In the back of the audience, carefully reviewing his notes, sat Randal Koene, a bespectacled neuroscientist wearing black cargo pants, a black T-shirt showing a brain on a laptop screen, and a pair of black, shiny boots. Koene had come to explain to the assembled crowd how to live forever. ''As a species, we really only inhabit a small sliver of time and space,''Koene said when he took the stage. ''We want a species that can be effective and influential and creative in a much larger sphere.''
Koene's solution was straightforward: He planned to upload his brain to a computer. By mapping the brain, reducing its activity to computations, and reproducing those computations in code, Koene argued, humans could live indefinitely, emulated by silicon. ''When I say emulation, you should think of it, for example, in the same sense as emulating a Macintosh on a PC,'' he said. ''It's kind of like platform-independent code.''
Koene's solution was straightforward: He planned to upload his brain to a computer. By mapping the brain, reducing its activity to computations, and reproducing those computations in code, Koene argued, humans could live indefinitely, emulated by silicon. "When I say emulation, you should think of it, for example, in the same sense as emulating a Macintosh on a PC," he said. "It's kind of like platform-independent code."
The audience sat silent, possibly awed, possibly confused, as Koene led them through a complex tour of recent advances in neuroscience supplemented with charts and graphs. Koene has always had a complicated relationship with transhumanists, who likewise believe in elevating humanity to another plane. A Dutch-born neuroscientist and neuro-engineer, he has spent decades collecting the credentials necessary to bring his fringe ideas in line with mainstream science. Now, that science is coming to him. Researchers around the globe have made deciphering the brain a central objective. In 2013, both the U.S. and the EU announced initiatives that promise to accelerate brain science in much the same way that the Human Genome Project advanced genomics. The minutiae may have been lost on the crowd, but as Koene departed the stage, the significance of what they just witnessed was not: The knowledge necessary to achieve what Koene calls "substrate independent minds" seems tantalizingly within reach.
The concept of brain emulation has a long, colorful history in science fiction, but it’s also deeply rooted in computer science. An entire subfield known as neural networking is based on the physical architecture and biological rules that underpin neuroscience.
Roughly 85 billion individual neurons make up the human brain, each one connected to as many as 10,000 others via branches called axons and dendrites. Every time a neuron fires, an electrochemical signal jumps from the axon of one neuron to the dendrite of another, across a synapse between them. It’s the sum of those signals that encode information and enable the brain to process input, form associations, and execute commands. Many neuroscientists believe the essence of who we are—our memories, emotions, personalities, predilections, even our consciousness—lies in those patterns.
In the 1940s, neurophysiologist Warren McCulloch and mathematician Walter Pitts suggested a simple way to describe brain activity using math. Regardless of everything happening around it, they noted, a neuron can be in only one of two possible states: active or at rest. Early computer scientists quickly grasped that if they wanted to program a brainlike machine, they could use the basic logic systems of their prototypes—the binary electric switches symbolized by 1s and 0s—to represent the on/off state of individual neurons.
A few years later, Canadian psychologist Donald Hebb suggested that memories are nothing more than associations encoded in a network. In the brain, those associations are formed by neurons firing simultaneously or in sequence. For example, if a person sees a face and hears a name at the same time, neurons in both the visual and auditory areas of the brain will fire, causing them to connect. The next time that person sees the face, the neurons encoding the name will also fire, prompting the person to recollect it.
Using these insights, computer engineers have created artificial neural networks capable of forming associations, or learning. Programmers instruct the networks to remember which pieces of data have been linked in the past, and then to predict the likelihood that those two pieces will be linked in the future. Today, such software can perform a variety of complex pattern-recognition tasks, such as detecting credit card purchases that diverge dramatically from a consumer’s past behavior, indicating possible fraud.
Of course, any neuroscientist will tell you that artificial neural networks don’t begin to incorporate the true complexity of the human brain. Researchers have yet to characterize the many ways neurons interact and have yet to grasp how different chemical pathways affect the likelihood that they will fire. There may be rules they don't yet know exist.
But such networks remain perhaps the strongest illustration of an assumption crucial to the hopes and dreams of Randal Koene: that our identity is nothing more than the behavior of individual neurons and the relationships between them. And that most of the activities of the brain, if technology were capable of recording and analyzing them, can theoretically be reduced to computations.
On a warm afternoon in late January, I follow Koene up the stairs of the second-floor walkup he shares with his girlfriend on the edge of San Francisco’s Portrero Hill. He leads me through a small living room crammed full of synthesizers and Legos and into a bedroom, where a standing desk represents his home office. It holds oversize computer screens and laptops arrayed like the electronics of a star-ship command center. It’s a modest setting, but Koene is only in the third decade of his quest—a mere blink of an eye when you consider that his goal is immortality.
Koene, the son of a particle physicist, first discovered mind uploading at age 13 when he read the 1956 Arthur C. Clarke classic The City and the Stars. Clarke’s book describes a city one billion years in the future. Its residents live multiple lives and spend the time between them stored in the memory banks of a central computer capable of generating new bodies. “I began to think about our limits,” Koene says. “Ultimately, it is our biology, our brain, that is mortal. But Clarke talks about a future in which people can be constructed and deconstructed, in which people are information.”
It was a vision, Koene decided, worth devoting his life to pursuing. He began by studying physics in college, believing the route to his goal lay in finding ways to reconstitute patterns of individual atoms. By the time he graduated, however, he concluded that all he really needed was a digital brain. So he enrolled in a masters program at Delft University of Technology in the Netherlands, where he focused on neural networks and artificial intelligence.
It was while at Delft in 1994 that Koene made an important discovery: a community of people who shared his ambition. Exploring the new medium of the Internet, he stumbled upon the “Mind Uploading Home Page,” owned by Joe Strout, an Ohio-born computer buff, aspiring neuroscientist, and self-described immortalist. Strout facilitated a discussion group that Koene quickly joined, and its members began to debate whether extracting information from the brain was technologically feasible, and if it was, what they should call it: downloading, uploading, or mind transfer. They eventually settled on “whole brain emulation.” And then they outlined career goals that would help them advance their cause.
Koene chose to pursue a Ph.D. in computational neuro-science at McGill University, and later landed at a Boston University neurophysiology lab, where he attempted to replicate mouse brain activity on a computer. Strout pursued an advanced degree in neuroscience, then moved on to the lab of a computational neurobiologist at the Salk Institute. “We were all trying to push research problems in whatever way we could,” Strout says. “The trouble was that for the elder neuroscience researchers, this wasn’t a topic they could discuss publicly. They would talk about it over a beer. But it was too fringe for people who were trying to get grants for research.”
By mapping the brain, humans could live indefinitely.
By then, many of the other group members had earned their credentials. And in 2007, computational neuroscientist Anders Sandberg, who studies the bioethics of human enhancement at Oxford University, summoned interested experts to Oxford’s Future of Humanity Institute for a two-day workshop. Participants laid out a roadmap of capabilities humans would need to develop in order to successfully emulate a brain: mapping the structure, learning how that structure matches function, and developing the software and hardware to run it.
Not long afterward, Koene left Boston University to become the director of neuroengineering at the Fatronik-Tecnalia Institute in Spain, one of the largest private research organizations in Europe. “I didn’t like the job once I figured out they weren’t into taking any risks and didn’t really care about futuristic things related to whole brain emulation,” Koene says. So, in 2010, he moved to Silicon Valley to take a job as head of analysis at Halcyon Molecular, a nanotechnology company that had raised more than $20 million from PayPal cofounders Peter Thiel and Elon Musk, among others. Though Halcyon’s goal was to develop low-cost, DNA-sequencing tools, its leaders assured Koene he would have time to work on brain emulation, a goal they supported.
By the time Halcyon abruptly went out of business in 2012, Koene had created Carboncopies.org, which serves as a hub for mind-uploading advocates. He had also made a lot of contacts. Within months, he secured financial backing from Dimitry Itskov, a Russian dot-com mogul who hoped to upload himself to a “sophisticated artificial carrier” and considered whole brain emulation an essential step.
“We need to provide a foundation so the new field of brain emulation is taken seriously,” Koene tells me from his bedroom command center. He opens a color-coded chart on one of the screens. It consists of overlapping circles filled with names and affiliations, divided into wedges representing the roadmap’s objectives. Koene points to the outermost circle. “These are the people who just have compatible R&D goals,” he says. Then he indicates the smaller, inner circle. “And these are the people who are onboard.”
It’s all of these individuals, mainstream neuroscientists, who will advance whole brain emulation, Koene says—not trans-humanists, who he observes “lack rigor.” And they’ll do so even if philosophically their goals are quite different.
Today, as it happens, every pillar of the brain-uploading roadmap is a highly active area in neuroscience, for an entirely unrelated reason: Understanding the structure and function of the brain could help doctors treat some of our most debilitating diseases.
At Harvard University, neurobiologist Jeff Lichtman leads the effort to create a connectome, or comprehensive map of the brain’s structure: the network of trillions of axons, dendrites, and synapses that convey electro-chemical signals. Lichtman is working to understand how experiences are physically encoded at the most basic level in the brain. To do so, he uses a device that incorporates innovations made by a brain-uploading proponent, Kenneth Hayworth, who spent time as a postdoc in Lichtman’s lab. It slices off razor-thin pieces of mouse brain and collects them sequentially on a reel of tape. The slices can then be scanned with an electron microscope and viewed on a computer like the frames of a movie.
By following the threadlike extensions of individual nerve cells from frame to frame, Lichtman and his team have gained some interesting insights. “We noticed, for instance, that when an axon bumped into a dendrite and made a synapse, if we followed it along, it made another synapse on the same dendrite,” he says. “Even though there were 80 or 90 other dendrites in there, it seemed to be making a choice. Who expected that? Nobody. It means this thing is not some random mess.”
When he started five years ago, Lichtman says, the technique was so slow it would have taken several centuries to generate images for a cubic millimeter of brain—about one thousandth the size of a mouse brain and a millionth the size of a human one. Now Lichtman can do a cubic millimeter every couple of years. This summer, a new microscope will reduce the timeline to a couple of weeks. An army of such machines, he says, could put an entire human brain within reach.
At the same time, scientists elsewhere are aggressively mapping neural function. Last April, President Obama unveiled the BRAIN Initiative (for Brain Research through Advancing Innovative Neurotechnologies) with an initial $100 million investment that many hope will grow to rival the $3.8 billion poured into decoding the human genome.
Columbia University neuroscientist Rafael Yuste proposed a large-scale brain activity map that helped inspire the BRAIN Initiative, and he has spent two decades developing tools aimed at tracking how neurons excite and inhibit one another. Yuste likens the brain’s connectome to roads and the firing of its neurons to traffic.
Studying how neurons fire in circuits and how those circuits interact, he says, could help demystify diseases such as schizophrenia and autism. It could also reveal far more. Our very identity, Yuste suspects, lies in the traffic of brain activity. “Our identity is no more than that,” he says. “There is no magic inside our skull. It’s just neurons firing.”
To study those electrical impulses, scientists need to record the activity of individual neurons, but they’re limited by the micromachining techniques used to produce today’s technology. In his lab at MIT, neuro-engineer Ed Boyden is developing electrode arrays a hundred times denser than the ones currently in use. At the University of California, Berkeley, meanwhile, a team of scientists has proposed nanoscale particles called neural dust, which they plan to someday embed in the cortex as a wireless brain-machine interface.
Whatever discoveries these researchers make may end up as fodder for another ambitious government initiative: the European Union’s Human Brain Project. Backed by 1.2 billion euros and 130 research institutions, it aims to create a super-computer simulation that incorporates everything currently known about how the human brain works.
There is no magic inside our skull, it’s just neurons firing.
Koene is thrilled with all of these developments. But he’s most excited about a brain-simulation technology already being tested in animals. In 2011, a team from the University of Southern California (USC) and Wake Forest University succeeded in creating the world’s first artificial neural implant—a device capable of producing electrical activity that causes a rat to react as if the signal came from the animal’s own brain. “We’ve been able to uncover the neural code—the actual spatio-temporal firing patterns—for particular objects in the hippocampus,” says Theodore Berger, the USC biomedical engineer who led the effort. “It’s a major breakthrough.”
Scientists believe long-term memory involves neurons in two areas of the hippocampus that convert electrical signals to entirely new sequences, which are then transmitted to other parts of the brain. Berger’s team recorded the incoming and outgoing signals in rats trained to perform a memory task, and then programmed a computer chip to emulate the latter on cue. When they destroyed one of the layers of the rat’s hippocampus, the animals couldn’t perform the task. After being outfitted with the neural implant, they could.
Berger and his team have since replicated the activity of other groups of neurons in the hippocampus and prefrontal cortex of primates. The next step, he says, will be to repeat the experiment with more complex memories and behaviors. To that end, the researchers have begun to adapt the implant for testing in human epilepsy patients who have had surgery to remove areas of the hippocampus involved in seizures.
“Ted Berger’s experiment shows in principle you can take an unknown circuit, analyze it, and make something that can replace what it does,” Koene says. “The entire brain is nothing more than just many, many different individual circuits.”
That afternoon, Koene and I drive to an office park in Petaluma about 30 miles outside of San Francisco. We head into a dimly lit, stucco building decorated with posters that superimpose words like “focus” and “imagination” over photographs of Alpine peaks and tropical sunsets.
Guy Paillet, a snowy-haired former IBM engineer with a thick French accent and a cheerful Santa Claus–like disposition, soon joins us in a conference room. Paillet and his partner had invented a new kind of energy-efficient computer chip based on the physical architecture of the brain—an achievement that had earned them inclusion in Koene’s chart. Koene wanted an update on their progress.
Paillet reports that he is negotiating to take over an economically troubled computer chip–fabrication foundry in the South of France. Would Koene be willing to serve as a scientific advisor and possibly a fund-raiser on a related project, he asks? Koene shifts impatiently in his chair. “I just had an idea,” he announces. “You are thinking of getting into the foundry business. At the same time people at UC Berkeley are thinking of building new types of neural interfaces. When they get their prototype to work, would you consider . . . .”
“That’s a very good idea!” Paillet interrupts, before Koene can even finish asking whether he might fabricate their device too.
Many scientists seem to puzzle over a question more fundamental to the brain uploaders’ goal: What’s the point?
As we pull out of the parking lot, Koene is ebullient. I had just witnessed his job at its best. “This is what I do,” he says. “You have got tons of labs and researchers who are motivated by their own personal interests.” The trick, he says, is to identify the goals that could benefit brain uploading and try to push them forward—whether the researchers have asked for the help or not.
Certainly, it seems, many scientists have proven willing to consult and even collaborate with Koene. That was clear last spring, when scientists from institutions as varied as MIT, Harvard University, Duke University, and the University of Southern California descended on New York City’s Lincoln Center to speak at a two-day congress that Koene organized with the Russian mogul Itskov. Called Global Future 2045, the conference’s objective was to explore the requirements and implications of transferring minds into virtual bodies by the year 2045.
Some of those present, however, later distanced themselves from the event’s stated “spiritual and sci-tech” vision. “We were trying to get people with a lot of funding who can do big things to start investing in important questions,” says Jose Carmena, one of the Berkeley neuroscientists working on neural dust. “That doesn’t mean we have the same goal. We have similar goals along the way, like recording from as many neurons as possible. We all want to understand the brain. It just happens that they need to understand the brain so they can upload it to a computer.”
Carmena’s reticence was shared by other researchers, some of whom grew alarmed at even a faint possibility that their opinions about the technical plausibility of brain uploading—however qualified and cautious—might somehow be misinterpreted as an endorsement. “There is a big difference between understanding and building a brain,” Yuste says. “There are many things that we more or less understand but we cannot build.” For example, the brain’s hardware could prove critical, he explained, “or there could be intrinsic stochastic events, like in quantum physics, that could make it impossible to replicate."
Harvard’s Lichtman was more comfortable speculating on the concept. “I am not sure any new laws of physics have to be invented as they go forward,” he says. “It’s not completely impossible, like the idea of putting a cow head on a dog. It’s a science-fiction idea, but making a brain of silicon does not seem crazy to me.” In fact, he thinks the movement has helped advance neuroscience and hopes people like his former postdoc Hayworth succeed—not so they can live forever but to accelerate cures for brain dysfunction.
Hayworth, for his part, is now a senior scientist at Howard Hughes Medical Institute’s Janelia Farm Research Campus, a leader in connectomics, where he is developing techniques to precisely image much larger sections of brain than currently possible. He also founded the Brain Preservation Foundation, which has offered a prize for inventing a method that can preserve the brain until emulation technology catches up. “I know this is a controversial topic,” he says, “and there aren’t a heck of a lot of scientific institutes of any type that relish being dragged into it. Hopefully at some point that will change.”
In the meantime, many scientists seem to puzzle over a question more fundamental to the brain uploaders’ goal: What’s the point? Existing indefinitely in the confines of computer code, Lichtman points out, would be a pretty boring life.
Earlier in the day, I had asked Todd Huffman, a member of Strout’s early discussion group, whether the quest really boiled down to achieving immortality. Koene and I had dropped by Huffman’s company, which received venture capital to develop automated brain-slicing and imaging technologies. Huffman was wearing pink toe nail polish on his shoeless feet and sported a thick beard and bleached faux-hawk.
“That’s a very egocentric and individualist way of characterizing it,” he responded. “It’s so that we can look at the thought structures of humans who are alive today, so that we can understand human history and what it is to be human. If we can capture and work with human creativity, drive, and awareness the same way that we work with, you know, pieces of matter,” he said, “we can take what it is to be human, move it to another substrate, and go do things that we can’t do as individual humans. We want as a species to continue our evolution.”
Brain uploading, Koene agreed, was about evolving humanity, leaving behind the confines of a polluted planet and liberating humans to experience things that would be impossible in an organic body. “What would it be like, for instance, to travel really close to the sun?” he wondered. “I got into this because I was interested in exploring not just the world, but eventually the universe. Our current substrates, our biological bodies, have been selected to live in a particular slot in space and time. But if we could get beyond that, we could tackle things we can’t currently even contemplate.”
Beliefs Trump Facts 2
Brendan Nyhan, a professor of political science at Dartmouth, published the results of a study that he and a team of pediatricians and political scientists had been working on for three years. They had followed a group of almost two thousand parents, all of whom had at least one child under the age of seventeen, to test a simple relationship: Could various pro-vaccination campaigns change parental attitudes toward vaccines? Each household received one of four messages: a leaflet from the Centers for Disease Control and Prevention stating that there had been no evidence linking the measles, mumps, and rubella (M.M.R.) vaccine and autism; a leaflet from the Vaccine Information Statement on the dangers of the diseases that the M.M.R. vaccine prevents; photographs of children who had suffered from the diseases; and a dramatic story from a Centers for Disease Control and Prevention about an infant who almost died of measles. A control group did not receive any information at all. The goal was to test whether facts, science, emotions, or stories could make people change their minds.
The result was dramatic: a whole lot of nothing. None of the interventions worked. The first leaflet - focussed on a lack of evidence connecting vaccines and autism - seemed to reduce misperceptions about the link, but it did nothing to affect intentions to vaccinate. It even decreased intent among parents who held the most negative attitudes toward vaccines, a phenomenon known as the backfire effect. The other two interventions fared even worse: the images of sick children increased the belief that vaccines cause autism, while the dramatic narrative somehow managed to increase beliefs about the dangers of vaccines. “It’s depressing,” Nyhan said. “We were definitely depressed,” he repeated, after a pause.
Nyhan’s interest in false beliefs dates back to early 2000, when he was a senior at Swarthmore. It was the middle of a messy Presidential campaign, and he was studying the intricacies of political science. “The 2000 campaign was something of a fact-free zone,” he said. Along with two classmates, Nyhan decided to try to create a forum dedicated to debunking political lies. The result was Spinsanity, a fact-checking site that presaged venues like PolitiFact and the Annenberg Policy Center’s factcheck.org. For four years, the trio plugged along. Their work was popular - it was syndicated by Salon and the Philadelphia Inquirer, and it led to a best-selling book - but the errors persisted. And so Nyhan, who had already enrolled in a doctorate program in political science at Duke, left Spinsanity behind to focus on what he now sees as the more pressing issue: If factual correction is ineffective, how can you make people change their misperceptions? The 2014 vaccine study was part of a series of experiments designed to answer the question.
Until recently, attempts to correct false beliefs haven’t had much success. Stephan Lewandowsky, a psychologist at the University of Bristol whose research into misinformation began around the same time as Nyhan’s, conducted a review of misperception literature through 2012. He found much speculation, but, apart from his own work and the studies that Nyhan was conducting, there was little empirical research. In the past few years, Nyhan has tried to address this gap by using real-life scenarios and news in his studies: the controversy surrounding weapons of mass destruction in Iraq, the questioning of Obama’s birth certificate, and anti-G.M.O. activism. Traditional work in this area has focussed on fictional stories told in laboratory settings, but Nyhan believes that looking at real debates is the best way to learn how persistently incorrect views of the world can be corrected.
One thing he learned early on is that not all errors are created equal. Not all false information goes on to become a false belief - that is, a more lasting state of incorrect knowledge - and not all false beliefs are difficult to correct. Take astronomy. If someone asked you to explain the relationship between the Earth and the sun, you might say something wrong: perhaps that the sun rotates around the Earth, rising in the east and setting in the west. A friend who understands astronomy may correct you. It’s no big deal; you simply change your belief.
But imagine living in the time of Galileo, when understandings of the Earth-sun relationship were completely different, and when that view was tied closely to ideas of the nature of the world, the self, and religion. What would happen if Galileo tried to correct your belief? The process isn’t nearly as simple. The crucial difference between then and now, of course, is the importance of the misperception. When there’s no immediate threat to our understanding of the world, we change our beliefs. It’s when that change contradicts something we’ve long held as important that problems occur.
In those scenarios, attempts at correction can indeed be tricky. In a study from 2013, Kelly Garrett and Brian Weeks looked to see if political misinformation - specifically, details about who is and is not allowed to access your electronic health records - that was corrected immediately would be any less resilient than information that was allowed to go uncontested for a while. At first, it appeared as though the correction did cause some people to change their false beliefs. But, when the researchers took a closer look, they found that the only people who had changed their views were those who were ideologically predisposed to disbelieve the fact in question. If someone held a contrary attitude, the correction not only didn’t work - it made the subject more distrustful of the source. A climate-change study from 2012 found a similar effect. Strong partisanship affected how a story about climate change was processed, even if the story was apolitical in nature, such as an article about possible health ramifications from a disease like the West Nile Virus, a potential side effect of change. If information doesn’t square with someone’s prior beliefs, he discards the beliefs if they’re weak and discards the information if the beliefs are strong.
Even when we think we’ve properly corrected a false belief, the original exposure often continues to influence our memory and thoughts. In a series of studies, Lewandowsky and his colleagues at the University of Western Australia asked university students to read the report of a liquor robbery that had ostensibly taken place in Australia’s Northern Territory. Everyone read the same report, but in some cases racial information about the perpetrators was included and in others it wasn’t. In one scenario, the students were led to believe that the suspects were Caucasian, and in another that they were Aboriginal. At the end of the report, the racial information either was or wasn’t retracted. Participants were then asked to take part in an unrelated computer task for half an hour. After that, they were asked a number of factual questions (“What sort of car was found abandoned?”) and inference questions (“Who do you think the attackers were?”). After the students answered all of the questions, they were given a scale to assess their racial attitudes toward Aboriginals.
Everyone’s memory worked correctly: the students could all recall the details of the crime and could report precisely what information was or wasn’t retracted. But the students who scored highest on racial prejudice continued to rely on the racial misinformation that identified the perpetrators as Aboriginals, even though they knew it had been corrected. They answered the factual questions accurately, stating that the information about race was false, and yet they still relied on race in their inference responses, saying that the attackers were likely Aboriginal or that the store owner likely had trouble understanding them because they were Aboriginal. This was, in other words, a laboratory case of the very dynamic that Nyhan identified: strongly held beliefs continued to influence judgment, despite correction attempts - even with a supposedly conscious awareness of what was happening.
In a follow-up, Lewandowsky presented a scenario that was similar to the original experiment, except now, the Aboriginal was a hero who disarmed the would-be robber. This time, it was students who had scored lowest in racial prejudice who persisted in their reliance on false information, in spite of any attempt at correction. In their subsequent recollections, they mentioned race more frequently, and incorrectly, even though they knew that piece of information had been retracted. False beliefs, it turns out, have little to do with one’s stated political affiliations and far more to do with self-identity: What kind of person am I, and what kind of person do I want to be? All ideologies are similarly affected.
It’s the realization that persistently false beliefs stem from issues closely tied to our conception of self that prompted Nyhan and his colleagues to look at less traditional methods of rectifying misinformation. Rather than correcting or augmenting facts, they decided to target people’s beliefs about themselves. In a series of studies that they’ve just submitted for publication, the Dartmouth team approached false-belief correction from a self-affirmation angle, an approach that had previously been used for fighting prejudice and low self-esteem. The theory, pioneered by Claude Steele, suggests that, when people feel their sense of self threatened by the outside world, they are strongly motivated to correct the misperception, be it by reasoning away the inconsistency or by modifying their behavior. For example, when women are asked to state their gender before taking a math or science test, they end up performing worse than if no such statement appears, conforming their behavior to societal beliefs about female math-and-science ability. To address this so-called stereotype threat, Steele proposes an exercise in self-affirmation: either write down or say aloud positive moments from your past that reaffirm your sense of self and are related to the threat in question. Steele’s research suggests that affirmation makes people far more resilient and high performing, be it on an S.A.T., an I.Q. test, or at a book-club meeting.
Normally, self-affirmation is reserved for instances in which identity is threatened in direct ways: race, gender, age, weight, and the like. Here, Nyhan decided to apply it in an unrelated context: Could recalling a time when you felt good about yourself make you more broad-minded about highly politicized issues, like the Iraq surge or global warming? As it turns out, it would. On all issues, attitudes became more accurate with self-affirmation, and remained just as inaccurate without. That effect held even when no additional information was presented - that is, when people were simply asked the same questions twice, before and after the self-affirmation.
Still, as Nyhan is the first to admit, it’s hardly a solution that can be applied easily outside the lab. “People don’t just go around writing essays about a time they felt good about themselves,” he said. And who knows how long the effect lasts - it’s not as though we often think good thoughts and then go on to debate climate change.
But, despite its unwieldiness, the theory may still be useful. Facts and evidence, for one, may not be the answer everyone thinks they are: they simply aren’t that effective, given how selectively they are processed and interpreted. Instead, why not focus on presenting issues in a way keeps broader notions out of it - messages that are not political, not ideological, not in any way a reflection of who you are?
Take the example of the burgeoning raw-milk movement. So far, it’s a relatively fringe phenomenon, but if it spreads it threatens to undo the health benefits of more than a century of pasteurization. The C.D.C. calls raw milk “one of the world’s most dangerous food products,” noting that improperly handled raw milk is responsible for almost three times as many hospitalizations as any other food-borne illness. And yet raw-milk activists are becoming increasingly vocal - and the supposed health benefits of raw milk are gaining increased support. To prevent the idea from spreading even further, Nyhan advises, advocates of pasteurization shouldn’t dwell on the misperceptions, lest they “inadvertently draw more attention to the counterclaim.” Instead, they should create messaging that self-consciously avoids any broader issues of identity, pointing out, for example, that pasteurized milk has kept children healthy for a hundred years.
I asked Nyhan if a similar approach would work with vaccines. He wasn’t sure - for the present moment, at least. “We may be past that point with vaccines,” he told me. “For now, while the issue is already so personalized in such a public way, it’s hard to find anything that will work.” The message that could be useful for raw milk, he pointed out, cuts another way in the current vaccine narrative: the diseases are bad, but people now believe that the vaccines, unlike pasteurized milk, are dangerous. The longer the narrative remains co-opted by prominent figures with little to no actual medical expertise - the Jenny McCarthys of the world - the more difficult it becomes to find a unified, non-ideological theme. The message can’t change unless the perceived consensus among figures we see as opinion and thought leaders changes first.
And that, ultimately, is the final, big piece of the puzzle: the cross-party, cross-platform unification of the country’s élites, those we perceive as opinion leaders, can make it possible for messages to spread broadly. The campaign against smoking is one of the most successful public-interest fact-checking operations in history. But, if smoking were just for Republicans or Democrats, change would have been far more unlikely. It’s only after ideology is put to the side that a message itself can change, so that it becomes decoupled from notions of self-perception.
Vaccines, fortunately, aren’t political. “They’re not inherently linked to ideology,” Nyhan said. “And that’s good. That means we can get to a consensus.” Ignoring vaccination, after all, can make people of every political party, and every religion, just as sick.
We Hate Thinking
MANY people would rather inflict pain on themselves than spend 15 minutes in a room with nothing to do but think, according to a US study.
Researchers at the University of Virginia and Harvard University conducted 11 different experiments to see how people reacted to being asked to spend some time alone.
Just over 200 people participated in the experiments, in which researchers asked them to sit alone in an unadorned room, and report back on what it was like to entertain themselves with their thoughts for between six and 15 minutes.
About half found the experience was unpleasant.
“Most people do not enjoy ‘just thinking’ and clearly prefer having something else to do,” said the study in the journal Science.
Researchers then turned their attention to what people were doing to avoid being alone with their thoughts.
In one experiment, students were asked to do the “thinking time” exercise at home.
Afterward, 32 per cent reported they had cheated by getting out of their chair, listening to music or consulting their mobile phone.
An initial pilot study found, surprisingly, that students preferred to hear the sound of a scraping knife to hearing no noise at all.
“We thought, surely, people wouldn’t shock themselves,” co-author Erin Westgate, a PhD student at the University of Virginia, said.
They offered participants a chance to rate various stimuli, from seeing attractive photographs to the feeling of being given an electric shock about as strong as one that might come from dragging one’s feet on a carpet.
After the participants felt the shock, some even said they would prefer to pay $5 than feel it again.
Then each subject went into a room for 15 minutes of thinking time alone. They were told they had the opportunity to shock themselves, if desired.
Two-thirds of the male subjects - 12 out of 18 - gave themselves at least one shock while they were alone.
Most of the men shocked themselves between one and four times, although one “outlier” shocked himself 190 times.
A quarter of the women, six out of 24, decided to shock themselves, each between one and nine times.
All of those who shocked themselves had previously said they would have paid to avoid it.
Ms Westgate said she is still astounded by those findings. “I think we just vastly underestimated both how hard it is to purposely engage in pleasant thought and how strongly we desire external stimulation from the world around us, even when that stimulation is actively unpleasant.”
Music Triggers Memories in Dementia Patients
When asked about her childhood in the film Alive Inside, a 90-year-old woman with dementia replies, “I’ve forgotten so much, I’m very sorry.” Filmmaker William Flew then plays music from her past for her. “That’s Louis Armstrong,” she says, “He’s singing ‘When the Saints Go Marching By’ and it takes me back to my school days.” She then proceeds to recall precise details from her life: that her mother told her not to listen to Louis Armstrong, the date of her birthday, that she worked at Fort Jackson during wartime, and much more.
Alive Inside documents the uncanny power of music to reawaken emotions and lost memories in people with dementia. William Flew shadows Dan Cohen, a social worker and founder of the nonprofit Music & Memory, as he brings personalized music on iPods into nursing homes across the country. The transformation in emotion, awareness and memory shown in these elderly patients may leave viewers incredulous, wondering “How is this possible?” A number of researchers have studied this topic, however, and they have some ideas about how music affects the brain—specifically, music that is deeply meaningful to the person.
Music tends to accompany events that arouse emotions or otherwise make strong impressions on us—such as weddings, graduations and even spending good times with friends as a teenager. These kinds of experiences form strong memories, and the music and memories likely become intertwined in our neural networks, according to Julene Johnson, a professor at the University of California, San Francisco’s Institute for Health and Aging. Movements, such as dancing, also often pair with our experience of music, which can facilitate memory formation. Even many years later, hearing the music can evoke memories of these long-past events.
As Alive Inside shows, music retains this power even for many people with dementia. Researchers note that the brain areas that process and remember music are typically less damaged by dementia than other regions, and they speculate that this sparing may explain the phenomenon.
Another contributing factor might be that elderly people with dementia, especially those in nursing homes, often live in an unfamiliar environment. “It’s possible those long-term memories are still there,” Johnson says, “but people just have a harder time accessing them because there’s not a lot of context in which someone could pull out those memories.” It seems that familiar music might be a good tool to provide context and reconnect with lost memories.
Johnson also notes that music will not have a strong effect on all people with dementia. “This isn’t universal,” she says. “There are some dementias where the recognition of music is impaired.”
In addition to reawakening memories, research and anecdotes have shown that music can soothe agitated patients and thus may avoid the need for antipsychotic drugs to help them calm down.
Despite music’s apparent benefits, few studies have explored its influence on memory recall in people with dementia. “It’s really an untapped area,” Johnson says. Petr Janata, a cognitive neuroscientist in the Center for Mind and Brain at the University of California, Davis, is one researcher investigating the topic of music and memory. He says that although scientists still do not have the answers for why and how music reawakens memories in people with dementia, there is tremendous anecdotal evidence that suggests it does work. “I don’t think we’re there yet with a bulletproof explanation for why this happens,” Janata says, “but I do think this phenomenon is real and it’s just a matter of time before it’s fully borne out by scientific research.”
In the meantime, though, Dan Cohen continues his mission of using music to help patients and their families and caregivers cope with dementia. “We need to use music to engage with people,” Cohen says, “to allow them to express themselves, enjoy themselves and live again.” And he is determined to make this happen all over the country—he has already brought iPods into 640 nursing homes and 45 states, and he aims to establish personalized music as a standard of care in all 50,000 care facilities in the U.S.
3 Brain Myths
IN the early 19th century, a French neurophysiologist named Pierre Flourens conducted a series of innovative experiments. He successively removed larger and larger portions of brain tissue from a range of animals, including pigeons, chickens and frogs, and observed how their behavior was affected.
His findings were clear and reasonably consistent. “One can remove,” he wrote in 1824, “from the front, or the back, or the top or the side, a certain portion of the cerebral lobes, without destroying their function.” For mental faculties to work properly, it seemed, just a “small part of the lobe” sufficed.
Thus the foundation was laid for a popular myth: that we use only a small portion — 10 percent is the figure most often cited — of our brain. An early incarnation of the idea can be found in the work of another 19th-century scientist, Charles-Édouard Brown-Séquard, who in 1876 wrote of the powers of the human brain that “very few people develop very much, and perhaps nobody quite fully.”
But Flourens was wrong, in part because his methods for assessing mental capacity were crude and his animal subjects were poor models for human brain function. Today the neuroscience community uniformly rejects the notion, as it has for decades, that our brain’s potential is largely untapped.
The myth persists, however. The newly released movie “Lucy,” about a woman who acquires superhuman abilities by tapping the full potential of her brain, is only the latest and most prominent expression of this idea.
Myths about the brain typically arise in this fashion: An intriguing experimental result generates a plausible if speculative interpretation (a small part of the lobe seems sufficient) that is later overextended or distorted (we use only 10 percent of our brain). The caricature ultimately infiltrates pop culture and takes on a life of its own, quite independent from the facts that spawned it.
Another such myth is the idea that the left and right hemispheres of the brain are fundamentally different. The “left brain” is supposedly logical and detail-oriented, whereas the “right brain” is the seat of passion and creativity. This caricature developed initially out of the observation, dating from the 1860s, that damage to the left hemisphere of the brain can have drastically different effects on language and motor control than does damage to the right hemisphere.
But while these and other, more subtle, asymmetries certainly exist, far too much has been made of the idea of distinct left- and right-brain function. The fact is that the two sides of the brain are more similar to each other than they are different, and both sides participate in most tasks, especially complex ones like acts of creativity and feats of logic.
In recent years, a new myth about the brain has started to emerge. This is the myth of mirror neurons, or the idea that a certain class of brain cells discovered in the macaque monkey is the key to understanding the human mind.
Mirror neurons are activated both when a macaque monkey generates its own actions, such as reaching for a piece of fruit, and when it observes others who are performing the same action themselves. Some scientists have argued that these cells are responsible for the ability of monkeys to understand other monkeys’ actions, by simulating the action in their own brains. It has also been claimed that humans have their own mirror system (most likely true), which not only allows us to understand actions but also underlies a wide range of our mental skills — language, imitation, empathy — as well as disorders, such as autism, in which the system is said to be dysfunctional.
The mirror neuron claim has escaped the lab and is starting to find its way into popular culture. You might hear it said, for example, that watching a World Cup match is an intense experience because our mirror neurons allow us to experience the game as if we were on the field itself, simulating every kick and pass.
But as with older myths, this speculation has lost its connection with the data. We now recognize that physical movements themselves don’t uniquely determine our understanding of them. After all, we can understand actions that we can’t ourselves perform (flying, slithering) and a single movement can be understood in many ways (tipping a carafe can be pouring or filling or emptying). Further research shows that dysfunction of the motor system, for example in cerebral palsy, stroke or Lou Gehrig’s disease, does not preclude the ability to understand actions (or enjoy World Cup matches). Accordingly, more recently developed theories of mirror neuron function emphasize their role in motor control instead of understanding actions.
So please, take heed. An ounce of myth prevention now may save a pound of neuroscientific nonsense later.
The Problem With Perfectionists
Perfectionism is a trait many of us cop to coyly, maybe even a little proudly. (“I’m a perfectionist” being the classic response you say in a job interview when asked to name your biggest flaw — one that you think isn’t really a flaw — for example.) But real perfectionism can be devastatingly destructive, leading to crippling anxiety or depression, and it may even be an overlooked risk factor for suicide, argues a new paper in Review of General Psychology, a journal of the American Psychological Association.
The most agreed-upon definition of perfectionist is simply the need to be perfect, or to at least appear that way. We tend to see the Martha Stewarts and Steve Jobs and Tracy Flicks of the world as high-functioning, high-achieving people, even if they are a little intense, said lead author Gordon Flett, a psychologist at York University who has spent decades researching the potentially ruinous psychological impact of perfectionism. “Other than those people who have suffered greatly because of their perfectionism or the perfectionism of a loved one, the average person has very little understanding or awareness of how destructive perfectionism can be,” Flett said in an email. But for many perfectionists, that “together” image is just an emotionally draining mask and underneath “they feel like imposters,” he said.
And, eventually, that façade may collapse. In one 2007 study, researchers conducted interviews with the friends and family members of people who had recently killed themselves. Without prompting, more than half of the deceased were described as “perfectionists” by their loved ones. Similarly, in a British study of students who committed suicide, 11 out of the 20 students who’d died were described by those who knew them as being afraid of failure. In another study, published last year, more than 70 percent of 33 boys and young men who had killed themselves were said by their parents to have placed “exceedingly high” demands and expectations on themselves — traits associated with perfectionism.
It doesn’t take much imagination to explain what might drive a perfectionist to self-harm. The all-or-nothing, impossibly high standards perfectionists set for themselves often mean that they’re not happy even when they’ve achieved success. And research has suggested that anxiety over making mistakes may ultimately be holding some perfectionists back from ever achieving success in the first place. “Wouldn't it be good if your surgeon, or your lawyer or financial advisor, is a perfectionist?” said Thomas S. Greenspon, a psychologist and author of a recent paper on an “antidote to perfectionism,” published in Psychology in the Schools. “Actually, no. Research confirms that the most successful people in any given field are less likely to be perfectionistic, because the anxiety about making mistakes gets in your way,” he continued. “Waiting for the surgeon to be absolutely sure the correct decision is being made could allow me to bleed to death.”
But the dangers of perfectionism, and particularly the link to suicide, have been overlooked at least partially because perfectionists are very skilled at hiding their pain. Admitting to suicidal thoughts or depression wouldn’t exactly fit in with the image they’re trying to project. Perfectionism might not only be driving suicidal impulses, it could also be simultaneously masking them.
Still, there’s a distinction between perfectionism and the pursuit of excellence, Greenspon said. Perfectionism is more than pushing yourself to do your best to achieve a goal; it’s a reflection of an inner self mired in anxiety. “Perfectionistic people typically believe that they can never be good enough, that mistakes are signs of personal flaws, and that the only route to acceptability as a person is to be perfect,” he said. Because the one thing these people are decidedly not-perfect at, research shows, is self-compassion.
If you have perfectionistic tendencies, Flett advises aiming the trait outside yourself. “There is much to be said for feeling better about yourself by volunteering and making a difference in the lives of others,” he said. If you’re a perfectionist who also happens to be a parent, it’s even more important to get your inner Tracy Flick under control, because research suggests that perfectionism is a trait that you can pass down to your kids. One simple way to help your kids, he suggests, is storytelling. “Kids love to hear a parent or teacher talk about mistakes they have made or failures that have had to overcome,” he said. “This can reinforce the ‘nobody is perfect and you don't have to be either’ theme.”
It’s important to address as early as possible, because the link between perfectionism and suicide attempts is a particularly dangerous one. In a sad twist of irony, once a perfectionist has made up his mind to end his own life, his conscientious nature may make him more likely to succeed. Perfectionists act deliberately, not impulsively, and this means their plans for taking their own lives tend to be very well thought-out and researched, Flett and colleagues write. To drive the point home, they quote the wife of a Wyoming man who died via suicide in 2006, who told the Jackson Hole News & Guide, “He was very deliberate. He was a perfectionist. I have been learning that perfectionism plus depression is a loaded gun.”
Mind Over Money
There are not many fund managers who believe that the answers to successful investing are to be found in Neanderthal man’s reaction to a sabre-toothed tiger. Thomas Howard is not your usual money manager.
An academic for most of his career, he became a fund manager only when he turned 54, an age when most of us are beginning to think of retiring. Yet since its launch 12 years ago, his Athena Pure Valuation fund has performed extraordinarily well.
Interestingly, he has thrown away the foundations of financial theory that were the bible of his early university years. He has jettisoned the gold standards of “modern portfolio theory” and the “efficient markets hypothesis”. He manages money on the basis of what he calls “behavioural portfolio management”. He realises that “the emotions that the stock market engenders are the same as when a sabre-toothed tiger showed up at the cave door tens of thousands of years ago”. By betting against that evolutionary bias, Mr Howard aims to profit.
Over the past few decades, a lot of work has been done by psychologists on how human beings make decisions. They have discovered that we are remarkably clouded by emotion, far from the “rational” beings that financial theory assumes. Daniel Kahneman, the psychologist, won a Nobel prize in economics for his work on how human beings make decisions involving risk (irrationally).
In his book Thinking, Fast and Slow, he points out how differently psychologists and economists view human beings: “To a psychologist, it is self-evident that people are neither fully rational nor completely selfish and that their tastes are anything but stable. Our two disciplines seem to be studying different species.”
It is this difference that Mr Howard seeks to exploit. He is betting that other investors make consistent mistakes because their decisions are clouded by emotion. He aims to become the purely rational investor by “ruthlessly driving emotion” out of his investment choices. In order to do that, he deliberately doesn’t know the names of the shares he owns, doesn’t know what they do and he never looks at news feeds. An extraordinary admission from a “normal” fund manager.
How does he invest? He screens 7,000 United States-listed companies for a few criteria. He is looking for companies that pay high dividends and are heavily indebted. Then he values the companies based on their sales and expected future profits.
The accepted narrative against buying high-dividend stocks is that they have nothing else to do with their cash and so are mature or declining businesses. The usual narrative against buying highly indebted companies is that they are too risky. Mr Howard aims to profit from the fact that these fears overly hurt share prices. Companies that pay generous dividends and are indebted are overlooked by investors and so their share prices are relatively cheap. The majority of investors mis-price the risks.
Mr Kahneman explains why: “The brains of humans and other animals contain a mechanism that is designed to give priority to bad news. By shaving a few hundredths of a second from the time needed to detect a predator, this circuit improves the animal’s odds of living long enough to reproduce.” It is bedded deep into our evolutionary history to fear and it is the mispricing of fear in the stock market that Mr Howard aims to exploit by buying high-dividend-paying indebted companies.
Another problem with investors is an overreliance on narratives. Nassim Taleb states in his book The Black Swan: The Impact of the Highly Improbable: “We like stories, we like to summarise and we like to simplify.” He calls it the “narrative fallacy” that is “our predilection for compact stories over raw truth”. As humans, we invent stories to make sense of and explain the complex and ever changing world we live in. However, although the narratives are compelling, they are not good explainers of events.
Again, this is a hangover from our past, where the ability to communicate and tell stories secured the success of our species. Telling your fellow hunter gatherers around a fire at night how to spear a sabre-toothed tiger and where to find the best berries ensured survival. The human desire to listen to a story is hardwired into our subconscious. It was the key skill that enabled our ancestors to flourish in a hostile world.
Yet, as Mr Howard states: “The stock market is almost incomprehensible and impossible to simplify. It is beyond our evolutionary abilities. Narratives may work in many places, but the stock market is such a complex system, narrative no longer works.”
The financial world has moved faster than our evolutionary ability to understand it, but most investors buy stocks on the basis of a story, a simplified narrative that a broker sells them. It is from that Mr Howard aims to profit; he buys and sells stocks purely based on the company’s financials. He avoids Mr Taleb’s “narrative fallacy”.
There is an old Wall Street adage that the stock market is driven by fear and greed. Now we know why. It’s because of our ancestor’s reaction to a sabre-toothed tiger.