If we are surrounded by ugliness or disorder, we tend to act in surprisingly antisocial ways, as new research has shown
The Cialdini effect might sound like a new mind-control trick from the illusionist Derren Brown, but it is more sinister than that. It is indeed a mind-control trick, but one that requires no tricksy showman to pull it off. If, like me, you have ever abandoned a shopping trolley in a messy supermarket car park, then you have fallen under its subtly destructive spell and you have only your subconscious to blame.
The effect takes its name from Robert Cialdini, an American psychology professor who wrote a groundbreaking book called Influence: The Psychology of Persuasion. This was no pap psychology book; it was, appropriately enough, a highly influential work that continues to shape social psychology, that mesmerising scientific discipline which examines the sometimes irrational way we behave in our relationships with others. Cialdini showed, among other things, that people do what they see others doing, even when they know they shouldn't.
The looting of the MSC Napoli off the coast of Devon two years ago is a perfect example. Media coverage showing people walking off with items washed ashore emboldened others to try their luck, culminating in "looting mayhem", in the words of an inquiry into the incident published this week. The lack of an "all-powerful commander", such as a police chief - whose presence would have reinforced the message that "salvaging" amounted to stealing - was blamed for the chaos.
Now a Dutch study has shown that the Cialdini effect is only the start of our troubles. People can actually be steered into criminal behaviour, such as stealing, simply by tinkering with their environment. In fact, the scientists claim, if you know what psychological buttons to press, you can make antisocial behaviour spread like a contagious disease. The paper, which has gone virtually unnoticed beyond the academic community, should be read by anyone who cares how and why people disobey the rules of civil society.
It seems common sense that a litter-strewn, graffiti-spattered environment will suffer more petty criminality than a pristine one. This is the nub of the broken windows theory espoused by James Wilson and George Kelling in 1982: disorder begets disorder. But, surprisingly, it has never been proven beyond reasonable doubt, because other factors such as policing levels have fogged the picture. Nonetheless, the concept has been whipped up into the idea of zero tolerance policing, in which the stamping out of minor infractions such as graffiti is believed to deter other criminality.
And so Kees Keizer and a team of behavioural scientists from the University of Groningen designed some experiments that could settle the matter, all to be conducted secretly on Dutch streets. In the first set-up, they chose an alley near a shopping centre where people park their bikes. In the middle of the alley stood a large No Graffiti sign. Dr Keizer's team looped flyers over the bikes' handlebars; any cyclist would need to remove the flyer before pedalling away. Given there were no rubbish bins, would the cyclists take their litter home, or drop it on the ground? The scientists took up their spying positions, and waited.
When the alley walls were clean, 33 per cent of cyclists dropped the flyer on the pavement or put it on another bike (both counted as littering). When the scientists added graffiti and repeated the experiment on another day, 69 per cent of the cyclists littered, a far bigger difference than would be expected by chance. Could it be possible that one sign of disorder, graffiti, was triggering another undesirable behaviour, littering?
So they tested the theory another way, this time in a supermarket car park and using flyers shoved under windscreen wipers. When the car park was tidy, with all the shopping trolleys put away, 30 per cent dropped the flyers on the ground. When the car park looked chaotic, with four shopping trolleys strewn around (their handles smeared with petroleum jelly to deter shoppers from grabbing them and thus ruining the experiment), 58 per cent littered.
Despoiling the environment is one thing; stealing quite another. Dr Keizer's team left an envelope hanging out of a postbox; the stamped and addressed envelope had a window through which could clearly be seen a five-euro note. How would passers-by, or those posting a letter, react when they saw it? The vast majority (87 per cent) either left it alone, or pushed it into the postbox. Only 13 per cent took it away (this was regarded as stealing).
But roughing up the environment had a dramatic effect. When the postbox was tagged with graffiti, 27 per cent of people stole the letter. When the postbox was surrounded by rubbish (but not graffitied), 25 per cent pocketed the cash.
The academics, who reported their startling results last month in Science, suggest that disorder does indeed beget disorder; when one social or legal norm is obviously violated, we are tempted to loosen our grip on others. Or, as Dr Keizer writes in the more precise language of psychology: "The most likely interpretation of these results is... that one disorder (graffiti or littering) actually fostered a new disorder (stealing) by weakening the goal of acting appropriately... The mere presence of graffiti more than doubled the number of people littering and stealing."
Exactly why our capacity to act honourably melts away in nasty settings is a mystery. Dr Keizer speculates that, when the instinct to act appropriately is pushed to one side, competing instincts - such as to do what feels good or to give in to greed - take over. If we can see that bad behaviour has gone unpunished, perhaps we feel that our own lapses will go uncensured.
Whatever the reason, the implications for policy are clear. Slapdashery in the environment breeds slapdashery in behaviour, and small transgressions can lead to bigger ones. A community left in squalor will, we can speculate, eventually see its social norms dramatically lowered.
If you are so inclined, you can summon supporting evidence. Remember the media portraits of Dewsbury Moor, the unhappy setting for the abduction of the schoolgirl Shannon Matthews? This was no Yorkshire idyll: journalists found sink estates peopled by the unemployed and single mothers, where children are raised by a shifting cast of stepfathers against a lapping tide of low-level lawlessness. The missing girl was found to have vanished at the hands of her own mother, who hoped to collect a reward.
Andrew Norfolk wrote of the place, for this paper, as "a bleak mix of pebbledash council blocks and neglected wasteland... one's attention is all too easily distracted by the rubbish-strewn gardens, the smashed windows, the discarded broken toys".
Perhaps we shouldn't underestimate the power of a garden rake and a good glazier.
Rage
Stuck in a jam as I was approaching a roundabout, I gazed idly out of the window. A car beeped behind. In my daze I'd not noticed that the line of traffic had advanced. I caught up with the queue and as I reached the junction the beeper pulled level, his face gargoyled with rage. "You stupid c***!" he screamed in my face.
As he careered off, adrenaline kicked in. For a second I considered pursuit, barging his Audi estate into the kerbside, leaping out Grand Theft Auto-style and then I'd . . . what? Kill him with a single deft blow? Rub him out with my Walther PPK? Instead I continued on a mission to the charity shop with my bin-bags of old tat.
But the incident left me oddly shaken. His obscene fury was so disproportionate to my offence. I hadn't rashly pulled out, frightened or endangered him. I had merely delayed his progress by nanoseconds. Not even that, since I was still locked in a queue.
Sometimes London life seems built upon a thin and fragile crust through which a bubbling magma of anger could, at any moment, blow. Which is what happened in a baker's shop a few miles from here last week when Jimmy Mizen, out buying sausage rolls with his brother, refused a challenge to a fight and instead had his throat cut with a shard of glass. And then in McDonald's on Oxford Street on Monday when a row over a thrown drink ended with a man bleeding to death on the pavement, a knife in his heart.
When yet another young man dies, I scan the reports for words that will afford me some solace: gang slaying, feud, grudge, crack house, sink estate, 2am, drug-related, excessive alcohol . . . These words make me feel a little safer. They largely have nothing to do with my life. I can, I tell myself, protect my sons from these words. But when Jimmy's mother, Margaret Mizen, said "it was anger that killed my son", I know I am powerless. Because anger is unconfined: it lurks in the middle of the day, in public places; it erupts between total strangers. Anger turns a random encounter into deadly violence.
"There is too much anger in the world," said Mrs Mizen. There is certainly too much in London. A friend, trying to cross a road, was hit on the shoulder by the wing mirror of a passing van: it deliberately swerved to wallop her. A guy at my gym says that out cycling he slapped the face of a delivery driver who'd honked at him. Aghast, I say he could have been stabbed, but he just makes a defiant, macho bring-it-on gesture, then admits he sped off when the driver began reaching inside his glove compartment.
A study by the Mental Health Foundation found that a quarter of us worry about how angry we feel. And yet just what are we angry about, with lives of unprecedented safety, surplus and comfort? I have always marvelled at the grumpiness of guests in luxury resorts: after a short time being waited upon in paradise, having flunkies pick up damp towels, one's mood can be ruined by a deckchair being positioned at the wrong angle to the sun, a drink's insufficient chill. Similarly, with our basic needs more than satisfied and our homes piled with consumer goodies, like brattish heiresses we rail against the slightest irritation.
I spend a ludicrous amount of my life angry about nothing much. Usually casual public thoughtlessness: mothers blocking small shops with their humungous £500 prams, nurses addressing dignified elderly ladies by their first names or, in my eco-wrath, anyone buying cases of bottled still water. Or brand-new arbitrary regulations imposed seemingly to irritate and confound, such as Tesco's policy of banning parents from buying booze if accompanied by children.
Why do these things rile me? Because the world seems beyond control, the old certainties gone. Or am I just getting old? The anger management industry would, of course, have it that we are in need of their expensive ministrations. But are we really more angry or do we just express it more?
To lose one's temper is no longer to be diminished or shamed; it is a sign of emotional health rather than a dearth of reason. All anger is righteous now. It is conflated with drive, passion, energy, a means to effect progress. Gordon Ramsay - whose confected ire is almost unwatchable - every week says goodbye to his F-Word celebrity guest with the catchphrase "Now f*** off out of my kitchen!" and we're supposed to be endeared by his rough-diamond charm.
Anger becomes such a reflexive response that you do not realise how much it has penetrated your soul until you travel. Even New York seems less brimming with outrage, a collision in a crowd more likely to spark a "pardon me" than a glower. Visiting Australia, I heard a news item in which an educational survey had found modern Oz children the most illiterate and stupid ever. In Britain such a report would have provoked weeks of self-flagellating fury: Australia shrugged and headed for the beach.
Last summer in Slovenia, Europe's most easy-going state, I was walking with my son past a line of cars when one started to reverse right at us. My London self banged hard on the back of the vehicle and made a furious hand gesture. The passengers in the car slowly turned, their eyes wide, their mouths agape at the crazy lady. "Mum," said my son. "That was way too angry."
Yes, I was London angry: the sense that everyone is out to shaft you, nip into your parking place, rip you off, frustrate your efforts to get home, grind you into the tarmac. Anger is the sound of entitlement, the urge to have your existence acknowledged. And for the young and poor and reckless, anger voices their lack of power, control, self-esteem. And, since it will swiftly meet the anger of others, it must be armed with fists and knives, guns and hard dogs.
Anger is a buzz, an addiction. Clearly we were designed for more than our modern functions. We are healthier, stronger, better fed and educated than any humans yet born. And yet we are the most underchallenged. Here we are, creatures capable of building cathedrals, surviving trench warfare or traversing oceans, wandering dead-eyed around B&Q. "People need to find peace, not anger," said Mrs Mizen.
But alas "going off on one" - about Iraq, Cherie Blair, the tall, sweet boy in the bakery or the dozy woman driver in front - is the only time some people feel briefly and iridescently alive.
How You See Others
How positively you see others is linked to how happy, kind-hearted and emotionally stable you are, according to new research by a Wake Forest University psychology professor.
"Your perceptions of others reveal so much about your own personality," says Dustin Wood, assistant professor of psychology at Wake Forest and lead author of the study, about his findings. By asking study participants to each rate positive and negative characteristics of just three people, the researchers were able to find out important information about the rater's well-being, mental health, social attitudes and how they were judged by others.
The study appears in the July issue of the Journal of Personality and Social Psychology. Peter Harms at the University of Nebraska and Simine Vazire of Washington University in St. Louis co-authored the study.
The researchers found a person's tendency to describe others in positive terms is an important indicator of the positivity of the person's own personality traits. They discovered particularly strong associations between positively judging others and how enthusiastic, happy, kind-hearted, courteous, emotionally stable and capable the raters described themselves as being, and how they were described by others.
"Seeing others positively reveals our own positive traits," Wood says.
The study also found that how positively you see other people shows how satisfied you are with your own life, and how much you are liked by others.
In contrast, negative perceptions of others are linked to higher levels of narcissism and antisocial behavior. "A huge suite of negative personality traits are associated with viewing others negatively," Wood says. "The simple tendency to see people negatively indicates a greater likelihood of depression and various personality disorders." Given that negative perceptions of others may underlie several personality disorders, finding techniques to get people to see others more positively could promote the cessation of behavior patterns associated with several different personality disorders simultaneously, Wood says.
This research suggests that when you ask someone to rate the personality of a particular coworker or acquaintance, you may learn as much about the rater providing the description as about the person being described. The level of negativity the rater uses in describing the other person may indeed indicate that the other person has negative characteristics, but it may also be a tip-off that the rater is unhappy, disagreeable, neurotic -- or has other negative personality traits.
Raters in the study consisted of friends rating one another, college freshmen rating others they knew in their dormitories, and fraternity and sorority members rating others in their organization. In all samples, participants rated real people, and the positivity of their ratings was found to be associated with the participants' own characteristics.
By evaluating the raters and how they evaluated their peers again one year later, Wood found compelling evidence that how positively we tend to perceive others in our social environment is a highly stable trait that does not change substantially over time.
Precommitment
"Love is the only thing that can save this poor creature," Gene Wilder grandly declares to his assistants in Young Frankenstein as he commands them to lock him in a room with his monster. "And I am going to convince him that he is loved even at the cost of my own life. No matter what you hear in there, no matter how cruelly I beg you, no matter how terribly I may scream, do not open this door or you will undo everything I have worked for. Do you understand? Do not open this door."
Think about that for a minute. Dr. Frankenstein goes into the room telling his aides to ignore what he's going to say once he's inside. He knows he will want to come out, so he enlists others to help him subordinate his own later wishes to the ones he has right now, which he apparently prefers.
There's a name for this sort of thing. In the quietly sizzling field of self-control studies, it's called precommitment, because it involves acting now against the strength of some later desires, either by taking certain options off the table or by making them prohibitively costly.
Precommitment doesn't just happen in movies. For years the economists Dean Karlan and John Romalis kept their weight down by means of a clever pact. Karlan and Romalis knew a little something about incentives, so they struck a deal: Each would have to lose 38 pounds in six months or forfeit half his annual income to the other. If both failed, the one who lost less would forfeit a quarter of his income. They lost the weight and generally kept it off, although, at one point, Romalis's weight popped back up over the limit and Karlan actually collected $15,000 from his friend. He felt he had no choice. He felt he had to take the money to maintain the credibility of their system, without which they'd both get fat.
Precommitment works, which is why Karlan, now a professor at Yale, set out to make it available to the world via stickK.com, the Internet's precommitment superstore. Karlan's venture enables any of us to contractually control our own actions or, if we violate the agreement, face a penalty we've chosen. Theoretically, it could make a Trollope of the most recalcitrant writer, allowing him to impose on himself the wanted law that cannot be disobeyed. Despite its nerdy origins, the site has a rakish motto: "Put a contract out on yourself!"
The concept is fiendishly simple. StickK.com (the second K is from the legal abbreviation for contract, although baseball fans will detect a more discouraging connotation) lets you enter into one of several ready-made binding agreements to lose weight, quit smoking, or exercise regularly, among other things. You can also create your own agreement, which many of the site's 100,000 registered users have done. You specify the terms (say, a loss of one pound per week for 20 weeks), put up some money, and provide the name of a referee if you want one to verify your results. Whenever you fail, stickK.com gives some of your money to a charity or friend that you've chosen. Whether you fail or succeed, stickK.com never keeps your money for itself aside from a transaction fee.
If you want a sharper incentive, you can even pick an individual enemy or an organization that stickK.com calls "an anti-charity." Democrats, for instance, might find it especially motivating to know that if they fail to live up to a binding personal commitment on stickK.com, some of their hard-earned money will go to the George W. Bush Presidential Library. Anti-charities apparently are highly motivating; stickK.com says they have an 80 percent reported success rate. "All stickK is doing," Karlan told me, "is raising the price of bad behavior - or lowering the cost of good behavior."
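For readers who think in code, the shape of such a contract can be sketched in a few lines. The following is purely illustrative - it is not stickK.com's actual data model or API, and the field names and settlement rule are my own assumptions drawn from the description above.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CommitmentContract:
    """Illustrative model of the kind of contract described above.

    Not stickK.com's real data model or API; the fields and the settlement
    rule are assumptions sketched from the article's description.
    """
    goal: str               # e.g. "lose one pound per week for 20 weeks"
    stake: float            # money the user puts up
    referee: Optional[str]  # optional verifier of the results
    recipient: str          # charity, friend or "anti-charity" paid on failure

    def settle(self, goal_met: bool) -> float:
        """Return how much of the stake is forfeited for this period."""
        return 0.0 if goal_met else self.stake


# Example: $100 staked against the anti-charity mentioned in the text.
pact = CommitmentContract(
    goal="run three times a week",
    stake=100.0,
    referee="a trusted friend",
    recipient="George W. Bush Presidential Library (anti-charity)",
)
print(pact.settle(goal_met=False))  # 100.0 -> the stake goes to the recipient
```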
What's especially appealing about ventures like stickK.com is not just that they give us the tools to constrain ourselves but that they are voluntary. We are fortunate to live in a time when the biggest problem that many of us face is coping with our own appetites in the face of freedom and affluence. Inevitably our failures - bankruptcy, obesity - bring calls for government to protect us from ourselves. But there are ways we can protect ourselves from ourselves without trampling the rights of others.
Consider exercise. It's good for you, and people want to be healthy and attractive. So lots of us join gyms - and then don't use them, which is why the places get a lot less crowded after January, when the New Year's resolutions start to peter out. The membership fee is just the cost of our good intentions. The real expense is the time and effort required to work out.
Enter Gym-Pact, a clever Boston venture cooked up by a couple of recent Harvard grads. Gym-Pact gives participants a cut-rate membership. The catch is, you have to specify in advance how many times a week you'll show up and how much extra you'll pay for each missed day. In effect, Gym-Pact helps reallocate the cost of exercise to idleness.
Precommitment can be especially helpful when it comes to bad habits, including substance abuse. In the movie Tropic Thunder, one of the characters is a heroin addict who runs out of his drug while making a movie in the jungle. When a jungle drug-making operation is discovered, he gets one of his colleagues to tie him to a tree so he won't succumb to temptation. Soon enough, of course, he is pleading to be untied, just as Gene Wilder was pleading for his helpers to open the door.
Sound familiar? It should. History's first known episode of precommitment occurs in The Odyssey, when Odysseus and his men are sailing home from the Trojan War. He has been warned about the Sirens, whose seductive song leads sailors to destruction, but he wants to hear it anyway. So he gives his men earplugs and orders that they tie him to the mast, ignoring all subsequent pleas for release until they are safely past the danger.
The Odyssey is all about the management of desire, and the famous wiliness of its hero is on full display in this episode. Odysseus essentially invents precommitment to inoculate himself against his own predictable (and potentially fatal) future desires. A lesser man might have relied on willpower alone, but Odysseus knew that no one is immune to temptation.
Precommitment and the Poor
Dean Karlan had spent a good deal of time thinking about precommitment before launching stickK.com, especially in conjunction with his other great interest, Third World finance. A few years back, he and colleagues from Harvard and Princeton set out to investigate whether people would freely choose a precommitment device to help them save, and if so whether it would make much of a difference.
They designed an elegant experiment that produced fascinating results, which they recorded in a 2006 Quarterly Journal of Economics paper titled "Tying Odysseus to the Mast: Evidence From a Commitment Savings Product in the Philippines." They carried out their project on the island of Mindanao, in partnership with a rural financial institution there known as the Green Bank. The professors first surveyed 1,777 current or former customers of the bank to assess how good they were at deferring gratification. The surveys asked such questions as, "Would you prefer to receive 200 pesos guaranteed today, or 300 guaranteed in one month?" And equally important: "Would you prefer to receive 200 pesos guaranteed in six months, or 300 guaranteed in seven months?"
Customers who chose the sooner, smaller reward in answer to the first question but the larger, later reward in response to the second were deemed likely to have self-control problems. The researchers offered 710 of these individuals a new kind of savings account called Save, Earn, Enjoy Deposits, or SEED. These special accounts offered the standard 4 percent interest, with a single catch: Withdrawals weren't allowed until either an agreed-upon date or sum was reached. (Almost all the savers chose a date rather than a sum, since failing to accumulate the latter could mean their savings were locked away indefinitely.)
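The screening rule buried in that survey design is simple enough to write down. Here is a minimal sketch in Python, purely illustrative (the function and variable names are mine, not the researchers'), of how the two answers identify someone who reverses preferences over the same one-month delay.

```python
def likely_self_control_problem(chose_200_today: bool,
                                chose_200_in_six_months: bool) -> bool:
    """Flag the preference reversal the survey screened for.

    chose_200_today: picked 200 pesos today over 300 pesos in one month.
    chose_200_in_six_months: picked 200 pesos in six months over 300 in seven.

    Someone who is impatient when the smaller reward is immediate, yet patient
    when both rewards sit in the future, is reversing preferences over the
    very same one-month wait - the pattern taken to signal a self-control problem.
    """
    return chose_200_today and not chose_200_in_six_months


print(likely_self_control_problem(True, False))   # True  -> in the group offered SEED accounts
print(likely_self_control_problem(False, False))  # False -> consistently patient
```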
Some 202 self-aware individuals, or 28 percent of those receiving the SEED account offer, accepted - a group that skewed somewhat female. And 83 percent of SEED enrollees also bought a ganansiya box from the bank. This is like a piggy bank with a lock, except that the bank holds the key; savers accumulate small sums by putting a peso or two into a box when they can. It's a poor man's precommitment device, in this case one that mirrors, on a small scale, the design of the SEED accounts.
Karlan and his colleagues found that SEED worked for the participants. After just a year, SEED account holders had increased their savings by a remarkable 81 percent. It was a modest experiment, but it showed that giving people the opportunity to precommit can help them rapidly accumulate capital, even if they don't have much income. The experiment also showed that many people with self-control problems know they have them. The SEED account participants mostly knew themselves well enough to purchase ganansiya boxes. This kind of self-knowledge isn't uncommon among the Third World poor. Daryl Collins, a consultant on Third World finance and a former lecturer at the University of Cape Town, reports that poor South Africans sometimes rely on money guards - "a neighbor or relative or friend that you trust and say, 'Hold this, and don't let me touch it,' " she explains. "Sometimes the same money guard asks you to hold their money, and so when someone comes to borrow money, you say, 'It's not my money.' It works."
Back in the 1990s, when it was suggested that early-withdrawal penalties might be discouraging Americans from saving more in retirement accounts, a survey found that 60 percent of us wanted to maintain the restrictions; only 36 percent favored making it easier to tap retirement savings early. Why such a lopsided result? I think it's because people understood how susceptible they would be to the temptation to crack open their own nest eggs. They wanted the barrier left in place to keep themselves away.
I'm not surprised. I remember my mother, in the 1960s, dutifully making regular deposits into a Christmas club account at the local bank. On the surface, Christmas clubs make no sense. You have to make regular deposits - I seem to recall my mother having something like the kind of payment book you might get with a car loan - and you receive little or no interest. Most amazing of all, the bank won't let you have your money back until December. But of course this was the reason my mother signed up; the arrangement forced you to save, and it kept your savings out of your hands.
I did something similar when I worked at a big newspaper and signed up for automatic payroll deductions, with the money going into my credit union savings account. Then every time I got a raise, I raised the savings deduction by the same amount. My lifestyle never expanded with my income, but I did build up a pile of cash. I had colleagues who used the government's withholding of income taxes the same way. Those unfamiliar with this technique may not know that you have some discretion about how much Uncle Sam withholds from your paycheck; if you have a mortgage, kids, and other significant deductions, you should reduce the withholding to match what you'll ultimately owe, since the government won't pay you interest while it has your overpayments. On the other hand, you can't access the withheld money until you file your taxes - after which you'll get a nice, big refund. Think of the lost interest as a modest service charge, well worth it to people who know they might not save any other way.
Self-control sophisticates use the tools that happen to be at hand, as is apparent from the urban numbers racket. If you know how the lottery works, you understand the numbers game, except that the latter offers better odds.
I grew up around people who played the numbers. They'd wager 25 or 50 cents with a bookie on some three-digit number based on a dream or a birthday or some other likely premise, and if the number came in, they'd win. The daily number was always taken from some objective source that was ostensibly beyond manipulation; it might have been the last three digits of the day's take at Aqueduct, for example, or of the trading volume on the New York Stock Exchange. Like many people who buy lottery tickets, many numbers players play for entertainment.
But back in the 1970s, the sociologist Ivan Light looked at numbers gambling in Harlem and saw not a diversion or even "a tax on stupidity" (the derisive term economists use for state-run lotteries) but a functioning financial system - and an effective precommitment device to help people save. What outsiders didn't seem to understand was that Harlem residents didn't trust, and weren't well served by, banks. The so-called numbers racket, illegal though it may have been, partially filled this vacuum.
First, remember that the winning number is always just three digits, 000 through 999, so the odds of winning are a far-from-astronomical 1 in 1,000. And while the pot never contains millions, a winner who bet $1 might clear $500 after the customary 10 percent tip to the runner, who carries the loot back and forth. (Naturally, no taxes are paid.)
How did this add up to a savings plan? Survey data showed that the players were persistent, with nearly 75 percent playing two or three times a week and 42 percent playing daily, for years on end. In other words, they acted something like long-term investors. And they were likely to get back $500 for every thousand bets of $1 each. That may not seem like much of a return on investment, but bear in mind that many players bet with quarters, a sum that even among the poor tends to vanish unaccountably. They got some hope. They couldn't raid their "savings" until they won. And their money also bought convenience: Numbers runners made house calls, and these visits no doubt helped people keep playing.
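To make the arithmetic explicit, here is a quick back-of-the-envelope sketch (mine, not Light's) using only the figures quoted above: a 1-in-1,000 chance of winning and roughly $500 cleared on a winning $1 bet after the runner's tip.

```python
# Back-of-the-envelope arithmetic for the numbers game, using the figures above.
bets = 1000                     # a long run of $1 wagers
total_staked = bets * 1.00      # $1,000 put in over time
p_win = 1 / 1000                # three digits, 000 through 999
payout_per_win = 500            # roughly what a $1 winner clears after the tip

expected_wins = bets * p_win                    # about one win per 1,000 bets
expected_back = expected_wins * payout_per_win  # about $500 returned

print(expected_back)                 # 500.0
print(expected_back / total_staked)  # 0.5 -> about half the stake comes back as a lump sum
```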
In some poor neighborhoods of India, "deposit collectors" perform the same function. The collector gives a would-be saver a card imprinted with a grid of 220 cells, and the customer commits to handing over, say, five rupees a day, filling one cell at a time. At the end, the saver gets back 1,100 rupees, less 100 rupees for the collector's fee. Savers are happy to live with this negative interest rate in exchange for the convenience and for the commitment device.
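Worked through, the collector's implied (negative) interest rate falls straight out of the figures in the paragraph above; the sketch below is mine, for illustration only.

```python
# The deposit-collector card, using the figures above: 220 cells at five rupees
# each, with a 100-rupee fee taken out of the total when the card is complete.
cells = 220
rupees_per_cell = 5
paid_in = cells * rupees_per_cell   # 1,100 rupees deposited
fee = 100
paid_out = paid_in - fee            # 1,000 rupees returned to the saver

implied_rate = (paid_out - paid_in) / paid_in
print(paid_in, paid_out)            # 1100 1000
print(round(implied_rate, 3))       # -0.091 -> roughly a 9 per cent negative return
```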
In Harlem, numbers players also knew their money was supporting black enterprise, local jobs, and a certain amount of neighborhood investment. But most of all, sooner or later you had a large sum of money to look forward to - and no control over when it would arrive.
"Most gamblers understand their numbers betting as a means of personal saving," Light reported, adding: "The bettor's justification for this seemingly preposterous misconception arises from unsatisfactory experiences with depository savings techniques. Once a numbers collector has a man's quarter, they aver, there is no getting it back in a moment of weakness. If, on the other hand, the quarter were stashed at home, a saver would have to live with the continuing clamor of unmet needs. In a moment of weakness, he might spend the quarter. Therefore, in the bettor's view, the most providential employment of small change is to bet it on a number."
Precommitment and Paternalism
StickK.com has come along at a time of renewed interest in paternalism. A number of people, most prominently the University of Chicago economist Richard Thaler and the Harvard legal scholar Cass Sunstein, have suggested that institutions should help people make better choices by means of more thoughtful "choice architecture." (Sunstein currently serves as administrator of the federal Office of Information and Regulatory Affairs.) At company cafeterias, for instance, the fruits and vegetables might be displayed more prominently and priced more attractively than desserts so that people will be more likely to pick healthier items. The idea is not to mandate behavior but to present choices so that the indisputably better option is more likely to be selected.
The classic example is the movement in business to automatically enroll employees in a 401(k) plan, with the right to opt out. This is the opposite of the traditional approach, which relies on employees to opt in. It turns out that human beings have a strong status quo bias, which is a fancy way of saying inertia is a powerful force in people's lives. In a study published in 2001, for instance, Brigitte Madrian and Dennis Shea found at one company that sign-ups among new hires rose to 86 percent from 49 percent after automatic enrollment was adopted. Reversing the default condition, which cost nothing and constrained nobody, thus significantly boosted the retirement prospects of a great many employees.
StickK.com lets people do this sort of thing for themselves. It's a place where they can act of their own volition to make themselves adhere to their second-order preferences - that is, their preferences about preferences. You may like to smoke cigarettes, for example, but you may also prefer not to have that preference. And your rational allegiance is to your second preference, the one that lets you avoid lung cancer and the other problems of smoking. The beauty of stickK.com is that it lets people decide for themselves which longer-term goals they embrace, in effect by becoming their own paternalists. And it gives them the means to enforce their own second-order desires, just as people do when they have their stomachs stapled or their jaws wired to constrain their eating. As Vito Corleone might have put it, stickK.com wants you to make yourself an offer you can't refuse.
So how can each of us be our own godfather? The answer is to shuck the naivete of the untutored in favor of a more sophisticated approach to ourselves and our intentions. That means, first, relying as little as possible on willpower in the face of temptation. It's much better, like Odysseus, to row right past the cattle of the sun god than to count on controlling the hunger that could lead to a fatal barbecue. It also means acknowledging how much we are influenced by our surroundings - and taking command of our environment so that it influences us in ways we prefer. Most important of all, a more sophisticated approach means recognizing that we cannot honor our best intentions by ourselves. If we are to take control of our own destiny in a world of such unprecedented freedom and abundance, we have no choice but to enlist the help of others - not just family but friends, colleagues, and community. The only hope, in short, is to do all that we can to have ourselves tied to the mast of our own intentions.
Yet there are times when we might conclude that someone so dependent on precommitment actually lacks self-control. A fat person who has his jaws wired shut in order to slim down, for example, signals to all that he couldn't control his eating without resort to artifice. Jon Elster has observed that, when it comes to booze, many societies have norms against both drunkenness and abstinence. Sydney Greenstreet, pouring a drink in The Maltese Falcon, puts the point neatly: "I distrust a man who says 'when.' If he's got to be careful not to drink too much, it's because he's not to be trusted when he does." Regardless of the signals it sends, committing yourself - irrevocably, if you can - to your best intentions is the most powerful weapon available in the war for self-command.
Hell Is Not Other People
Inhibition often begins with the sense that somebody is watching; experiments have demonstrated that simply installing a mirror makes people behave more honestly when, for example, they pick up a newspaper and are supposed to leave their money on the honor system. Mirrors also seem to diminish stereotyping, promote hard work, and discourage cheating. In one study of children, the mere presence of a mirror reduced the stealing of Halloween candy by more than 70 percent. You can think of other people as human mirrors. "Our friends and relatives," the psychologist Howard Rachlin writes, "are essential mirrors of the patterns of our behavior over long periods - mirrors of our souls. They are the magic 'mirrors on the wall' who can tell us whether this drink, this cigarette, this ice cream sundae, this line of cocaine, is more likely to be part of a new future or an old past. We dispense with these individuals at a terrible risk to our self-control."
Human relationships are vital in many ways, but in the self-control arena we are as dependent on them as Odysseus was on his crew, for we simply cannot bind ourselves to our own wills without other people. Participants in some 12-step programs have sponsors they can call upon when the will weakens, and even stickK.com encourages users to name a referee who can attest to whether they've met their goals. While loneliness subverts self-control, community can promote it in various ways, not least by minimizing social isolation and establishing norms. Communities are also social information systems, and being known in one is surely a moderating force, because reputations are valuable. Communities can reward with esteem and punish by turning a cold shoulder. You can use this knowledge against yourself. If you make New Year's resolutions, for example, you're much better off telling everyone about them, even putting them on a blog. Once this is done, you'll be much more likely to uphold them, since your reputation will be at stake.
If you're serious about living up to your second-order preferences, then the truly radical approach is to treat yourself like a moderately sophisticated lab rat. People often do so instinctively by promising themselves a certain reward - opening the good wine, buying a new dress, taking a vacation in Hawaii - when a certain goal is met. But self-rewards can be tricky without appointing someone else to bestow or withhold the prize. If you don't mind treating yourself like a lab rat, friends and family members can be a big help.
If you want to make a difficult but enduring change, announce it (to yourself and others) well in advance; an engagement period is always useful in getting one's intended to the altar. It's not by chance that the military allows enlistees a period of time between signing and induction. One study of charitable giving found that it rose when a delay was permitted between pledging and giving. Another study found that the longer in advance people ordered groceries, the less they spent and the healthier their choices were.
On the other hand, since speed and proximity kill self-control, it pays to keep a buffer of time and space between you and the most dubious gratifications. The Wall Street Journal cites the example of Scott Jaffa, a systems administrator in suburban Washington, D.C., who "destroyed the online access code for his 401(k) so he could no longer have instant access to his retirement accounts. His goal was to make it 'significantly harder' and to require 'human interaction' before he could trade on his own emotions." This act of precommitment helped him endure some stomach-churning stock market declines without taking any harmful action.
Homer and Ned
In America, it sometimes seems, you are either Homer Simpson or Ned Flanders. Homer is a slave to his appetites most of the time, although his fat-clogged heart is in the right place, while his neighbor Ned is a paragon of self-control, never letting his temper get the better of him even for a moment - but only because he's in thrall to a cult-like evangelism. Both men seem to be missing a fully functioning will.
I go back and forth when I think about which I'd rather be. Homer is selfish, shortsighted, flabby, and dumb, finding consolation in a bucket of fried chicken with extra skin. Ned is nicer and better-looking, has better-behaved kids, and runs his own business, yet there is something awful about him too. The basis of his good life seems contrived, even prefabricated, and his relationship to choice efficient but somehow stunted.
The third alternative is to decide for ourselves which of our preferences we like and then defend them against the importuning of those we do not. In the philosopher Harry Frankfurt's formulation, this is what makes you a person; the alternatives are submitting blindly to impulse, like Homer, or submitting blindly to some power outside yourself, like Ned.
Faced with these options, we find ourselves once again in the position of Odysseus, who must navigate between Scylla and Charybdis as part of his long and difficult journey home. But while we don't have much say over the desires we have, we certainly can decide which we prefer and then search for ways to act on that basis. Self-regulation will always be a challenge, but if somebody's going to be in charge, it might as well be ourselves.
What We Can Learn From Monkeys
Getting hit can cause you to change your behavior, but it does not always teach you why.
The Monkey Story
The experiment involved five monkeys, a cage, a banana, a ladder and, crucially, a water hose.
The five monkeys would be locked in a cage, after which a banana was hung from the ceiling with, fortunately for the monkeys (or so it seemed...), a ladder placed right underneath it.
Of course, immediately, one of the monkeys would race towards the ladder, intending to climb it and grab the banana. However, as soon as he started to climb, the sadist (euphemistically called "scientist") would spray the monkey with ice-cold water. In addition, he would also spray the other four monkeys...
When a second monkey was about to climb the ladder, the sadist would, again, spray the monkey with ice-cold water, and apply the same treatment to its four fellow inmates; likewise for the third climber and, if they were particularly persistent (or dumb), the fourth one. Then they would have learned their lesson: they were not going to climb the ladder again - banana or no banana.
In order to gain further pleasure or, I guess, prolong the experiment, the sadist outside the cage would then replace one of the monkeys with a new one. As can be expected, the new guy would spot the banana, think "why don't these idiots go get it?!" and start climbing the ladder. Then, however, it got interesting: the other four monkeys, familiar with the cold-water treatment, would run towards the new guy - and beat him up. The new guy, blissfully unaware of the cold-water history, would get the message: no climbing up the ladder in this cage - banana or no banana.
When the beast outside the cage would replace a second monkey with a new one, the events would repeat themselves - monkey runs towards the ladder; other monkeys beat him up; new monkey does not attempt to climb again - with one notable detail: the first new monkey, who had never received the cold-water treatment himself (and didn't even know anything about it), would, with equal vigour and enthusiasm, join in the beating of the new guy on the block.
When the researcher replaced a third monkey, the same thing happened; likewise for the fourth until, eventually, all the monkeys had been replaced and none of the ones in the cage had any experience or knowledge of the cold-water treatment.
Then, a new monkey was introduced into the cage. It ran toward the ladder only to get beaten up by the others. Yet, this monkey turned around and asked "why do you beat me up when I try to get the banana?" The other four monkeys stopped, looked at each other slightly puzzled and, finally, shrugged their shoulders: "Don't know. But that's the way we do things around here"...
What We Can Learn From Robots
What can a wide-eyed, talking robot teach us about trust?
A lot, according to Northeastern psychology professor David DeSteno and his colleagues, who are conducting innovative research to determine how humans decide to trust strangers -- and if those decisions are accurate.
The interdisciplinary research project, funded by the National Science Foundation (NSF), is being conducted in collaboration with Cynthia Breazeal, director of the MIT Media Lab's Personal Robots Group, Robert Frank, an economist, and David Pizarro, a psychologist, both from Cornell.
The researchers are examining whether nonverbal cues and gestures could affect our trustworthiness judgments. "People tend to mimic each other's body language," said DeSteno, "which might help them develop intuitions about what other people are feeling -- intuitions about whether they'll treat them fairly."
This project tests their theories by having humans interact with the social robot, Nexi, in an attempt to judge her trustworthiness. Unbeknownst to participants, Nexi has been programmed to make gestures while speaking with selected participants -- gestures that the team hypothesizes could determine whether or not she's deemed trustworthy.
"Using a humanoid robot whose every expression and gesture we can control will allow us to better identify the exact cues and psychological processes that underlie humans' ability to accurately predict if a stranger is trustworthy," said DeSteno.
During the first part of the experiment, Nexi makes small talk with her human counterpart for 10 minutes, asking and answering questions about topics such as traveling, where they are from and what they like most about living in Boston.
"The goal was to simulate a normal conversation with accompanying movements to see what the mind would intuitively glean about the trustworthiness of another," said DeSteno.
The participants then play an economic game called "Give Some," which asks them to determine how much money Nexi might give them at the expense of her individual profit. Simultaneously, they decide how much, if any, they'll give to Nexi. The rules of the game allow for two distinct outcomes: higher individual profit for one and loss for the other, or relatively smaller and equal profits for both partners.
"Trust might not be determined by one isolated gesture, but rather a 'dance' that happens between the strangers, which leads them to trust or not trust the other," said DeSteno, who, with his colleagues, will continue testing their theories by seeing if Nexi can be taught to predict the trustworthiness of human partners.
Types of Humour
Funny ha ha, funny pathetic, funny peculiar. How often are people landed in hot water by their supposed sense of humour? Among the many unpleasant things we have inherited from America are humourless, litigious, killjoy behavioural requirements. "It was meant as a joke" doesn't sound too good in court or during the cold in-house inquiry. How many people have been punished after an off-hand double entendre?
What is the function of humour? Which types of humour are acceptable and which verboten?
Why do some people enjoy aggressive or sexual humour, while others prefer black or intellectual humour? Is personality related to humour creation - such as telling jokes or making puns? Do people who make us laugh have different personality traits to those who cannot?
Some psychologists have been particularly interested in the social function of humour. The fascination is in how humour can generate a sense of group solidarity, provide a safety valve for dealing with group pressure, and help individuals cope with threatening experiences.
Humour creativity seems unrelated to humour appreciation. The former is concerned with perceiving and describing people, objects or situations in an incongruous way. The latter is the enjoyment of these descriptions.
Thus we have four possible types, namely individuals who are:
* High/high - frequently making witty remarks or telling jokes and seeking out other people or situations where there is humour;
* Low/low - serious people who do not enjoy telling or hearing humorous stories;
* High/low - people who enjoy telling jokes but show little appreciation when told them by others;
* Low/high - people who are not much given to creating humour but who love to laugh and do so frequently.
There clearly are different types of humour: nonsense humour based on puns or incongruous combinations of words or images; satire based on ridiculing people, groups or institutions; aggressive humour that describes brutality, violence, insults and sadism; and sexual humour. And of course the British speciality: toilet humour.
Freud wrote a number of papers on humour and was fascinated by its functions. Jokes, like dreams, provide an insight into the unconscious. They are important defence mechanisms and suppressing them can lead to serious consequences. Freud divided jokes into two classes, namely the innocent/trivial and the tendentious. The latter served two main purposes - aggression (satire) or sex. Thus the purpose of the most interesting jokes is the expression of sexual or aggressive feelings that would otherwise be barred.
Furthermore, the amount and timing of laughter correspond to the psychological energy saved by not having to repress. "In jokes veritas: jokes are a socially accepted and socially shared mechanism of expressing what is normally forbidden."
Freudian theory is a fecund source of testable hypotheses. For instance: individuals who find aggressive jokes funniest will be those in whom aggression is normally repressed. Those whose main defence mechanism is repression and who have a strong social conscience will be humourless (they will not laugh at jokes). Wits will be more neurotic than the normal population.
But what of humour preference? Extroverts tend to like fast jokes (skits/comedy) and practical jokes. They can take jokes at their own expense and approve of others who can laugh at themselves. Some neurotic unstable people like satire and black humour but tend not to enjoy humour much. Worse, they fail to appreciate the possible uses of humour as an antidote to their anxiety, moodiness and depression. Equally, they fail to appreciate others who use humour as a coping mechanism, as a way to distort reality and thus make things more tolerable.
The problem with humour at work is that one man's meat is another man's poison. And the humourless will inherit the earth because they have been rewarded for "telling nanny". There are, of course, occasions when humour really is inappropriate and insulting, but often jokes are in the ear of the beholder.
Humour at work is bound up with corporate culture: racism lingering just below the surface can be seen in some jokes. But be very careful about any sexist or sexual joke in any quasi-egalitarian or puritan organisation. That can be totally career-limiting. And as for public school practical jokers ... there are now armies of lawyers who have given up ambulance chasing for the more lucrative business of mobile-phone recordings of the odd joke in the workplace.
Rage On The Internet
For a while after his first TV series was broadcast in 2009, comedian Stewart Lee was in the habit of collecting and filing some of the comments that people made about him on web pages and social media sites. He did a 10-minute Google trawl most days for about six months and the resultant collected observations soon ran to dozens of pages. If you read those comments now as a cumulative narrative, you begin to fear for Stewart Lee. A good third of the posts fantasised about violence being done to the comic; most of the rest could barely contain the extent of their loathing.
This is a small, representative selection:
"I hate Stewart Lee with a passion. He's like Ian Huntley to me." Wharto15, Twitter
"I saw him at a gig once, and even offstage he was exuding an aura of creepy molesty smugness." Yukio Mishima, dontstartmeoff.com
"One man I would love to beat with a shit-covered cricket bat." Joycey, readytogo.net
"He's got one of those faces I just want to burn." Coxy, dontstartmeoff.com
"I hope stewart lee dies." Idrie, Youtube
"WHAT THE HELL! If i ever find you, lee, i promise i will, I WILL, kick the crap out of you." Carcrazychica, YouTube
"Stewart Lee is a cynical man, who has been able to build an entire carrer [sic] out of his own smugness. I hope the fucking chrones disease [sic] kills him." Maninabananasuit, Guardian.co.uk
"I spent the entire time thinking of how much I want to punch Stewart Lee in the face instead of laughing. He does have an incredibly punchable face, doesn't he? (I could just close my eyes, but fantasizing about punching Stewart Lee is still more fun than sitting in complete, stony silence.)" Pudabaya, beexcellenttoeachother.com
Lee, a standup comedian who does not shy away from the more grotesque aspects of human behaviour, or always resist dishing out some bile of his own, does not think of himself as naive. But the sheer volume of the vitriol, its apparent absence of irony, set him back. For a few months, knowing the worst that people thought of him became a kind of weird compulsion, though he distanced himself from it slightly with the belief that he was doing his obsessive collating "in character". "Collecting all these up isn't something I would do," he suggests to me. "It is something the made-up comedian Stewart Lee would do, but I have to do it for him, because he is me."
Distanced or not, Lee couldn't help but be somewhat unsettled by the rage he seemed to provoke by telling stories and jokes: "When I first realised the extent of this stuff I was shocked," he says. "Then it appeared to me that a lot of the things I was hated for were things I was actually trying to do; a lot of what people considered failings were to me successes. I sort of wrote a lot of series two of Stewart Lee's Comedy Vehicle with these comments in mind, trying to do more of the things people hadn't liked."
The "40,000 words of hate" have now become "anthropologically amusing" to him, he insists. "You can see a lot of them seem to be the same people posting the same stuff under different names in different places, and it is strange to see people you have known personally, whom you thought you had got on fine with at the time, abusing you under barely effective pseudonyms."
He's stopped looking these days, and never really tried to identify or confront any of his detractors. "I am slightly worried that some of them might be a bit insane and hope I haven't made myself or my family a target."
Lee is, of course, not alone in having this anonymous violent hatred directed toward him. On parts of the internet it has become pretty much common parlance. Do a quick trawl of the blog sites and comment sections about most celebrities and entertainers - not to mention politicians - and you will quickly discover comparable virtual rage and fantasised violence. Comedians seem to come in for more than most, as if taboo-breaking was taken as read, or the mood of the harshest baying club audience had become a kind of universal rhetoric. This isn't quite heckling, though, is it? A heckle requires a bit of courage and risk; the audience can see who is doing the shouting. Lee's detractors were all anonymous. How should we understand it, then: harmless banter? Robust criticism? Vicious bullying?
The psychologists call it "deindividuation". It's what happens when social norms are withdrawn because identities are concealed. The classic deindividuation experiment concerned American children at Halloween. Trick-or-treaters were invited to take sweets left in the hall of a house on a table on which there was also a sum of money. When children arrived singly, and not wearing masks, only 8% of them stole any of the money. When they were in larger groups, with their identities concealed by fancy dress, that number rose to 80%. The combination of a faceless crowd and personal anonymity provoked individuals into breaking rules that under "normal" circumstances they would not have considered.
Deindividuation is what happens when we get behind the wheel of a car and feel moved to scream abuse at the woman in front who is slow in turning right. It is what motivates a responsible father in a football crowd to yell crude sexual hatred at the opposition or the referee. And it's why under the cover of an alias or an avatar on a website or a blog - surrounded by virtual strangers - conventionally restrained individuals might be moved to suggest a comedian should suffer all manner of violent torture because they don't like his jokes, or his face. Digital media allow almost unlimited opportunity for wilful deindividuation. They almost require it. The implications of those liberties, of the ubiquity of anonymity and the language of the crowd, are only beginning to be felt.
You can trace those implications right back to the genesis of social media, to pioneering Californian utopias, and their fall. The earliest network-groups had a sort of Edenic cast. One representative group was CommuniTree, set up as an open-access forum on a series of modem-linked computers in the 1970s, when computers were just humming into life. For a while the group of like-minded enthusiasts ran on perfectly harmonious lines, respecting others and having positive and informed discussions about matters of shared relevance. At some point, however, some high school teenagers armed with modems found their way into the open forum and used it to trash and abuse CommuniTree, taking free speech to uninhibited extremes that the pioneers had never wanted. The pioneers were suitably horrified. And eventually, after deciding that they could neither control the students through censorship nor tolerate the space with them in it, they shut CommuniTree down.
This story has become almost folkloric among new media prophets, a sort of founding myth. It was one of the first moments when the possibilities of the new collective space were tainted by anonymous lowest-common-denominator humanity, a pattern that has subsequently been repeated in pretty much all virtual communication. Barbarians, or "trolls" as they became known, had entered the community, ignoring the rules, shouting loudly, encouraging violence, spoiling it for everybody. Thereafter, anyone who has established a website or forum with high, or medium-high, ideals has had to decide how to deal with such anonymous destructive posters, those who get in the way of constructive debate.
Tom Postmes, a professor of social and organisational psychology at the universities of Exeter and Groningen in his native Netherlands, and author of Individuality and the Group, has been researching these issues for 20 years. "In the early years," he says, "this online behaviour was called flaming. And then that became institutionalised. Among friends, the people who engaged in this activity were actually quite jocular in intent but they were accountable to standards and norms that are radically different to those of most of their audience. Trolls aspire to violence, to the level of trouble they can cause in an environment. They want it to kick off. They want to promote antipathetic emotions of disgust and outrage, which morbidly gives them a sense of pleasure."
Postmes compares online aliases to the tags of graffiti artists: "Trolls want people to identify their style, to recognise them, or at least their online identity. But they will only be successful in this if an authority doesn't clamp down on them. So anonymity helps that. It's essentially risk-free."
There is no particular type of person drawn to this kind of covert bullying, he suggests: "Like football hooligans, they have family and live at home but when they go to a match the enjoyment comes from finding a context in which you can let go, or to use the familiar phrase 'take a moral vacation'. Doing this online has a similar characteristic. You would expect it is just normal people, the bloke you know at the corner shop or a woman from the office. They are the people typically doing this."
Some trolls have become nearly as famous as the blogs to which they attach themselves, in a curious, parasitical kind of relationship. Jeffrey Wells, author of Hollywood Elsewhere, is a former columnist on the LA Times who has been blogging inside stories about movies for 15 years. For the last couple of years his gossip and commentary have been dogged by the invective of a character called LexG, whose 200-odd self-loathing and wildly negative posts recently moved Wells to address him directly: "The coarseness, the self-pity and the occasional eye-pokes and cruel dismissiveness have to be turned down. Way down. Arguments and genuine disdain for certain debaters can be entertaining, mind. I'm not trying to be Ms Manners. But there finally has to be an emphasis on perception and love and passion and the glories of good writing. There has to be an emphasis on letting in the light rather than damning the darkness of the trolls and vomiting on the floor and kicking this or that Hollywood Elsewhere contributor in the balls."
When I spoke to Wells about LexG, he was philosophical. "Everybody on the site writes anonymously, except me," he says. "If they didn't I think it would cause them to dry up. This place is like a bubble in which you can explode, let the inner lava out. And, boy, is there a lot of lava."
He has resisted insisting that people write under their own name because that would kill the comments instantly. "Why would you take that one in 100 chance that your mother or a future employer will read what you were thinking late one night a dozen years ago if you didn't have to?" For haters, Wells believes, anonymity makes for livelier writing. "It's a trick, really - the less you feel you will be identified, the more uninhibited you can be. At his best LexG really knows how to write well and hold a thought and keep it going. He is relatively sane but certainly not a happy guy. He's been doing this a couple of years now and he really has become a presence; he does it on all the Hollywood sites."
Have they ever met?
"Just once," Wells says. "I asked him to write a column of his own, give him a corner of the site, bring him out in the open." LexG didn't want to do it, he seemed horrified at the prospect. "He just wanted to comment on my stuff," Wells suggests. "He is a counter-puncher, I guess. The rules on my site remain simple, though. No ugly rancid personal comments directed against me. And no Tea Party bullshit."
The big problem he finds running the blog is that his anonymous commenters get a kind of pack mentality. And the comments quickly become a one-note invective. As a writer Wells feels he needs a range of emotion: "I also do personal confession or I can be really enthusiastic about something. But the comments tend to be one colour, and that becomes drab. It's tougher, I guess, to be enthusiastic, to really set out honestly why something means something to you. It takes maybe twice as long. I can run with disdain and nastiness for a while but you don't want to always be the guy banging a shoe on the table. Like LexG. I mean it's not healthy, for a start."
Wells does his own marshalling of the debate, somewhat like the bartender of a western saloon. Other sites - including our own Comment is Free - employ moderators to try to keep trolls in line, and move the debate on. A young journalist called Sarah Bee was for three years the moderator on seminal techie news and chat forum the Register. She started as a sub-editor but increasingly devoted her time to looking after the "very boisterous" chat on the site. She has no doubt that "anonymity makes people bolder and more arsey, of course it does. And it was quite a politically libertarian crowd, so you get people expressing things extremely stridently, people would disagree and there would often be a lot of real nastiness." She was very liberal as far as moderating went, she thinks, with no real hard and fast rules, except, perhaps, for "a ban on prison-rape jokes, which came up extremely often".
Every once in a while, however, the mood would get "very ugly" and she would try to calm things down and remonstrate with people. "I would occasionally email them - they had to give their email addresses when registering for the site - to say, 'Even though you are not writing under your real name, people can hear you.'" In those instances, strangely, she suggests, most people were incredibly contrite when contacted. It was like they had forgotten who they were. "They would send messages back saying, 'Oh, I'm so sorry', not even using the excuse of having a bad day or anything like that. It is so much to do with anonymity."
Bee became known as the Moderatrix - "all moderators have an implicit sub-dom relationship with their site" - though she was just about the only person in the comment section who used her own name. "There was a lot of misogyny and casual sexism, some pretty off-colour stuff. I would get a few horrible emails calling me a cunt or whatever," she says, "but that didn't bother me as much as the day-to-day stuff, really."
The day-to-day stuff was, though, "like being in another world. It got really wearying. I would go home sometimes and just sigh and wonder about it all."
She is keen to say that she thought the Register itself a great thing, and loved the idea of working there, but being Moderatrix eventually got her down. "A hive mind sets in," she suggests. "Just occasionally good sense would prevail but then there is the fact that arguments on the internet are literally never over. You moderate a few hundred comments a day, and then you come in the next morning and there are a few hundred more waiting for you. It's Sisyphean."
In the end she needed a change. She's in another "community management" job now, dealing through Facebook, which is a relief because "it removes anonymity so people are a lot more polite". When she retired Moderatrix she did a goodbye and got 250 comments wishing her well. She doesn't miss it, though. "Just occasionally I would let a stream of the most offensive things through, just to let people know how those things looked in the world. People would realise for a bit. But then the old behaviours would immediately set in. The thing any moderator will tell you is that every day is a new day and everything repeats itself every day. It is not about progress or continuity."
There are many places, of course, on the internet where a utopian ideal of "here comes everybody" prevails, where the anonymous hive mind is fantastically curious and productive. A while ago I talked to Jimmy Wales, the founder of Wikipedia, about some of this, and asked him who his perfect contributor was. "The ideal Wikipedian, in my mind, is someone who is really smart and really kind," he said, without irony. "Those are the people who are drawn into the centre of the group. When people get power in these communities, it is not through shouting loudest, it is through diplomacy and conflict resolution."
Within this "wikitopia" there were, too, though, plenty of Lord of the Flies moments. The benevolent Wiki community is plagued by "Wikitrolls" - vandals who set out to insert slander and nonsense into pages. A policing system has grown up to root out troll elements; there are well over 1,000 official volunteer "admins", working round the clock; they are supported in this work by the eyes and ears of the moral majority of "virtuous" Wikipedians.
"When we think about difficult users there are two kinds," Wales said, with the same kind of weariness as Moderatrix. "The easy kind is someone who comes in, calls everyone Nazis, starts wrecking articles. That is easy to deal with: you block them, and everyone moves on. The hard ones are people who are doing good work in some respects but are also really difficult characters and they annoy other people, so we end up with these long intractable situations where a community can't come to a decision. But I think that is probably true of any human community."
Wales, who has conducted perhaps the most hopeful experiment in human collective knowledge of all time, appears to have no doubt that the libertarian goals of the internet would benefit from some similar voluntary restraining authority. It was the case of the blogger Kathy Sierra that caused Wales and others to propose in 2007 an unofficial code of conduct on blog sites, part of which would outlaw anonymity. Kathy Sierra is a programming instructor based in California; after an online spat on a tech-site she was apparently randomly targeted by an anonymous mob that posted images of her as a sexually mutilated corpse on various websites and issued death threats. She wrote on her own blog: "I'm at home, with the doors locked, terrified. I am afraid to leave my yard, I will never feel the same. I will never be the same."
Among Wales's suggestions in response to this and other comparable horror stories of virtual bullying was that bloggers consider banning anonymous comments altogether, and that they be able to delete comments deemed abusive without facing accusations of censorship. Wales's proposals were quickly shot down by the libertarians, and the traffic-hungry, as unworkable and against the prevailing spirit of free speech.
Other pioneering idealists of virtual reality have lately come to question some of those norms, though. Jaron Lanier is credited with being the inventor of virtual worlds. His was the first company to sell virtual reality gloves and goggles. He was a key adviser in the creation of avatar universe Second Life. His recent book, You Are Not a Gadget, is, in this sense, something of a mea culpa, an argument for the sanctity of the breathing human individual against the increasingly anonymous virtual crowd. "Trolling is not a string of isolated incidents," Lanier argued, "but the status quo in the online world." He suggested "drive-by anonymity", in which posters create a pseudonym in order to promote a particularly violent point of view, threatened to undermine human communication in general. "To have substantial exchange, you need to be fully present. That is why facing one's accuser is a fundamental right of the accused."
We rightly hear a great deal about the potential of social media and websites to spread individual freedom, as evidenced during the Arab spring and elsewhere. Less is written about their capacity to reinforce pack identities and mob rule, though clearly that is also part of that potential.
Social psychologist Tom Postmes has been disturbed by the coarsening of debate around issues such as racial integration in his native Netherlands, a polarisation that he suggests has grown directly from the fashionable political incorrectness of particular websites where anonymity is guaranteed. "There is some evidence to suggest that the mainstream conservative media even cuts politically correct or moderate posts from websites in favour of the extremes," he says. "The tone of the public debate around immigration has diminished enormously in these forums."
One effect of "deindividuation" is a polarisation within groups in which like-minded people typically end up in more extreme positions, because they gain credibility by exaggerating loosely held prejudices. You can see that in the bloggers trying to outdo one another with pejoratives about Stewart Lee. This has the effect of shifting norms: extremism becomes acceptable. As Lanier argues: "I worry about the next generation of young people around the world growing up with internet-based technology that emphasises crowd aggregation. Will they be more likely to succumb to pack dynamics when they come of age?" The utopian tendency is to believe that social media pluralises and diversifies opinion; most of the evidence suggests that it is just as likely, when combined with anonymity, to reinforce groupthink and extremism.
A lot of this comes down to the politics of anonymity, a subject likely to greatly exercise the minds of legislators as our media becomes increasingly digitised, and we rely more and more on mostly unaccountable and easily manipulated sources - from TripAdvisor to Twitter feeds to blog gossip - for our information.
One simple antidote to this seems to rest in the very old-fashioned idea of standing by your good name. Adopt a pseudonym and you are not putting much of yourself on the line. Put your name to something and your words are freighted with responsibility. Arthur Schopenhauer wrote well on the subject 160 years ago: "Anonymity is the refuge for all literary and journalistic rascality," he suggested. "It is a practice which must be completely stopped. Every article, even in a newspaper, should be accompanied by the name of its author; and the editor should be made strictly responsible for the accuracy of the signature. The freedom of the press should be thus far restricted; so that when a man publicly proclaims through the far-sounding trumpet of the newspaper, he should be answerable for it, at any rate with his honour, if he has any; and if he has none, let his name neutralise the effect of his words. And since even the most insignificant person is known in his own circle, the result of such a measure would be to put an end to two-thirds of the newspaper lies, and to restrain the audacity of many a poisonous tongue."
The internet amplifies Schopenhauer's trumpet many times over. Though there are repressive regimes where anonymity is a prerequisite of freedom, and occasions in democracies when anonymity must be preserved, it is usually clear when those reservations apply. Generally, though, who should be afraid to stand up and put their name to their words? And why should anyone listen if they don't?
Freudian Slips or Assembly Error?
Last week, a verbal stumble by Republican candidate Rick Santorum led to a fresh batch of accusations that he harbors racist sentiments. Here is a transcript, from a speech delivered on March 27th 2012 in Janesville, Wisconsin:
We know, we know the candidate Barack Obama, what he was like. The anti-war government nig- uh, the uh America was a source for division around the world.
Almost immediately, this video clip began to zip around the internet, with many people arguing that Santorum had caught himself in the middle of uttering a racial slur against Barack Obama, inadvertently revealing his true attitude. The presumption behind these arguments is that Freudian slips reflect a layer of thoughts and attitudes that sometimes slip past the mental guards of consciousness and bubble to the surface. That they're the window to what someone was really thinking, despite his best efforts to conceal it.
But decades of research in psycholinguistics reveal that speech errors are rarely this incriminating. The vast majority of them come about simply because of the sheer mechanical complexity of the act of speaking. They're less like Rorschach blot tests and more like mundane assembly-line mistakes that didn't get caught by the mind's inner quality control.
Speech errors occur because when it comes to talking, the mind cares much more about speed than it does about accuracy. We literally speak before we're done thinking about what we're going to say, and this is true not just for the more impetuous amongst us, but for all speakers, all of the time. Speech production really is like an assembly line, but an astoundingly frenzied one in which an incomplete set of blueprints is snatched out of the hands of the designers by workers eager to begin assembling the product before it's fully sketched out.
The assembly process - that is, the process of actually choosing and uttering specific words to express the ideas in the blueprint - is itself highly error-prone. Any one of a number of things can go wrong. For example, the word-chooser might forget that he's already sent along instructions for a specific word, and request that word twice by mistake. Or in the heat of the moment, one word might get chosen instead of another simply because the two look a lot alike. Or the word-builders might put the wrong pieces together. This often happens with speech errors that are called spoonerisms, where two sounds get exchanged - leading to odd results like saying 'queer old dean' instead of 'dear old queen.'
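To make the word-building mix-up concrete, here is a minimal sketch (mine, not from the psycholinguistics literature) of the sound-exchange behind a spoonerism: a toy function that swaps the leading consonant letters of two words in a phrase. Real slips happen over phonemes rather than spelling, so the output is only a rough approximation of how the error sounds.

```python
import re

def spoonerise(phrase: str, i: int, j: int) -> str:
    """Toy model of a sound-exchange error: swap the leading consonant
    letters (the 'onsets') of the i-th and j-th words in a phrase.
    Real spoonerisms exchange sounds, not letters, so this is only a
    rough orthographic stand-in for the speech-production mix-up."""
    words = phrase.split()
    onset = re.compile(r"^[^aeiou]*", re.IGNORECASE)
    a = onset.match(words[i]).group()
    b = onset.match(words[j]).group()
    words[i] = b + words[i][len(a):]
    words[j] = a + words[j][len(b):]
    return " ".join(words)

# The exchange turns one plausible phrase into another, odd-sounding one.
print(spoonerise("a pack of lies", 1, 3))  # -> "a lack of pies"
```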
All this chaos leads to errors and disfluencies in about five percent of sentences uttered, most of which are easily glossed over by listeners. Psycholinguists have long been fascinated by speech errors, publishing hundreds of studies on the topic, often using clever methods to induce slips of the tongue in the lab. What do all these studies tell us about the inner workings of a speaker's mind? Well, lots. But not so much about the attitudes that a speaker might be trying to suppress. Rather, they tell us a lot about what speech production looks like, allowing scientists to build detailed models of the entire assembly-line process. This makes speech errors plenty riveting if you're a language geek. But they're hardly grounds for making judgments about someone's "true" opinions.
Unfortunately, lack of knowledge about the complicated process of language production can easily lead people to jump to false conclusions about the underlying causes of so-called Freudian slips. For example, in 2007, Republican presidential candidate Mitt Romney was taken to task for confusing the names Obama and Osama and suffered accusations of deliberate political fear-mongering.
And in 2006, radio broadcaster Dave Lenihan lost his job over inadvertently uttering a racial slur while discussing the prospect of Condoleezza Rice as commissioner for the National Football League. Here's what Lenihan said:
She's got the patent resume of somebody that has serious skill. She loves football, she's African-American, which would be kind of a big coon. A big coon. Oh my God - I totally, totally, totally, totally am sorry for that. I didn't mean that.
Lenihan later claimed that he was aiming to say 'coup' and mis-pronounced the word, an explanation that would strike a language scientist as wildly plausible. The goof may well have been a blend of the parts of the words 'coup' and 'boon', either of which would have been reasonable words in context, and whose sound similarity would have increased the probability of the blend. No matter. Enough listeners expressed outrage at what they saw as Lenihan's thinly veiled racist attitudes and he got the boot.
It's impossible to know exactly what led to Santorum's slip. But it could easily have come from any one of a number of assembly-line failures. He may simply have been changing course midstream in his sentence while uttering a word like 'negotiator.' Or the offending syllable may have come from a word that he was planning to utter later in the sentence, or sounds may have gotten swapped around in the word-building stage.
Rick Santorum may or may not hold racist attitudes. But we can't tell from his slips of the tongue. To use speech errors as evidence of his deeply held sentiments is about as scientific as dunking a woman into the river to see if she floats before declaring her a witch.
A Little Help
THE idea that an infusion of hope can make a big difference to the lives of wretchedly poor people sounds like something dreamed up by a well-meaning activist or a tub-thumping politician. Yet this was the central thrust of a lecture at Harvard University on May 3rd by Esther Duflo, an economist at the Massachusetts Institute of Technology known for her data-driven analysis of poverty. Ms Duflo argued that the effects of some anti-poverty programmes go beyond the direct impact of the resources they provide. These programmes also make it possible for the very poor to hope for more than mere survival.
She and her colleagues evaluated a programme in the Indian state of West Bengal, where Bandhan, an Indian microfinance institution, worked with people who lived in extreme penury. They were reckoned to be unable to handle the demands of repaying a loan. Instead, Bandhan gave each of them a small productive asset - a cow, a couple of goats or some chickens. It also provided a small stipend to reduce the temptation to eat or sell the asset immediately, as well as weekly training sessions to teach them how to tend to animals and manage their households. Bandhan hoped that there would be a small increase in income from selling the products of the farm animals provided, and that people would become more adept at managing their own finances.
The results were far more dramatic. Well after the financial help and hand-holding had stopped, the families of those who had been randomly chosen for the Bandhan programme were eating 15% more, earning 20% more each month and skipping fewer meals than people in a comparison group. They were also saving a lot. The effects were so large and persistent that they could not be attributed to the direct effects of the grants: people could not have sold enough milk, eggs or meat to explain the income gains. Nor were they simply selling the assets (although some did).
So what could explain these outcomes? One clue came from the fact that recipients worked 28% more hours, mostly on activities not directly related to the assets they were given. Ms Duflo and her co-authors also found that the beneficiaries' mental health improved dramatically: the programme had cut the rate of depression sharply. She argues that it provided these extremely poor people with the mental space to think about more than just scraping by. As well as finding more work in existing activities, like agricultural labour, they also started exploring new lines of work. Ms Duflo reckons that an absence of hope had helped keep these people in penury; Bandhan injected a dose of optimism.
Ms Duflo is building on an old idea. Development economists have long surmised that some very poor people may remain trapped in poverty because even the largest investments they are able to make, whether eating a few more calories or working a bit harder on their minuscule businesses, are too small to make a big difference. So getting out of poverty seems to require a quantum leap - vastly more food, a modern machine, or an employee to mind the shop. As a result, they often forgo even the small incremental investments of which they are capable: a bit more fertiliser, some more schooling or a small amount of saving.
This hopelessness manifests itself in many ways. One is a sort of pathological conservatism, where people forgo even feasible things with potentially large benefits for fear of losing the little they already possess. For example, poor people stay in drought-hit villages when the city is just a bus ride away. An experiment in rural Bangladesh provided men with the bus fare to Dhaka at the beginning of the lean season, the period between planting and the next harvest when there is little to do except sit around. The offer of the bus fare, an amount which most of the men could have saved up to pay for themselves, led to a 22-percentage-point increase in the probability of migration. The money migrants sent back led their families' consumption to soar. Having experienced the $100 increase in seasonal consumption per head that the $8 bus fare made possible, half of those offered the bus fare migrated again the next year, this time without the inducement.
People sometimes think they are in a poverty trap when they are not. Surveys in many countries show that poor parents often believe that a few years of schooling have almost no benefit; education is valuable only if you finish secondary school. So if they cannot ensure that their children can complete school, they tend to keep them out of the classroom altogether. And if they can pay for only one child to complete school, they often do so by avoiding any education for the children they think are less clever. Yet economists have found that each year of schooling adds a roughly similar amount to a person's earning power: the more education, the better. Moreover, parents are very likely to misjudge their children's skills. By putting all their investment in the child who they believe to be the brightest, they ensure that their other children never find out what they are good at. Assumed to have little potential, these children live down to their parents' expectations.
The fuel of self-belief
Surprising things can often act as a spur to hope. A law in India set aside for women the elected post of head of the village council in a third of villages. Following up several years later, Ms Duflo found a clear effect on the education of girls. Previously parents and children had far more modest education and career goals for girls than for boys. Girls were expected to get much less schooling, stay at home and do the bidding of their in-laws. But a few years of exposure to a female village head had led to a striking degree of convergence between goals for sons and daughters. The very existence of these women leaders seems to have expanded the girls' sense of the possible beyond a life of domestic drudgery. An unexpected consequence, perhaps, but a profoundly hopeful one.
Niceness Is In Your Genes
Many times, two siblings raised by the same parents, and subject to similar environmental influences, can turn out to be polar opposites: one kind and generous, the other mean-spirited. A new study reveals that the latter might simply have been dealt the wrong hormone receptor genes.
Oxytocin and vasopressin, two hormones that inspire feelings of love and generosity when they flood our brains, bind to neurons by attaching to molecules called receptors, which can come in different forms. The new research, led by psychologist Michel Poulin of the University of Buffalo, suggests that if you have the genes that give you certain versions of those hormone receptors, you're more likely to be a nice person than if you have the genes for one of the other versions. However, the researchers found that the genes work in concert with a person's upbringing and life experiences to determine how sociable - or anti-social - he or she becomes.
As detailed in a new article in the journal Psychological Science, hundreds of people were surveyed about their attitudes toward civic duty, their charitable activities and their worldview. They were asked, for example, whether people have an obligation to report crimes, sit on juries or pay taxes, whether they themselves engage in charitable activities such as giving blood or volunteering, and whether people - and the world as a whole - are basically good, or are threatening and dangerous. Of those surveyed, 711 people provided a sample of their saliva for DNA analysis, which showed which version of the oxytocin and vasopressin receptors they had.
Study participants who saw the world as a threatening place, and the people in it as inherently bad, were nonetheless nice, dutiful and charitable as long as they had the versions of the receptor genes associated with niceness. These "nicer" versions of the genes, Poulin said, "allow you to overcome feelings of the world being threatening and help other people in spite of those fears."
With the other types of receptor genes, however, a negative worldview resulted in anti-social behavior.
"The fact that the genes predicted behavior only in combination with people's experiences and feelings about the world isn't surprising," Poulin said in a press release, "because most connections between DNA and social behavior are complex." [Is Free Will an Illusion? Scientists, Philosophers Forced to Differ]
For oxytocin, the difference between having the "nicer" hormone receptor and the "less nice" receptor lies in a single DNA base pair located on the third chromosome. If you inherit a guanine (G) base from each parent, giving you a genotype represented by the letters GG, your cells build the "nicer" receptor. If you inherit an adenine (A) base from one or both parents, giving you a genotype of either AA or AG, you land the "less nice" oxytocin receptor.
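As a quick illustration of the rule described above (GG maps to the "nicer" oxytocin receptor, AA or AG to the "less nice" one), here is a minimal sketch; the function name and labels are mine, not the study's, which genotyped saliva samples and related the result to survey answers.

```python
def oxytocin_receptor_variant(genotype: str) -> str:
    """Map a two-letter genotype at the oxytocin-receptor variant to the
    article's shorthand labels. Illustration only, not the study's code."""
    alleles = sorted(genotype.upper())
    if alleles == ["G", "G"]:
        return "GG: 'nicer' receptor"
    if set(alleles) <= {"A", "G"}:
        return f"{genotype.upper()}: 'less nice' receptor"
    raise ValueError(f"unexpected genotype: {genotype}")

for g in ("GG", "AG", "AA"):
    print(oxytocin_receptor_variant(g))
```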
The percentage of people with each genotype varies greatly between ethnicities. "In European-American samples - so, white people in the U.S. - what you see is that the GG genotype represents about half of people, or a small majority. That's the population we studied in this paper," Poulin told Life's Little Mysteries. "Other research indicates that the rates of GG or the so-called 'nice' genotype are much lower in East Asian populations. This is sparking an interesting discussion among psychologists about the roots of pro-social behavior. We know East Asian cultures are much more communal than other cultures. How do we explain that distinction?"
It could be that other genes, or other cultural factors, play a more significant role in molding communal behavior among East Asian people than their oxytocin receptors, Poulin said. "These are early days in figuring out the association between genes and pro-social behavior."
But the evidence is converging to point to a greater influence of genes on niceness than was previously assumed. For example, another study performed last year by scientists at the University of Edinburgh showed that identical twins, who share 100 percent of their genes, had much more similar attitudes toward civic duty and charitable activities than did fraternal twins, who had parallel upbringings but who share only 50 percent of their genes. With the new study, Poulin and his colleagues have identified the genes that they say "may lie at the core of the caregiving behavioral system."
"We aren't saying we've found the niceness gene," he said. "But we have found a gene that makes a contribution. What I find so interesting is the fact that it only makes a contribution in the presence of certain feelings people have about the world around them."
Having the "nicer" genes may benefit you, as well as those around you. According to the scientists, "Some research has indicated that behavior aimed at helping other people is a better predictor of health and well-being than are social engagement or received social support." In other words, helping others makes you healthier - even more so than being helped yourself.
The Way You Walk
First impressions are powerful and are formed in all sorts of social settings, from job interviews and first dates to court rooms and classrooms. We regularly make snap judgments about others, deciding whether people are trustworthy, confident, extraverted, likable, and more. Although we have all heard the old adage, "don't judge a book by its cover," we do just that. And at the same time that we are judging others, we in turn communicate a great deal of information about ourselves - often unwittingly - that others use to size us up.
It is no surprise that complete strangers engage in a process of mutual evaluation, or that people form impressions quite quickly, and in many cases, quite accurately. One of the first studies to demonstrate this fact was conducted in 1966 by Warren Norman and Lou Goldberg, who had college students rate their classmates' personalities on the very first day of class, before they had a chance to interact. Students were also asked to rate their own personalities. Two surprising findings emerged from that study: First, classmates tended to agree in their assessments of others; if one classmate rated a peer as dependable or extraverted, it was likely that other classmates rated that individual as dependable or extraverted too.
The second and more noteworthy finding was that students' first impressions of their classmates tended to align with their classmates' own self-assessments. Thus, if a person was rated as sociable by his classmates, it was quite likely that he had independently rated himself as sociable as well. These data, and those from similar studies, suggest that we are adept at rapidly and accurately evaluating some personality traits, or at the very least that we are quick to discern the way others see themselves.
Just how low can we go? If we can accurately size up a fellow student in the first few moments of class, without any significant interaction, how little information do we need in order to make these assessments? Is body language enough? What about facial expressions, clothing, or mannerisms? From the shoes we wear and the way we stand to the songs we like and the way we walk, researchers have examined what our behaviors and our preferences convey to others. In many instances it seems we need to catch only the smallest detail about a person to form an accurate impression, but of course we can get it wrong.
Nalini Ambady and Robert Rosenthal led the quest to understand the limits of impression formation, and in a series of studies they demonstrated that observers make accurate personality and competency judgments using very 'thin slices' of behavior. In their experiments, undergraduate raters watched brief, 30-sec video clips of teachers in the classroom, and evaluated the teachers on thirteen different variables, including likability, competence, warmth, honesty, and optimism. Notably, the audio on these video clips was removed, so that raters made their evaluations exclusively on the basis of non-verbal cues. Not only were the judgments of the teachers fairly consistent across raters, but they were also fairly accurate.
These appraisals, rendered after only half a minute of observation, were reliably predictive of the evaluation scores given to the teachers by students whom they had instructed for a full semester. In subsequent studies Ambady and Rosenthal examined judgments rendered after only ten seconds, and then after a mere two seconds. Even when given only two seconds of silent video, the raters made judgments that correlated reliably with end-of-semester evaluations made by the teachers' own students.
Two seconds of silent video may indeed seem like a very thin slice of behavior upon which to base an impression, but researchers have demonstrated that we can do well with far less. More recent investigations have demonstrated that a simple photograph of our favorite shoes provides enough data for strangers to judge our age, gender, income and attachment anxiety, and that a list of our top ten favorite songs reveals how agreeable and emotionally stable we are.
But what if we reduce the information available to a mere series of dots, strung together to form a stick figure that depicts movement but nothing else? John Thoresen, Quoc Vuong, and Anthony Atkinson addressed this question in a series of experiments where participants judged personality traits on the basis of body movements alone. The scientists first videotaped male and female volunteers as they walked roughly 25 feet. From these videos, they created stick-figure depictions of each walker, eliminating all information about age, attractiveness, weight, clothing, race, and gender. The only information available to observers was the gait of the walker, conveyed in the form of a two-dimensional stick-figure.
Participants in these studies rated each stick-figure walker on six trait scales: adventurousness, extraversion, neuroticism, trustworthiness, warmth, and approachability. Two questions were addressed: First, were the impressions about the stick-figure walkers consistent across raters? Second, were they accurate?
Raters were in fact fairly reliable in their judgments: if one rater judged a walker to be adventurous and extraverted, it is likely that other raters did too. Despite the consensus in ratings, though, the impressions were not correct. The trait judgments made by raters did not align with the walkers' self-reports.
These findings are a bit puzzling. If the raters were wrong, why were their impressions so similar? What information were raters using to make their judgments that led to such consistency? Thoresen and colleagues speculated that the raters may have tried to glean other (unseen) physical characteristics of the stick-figure walkers, like gender, age, or health, and may have used those intuitions as the basis for their personality judgments. To test this possibility, they asked new groups of raters to view the stick-figure walkers and guess various physical characteristics, like gender, attractiveness, age, and excitability.
Raters were again very consistent in their judgments: If one rater perceived a stick-figure walker as male or attractive, it was very likely that the other raters did. For some physical characteristics (e.g., gender) raters were both consistent and also fairly accurate, but for others (e.g., age), raters agreed with each other but were not correct in their assessments. Regardless of the accuracy of these judgments, the assessments of physical characteristics like gender and attractiveness reliably predicted the perceived personality traits for each stick-figure walker. For example, walkers perceived as masculine were also perceived as emotionally stable, those perceived as attractive were also considered approachable, and those perceived as calm were also viewed as warm.
The findings suggest that in the absence of good information, viewers glean what they can from a situation and use that information to form impressions about personality traits. Even though trait judgments that are derived from minimal detail (like gait) are likely to be wrong, there is an odd consistency in those errant judgments. People seem to rely on common factors when forming impressions, and reach similar, albeit inaccurate, conclusions when information is scarce.
The Power of Fictional Characters
When you "lose yourself" inside the world of a fictional character while reading a story, you may actually end up changing your own behavior and thoughts to match that of the character, a new study suggests.
Researchers at Ohio State University examined what happened to people who, while reading a fictional story, found themselves feeling the emotions, thoughts, beliefs and internal responses of one of the characters as if they were their own -- a phenomenon the researchers call "experience-taking."
They found that, in the right situations, experience-taking may lead to real changes, if only temporary, in the lives of readers.
In one experiment, for example, the researchers found that people who strongly identified with a fictional character who overcame obstacles to vote were significantly more likely to vote in a real election several days later.
"Experience-taking can be a powerful way to change our behavior and thoughts in meaningful and beneficial ways," said Lisa Libby, co-author of the study and assistant professor of psychology at Ohio State University.
There are many ways experience-taking can affect readers.
In another experiment, people who went through this experience-taking process while reading about a character who was revealed to be of a different race or sexual orientation showed more favorable attitudes toward the other group and were less likely to stereotype.
"Experience-taking changes us by allowing us to merge our own lives with those of the characters we read about, which can lead to good outcomes," said Geoff Kaufman, who led the study as a graduate student at Ohio State. He is now a postdoctoral researcher at the Tiltfactor Laboratory at Dartmouth College.
Their findings appear online in the Journal of Personality and Social Psychology and will be published in a future print edition.
Experience-taking doesn't happen all the time. It only occurs when people are able, in a sense, to forget about themselves and their own self-concept and self-identity while reading, Kaufman said. In one experiment, for example, the researchers found that most college students were unable to undergo experience-taking if they were reading in a cubicle with a mirror.
"The more you're reminded of your own personal identity, the less likely you'll be able to take on a character's identity," Kaufman said.
"You have to be able to take yourself out of the picture, and really lose yourself in the book in order to have this authentic experience of taking on a character's identity."
In the voting study, 82 undergraduates who were registered and eligible to vote were assigned to read one of four versions of a short story about a student enduring several obstacles on the morning of Election Day (such as car problems, rain, long lines) before ultimately entering the booth to cast a vote. This experiment took place several days before the 2008 November presidential election.
Some versions were written in first person ("I entered the voting booth") while some were written in third person ("Paul entered the voting booth"). In addition, some versions featured a student who attended the same university as the participants, while in other versions, the protagonist in the story attended a different university.
After reading the story, the participants completed a questionnaire that measured their level of experience-taking -- how much they adopted the perspective of the character in the story. For example, they were asked to rate how much they agreed with statements like "I found myself feeling what the character in the story was feeling" and "I felt I could get inside the character's head."
The results showed that participants who read a story told in first-person, about a student at their own university, had the highest level of experience-taking. And a full 65 percent of these participants reported they voted on Election Day, when they were asked later.
In comparison, only 29 percent of the participants voted if they read the first-person story about a student from a different university.
"When you share a group membership with a character from a story told in first-person voice, you're much more likely to feel like you're experiencing his or her life events," Libby said. "And when you undergo this experience-taking, it can affect your behavior for days afterwards."
While people are more likely to lose themselves in a character who is similar to themselves, what happens if they don't learn until later in a story that the character is not similar?
In one experiment, 70 male, heterosexual college students read a story about a day in the life of another student. There were three versions -- one in which the character was revealed to be gay early in the story, one in which the student was identified as gay late in the story, and one in which the character was heterosexual.
Results showed that the students who read the story where the character was identified as gay late in the narrative reported higher levels of experience-taking than did those who read the story where the character's homosexuality was announced early.
"If participants knew early on that the character was not like them - that he was gay - that prevented them from really experience-taking," Libby said.
"But if they learned late about the character's homosexuality, they were just as likely to lose themselves in the character as were the people who read about a heterosexual student."
Even more importantly, the version of the story participants read affected how they thought about gays.
Those who read the gay-late narrative reported significantly more favorable attitudes toward homosexuals after reading the story than did readers of both the gay-early narrative and the heterosexual narrative. Those who read the gay-late narrative also relied less on stereotypes of homosexuals -- they rated the gay character as less feminine and less emotional than did the readers of the gay-early story.
"If people identified with the character before they knew he was gay, if they went through experience-taking, they had more positive views -- the readers accepted that this character was like them," Kaufman said.
Similar results were found in a story where white students read about a black student, who was identified as black early or late in the story.
Libby said experience-taking is different from perspective-taking, where people try to understand what another person is going through in a particular situation -- but without losing sight of their own identity. "Experience-taking is much more immersive -- you've replaced yourself with the other," she said. The key is that experience-taking is spontaneous -- you don't have to direct people to do it, but it happens naturally under the right circumstance.
"Experience-taking can be very powerful because people don't even realize it is happening to them. It is an unconscious process," Libby said.
Power really does corrupt - research shows that being boss changes our brains
EVERY week or so, US President Barack Obama's security chiefs give him a list of terror suspects based in Yemen, Somalia or Pakistan, along with biographies and pictures. From this shortlist Obama personally authorises which suspects should be taken out by remote predator drones. The first strike he ordered happened three days after he took office and he was reportedly extremely upset when a number of children were inadvertently killed in the attack.
At this year's White House Correspondents' Association Dinner, the president continued his now famous series of light-hearted singing and jokey press outings with a warning to boy band the Jonas Brothers about his daughters: "Sasha and Malia are huge fans but, boys, don't be getting any ideas. I have two words for you: predator drones. You will never see it coming. You think I'm joking?"
More recently, he was suspected of sexual innuendo when speaking in Beverly Hills. He said: "I want to thank my wonderful friend who accepts a little bit of teasing about Michelle beating her in push-ups - but I think she claims Michelle didn't go all the way down." According to the reporters' pool, the president let the line "hang, naughtily".
No one is sure whether the double entendre was intentional, but at the very least the line was not vetted carefully enough to avoid innuendo. Whatever the truth, this is consistent with a more "loosened up" presidential persona, which may be part of an election plan to soften the rather aloof, professorial style that characterised his early presidency.
But another interpretation is possible. The predator drone gag was humorous, but if you or I were given the task of deciding who would die that week - with the possibility of also killing innocent children - would we not find it hard to joke about such strikes?
Consider this. The nearly four years he has spent as the most powerful man in the world has almost certainly reshaped Obama's brain and personality. Power increases testosterone levels, which in turn increases the uptake of dopamine in the brain's reward network. The results are an increase in egocentricity and a reduction in empathy (Psychological Science, vol 17, p 1068).
The tasteless joke about the predator drones was in line with the sort of decline in empathy that even small amounts of power can trigger. Similarly, if he did intend the sexual innuendo in his press-ups joke, that kind of disinhibition would also be characteristic of power's effects on the brain. Even tiny amounts of power, such as being allowed to grade the performance of your partner in a social psychology experiment, change behaviour.
This can be seen in research by Dacher Keltner and his colleagues at the University of California, Berkeley, who showed that when a hierarchical group is presented with a plate of cookies, "the boss" is much more likely to take the last one, and eat it with an open mouth, scattering debris and leaving crumbs on their face. These behaviours are not features of a bad upbringing or sloppy personality; if the same person was part of the group, they would be more likely to eat demurely.
Like many neurotransmitters in the brain, dopamine operates in an "inverted U" shape, where either too little or too much can impair the co-ordinated functioning of the brain. Through its cocaine-like disruption of the brain's reward system, unfettered power leads to real problems of judgement, emotional functioning, self-awareness and inhibition.
Unfettered power can also trigger narcissism and a mentality along the lines of the "hubris syndrome" that the former British cabinet minister David Owen identified, where power becomes an intoxicating drug for politicians. And the bizarre behaviour of dictators like Muammar Gaddafi cannot easily be explained in terms of pre-existing personality traits: it is much easier to interpret in terms of the unbalancing effects of power on the brain.
The tools of democracy - free elections, limited terms of office, an independent judiciary and a free press - were developed in part to combat the effects of excessive power on leaders. Even the Chinese change their leaders every 10 years. But it is not just political leaders who are affected - hundreds of millions of people have power over others through their jobs.
Nathanael Fast and colleagues at the University of Southern California's Marshall School of Business discovered that some bosses who have a lot of power over their underlings behave decently while others abuse their position by behaving aggressively. Why is this? Fast discovered that power makes bullies of people who feel inadequate in the role of boss. With power comes the need to perform under the close and critical scrutiny of underlings, peers and bosses. Such power energises and smartens some, but it stresses others who might have functioned well in a less powerful position. The former Japanese prime minister Shinzo Abe is a good example. He resigned in 2007 after just one year in office, with severe stress playing a major part.
Other leaders may have too big an appetite for power. Former British prime minister Tony Blair is arguably a recent example, where an appetite for power led to disastrous judgements, principally the invasion of Iraq. And Vladimir Putin, who will have been in continuous power as Russian president or prime minister for 18 years by the time he finishes his current term, arguably shows alarming symptoms of the neurological consequences of excessive power, such as a taste for photographs of himself bare chested or with tigers or bears.
This, then, is the conundrum: we need strong leaders with an appetite for power and who can benefit from its anti-depressant effects while being able to negotiate the stresses, decisions and loneliness of leadership. Power feels good because it uses the same reward network as cocaine and sex. As we watch our leaders become rapidly grey and lined with the stress of office, we recognise that they need to be rewarded and motivated by power to stay the course and handle the complex challenges of the 21st century. At the same time, they need protecting from the toxic effects of the world's most seductive neurological drug.
The philosopher Bertrand Russell argued that power is to human relationships what energy is to physics. Yet the effect of power on the brains of leaders is one of the great under-considered variables in life. In the very near future, the neurological and psychological effects of power must become part of our discourse about leaders, bosses, professors, doctors - and all the other roles in which individuals are given charge of resources which others want, need or fear.
Of course power's effects are not all negative. It makes people smarter and more inclined to think abstractly and strategically. Power emboldens by reducing anxiety and raising mood, and it gives people a greater appetite for risk. This makes sense evolutionarily: as a species that goes in for hierarchically organised groups, leadership or dominance should enhance strategic thinking, reduce anxiety and allow leaders room to inspire others to keep up to the mark. We cannot use leaders who are paralysed by a surfeit of empathy. Which general would make the right decision if he emotionally engaged with the suffering of every soldier or civilian?
So Obama's drone joke is understandable enough, however distasteful it may be to those of us unchanged by mega-doses of power. A US president can only stay in office for eight years. Now we know that there are good biological reasons for this as well as political ones.
Lies, White Lies, and Bullshit
Here's something I bet we all believe: Lying is bad. Telling the truth is good. It's what our parents told us, right up there with eat your vegetables, brush your teeth, and make sure you unplug the soldering iron. (What? I was raised by engineers.) But there's something else none of us can deny: We are all liars. According to a 2011 survey of Americans, we humans lie about 1.65 times a day. (Men lie a little more than women, 1.93 lies to 1.39 lies a day.)
Perhaps this is why people got really excited in 2007 when Jeff Hancock, a communications professor at Cornell University, started talking about how we could use computers and algorithms to help detect lies. His research pulled from a study he had done with two other professors, Catalina Toma and Nicole Ellison, about how people lie in online dating profiles. It turned out that nine out of 10 people fibbed when describing themselves to prospective mates. This fact may not be so surprising if we are honest with ourselves about our own dating lives. But Hancock went one step further. He began to develop a computer program that could detect the lies that people were telling online.
People have a terrible track record for picking out lies - we can detect a lie about 54 percent of the time. Hancock's algorithms, on the other hand, were able to establish patterns for how people told lies. One of the telltale signs that the programs look for is fewer words. Liars give less information when they describe events, people, and places. Those who are telling the truth, on the other hand, give more details. For instance, they talk about spatial relations, like how far a hotel was from a coffee shop or how long it took to get to the subway.
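As a rough illustration of the kind of surface cues such a program might count, here is a toy sketch of my own (not Hancock's software; the word list, thresholds and example sentences are invented): it scores a statement for overall word count and for mentions of spatial detail. A real system would feed many such features into a trained classifier rather than rely on a hand-picked list.

```python
# Toy sketch of two surface cues mentioned in the text: liars tend to use
# fewer words, while truth-tellers volunteer more spatial detail.
# The vocabulary below is invented for illustration only.
SPATIAL_TERMS = {
    "near", "far", "above", "below", "behind", "beside", "between",
    "north", "south", "east", "west", "blocks", "miles", "minutes",
}

def deception_cues(statement: str) -> dict:
    """Count crude cues in a statement: total words and spatial terms."""
    words = [w.strip(".,!?").lower() for w in statement.split()]
    return {
        "word_count": len(words),
        "spatial_terms": sum(w in SPATIAL_TERMS for w in words),
    }

detailed = "The hotel was two blocks from the coffee shop, about ten minutes from the subway."
vague = "It was a nice trip. Everything went fine."
print(deception_cues(detailed))  # more words, some spatial detail
print(deception_cues(vague))     # fewer words, no spatial detail
```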
So that's it, it's the end of lying as we know it. With the help of computers and software, lying could become a thing of the past. And that scares the hell out of me.
The idea of technology delivering us from the shackles of vicious liars calls to mind Winston Smith in George Orwell's 1984, especially this specific passage:
A kilometer away the Ministry of Truth, his place of work, towered vast and white over the grimy landscape. The Ministry of Truth - Minitrue in Newspeak - was startlingly different from any other object in sight. It was an enormous pyramidal structure of glittering white concrete, soaring up, terrace after terrace, three hundred meters into the air. From where Winston stood it was just possible to read, picked out on its white face in elegant lettering, the three slogans of the Party:
WAR IS PEACE
FREEDOM IS SLAVERY
IGNORANCE IS STRENGTH.
In the novel, Winston works for the Ministry of Truth, changing and destroying the past to keep the present in line with the current goals of the oppressive party. Orwell explains, "He who controls the past controls the future. He who controls the present controls the past." The Ministry of Truth is an enormous apparatus for telling lies, for manipulating the past to serve the good of the ruling party. Could an algorithm start to act like the towering Ministry of Truth? If so, whom would it serve? Who defines truth? Using technology to control and police the truth in our communications with other people seems frighteningly dystopic. If all humans lie, then doing away with the ability to fib or fabricate might feel like doing away with a little piece of our very humanity.
Orwell's vision was important because he was showing us a future that we should avoid. The future isn't an accident. It's made every day by the actions of people. Because of this, we need to ask ourselves: What is the future we want to live in? What kinds of futures do we want to avoid?
For the past 56 years, since Russia's launch of Sputnik birthed the Space Age, we've imagined a very specific kind of future, one with sleek angles, shiny-clean homes, and good-looking people using amazing new devices. We've seen these images in movies, advertising, and corporate vision videos. As a futurist, I don't like these visions of tomorrow. I find them intellectually dishonest. They lack imagination and fail to take into account that humans are complicated. In fact, the bright and chrome future is as scary to me as Orwell's visions. To design a future that we all want to live in, a future for real people, we need to embrace our humanity and imagine it in this future landscape. To be specific, we need to embrace the fact that we are all liars - in certain ways.
"Not all lies are created equal," my Intel colleague Dr. Tony Salvador, a trained experimental psychologist, told me recently. There are really two kinds of untruths. First, you have the bad lies, the ones we tell to actively deceive people for personal gain. These are the lies that hurt people and can send you to jail. At the other end of the spectrum are the white lies, the little lies we tell to just be nice - "social lubricant," as Tony puts it. "It's like when you bump into someone and say, "Oh, I'm sorry." You're really not sorry, but you say it so you can both just move on. These kinds of lies just keep our days moving forward. They keep the friction down between people so that we can get done what we need to do in a world full of people." You know, the kind of fibs that keep us humans from killing one another.
Between deception and comfort lies a vast expanse of bullshit. Bullshit isn't lying. Princeton professor Harry Frankfurt explains in his book On Bullshit that the bullshitter's intention is neither to report the truth nor to conceal it. It is to conceal his or her wishes. Bullshit can be the gray area between doing harm to someone (taking advantage) and making them feel better (white lies). It comes down to a question of intent. Are you bullshitting to be nice, or are you bullshitting to deceive and gain an advantage?
This Liars' Landscape is helpful because it makes us examine how we could use technology to make people's lives better while at the same time not making them less human. One misconception about technology is that it is somehow separate from us as human beings. But technology is simply a tool, a means to an end. A hammer becomes interesting when you use it to build a house. It's what you can do with the tool that matters.
As we move into a future in which we have more devices and smarter algorithms, how do we design a future that can detect harmful deception while at the same time allowing us to be the lovely lying humans we all love? The first step is to imagine a more human future, with none of that metallic sheen. Perhaps, if we can get the technology right, there will be no deception, a little less bullshit, but just as many white lies.
Are We Naturally Generous or Stingy?
Cooperation eases our way in the world, contributing to extraordinary and mundane human achievements alike. Yet even the nicest do-gooders sometimes act with self-interest. A study published recently in Nature sought to understand the mental processes that tip a person from generous to greedy. “By default are we selfish animals who have to exert willpower in order to cooperate?” asks David Rand, a psychologist at Harvard University who led the study. “Or are we predisposed to cooperate, but when we stop to think about it the greedy calculus of self-interest takes over?”
To peer into this aspect of human nature, Rand and his colleagues gave study participants 40 cents, then asked them to decide how much to keep for themselves and how much to donate to a common pool that would later be doubled and split evenly among those who donated. Those who quickly made up their minds donated more than those who took longer, suggesting that quick decisions based on intuition were more generous than slower, deliberative decisions.
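For readers who want the arithmetic spelled out, here is a minimal sketch of the payoffs implied by the setup as described above; the amounts are in cents, and the study's exact rules and group sizes may well have differed.

```python
# Payoff arithmetic for the public-goods game described above (a simplified sketch).
# Each player starts with 40 cents, keeps some, and donates the rest to a common
# pool that is doubled and, as described here, split evenly among the donors.

def payoffs(donations):
    endowment = 40
    donor_count = sum(1 for d in donations if d > 0)
    pool = 2 * sum(donations)
    share = pool / donor_count if donor_count else 0
    return [endowment - d + (share if d > 0 else 0) for d in donations]

# Four players: three donate everything, one keeps everything.
print(payoffs([40, 40, 40, 0]))   # [80.0, 80.0, 80.0, 40]
# Everyone donates everything: the doubled pool leaves each player with 80 cents.
print(payoffs([40, 40, 40, 40]))  # [80.0, 80.0, 80.0, 80.0]
```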
In a follow-up study, researchers prompted participants either to trust their instincts or to mull them over when deciding. Consistent with the earlier finding, donations were higher for the intuition group.
This result suggests that our first impulse is to cooperate, but it does not necessarily mean we are genetically hardwired to do so, Rand says. Instead it may reflect a habit learned from a lifetime of fruitful cooperative experiences. The work also suggests that donation seekers would do well to leave their facts and statistics behind when courting potential donors—that pitch could backfire by promoting a ruminative, miserly mind-set.
Stupidity: What makes people do dumb things
"EARTH has its boundaries, but human stupidity is limitless," wrote Gustave Flaubert. He was almost unhinged by the fact. Colourful fulminations about his fatuous peers filled his many letters to Louise Colet, the French poet who inspired his novel Madame Bovary. He saw stupidity everywhere, from the gossip of middle-class busybodies to the lectures of academics. Not even Voltaire escaped his critical eye. Consumed by this obsession, he devoted his final years to collecting thousands of examples for a kind of encyclopedia of stupidity. He died before his magnum opus was complete, and some attribute his sudden death, aged 58, to the frustration of researching the book.
Documenting the extent of human stupidity may itself seem a fool's errand, which could explain why studies of human intellect have tended to focus on the high end of the intelligence spectrum. And yet, the sheer breadth of that spectrum raises many intriguing questions. If being smart is such an overwhelming advantage, for instance, why aren't we all uniformly intelligent? Or are there drawbacks to being clever that sometimes give slower thinkers the upper hand? And why are even the smartest people prone to – well, stupidity?
It turns out that our usual measures of intelligence – particularly IQ – have very little to do with the kind of irrational, illogical behaviours that so enraged Flaubert. You really can be highly intelligent, and at the same time very stupid. Understanding the factors that lead clever people to make bad decisions is beginning to shed light on many of society's biggest catastrophes, including the recent economic crisis. More intriguingly, the latest research may suggest ways to evade a condition that can plague us all.
The idea that intelligence and stupidity are simply opposing ends of a single spectrum is a surprisingly modern one. The Renaissance theologian Erasmus painted Folly – or Stultitia in Latin – as a distinct entity in her own right, descended from the god of wealth and the nymph of youth; others saw it as a combination of vanity, stubbornness and imitation. It was only in the middle of the 18th century that stupidity became conflated with mediocre intelligence, says Matthijs van Boxsel, a Dutch historian who has written many books about stupidity. "Around that time, the bourgeoisie rose to power, and reason became a new norm with the Enlightenment," he says. "That put every man in charge of his own fate."
Modern attempts to study variations in human ability have tended to focus on IQ tests that put a single number on someone's mental capacity. They are perhaps best recognised as a measure of abstract reasoning, says psychologist Richard Nisbett at the University of Michigan in Ann Arbor. "If you have an IQ of 120, calculus is easy. If it's 100, you can learn it but you'll have to be motivated to put in a lot of work. If your IQ is 70, you have no chance of grasping calculus." The measure seems to predict academic and professional success.
Various factors will determine where you lie on the IQ scale. Possibly a third of the variation in our intelligence is down to the environment in which we grow up – nutrition and education, for example. Genes, meanwhile, contribute more than 40 per cent of the differences between two people.
These differences may manifest themselves in our brain's wiring. Smarter brains seem to have more efficient networks of connections between neurons. That may determine how well someone is able to use their short-term "working" memory to link disparate ideas and quickly access problem-solving strategies, says Jennie Ferrell, a psychologist at the University of the West of England in Bristol. "Those neural connections are the biological basis for making efficient mental connections."
This variation in intelligence has led some to wonder whether superior brain power comes at a cost – otherwise, why haven't we all evolved to be geniuses? Unfortunately, evidence is in short supply. For instance, some proposed that depression may be more common among more intelligent people, leading to higher suicide rates, but no studies have managed to support the idea. One of the only studies to report a downside to intelligence found that soldiers with higher IQs were more likely to die during the second world war. The effect was slight, however, and other factors might have skewed the data.
Intellectual wasteland
Alternatively, the variation in our intelligence may have arisen from a process called "genetic drift", after human civilisation eased the challenges driving the evolution of our brains. Gerald Crabtree at Stanford University in California is one of the leading proponents of this idea. He points out that our intelligence depends on around 2000 to 5000 constantly mutating genes. In the distant past, people whose mutations had slowed their intellect would not have survived to pass on their genes; but Crabtree suggests that as human societies became more collaborative, slower thinkers were able to piggyback on the success of those with higher intellect. In fact, he says, someone plucked from 1000 BC and placed in modern society would be "among the brightest and most intellectually alive of our colleagues and companions" (Trends in Genetics, vol 29, p 1).
This theory is often called the "idiocracy" hypothesis, after the eponymous film, which imagines a future in which the social safety net has created an intellectual wasteland. Although it has some supporters, the evidence is shaky. We can't easily estimate the intelligence of our distant ancestors, and the average IQ has in fact risen slightly in the immediate past. At the very least, "this disproves the fear that less intelligent people have more children and therefore the national intelligence will fall", says psychologist Alan Baddeley at the University of York, UK.
In any case, such theories on the evolution of intelligence may need a radical rethink in the light of recent developments, which have led many to speculate that there are more dimensions to human thinking than IQ measures. Critics have long pointed out that IQ scores can easily be skewed by factors such as dyslexia, education and culture. "I would probably soundly fail an intelligence test devised by an 18th-century Sioux Indian," says Nisbett. Additionally, people with scores as low as 80 can still speak multiple languages and even, in the case of one British man, engage in complex financial fraud. Conversely, high IQ is no guarantee that a person will act rationally – think of the brilliant physicists who insist that climate change is a hoax.
It was this inability to weigh up evidence and make sound decisions that so infuriated Flaubert. Unlike the French writer, however, many scientists avoid talking about stupidity per se – "the term is unscientific", says Baddeley. However, Flaubert's understanding that profound lapses in logic can plague the brightest minds is now getting attention. "There are intelligent people who are stupid," says Dylan Evans, a psychologist and author who studies emotion and intelligence.
What can explain this apparent paradox? One theory comes from Daniel Kahneman, a cognitive scientist at Princeton University who won the Nobel prize in economics for his work on human behaviour. Economists used to assume that people were inherently rational, but Kahneman and his colleague Amos Tversky discovered otherwise. When we process information, they found, our brain can access two different systems. IQ tests measure only one of these, the deliberative processing that plays a key role in conscious problem-solving. Yet our default position in everyday life is to use our intuition.
To begin with, these intuitive mechanisms gave us an evolutionary advantage, offering cognitive shortcuts that help deal with information overload. They include cognitive biases such as stereotyping, confirmation bias, and resistance to ambiguity – the temptation to accept the first solution to a problem even if it is obviously not the best.
While these evolved biases, called "heuristics", may help our thinking in certain situations, they can derail our judgement if we rely on them uncritically. For this reason, the inability to recognise or resist them is at the root of stupidity. "The brain doesn't have a switch that says 'I'm only going to stereotype what restaurants are like but not people'," Ferrell says. "You have to train those muscles."
Because it has nothing to do with IQ, truly understanding human stupidity requires a separate test that examines our susceptibility to bias. One candidate comes from Keith Stanovich, a cognitive scientist at the University of Toronto in Canada, who is working on a rationality quotient (RQ) to assess our ability to transcend cognitive bias.
Consider the following question, which tests the ambiguity effect: Jack is looking at Anne but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person? Possible answers are "yes", "no", or "cannot be determined". The vast majority of people will say it "cannot be determined", simply because it is the first answer that comes to mind – but careful deduction shows the answer is "yes".
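If the deduction isn't obvious, consider both possibilities for Anne in turn; the short sketch below (my own illustration) simply enumerates them.

```python
# Jack (married) looks at Anne; Anne looks at George (unmarried).
# Anne's status is unknown, so check both cases.
for anne_married in (True, False):
    if anne_married:
        looker, target = "Anne (married)", "George (unmarried)"
    else:
        looker, target = "Jack (married)", "Anne (unmarried)"
    print(f"If Anne is {'married' if anne_married else 'unmarried'}: "
          f"{looker} is looking at {target}.")
# In both cases a married person is looking at an unmarried one, so the answer is "yes".
```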
RQ would also measure risk intelligence, which reflects our ability to judge probabilities accurately. For example, we tend to overestimate our chances of winning the lottery, says Evans, and underestimate the chance of getting divorced. Poor risk intelligence can cause us to choose badly without any notion that we're doing so.
So what determines whether you have naturally high RQ? Stanovich has found that unlike IQ, RQ isn't down to your genes or nurture factors from your childhood. More than anything, it depends on something called metacognition, which is the ability to assess the validity of your own knowledge. People with high RQ have acquired strategies that boost this self-awareness. One simple approach would be to take your intuitive answer to a problem and consider its opposite before coming to the final decision, says Stanovich. This helps you develop keen awareness of what you know and don't know.
But even those with naturally high RQ can be tripped up by circumstances beyond their control. "You individually can have great cognitive abilities, but your environment dictates how you have to act," says Ferrell.
As you have probably experienced, emotional distractions can be the biggest cause of error. Feelings like grief or anxiety clutter up your working memory, leaving fewer resources for assessing the world around you. To cope, you may find yourself falling back on heuristics for an easy shortcut. Ferrell says this may also explain more persistent experiences such as "stereotype threat". That's the feeling of anxiety that minority groups can experience when they know their performance could be taken to confirm an existing prejudice; it has been shown time and again to damage test scores.
Perhaps nothing encourages stupidity more than the practices of certain businesses, as André Spicer and Mats Alvesson have found. Neither was interested in stupidity at the time of their discovery. Spicer, at the Cass Business School in London, and Alvesson, at Lund University in Sweden, had set out to investigate how prestigious organisations manage highly intelligent people. But they soon had to tear up their thesis.
Over and over, the same pattern emerged: certain organisations – notably investment banks, PR agencies and consultancies – would hire highly qualified individuals. But instead of seeing these talents put to use, says Spicer, "we were struck by the fact that precisely the aspects they'd been trained in were immediately switched off", a phenomenon they branded "functional stupidity".
Their findings made sense in the context of bias and rationality. "We didn't initially see Kahneman as the backbone to our work," Spicer says. "But we started to notice interesting connections to the kind of things he observed in the lab." For example, organisational practices regularly shut down the employees' risk intelligence. "There was no direct relationship between what they did and the outcome," says Spicer, so they had no way to judge the consequences of their actions. Corporate pressures also amplified the ambiguity bias. "In complex organisations, ambiguity is rife – and so is the desire to avoid it at all costs," says Spicer.
The consequences may be catastrophic. In a meta-analysis last year, Spicer and Alvesson reported that functional stupidity was a direct contributor to the financial crisis (Journal of Management Studies, vol 49, p 1194). "These people were incredibly smart," Spicer says. "They all knew that there were problems with mortgage-backed securities and structured commodities." But not only was it no one's problem to look at them; the employees faced discipline if they raised their concerns, perhaps because they seemed to be undermining those with greater authority. The result was that potentially brilliant employees left logic at the office door.
The Republic of Stupidity
In light of the economic crash, the findings would seem to confirm some of Flaubert's fears about the power of stupid people in large groups, which he referred to in jest as The Republic of Stupidity. It also confirms some of van Boxsel's observations that stupidity is most dangerous in people with high IQ – since they are often given greater responsibility: "the more intelligent they are, the more disastrous the results of their stupidity".
This may explain why, according to Stanovich, the financial sector has been clamouring for a good rationality test "for years". At the moment the RQ test cannot give a definitive score, like an IQ, because a large number of volunteers need to be compared before a reliable scale can be developed that allows comparison between different groups of people. However, he has found that merely taking this kind of test improves our awareness of common heuristics, and that awareness can help us resist their siren song. In January, he began the process of developing the test, thanks to a three-year grant from the philanthropic John Templeton Foundation.
Whether anyone will finish what Flaubert started is another question. Van Boxsel will be calling it quits after his seventh book on the topic. But the US Library of Congress has, perhaps unwittingly, taken up the baton by deciding to archive every tweet in the world.
For the rest of us, knowledge of our foolish nature could help us escape its grasp. Maybe the Renaissance philosophers, such as Erasmus, fully understood stupidity's capacity to rule us. Below depictions of Folly, or Stultitia, you will see the acknowledgement: "Foolishness reigns in me."
Body Language
WHEN Tom Cruise and Katie Holmes announced their divorce last year, tabloid journalists fell over themselves to point out that they had seen it coming. "Just look at their body language!" the headlines screamed, above shots of Holmes frowning while holding Cruise at arm's length. "Awkward!" And when Barack Obama lost last year's first US presidential debate to Republican nominee Mitt Romney, some commentators blamed it on his "low-energy" body language and tendency to look down and purse his lips, which made him come across as "lethargic and unprepared".
Popular culture is full of such insights. After all, it is fun to speculate on the inner lives of the great and the good. But anyone with a sceptical or logical disposition cannot fail to notice the thumping great elephant in the room – the assumption that we can read a person's thoughts and emotions by watching how they move their body. With so many myths surrounding the subject, it is easy to think we understand the coded messages that others convey, but what does science have to say about body language? Is there anything more in it than entertainment value? If so, which movements and gestures speak volumes and which are red herrings? And, knowing this, can we actually alter our own body language to manipulate how others perceive us?
A good place to start looking for answers is the oft-quoted statistic that 93 per cent of our communication is non-verbal, with only 7 per cent based on what we are actually saying. This figure came from research in the late 1960s by Albert Mehrabian, a social psychologist at the University of California, Los Angeles. He found that when the emotional message conveyed by tone of voice and facial expression differed from the word being spoken (for example, saying the word "brute" in a positive tone and with a smile), people tended to believe the non-verbal cues over the word itself. From these experiments Mehrabian calculated that perhaps only 7 per cent of the emotional message comes from the words we use, with 38 per cent coming from tone and the other 55 per cent from non-verbal cues.
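Spelled out, Mehrabian's weighting is a simple blend of three channels. The toy calculation below is my own illustration, with made-up input values; it shows how a negative word delivered in a warm tone and with a smile comes out as a positive message, bearing in mind the caveat in the next paragraph.

```python
# Mehrabian's weighting for messages about feelings and attitudes, as described above.
# Each channel is scored here from 0 (entirely negative) to 1 (entirely positive).
def emotional_message(verbal: float, vocal: float, facial: float) -> float:
    return 0.07 * verbal + 0.38 * vocal + 0.55 * facial

# The word "brute" (negative) said in a warm tone and with a smile:
print(emotional_message(verbal=0.0, vocal=0.9, facial=0.9))  # about 0.84: reads as positive
```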
Mehrabian has spent much of the past four-and-a-bit decades pointing out that he never meant this formula to be taken as some kind of gospel, and that it only applies to very specific circumstances - when someone is talking about their likes and dislikes. He now says that "unless a communicator is talking about their feelings or attitudes, these equations are not applicable" and that he cringes every time he hears his theory applied to communication in general.
So the oldest stat in the body language book isn't quite what it seems, and the man who came up with the formula would like everyone to please stop going on about it. After all, if we really could understand 93 per cent of what people are saying without recourse to words, we wouldn't need to learn foreign languages and no one would ever get away with a lie.
Clearly, people can lie successfully. And, generally, though it is useful to lie occasionally, we would rather that others could not. Which is why a lot of the interest in body language concerns detecting lies. Legend has it that liars give themselves away with physical "tells", such as looking to the right, fidgeting, holding their own hands or scratching their nose. How much of this stacks up?
The first item is easy to dispatch. A study published last year, the first to scientifically test the "liars look right" assertion, found no evidence to back it up. A team led by psychologist Richard Wiseman from the University of Hertfordshire in Hatfield, UK, observed the eye movements of volunteers telling lies in lab-based experiments. They also studied footage of people at police press conferences for missing persons, where some of the emotional pleas for information came from individuals who turned out to be involved in the disappearance. In neither case did the liars look to the right any more than in other directions (PLoS One, vol 7, p e40259).
As for other tells, a meta-analysis of more than 100 studies found that the only bodily signs found in liars significantly more often than in truth-tellers were dilated pupils and certain kinds of fidgeting – fiddling with objects and scratching, but not rubbing their face or playing with their hair. The best way to spot a liar, the study found, was not to watch a person's body language but to listen to what they were saying. Liars tended to talk with a higher-pitched voice, gave fewer details in their accounts of events, were more negative and tended to repeat words.
Overall, the researchers concluded, subjective measures – or a gut feeling – might be more effective for lie detection than any available scientific measure. The problem with relying on body language is that while liars may be slightly more likely to exhibit a few behaviours, people who are telling the truth do the same things. In fact, the signals you might think of as red flags for lying, like fidgeting and avoiding eye contact, tend to be signs of emotional discomfort in general, and a non-liar is more likely to express them under the pressure of questioning. This is perhaps why, despite having a vested interest in spotting liars, we are generally pretty bad at it. In fact, US psychologist Paul Ekman has found that most people perform no better than would be expected by chance. And the success rate of judges, police, forensic psychiatrists and FBI agents is only marginally higher.
So it might be best not to go around accusing people of lying based on their body language. And there are lots of other examples in which our preconceptions of non-verbal communication are off-beam or even totally misleading. Take crossed arms. Most people believe that when someone folds their arms they are being defensive or trying to fend off another individual or their opinions. This may be true. "But the same arm-cross can mean the opposite if the torso is super-erect, bent back somewhat – then it conveys invulnerability," says David McNeill, who studies gestures at the University of Chicago. Besides, an arm crosser might simply be cold, trying to get comfortable, or just lacking pockets.
McNeill is also not convinced by claims trotted out by public-speaking consultants about the importance of hand gestures. It is often said, for example, that "steepling" your fingers makes you look authoritative and that an open hand signals honesty. He says that these are examples of metaphorical gestures that have the meanings that people in management perceive, but they are not limited to these meanings. In other words, these well-known "rules" of body language are arbitrary. An open hand, for example, might be a metaphor for trustworthiness, but it could just as easily signal holding the weight of something. The gesture is ambiguous without context and cues from spoken language.
So far, our scientific approach has provided little support for those who claim to speak fluent body-ese, but it turns out there are some gestures everyone understands. At the 2008 Olympic and Paralympic Games, athletes from all cultures made the same postures when they won: arms up in a high V, with the chin raised. The same was true for athletes who had been blind from birth, suggesting that the victory pose is innate, not learned by observation. Defeat postures seemed to be universal too. Almost everyone hunches over with slumped shoulders when they lose.
In fact, if you are hunting for signs of victory or defeat, the body may be a better place to look than the face. Hillel Aviezer at Princeton University and colleagues revealed last year that the facial expressions of professional tennis players when they won or lost an important point were so similar that people struggled to tell them apart. However, the body language was easy to read even when the face was blanked out (Science, vol 338, p 1225).
Other recent studies indicate that we can glean important clues about people from the way they move. Men judge a woman's walk and dance as significantly sexier when she is in the most fertile part of her menstrual cycle, suggesting that a woman's body language sends out the message that she is ready to mate, whether or not she – or the men around her – realise it. Meanwhile, women and heterosexual men rate the dances of stronger men more highly than those of weaker men, which might be an adaptation for women to spot good mates and men to assess potential opponents.
Using body language to assess sexual attraction can be risky, though. Karl Grammer at the University of Vienna in Austria found support for the popular notion that women signal interest in a man by flipping their hair, tidying their clothes, nodding and making eye contact. But he also discovered that they make the same number of encouraging signals in the first minute of meeting a man whether they fancy him or not. Such flirting is only a sign of real interest if it keeps going after the first 4 minutes or so. Grammer interprets this as women using body language to keep a man talking until they can work out whether he is worth getting to know.
Even when there is general agreement about how to interpret body language, we can be wrong, as has been revealed in new research on gait. Psychologist John Thoresen at the University of Durham, UK, filmed people walking and then converted the images to point-light displays to highlight the moving limbs while removing distracting information about body shape. He found that almost everyone judged a swaggering walk to signal an adventurous, extroverted, warm and trustworthy person. A slow, loose and relaxed walk, on the other hand, was associated with a calm, unflappable personality. However, when the researchers compared the actual personalities of the walkers to the assumptions other people made about them, they found no correlation (Cognition, vol 124, p 2621).
Arguably, it doesn't really matter what your body language actually reveals about you. What matters is what other people think it is telling them. So can it be faked?
Fake it to make it
Thoresen says that it should certainly be possible to fake a confident walk. "I have no data to back this up," he says, "but I do believe people can be trained to change perceived personality." There are other corporeal tricks that may help in impression management, too. For example, people in job interviews who sit still, hold eye contact, smile and nod along with the conversation are more likely to be offered a job. Those whose gaze wanders or who avoid eye contact, keep their head still and don't change their expression much are more likely to be rejected. If it doesn't come naturally, consciously adopting a confident strut, a smile and nod and some extra eye contact probably won't hurt – unless you overdo it and come across as a bit scary.
Faking calmness and confidence may change the way others perceive us, but psychologist Dana Carney at the University of California, Berkeley, believes it can do far more than that. She says we can use our body language to change ourselves. Carney and her colleagues asked volunteers to hold either a "high power" or "low power" pose for 2 minutes. The former were expansive, including sitting with legs on a desk and hands behind the head and standing with legs apart and hands on hips, while the latter involved hunching and taking up little space. Afterwards, they played a gambling game where the odds of winning were 50:50, and the researchers took saliva samples to test the levels of testosterone and cortisol – the "power" and stress hormones, respectively – in their bodies. Those who had held high-power poses were significantly more likely to gamble than those who held low-power poses (86 per cent compared with 60 per cent). Not only that, willingness to gamble was linked to physiological changes. High-power posers had a 20 per cent increase in testosterone and a 25 per cent decrease in cortisol, while low-power posers showed a 10 per cent decrease in testosterone and a 15 per cent increase in cortisol (Psychological Science, vol 21, p 1463).
"We showed that you can actually change your physiology," says Carney. "This goes beyond just emotion – there is something deeper happening here." The feeling of power is not just psychological: increased testosterone has been linked with increased pain tolerance, so power posing really can make us more powerful. And this is not the only way body language can influence how you feel. Carney points to studies showing that sitting up straight leads to positive emotions, while sitting with hunched shoulders leads to feeling down. There is also plenty of evidence that faking a smile makes you feel happier, while frowning has the opposite effect. In fact, there is evidence that people who have Botox injections that prevent them from frowning feel generally happier.
Despite these interesting results, if science has shown us anything it is that we should always question our preconceptions about body language. Even when people from diverse cultures are in agreement about the meaning of a particular movement or gesture, we may all be wrong. As the evidence accumulates, there could come a time when we can tailor our body language to skillfully manipulate the messages we send out about ourselves. For now, at least our popular conceptions can be modified with a little evidence-based insight. Or as Madonna almost put it: "Don't just stand there, let's get to it, strike a pose. There's something to it."
Swearing
I love swearing. It’s a weekly miracle that my essays don’t include “totally fucked” or “fucked up and bullshit” in every paragraph. If I were reborn as a linguist, I would study swearing and cursing. I watch documentaries about cursing, I play a lot of Cards Against Humanity, and this interview with Melissa Mohr, the author of Holy Shit: A Brief History of Swearing, is my favorite episode of Slate’s just-nerdy-enough podcast Lexicon Valley. If you’ve been in the audience when I give a presentation, you have probably (despite my efforts to the contrary) heard me swear five or six times. I would hate to live in a world without swearing because it would be fucking dull. Unfortunately, my (and most English-speaking people’s) love of swearing comes into direct conflict with inclusionary social politics. I need a new arsenal of swear words that punch up and tear down destructive stereotypes. Every time I swear, I want to be totally confident that I’m offending the right people.
While offending people is not its only function, swearing has a lot to do with it. Swearing is a necessary social sanction that does a lot of good in the world. There will always be people in this world who deserve to be told off. (Like my neighbor, for example.) But in the process of telling each other where to shove it, we also reaffirm and establish who in the world is desirable and who is unwanted. So if I call you dumb, stupid, lame, gay, retarded, or even a girl, I’m not only saying that women, non-cis gendered people, or the differently abled are inherently bad, I’m also invoking all of the power of ableism, homophobia, and patriarchy to make you feel bad. Too many curse words strengthen the kind of social structures that we should be dismantling. I want to quickly and easily compare people to the parts of society that I find gross and unseemly. I want words that compare people to those with ill-gotten wealth or obscene power but, so far, calling someone the President of the United States of America doesn’t have the sticking power it should.
Efforts to consciously and directly alter language rarely work, so producing a new collection of commonly used swear words is going to take more effort than making some up and putting them in a list. I do not want to rely on the “fetch” method of consciously injecting new words into daily conversation. That’s not to say such efforts are hopeless or naive - putting a word to a feeling or a phenomenon is the beginning of all sorts of movements and cultural revolutions - but I get the feeling that swear words just need to feel right. They need to come out of your mouth without a second thought.
The good news is that there are two large sociotechnical trends that work in our favor. The first is economic stagnation. Mohr, in the aforementioned Lexicon Valley interview, notes that the social taboo against swearing has everything to do with keeping your status. The very poor and the very rich (two classes that continue to grow in our present economic situation) have always been comfortable and blatant in their swearing. Swearing carries no risk if you don’t have anything to lose or are so well-heeled that there is no one else in the room you need to impress. Only the upwardly mobile bourgeoisie are afraid of swearing. One could say that the socioeconomic climate is primed for swearing experimentation.
The second trend is the decentralization of media. Podcasts, YouTube videos, blogs, and even Netflix and Hulu exclusive content are all subject to far less regulation than radio or television. The words you cannot say on television are still the same, but there are plenty of other venues to test out new swear words. It’s strange, then, that given all the Internet-inspired new words that have made it into dictionaries over the past decade (e.g. tweet, defriend, uplink), none of them are swears or curses. You might stop me here and say that those press releases are just ways of ginning up press for a dying institution – some shameless link bait by people who don’t really know what that means. I think that’s beside the point entirely. After all, what would be more press-worthy than a word you can’t say in polite company? And yet, the offerings remain scant. I guess I could call you a Scumbag Steve but in the heat of the moment I’m probably just going to call you a motherfucker.
Perhaps that’s just it. Most of the communicative innovation of the past decade has used photos, illustrations, video, and emoticons to express a feeling or an idea. As Jenny Davis wrote a few years back, memes are the mythology of our digitally augmented society. They don’t make arguments; they are the dominant ideologies of our time. I can offend you with an Insanity Wolf meme in ways that my parents probably couldn’t, but it’s going to use the same lexicon that they had. I’m not suggesting that this is a zero-sum game where we either get new words or new memes, but perhaps I’m looking for the wrong thing. Maybe new curse words won’t do as much culture work as I think they will because the fight has moved elsewhere: away from utterances and towards a more heterogeneous system of self-expression.
Be that as it may, there’s no substitute for a new expletive to yell at people who cut you off on the highway. I’m not going to end this with a call for more swear words because that would be missing the point. Rather, I’d like to see some words that are already in widespread use in relatively small communities (I imagine ShitRedditSays has a few) and descriptions of how they came into being. I don’t think we can purposefully recreate moments where new words are born, but we can certainly foster an attentiveness or sensitivity to modes of evocative expression that rely solely on utterance. Perhaps, instead of copying and pasting something you whipped up on memegenerator.net, try to mash some words together. We could really fucking use some new ones.
Telling stories is not just the oldest form of entertainment, it's the highest form of consciousness. The need for narrative is embedded deep in our brains. Increasingly, success in the information age demands that we harness the hidden power of stories. Here's what you need to know to tell a killer tale.
I live in the storytelling capital of the world. I tell stories for a living. You're probably familiar with many of my films, from Rain Man and Batman to Midnight Express to Gorillas in the Mist to this year's The Kids Are All Right.
But in four decades in the movie business, I've come to see that stories are not only for the big screen, Shakespearean plays, and John Grisham novels. I've come to see that they are far more than entertainment. They are the most effective form of human communication, more powerful than any other way of packaging information. And telling purposeful stories is certainly the most efficient means of persuasion in everyday life, the most effective way of translating ideas into action, whether you're green-lighting a $90 million film project, motivating employees to meet an important deadline, or getting your kids through a crisis.
PowerPoint presentations may be powered by state-of-the-art technology. But reams of data rarely engage people or move them to action. Stories, on the other hand, are state-of-the-heart technology - they connect us to others. They provide emotional transportation, moving people to take action on your cause because they can very quickly come to psychologically identify with the characters in a narrative or share an experience - courtesy of the images evoked in the telling.
Equally important, they turn the audience/listeners into viral advocates of the proposition, whether in life or in business, by paying the story—not just the information—forward.
Stories, unlike straight-up information, can change our lives because they directly involve us, bringing us into the inner world of the protagonist. As I tell the students in one of my UCLA graduate courses, Navigating a Narrative World, without stories not only would we not likely have survived as a species, we couldn't understand ourselves. They provoke our memory and give us the framework for much of our understanding. They also reflect the way the brain works. While we think of stories as fluff, accessories to information, something extraneous to real work, they turn out to be the cornerstone of consciousness.
The first rule of telling stories is to give the audience - whether it's one business person or a theater full of moviegoers - an emotional experience. The heart is always the first target in telling purposeful stories. Stories must give listeners an emotional experience if they are to ignite a call to action.
By far, the most effective and efficient way to do that is through the use of metaphor and analogy. More than mere linguistic artifacts, these devices are key components of the way we think, building blocks of the very structure of knowledge. In their swift economy, they evoke images and turn on memory, with all its rich sensory and emotional associations, bringing the listener into the story, cognitively and emotionally, as an active participant—you might say, as coproducer.
"We perceive and remember something based on how it fits with other things. One way the brain sorts things is by metaphors," says psychologist Pamela Rutledge, director of the Los Angeles-based Media Psychology Research Center. "When you're describing things in a story, you are creating visual imagery that engages you in multiple ways." The brain does not distinguish between a lived image and an imagined one. "You're bringing your own stuff to the story, which is reinforced through the emotions associated with the experience," says Rutledge.
The psychic lever that opens the brain to the power of stories is the ability to form mental representations of our experience. It is wired into the brain's prefrontal cortex. Mental representations allow us to simulate events, to enjoy the experiences of others, and to learn from them, without having to endure all experiences ourselves. "Storytelling is an integrative process," Daniel Siegel, clinical professor of psychiatry at UCLA, told me. "It not only weaves together all the details of an experience when it's being encoded but enhances the network of nodes through which all those details can be retrieved and recalled. Research shows that we remember details of things much more effectively when they are embedded in a story. Telling and being moved to action by them is in our DNA."
The brain may be prewired for stories, but you still have to turn it on. Most compelling stories have a sympathetic hero. And they are shaped by three critical elements—a challenge, struggle, and some resolution. "There's a huge cognitive comfort just in knowing you're on a story arc," says Rutledge. "We can tolerate the anxiety of the challenge because we know there will be resolution." As psychologist Jerome Bruner famously said, "Stories are about the vicissitudes of human intention. Trouble is what drives the drama."
Stories, it turns out, are not optional. They are essential. Our need for them reflects the very nature of perceptual experience, and storytelling is embedded in the brain itself.
While we all feel ourselves to be unified creatures, that is not the reality of our experience or our brains. There is no central command post in the brain, says neuroscientist Michael Gazzaniga, professor of psychology at the University of California at Santa Barbara. Rather, there are millions of highly specialized local processors—circuits for vision, for other sensory data, for motor control, for specific emotions, for cognitive representations, just to name a few modules—distributed throughout the brain carrying out the neural processes of experience.
What's more, Washington University neuroscientist Jeffrey Zacks told me, such modules monitor external experience not continuously but in a kind of punctuated way, a process he calls event sampling. "The mind/brain segments ongoing activity into meaningful events," he says. How is it, then, that they function as an integrated whole and we experience ourselves that way?
Because we tell ourselves stories, Gazzaniga says. There is in fact a processor in our left hemisphere that is driven to explain events to make sense out of the scattered facts. The explanations are all rationalizations based on the minuscule portion of mental actions that make it into our consciousness.
Desperate to find order in the chaos and to infer cause and effect, the left hemisphere—in a module Gazzaniga dubs "the interpreter"—tries to fit everything into a coherent story as to why a behavior was carried out. The brain takes information spewed out from other areas of the brain, the body, and the environment, and synthesizes it into a story. If there is not an obvious explanation, we fabricate one.
Gazzaniga knows this from decades of work with so-called split-brain patients, people in whom the connection between right and left hemisphere has been surgically severed. With no transfer of information between hemispheres, such patients can't possibly know why they are, say, raising their left hand after Gazzaniga "sneaks into the right hemisphere" to give a command to do so. Yet, when asked what they thought their left hand was doing, they invent a story to explain why their left hand was moving.
"Consciousness," says Gazzaniga, "does not constitute a single, generalized process." It involves widely distributed processes integrated by the interpreter module." The psychological unity we feel emerges from the specialized system of the interpreter, our built-in storyteller, generating explanations about our perceptions, memories, and actions and the relationships among them. What results is a personal narrative, the story that confers the subjective experience of unity, that solid sense of self.
We literally create ourselves through narrative. Narrative is more than a literary device - it's a brain device. Small wonder that stories can be so powerful.
Further, stories can be a stand-in for life, allowing us to expand our knowledge beyond what we could reasonably squeeze into a lifetime of direct experience. Zacks has found that vividly narrated stories activate the exact same brain areas that process the various components of real-life experience. "When we read a story and really understand it, we create a mental simulation of the events described by the story," says Zacks. His studies, which use brain imaging technology, show that readers borrow what they can from their own knowledge, based on past experience, to mentally reproduce the sights and sounds and tastes and movements described in a narrative.
The ability to construct such mental simulations may be the tool that propelled human evolution. We can take in the stories of others who escaped life-threatening situations without taking on the risk; the safety of the retelling gives us an opportunity to try out solutions. Telling stories may also have enhanced survival by promoting social cohesion among our ancestors.
Which brings me to my final point about telling purposeful stories. Because they are so important, it's wise to prepare your stories in advance. But before you launch into your script, take some time to learn about your audience. What you discover will determine how you tell your story. You want to make sure your audience is with you. You can't get anywhere without them.
Persuasion Varies
On the heels of the decade of the brain and the development of neuroimaging, it is nearly impossible to open a science magazine or walk through a bookstore without encountering images of the human brain. As the prominent neuroscientist Martha Farah remarked, “Brain images are the scientific icon of our age, replacing Bohr’s planetary atom as the symbol of science”.
The rapid rise to prominence of cognitive neuroscience has been accompanied by an equally swift rise in practitioners and snake oil salesmen who make promises that neuroimaging cannot yet deliver. Critics inside and outside the discipline have been swift to condemn sloppy claims that MRI can tell us who we plan to vote for, whether we love our iPhones, and why we believe in God. Yet the constant parade of over-trumpeted results has led to the rise of “the new neuro-skeptics”, who argue that neuroscience is either unable to answer the interesting questions or, worse, that scientists have simply been seduced by the flickering lights of the brain.
The notion that MRI images have attained an undue influence over scientists, granting agencies, and the public gained traction in 2008 when psychologists David McCabe and Alan Castel published a paper showing that brain images could be used to deceive. In a series of experiments, they found that Colorado State University undergraduates rated descriptions of scientific studies higher in scientific reasoning if they were accompanied by a 3-D image of the brain, rather than a mere bar graph or a topographic map of brain activity on the scalp (presumably from electroencephalography).
Critics of cognitive neuroscience have largely assumed that the pretty images which persuaded McCabe and Castel’s naïve participants have also seduced academics, journalists, and policy makers. Researchers in fields ranging from psychology to English literature were lured, so the argument goes, into using an extravagant research tool that has not advanced their disciplines in meaningful ways. These claims have hardly been limited to snide remarks over drinks at academic conferences: the McCabe and Castel paper has been cited several hundred times in scientific papers and been used to discount the scientific value of neuroimaging.
Some neuroscientists have started to push back. A recent critical review suggests that McCabe and Castel may have gotten it wrong—that brain images possess little-to-no special persuasive power. Most systematically, a series of 10 experiments—with over 2000 subjects—designed to replicate the original experiments found that brain images “exerted little to no influence”. Likewise, a pattern of failed replications in other labs suggests that the effect of brain images in persuasion may actually be trivial. A popular blogger, the Neuroskeptic, has argued that critics of brain imaging may be themselves the victims of another kind of seductive allure, “the allure of that which confirms what we already thought we knew.”
So what are we to believe: Are brain images persuasive or not? Although the idea seems intuitively appealing, the data is decidedly mixed. It seems like a puzzle that will continue to spark debate and research. However, we suspect it’s a puzzle that was actually solved in 1980—decades before cognitive neuroscientists began using MRI.
In the 1960s and ’70s, psychologists studying persuasion confronted a morass of conflicting findings. Some studies would find, for example, that peripheral aspects of a message such as the attractiveness of a speaker were more persuasive than the actual arguments. Use a sexy model, and it doesn’t matter what he or she says – an adage that explains 98% of beer advertising. But then other studies would find the opposite. Sometimes it was the central aspects of a message – the power of its arguments – that mattered, while people remained unfazed by attractiveness or other tangential cues. There was, in the words of one psychologist, “reigning confusion in the area”.
For many years, debate raged about which findings were “true” and which were “false”. But poring over the literature, two young psychologists, Rich Petty and John Cacioppo, realized that far more informative questions were instead, “when does each pattern hold, and why?” Asking the right questions instead of seeking the “correct” answer ultimately led Petty and Cacioppo to develop a theory—the Elaboration Likelihood Model—that resolved apparent contradictions in the literature and fundamentally changed the science of persuasion.
The Elaboration Likelihood Model (ELM) posits that when people are motivated and able to carefully evaluate messages they tend to be persuaded by central aspects of a message (e.g., the strength of its arguments). In contrast, when they are not motivated or able to elaborate, they tend to be persuaded by more peripheral aspects of a message (e.g., the attractiveness or professional credentials of a speaker). Groundbreaking work by psychologist Shelly Chaiken and others emerged around the same time making similar claims. These theories were a watershed moment in the field, and have been proven to be hugely powerful and influential frameworks for understanding persuasion across a variety of contexts and fields. These papers have been cited several thousand times over the past few decades.
It is, for this reason, remarkable that the recent articles on the persuasive power of brain images do not draw upon these highly influential models of persuasion. If they did, the question about whether or not brain images are persuasive would likely be discarded in favor of more informative questions. For instance, the persuasive value of a brain image very likely depends on whether or not the image is central or peripheral to the message, and whether or not the audience is motivated or able to elaborate on the message. These models would predict that scientists and other critical consumers would be the least likely to find brain images persuasive, unless they had a direct bearing on the research question at hand (e.g., when the images are being used to make an argument for spatial associations or dissociations). There are exceptions, of course. But even many of these exceptions have been well mapped out by persuasion researchers.
From a psychological perspective, we have much to gain from using these well-developed theories to understand responses to particular phenomena, like brain images. More generally, we suggest that debates of this nature—does X influence Y?—are not as useful as theoretical work designed to understand how or why X influences Y. “Do brain images seduce?” is not the right question. “When and why might they seduce?” is. Armed with a theory about underlying psychological processes, one can generate nuanced hypotheses about when specific effects are likely to occur. People are more influenced by peripheral cues when they don’t or can’t think carefully about persuasive messages. For that reason it’s probably a good idea to use sexy models in ads that will air late at night or when people are already drunk. And your sexy picture of a brain is more likely to persuade an audience of non-experts or of hung-over experts on the last morning of the conference.
Rather than documenting the veracity or effect size of particular effects, this type of theoretical work is our central task as scientists of the mind and brain. If we understand how the mind works, then we can predict how it will behave in particular circumstances. And if the history of persuasion research is any guide, asking the right questions is likely to bear more fruit than trying to prove that brain images are (or are not) seductive.
Attitude To Nature
WHEN COLIN WATSON grew up in a Yorkshire mining village just after the second world war, raiding birds’ nests for eggs was regarded as a virtuous hobby that kept boys out of trouble and did no harm. By the time Mr Watson died in 2006—after falling from a larch tree, reaching out for a sparrowhawk’s nest—the world had changed. The Royal Society for the Protection of Birds (RSPB) had confiscated his collection of 2,000 rare birds’ eggs and he had been convicted six times and fined thousands of pounds.
The shift in humanity’s approach to the natural world is in part the result of a long, slow evolution in moral attitudes that started long before Mr Watson’s boyhood. Its origins lie in the three great intellectual movements of recent times.
The Enlightenment changed man's attitude to the rights of others. Once upon a time people were not expected to take the well-being of anybody beyond their family or tribe into consideration. Then the scope of moral responsibility widened to include compatriots and, later on, foreigners. More recently the circle expanded further to include other creatures, but only up to a point: few people think that animals are due the same consideration as human beings, though few now reckon they are due none at all. Compassion does not always sit comfortably with conservation, but a broad concern for the welfare of other species underlies environmentalism.
In the 19th century the industrial revolution spawned the Romantic movement, which viewed civilisation as barbaric and nature as the source of all beauty: just as man started to destroy his surroundings, so he began to treasure them. Today’s environmental movement owes much to writers such as Henry Thoreau, who contrasted the shallowness of contemporary society with the spiritual depth he found living in a cabin in the woods.
Lastly, the theory of evolution undermined the Biblical notion of man as separate from, and appointed by God to have dominion over, the rest of creation. Discovering that you are an ape makes it harder to kill primates.
In the 20th century the spread of industrial farming fuelled environmental concerns. "Silent Spring", a book by Rachel Carson published in 1962 about the impact of DDT, a widely used pesticide, on bird populations, helped foster a sense that society had got things upside down. Civilisation was uncivilised and economic growth was destroying, not creating, the things in life that were of real value.
A new sort of luxury
In the 20th century it was certainly true that economic growth was destroying nature at an unprecedented rate. But the prosperity that the growth created also gave people more freedom to think about things beyond their material welfare. Those well supplied with the necessities of life can use their resources on luxuries, be they handbags or bird-watching.
Prosperity also gave people more leisure, and enjoying nature is one of humanity’s most popular pastimes. Some 71m Americans say they watch, feed or photograph wildlife in their spare time, more than play computer games, and 34m are hunters or anglers who also, in their own way, enjoy wildlife.
George MacKerron and Susana Mourato from University College London and the London School of Economics recently looked at the relationship between happiness and nature. They found that people are happier in all outdoor environments (except in fog or rain) than they are indoors. What makes them happiest is taking exercise or bird-watching by the sea or on a mountain with someone they like. Those seeking to cheer themselves up should avoid bare inland areas, suburbia and children.
The second reason why humanity has started paying more attention to nature has nothing to do with fun or morality. It is that as people have messed up bits of the environment, they have come to understand the complexity of ecosystems as well as their importance for human welfare.
Two of the sharpest illustrations of this come from China’s Great Leap Forward. In 1958 the Chinese government announced that sparrows were to be targeted as part of the “Four Pests” campaign because they ate grain, offering rewards for killing them. People obediently tore down the birds’ nests, caught them in nets and banged saucepans to stop them landing anywhere. Sparrow numbers collapsed. But the birds, it turned out, ate insects that ate crops, and their slaughter thus contributed to the great famine of 1960 that killed 20m people.
At the time China was also stepping up its timber production, increasing the harvest from 20m cubic metres a year in the 1950s to 63m cubic metres in the 1990s. The area covered by forest shrank by more than a third over the period. The resulting soil erosion gummed up the Yangzi River. In 1998 it flooded, killing 3,600 people and doing around $30 billion-worth of damage.
The story of Newfoundland's cod fishery offers a similar tale of self-defeating destructiveness, this time from the capitalist world. Around 1600 English fishermen reported that the cod off Newfoundland were "so thick by the shore that we hardly have been able to row a boat through them". Factory fishing started in the 1950s, and the catch peaked in 1968 at 810,000 tonnes. By 1992, when cod biomass was reckoned to have fallen to 1% of its level before factory fishing started, the government declared a moratorium, but the cod fishery never recovered.
The reasons for the decline in populations of pollinators such as bees are less clear. According to a United Nations report, the number of honey-producing bee colonies in America more than halved between 1950 and 2007; European populations have also dropped. Pesticides, habitat loss or the spread of disease through globalisation may be to blame—nobody is sure. Whatever the explanation, the costs are potentially huge. Wild and domesticated bees as well as other insects such as hoverflies are especially important in the production of fruit, vegetables and oilseeds. According to an estimate in 2007, the global value of pollinators to farmers is €153 billion.
The potential of biodiversity for the pharmaceuticals industry is not easily quantified but hugely important. Around half of new drugs are derived from natural products. That should not be surprising: as Thomas Lovejoy, who holds the biodiversity chair at the Heinz Centre in Washington, points out, the genome of every living creature is a unique solution to a unique set of problems. So it seems likely that out there in the rainforest genomes exist that would be useful to humanity, if only humanity knew about them before it wiped them out.
The gastric brooding frog, for instance, appeared to scientists to hold great promise. This strange creature, endemic to Australia, gestated its offspring in its stomach. That suggested it could turn off production of stomach acids, which would be useful for people with stomach ulcers or recovering from stomach surgery. Research on the frog started in the 1980s, but the only two species of gastric brooding frog went extinct shortly afterwards. A scientist at the University of New South Wales is currently trying to resurrect the frog from surviving DNA.
But the services that other species perform for mankind do not stop there. Just as scientists are discovering that the human body is a huge colony of different species, with a large variety of bacteria inside every one of them, so they are finding out that the ecosystem of the soil - bacteria, fungi, protozoa, nematodes, microarthropods - is even more extraordinarily diverse. In a gram of soil there may be as many as a million species of bacteria. Their interactions with the food we eat and the air we breathe are complex and crucial to the production and maintenance of life. The combination of their importance and our ignorance suggests that humans would be wise to show humility in their dealings with other species, even when they are invisible to the naked eye.
All these factors have led to a big shift in attitudes towards nature. One of its manifestations has been a boom in green NGOs. Many trace their origins a long way back: Britain’s RSPB, for example, was founded in 1889 to campaign against women using exotic feathers in their hats, and the Sierra Club was established in 1892 to support Yosemite National Park, founded two years earlier. But the 1960s were a particularly fertile period. The World Wildlife Fund (now the Worldwide Fund for Nature) was set up in 1961, the Environmental Defence Fund in 1967, Friends of the Earth in 1969, and Greenpeace came together in the late 1960s. This was also the period when membership of some of the older organisations took off.
The NGOs have helped improve other species' prospects in a couple of ways. Members' contributions finance programmes, for instance to buy land, restore degraded habitat and protect species. In America and Britain, many big conservation efforts have been backed by NGOs or philanthropists. The NGOs' lobbying efforts also make an impact. As membership of conventional parties has shrunk, theirs has boomed. Whether there is a causal connection—and if there is, which way the causality runs—is moot, but there is no doubt that the influence of green campaigners over mainstream politics has grown. In part, it is manifested through pressure from the NGOs on the big parties, but in some countries, such as Germany, Belgium and Brazil, it has made a difference to mainstream politics. By way of laws, regulation and subsidy, human behaviour towards other species is changing.
Gee Whiz Studies
Behavioral genetics has failed to produce robust evidence linking complex traits and disorders to specific genes.
Last spring, John Horgan kicked up a kerfuffle by proposing that research on race and intelligence, given its potential for exacerbating discrimination, should be banned. Now Nature has expanded this debate with Taboo Genetics. The article looks at four controversial areas of behavioral genetics - intelligence, race, violence and sexuality - to find out why each field has been a flashpoint, and whether there are sound scientific reasons for pursuing such studies.
The essay provides a solid overview, including input from both defenders of behavioral genetics and critics. The author, Erika Check Hayden, quotes me saying that research on race and intelligence too often bolsters racist ideas about the inferiority of certain groups, which plays into racist policies.
I only wish that Hayden had repeated my broader complaint against behavioral genetics, which attempts to explain human behavior in genetic terms. The field, which I’ve been following since the late 1980s, has a horrendous track record. My concerns about the potential for abuse of behavioral genetics are directly related to its history of widely publicized, erroneous claims.
I like to call behavioral genetics “gene whiz science,” because “advances” so often conform to the same pattern. Researchers, or gene-whizzers, announce: There’s a gene that makes you gay! That makes you super-smart! That makes you believe in God! That makes you vote for Barney Frank! The media and the public collectively exclaim, “Gee whiz!”
Follow-up studies that fail to corroborate the initial claim receive little or no attention, leaving the public with the mistaken impression that the initial report was accurate—and, more broadly, that genes determine who we are.
Over the past 25 years or so, gene-whizzers have discovered “genes for” high IQ, gambling, attention-deficit disorder, obsessive-compulsive disorder, bipolar disorder, schizophrenia, autism, dyslexia, alcoholism, heroin addiction, extroversion, introversion, anxiety, anorexia nervosa, seasonal affective disorder, violent aggression—and so on. So far, not one of these claims has been consistently confirmed by follow-up studies.
These failures should not be surprising, because all these complex traits and disorders are almost certainly caused by many different genes interacting with many different environmental factors. Moreover, the methodology of behavioral geneticists is highly susceptible to false positives. Researchers select a group of people who share a trait and then start searching for a gene that occurs not universally and exclusively but simply more often in this group than in a control group. If you look at enough genes, you will almost inevitably find one that meets these criteria simply through chance. Those who insist that these random correlations are significant have succumbed to the Texas Sharpshooter Fallacy.
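To see the sharpshooter problem in action, here is a minimal simulation sketch in Python; the numbers (500 candidate genes, 100 people per group, a gap rare enough to look "significant") are arbitrary choices of mine, not figures from any real study.

# Minimal sketch of the multiple-comparisons trap described above.
# Every "gene" is, by construction, unrelated to the trait, so any hit is pure chance.
import random

random.seed(1)
GENES, GROUP_SIZE = 500, 100

def chance_hits() -> int:
    hits = 0
    for _ in range(GENES):
        # The variant is equally common (30%) in both groups, so any gap is noise.
        cases = sum(random.random() < 0.3 for _ in range(GROUP_SIZE))
        controls = sum(random.random() < 0.3 for _ in range(GROUP_SIZE))
        # Crude stand-in for a significance test: flag a gap of 13+ carriers,
        # which arises only about 5% of the time when nothing is going on.
        if abs(cases - controls) >= 13:
            hits += 1
    return hits

print(chance_hits(), "of 500 unrelated genes look 'linked' to the trait by chance alone")

Run it and a couple of dozen of the irrelevant genes typically clear the bar, which is exactly the kind of "discovery" that then gets announced.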
To get a sense of just how shoddy behavioral genetics is, check out my posts on the "liberal gene," "gay gene" and "God gene" (the latter two "discovered" by Dean Hamer, whose record as a gene-whizzer is especially abysmal); and on the MAOA-L gene, also known as the "warrior gene." Also see this post, where I challenge defenders of behavioral genetics to cite a single example of a solid, replicated finding.
Ever since I first hammered behavioral genetics in my 1993 Scientific American article “Eugenics Revisited,” critics have faulted me for treating the field so harshly. But over the last 20 years, the field has performed even more poorly than I expected. At this point, I don’t know why anyone takes gene-whiz science seriously.
In her Nature article, Hayden polls readers on whether scientists should "refrain from studying" the genetics of intelligence, race, violence and sexuality. Overwhelmingly, readers say no. Fine. But we should all treat "gene whiz" claims—on any topic, not just ones that might be "taboo"—with the skepticism they deserve.
Why Do We Kiss?
Birds don’t do it, bees don’t do it – and you’d be hard pushed to educate a flea to do it. Yet in over 90 per cent of human cultures, almost uniquely in the animal kingdom, coupling involves kissing. Why?
Scientists from the University of Oxford believe they might have found part of the answer: it turns out a good snog could be a method for road testing the suitability of a future mate, and then keeping them afterwards.
Until now there have been several theories about the purpose of kissing.
The simplest is that it is a prelude to sex — a method of heightening arousal. This idea is partly backed by previous studies showing that men are extremely interested in kissing prior to sex and extremely uninterested afterwards. For women, however, it is the reverse.
Another theory is that it is a symbol of attachment and commitment – or, as the authors of today’s paper somewhat less romantically put it, “[Kissing among couples] demonstrates a willingness to expose themselves to potential health hazards, such as influenza, herpes simplex virus or meningococcal meningitis.”
Yet a third idea is the “sniff it and see” theory. This is the contention that the act of kissing provides a method for assessing someone’s genetic fitness – the proximity required for successful smooching enabling people also to smell their potential partner and judge if they are diseased or not.
In order to decide between the three, the researchers commissioned an internet survey of 900 people, and published the results in the journal Archives of Sexual Behaviour.
“Kissing in human sexual relationships is incredibly prevalent in various forms across just about every society and culture,” said Rafael Wlodarski, a PhD student, of his motivation for the study. “We are still not exactly sure why it is so widespread or what purpose it serves.”
What he found was that people considered kissing to be far more important with long-term partners than short-term partners, indicating it could be a method of keeping relationships going. There was also evidence that early in a relationship people changed their views of partners after kissing — indicating that it could be used to assess mates. There was little evidence to support the idea that it was used as a method of arousal, however. Only in short-term relationships did people rate it as more important before sex.
Car Salesmen
How does a car salesman get you behind the wheel? By being a keen observer of human behavior—and not letting you say "no."
The auto salesperson holds a special place in American society. According to a Gallup poll, 95 percent of Americans believe car salesmen have low ethical standards. Twice as many people trust lawyers as car salesmen.
Robert V. Levine, a California State University at Fresno psychologist, spent weeks as a car salesman while writing his book The Power of Persuasion.
Here's what he's gleaned:
The Low-Ball
Salesmen often lure customers by quoting an impossibly low price over the phone. When the customer arrives, the "low-baller" excuses himself—to the bathroom, a phone call or a family emergency—leaving a second salesperson to explain that the first had misquoted the price or was having personal problems.
Common Ties
Successful salesmen try to latch onto anything they have in common with the buyer, such as a link—real or not—to the buyer's job or college. "There's also a lot of matching," says Levine. The manager "will bring out the Hispanic salesman for the Hispanic couple, or the female salesperson for a woman."
Sensing the Vibe
If attempts at a personal connection have failed, smart salespeople bump the customer to a colleague. If there's a sale, they'll split the commission.
Never Let Them Say No
A major tenet of auto sales: never ask a customer a question that can be answered with a "no." It stops the sales pitch in its tracks. Instead of asking if a customer is interested in a particular model, a savvy salesman might ask, "Do you prefer the economy of the four-cylinder or the power of the six-cylinder?"
Stall and Then Stall Some More
A good salesman will eat up as much time as possible, rattling off safety features and crisscrossing the car lot. As the minutes tick by, many buyers feel guilty about walking away: not only would they have wasted the salesperson's time, they would also have wasted their own.
Obedience School
If a salesperson feels a customer is slipping away, he may abruptly walk to another car or head back into the office without telling the customer where he's going. The maneuver, known in car-lot lingo as the "turn and walk," lets the salesperson gauge whether he's in control of the situation. A customer usually follows, says Levine.
The Key Swap
Giving the customer the keys mentally prepares the buyer for the sale. A dealership may even "lend" the car for a weekend. Buyers rarely give the car back. Alternatively, a dealer may try to get you to hand him your keys, hoping you'll feel you've already traded in the old car.
Point of No Return
If a customer agrees to a test drive, salesmen know they've probably sold the car. By then the customers have spent so long on the car lot that they tell themselves that if they don't go through with the purchase, they'll have to go through the process all over again.
Pressure Cooker
After the test-drive, the salesperson will likely turn on the pressure. "Would you like to have the stereo installed today, or come back Thursday?" he might ask. "Put the car in the sold line, Mike," he might call out. At this point, few customers object.
Paying It Forward - The Research
Every few weeks, another heart-warming tale of regular folks deciding to "pay it forward" makes the news. People have tracked the longest pay-it-forward chains at the tollbooths of the Golden Gate Bridge in San Francisco – where one person pays the toll for the next person in line, that person pays for the next, and so on. And just last month Starbucks encouraged their customers to treat their fellow customers to a coffee: "Pay it forward, and Starbucks will pay you back," stated CEO Howard Schultz. There's good reason to think that these warm-fuzzy chains might be common; after all, giving to others leads to happiness.
But as anyone who’s ever waited in line – whether for coffee or to pay a toll – knows, other people don’t always treat us with generosity. Sometimes, they treat us with greed – cutting us off as we maneuver into the toll lane or cutting in front of us to order their caramel macchiato. What happens when we are the victims of cruelty instead of kindness? Unfortunately, our research shows that we are more likely to pay greed forward than generosity.
Imagine being in the following situation. I tell you that I gave someone $6 and told them they could give as much or as little of it to you as they wanted (and keep the rest for themselves). I hand you an envelope that contains the amount they gave you. You eagerly open the envelope, shaking it to reveal your bounty, only to find that this previous person left you nothing. Zero point zero zero dollars. Take a moment to think how you’d feel – and what words come to mind to describe the guy who stiffed you.
Now imagine I then gave you another $6, and asked you to give as much as you wanted to a new person and keep the rest. It's like being cut in line: having been cut, would you turn the other cheek or make sure to cut somebody else? Would you pay the greed forward?
Before I tell you how much people paid forward in this situation, consider two more possibilities. What if you’d opened the envelope and the previous person had been amazingly generous, giving you all $6? Or what if they’d treated you fairly, giving you $3 and keeping $3 for themselves? Again, given a new $6 to split it with a new person, would you pay forward that generosity? Would you pay forward that fairness?
My colleagues Kurt Gray at the University of Maryland, Adrian Ward at the University of Colorado, and I placed hundreds of people in one of the three situations described above – receiving greed, receiving fairness, or receiving generosity – and measured how much of a new $6 they were willing to give to another person. First, some good news: people who had been treated fairly were very likely to pay forward fairness: if someone splits $6 evenly with me, I'll split $6 evenly with the next person. Next, some disappointing news: people who had received generosity – who'd gotten the full $6 from the previous person – were willing to pay forward only $3. In other words, receiving generosity ($6) did not make people pay forward any more cash than receiving fairness ($3). In both cases, people were only willing to pay forward half. Now the truly bad news: people who had received greed? They were very likely to pay that greed forward, giving the next person just a little over $1, on average.
To social scientists, what's most interesting about paying it forward ("generalized reciprocity") is that there doesn't appear to be any good reason to pay anything forward. It certainly makes sense to pay people back – if someone gave me $0 and then I got the chance to split $6 with that same person, giving him $0 in return might teach him a lesson to be kinder to me in the future. But visiting the sins of a previous person on an unsuspecting new person – as people do in our research – seems less sensible, and less fair. Our results reveal that people pay greed forward as a means of dealing with the negative emotions that being treated badly engenders: if I can't pay you back for being a jerk, my only option for feeling better is to be a jerk to someone else.
Gary Becker and Cost and Reward
From crime to tuition fees, countless aspects of our lives are influenced by the late economist Gary Becker
Why did Chris Huhne need to go to jail for dodging his penalty points for speeding? After all, as he constantly points out, hundreds of thousands of people have done this without giving it a second thought. Yet Mr Huhne would find his sentence less baffling if he reflected upon what happened the day that Gary Becker was late.
And in the days after the death of the great professor, I think that’s worth doing.
A little more than 40 years ago, Becker was driving to the oral exam of a student when he realised that he wasn’t going to get there in time. Not only that, but he would have to park the car and there were parking restrictions all around the building in which the exam was taking place. Finding a car park would take ages and then he would have to walk.
So here is what he did. He parked in a forbidden area, dashed to the meeting, made it on time, and on the way won the Nobel Prize for Economics.
The decision about where to park had, in the end, been relatively simple. The fines for parking outside the building were modest and, more importantly, they were unreliable. In other words, he might park in the restricted area and not get a ticket. Against this was the certain cost of the car park and the professional cost of being late for an important engagement. It seemed to him that the benefit of the parking violation over legal parking was clear.
As he left the car behind, he began to reflect on what he had done. And to wonder whether others weren’t doing it too. The standard theory of criminal behaviour at the time was that criminals were mentally ill or social victims. Becker, however, realised that the decision he had made to break the law had been entirely rational. Those who break the law balance the costs and the benefits of doing it.
From this insight came Becker’s influential analysis of crime, one of the ways he transformed economics and one of the reasons he became a Nobel laureate. When his death was announced last week, Gary Becker was hailed as one of the most significant social scientists of his generation, at least the equal of giants such as Milton Friedman, with whom he worked at the University of Chicago. When awarding him the Presidential Medal of Freedom, George W Bush described Becker as “without question one of the most influential economists of the last hundred years”.
Becker's contribution was to use the traditional tools of economics — the analysis of supply and demand, the working of incentives, the weighing of costs and benefits — and, for the first time, apply them to social policy.
Take, for instance, Chris Huhne and his penalty points. The former cabinet minister thinks that he was sent to jail despite the fact that so many others commit the same crime and get away with it. In fact, a Becker-style analysis suggests that he was sent to jail precisely because so many others commit the same crime and get away with it.
If a crime is very hard to detect, the incentive to commit it is greater. The benefit of the crime outweighs the cost. As a result, you have to make the penalty greater so that, taken together, the punishment and the risk of punishment make transgression too costly to be worthwhile.
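The underlying arithmetic is simple expected value. As a rough sketch (the symbols are generic shorthand, not Becker's own notation), the deterrence condition looks like this:

% Generic expected-value sketch of the deterrence argument above
% (symbols are illustrative shorthand, not Becker's notation).
\[
  \underbrace{p \times F}_{\text{chance of being caught}\,\times\,\text{penalty}}
  \;>\;
  \underbrace{B}_{\text{benefit of the offence}}
  \quad\Longrightarrow\quad
  F > \frac{B}{p},
\]
% so the smaller the detection probability p, the larger the penalty F must be to deter.

On this reading, a crime that is hardly ever detected has to carry a penalty out of all proportion to the individual offence, which is precisely the logic that caught up with Mr Huhne.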
Becker’s work was very influential on sentencing policy and prevention of crime. Yet more influential still was his impact on the field of economics. Instead of being a study merely of how money moves around, it became a study of how people behave.
One of his most famous pieces of work concerned racial discrimination in the American South. He showed that you could measure the preference for discrimination of employers by looking at how far they were willing to hire less productive workers merely because they were white.
He also developed and popularised the idea of human capital, now a common term but very controversial when Becker first used it. The professor argued that education could be seen as an economic decision in which the long-term benefit of a better job could offset the short-term cost.
It is impossible now to debate student tuition fees, for instance, without reference to Becker’s ideas. Given that students increase their earning power as a result of a university education, but also that there is a social benefit, who should pay for it? And would tuition fees put off poorer students?
Becker would not have been surprised (as others have been) that the people most willing to pay tuition fees have proved to be the least well-off, and those least likely to pay have been the middle class. The cost is the same for both groups but the least well-off gain a greater benefit because they have less to fall back on without higher education.
During his life Becker was often accused of treating human beings as factors of production and purely self-interested. Yet this is a complete misunderstanding. The real significance of Becker’s work is the opposite.
Becker won the Nobel prize because he demonstrated the way in which economics was about more than money. Understanding people’s incentives, and structuring social institutions in response to them, does not mean that people’s incentives are purely selfish.
People might have a preference for altruism, for example, or for community living. A market-based system takes these preferences into account. It’s quite wrong to equate markets with narrow economic selfishness.
This is relevant to the current debate on free schools or the NHS. Allowing competition and choice is not preferring self-interest over public spirit. It is simply structuring the system in line with people’s preferences, and these preferences may be charitable.
Becker's work also shows why non-market systems struggle. People calculate the trade-off between costs and benefits. If the system gets in the way of their preferences, they work around it. That's why tourists used to swap their jeans for hard currency behind the Iron Curtain. And then the punishments had to become more severe to deter such defiance of the law.
The world has lost one of its great thinkers. Fortunately we still have the power of his thought.
Nudge 1
DID you hear the one about the flies in the toilet? They took off, flew round the world, and started a revolution.
It was 1999, and the authorities at Schiphol Airport in Amsterdam were looking to cut costs. One of the most expensive jobs was keeping the floor of the men's toilet clean. The obvious solution would have been to post signs politely reminding men not to pee on the floor. But economist Aad Kieboom had an idea: etch a picture of a fly into each urinal. When they tried it, the cleaning bill reportedly fell 80 per cent.
Amsterdam's urinal flies have since become the most celebrated example of a "nudge", or strategy for changing human behaviour on the basis of a scientific understanding of what real people are like – in this case, the fact that men pee straighter if they have something to aim at. The flies are now, metaphorically, all around us.
Governments across the world are increasingly employing nudges to encourage citizens to lead healthier, more responsible lives. Chances are you have been nudged, although probably without realising it. So does nudging work? And should we accept it?
To understand the nudge revolution you have to go back to the 1980s, to the heyday of a branch of economics known as the Chicago School, after the University of Chicago economics department where it started. Its fundamental principle was "rational choice theory": when people make choices, they exercise near-perfect rationality. They logically weigh up incentives such as prices, taxes and penalties in order to maximise their own economic interests.
Rational choice theory was hugely influential, picking up Nobel prizes and providing the intellectual foundations for neoliberalism. But there was a problem: it was deeply flawed.
Imagine you are given £100 and told that you can keep it, as long as you give some of it to a stranger. The stranger knows the deal, and can reject your offer – in which case you both get nothing. Rational choice theory predicts that strangers will accept whatever you offer: even a small gain is better than none. In reality, however, people make surprisingly large offers, and strangers often reject ones that do not appear fair.
Why is this? In a nutshell, because real humans are not coldly rational. Although we are motivated by money, we are also motivated by other things, such as social norms and the concept of fairness. We don't like to appear greedy, even to strangers, and we would rather punish a derisory offer than accept it.
Insights like this led to a new way of thinking called behavioural economics. This "science of choice" documented the many ways real people deviate – often wildly – from rationality.
One of its most important insights is the idea that we have two systems of thought: System 1 is fast, automatic and emotional. System 2 is slow, effortful and logical. The coexistence of these two systems is the key concept of dual process theory, popularised by Daniel Kahneman of Princeton University, who won the Nobel prize in economics in 2002 for his work on judgement and decision-making.
The fast-thinking system has been likened to an inner Homer Simpson; the slow, methodical system, to an inner Mr Spock. System 1 doesn't stop to think: it just does. It reacts on the fly and jumps to conclusions. System 2 is the opposite. It is a thinker, not a doer. It is what we use to solve complex tasks that require attention and reasoning.
When it comes to decision-making, system 2 generally produces better outcomes. But attention, concentration and reasoning are finite resources. So most everyday mental tasks are left to system 1, leaving us wide open to errors.
Answer this question as quickly as you can. Fish and chips cost £2.90. A fish costs £2 more than the chips. How much do chips cost? System 1 instantly shouts out an answer which feels right: 90p. It takes deliberation to arrive at the correct answer, which is 45p.
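For anyone whose System 2 wants to check the sum, writing c for the price of the chips gives:

% Worked check of the fish-and-chips puzzle above.
\[
  c + (c + 2) = 2.90
  \;\Longrightarrow\; 2c = 0.90
  \;\Longrightarrow\; c = 0.45,
\]
% so the chips cost 45p and the fish £2.45, exactly £2 more than the chips.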
Numerous other biases and flaws are also at play. We are swayed by social pressures and will often follow the herd instead of making decisions to suit ourselves. We procrastinate and tend to choose the path of least resistance. We value short-term pleasure more than long-term success. We are "loss averse", meaning the pain of losing something is greater than the pleasure of gaining it. We favour the status quo even if it is not in our best interests, and are easily influenced by irrelevant information.
This ragbag of flawed thinking is responsible for all sorts of poor choices in life, such as giving in to temptation, failing to save for retirement, sending angry emails and making ill-advised purchases. It is why well-laid plans to eat more healthily, exercise more and drink less often come to naught. It is, in short, what makes us human.
Encumbered by all these biases, the human mind looks anything but the orderly decision-making machine envisioned by rational choice theory. But in a funny way, it is. Our minds are biased and flawed, but in a systematic way. Human behaviour is irrational, but predictably so.
It is this predictability that convinced behavioural economists that it should be possible to change behaviour. And so the concept of nudge was born.
The idea came to widespread public attention in 2008 when two social scientists at the University of Chicago wrote Nudge: Improving decisions about health, wealth and happiness. Richard Thaler and Cass Sunstein had been working for years on how to apply behavioural economics to policy. The book became a surprise bestseller, and won some influential advocates.
The main tool of nudging is "choice architecture", or the way in which options are presented. Whenever you make a decision, from the mundane to the potentially life-changing, choice architecture is at work.
Every time you go into a restaurant or shop, fill in a form, visit a website, read a newspaper, vote, turn on the TV, or do any number of everyday activities, you encounter choice architecture. Much of this is incidental, and some you even create yourself, as when you stock up with food – if you haven't got chocolate and sweets in the cupboard, you're less likely to indulge in them because doing so would require a trip to the shop. But some is deliberately created by other people – often with the intention of exploiting your biases.
Supermarkets are the experts at this (although they don't call it nudging). They greet you with the smell of baking bread, place the most profitable brands at eye level and put chocolate next to the checkouts. The intention is that you cave in to temptation and end up buying things you didn't intend to.
None of this is news to retailers trying to separate you from your money. But only recently have public authorities woken up to the power of choice architecture, and the possibility of redesigning it to nudge people towards doing the right thing. The "right thing", of course, is a value judgement, but is usually defined as the option people would have chosen if they were not burdened by biases (although who gets to decide is a bone of contention).
In practice, nudging can mean all sorts of things. Many decisions in life are dictated by a default option, where a choice is made for you unless you opt out. Some countries, for example, automatically register citizens as organ donors. It is easy to opt out, but most people do not get round to it. Many nudges simply reverse a default option.
Similarly, if getting people to do public-spirited things is difficult, they can be nudged by applying social pressure. A good example is voting. Informing people about high turnouts in their neighbourhood can encourage them to go out and do their civic duty.
The common thread running through all these strategies is that they do not use orthodox economic incentives like taxes, fines and rewards. According to the working definition of nudges laid out by Thaler and Sunstein, anything that reaches for these policy tools does not qualify.
Consider the very real problem of excessive drinking. Increasing the price of alcohol might reduce drinking, but that isn't a nudge. A nudge would be telling people how much other people drink on average, or prompting pubs to sell beer in two-thirds-of-a-pint glasses as well as pints, on the understanding that if you give people a big portion they will probably consume it even if they don't really want to.
Perhaps most importantly, nudges must be "freedom preserving", which means people remain at liberty to make the wrong choice. You can still drink pints if you want and nobody will tell you that you can't.
That element is what makes the nudge approach so very attractive to politicians: it does not involve bossing people about or enacting new legislation. That is largely why Thaler and Sunstein's ideas found an eager audience on both sides of the Atlantic – and on both sides of the political divide.
Big Brother is nudging you
In 2009, President Barack Obama's administration appointed Sunstein to run the Office of Information and Regulatory Affairs (OIRA), a powerful agency within the White House that scrutinises federal regulation to make sure the benefits outweigh the costs.
And when the UK's coalition government came to power in 2010, the prime minister David Cameron created the Behavioural Insights Team – nicknamed the Nudge Unit – to put nudge theory into practice.
Sunstein was head of OIRA until 2012. During his time there he helped to bring about what he calls a "large-scale transformation of American government". Using nudge theory he and his team changed the way Americans eat, use energy, save for retirement and more.
The UK nudge unit claims similar successes. Anyone applying for a driving licence now has to answer the question "Do you wish to register as an organ donor?". People are free to say no, but by changing the choice architecture – previously people stayed off the register unless they sought out registration, whereas now everyone must make an explicit choice – the unit eventually expects to double the number of voluntary donors to about 70 per cent of the population.
David Halpern, the unit's director, says their biggest success is in recovering unpaid tax. People who owe money now receive a letter telling them (truthfully) that most people in their area pay their taxes on time. This social nudge has increased compliance from 68 per cent to 83 per cent.
The unit also quintupled the uptake of a failing attic insulation scheme by adding a free clearance service. It was not that people wouldn't pay to have their attics insulated, they just couldn't be bothered to empty them out first. Overall, Halpern claims that his unit's initiatives have already saved hundreds of millions of pounds.
Off the back of these successes, nudging has spread like wildfire, with governments across the globe – including Australia, New Zealand, France and Brazil – joining in.
Taken individually, nudge-based success stories may seem trivial. But there is a lot more to come. "We're very much at the beginning of the road, there's a great deal of scope for more," says Halpern. The UK unit has been given a remit to expand its activities across all areas of government and also to take on paying clients. Sunstein similarly says that what they have done in the US is the "tip of the iceberg".
As the strategy is rolled out more widely, the cumulative impact could become enormous. With nudges applied to policy problems in all aspects of public life, some economists anticipate incremental transformation of societies like the UK and US into "nudge states", or "au pair states" (like the nanny state but less bossy).
Will all of this lead to better societies? Advocates of nudging are adamant that the science is on their side. The UK unit tests its interventions in randomised trials before rolling them out – both to see whether they work and whether they are socially acceptable. In the US, too, Sunstein insists that "everything we did was based on evidence".
Even so, concerns remain. Theresa Marteau, director of the University of Cambridge's Behaviour and Health Research Unit, and an adviser to the UK unit, has trawled the scientific literature for data on nudges used to change health-related behaviour, such as diet, alcohol consumption, smoking and physical activity. She says the evidence for effective nudges is largely absent. That is not to say they cannot work, because clearly they can in some circumstances. "But the question is, which interventions are most effective at changing which behaviours?"
There are fears that certain nudges might even prove counterproductive. For example, there is some evidence that when foods are labelled as healthy or low fat, it is taken as licence to consume more, Marteau says.
But perhaps the most serious obstacle to the nudge revolution is public acceptability. Although nudges are intended to be helpful and preserve freedom, many people feel there is something sinister about interventions designed to change their behaviour without them necessarily realising it.
Marteau accepts that people often dislike the idea that they are being nudged. But she points out that they are anyway, and often by people who don't have their best interests at heart. "I think it is born of a lack of understanding of how all our behaviour is being shaped the whole time by forces outside of our awareness."
And so the question is not "do you want to be nudged?", but "who do you trust to do it?"
Nudge 2
When you published Nudge in 2008, did you expect it to have so much influence?
No. We were trying to write the best book we could. I was surprised and gratified that it got such attention.
You went on to head up the White House Office of Information and Regulatory Affairs (OIRA). What was the most effective nudge that you implemented?
Automatic enrolment in retirement savings plans has had a major effect. If you have to sign up, it's a bit of a bother. People procrastinate or go about other business. Then they have less money in retirement. With automatic enrolment, you're more likely to be comfortable when you retire.
How many US citizens have been nudged that way – and do they know it?
A very large number. Automatic enrolment is a common practice now, many millions of people have benefited from it. People recognise they've been automatically enrolled – there's nothing secret about it, and it's explained by employers. But they wouldn't think, "I've been nudged".
Don't they have a right to know?
I don't think it's very important that people support the idea of nudging in the abstract. I think it's important that policies be helpful and sensible.
What other policies did you design during your time at OIRA?
In the US, the principal icon for informing people about healthy food choices was known as the food pyramid. It was very confusing. We now have the food plate, which is more intelligible, and we believe it's leading to more informed choices.
We also had a situation in which poor children who were eligible for free meals weren't getting them because they had to enrol. We automatically enrolled them, and a lot of kids are now getting food who otherwise wouldn't.
How do you design a nudge?
It's a problem-centred approach, rather than a theory-centred approach. So if we had a problem of excess complexity making it hard for people to make informed choices, the solution would be to simplify. If people aren't enrolled in a programme because it's a headache to sign up, automatic enrolment seems like a good idea.
Is nudging generally preferable to strategies like taxes and prohibition?
The advantage of a nudge is that it's more respectful of freedom of choice. It always belongs on the table, but if you have a situation where, say, polluters are causing health problems, some regulatory response is justified – a criminal sentence or a civil fine.
Can nudging solve complex, long-term problems such as climate change?
Climate change needs international efforts. You can make progress by informing people of greenhouse gas emissions associated with their car, or through default rules, like lights going off when no one is in a room. But nudges are unlikely to be sufficient.
Can nudges lead to lasting change?
There is good research on the circumstances under which social norms result in persistent rather than short-term behavioural change. With respect to energy use, so long as people are frequently reminded, it works.
How much further can nudging go?
I think it's important not to get too fixated on the word "nudge". Part of the reason the book did well is that it has a catchy title, but I'd like to think the better reason is that it has solutions to problems. The use of these tools has produced terrific results in a short time – and we're at the tip of the iceberg.
Change is Hard
For many people, working out how to change something, especially a bad habit, is one of the most frustrating things you can experience. Many people know what they want to change, but don't have the knowledge needed to implement it. Perhaps you can relate. You start a new diet on a Monday and wonder why, as an intelligent, self-motivated, driven person, you cannot seem to keep the cupcake out of your mouth by Wednesday. What's up with that? Or you wonder why you can't seem to kick your procrastination habit, your lack of exercise habit, your bad work habit or any other part of your existence that is not serving you well. Sound familiar at all?
Understanding the process of change, why we are the way we are, and how to change when you really want to, is incredibly important. The ability to drive effective change can give you the keys to the kingdom of your success and happiness. But if you don't learn how to use it, the struggle can keep you in a deep, dark hole of frustration that leads to self-defeat and low self-esteem.
So let's start with what we typically know. Changing behaviors is hard. Change is hard, period. You get wired to certain patterns of behavior, and your brain gets stuck in a groove that takes concerted, conscious and consistent effort to change. And even when you do manage to change for a few days, weeks or months, it is all too easy to slip back into your old patterns.
The good news is that we know, through the latest neuroscience, that our brains are "plastic." This means they can create new neural pathways (like brain train tracks), which allow you to create change and form new patterns of behavior that over time can stick. You find a new groove, so to speak. But it takes work. Sometimes, it takes a lot of work. And it takes time. The popular myth that you can quickly and easily change a deeply ingrained habit in 21 days has been largely disproven by brain and behavioral scientists in recent years. They now think that it actually takes anywhere from six to nine months to create the new neural pathways that support changing behavior -- hmmm, no more of those quick-fix plans. Sorry.
There are three things needed to make any change, whether it is mental, emotional or physical: desire, intent and persistence. Our pop culture society is filled with women's magazine covers that say you can meet your dream partner by the weekend, land your dream job in five days, or lose ten kilos in two weeks. This can leave mere mortals feeling completely inadequate when they fail to do these things, which are completely unrealistic, if not downright impossible, to get done in the first place.
When you consider that only eight percent of people actually follow through on changing a habit, you can see that it's key to understand enough about the change process, and yourself, to smooth a path to success.
So what are the steps and considerations? Here are some questions to think about, as you begin to create positive change in a lasting way:
You really have to want it
There is no point in saying you are going to stop working so much, so you can get some semblance of balance in your life, if in reality you really don't care that much about balance, and you really love to work. Who are you doing it for? Don't kid yourself. You must be serious and care about the change you decide to make, so you are willing to work for it and follow through.
What need is being served by what you are doing now?
Your current behavior is there for a reason, or you wouldn't be doing it. Hard to swallow, but true. Whether you're a workaholic, 20 kilos overweight, have anger management issues, or are unhappily single -- your current situation is serving you somehow. So take some time to think about this. Whether the need is relaxation but the behavior is binge drinking; or the need is recognition but the behavior is overwork to prove yourself; you first need to identify what need is being served by your current behavior. Once you have the answer, you can work out how to meet this need in another way, smoothing the path to change.
How else can you meet your needs?
So, you have identified the current behavior and how it is serving you -- that's fantastic. Now think about how else you could get this same need met. You may relate to this example. For some people, eating cupcakes, chocolate or other things that you know full well are not only bad for you but proven to leave you feeling tired, grumpy and full of self-loathing is less about the food and more about the nurturing, comfort or distraction they provide. How else could you get your need met? Perhaps retreating to your meditation cushion, your yoga mat, the bath tub, or even your bed would give you a much greater sense of the nurturing you need, without the guilt, the crash in self-esteem for not following through on your intention, and of course, the kilos (bury those skinny jeans a little deeper again). So when you think about the needs you have, how else can they be met?
What's the price of not changing?
You will experience ambivalence on the change path, no question about it. That's okay. But to progress down the road, you have to ask yourself, what is the price of not changing? If you really want a promotion, but are too fearful to ask for the management development training that you need, the price is staying in the same role. Is overcoming your fear worth the goal? Or if you really want to get healthy, lose weight and get fit, but you don't want to have to cut the sugar and get out walking, what is the price of that behavior? Putting on yet another 10 kilos? Think about and write down any negative effects your current behaviors are creating in your life -- self‑loathing, boredom, career stagnation, frustration. Once you have hit this wall of realization, you are in the perfect place to turn around and move forward.
What positive image can pull you forward?
It is known from research in the fields of positive psychology and neuroscience that you have more success when you are moving towards something positive than when you are moving away from something negative. It is also known that positive images pull you forward. Think vision boards, athletes visualizing their performance success, or thinking through the positive outcome of a business presentation before it takes place. It works, and science now backs it up. So what positive image of the outcome you want can you visualize, to pull you towards success? Come up with one, have it firmly in your mind, place it on a wall, in your computer, in your journal, or anywhere you will reference it, and look at it frequently. It can be especially helpful when your resolve is slipping, to remind you what you are working so hard for.
Are you acknowledging success?
When you have made progress on your change efforts, it is really important to acknowledge that achievement. When you celebrate your efforts, you create upward spirals of momentum that help reinforce the positive change and make it stick. Recognizing your efforts also helps to reinforce the direction you are moving, and motivates you further towards your goals. Recognizing, acknowledging and celebrating your progress, however small, are keys to success on your change path.
Change can be challenging. Anyone who has tried to change a habit knows this is true. But it is possible. And you can smooth the path to success by being aware of the cycle of change, being prepared, and being consistent. The result is worth the effort, if you want it badly enough to work for it.
Learning Self-Control - The Marshmallow Man
NOT many Ivy League professors are associated with a type of candy. But Walter Mischel, a professor of psychology at Columbia, doesn’t mind being one of them.
“I’m the marshmallow man,” he says, with a modest shrug.
I’m with Mr. Mischel (pronounced me-SHELL) in his tiny home office in Paris, where he spends the summer with his girlfriend. We’re watching grainy video footage of preschoolers taking the “marshmallow test,” the legendary experiment on self-control that he invented nearly 50 years ago. In the video, a succession of 5-year-olds sit at a table with cookies on it (the kids could pick their own treats). If they resist eating anything for 15 minutes, they get two cookies; otherwise they just get one.
I’ve given a version of the test to my own kids; many of my friends have given it to theirs. Who wouldn’t? Famously, preschoolers who waited longest for the marshmallow went on to have higher SAT scores than the ones who couldn’t wait. In later years they were thinner, earned more advanced degrees, used less cocaine, and coped better with stress. As these first marshmallow kids now enter their 50s, Mr. Mischel and colleagues are investigating whether the good delayers are richer, too.
At age 84, Mr. Mischel is about to publish his first nonacademic book, “The Marshmallow Test: Mastering Self-Control.” He says we anxious parents timing our kids in front of treats are missing a key finding of willpower research: Whether you eat the marshmallow at age 5 isn’t your destiny. Self-control can be taught. Grown-ups can use it to tackle the burning issues of modern middle-class life: how to go to bed earlier, not check email obsessively, stop yelling at our children and spouses, and eat less bread. Poor kids need self-control skills if they’re going to catch up at school.
Mr. Mischel — who is spry, bald and compact — faced his own childhood trials of willpower. He was born to well-off Jewish intellectuals in Vienna. But Germany annexed Austria when he was 8, and he “moved quickly from sitting in the front row in my schoolroom, to the back row, to standing in the back, to no more school.” He watched as his father, a businessman who spoke Esperanto and liked to read in cafes, was dragged from bed and forced to march outside in his pajamas.
His family escaped to Brooklyn, but his parents never regained their former social status. They opened a struggling five-and-dime, and as a teenager Walter got a hernia from carrying stacks of sleeves at a garment factory. One solace was visiting his grandmother, who hummed Yiddish songs and talked about sitzfleisch: the importance of continuing to work, regardless of the obstacles (today we call this “grit”).
Mr. Mischel came both to embody sitzfleisch, and to study it. Over a 55-year academic career he has published an average of one journal article, chapter or scholarly book about every three months. Over the years, some of the original subjects in the marshmallow study have begged to know whether they ate the marshmallow as preschoolers; they can’t remember. He has told only one of them, who had cancer at 40, and asked to know his marshmallow results on his deathbed. (He was a “pretty good” delayer, Mr. Mischel says diplomatically.)
Part of what adults need to learn about self-control is in those videos of 5-year-olds. The children who succeed turn their backs on the cookie, push it away, pretend it’s something nonedible like a piece of wood, or invent a song. Instead of staring down the cookie, they transform it into something with less of a throbbing pull on them.
Adults can use similar methods of distraction and distancing, he says. Don’t eye the basket of bread; just take it off the table. In moments of emotional distress, imagine that you’re viewing yourself from outside, or consider what someone else would do in your place. When a waiter offers chocolate mousse, imagine that a cockroach has just crawled across it.
“If you change how you think about it, its impact on what you feel and do changes,” Mr. Mischel writes.
To do this, use specific if-then plans, like “If it’s before noon, I won’t check email” or “If I feel angry, I will count backward from 10.” Done repeatedly, this buys a few seconds to at least consider your options. The point isn’t to be robotic and never eat chocolate mousse again. It’s to summon self-control when you want it, and be able to carry out long-term plans.
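For readers who think in code, these if-then plans are simply condition-action rules. The sketch below is purely illustrative, using the two examples from the paragraph above; it is not drawn from Mischel's research materials, and the function names are invented.

```python
# Illustration only: Mischel-style if-then plans written as condition-action rules.
from datetime import datetime

def check_email_allowed(now=None):
    """If it's before noon, then don't check email."""
    now = now or datetime.now()
    return now.hour >= 12

def anger_plan():
    """If I feel angry, then count backward from 10, buying a few seconds to consider options."""
    return list(range(10, 0, -1))

print("check email?", check_email_allowed())  # False before noon, True after
print("count:", anger_plan())                 # [10, 9, 8, ..., 1]
```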
“We don’t need to be victims of our emotions,” Mr. Mischel says. “We have a prefrontal cortex that allows us to evaluate whether or not we like the emotions that are running us.” This is harder for children exposed to chronic stress, because their limbic systems go into overdrive. But crucially, if their environment changes, their self-control abilities can improve, he says.
Self-control alone doesn’t guarantee success. People also need a “burning goal” that gives them a reason to activate these skills, he says. His students all have the sitzfleisch to get into graduate school, but the best ones also have a burning question they want to answer in their work, sometimes stemming from their own lives. (One student’s burning question was why some people don’t recover from heartbreak.) Mr. Mischel’s burning goal from childhood was to “make a life that would help my family recover from the trauma of suddenly becoming homeless refugees.” More recently, it’s been to find coping skills for children suffering from traumas of their own.
At the moment, my burning goal is to be like Walter Mischel. At 84, instead of slowing down, he’s preparing for his American book tour and fielding questions from Polish journalists.
His secret seems to come straight from the marshmallow test: distraction. “It’s to keep living in a way one wants to live and work; to distract constructively; to distract in ways that are in themselves satisfying; to do things that are intrinsically gratifying,” he says. “Melancholy is not one of my emotions. Quite seriously, I don’t do melancholy. It’s a miserable way to be.”
Howard Becker - Observing How Things Work
Americans have often had strange and serendipitous careers in Paris, from Thomas Evans, the Philadelphia dentist who cured Emperor Louis-Napoleon of a toothache and became an indispensable ornament of the Imperial court, to those African-American jazzmen, like the great soprano-sax player Sidney Bechet, whose careers were revived, and reputations nurtured, in France in ways they never could have been in America. But few have known an odder trajectory than Howie—“Only my mother ever called me Howard”—Becker. Howard S. Becker, to give him his full, honorary-degree name—he has six—has been a major figure in American sociology for more than sixty years. Now a brisk eighty-six, he remains most famous for the studies collected in his book “Outsiders,” of 1963, which transformed sociologists’ ideas of what it means to be a “deviant.” In America’s academic precincts, he is often seen as a sort of Richard Feynman of the social sciences, notable for his street smarts, his informal manner, and his breezy, pungent prose style—a Northwestern professor who was just as at home playing piano in saloons. (Indeed, the observations that put him on the path to academic fame, on the subculture of marijuana smokers, began while he was playing jazz piano in Chicago strip joints. “Not burlesque houses,” he says. “These were strip joints.”)
Yet it is his position in France that is truly astonishing. Two critical biographies of Becker have been published in French in the past decade, and “Beckerisme” has become an ideology to conjure with. YouTube videos capture him speaking heavily accented Chicago French to student audiences, and he now spends a good part of every year in Paris, giving seminars and holding court. His work is required reading in many French universities, even though it seems to be a model of American pragmatism, preferring narrow-seeming “How?” and “Who, exactly?” questions to the deeper “Why?” and “What?” supposedly favored by French theory. That may be exactly its appeal, though: for the French, Becker seems to combine three highly American elements—jazz, Chicago, and the exotic beauties of empiricism.
This summer, Becker published a summing up of his life’s method and beliefs, called “What About Mozart? What About Murder?” (The title refers to the two caveats or complaints most often directed against his kind of sociology’s equable “relativism”: how can you study music as a mere social artifact—what about Mozart? How can you consider criminal justice a mutable convention—what about Murder?) The book is both a jocular personal testament of faith and a window into Becker’s beliefs. His accomplishment is hard to summarize in a sentence or catchphrase, since he’s resolutely anti-theoretical and suspicious of “models” that are too neat. He wants a sociology that observes the way people act around each other as they really do, without expectations about how they ought to. Over the decades, this has led him to do close, almost novelistic studies of jazz musicians, medical students, painters, and photographers.
Among sociologists, he’s most famous for having made sociology’s previous theories of “deviance” look deviant: studying obscure or out groups, he has shown that the way their members act together follows the same kinds of rules that everyone else follows. Some people may march to a different drummer - but, when they do, they’re usually all marching in rhythm, too. As one of his students has written, “Rather than asking the less than fruitful question of why people break rules, Becker came to focus on how people go through an identifiable process to choose to break rules.” A Beckerian analysis of a social “world” asks how, in any culture or subculture, someone comes to be called an insider while someone else gets pushed outside. Simple as it is, this approach has proved immensely influential in the study of everything from drug addiction to queer theory. Basically, Becker believes that Yogi Berra was right: you really can observe the most by watching. Heather Love, a professor of English at Penn who specializes in gender and sexuality studies, points out that it shares “many of the same concerns, about institutions, power, the dynamics of social relations” as contemporary post-structuralist research, “but all in this kind of homegrown, ordinary language, a ‘just the facts, ma’am’ style that has the appeal of American noir and hardboiled fiction.”
Not long ago, in an apartment that he and his wife, Dianne Hagaman, had taken for the fall in the Fifth Arrondissement—the neighborhood of Paris that clusters around the old Sorbonne—he sat and talked about his life’s work and its apotheosis in Paris, almost as a spectator of his own surprising career. As long-faced and dry-eyed as a stoical silent comedian, Becker is game to talk about anything. A conversation with him becomes an inimitable spool of bebop piano tips, Chicago history, sociological minutiae, and meditations on French intellectual life, with helpful detours into strip-club culture in the forties and the reasons that French professors think of themselves as civil servants while American ones imagine themselves as entrepreneurs.
“I always really wanted to be a piano player,” he begins. “When I was about twelve, I heard boogie-woogie for the first time and fell in love with it. My folks had bought a piano for show, and I bought a book of boogie-woogie and taught myself to play it, more or less. And then I met some kids in the neighborhood—you see, I went to Austin High.” Austin High was the citadel of Chicago jazz, where, in the twenties, Bud Freeman had helped create a form of excited, driven white-folks jazz that remained influential through the swing era. “I got jobs for people who couldn’t afford real musicians—thirteen-year-old kids playing for other thirteen-year-old kids.” Then he got into a better band, which was racially mixed. “That was a big thing,” he says. “Because we were racially mixed, we played only black dances. The kids who were at the black dances, if you didn’t play those pieces exactly the way they were on the record, you were in trouble. So I took lessons from Lennie Tristano. When I met him, he was in his late twenties and had already stopped playing in public—he wouldn’t put up with anything other than perfect playing conditions, with the result that he almost never played.”
Tristano, who was a saxophonist as well as a pianist, was the Glenn Gould of bebop: difficult, hypersensitive, reclusive, and hugely gifted. “Instead of teaching ‘freedom,’ or creativity, Tristano taught me a set of practices that create the feeling of what an improvisation ought to sound like,” Becker says. Tristano taught simple ways of solving puzzles that come up in improvising—for instance, ways of adding flatted fifths and minor ninths to otherwise too familiar chord sequences. “He showed how to create an essentially unlimited set of possibilities to work with as I played through an evening in a bar,” Becker recalls. Jazz solos, he learned from his models, were concocted almost entirely “from a small collection of ‘crips,’ short phrases that can be combined in a million ways, subjected to all possible variations.” The lesson that social performance, even of the highest kind, was more a string of crips than an outpouring of confessions remained at the root of Becker’s understanding of the way the world works.
Knowing that his father, a first-generation Jewish immigrant, would “have a kitten” at the thought of his son spending his life playing piano in saloons, Becker enrolled in the University of Chicago—then at the height of its Robert Hutchins-era reputation as a citadel of great books and no sports—so that he could be seen to study all day in order to be free to play jazz all night. “I started working strip joints on Clark Street—all the grownups were in the Army. We played the one independent, non-Mob-owned joint. Guys would come in from the hybrid-seed-corn convention and spend three or four thousand dollars buying drinks for the girls. Then they’d go away happy.”
He planned to get a graduate degree in English while continuing his jazz life, and then one day he stumbled on a new book, “Black Metropolis: A Study of Negro Life in a Northern City”—the northern city being Chicago—by St. Clair Drake and Horace Cayton. It was one of the first in-depth studies of contemporary urban life. “It was wonderful, the whole idea of being an urban anthropologist!” Becker says. “You could be an anthropologist, a very romantic thing, but you didn’t have to go away to do it. Some of the anthropologists I knew lost half their teeth. Not nice. I thought, Wow! If I just wrote down what I was doing at night, just what everyone said and what I observed, then those were field notes.”
Those “field notes” gathered at the strip clubs and night spots helped inspire a seminal paper of 1953, “Becoming a Marihuana User,” in the American Journal of Sociology. (Asked if he knew so much because he was smoking weed himself, he says, “Yeah. Obviously.” And does he still smoke it? “Yeah. Obviously.”) Becker insists that his accomplishment in the paper was no more than the elimination of a single needless syllable: “Instead of talking about drug abuse, I talked about drug use.” “Deviance” had long been a preoccupation of sociology and its mother field, anthropology. Most “deviance theory” took it for granted that if you did weird things you were a weird person. Normal people made rules - we’ll crap over here, worship over here, have sex like so - which a few deviants in every society couldn’t keep. They clung together in small bands of misbehavior.
Becker’s work set out to show that out-groups weren’t made up of people who couldn’t keep the rules; they were made up of people who kept other kinds of rules. Marijuana smoking, too, was a set of crips, a learned activity and a social game. At a time when the general assumption was that drug use was private and compulsive, Becker argued that you had to learn how to get high. Smoking weed, he showed, was most often strange or unpleasant at first. One of his informants (a fellow band member) reported, “I walked around the room, walking around the room trying to get off, you know; it just scared me at first, you know. I wasn’t used to that kind of feeling.” Another musician explained, “You have to just talk them out of being afraid. Keep talking to them, reassuring, telling them it’s all right. And come on with your own story, you know: ‘The same thing happened to me. You’ll get to like that after a while.’ ” In the sociologese that Becker had not yet entirely discarded, he wrote, “Given these typically frightening and unpleasant first experiences, the beginner will not continue use unless he learns to redefine the sensations as pleasurable.” He went on, “This redefinition occurs, typically, in interaction with more experienced users, who, in a number of ways, teach the novice to find pleasure in this experience, which is at first so frightening.” What looked like a deviant act by an escape-seeking individual was simply a communal practice shaped by a common enterprise: it takes a strip club to smoke a reefer.
The lessons learned in the night clubs remain present even today. In his new Mozart/Murder book, Becker points out the continuities between the middle-class housewives of the early twentieth century who became addicted to the opium products then sold over the counter for “women’s troubles” and black youths who now take essentially the same kinds of drug, in a different world: “When middle-class women could buy opium, they did, and they got addicted. When they couldn’t, they didn’t. When poor black boys could buy it, they did, and they got addicted, too.” In Becker’s work, a small realism of social scenes replaces the melodrama of personal pathology.
Becker also points out that any social group, insider or outsider, ends by divorcing itself from the group it’s supposed to be serving. “Everyone has an ideal student or audience in mind, and we never get them,” he points out. This makes teachers impatient with students, and jazz musicians suspicious of audiences. Jazz musicians smoked weed to get high, but one of the effects was to set them off from the night-club-going customers they despised. “This insight looks original only now,” Becker says. “If you were playing, that was all you heard: ‘Fucking squares, now look what they want!’ I remember learning to leave the stand quickly, before any one could ask me to play ‘Melancholy Baby.’ That was the stuff of every minute of what you were doing.” He adds, “The originality - I shouldn’t even call it that - was to pay attention to it as something worth talking about.”
This insight turned out to apply to a lot more than marijuana smokers. “My dissertation supervisor, Everett Hughes, loved the idea that anything you see in the lowly kind of work is there in privileged work, too, only they don’t talk about it,” he says. “Later on, he went to the American nurses’ association and they hired him as a consultant, and he said, ‘Let’s do some real research: why don’t you talk about how nurses hate patients?’ There was a shocked silence and then someone said, ‘How did you know that?’”
The influence of Becker’s early work remains profound. A presidential lecture he gave in 1966 at the annual meeting of the Society for the Study of Social Problems, entitled “Whose Side Are We On?,” is still a clarion in the field. Gayle Rubin, a professor of anthropology at Michigan and a leading scholar of L.G.B.T. studies, praises it as a pioneering attempt at “moral levelling,” where the old prudish act of exposing deviants and curing them of deviance changed to the project of finding out what deviants did, and why it was, on inspection, usually no more deviant than what the rest of us did. “That stuff at Chicago in the fifties really lit the way for so much of what came after,” Rubin says. “There’s a real renaissance of it now.”
Becker insists that he never entirely intended to stay in academia: “It was only after I finished the Ph.D. that I more or less realized that my choice now was to be the most educated piano player on Sixty-third Street or start taking sociology more seriously.” Suspicious of the administrative details of academic life, he lived on research grants, passing from college campus to institutional setting—“For fourteen or fifteen years, I was what was called a ‘research bum.’ ” Following the lead of his first wife, Nan Harris, who was a ceramic sculptor, he decided to write about the visual arts. “But I had this disability—I couldn’t draw!” he says. Living in San Francisco for a while, he took up photography instead, and was lucky enough to have as the “lab monitor,” who mixed chemicals and helped students, a young woman named Annie Leibovitz. His experiences as a working photographer, like his earlier ones as a working jazzman, illuminated what eventually became his second important book, “Art Worlds” (1982), which advanced a collaborative view of picture-making. Like reefer-smoking among jazz musicians, artmaking was not the business of solitary artists, inspired by visions, but a social enterprise in which a huge range of people played equally essential roles in order to produce an artifact that a social group decided to dignify as art. Art, like weed, exists only within a world.
It was a quarter-century ago, with the publication of “Art Worlds” and “Outsiders” in France, that the strange second act of Becker’s career began. His books became a magnetic pole around which dissident French sociologists could gather. A group of social scientists calling themselves L’École de Chicago de Paris translated “Outsiders,” and saw it become a campus best-seller. (Becker: “I think because it worked well as a textbook, being sort of leftish - really, just unconventional about things like deviance - and easy to read, which was a great combination to give to undergraduates.”) But the book also provided a means to combat the man who, for a generation, had been the dominant figure in French social science, Pierre Bourdieu.
Becker’s role as the American not-Bourdieu is so essential to his reputation in France that, in talking about Becker, one invariably also talks about his other. Bourdieu, who died in 2002, was a sociologist whose work - brilliantly disenthralled or grimly determinist, depending on your perspective - explained all social relations as power relations, even in a seemingly open world of “free expression” like the visual arts. For Bourdieu, whose book “Distinction: A Social Critique of the Judgment of Taste” (1979) remains a classic text on the sociology of culture, a dominant class reproduces itself by enforcing firm rules about what is and is not acceptable, and creates a closed, exclusive language to describe it: those who have power decide what counts as art, and to enter that field at all is possible for outsiders only if they learn to repeat the words that construct its values.
One of the most agitated debates in French social science today is between Bourdieu’s and Becker’s conceptions of the realm in which our lives take place. Bourdieu believed that all social life takes place in a “field” and Becker insists that it takes place within a “world”—an opposition that irresistibly brings to mind Woody Allen’s remark that while Democritus called the indivisible units of the universe “atoms” Leibniz called them “monads,” and that fortunately the two men never met or there would have been an extremely dull argument. The argument about fields and worlds, as Becker freely admits, is a bit like that one - both are generalized metaphors - but he also thinks it can be saved from a mere dispute over nomenclature.
“Bourdieu’s big idea was the champs, field, and mine was monde, world - what’s the difference?” Becker asks rhetorically. “Bourdieu’s idea of field is kind of mystical. It’s a metaphor from physics. I always imagined it as a zero-sum game being played in a box. The box is full of little things that zing around. And he doesn’t speak about people. He just speaks about forces. There aren’t any people doing anything.” People in Bourdieu’s field are merely atom-like entities. (It was Bourdieu’s vision that helped inspire Michel Houellebecq’s nihilistic novel of the meaningless collisions of modern life, “The Elementary Particles.”)
“Mine is a view that - well, it takes a village to write a symphony and get it performed,” Becker goes on. “It’s not just the composer. The great case for me is in film, because nobody ever figured out who the real artist is: the screenwriter or the director or who? Or, rather, everybody figured it out, but never figured out the same thing. Early on when I was reading about art, I read a book by Aljean Harmetz on the making of ‘The Wizard of Oz.’ She was the daughter of someone in the wardrobe department of M-G-M, and she explains that there were four directors of that film, and the guys who thought of the crucial thing, the change from black-and-white to color when the characters enter Oz, were the composer and the lyricist! In an important way, I took the list of credits at the end of a Hollywood film as my model of how artistic creation really happens.”
As Becker has written elsewhere, enlarging the end-credits metaphor, “A ‘world’ as I understand it consists of real people who are trying to get things done, largely by getting other people to do things that will assist them in their project. . . . The resulting collective activity is something that perhaps no one wanted, but is the best everyone could get out of this situation and therefore what they all, in effect, agreed to.” In a Beckerian world, we act the way we do because of a certain logic of events - jazz musicians are supposed to smoke dope, graduate students learn how to please their supervisors - but there are lots of different roles within the world, and we can choose which one to play, and how to play it. We’re all actors, not angels or completely free agents. But we are looking for applause, so we put on the best show we can. This view of the world has something in common with that of Becker’s longtime friend and colleague Erving Goffman. “But Goffman got more interested in the micro-dramatics of things,” Becker points out, meaning, for instance, his studies of how people look when they lie. “I was always more interested in the big picture.”
Becker tries to observe his own ascendancy in France with the same detachment with which he observes other people, but his appeal to the French goes beyond his simply not being Bourdieu. The French myth of America is as robust as the American myth of France, and one important element in it is the idea that Americans can arrive intuitively at results that the French can get to only by thinking a lot. Like the Hollywood moviemakers whom the French New Wave critics adopted in the fifties and sixties, Becker is beloved in Paris in part because he doesn’t seem overencumbered with theory or undue abstraction. As Heather Love also points out, “U.S. deviance studies has the international allure of American crime fiction, and with a cool narrator like Becker, all the better.”
But, to his French admirers, this doesn’t disprove the need for theory; it just means that sometimes the best theories are left mysteriously unspoken. That Howard Hawks made so many good movies without actually having a theory of moviemaking was a strong sign that he must really have a fantastic theory of the movies, if he would only tell you. Becker’s reputation is a bit like that: if you can say so many interesting things just by watching the world, then you must really have a fantastic set of prescription spectacles, even if no one ever gets to see you wearing them.
Over lunch, Becker discusses a question that rises above personality clashes and institutional leanings. The project of moral levelling also has within it the problem of moral levelling. What is the point of sociology if it can’t tell us that murder is bad or Mozart is great? Surely we don’t want to expand our equanimity about out-groups to, say, the Gambino family, whose rules include whacking people they don’t like, or the Manson family, who had rules and rituals, too? For Becker, though, these objections involve a “category mistake.” Yes, murder is wrong, but why is it the job of social sciences to remind us of that fact?
“How does it really happen isn’t the only question, sure,” he says. “It’s just the one with the biggest chance of having an interesting answer rather than a predictable, safe one. I’m interested in how power happens, not just saying, ‘Oh, the exercise of power.’ ” One of his favorite instances of how power works involves the role of the invisible middlemen who create places for themselves in the muddled center of any bureaucracy - in Brazil, where he lived for a while, they’re called despachantes, but a student of Becker’s has found close equivalents in Chicago laundromats, where they ease the burden of the welfare system. “They get power by knowing the rules on the box in greater detail than anyone else,” Becker says. “They’re the people you turn to to break the code of the system. That kind of ‘how’ of power interests me more than the fact of power.
“What does sociology bring to the table? Well, I’d expand the definition of sociology. Calvino, in ‘Invisible Cities,’ is a sociologist. Robert Frank, in ‘The Americans’ - that’s sociology. There’s a thing that I’m sure David Mamet said once, though I’ve never been able to track it to its source. He was talking about the theatre, and he said that everyone is in a scene for a reason. Everyone has something he wants. Everyone has some plan he’s trying to pull off. ‘What’s the reason?’ is the real question. So that’s what you do. It’s like you’re watching a play and you—you’re the guy who knows that everyone is there for a reason.”
The Dishonesty Project
All of our pants are almost constantly on metaphorical fire, is the basic impression I got after watching the new documentary Dishonesty: The Truth About Lies. The film is a fascinating exploration of the current scientific research on the little things that nudge people into lying, cheating, and stealing, and most of the research comes from behavioral economist Dan Ariely, the Duke professor and best-selling author of books like Predictably Irrational: The Hidden Forces That Shape Our Decisions.
The film will be screening for a short time in New York at the IFC Center, starting this Friday. (For bonus social-science nerd fun, Ariely and director Yael Melamede will be at the Friday and Saturday shows to answer audience questions.) But Science of Us got an advance screener of the film, so, herewith, some of the most interesting findings on dishonesty the documentary covers. (All direct quotes in the post are taken from the film. Honest.)
Some of the most interesting insights into human dishonesty have stemmed from a 20-item set of math problems. In much of his research on lying, Ariely has favored something called the matrix experiment, a set of 20 straightforward math problems that anyone could solve, were they given enough time. The trick is, as Ariely explains, they never give their study volunteers enough time. The participants get just five minutes to answer as many questions as they can; then, they take their papers up to the front of the room and shred them. Next to the shredder is one of the experimenters, and the students are instructed to tell this person how many questions they answered correctly, and they are paid accordingly, a dollar for each problem they report solving.
But there’s a second trick: The shredder doesn’t actually shred their papers; it only shreds the sides, so the researchers can later see who was telling the truth. On average, people solve four problems correctly, but they tend to report getting six right.
When given the opportunity, the majority of people will lie. But the bigger, fatter lies are rare. More than 40,000 people have now participated in some version of the matrix experiment, and more than 70 percent of them cheated. But only a few — Ariely has counted about 20 — could be considered “big” cheaters, those who told the researchers that they solved all the matrix problems correctly, meaning they walked away with $20 they didn’t earn. So these liars stole a total of $400 from the researchers. In contrast, Ariely and his team have documented about 28,000 fibbers, stealing a grand total of about $50,000. “I think this is not a bad reflection of reality,” Ariely said. “Yes, there are some big cheaters out there, but they are very rare. And because of that, their overall economic impact is relatively low. On the other hand, we have a ton of little cheaters, and because there are so many of us, the economic impact of small cheaters is actually incredibly, incredibly high.”
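The economics Ariely describes can be checked with back-of-the-envelope arithmetic. The sketch below uses only the figures quoted above, plus one assumption flagged in the comments: that the typical small cheater overclaims by roughly the average two problems, at a dollar per problem.

```python
# Rough arithmetic for the matrix-experiment figures quoted above.
# Assumption (not stated in the film): small cheaters overclaim by about
# two problems each, at a dollar per problem.

participants = 40_000
cheaters = int(participants * 0.70)   # "more than 70 percent" cheated -> ~28,000
big_cheaters = 20                     # claimed all 20 problems correct

big_haul = big_cheaters * 20          # each walked off with ~$20 they didn't earn -> ~$400
small_haul = (cheaters - big_cheaters) * 2   # reported ~6 vs. ~4 solved -> ~$2 each

print(f"~{cheaters:,} cheaters in total")
print(f"Big cheaters:   ~${big_haul:,}")    # ~$400
print(f"Small cheaters: ~${small_haul:,}")  # ~$56,000, in the ballpark of the ~$50,000 quoted
print(f"Small cheating costs roughly {small_haul // big_haul} times more overall")
```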
We’re also pretty good at lying to ourselves. Michael Norton, a Harvard Business School professor, has done experiments in which he gives study participants general trivia tests, and some of the papers have the answers at the bottom. So the participants first take that test, and then they’re given another one — this one with no answers at the bottom. They’re also asked to predict how well they think they’ll do on this second test, and most of them predict they’ll get an excellent score on this test, as well. “They just think that they’re amazing test-takers now,” Norton said. Even when he’s tried to get them to think more realistically by promising them more money if their predictions are more accurate, people still overestimate their ability. “This process of deceiving ourselves is so strong, and it happens to us so quickly, where we have a twinge of, Maybe I cheated, and then, No, I didn’t, I’m a genius,” Norton said.
Animals lie, too. It’s a broader take on the idea of lying, but Murali Doraiswamy, of the Duke Institute for Brain Sciences, argues that “all creatures, big or small, have deception as part of their armamentarium,” usually as a means of survival. “A plant or a bird might change color and camouflage itself, which is a form of deception,” Doraiswamy said. And the bigger the brain, the better the liar: take chimpanzees, for example. “They may lead their group away from where the food is, so that one particular chimpanzee can come back to the food later on,” he said.
A little dishonesty is good for kids, kind of. Doraiswamy argues that when young kids start to experiment with lying, it’s often more of a way to build their imagination than an attempt to get away with something. “It helps them build their brain, and it helps them build what is called theory of mind,” said Doraiswamy, referencing the psychological theory that says as our brains mature, we get better at figuring out what another person is thinking about (a form of imagination, really). “And unless children lie, and unless children imagine, and dream big, they may not have the full capacity to develop a theory of mind,” he said.
Lying for someone else’s benefit doesn’t really feel like a lie. When people can justify their dishonesty, the lie often doesn’t get picked up by a lie detector, according to Ariely’s research. “Lie detectors basically detect emotional arousal — when we feel uncomfortable,” he explains. When people cheat for their own gain, the lie is detected, no problem. “But sometimes, we ask people to cheat for a charity. And then the lie detector is silent. The lie detector doesn’t catch anything. Why? Because if we could justify it, we’re doing something for a good cause, there’s really no arousal — there’s no conflict, there’s no emotional problem.” The film follows this little factoid with the story of Kelley Williams-Bolar, the Ohio woman who was jailed for lying on her kids’ school records so they could switch to a better district, and it is heartbreaking.
Dishonesty gets easier over time, and neuroscientists think they know why. At first, even a little lie provokes a big response in brain regions associated with emotion, such as the amygdala and insula, said Tali Sharot, a cognitive neuroscientist at University College London. “The tenth time you lie, even if you lie the same amount, the response is not that high. So while lying goes up over time, the response in your brain goes down.” Sharot believes this can be explained by a very basic principle of neuroscience: the brain adapts. “The brain is coding everything relative to what the baseline is,” she explains. So if we don’t usually lie, and then we do, this prompts a big neurological response. But if we lie a lot, the response lessens over time. “After a while, the negative value of lying — the negative feeling — is just not there, so much,” Sharot said. And this, she reasons, makes it easier for people to keep on lying.
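Sharot's point that "the brain is coding everything relative to what the baseline is" can be illustrated with a toy habituation model. The numbers and the adaptation rate below are invented for illustration; they are not taken from her data.

```python
# Toy habituation model (illustrative only): the emotional "response" is the gap
# between the size of the lie and a baseline that drifts toward recent behaviour.

lie_size = 1.0          # the same-sized lie, told again and again
baseline = 0.0          # what currently counts as "normal"
adaptation_rate = 0.3   # invented value: how quickly the baseline catches up

for attempt in range(1, 11):
    response = lie_size - baseline           # signal coded relative to the baseline
    baseline += adaptation_rate * response   # the baseline adapts after each lie
    print(f"lie #{attempt}: response = {response:.2f}")

# The printed response shrinks with every repetition even though the lie never
# changes, which is the pattern Sharot describes: lying goes up, the signal goes down.
```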
But there is an extremely simple way to curb dishonesty dramatically — just remind people not to be dishonest. All you have to do is show people some kind of reminder of a moral code, and the urge to lie dissipates. In one experiment Ariely describes, researchers asked 500 students at UCLA to try to jot down as many of the Ten Commandments as they could remember. After that, they took part in the matrix experiment. None of them recalled all the commandments, and yet none of them cheated, Ariely said. This was true regardless of whether the students were religious or not. Simply reminding them that Thou shalt not lie has a weirdly powerful effect.
The study was replicated at MIT, but without the religious context: Students were asked to read MIT’s “moral code” before the matrix task. Again, no one cheated, Ariely said — this, despite the fact that MIT doesn’t even have a moral code. “It is not about heaven and hell and being caught,” Ariely said. “It’s about reminding ourselves about our own moral fiber.”
Anticipate Temptation
Every day we are bombarded with temptations — to cheat on our diets, to spend instead of save our paychecks, to tell little white lies. It can be exhausting to have to continually remind ourselves that, long-term, we want to be upstanding people, so we shouldn’t make tempting but unethical short-term decisions. But what if simply thinking about dishonesty could make it easier for us to behave ethically? This novel possibility comes from a set of studies published May 22 in the Personality and Social Psychology Bulletin, which demonstrate that anticipating temptation decreases the likelihood of a person engaging in poor behavior.
In this set of studies, lead researcher Oliver Sheldon, a specialist in organizational behavior at Rutgers University, and co-author Ayelet Fishbach, a social psychologist at the University of Chicago, set out to understand the factors that influence self-control in ethical decision-making. The results suggest “a potential solution to curb dishonesty,” according to Francesca Gino, a behavioral scientist at Harvard Business School who was not involved in the work.
Sheldon and Fishbach designed three experiments to investigate thoughts that occur to people just before they make an ethical decision. In each, they recorded the behavior of groups of participants during an exercise after the individuals were given different combinations of prompts designed to activate thoughts of either past temptation or social and moral integrity. “We predicted, and found, that such forewarning… helps people better prepare to proactively counteract the influence of impending ethical temptations on their behavior,” Sheldon says.
In the first experiment, 196 business school students were designated as buyers (realtors) or sellers of historical homes, with sellers being told to sell only to buyers who would preserve the homes and buyers being told to conceal the fact that their clients would tear down the homes. Before interacting with one another, half the participants in each group were prompted to write about a time they felt tempted to “bend the rules” and the other half were prompted to write about a neutral topic: a time when having a back-up plan aided in a negotiation. As the buyers were the group that faced an ethical conflict in the interaction, theirs was the behavior under analysis. Of the buyers, 67 percent of those who had not been asked to reflect on unethical behavior lied in order to close the deal, compared with 45 percent of those who had been reminded of temptation.
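To get a feel for how large that 67-versus-45-percent gap is, here is a minimal two-proportion z-test sketch. The group sizes are assumptions (roughly half of the 196 students were buyers, split evenly between the two writing prompts); the article reports only the percentages, so treat this as an illustration rather than a reanalysis.

```python
# Illustrative two-proportion z-test for the 67% vs. 45% figures above.
# Group sizes are assumed, not reported in the article.
from math import sqrt

n_control, n_primed = 49, 49        # assumed number of buyers per writing condition
p_control, p_primed = 0.67, 0.45    # proportion who lied to close the deal

# Pooled proportion and standard error of the difference between proportions
p_pool = (p_control * n_control + p_primed * n_primed) / (n_control + n_primed)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_control + 1 / n_primed))
z = (p_control - p_primed) / se

print(f"difference = {p_control - p_primed:.0%}, z = {z:.2f}")
# z comes out near 2.2, a gap unlikely to arise by chance alone under these assumptions.
```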
In the second experiment, 75 college students were told to flip a coin labeled “short” and “long” on either side, to determine whether they had to proofread short or long passages of text. Before the exercise, the students were divided into two groups and given the same writing assignments as in the buyer-seller experiment. The two groups were further divided in half, with one half of each group being told that at their stages of life a person’s values, life goals and personalities are stable; the other half were told that these personal elements can change. By adding this twist to the experiment, the researchers wanted participants to think about how their actions during the exercise could be indicative of their future selves. Students who both thought about temptation and were told their traits were stable were honest about the outcome of their coin flips, whereas most of the remaining participants lied about the outcome in order to do less work. Clearly, thinking ahead served to prepare people for temptation and make them more honest — provided it was combined with thinking about moral integrity.
In the final experiment, the researchers showed 161 online participants six workplace scenarios in which an employee might be tempted to behave unethically, such as stealing office supplies. Once again, the participants completed one of the same writing assignments used in the previous experiments and then were shown the six scenarios either all at once (suggesting that just one unethical decision would be made, thus the behavior constituted an isolated incident) or on six separate screens (suggesting unethical behavior would be a recurring event). Participants who recalled grappling with temptation and considered the scenarios all at once were significantly less inclined to support unethical workplace behavior than those in the other groups. The results of this experiment suggest a similar conclusion to that of the second experiment — that thinking ahead and thinking of moral integrity keeps people honest.
In considering the results of the three experiments, a distinct pattern emerged: First, people who recall previous wrongdoing seem a little less likely to repeat their mistakes; the researchers believe this exercise causes subjects to better “anticipate” temptation. But this effect is not an absolute preventative against dishonesty. The second part of the pattern is that even if people anticipate temptation, they are less likely to resist if they think their decision will have no impact on their future integrity, social acceptance, or self-image. The first part of the pattern constitutes a novel perspective on self-control, whereas the second part is “consistent with previous theorizing on why good people behave badly,” notes Ann Tenbrunsel, an ethicist at Mendoza College of Business who was not part of the study.
Another researcher not involved in this study, Andrea Pittarello, a behavioral psychologist at Ben-Gurion University of the Negev in Israel, proposes that the mechanism behind ethical decision-making in these experiments is that thinking of previous dishonesty “make[s] moral standards more salient and in turn decrease[s] dishonesty.” He points out that thoughts of previous dishonesty might create a sense of guilt that influences people to behave ethically in an attempt to make up for past wrongdoing.
So why do good people do bad things? This new research suggests that the answer could be as simple as a lack of anticipation of conflict and temptation. Dan Ariely, a behavioral economist at Duke University, says the results fit with research by him and others on the idea that seemingly honest people are dishonest “only because of a kind of ‘wishful blindness,’ when we don’t pay attention to our thoughts.”
This area of research could someday pave the way for more practical interventions that could help ensure that ethical decisions are made. In this study, participants had to complete a writing assignment to induce the anticipation that influenced their behavior. Unfortunately, this method is time-consuming and thus impractical for use in day-to-day life. But it does offer a few lessons that could apply to how we think about making decisions. So the next time you are facing an ethical challenge, stop to think about the person you were, the person you are, and the person you want to be. In the words of Warren Buffett, “It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you’ll do things differently.”
Welcome to Cleveland
Since 1978, passengers flying into Mitchell International Airport in Milwaukee have spotted the sign, painted on a roof in six-foot-tall letters.
The only problem? It says 'Welcome to Cleveland'.
Mark Gubin's sign has been confusing - and terrifying - passengers who were suddenly convinced they were going to the wrong place.
'There's not a real purpose for having this here except madness,' Gubin told the Milwaukee Journal Sentinel in 2005. 'Which I tend to be pretty good at.'
Gubin first got the idea to paint the sign when he was having lunch on the roof of his apartment with his assistant at the time, and she noticed all the low-flying planes that came by.
She told the photographer it would be a nice idea to make a sign that welcomed the passengers to Milwaukee. Gubin told her he had an even better idea.
The sign became famous after it was first painted, making headlines in thousands of newspapers and magazines, TV news and even The Tonight Show with Johnny Carson.
'It was all tongue-in-cheek, just for fun,' he said of the sign.
'Living in the world is not a dress rehearsal. You better have fun with it.'
Identifying Cheats
Lady Macbeth would barely register. Flashman would pass with flying colours. Even the Satan of Milton’s Paradise Lost would have a fair chance of gliding under the radar.
All sorts of psychopaths, congenital liars and Machiavellian schemers could be slipping through the net of psychometric testing with disastrous consequences, scientists have warned.
The classic “Big Five” traits used by psychologists to measure personality — extraversion, openness, agreeableness, conscientiousness and neuroticism — are said to be failing to pick up those most likely to cheat the system.
A growing number of researchers are calling for a sixth category to be included in the tests: “honesty/humility”.
In theory, this measures how modest, sincere and fair-minded people are. In practice, it is a moral sieve to pick out those who are the opposite: devious, self-serving and cheerfully fraudulent.
These qualities — if that is the right word — are closely linked to what psychiatrists call the “dark triad” of narcissism, psychopathy and Machiavellianism. Failure to account for this trio of traits could be catastrophic, according to advocates of the new test.
The revised six-point scale, the Hexaco model, was drawn up about a decade ago, but has run into stout resistance. Some psychologists are concerned that its scores are not comparable with earlier studies based on the Big Five, while others worry that the measure is dependent on how honest people are. Which, if you are a thoroughbred Machiavellian, is not very.
A new study may help the Hexaco model’s cause. Academics in Germany and Denmark offered hundreds of volunteers the chance to lie about their scores for personal gain in games of chance. They found that cheating was widespread and strongly predicted by scores in the honesty-humility test — suggesting that it could be more reliable than some psychologists think.
“Discouraging though it may be, the well-publicised cases of fraud, corruption and dishonesty that surface almost daily are probably little more than the tip of the iceberg,” the researchers wrote in the Journal of Research in Personality.
Michael Ashton, of Brock University in Canada, one of the psychologists who invented the Hexaco, said the new evidence was important and innovative and would “help to break down any lingering resistance to adopt it”.
Steven Brown, of Glasgow Caledonian University, who used Hexaco tests in his research on music piracy, said that they helped to explain why people were prepared to flout the law. “The honesty-humility dimension shows the strongest associations with behaviours that offer rewards that are generally considered to run counter to moral and legal conventions,” he said.
Twitter Mobs
It is the pinnacle of free speech — an open, high-speed social networking site where users post snappy, eye-catching news and comment from all around the world, day and night.
However, Twitter has been accused of becoming the favourite playground of a modern-day lynch mob. Members scan the timelines of other users for gaffes, political incorrectness or anything “offensive” and then descend.
Sir Tim Hunt is only the latest victim of the mob. A few ill-judged remarks about female scientists being a distraction in the workplace and 47,358 tweets later his reputation lies in tatters.
Psychologists have long warned about the dangers of mob rule, or “group think” as it is known these days, and they say that Twitter is fertile ground.
Lance Workman, a senior lecturer in psychology at the University of South Wales, said: “When you tweet you feel part of a group, so you gravitate towards the norms of that group. In social psychology we call it deindividuation. People lose a bit of their identity and their values when they join in and say things they wouldn’t normally say.
“Their behaviour gets more extreme as they get excited and feel powerful. Twitter is ripe for group think. It is instant and it is short — 140 characters or less. It is not about standing back and considering what your response is before declaring it.”
So, given the brevity of tweets and the fleeting nature of debate on the site, it is astonishing that institutions as venerable as the Royal Society and University College London care so much about it.
Mick Hume, author of Trigger Warning, a new book about free speech and the internet, said that these organisations had made a fundamental error regarding Twitter.
“They are mistaking what is on Twitter for public opinion. It is easy to create what looks like a groundswell of opinion on Twitter but it really is nothing of the sort,” Hume said. “Tweeters are zealots, a small, vociferous group of zealots who punch above their weight and Twitter is their favourite playground. But these institutions are so terrified they are out of touch with the public that they act on it.”
Matthew Taylor, chief executive of the Royal Society of Arts, went further: “A typical Twitter storm is the equivalent of 50 or 100 people standing on a street corner shouting abuse about someone they have never met to no one in particular. In most circumstances we would consider that behaviour antisocial and possibly insane. However, Twitter provides people the platform to behave in this way without even leaving the house.”
Sir Tim is not the first to fall foul of the Twitter mob, although his fall from grace has been one of the most dramatic. Justine Sacco, a PR executive, made a tasteless joke on Twitter before she boarded a flight about going to Africa and risking contracting Aids. By the time she landed, she had lost her job. Another victim was Jonah Lehrer, a writer on The New Yorker who invented and amended quotes for a book and was hounded out of his job. Matt Taylor, a scientist who worked on the Philae lander project, was vilified for wearing a shirt featuring semi-naked women. The hashtag was #shirtstorm.
Some users have defended Twitter, including Rhiannon Lucy Cosslett. She created the hashtag #distractinglysexy, where female scientists poked fun at Sir Tim by posting images of themselves in protective clothing. She said that Twitter did not lead to Sir Tim’s demise. “It was clearly embarrassment on the part of the scientific community at his retrograde sexism.”
A User's Guide To Rational Thinking
Nudge Unit
A text message sent on a Sunday evening and at the end of the half-term holiday has had a dramatic effect on the number of youngsters dropping out of further education. The text, which addresses students by name, reminds them to plan their journey to college in advance and says that their tutor is looking forward to seeing them. It costs 3.5p to send and has cut dropout rates by a third. One in five students had been dropping out of further education programmes for those with poor maths and English skills.
The text was written by Downing Street’s “nudge” unit to help solve the longstanding problem. The Behavioural Insights Team has also successfully encouraged the superrich to pay their tax on time, raised the number of organ donors by using messages on the DVLA website and encouraged investment bankers to donate a day of their salary to charity by giving them a small packet of sweets along with the letter asking for money.
The nudge unit’s triumphs and failures have been documented in a report on its work over the past two years. Its staff are experts in human behaviour and create “behavioural prompts” to encourage people to do the right thing without issuing sanctions or threats.
David Cameron set up the unit in one of his first acts as prime minister in 2010 and it has now been spun off into an enterprise in its own right, advising government departments.
David Halpern, the head of the unit, said that he had been taken aback by some of its successes, including an initiative on police recruitment that he said was “absolutely extraordinary”. Large numbers of candidates from ethnic minorities were routinely dropping out of the recruitment process halfway through. The unit drafted an email to be sent out at the most vulnerable point asking recruits to reflect on why the job mattered to them. It had no impact on white candidates but led to a jump of 50 per cent in ethnic minorities completing the recruitment process. Printing information on how to switch energy suppliers on the envelope containing the winter fuel allowance boosted inquiries to Ofgem’s website by 20 per cent.
Not everything works. An initiative to help women stop smoking early in pregnancy, using stickers on pregnancy test kits, had no impact. A scheme to help consumers save money on heating with bespoke advice on what level to set their thermostats also failed to take off.
Mr Halpern says that the unit can test several things at once to see which is most effective, a luxury not available to government departments rolling out policies. In the case of late taxpayers, a general letter telling the public that most households pay their tax on time had some success with everyone but the most wealthy, so another attempt was made targeting just this group. They were told that their money was needed for vital public services such as schools and hospitals and that the rest of the country was depending on them. Prospective organ donors were unmoved by pictures on the DVLA website suggesting who might benefit but did respond well when reminded that they might need an organ one day.
Mr Halpern is writing a book about setting up the unit. He recalls that it was greeted with derision. The former Cambridge academic was given two years to make a ten-fold return on the initial £500,000 investment or be wound up. He met the target and more. “Civil servants are sceptical because they have seen so many ideas come and go. However they have seen what can be achieved by making small changes and using simple techniques and now several departments have their own nudge unit, including HMRC,” Mr Halpern said.
The project has been replicated in Australia and Singapore, and there are plans for similar units in the US and Germany.
Yawn to detect a psychopath
If you want to spot a psychopath, you could start by yawning at them. Scientists have found that the more psychopathic traits people have, the less likely they are to be afflicted by “contagious yawning”.
To a greater or lesser degree, most people can be induced to yawn by seeing people yawning, or even just by thinking about yawning. This contagious yawning is common to many mammals, and is believed to serve a social purpose, perhaps by building empathy.
Researchers from Baylor University in Texas wanted to test this hypothesis by seeing if psychopaths were less affected by the yawns of others. For the study, published in the journal Personality and Individual Differences, 135 students were tested for psychopathy. This involved completing a test that measured traits such as “Machiavellian egocentricity”, “coldheartedness” and “rebellious nonconformity”.
They were then shown a video in which people exhibited a variety of facial expressions, as well as yawning. While this went on the researchers watched and recorded how often the students yawned. What they found was that those who scored highly on the scale of “coldheartedness” were more than a third less likely to yawn.
Brian Rundle, a PhD student, said that the work helped to validate the idea that contagious yawning was about empathy. “There is a lot of educated speculation about this,” he said. “Why does our species yawn when we see somebody else yawn? One of the biggest lines of evidence is that it is very much related to empathy — so what I wanted to do was test it in a population known for having a lack of empathy.”
For the test of psychopathy, people were asked to say how much they agreed with statements such as “I cringe when an athlete gets badly injured during a game on TV”, or “I do favours for people even when I know I won’t see them again”.
“A lot of these people are still very pleasant people, it is just clear they’re not able to connect with you as much as other people are,” Mr Rundle said. He added that his research should not be taken to mean that people who fail to yawn when you yawn are psychopathic. “While this is a really interesting finding, if you don’t respond to contagious yawn it doesn’t mean you have something wrong with you.”
Why, though, would empathetic yawning be useful to humans? “That is a big question,” he said. “There’s some evidence to show that in baboons or dogs or chimps the alpha male tends to be the one to yawn first.” He said that in our caveman past, a yawn from the chief could help synchronise behaviour. “If you’re sitting around the campfire it cues everyone else to yawn, and instead of going to bed at separate times they all do it at the same time.”
Not As Moral As You Think
I’ve been teaching Stanley Milgram’s electric-shock experiment to business school students for more than a decade, but “Experimenter,” a movie out this week about the man behind the famous social science research, illuminates something I never really considered.
In one scene, Milgram (played by Peter Sarsgaard) explains his experiment to a class at Harvard: A subject, assigned to be the “teacher,” is ordered to administer increasingly intense shocks to another study participant in the role of “learner,” allegedly to illustrate how punishment affects learning and memory. Except, unbeknownst to the subject, the shocks are fake, the other participant works for the lab and the study is about obedience to authority. More than 60 percent of subjects obeyed fully, delivering up to the strongest shock, despite cries of pain from the learner. Those cries were pre-recorded, but the teachers’ distress was real: They stuttered, groaned, trembled and dug their fingernails into their flesh even as they did what they were asked to do.
“How do you justify the deception?” one student asks.
“I like to think of it as an illusion, not deception,” Milgram counters, claiming that the former has a “revelatory function.”
The student doesn’t buy it: “You were delivering shocks, to your subjects, psychological shocks . . . methodically for one year.”
Before seeing the film, I didn’t fully appreciate that parallel. In the grainy black-and-white documentary footage that the real-life Milgram produced, he remains off-camera. I’d never put much thought into the moral dilemma he faced. I’d never asked myself what I would have done in his position.
I’m fairly certain that — even in an era before institutional review boards, informed consent and mandatory debriefings — I would have determined that it’s wrong to inflict that much psychological distress. But I can’t be absolutely sure.
When I ask students whether, as participants, they would have had the courage to stop administering shocks, at least two thirds raise their hands, even though only one third of Milgram’s subjects refused. I’ve come to refer to this gap between how people believe they would behave and how they actually behave as “moral overconfidence.” In the lab, in the classroom and beyond, we tend to be less virtuous than we think we are. And a little moral humility could benefit us all.
Moral overconfidence is on display in politics, in business, in sports — really, in all aspects of life. There are political candidates who say they won’t use attack ads until, late in the race, they find themselves behind in the polls and under pressure from donors and advisers, and their ads become increasingly negative. There are chief executives who come in promising to build a business for the long term but then condone questionable accounting gimmickry to satisfy short-term market demands. There are baseball players who shun the use of steroids until they age past their peak performance and start to look for something to slow the decline. These people may be condemned as hypocrites. But they aren’t necessarily bad actors. Often, they’ve overestimated their inherent morality and underestimated the influence of situational factors.
Moral overconfidence is in line with what studies find to be our generally inflated view of ourselves. We rate ourselves as above-average drivers, investors and employees, even though math dictates that can’t be true for all of us. We also tend to believe we are less likely than the typical person to exhibit negative qualities and to experience negative life events: to get divorced, become depressed or have a heart attack.
In some ways, this cognitive bias is useful. We’re generally better served by being overconfident and optimistic than by lacking confidence or being too pessimistic. Positive illusions have been shown to promote happiness, caring, productivity and resilience. As psychologists Shelley Taylor and Jonathon Brown have written, “These illusions help make each individual’s world a warmer and more active and beneficent place in which to live.”
But overconfidence can lead us astray. We may ignore or explain away evidence that runs counter to our established view of ourselves, maintaining faith in our virtue even as our actions indicate otherwise. We may forge ahead without pausing to reflect on the ethics of our decisions. We may be unprepared for, and ultimately overwhelmed by, the pressures of the situation. Afterward, we may offer variations on the excuse: “I was just doing what the situation demanded.”
The gap between how we’d expect ourselves to behave and how we actually behave tends to be most evident in high-pressure situations, when there is some inherent ambiguity, when there are competing claims on our sense of right and wrong, and when our moral transgressions are incremental, taking us down a slippery slope.
All these factors were present in Milgram’s experiment. The subjects felt the pressure of the setting, a Yale University lab, and of prompts such as “It is absolutely essential that you continue.” There was ambiguity surrounding what a researcher might reasonably request and what rights research subjects should demand. There also was a tension between the subjects’ moral obligation to do no harm and their obligation to dutifully complete an experiment they had volunteered for and that might contribute to the broader advancement of science and human understanding. Additionally, because the subjects were first asked to administer low-voltage jolts that increased slowly over time, it was tricky for them to determine exactly when they went too far and violated their moral code.
For a real-world example, consider Enron. Employees were under extraordinary pressure to present a picture of impressive earnings. Ambiguities and conflicts were built into the legal and regulatory systems they were operating in. And so they pushed accounting rules to their limits. What began as innovative and legitimate financial engineering progressed to a corporate shell game that met the letter of the law but flouted its spirit — and ultimately led to Enron’s collapse. This is not to say that Enron’s top executives — Kenneth Lay, Jeffrey Skilling and Andrew Fastow — were good people, but to emphasize how others throughout the organization got caught up in morally troublesome behavior.
We would see fewer headlines about scandal and malfeasance, and we could get our actions to better match our expectations, if we tempered our moral overconfidence with some moral humility. When we recognize that the vast majority of us overestimate our ability to do the right thing, we can take constructive steps to limit our fallibility and reduce the odds of bad behavior.
One way to instill moral humility is to reflect on cases of moral transgression. We should be cautious about labeling people as evil, sadistic or predatory. Of course, bad people who deliberately do bad things are out there. But we should be attuned to how situational factors affect generally good people who want to do the right thing.
Research shows that when we are under extreme time pressure, we are more likely to behave unethically. When we operate in isolation, we are more likely to break rules. When incentives are very steep (we get a big reward if we reach a goal, but much less if we don’t), we are more likely to try to achieve them by hook or by crook.
I teach a case about an incentive program that Sears Auto Centers had in the 1990s. The company began offering mechanics and managers big payments if they met certain monthly goals — for instance, by doing a certain number of brake jobs. To make their numbers, managers and mechanics began diagnosing problems where none existed and making unnecessary repairs. At first, employees did this sporadically and only when it was absolutely necessary to make quota, but soon they were doing unneeded brake jobs on many cars. They may not have set out to cheat customers, but that’s what they ended up doing.
Along with studying moral transgression, we should celebrate people who do the right thing when pressured to do wrong. These would include whistleblowers such as Jeffrey Wigand of the tobacco industry and Sherron Watkins of Enron. But we also can look to the civil rights movement, the feminist movement and the gay rights movement, among others, to find people who used their ingenuity and took great risks to defy conventions or authorities they considered unjust.
I teach another case study in which a senior banker asks an associate to present data to a client that makes the expected returns of a transaction look more attractive than they actually are. When I ask students how they would respond, most say they’d initiate a conversation with the boss in which they gently push back. I then role-play as a busy banker who’s on the phone, annoyed at the associate showing up to talk about this issue again: “Why are you back here? Haven’t you done it yet? I’ll take responsibility — just do as I instructed.” Asked what they’d do next, the students generally fall into two groups: Most say they’d cave and go along with the instructions, and some say they would resign. What’s interesting is how they stake out the two extreme positions; very few have the imagination to find a middle ground, such as talking to peers, or senior employees beyond the boss, or seeking out an ombudsman. The aim in teaching this case is to help students see ways to behave more resourcefully and imaginatively in the face of pressure, and to adopt a wider perspective that offers alternative solutions.
Leaders have an additional duty to reduce the incentives and pressures in their organizations that are likely to encourage moral transgressions — and to clear a path for employees to report behavior that steps over boundaries.
Some professions have had success implementing formal codes of conduct. Doctors look to the Hippocratic Oath as a simple guide to right and wrong, and police read suspects their Miranda rights to limit their own sense of power and make clear that arrestees should not feel powerless. Unfortunately, in business, such scripts remain less developed. Vision and values statements are common. But Enron’s certainly didn’t stand in the way of wrongdoing.
In the absence of effective moral codes, or in combination with them, a culture of “psychological safety” can help people find moral courage. The concept, pioneered by my Harvard colleague Amy Edmondson, is that organizations should encourage employees to take risks — report mistakes, ask questions, pitch proposals — without the fear that they will be blamed or criticized. Edmondson found that high-performing medical teams had higher rates of reported errors than teams with lower performance. The teams comfortable reporting errors were able to collectively learn from their mistakes. Similarly, places where employees are comfortable objecting to moral pressure or flagging moral transgression have the best chance of correcting course.
Stanley Milgram defended his study and its methodology in public without hesitation. In his notebooks, however, there’s a hint of moral wavering: “If we fail to intervene, although we know a man is being made upset, why separate these actions of ours from those of the subject, who feels he is causing discomfort to another.”
Milgram apparently overrode that instinct. But we shouldn’t. If we all maintain a healthy dose of moral humility, we can avoid the blind obedience of the subjects in his experiment, as well as the harm we can unwittingly cause when in positions of authority.
Queueing
EVER wondered why the queues at the other checkouts in the supermarket always seem to move faster than yours?
Now a new book on one of modern life’s most intractable mysteries — how to get the best out of queuing — has the answer: because we only notice how fast the other queues are moving when ours is moving slowly.
As author David Andrews explains in Why Does the Other Line Always Move Faster?, to be published in December, we experience time differently when we are waiting than when we are engaged in a process. So, if by chance you have chosen the fastest line at the checkout, you do not notice that because you are focused instead on unloading the trolley and paying.
You notice the other lines moving more swiftly only because yours is not. In other words, the other line always moves faster because you are not in it. “Our minds are rigged against us. Regardless of time actually spent, the slowest line will always be the one you are standing in,” Andrews writes.
Of course another part of the answer is simple probability. If there are three queues at the supermarket and you join the middle one, there is a two in three chance that one of the queues on either side of you will be the fastest, whereas yours has only a one in three chance.
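A quick simulation makes that arithmetic concrete. The sketch below, in Python, is a minimal illustration rather than anything from the book: it assumes three equally matched queues with hypothetical random service times, joins the middle one over and over, and counts how often the chosen queue turns out to be the fastest. The answer settles at roughly one in three.

```python
import random

# Three checkout queues, each equally likely to be the fastest on any
# given visit. How often is the one you joined the winner?
TRIALS = 100_000
wins = 0
for _ in range(TRIALS):
    # Hypothetical service times drawn from the same distribution,
    # so no queue has a built-in advantage.
    times = [random.expovariate(1.0) for _ in range(3)]
    my_queue = 1  # you joined the middle queue
    if times[my_queue] == min(times):
        wins += 1

print(f"Your queue was fastest in {wins / TRIALS:.1%} of trials")
# Prints roughly 33%: two times out of three, a neighbouring queue beats yours.
```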
Andrews dates his fascination with queuing — or “waiting in line” as the Americans call it — to his childhood in the Romanian capital Bucharest, where queuing for basic food was a way of life.
Along with a brief history of the phenomenon, the book offers tips on how to deal with queues. These range from how to deal with someone who cuts in, to not making eye contact with others; and, if the line is disorganised, to “stand in such a position . . . that anyone approaching will see that you are waiting”.
Andrews is quick to debunk one of our most dearly held national beliefs: that we in Britain have always been a nation of civilised queuers.
This is a relatively recent fiction, Andrews claims. “The myth that the British are willing, patient and even eager to stand in line dates to Second World War propaganda during a time of shortages and rationing. Queues were in fact often tense and politically charged affairs that had to be policed in case of riots.”
Nonetheless even in extremis the British tend towards an orderly queue. Andrews admits to bemusement at footage of the 2011 London riots where looters were observed patiently taking their turn when stealing from shops.
The Chinese, however, had to be taught to queue in the run-up to the Beijing Olympics in 2008. The effort was made after years of complaints from tourists and expats about the lack of queue etiquette. Bus stations were decked with signs that read in Chinese: “I wait in line and am cultured. I display courtesy and am happy,” and, “It’s civilised to queue, it’s glorious to be polite.”
“The act of stepping into line is like putting a stake in the ground: submissive to the people in front of you, dominant to the people behind you,” Andrews writes.
That sounds more like a summons to battle than the weekly shop at Waitrose. But if you do not have the time to queue or simply cannot be bothered, help is at hand. In London, companies such as TaskRabbit will now let you pay someone to do it for you.
James Hamlin, of TaskRabbit, said: “Queuing has always been, and always will be, viewed as a waste of our time. Queuing is one of the tasks that people often outsource to TaskRabbit.”
For those who are unable to outsource, however, scientists have worked out an answer to perhaps the most crucial question of all: how do you choose the queue that will move fastest? Simple: choose the one with the most men in it.
This is because women are more patient than men, who are more likely to just give up if the queue is moving too slowly.
According to researchers at Surrey University: “Men were more likely to dislike waits than women and be less accepting of their inevitability.”
But then again, it does not pay to overcomplicate it. When Dan Meyer of Desmos, a US-based online maths business, analysed the till receipts at his local supermarket he discovered that each person in line adds at least 41 seconds to your waiting time, regardless of how many items they have, because of the time taken to unload, pack and pay. His advice: just choose the line with the fewest people in it.
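To see why the per-person overhead tends to dominate, here is a rough back-of-the-envelope sketch. The 41-second per-person figure comes from the article; the per-item scanning time and the example basket sizes are illustrative assumptions, not Meyer’s numbers.

```python
# Back-of-the-envelope estimate of checkout waiting time.
PER_PERSON_OVERHEAD = 41  # seconds per shopper to unload, pack and pay (figure from the article)
PER_ITEM_TIME = 3         # assumed seconds to scan a single item (illustrative only)

def expected_wait(people_ahead: int, avg_items: int) -> int:
    """Rough wait, in seconds, for a queue of the given length and basket size."""
    return people_ahead * (PER_PERSON_OVERHEAD + avg_items * PER_ITEM_TIME)

# Two shoppers with 25 items each vs three shoppers with 15 items each:
print(expected_wait(people_ahead=2, avg_items=25))  # 232 seconds
print(expected_wait(people_ahead=3, avg_items=15))  # 258 seconds
# The shorter queue wins even though it holds more items in total,
# because every extra person carries a fixed overhead.
```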
Appeal to the Other Person's Values 1
IN business, everyone knows that if you want to persuade people to make a deal with you, you have to focus on what they value, not what you do. If you’re trying to sell your car, you emphasize the features of the sale that appeal to the buyer (the reliability and reasonable price of the vehicle), not the ones that appeal to you (the influx of cash).
This rule of salesmanship, as we demonstrated in a series of experiments detailed in a recent article in the journal Personality and Social Psychology Bulletin, also applies in political debate — i.e., you should frame your position in terms of the moral values of the person you’re trying to convince. But when it comes to politics, this turns out to be hard to do. We found that people struggled to set aside their reasons for taking a political position and failed to consider how someone with different values might come to support that same position.
In one study, we presented liberals and conservatives with one of two messages in support of same-sex marriage. One message emphasized the need for equal rights for same-sex couples. This is the sort of fairness-based message that liberals typically advance for same-sex marriage. It is framed in terms of a value — equality — that research has shown resonates more strongly among liberals than conservatives. The other message was designed to appeal to values of patriotism and group loyalty, which have been shown to resonate more with conservatives. (It argued that “same-sex couples are proud and patriotic Americans” who “contribute to the American economy and society.”)
Liberals showed the same support for same-sex marriage regardless of which message they encountered. But conservatives supported same-sex marriage significantly more if they read the patriotism message rather than the fairness one.
In a parallel experiment, we targeted liberals for persuasion. We presented a group of liberals and conservatives with one of two messages in support of increased military spending. One message argued that we should “take pride in our military,” which “unifies us both at home and abroad.” The other argued that military spending is necessary because, through the military, the poor and disadvantaged “can achieve equal standing,” by ensuring they have “a reliable salary and a future apart from the challenges of poverty and inequality.”
For conservatives, it didn’t matter which message they read; their support for military spending was the same. However, liberals expressed significantly greater support for increasing military spending if they read the fairness message rather than the patriotism one.
If you’re thinking that these reframed arguments don’t sound like ones that conservatives and liberals would naturally be inclined to make, you’re right. In an additional study, we asked liberals to write a persuasive argument in favor of same-sex marriage aimed at convincing conservatives — and we offered a cash prize to the participant who wrote the most persuasive message. Despite the financial incentive, just 9 percent of liberals made arguments that appealed to more conservative notions of morality, while 69 percent made arguments based on more liberal values.
Conservatives were not much better. When asked to write an argument in favor of making English the official language of the United States that would be persuasive to liberals (with the same cash incentive), just 8 percent of conservatives appealed to liberal values, while 59 percent drew upon conservative values.
Why do we find moral reframing so challenging? There are a number of reasons. You might find it off-putting to endorse values that you don’t hold yourself. You might not see a link between your political positions and your audience’s values. And you might not even know that your audience endorses different values from your own. But whatever the source of the gulf, it can be bridged with effort and consideration.
Maybe reframing political arguments in terms of your audience’s morality should be viewed less as an exercise in targeted, strategic persuasion, and more as an exercise in real, substantive perspective taking. To do it, you have to get into the heads of the people you’d like to persuade, think about what they care about and make arguments that embrace their principles. If you can do that, it will show that you view those with whom you disagree not as enemies, but as people whose values are worth your consideration.
Even if the arguments that you wind up making aren’t those that you would find most appealing, you will have dignified the morality of your political rivals with your attention, which, if you think about it, is the least that we owe our fellow citizens.
Appeal to the Other Person's Values 2
Each of us understands that different people are swayed by different sorts of arguments, based on different ways of viewing the world. That seems sort of obvious. A toddler might want an orange juice because it's sweet, not because she's trying to avoid scurvy, which might be the argument that moves an intellectual but vitamin-starved sailor to take action.
So far, so good.
The difficult part is this: Even when people making an argument know this, they don't like making an argument that appeals to the other person's alternative worldview.
Worth a full stop here. Even when people have an argument about a political action they want someone else to adopt, or a product they want them to buy, they hesitate to make that argument with empathy. Instead, they default to talking about why they believe it.
To many people, it feels manipulative or insincere or even morally wrong to momentarily take the other person's point of view when trying to advance an argument that we already believe in.
And that’s one reason why so many people claim to not like engaging in marketing. Marketing is the empathetic act of telling a story that works, that’s true for the person hearing it, that stands up to scrutiny. But marketing is not about merely sharing what you, the marketer, believe. It’s about what we, the listener, believe.
5 Myths About Behaviour Change
Each year, nearly 50 percent of Americans vow to change their behavior come Jan. 1, resolving to lose weight (one-third of us want to slim down every year), get more organized or fall in love. Odds are, they won’t succeed. Just 8 percent achieve their New Year’s resolutions. One-quarter give up after the first week. These statistics are bleak but not surprising. Many New Year’s pledges involve trying to establish new habits or conquer bad ones. And there’s a lot of misinformation swirling around about how habits are formed and how they can be changed. Here are some of the most common.
1. A lack of willpower is to blame for our bad habits.
When people fail to change their habits, they often blame their weak wills. One-third of Americans say they lack the self-control they need to accomplish their goals. About one-fourth attribute trouble sticking to a diet, for example, to personal character defects such as laziness.
In truth, many of our behaviors are not guided by self-control. Half the tasks we perform daily are things we do without thinking. And studies show that people with high levels of self-control aren’t constantly battling temptation — they’re simply relying on good habits to exercise, make the kids’ lunch or pay the bills on time without thinking about it much. In that way, high self-control is an illusion, actually consisting of a bedrock of habitual patterns. That makes sense: It would be exhausting to repeatedly struggle to control our actions to do the right thing.
2. Apps can help us change our behavior.
Apps like Fitbit, MyFitnessPal and BookLover promise to help us change our habits by tracking our good (or bad) behavior. And some websites say they work, running lists like “17 bad habits you can kick using nothing but a smartphone” or “Mobile apps that can help you kick your bad habits.”
But most apps simply monitor what you’re doing, which doesn’t necessarily lead to behavior change. As one group of scientists noted, “The gap between recording information and changing behavior is substantial.” There is, they wrote, “little evidence . . . that [apps] are bridging that gap.”
In my research, I’ve found that certain types of planning and monitoring actually get in the way of creating new habits, perhaps because they focus our attention on things that are irrelevant to behavior change. Some people might like these devices. But until there’s broader evidence of effectiveness, I recommend that most people don’t bother with them.
3. It takes 21 days to form a new habit.
This idea stems from a popular 1960s book by Maxwell Maltz, and it’s often repeated today. Self-help books promise that you can fix your marriage, jump-start your exercise routine or cure your money woes in just three weeks.
In truth, there’s no magic number when it comes to establishing habits. They are created slowly as people repeat behaviors in a stable context. Some simple health behaviors, such as drinking a glass of water before each meal, had to be repeated for only 18 days before people did them without thinking, according to one recent study. Others, such as exercise, needed closer to a year of repetition. Researchers found that it took an average of 66 days for a new habit to form.
For most people, more important than repeating an action for a certain number of days is establishing a routine. Doing something at the same location or time of day (like putting on sunscreen before you leave the house every morning) can help outsource control of the action. In a study of regular exercisers, for example, almost 90 percent had a location or time that cued their desire to exercise. For them, exercising was more automatic and required less thought and willpower.
4. The best way to change a habit is to set realistic goals.
In my lab, we recently conducted a study with people who wanted to change some behavior. When asked whether they would prefer a self-help book about goal-setting or one about environmental change, they overwhelmingly chose the book on goal-setting.
This is a mistake. Modifying our environment lets us remake our behavior without over-relying on willpower. Unwanted habits can be disrupted by changing the cues that activate them. People eat less unhealthy food if they put lids on candy dishes at the office and if stores place unhealthy snacks at the back of displays. Altering your surroundings can also set up cues to promote desired behaviors. People who weigh less keep fruit on their kitchen counters. And children without televisions in their bedrooms have lower BMIs than children with. Of course, these sorts of associations don’t prove that putting fruit on your countertop or removing TVs will make you thinner. But they illustrate how our environments cue healthy behaviors — or the reverse.
A study of returning Vietnam War veterans shows just how important environment can be. Twenty percent were actively addicted to heroin while they were serving overseas. But just 5 percent relapsed after they returned home. Researchers concluded that these shockingly low rates were due to the dramatic change in environment vets experienced. Back in the States, the triggering cues all but disappeared.
5. Learning about the benefits of new habits helps change our behavior.
This common misperception forms the basis for a plethora of public health efforts. For example, the federal government’s “Fruits and Veggies, More Matters” campaign has tried to educate people about the benefits of eating greens. It hasn’t worked. Since its inception in 2007, fruit and vegetable consumption has gone down.
That’s no surprise. Research has repeatedly shown that educating people about the benefits of a behavior does not translate to changing habits. Habits are formed through doing. And the long-term memory systems involved in habit formation don’t shift with new resolutions.
In our research, we’ve found that old habit associations endure, and hinder behavioral changes, even after people adopt new intentions. For example, once you see a prompt to surf the Web, it’s hard to get that out of your head and instead focus on your resolution to stay organized by paying the bills. With habits, we learn not by learning, but by doing.
Murder
On a television show last week, the professional “mentalist” Derren Brown persuaded three people to commit murder — or, rather, to think they had committed murder. Channel 4’s Pushed to the Edge involved an expensively staged trick to persuade four ordinary people to push a man off a building. Three complied; one did not.
The man being pushed was really an actor, and the trick was executed by actors who had been trained and directed by Brown, who gleefully egged them on from backstage.
What did it prove? The ostensible reason was to demonstrate “the power of social compliance”. This is odd, because if you did not know about this power, you must have been asleep for 4,000 years or, at least, locked in a room with no access to news.
The three great monotheistic religions — Judaism, Christianity and Islam (total adherents: 3.7bn) — all begin with a story about the power of social compliance, even in a society of just two people. A man named Adam is bullied by a woman, Eve, into eating an apple and gets kicked out of paradise.
It’s a myth, you say. No, it’s a wisdom story that imparts a great truth by which believers live — they call it “original sin” — and from which nonbelievers should learn.
Original sin means simply that we can all be evil (or just bad on a daily basis).
Look around. How does Isis get people to behead even their co-religionists or blow themselves up? By the power of social compliance.
In his book Ordinary Men, the historian Christopher Browning studies German policemen rounding up Jews in Poland in 1942 and concludes that they were driven not by hatred or ideology but by obedience to authority and peer pressure.
There is some science on this subject but it is sensational rather than good. In 1961 the world was enthralled by the trial in Jerusalem of Adolf Eichmann, one of the bureaucrats behind the Holocaust. The horrors perpetrated by this dull man made many people ask: “What is wrong with the Germans? Were they unusual in putting obedience before conscience?”
At Yale a psychologist called Stanley Milgram did not think the Germans had any particular problem: he thought all humans had the same problem, a capacity for evil. Milgram set up what became a famous experiment (and the model for Brown’s) to see how obedient people would be if ordered to harm others. This involved (like Brown’s) a team of actors and collaborators who manipulated volunteer members of the public on Milgram’s behalf. Believing they were taking part in an experiment into how memory works under duress, two-thirds of the volunteers obeyed orders and gave what they understood were potentially fatal electric shocks to wired-up “learners” who had given the wrong answers.
The first thing to say about both Milgram and Brown is that it took a fantastic amount of plotting and planning to push people to the homicidal edge.
Secondly, in her book Behind the Shock Machine, the Australian psychologist Gina Perry describes what she found in the Milgram archives. Such was the zeal of the experimenters to get the right result that the figures were exaggerated and the procedures were unacceptably extreme. She also found that a lot of Milgram’s volunteers guessed they were not inflicting real electric shocks but carried on anyway.
This does not disprove the argument that everybody is capable of evil — it cannot be disproved — but it does question the applicability of these staged events to ordinary life.
There were two other famous experiments into human conformity: the Asch line-judgment studies and the Stanford prison experiment.
A psychologist called Solomon Asch, working at Swarthmore College in Pennsylvania, showed that people’s desire to conform could make them deny the evidence of their own eyes. In a series of trials in the 1950s, Asch placed unwitting volunteers into groups of his collaborators. He found that when the collaborators gave an obviously wrong description of an item they were looking at, more than a third of the volunteers agreed with them.
In the gruesome Stanford prison experiment in 1971, volunteers were asked to play the roles of guards and prisoners. Although barred from using physical punishment, the guards rapidly descended into sadism, subjecting the prisoners to strip searches, confiscating their mattresses and restricting access to toilets.
There are problems with all this evidence. There’s a theatricality — explicitly in Brown, but implicitly in the more respectable experiments — which, in itself, will tend to make people behave in a surprising manner by making them feel like actors.
In addition, Brown’s show was rendered meaningless by the care he took in choosing his subjects. He vetted 200 people to find the four he thought most compliant. Statisticians would call this a gross sampling error. All Brown had shown was that three of the 200 people originally screened (1.5%) might, under extreme duress, kill somebody — a surprisingly low number.
On top of that there is the intensity of the experience. Brown’s victims were plainly suffering — and I fear that the three who did commit “murder” may suffer more in years to come as they consider the depths to which they sank.
Crucially, the show required provocateurs whose commitment to the fiction had to be absolute. Their behaviour — repeating certain phrases and words — was hypnotic. The primary subject did, at times, appear to be entranced by the repetitions.
The hypnotic effect is important in all these cases. I was once very effectively hypnotised while I was researching a book. I ended up seeing a flying saucer and was about to be abducted by aliens when the kindly psychologist, Dr David Oakley, yanked me out of my trance.
The doubleness of my state of mind — I both knew I was and knew I wasn’t seeing a saucer — was astounding and suggested our capacity for delusion is like our capacity for evil: real but subtle and hard to define and manage. I have no doubt that I would have ended up going quite a long way down the road, if not with Brown, at least with Milgram and the rest.
So we should be sceptical of all these attempts to scientise or turn into entertainment what is not just obvious — Stalin, Hitler, Mao, Isis — but also embodied in ancient wisdom: original sin. Evil exists. But this raises the further question: where is it?
Philip Zimbardo, the psychologist who conducted the Stanford prison experiment, concluded that everybody could “turn evil” but that it was not an inherent disposition. In other words, evil resides in situations, not in people. This echoes the romantic view that we are born innocent but society corrupts us.
Milgram tended to locate the problems in authority and institutions: “When he merges his person into an organisational structure, a new creature replaces autonomous man, unhindered by the limitations of individual morality, freed of humane inhibition, mindful only of the sanctions of authority.” This would fit neatly with the idea that evil was drawn out of the Germans by bad institutions.
I don’t buy any of this. Situations, society and institutions are created by humans, therefore it seems meaningless to me to say that evil is not in people. It must be, or every human society would be perfect. So it’s in us and we can all fall into the pit of the evil we contain. This is not a reason to get depressed; it’s a reason to celebrate.
For most of us in Britain life is, in historical terms, quite astonishingly comfortable. This cannot be because we have less evil in us than, say, the people of Germany did when the Nazis were in control. It must be, therefore, that the evil is contained, as indeed it is in many countries.
What contains it is the gentle pressure and freedom of social relations and the general benignity of our institutions. This can all be lost in an instant but, for the moment, it is where we, happily, are.
As long as we know that it can all go horribly wrong because of the human capacity for evil, then we have at least one line of defence.
Finally, on reflection, I realise I have been too hard on Derren Brown. His show worked because the four participants weren’t his real subjects; the actors and crew were. When one actor playing a policeman who had talked a man into snatching a baby was suddenly conscience-stricken — “That’s horrible, that’s horrible,” he moaned — Brown consoled him: “OK, well done for pushing him into it.”
He got them to do something obviously wicked — causing people to suffer for entertainment — and thereby proved that when a TV show is involved, people will do absolutely anything, however vile.
Resilience
Norman Garmezy, a developmental psychologist and clinician at the University of Minnesota, met thousands of children in his four decades of research. But one boy in particular stuck with him. He was nine years old, with an alcoholic mother and an absent father. Each day, he would arrive at school with the exact same sandwich: two slices of bread with nothing in between. At home, there was no other food available, and no one to make any. Even so, Garmezy would later recall, the boy wanted to make sure that “no one would feel pity for him and no one would know the ineptitude of his mother.” Each day, without fail, he would walk in with a smile on his face and a “bread sandwich” tucked into his bag.
The boy with the bread sandwich was part of a special group of children. He belonged to a cohort of kids—the first of many—whom Garmezy would go on to identify as succeeding, even excelling, despite incredibly difficult circumstances. These were the children who exhibited a trait Garmezy would later identify as “resilience.” (He is widely credited with being the first to study the concept in an experimental setting.) Over many years, Garmezy would visit schools across the country, focussing on those in economically depressed areas, and follow a standard protocol. He would set up meetings with the principal, along with a school social worker or nurse, and pose the same question: Were there any children whose backgrounds had initially raised red flags—kids who seemed likely to become problem kids—who had instead become, surprisingly, a source of pride? “What I was saying was, ‘Can you identify stressed children who are making it here in your school?’ ” Garmezy said, in a 1999 interview. “There would be a long pause after my inquiry before the answer came. If I had said, ‘Do you have kids in this school who seem to be troubled?,’ there wouldn’t have been a moment’s delay. But to be asked about children who were adaptive and good citizens in the school and making it even though they had come out of very disturbed backgrounds—that was a new sort of inquiry. That’s the way we began.”
Resilience presents a challenge for psychologists. Whether you can be said to have it or not largely depends not on any particular psychological test but on the way your life unfolds. If you are lucky enough to never experience any sort of adversity, we won’t know how resilient you are. It’s only when you’re faced with obstacles, stress, and other environmental threats that resilience, or the lack of it, emerges: Do you succumb or do you surmount?
Environmental threats can come in various guises. Some are the result of low socioeconomic status and challenging home conditions. (Those are the threats studied in Garmezy’s work.) Often, such threats—parents with psychological or other problems; exposure to violence or poor treatment; being a child of problematic divorce—are chronic. Other threats are acute: experiencing or witnessing a traumatic violent encounter, for example, or being in an accident. What matters is the intensity and the duration of the stressor. In the case of acute stressors, the intensity is usually high. The stress resulting from chronic adversity, Garmezy wrote, might be lower—but it “exerts repeated and cumulative impact on resources and adaptation and persists for many months and typically considerably longer.”
Prior to Garmezy’s work on resilience, most research on trauma and negative life events had a reverse focus. Instead of looking at areas of strength, it looked at areas of vulnerability, investigating the experiences that make people susceptible to poor life outcomes (or that lead kids to be “troubled,” as Garmezy put it). Garmezy’s work opened the door to the study of protective factors: the elements of an individual’s background or personality that could enable success despite the challenges they faced. Garmezy retired from research before reaching any definitive conclusions—his career was cut short by early-onset Alzheimer’s—but his students and followers were able to identify elements that fell into two groups: individual, psychological factors and external, environmental factors, or disposition on the one hand and luck on the other.
In 1989 a developmental psychologist named Emmy Werner published the results of a thirty-two-year longitudinal project. She had followed a group of six hundred and ninety-eight children, in Kauai, Hawaii, from before birth through their third decade of life. Along the way, she’d monitored them for any exposure to stress: maternal stress in utero, poverty, problems in the family, and so on. Two-thirds of the children came from backgrounds that were, essentially, stable, successful, and happy; the other third qualified as “at risk.” Like Garmezy, she soon discovered that not all of the at-risk children reacted to stress in the same way. Two-thirds of them “developed serious learning or behavior problems by the age of ten, or had delinquency records, mental health problems, or teen-age pregnancies by the age of eighteen.” But the remaining third developed into “competent, confident, and caring young adults.” They had attained academic, domestic, and social success—and they were always ready to capitalize on new opportunities that arose.
What was it that set the resilient children apart? Because the individuals in her sample had been followed and tested consistently for three decades, Werner had a trove of data at her disposal. She found that several elements predicted resilience. Some elements had to do with luck: a resilient child might have a strong bond with a supportive caregiver, parent, teacher, or other mentor-like figure. But another, quite large set of elements was psychological, and had to do with how the children responded to the environment. From a young age, resilient children tended to “meet the world on their own terms.” They were autonomous and independent, would seek out new experiences, and had a “positive social orientation.” “Though not especially gifted, these children used whatever skills they had effectively,” Werner wrote. Perhaps most importantly, the resilient children had what psychologists call an “internal locus of control”: they believed that they, and not their circumstances, affected their achievements. The resilient children saw themselves as the orchestrators of their own fates. In fact, on a scale that measured locus of control, they scored more than two standard deviations away from the standardization group.
Werner also discovered that resilience could change over time. Some resilient children were especially unlucky: they experienced multiple strong stressors at vulnerable points and their resilience evaporated. Resilience, she explained, is like a constant calculation: Which side of the equation weighs more, the resilience or the stressors? The stressors can become so intense that resilience is overwhelmed. Most people, in short, have a breaking point. On the flip side, some people who weren’t resilient when they were little somehow learned the skills of resilience. They were able to overcome adversity later in life and went on to flourish as much as those who’d been resilient the whole way through. This, of course, raises the question of how resilience might be learned.
George Bonanno is a clinical psychologist at Columbia University’s Teachers College; he heads the Loss, Trauma, and Emotion Lab and has been studying resilience for nearly twenty-five years. Garmezy, Werner, and others have shown that some people are far better than others at dealing with adversity; Bonanno has been trying to figure out where that variation might come from. Bonanno’s theory of resilience starts with an observation: all of us possess the same fundamental stress-response system, which has evolved over millions of years and which we share with other animals. The vast majority of people are pretty good at using that system to deal with stress. When it comes to resilience, the question is: Why do some people use the system so much more frequently or effectively than others?
One of the central elements of resilience, Bonanno has found, is perception: Do you conceptualize an event as traumatic, or as an opportunity to learn and grow? “Events are not traumatic until we experience them as traumatic,” Bonanno told me, in December. “To call something a ‘traumatic event’ belies that fact.” He has coined a different term: PTE, or potentially traumatic event, which he argues is more accurate. The theory is straightforward. Every frightening event, no matter how negative it might seem from the sidelines, has the potential to be traumatic or not to the person experiencing it. (Bonanno focusses on acute negative events, where we may be seriously harmed; others who study resilience, including Garmezy and Werner, look more broadly.) Take something as terrible as the surprising death of a close friend: you might be sad, but if you can find a way to construe that event as filled with meaning—perhaps it leads to greater awareness of a certain disease, say, or to closer ties with the community—then it may not be seen as a trauma. (Indeed, Werner found that resilient individuals were far more likely to report having sources of spiritual and religious support than those who weren’t.) The experience isn’t inherent in the event; it resides in the event’s psychological construal.
It’s for this reason, Bonanno told me, that “stressful” or “traumatic” events in and of themselves don’t have much predictive power when it comes to life outcomes. “The prospective epidemiological data shows that exposure to potentially traumatic events does not predict later functioning,” he said. “It’s only predictive if there’s a negative response.” In other words, living through adversity, be it endemic to your environment or an acute negative event, doesn’t guarantee that you’ll suffer going forward. What matters is whether that adversity becomes traumatizing.
The good news is that positive construal can be taught. “We can make ourselves more or less vulnerable by how we think about things,” Bonanno said. In research at Columbia, the neuroscientist Kevin Ochsner has shown that teaching people to think of stimuli in different ways—to reframe them in positive terms when the initial response is negative, or in a less emotional way when the initial response is emotionally “hot”—changes how they experience and react to the stimulus. You can train people to better regulate their emotions, and the training seems to have lasting effects.
Similar work has been done with explanatory styles—the techniques we use to explain events. I’ve written before about the research of Martin Seligman, the University of Pennsylvania psychologist who pioneered much of the field of positive psychology: Seligman found that training people to change their explanatory styles from internal to external (“Bad events aren’t my fault”), from global to specific (“This is one narrow thing rather than a massive indication that something is wrong with my life”), and from permanent to impermanent (“I can change the situation, rather than assuming it’s fixed”) made them more psychologically successful and less prone to depression. The same goes for locus of control: not only is a more internal locus tied to perceiving less stress and performing better but changing your locus from external to internal leads to positive changes in both psychological well-being and objective work performance. The cognitive skills that underpin resilience, then, seem like they can indeed be learned over time, creating resilience where there was none.
Unfortunately, the opposite may also be true. “We can become less resilient, or less likely to be resilient,” Bonanno says. “We can create or exaggerate stressors very easily in our own minds. That’s the danger of the human condition.” Human beings are capable of worry and rumination: we can take a minor thing, blow it up in our heads, run through it over and over, and drive ourselves crazy until we feel like that minor thing is the biggest thing that ever happened. In a sense, it’s a self-fulfilling prophecy. Frame adversity as a challenge, and you become more flexible and able to deal with it, move on, learn from it, and grow. Focus on it, frame it as a threat, and a potentially traumatic event becomes an enduring problem; you become more inflexible, and more likely to be negatively affected.
In December the New York Times Magazine published an essay called “The Profound Emptiness of ‘Resilience.’ ” It pointed out that the word is now used everywhere, often in ways that drain it of meaning and link it to vague concepts like “character.” But resilience doesn’t have to be an empty or vague concept. In fact, decades of research have revealed a lot about how it works. This research shows that resilience is, ultimately, a set of skills that can be taught. In recent years, we’ve taken to using the term sloppily—but our sloppy usage doesn’t mean that it hasn’t been usefully and precisely defined. It’s time we invest the time and energy to understand what “resilience” really means.
Coincidences
In his research, Bernard Beitman has found that certain personality traits are linked to experiencing more coincidences: people who describe themselves as religious or spiritual, people who are self-referential (likely to relate information from the external world back to themselves), and people who are high in meaning-seeking are all coincidence-prone. People are also more likely to see coincidences when they are extremely sad, angry or anxious.
“Coincidences never happen to me at all, because I never notice anything,” [statistician David] Spiegelhalter says. “I never talk to anybody on trains. If I’m with a stranger, I don’t try to find a connection with them, because I’m English.”
Beitman, on the other hand, says, “My life is littered with coincidences.” He tells me a story of how he lost his dog when he was 8 or 9 years old. He went to the police station to ask if they had seen it; they hadn’t. Then, “I was crying a lot and took the wrong way home, and there was the dog … I got into [studying coincidences] just because, hey, look Bernie, what’s going on here?”
For Beitman, probability is not enough when it comes to studying coincidences, because statistics can describe what happens but can’t explain it any further than chance. “I know there’s something more going on than we pay attention to,” he says. “Random is not enough of an explanation for me.”
Why do people shut their eyes when kissing?
SCIENTISTS have moved in on one of romance’s most endearing mysteries — why do people shut their eyes when kissing?
What they found is that the textbook answer — that it’s hard to focus that close up — is wrong. Instead the reason is more to do with our brain’s inability to process all the sensations suddenly pouring from our lips and tongues as well as visual data. “Tactile awareness depends on the level of perceptual load in a concurrent visual task,” said Polly Dalton and Sandra Murphy, cognitive psychologists at Royal Holloway, University of London, in the Journal of Experimental Psychology: Human Perception and Performance.
Dalton and Murphy’s research did not actually involve people kissing. Instead they asked people to carry out visual tasks while measuring whether they could simultaneously detect something touching their hands. Their findings showed that, as the visual task became more complex, the participants’ ability to detect tactile sensations declined.
The key finding was that visual sensations overruled tactile ones, which may also explain why vibrating devices used to warn drivers and pilots of imminent danger often go unnoticed.
Dalton said: “If we are focusing strongly on a visual task, this will reduce our awareness of stimuli in other senses. It is important for designers to be aware of these effects, because auditory and tactile alerts are often used in situations of high visual demand, such as driving a car or flying an aircraft.”
However, sometimes, said Dalton, people want to focus on tactile sensations, for example when dancing, reading braille, kissing or making love — all situations in which people often shut their eyes. Dalton said: “These results could explain why we close our eyes when we want to focus attention on another sense. Shutting out the visual input leaves more mental resources to focus on other aspects of our experience.”
Toxoplasma and Rage
Road rage and other acts involving a sudden loss of self-control could be linked to a parasite spread by cats.
Infection by Toxoplasma gondii doubles the chances of suffering bouts of uncontrollable anger, a study found.
People with intermittent explosive disorder (IED) are liable to display outbursts of violent rage which are disproportionate to the provocation received — for instance, when annoyed by another driver on the road.
Scientists who assessed 358 US adults found that 22 per cent of those diagnosed with IED tested positive for T. gondii, compared with 9 per cent of a healthy control group.
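To see how those percentages square with the earlier claim that infection “doubles the chances” of the disorder, here is a minimal back-of-the-envelope sketch in Python. It assumes the 22 per cent and 9 per cent figures are the infection rates within the IED group and the healthy control group respectively; the variable names are mine, not the study’s.

```python
# Back-of-the-envelope check of the reported figures (illustrative only).
ied_positive_rate = 0.22       # infection rate reported for the IED group
control_positive_rate = 0.09   # infection rate reported for healthy controls

# How much more common infection was in the IED group than in the controls
rate_ratio = ied_positive_rate / control_positive_rate  # ~2.4

# The odds ratio is symmetric, so the same figure also describes
# the odds of IED given infection versus no infection
odds_ratio = (ied_positive_rate / (1 - ied_positive_rate)) / (
    control_positive_rate / (1 - control_positive_rate)
)  # ~2.9

print(f"rate ratio: {rate_ratio:.1f}, odds ratio: {odds_ratio:.1f}")
```

On those assumptions the odds come out roughly 2.9 times higher with infection, so “doubles the chances” is, if anything, a conservative summary.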
Emil Coccaro of the University of Chicago, the lead researcher, said: “Our work suggests that latent infection with the parasite may change brain chemistry in a fashion that increases the risk of aggressive behaviour. We do not know if this relationship is causal, and not everyone that tests positive for toxoplasmosis will have aggression issues.”
Many people are believed to carry the parasite, found in cat faeces and contaminated food, without realising it.
Each year about 350 cases of toxoplasmosis are reported in England and Wales, but experts believe the number of infections could be a thousand times greater. Some estimates suggest that up to a third of people in the UK will be infected at some point in their lives.
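Those two estimates can be roughly reconciled with some simple arithmetic. The sketch below is only a crude consistency check; the 80-year lifespan and 65 million population figures are my own round-number assumptions rather than anything from the article.

```python
# Crude consistency check of the article's figures (illustrative only).
reported_cases_per_year = 350    # toxoplasmosis cases reported in England and Wales
underreporting_factor = 1_000    # "a thousand times greater"
infections_per_year = reported_cases_per_year * underreporting_factor  # 350,000

# Assumptions, not from the article: ~80-year lifespan, ~65 million UK population
lifespan_years = 80
uk_population = 65_000_000

lifetime_infections = infections_per_year * lifespan_years   # 28 million
lifetime_fraction = lifetime_infections / uk_population      # ~0.43

print(f"implied lifetime infection rate: {lifetime_fraction:.0%}")
```

That works out at roughly 40 per cent, the same order of magnitude as the “up to a third” estimate, which is about as close as a back-of-the-envelope calculation like this can be expected to get.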
Often, the parasite causes no symptoms, but it can lead to miscarriages and birth defects, and can cause serious illness in people with weak immune systems.
It has been linked to psychiatric conditions including schizophrenia, bipolar disorder and suicidal behaviour. Roughly a third of those in the study had IED, another third had some other psychiatric condition such as depression or personality disorder, and the remainder were healthy individuals with no history of mental problems. Toxoplasmosis-positive individuals scored significantly higher on measures of anger and aggression.
Co-author Royce Lee said: “Correlation is not causation, and this is not a sign that people should get rid of their cats. We don’t yet understand the mechanisms involved. It could be an increased inflammatory response, direct brain modulation by the parasite, or even reverse causation where aggressive individuals tend to have more cats or eat more undercooked meat. Our study signals the need for more research and more evidence in humans.”
IED is thought to affect up to 16 million Americans, said the scientists whose findings are reported in the Journal of Clinical Psychiatry.
Catching a Liar With Cognitive Overload
People, on the whole, are pretty bad at knowing when someone’s lying. And research has shown that “professional lie detectors” — like detectives, psychologists, and judges — are typically no better at detecting a lie than the rest of us.
On the one hand, that’s kind of a bummer for the people who hold those jobs, plus anyone who watches detective shows. But glass-half-full people, here’s another way of looking at it: You’re just as good as a pro at sniffing out liars! Plus, there are specific things you can do to improve your skills on this front. In the New York Times today, psychologist Edward Geiselman offers a tip.
Geiselman, a professor emeritus of psychology at the University of California, Los Angeles, is part of the team that developed cognitive interviewing, a police technique originally created to help victims and witnesses better recall information. But as Geiselman explains, it can also be used to figure out when someone’s being dishonest:
By asking them to recount fine-grain particulars, you maximize the amount of mental energy they are expending on their story, thereby ratcheting up what psychologists call their “cognitive load.” As they try to reconstruct the circumstances leading up to the event in question, you can employ so-called extenders: “Really?” “Tell me more about that.” Many police departments use a more confession-oriented method with a distinctly parental tone (“I know you did it; now tell me why”). But researchers have begun asking if that approach leads to false confessions. A cognitive interviewer takes a different, more journalistic approach, gathering as much information as possible.
Do this by posing open-ended, or what Geiselman calls “expansion,” queries. A truthful person usually answers a follow-up question with additional details. A liar tends to stick with the same, bare-bones answers. … Instead, request the unexpected. Geiselman recommends asking someone to illustrate events with a pencil and paper or to retell the story starting at the end. A liar’s account will begin to break down.
In a 1985 paper in the Journal of Applied Psychology, Geiselman and his colleagues laid out the four main steps of cognitive interviewing in greater detail: “Reinstate the context,” thinking back to what the scene looked like, as well as your thoughts and feelings during the event in question; make sure to report every detail, even the ones you might think are unimportant; recall the events in different orders, working backward, forward, and from various points in the middle; and change perspective, thinking through the experiences of other people on the scene.
The key to the technique’s success, Geiselman said, is leading a person to mental exhaustion. (As a former undercover CIA officer — in other words, someone who told detailed, elaborate lies for a living — explained on Reddit last week, it’s hard to get every part of a cover story straight.) Keeping track of all those various details while recounting the same thing in different ways “essentially puts them at the edge of their ability to function cognitively,” Geiselman told the Times.
So there you have it. Next time someone tells you a story that seems a little off, make them tell it again, in greater detail. And then backward. And then tell it like they’re someone else. And then draw it. Soon enough, you’ll either have the truth, or your friends will get annoyed and just stop telling you things altogether. Either way, no more lies.
People Who Are Always Late
Everybody has a friend who is always late. Now it is suggested that chronic tardiness could be a form of insanity.
People who are consistently late for engagements are irrational in how they view time and may have a “bizarre compulsion to defeat themselves” by making commitments they cannot possibly keep, Tim Urban, the author of the analysis, reported on his blog.
He even has an acronym for them: Clip, or chronically late insane people. Mr Urban argues that Clips have a “deep inner drive to inexplicably miss the beginning of movies, endure psychotic stress running to catch the train or crush their own reputation at work. As much as they may hurt others, they usually hurt themselves even more.”
There are three main reasons why Clips are always late, he argues. Some are in denial about how time works, others hate change, and in some cases they are simply “mad” at themselves.
Ronald Bracey, a clinical psychologist based in Hampshire, said: “Certain people have personality traits where they avoid things. They are always running behind because they are putting things off. Then there is the obsessional, always picking things up and having to complete things, they can’t do something else until they have completed a ritual. For some it is personality and psychiatric problems, such as obsessive compulsive disorder and depression or procrastination or performance anxiety, but all these factors can contribute.”
Richard Nisbett, professor of psychology at the University of Michigan, said: “As a recovering latecomer, I insist that for many if not most of my kind, being late is a manifestation of disorganisation or hostility that is fully within one’s ability to control.”
Linda Sapadin, a New York psychologist, says that punctuality can be learnt. “Some people comply right away and most people don’t,” she told The Atlantic. “Habits take time to break. It’s not only about breaking this pattern, it’s also about building another one.”