School went badly last year for Jose, Angel and Estefani. The 8-year-old twins and their 7-year-old sister are recent immigrants to the Washington Heights neighborhood of Manhattan. In part because they didn't speak much English, late in 2010 all three were notified they were in danger of failing.
But their fortunes changed in January. They began going to the Fort Washington Library every Saturday for two hours of one-on-one tutoring from Elayne Castillo-Velez, her sister, Sharon Castillo, and their grandmother, Saturnina Gutierrez. The children had lost confidence and didn't feel that more hours spent with school books would produce anything, said Castillo-Velez. "There were times when all they wanted to do was talk about their week," she said.
"But once we started working one-on-one it triggered something in them," she said. "They were enthusiastic." Castillo-Velez would ask Jose's teacher what he should work on, and the teacher would write back - work on vowel sounds, or subtraction strategies. The children began to take books out of the library every week. Their grades improved. When June came, they all passed, and won certificates for academic improvement and achievement.
The two families met because of a bank - a time bank, where the unit of currency is not a dollar, but an hour. When you join a time bank, you indicate what services you might be able to offer others: financial planning, computer de-bugging, handyman repairs, housecleaning, child care, clothing alterations, cooking, taking someone to a doctor's appointment on the bus, visiting the homebound or English conversation. People teach Mandarin and yoga and sushi-making. Castillo-Velez earns a credit for each hour she spends tutoring Jose. She spends the credits on art classes.
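The bookkeeping behind this is simple: every hour given earns one credit, every hour received spends one, and no service is worth more per hour than any other. As a rough sketch only - the names and figures below are hypothetical, and the article does not describe the TimeBank's actual software - the ledger could be expressed in a few lines of Python:

```python
from collections import defaultdict

class TimeBank:
    """Minimal time-bank ledger: one hour of any service equals one credit."""

    def __init__(self):
        self.balances = defaultdict(float)   # member -> credit balance, in hours
        self.log = []                         # record of individual exchanges

    def record_exchange(self, provider, receiver, hours, service):
        # The provider earns credits and the receiver spends them;
        # tutoring, repairs and rides are all valued identically, by the hour.
        self.balances[provider] += hours
        self.balances[receiver] -= hours
        self.log.append((provider, receiver, hours, service))

# Hypothetical entries mirroring the exchange described above.
bank = TimeBank()
bank.record_exchange("Castillo-Velez", "Jose's family", 2, "tutoring")
bank.record_exchange("art teacher", "Castillo-Velez", 2, "art class")
print(dict(bank.balances))
# {'Castillo-Velez': 0.0, "Jose's family": -2.0, 'art teacher': 2.0}
```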
A time bank is a way to make a small town out of a big city. Time banks - more than 300 of them - exist in 23 countries. The largest one in New York City is the Visiting Nurse Service of New York Community Connections TimeBank.
It has more than 2,000 members and is most active in three places - Upper Manhattan (Washington Heights and Inwood), Lower Manhattan (Battery Park City, Chinatown and the Lower East Side) and parts of Brooklyn (Sunset Park and Bay Ridge). Members come from all over New York City, but exchanges are easiest when people live in the same neighborhood - like Castillo-Velez and Jose.
There is something old-fashioned about a time bank. Home repair, child care, visiting shut-ins and taking someone to the doctor are now often commercial transactions; a time bank is a return to an era when neighbors did these tasks for each other. But a time bank is also something radical. It throws out the logic of the market - in a time bank, all work has equal value. A 90-year-old can contribute on an equal basis with a 30-year-old. Accompanying someone to the doctor is as valuable as Web design.
The idea comes from Edgar Cahn, a legendary anti-poverty activist. (Cahn and his late wife, Jean Camper Cahn, established the Antioch School of Law to train advocates for the poor, and were instrumental in founding the federal Legal Services Corporation.) In his book 'No More Throw-Away People,' Cahn writes that time banks were a response to cuts in social programs during the Reagan years. Cahn wrote: "If we can't have more of that kind of money, why can't we create a new kind of money to put people and problems together?"
Time banks also owe much of their development to Ana Miyares, who in the 1980s gave up a lucrative position in international banking to join the time bank movement in its infancy. She has founded time banks in various countries, and today is the manager of the Visiting Nurse Service's time bank. Miyares sees time banking a little differently than Cahn does. "I would like to see social justice - but in a different way, using social capital, energizing social capital to be responsible citizens," she said.
Miyares said the program also restores trust among new immigrants and helps them integrate.
The value of a time bank during a time of high unemployment is obvious. It is a way for underemployed people to put their skills to work to get things they need. (During the Great Depression, a group of men living in a Hooverville of unlaid sewer pipe in California began a barter exchange that eventually had 100,000 members.) Forty percent of the members of the Visiting Nurse Service's time bank, for example, have an annual income of less than $9,800. Many time banks have a large percentage of members who are older and living on a fixed income. "The difference it makes to have a handyman come out and do a repair for the cost of materials could be the difference between being able to purchase medicine or not," said Barbara Huston, the president and chief executive of Partners in Care, a time bank based near Baltimore. "Getting a ride to the doctor and saving $30 to $50 in transport costs might mean being able to buy all their vegetables."
But a time bank is more than a barter Craigslist. Mashi Blech, the director of the Visiting Nurse Service time bank, said that only 10 percent of members bother to consistently record the hours they put in. In what industry would 90 percent of wage workers not care about recording their hours?
Castillo-Velez, for one, doesn't always record hers. She knows the Fort Washington Library well, because she used to go there as a child. Her grandmother is an avid reader in Spanish, and Castillo-Velez inherited her love of books. Now a graduate of Stony Brook University, Castillo-Velez tutors Jose because she remembers her own journey. "I know how it is to have to learn another language and have no one really there. I also overcame my shyness - sitting in class without speaking up. I saw myself in them," she said.
"Slowly, through sharing," said Regina Gradess, a member of the Community Connections TimeBank, "friendships form."
Several TimeBank members told me that activities gradually cease being services performed and become instead hours with friends. When Regina Gradess was 56, for example, she met Doris Feldman, who was 80. She began to drive Feldman places in her car. Technically, Gradess was providing companionship to an elderly woman. But that's not what it felt like. "I would take her to the duck pond, or a unique thrift shop, or libraries," she said. "We'd ride the bus to a museum and talk about all the architecture we saw. Every time I was with her we had tons of things to talk about. It was wonderful for me and wonderful for her." They saw each other at least every other week. When Feldman died in July at the age of 84, Gradess said she felt like she had lost her soul mate.
A time bank is a way to make a small town out of a big city - something especially important for retired people, who might go for days without human contact. The Visiting Nurse Service TimeBank has group gatherings - birthday parties, potlucks, trips - in addition to the work exchanges. A survey of members over 60 years old in 2009 found that 90 percent had made new friends, 71 percent saw those friends at least once a week, and 42 percent saw their TimeBank friends a few times a week. By overwhelming margins, the members reported that they felt more a part of a community, and their trust of others had increased - especially of other people who were different from them. The vast majority of pairings in the TimeBank bring together very different people - in ethnicity, income level, or especially, age; in Castillo-Velez's family, her grandmother, mother, aunt, sister, brother, husband and sister-in-law also are active in the TimeBank. Many pairings also cross language barriers. Members speak 29 different languages, and for just under half the members, English is not one of them.
Despite its size - or perhaps because of it - New York City offers people many different groups to join and many different ways to make friends. What makes a time bank different is that the purpose of the connection is ostensibly to give help - something that makes a lot of people more comfortable and confident. "I'm a shy person and I have a problem with receiving," said Gradess. Even if you happen to be the one receiving services in any particular transaction, you know you will be giving help to someone else.
Blech tells a story from an earlier time bank experience about Betty, a member in her 70s who suffered from several serious medical conditions and had been the caregiver for her mother for 15 years. When Betty's mother died, "people were thinking now she'll be less stressed," said Blech. "I was concerned that she'd be depressed - she was losing her role in life."
Blech called one afternoon two weeks after the mother's death and found Betty very depressed. "Later, I found out that she had barely gotten out of bed in two weeks, and that the bottle of antidepressants she was prescribed was still sitting on her table, unopened," she said.
They talked for a while, and then Blech said, "I need you." Betty was a skilled crocheter. "I was going to an international conference and needed a baby gift," Blech said. Would Betty make a baby blanket? "I don't think I'm up for that," Betty said. Blech asked her to think about it. Five minutes later Blech's phone rang. "Should it have a hood?" said Betty. "How about a matching crocheted hippo or dog?" Blech invited her to the next time bank gathering to show off her crocheting. She left with 10 orders.
This is a story many of us can relate to. People like to cook for others, to make things for others, to teach what they know, to use their skills to do a job for someone who needs it. People need to feel valued.
On Wednesday I'll respond to comments and explain one of the mysteries of time banks - why does the Visiting Nurse Service run one? Why do so many hospitals? Being with and giving to others, it turns out, are good for your health.
Building Watson
THE assignment was one of the biggest challenges in the field of artificial intelligence: build a computer smart enough to beat grand champions at the game of "Jeopardy."
When I stepped up to lead the team at I.B.M. that would create this computer, called Watson, I knew the task would be formidable. The computer would have to answer an unpredictable variety of complex questions with confidence, precision and speed. And we would put it to the test in a publicly televised "human versus machine" competition against the best players of all time.
It was not easy finding people to join the Watson team in the mid-2000s. Most scientists I approached favored their own individual projects and career tracks. And who could blame them? This was an effort that, at best, would mingle the contributions of many. At its worst it would fail miserably, undermining the credibility of all involved.
Scientists, by their nature, can be solitary creatures conditioned to work and publish independently to build their reputations. While collaboration drives just about all scientific research, the idea of "publishing or perishing" under one's own name is alive and well.
I remember asking some researchers how long they had been working in natural language processing - the field of computer science focused on getting computers to interact in ordinary human language. For many, it had been well over a decade.
I asked them if they preferred spending the next 10 years as they had the first 10, publishing isolated research results and earning modest acclaim within a niche community. Or would they like to see whether the technology that had been their life's work could accomplish something monumental?
For the scientist in me, it was an irresistible challenge. I believed it was a rare opportunity to counter conventional wisdom and advance technology. I was willing to live with possible failure as a downside, but was the team?
A few people were extremely hesitant to join the project and later left, thinking that the whole enterprise was insane. But a majority bought in. We eventually pulled together a core group of 12 talented scientists, which over time grew to 25 members. It was a proud moment, frankly, just to have the courage as a team to move forward.
From the first, it was clear that we would have to change the culture of how scientists work. Watson was destined to be a hybrid system. It required experts in diverse disciplines: computational linguistics, natural language processing, machine learning, information retrieval and game theory, to name a few.
Likewise, the scientists would have to reject an ego-driven perspective and embrace the distributed intelligence that the project demanded. Some were still looking for that silver bullet that they might find all by themselves. But that represented the antithesis of how we would ultimately succeed. We learned to depend on a philosophy that embraced multiple tracks, each contributing relatively small increments to the success of the project.
Technical philosophy was important, but so were personal dynamics. Early on, I made the unpopular decision to bring the entire team together in a war room, to maximize communication. The shared space encouraged people with wildly different skills and opinions to exchange ideas.
The early practice rounds for "Jeopardy" were downright disappointing. Many of Watson's answers were stupid and irrelevant, some laughably so. Each wrong answer demonstrated the profound failings of simple search-based technologies and showed how sophisticated Watson needed to become.
We had to keep the team's collective intelligence from being overcome by egos, or dragged down by desperation. Leadership had to be steadfast and persistent but grounded in optimism. Through it all, the team developed a culture of trust that let creativity flourish.
In the end, the hero was the team, not any individual member or algorithm. Eventually, everyone came to appreciate that. Well into the throes of the project, one researcher commented, "Compared to the way we work now, it's like we were standing still before."
Watson went on to win "Jeopardy" a year ago, but its work is far from over. Now we and other research and development teams at I.B.M. are busy developing ways to put Watson to work in several different areas, most notably health care.
As for the members of the original Watson team, they'd tell you that never in a million years could they have imagined what we accomplished. Just like Watson itself, we all learned that the whole is much greater than the sum of its parts.
The Third Industrial Revolution
The digitisation of manufacturing will transform the way goods are made - and change the politics of jobs too.
THE first industrial revolution began in Britain in the late 18th century, with the mechanisation of the textile industry. Tasks previously done laboriously by hand in hundreds of weavers' cottages were brought together in a single cotton mill, and the factory was born. The second industrial revolution came in the early 20th century, when Henry Ford mastered the moving assembly line and ushered in the age of mass production. The first two industrial revolutions made people richer and more urban. Now a third revolution is under way. Manufacturing is going digital. As this week's special report argues, this could change not just business, but much else besides.
A number of remarkable technologies are converging: clever software, novel materials, more dexterous robots, new processes (notably three-dimensional printing) and a whole range of web-based services. The factory of the past was based on cranking out zillions of identical products: Ford famously said that car-buyers could have any colour they liked, as long as it was black. But the cost of producing much smaller batches of a wider variety, with each product tailored precisely to each customer's whims, is falling. The factory of the future will focus on mass customisation - and may look more like those weavers' cottages than Ford's assembly line.
Towards a third dimension
The old way of making things involved taking lots of parts and screwing or welding them together. Now a product can be designed on a computer and printed on a 3D printer, which creates a solid object by building up successive layers of material. The digital design can be tweaked with a few mouseclicks. The 3D printer can run unattended, and can make many things which are too complex for a traditional factory to handle. In time, these amazing machines may be able to make almost anything, anywhere - from your garage to an African village.
The applications of 3D printing are especially mind-boggling. Already, hearing aids and high-tech parts of military jets are being printed in customised shapes. The geography of supply chains will change. An engineer working in the middle of a desert who finds he lacks a certain tool no longer has to have it delivered from the nearest city. He can simply download the design and print it. The days when projects ground to a halt for want of a piece of kit, or when customers complained that they could no longer find spare parts for things they had bought, will one day seem quaint.
Other changes are nearly as momentous. New materials are lighter, stronger and more durable than the old ones. Carbon fibre is replacing steel and aluminium in products ranging from aeroplanes to mountain bikes. New techniques let engineers shape objects at a tiny scale.
Nanotechnology is giving products enhanced features, such as bandages that help heal cuts, engines that run more efficiently and crockery that cleans more easily. Genetically engineered viruses are being developed to make items such as batteries. And with the internet allowing ever more designers to collaborate on new products, the barriers to entry are falling. Ford needed heaps of capital to build his colossal River Rouge factory; his modern equivalent can start with little besides a laptop and a hunger to invent.
Like all revolutions, this one will be disruptive. Digital technology has already rocked the media and retailing industries, just as cotton mills crushed hand looms and the Model T put farriers out of work. Many people will look at the factories of the future and shudder. They will not be full of grimy machines manned by men in oily overalls. Many will be squeaky clean - and almost deserted. Some carmakers already produce twice as many vehicles per employee as they did only a decade or so ago. Most jobs will not be on the factory floor but in the offices nearby, which will be full of designers, engineers, IT specialists, logistics experts, marketing staff and other professionals. The manufacturing jobs of the future will require more skills. Many dull, repetitive tasks will become obsolete: you no longer need riveters when a product has no rivets.
The revolution will affect not only how things are made, but where. Factories used to move to low-wage countries to curb labour costs. But labour costs are growing less and less important: a $499 first-generation iPad included only about $33 of manufacturing labour, of which the final assembly in China accounted for just $8. Offshore production is increasingly moving back to rich countries not because Chinese wages are rising, but because companies now want to be closer to their customers so that they can respond more quickly to changes in demand. And some products are so sophisticated that it helps to have the people who design them and the people who make them in the same place. The Boston Consulting Group reckons that in areas such as transport, computers, fabricated metals and machinery, 10-30% of the goods that America now imports from China could be made at home by 2020, boosting American output by $20 billion-55 billion a year.
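The arithmetic behind those iPad figures shows why labour costs have become a secondary concern. A quick back-of-the-envelope calculation - using only the numbers quoted above, measured against the retail price rather than a full bill of materials - makes the point:

```python
# Figures quoted above for a first-generation iPad, in US dollars.
retail_price = 499
total_manufacturing_labour = 33
final_assembly_in_china = 8

print(f"All manufacturing labour: {total_manufacturing_labour / retail_price:.1%} of the retail price")
print(f"Final assembly in China:  {final_assembly_in_china / retail_price:.1%} of the retail price")
# All manufacturing labour: 6.6% of the retail price
# Final assembly in China:  1.6% of the retail price
```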
The shock of the new
Consumers will have little difficulty adapting to the new age of better products, swiftly delivered. Governments, however, may find it harder. Their instinct is to protect industries and companies that already exist, not the upstarts that would destroy them. They shower old factories with subsidies and bully bosses who want to move production abroad. They spend billions backing the new technologies which they, in their wisdom, think will prevail. And they cling to a romantic belief that manufacturing is superior to services, let alone finance.
None of this makes sense. The lines between manufacturing and services are blurring. Rolls-Royce no longer sells jet engines; it sells the hours that each engine is actually thrusting an aeroplane through the sky. Governments have always been lousy at picking winners, and they are likely to become more so, as legions of entrepreneurs and tinkerers swap designs online, turn them into products at home and market them globally from a garage. As the revolution rages, governments should stick to the basics: better schools for a skilled workforce, clear rules and a level playing field for enterprises of all kinds. Leave the rest to the revolutionaries.
Management Coaches
Chief executives go to coaches to improve their performance. So a coach should give them a kick in the pants. Most business leaders are surrounded by yes-men who just praise their ideas; what they actually need is someone who is prepared to tell them when they have got it wrong, says John Blakey, co-author of the book Challenging Coaching: Going beyond traditional coaching to face the facts.
"People at this level rarely get honest feedback from those around them or get held to account in the way that they hold others to account," said Blakey, a former international managing director at Logica, the consultancy. "If their external coach cannot do it, what hope is there that others will?"
The trouble is, though, some coaches don't want to get tough. "Coaching [traditionally] offers too much support and not enough challenge," said Blakey.
This tendency to treat senior executives as delicate flowers goes back to the beginnings of the coaching industry. "It started in the boom years as a way for companies to give their high-potential executives a bit of extra tender loving care to discourage them from leaving to join a competitor. As coaching grew, it borrowed from other professions. It particularly sucked in a lot of attitudes from the world of therapy and counselling."
As a result, many coaches adopted techniques that were perfectly reasonable for working with vulnerable or damaged people but inappropriate for senior executives. For example, therapists are careful not to influence their clients by giving them their opinion, and coaches, too, are not supposed to respond directly when someone says, "So, what do you think?", said Blakey. "You are supposed to say, 'It's not important what I think. I'm here to help you find your own answer.' That's classic coaching. But if you are on the receiving end of that response, it can get a bit irritating."
So what is his answer if one of his clients asks a straight question? "I might, for example, say that in their shoes I would not do this and I would think twice about that. I am not looking to persuade them to do it . . . it is just another way of getting them to think through the challenge."
Blakey is not in the least bit worried that any of his answers will have an undue influence on a client; senior managers are more than willing to push back. The bigger risk, he argues, would be failing to challenge. "These are tough, capable leaders. They are not dysfunctional - they are very, very functional - so what makes them work and what gets the best out of them is very different to what is needed to help a vulnerable person. Coaching has been perceived as a bit soft - as people asking questions in a nice way, doing a lot of listening and empathising - but chief executives and other board-level people can see through that very quickly."
And while most people worry about challenging senior executives, business leaders tend not to react to criticism in the way that the rest of us do, Blakey added. "That fear arises because people are thinking about how they would react to criticism, not how a chief executive would react. Chief executives are in that position because they have thick skins and they are robust - they would not be there if they weren't. So they can take a much stronger challenge than most of us would and respond well to it."
"It wakes them up and engages them. They are not people who crumble at the first sign of a strong opinion."
In the long run, failing to challenge a chief executive's flawed thinking or to hold them accountable for poor behaviour is not in the interests of the individual or the organisation. However, given that it can be difficult for a chief executive's employees to do this, it is up to coaches to raise uncomfortable truths, Blakey said. "Coaches are free from being part of the system, so they have a responsibility to step into these uncomfortable, difficult conversations when they are necessary to maximise the performance not simply of the individual they are coaching, but of his or her team," he said.
Clearly, coaches can only do this as part of a formal relationship; even the most battle-hardened chief executive might not appreciate unsolicited advice from a stranger.
So how can chief executives tell whether they might need to find themselves a professional pants-kicker? "One sign might be pressure from other stakeholder groups," said Blakey. In other words, deteriorating performance is the clearest sign, even if it can't always be measured in the bottom line.
More coach, less commander
Andrew Gould had a clear idea of where he wanted to take Jones Lang LaSalle, the commercial property company, when he became the UK chief executive three years ago. So he set about it in his usual direct manner. But when he realised that things weren't going to plan, he turned to John Blakey. "I needed to work with someone like John who could figure out what was really going on and find ways to address it," he said. "John has the ability to listen very carefully and then take [me] to that zone of uncomfortable truth."
What Blakey told Gould was that he was part of the problem: his own behaviour was affecting the dynamic of the senior management team. Blakey suggested that Gould consider changing his leadership style so that he became more of a coach and less of a commander.
"You do feel a little uncomfortable when you are being asked to reassess the way you operate, but it was a very successful process," said Gould.
"It probably sounds more brutal than it is. It's based on the idea that when you go into the coaching relationship, you have to do it with absolute honesty . . . [so that the coach] feels able to say, 'Can I tell you my read of what's really going on here, which is that you are fooling yourself on this issue and this needs to change.'"
It is not just the coach who should be able to give the chief executive a bit of a prod, he said, adding that his senior team also give him "the loving boot" when needed. "As a chief executive, what you want is people who are willing to challenge you . . . It is allowing people to ask questions, cut the bullshit and say, 'What's really going on here?'"
Lifeguard's ordeal is parable about outsourcing
By now you've probably heard the story of the young Florida lifeguard, Tomas Lopez, who was fired earlier this month because he left his station unmanned to help with a rescue in an unguarded section of the beach, in violation of his company's standard operating procedures.
Over the Fourth of July holiday, this story played out in the national media as a parable about the foolish rigidity of business managers. Lopez and six other guards who were fired or quit as part of the imbroglio were all offered their jobs back by an apologetic owner of the lifeguard management firm (they declined). And last week, Lopez was awarded the key to the city by Hallandale Beach's elected leaders, who vowed not to renew the contract with Jeff Ellis Management, the lifeguard firm that had brought such unwelcome attention.
From another angle, this is also a parable about outsourcing and how it is reshaping large swaths of the economy.
Jeff Ellis, it turns out, is something of a pioneer in the lifeguard business. He started out as a consultant to water parks, clubs and municipalities, training lifeguards and performing annual audits of equipment and procedures that include simulated drownings to test the guards and the equipment. Today that business has nearly 700 clients in 50 states and 14 countries. Insurers have been known to give discounts on liability policies to operators who use Ellis's services.
About a decade ago, Ellis moved to extend his franchise by offering to hire, train and manage lifeguards for water parks, municipalities and pool owners that wanted to outsource that function. One of his first customers was Hallandale, which had up to that time put beach protection operations under its fire department. In the first year, Ellis says, his company cut the city's annual $700,000 tab for lifeguards in half. In the nine years since, Ellis says there have been no drownings and his contract has been routinely renewed.
Hallandale is like many public and private enterprises that decided to outsource to contractors work that is not part of their core missions or competencies.
Because they are generally free from union contracts and the unwritten norms of pay equality that exist within any enterprise, contractors are able to pay lower wages and benefits - in many cases, a lot lower. That was certainly the case with Ellis and the Hallandale lifeguards.
The second big advantage that outsourcing firms enjoy is economies of scale. A firm that specializes in one function and does a lot of it can generally do it at a lower cost, simply by spreading fixed costs over a much larger base of business.
Simply by having more experience, a specialty contractor is also more likely to hit upon the most efficient and effective ways of doing things and can quickly adopt those improvements throughout its operations. Unlike in-house operations, outside contractors are also subject to the discipline of competition when contracts are up for renewal.
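The cost advantage from scale is easy to see with a stylised example. The numbers below are invented for illustration - they are not drawn from Ellis's or Hallandale's accounts - but they show how spreading a fixed overhead across more work lowers the cost of each unit of service:

```python
# Illustrative only: unit cost falls as fixed costs are spread over more volume.
def unit_cost(fixed_cost, variable_cost_per_unit, volume):
    return fixed_cost / volume + variable_cost_per_unit

fixed = 500_000      # hypothetical overhead: training programmes, audits, back office
variable = 20.0      # hypothetical direct cost per guard-shift delivered
for volume in (1_000, 10_000, 100_000):
    print(f"{volume:>7,} shifts: ${unit_cost(fixed, variable, volume):,.2f} per shift")
#   1,000 shifts: $520.00 per shift
#  10,000 shifts: $70.00 per shift
# 100,000 shifts: $25.00 per shift
```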
There is, however, an important trade-off in outsourcing - one that contractors reflexively deny but that is inherent in any firm that derives its competitive advantage from having carefully constructed systems for doing just about everything.
It is these systems - the rules, the procedures, in effect the operational software - that allow companies to take relatively low-skilled, low-paid workers with little experience and have them do tasks that were once done by people with higher skills, higher pay and more experience. And it is in the very nature of these systems that workers are discouraged, if not prohibited, from exercising their own discretion. Their only job is to follow rules, stick to the script and leverage the experience and expertise that are embedded in the system.
That's why the person in the airline call center in Bangalore can't do what is necessary to help you catch your honeymoon cruise after your flight has been canceled because a co-pilot failed to show up on time. Her computer simply won't allow it.
It's why the person from the credit card company can't speak to you about your account until you put the cardholder - your recently deceased husband - on the line to give the authorization.
It's why your company cafeteria won't serve an Ethiopian option even if half the staff is Ethiopian.
It's why the security guard won't let you into the building you've worked in for 25 years because your identification card has expired - not even to go to the security office to renew your card.
It's why, when you call the utility company to tell them your service is down for the third time that week, the person at the call center is completely uninterested in what you learned from the last two repairmen, assuring you that somebody will be out to check the problem and asking if there is anything else he can do for you today.
It's why, when you call the 800 number for tech services while working at home over the weekend on a crucial project, they tell you they don't support the application you are using and can't find somebody who does.
And, yes, it's why Tomas Lopez was initially fired by a supervisor who had ordered him to remain at his post until another guard arrived before responding to calls for help from a distant, unpatrolled part of the beach. (As it turned out, by the time Lopez arrived, the rescue had largely been made by another swimmer with a boogie board).
The reason these various systems can deliver reliable service at lower cost most of the time is precisely that front-line workers are willing and able to act like cogs in a machine. So when two of Lopez's colleagues later told supervisors they would have done the same thing, they were fired as well.
If you want discretion and judgment, if you want workers who really understand and relate to customers, if you want the flexibility necessary to respond to individual needs or unforeseen circumstances, then you can go back to paying twice as much to have your own, longtime employees doing the work. That's the outsourcing trade-off. It may be a good trade-off - most of the time I suspect it is. But it is an unavoidable trade-off, no matter how good the contractors or their systems.
You can see how this process bifurcates labor markets and increases income inequality. At the low end are the low-cost expendable cogs. At the high end are those whose experience and intelligence and training allow them to demand very good salaries for designing, creating and managing these systems. There's not much in between - or even much of a ladder for getting from one to the other.
The manufacturing sector has already gone through this process, and most of the work that can be outsourced has been. Now a new phase is beginning. It turns out that if jobs are so routinized that they can be done by someone without deep skills and experience, then those jobs are also ripe for being done by robots or computer-run production processes. And that makes it possible to bring work back in house, or at least closer to home, done by fewer but more experienced workers who are higher-skilled and higher-paid. The revival of the U.S. steel and auto industries is predicated on this model, and I think we could eventually see it in parts of high tech and even consumer goods. In the service sector, the drivers will be the Internet, user-friendly interfaces and intelligent, interactive software.
Of course, there will always be work that can't be easily automated or upgraded, and being a lifeguard may be one of those. But Tomas Lopez probably made the right decision to spend the rest of the summer focused on his studies at nearby Broward College. For all its faults, that's still the best way to ensure that he won't end up as a discretion-less cog in a low-wage machine.
Interviews Favor Those Seen First
A NEVER-ENDING flow of information is the lot of most professionals. Whether it comes in the form of lawyers' cases, doctors' patients or even journalists' stories, this information naturally gets broken up into pieces that can be tackled one at a time during the course of a given day. In theory, a decision made when handling one of these pieces should not have much, if any, impact on similar but unrelated subsequent decisions. Yet Uri Simonsohn of the University of Pennsylvania and Francesca Gino at Harvard report in Psychological Science that this is not how things work out in practice.
Dr Simonsohn and Dr Gino knew from studies done in other laboratories that people are, on the whole, poor at considering background information when making individual decisions. At first glance this might seem like a strength that grants the ability to make judgments which are unbiased by external factors. But in a world of quotas and limits - in other words, the world in which most professional people operate - the two researchers suspected that it was actually a weakness. They speculated that an inability to consider the big picture was leading decision-makers to be biased by the daily samples of information they were working with. For example, they theorised that a judge fearful of appearing too soft on crime might be more likely to send someone to prison if he had already sentenced five or six other defendants only to probation on that day.
To test this idea, they turned their attention to the university-admissions process. Admissions officers interview hundreds of applicants every year, at a rate of 4-5 a day, and can offer entry to about 40% of them. In theory, the success of an applicant should not depend on the few others chosen randomly for interview during the same day, but Dr Simonsohn and Dr Gino suspected the truth was otherwise.
They studied the results of 9,323 MBA interviews conducted by 31 admissions officers. The interviewers had rated applicants on a scale of one to five. This scale took numerous factors, including communication skills, personal drive, team-working ability and personal accomplishments, into consideration. The scores from this rating were then used in conjunction with an applicant's score on the Graduate Management Admission Test, or GMAT, a standardised exam which is marked out of 800 points, to make a decision on whether to accept him or her.
Dr Simonsohn and Dr Gino discovered that their hunch was right. If the score of the previous candidate in a daily series of interviewees was 0.75 points or more higher than that of the one before that, then the score for the next applicant would drop by an average of 0.075 points. This might sound small, but to undo the effects of such a decrease a candidate would need 30 more GMAT points than would otherwise have been necessary.
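The implied exchange rate between the two scales makes the size of the effect easier to feel. Assuming, purely for illustration, that the trade-off scales linearly, a one-line calculation using only the figures reported above gives:

```python
# Figures reported above: a 0.075-point drop on the 1-to-5 interview scale
# takes about 30 extra GMAT points to offset.
interview_drop = 0.075
gmat_offset = 30

print(gmat_offset / interview_drop)
# 400.0 -> each full interview point is worth roughly 400 GMAT points (half the scale)
```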
As for why people behave this way, Dr Simonsohn proposes that after accepting a number of strong candidates, interviewers might form the illogical expectation that a weaker candidate 'is due'. Alternatively, he suggests that interviewers may be engaging in mental accounting that simplifies the task of maintaining a given long-term acceptance rate, by trying to apply this rate to each daily group of candidates. Regardless of the reason, if this sort of thinking proves to have a similar effect on the judgments of those in other fields, such as law and medicine, it could be responsible for far worse things than the rejection of qualified business-school candidates.
Local Manufacture
The dairy farms that once draped the countryside here were paved over so the Japanese carmaker Nissan could build its first American assembly plant. Eighty miles to the south, another green pasture was replaced by a Nissan engine factory, and across Tennessee about 100 Nissan suppliers dot the landscape, making steel in Murfreesboro, air conditioning units in Lewisburg, transmission parts in Portland.
Three decades ago, none of this existed. The conventional wisdom at the time was simple: Japanese automakers would not build many cars anywhere but Japan, where supply chains were in place, costs were tightly controlled and the reputation for quality was unparalleled.
"They were very unfamiliar doing anything outside Japan," said Senator Lamar Alexander, a Republican who was governor of Tennessee when Nissan opened its factory here in 1983. "They were tentative and awkward even discussing it."
Today, echoes of that conventional wisdom can be heard within the American technology industry. For years, high-tech executives have argued that the United States cannot compete in making the most popular electronic devices. Companies like Apple, Dell and Hewlett-Packard, which rely on huge Asian factories, assert that many types of manufacturing would be too costly and inefficient in America. Only overseas, they have said, can they find an abundance of educated midlevel engineers, low-wage workers and at-the-ready suppliers.
But the migration of Japanese auto manufacturing to the United States over the last 30 years offers a case study in how the unlikeliest of transformations can unfold. Despite the decline of American car companies, the United States today remains one of the top auto manufacturers and employers in the world. Japanese and other foreign companies account for more than 40 percent of cars built in the United States, employing about 95,000 people directly and hundreds of thousands more among parts suppliers.
The United States gained these jobs through a combination of public and Congressional pressure on Japan, 'voluntary' quotas on car exports from Japan and incentives like tax breaks that encouraged Japanese automakers to build factories in America. Pressuring technology companies to move manufacturing here would pose different challenges. For one thing, Apple and many other technology giants are American, not foreign, and so are viewed differently by politicians and the public. But it is possible and the benefits might be worth it, some economists say.
"The U.S. has a long history of demanding that companies build here if they want to sell here, because it jump-starts industries," said Clyde V. Prestowitz Jr., a senior trade official in the Reagan administration who helped negotiate with Japan in the 1980s. The government could also encourage domestic production of technologies, including display manufacturing and advanced semiconductor fabrication, that would nurture new industries. "Instead, we let those jobs go to Asia, and then the supply chains follow, and then R&D follows, and soon it makes sense to build everything overseas," he said. "If Apple or Congress wanted to make the valuable parts of the iPhone in America, it wouldn't be hard."
One country has recently succeeded at forcing technology jobs to relocate. Last year, Brazilian politicians used subsidies and the threat of continued high tariffs on imports to persuade Foxconn - which makes smartphones and computers in Asia for dozens of technology companies - to start producing iPhones, iPads and other devices in a factory north of Sao Paulo. Today, the new plant has 1,000 workers, and could employ many more. Apple and Foxconn declined to comment about the specifics of their Brazilian manufacturing.
However, a developing country like Brazil can adopt trade policies that would be difficult for the United States to pursue. Taking a hard line to reduce imports of technology goods and encourage domestic manufacturing could violate international trade agreements and set off a trade confrontation. "We're a long way from even talking about limits on imported iPhones or iPads," said a former high-ranking Obama administration official who did not want to be named because he was not authorized to speak.
Protectionism is bad policy in today's globalized world, many economists argue. Countries benefit most when they concentrate on what they do best, and trade barriers harm consumers by driving up prices and undermine a nation's competitiveness by shielding industries from market forces that spur innovation. The United States needs to create new jobs, economists say, but it should not chase low-paid electronics assembly work that at some point may be replaced by robots. Instead, it should focus on higher-paying jobs.
"Closing our border is a 20th-century thought, and it will only weaken the economy over the long term," said Andrew N. Liveris, president of Dow Chemical and co-chairman of the Advanced Manufacturing Partnership, a group of executives and academics convened by the White House who have studied ways to encourage domestic manufacturing.
The debate is not just economic, however. Increasingly, it is political. With high unemployment, the question of how to create jobs has taken a role in the presidential race between President Obama and Mitt Romney, and both have traded barbs on outsourcing by American companies.
Although the car and technology industries are different, and the eras are separated by 30 years, the resurgence of American auto manufacturing in the 1980s is an example of how one industry created tens of thousands of good jobs. Since its first pickup truck rolled off the line here on June 16, 1983, Nissan has produced more than seven million vehicles in the United States. It now employs 15,000 people in this country. It makes more than a half-million cars, trucks and S.U.V.'s a year, with the plant in Smyrna building six models, including the soon-to-be-produced, all-electric Nissan Leaf.
Other foreign carmakers settled in America as well - Honda, Toyota, Hyundai, BMW, Mercedes-Benz and, most recently, Volkswagen, which returned after a failed attempt decades ago. And some of those factories have become among the best in the world. The Nissan engine plant in Decherd, Tenn., for instance, exports engines to Japan. "We have 14 companies now that produce light vehicles here, and that is enormous," said Thomas Klier, a senior economist at the Federal Reserve Bank of Chicago. "There is no major market in the world that compares to it."
Tennessee?
"Where is Tennessee?"
It was a blunt question, posed by Takashi Ishihara, president of Nissan, to Mr. Alexander, then the state's governor.

Mr. Alexander, who had journeyed to Tokyo in 1979 to pitch Nissan on building a plant in his state, was ready with his answer: "I said, 'It's right in the middle.'" To help out, he displayed a satellite photograph of the United States at night, showing the bright lights shining on the East and West Coasts and the relative darkness of Tennessee.
"We were the third-poorest state in the nation back then," Mr. Alexander said. "President Carter had told all the U.S. governors to go to Japan and persuade the Japanese to make in the U.S. what they sell in the U.S." Mr. Alexander recalled that the Nissan executives were 'incredibly anxious' about testing their homegrown production systems abroad. Could the Japanese car companies achieve the same quality using American workers?
Despite the concerns, pressures were growing for Nissan to break out of its manufacturing cocoon in Japan, including currency fluctuations that made exporting more expensive. The final push came from American anger as imports grabbed one-fourth of the United States market. "Japanese automakers had achieved rapid growth by exporting to America," said Hidetoshi Imazu, a senior manufacturing executive at Nissan in Tokyo who led the development of the plant here in its early years. "But it was clear that model would no longer work."
In the fall of 1980, Congress held hearings to limit Japanese imports. With tensions running high, Nissan announced plans for the $300 million assembly plant in Smyrna. That gave the company a head start in circumventing looming restrictions. In May 1981, Japan agreed to limit exports to America to 1.68 million cars annually, a 7 percent reduction from a year earlier. In addition, the United States imposed a 25 percent tax on imported pickup trucks.
"The pressure put on the Japanese was absolutely critical for them to agree to export restraints," said Stephen D. Cohen, a professor emeritus of international studies at American University.
Rural Tennessee may not have seemed a likely place to build a giant automotive factory, but its location was actually a selling point. It was far from Detroit and the United Auto Workers - and the Japanese wanted to work without what they saw as union interference.
Nissan's choice of Tennessee was not popular with everyone. On a 20-degree February morning in 1981, trade unionists jeered Mr. Alexander and Nissan executives as they turned the first shovelfuls of dirt for the factory, protesting nonunion construction crews. An airplane circled overhead, urging a boycott of Japanese vehicles.
Standing nearby was Marvin Runyon, a 37-year veteran of Ford who had been recruited as Nissan's first American plant manager. In a later interview with The New York Times, Mr. Runyon was asked what his old colleagues in Detroit thought of his new job. "They wish me luck," he said. "But not too much."
Success did not come overnight. Many Japanese were skeptical of their new colleagues. Americans, they had heard, were soft, lazy and incapable of mastering the precision manufacturing that had made Nissan great.
To train its new American engineers, Nissan flew workers to its Zama factory in eastern Japan. There the Nissan officials, assisted by English-speaking Japanese workers called 'communication helpers,' imparted the intricacies of the company's production techniques to the Americans.
Beginnings at Nissan
Early on, Nissan guarded against quality concerns by not relying on parts from American suppliers. Most components were either shipped from Japan or produced by Japanese companies that set up operations nearby. "We felt sourcing parts in the U.S. wouldn't allow us to make cars in our own way," said Mr. Imazu, the Nissan manufacturing executive.
By 1985, Nissan was confident enough about the quality that it added passenger cars to Smyrna's assembly lines. Gradually, American parts makers were allowed to bid on supply contracts. Even that came amid arm-twisting by Congress, which passed a law in 1992 requiring auto makers to inform consumers of the percentage of parts in United States-made cars that came from North America, Asia or elsewhere.
Calsonic Kansei of Tokyo opened its first plant in Tennessee in the mid-1980s, and now employs about 2,600 Americans making instrument panels, exhaust systems, and heating and cooling modules for Nissan. "The Japanese suppliers were encouraged to localize production," said Matt Mulliniks, vice president for sales and marketing at Calsonic Kansei in Tennessee.
Nissan's early doubts are reflected in recent debates over whether American workers can compete with overseas laborers. Within the technology industry, workers in Asia are viewed as hungrier and more willing to tolerate harsh work schedules to achieve productivity. The numbingly repetitive jobs of assembling cellphones and tablet computers, executives say, would be scorned here; they worry that many Americans would not make the sacrifices that success demands, and want too much vacation time and predictable work schedules.
In the auto industry, the belief that American workers could not match Japanese workers has long since faded. "A big part of the reluctance of Japanese automakers to come to the U.S. was the belief that their manufacturing systems could only work with loyal Japanese employees," said Dr. Cohen, the American University professor. "Everybody was surprised how quickly the systems were adopted here."
This year, Nissan held an internal competition to decide where to produce a new Infiniti-brand luxury sport utility vehicle. The plant in Smyrna was vying against one in Japan.
The surprising winner: Smyrna.
"All my life I've heard about how great luxury brands like Lexus and BMW are," said Richard Soloman, a 20-year veteran at the Smyrna plant. "Now we will be building a vehicle of that standard right here in Tennessee."
The Japanese presence has rippled through the South. But no place has benefited to the extent of Tennessee, which counts more than 60,000 jobs related to automobile and parts production. The state's jobless rate, which exceeded the national average by a significant margin in 1983 when Nissan opened its plant, is now lower - 8.1 percent in June versus 8.2 percent nationwide.
Brazil's Breakthrough
Earlier this year, when Apple's chief executive, Tim Cook, took the stage at a technology conference, he was asked if his company - which once made computers in America, but now locates most assembly in China and other countries - would ever build another product in the United States.
"I hope so," Mr. Cook replied. "One day."
That day came recently for Brazil.
In Jundiai, an hour's drive from Sao Paulo, a strip of asphalt has recently been rechristened Avenida Steve Jobs, or Steve Jobs Avenue. Alongside is a factory where workers make iPhones and iPads. Brazil got these jobs through tactics the United States once used to persuade Nissan and other foreign carmakers to build plants in America: it cajoled Apple and Foxconn with a combination of financial incentives and import penalties.
Like the United States, Brazil is a big market - the third largest for computers after China and the United States. It has long imposed tariffs on imported technology products to encourage domestic manufacturing. Those fees mean that smartphones and laptops often cost consumers more in Brazil, and that domestic manufacturers can be at a disadvantage if their products require imported parts.
In April 2011, Brazil's president, Dilma Rousseff, traveled to Asia with a pitch, much as Mr. Alexander did in 1979. The federal government would give Foxconn tax breaks, subsidized loans and special access through customs and lower tariffs for imported parts if it started assembling Apple products in Brazil, where Foxconn was already producing electronics for Dell, Sony and Hewlett-Packard.
Foxconn agreed. Within months, new Brazilian engineers were flying to China for training. By year's end, Foxconn was making iPhones in Jundiai, and it began making iPads there in early 2012, according to Evandro Oliveira Santos, director of the Jundiai Metalworkers Union, whose members work at the plant. Stores now carry Apple products with the inscription Fabricado no Brasil - Made in Brazil.
Apple products remain expensive; the latest iPad, for instance, costs about $760 in Brazil, compared with $499 in the United States. But because those devices are made in Brazil and lower tariffs are charged on parts used to assemble them, Foxconn and Apple are pocketing larger shares of the profits, analysts say, offsetting the increased costs of building outside China.
Foxconn declined to discuss specific customers, but said that the Brazilian government's incentive programs had influenced its decisions and that the company expected to generate more Brazilian jobs and aid the government's goal of furthering the country's technology industries.
Indeed, Brazil hopes that compelling Foxconn to assemble iPhones and iPads domestically will help set off a technology explosion. Ms. Rousseff has said that Foxconn could invest $12 billion more in Brazil. And as an electronics supply chain develops within the country, as it has in China, the expectation is that other manufacturers will build factories.
The government also hopes to use consumer electronics as a springboard for more advanced manufacturing. Targeting high-tech parts like computer displays and semiconductors could help Brazil reduce its trade deficit in these products and develop a robust homegrown industry, said Virgilio Almeida, information technology secretary at the Ministry of Science and Technology. "They are deemed high priority in the Brazilian industrial policy and are part of the Greater Brazil Plan," he said. "Brazil has developed specific policies that grant incentives to foment research, development and industrial production."
America's Gap
Throughout his term, Mr. Obama has regularly gathered advisers to discuss manufacturing, according to former high-ranking White House officials. As one meeting was breaking up, Mr. Obama casually tapped an aide's iPhone to raise a point. Since the device is designed domestically, he said, it should be possible to make it in this country as well.
But it became clear at the meetings that there were differences of opinion over how best to bring manufacturing home, according to people familiar with the discussions who did not want to be named because the sessions were private. Everyone shared the same goal: establishing a level playing field and creating as many jobs in America as possible. But the debate centered, in part, on choosing among different tactics the American government has used in the past: penalties like tariffs against foreign countries that do not play by the rules or incentives like tax breaks to encourage more domestic manufacturing. On one side were officials like Ron Bloom, until earlier this year the president's senior counselor for manufacturing policy, who favored more aggressive stances to counter policies used by Asian countries. He argued that the United States should fight China's efforts to keep its currency weak. If China's currency were stronger, American companies might find it costlier to make their goods in China and could have greater incentive to manufacture more in this country.
Aligned on the other side at times were two powerful voices: Lawrence H. Summers, the top economic adviser to Mr. Obama until 2010, and Treasury Secretary Timothy F. Geithner. Along with many economists, Mr. Summers argued that an overly aggressive trade stance could hurt manufacturing - by, for instance, pushing up the price of imported steel used by carmakers - and over time, drive companies away.
Mr. Geithner thought diplomacy was more effective than confrontational tactics like labeling China a currency manipulator. "He told us, 'It's going to be a trade war if we go there,'" according to a person who attended the meetings. But this person countered that China would respond only to pressure. "What doesn't work is the quiet stuff," he said.
Mr. Summers, in a recent interview, declined to discuss his role at the White House. But speaking more broadly, he said that protectionist measures might incite new domestic manufacturing in the short run, but that it would come at a high price. "People will pay more for the product because it's produced in a place that can't make it at the lowest cost," he said. "It burdens exporters because they pay more for their inputs. And it removes the spur of competition."
A spokeswoman for Mr. Geithner said, "A multidimensional approach to tough yet smart engagement with China is the most effective way to level the playing field." This strategy has had some success in persuading China to increase the value of its currency, she noted.
One of the president's economic advisers also said that, despite some differences, Mr. Obama's team, including Mr. Geithner and Mr. Summers, united to preserve manufacturing jobs in a critical area by bailing out the auto industry in the wake of the financial crisis.
But the divisions within the White House have often frustrated those who wanted a sharper focus on manufacturing. "The critics would say we didn't really fight for manufacturing policy," said another former high-ranking official who took part in many of those meetings and who did not want to be named because the discussions were confidential. "They have a strong point."
Now, with unemployment high and a growing debate over outsourcing of jobs, manufacturing is on the political agenda. In March, Gene B. Sperling, director of the White House's National Economic Council, outlined initiatives - including tax breaks for building factories here, infrastructure investments and going after "unfair trade practices" - to reinvigorate manufacturing. In May, the Commerce Department announced tariffs on Chinese solar panels for selling below fair-market value. The White House has challenged China's trade practices on tires and rare-earth metals, and has established an 'interagency trade enforcement center' to combat unfair trade.
Washington, however, has generally shied from addressing the protectionist measures of countries like China with countermeasures, as politicians once did against Japan.
After the Senate passed legislation last year imposing tariffs on nations whose currency is undervalued - a salvo aimed at China - the bill went nowhere in the House of Representatives, and the White House indicated it did not like the proposal.
However, champions of 'in-sourcing' legislation - which takes away benefits from companies moving jobs abroad and provides incentives for those bringing jobs back - said the tenor of the debate was changing. "The public by and large has been betrayed by large American corporations that outsource. I think Congress is catching on to that," said Senator Sherrod Brown, Democrat of Ohio.
Still, he does not advocate tariffs or quotas. Senator Debbie Stabenow, Democrat of Michigan, also favors tax breaks, rather than penalties. "I love my iPad," she said. "And I want it made in America."
One reason for the difference today: Unlike in the 1980s, when Japanese auto imports upset many voters, there has been little public outcry over imported cellphones and computers.
Back then, American workers were losing jobs as imports from Japanese companies cut into sales of the Big Three automakers.
But consumer electronics are different. Though some jobs have moved to Asia, many were never here to begin with. And the biggest technology importers - like Apple, Hewlett-Packard, Dell and Microsoft - are American companies.
Today, many consumers do not know or care where their smartphones are made. "Where it was built, what it means for politics, how it affects the economy," said Raymond Stata, a founder of Analog Devices, one of the largest semiconductor manufacturers, "that's not something people think about when they buy."
Outsourcing
Small-business owners are like Swiss Army knives: expected to handle dozens of specialized tasks without falling apart. But even the sharpest entrepreneurs have it tough this time of year - inevitably, some will outsource part of their workload to other enterprising people.
This season, dozens of start-ups are competing to take on your holiday headaches. Here are four time-gobbling situations and the young companies vying to eliminate them:
CHALLENGE Your to-do list is crammed with tiny tasks. How can you delegate them cheaply?
ONE SOLUTION For $5 you could drink a large latte and work through the night. Or you could hire a minion at Fiverr, which bills itself as "the world's largest marketplace for small services." Starting at $5 apiece, tasks include designing business cards and letterheads, sending out handwritten cards, editing newsletters, making short commercial videos and throwing darts at a picture of your rival.
"Pretty much anything you imagine can be found on Fiverr," said the company's chief executive, Micha Kaufman, who set out in 2010 with Shai Wininger to build what Mr. Kaufman calls "an eBay for services."
"It's giving people the tools to do business with the entire world," he added.
Fiverr, with headquarters in Tel Aviv and offices in New York and Amsterdam, has more than a million active buyers and sellers across 200 countries, Mr. Kaufman said. He would not disclose revenue or the number of sales his site has brokered so far. Fiverr has raised $20 million in financing and has 60 full-time staff members. The company collects a 20 percent commission on each sale.
THE COMPETITION Fiverr's success has inspired an army of imitators, including Gig Me 5, Gigbucks, TenBux and Zeerk. Building and selling Fiverr copycat sites has also become a cottage industry for online software developers. Asked whether he took this as a compliment, Mr. Kaufman replied dryly, "One of my friends said, 'It may be flattering, but it's a very annoying way to flatter you.' "
*** CHALLENGE You are overwhelmed by errands and other location-specific jobs that cannot be farmed out to the other side of the planet. You need an affordable gofer: competent, trustworthy, local.
ONE SOLUTION TaskRabbit is an on-demand service for handling quick jobs: assembling Ikea furniture, packing boxes, wrapping gifts, mailing invitations or even carrying awkward objects like Christmas trees. The company sends requests to a network of 'rabbits' - errand-runners screened through video interviews and background checks - who bid for the work. Last month, 80 of them were hired to wait on Black Friday lines.
Leah Busque got the idea for TaskRabbit one night in 2008, when she was heading out to dinner and realized she had no food in the house for Kobe, her yellow Labrador. Envisioning an online service for dispatching errand-runners, she quit her job as an I.B.M. software engineer to build it. A year later, she won a slot in Facebook's now defunct incubator program. Shortly thereafter she moved her company, then called RunMyErrand, to San Francisco from Boston.
Now TaskRabbit has 60 employees at its headquarters, along with more than 4,000 freelancers wrangling tasks for customers in the Bay Area as well as in Austin, Tex.; Boston; Chicago; Los Angeles; New York; Portland, Ore.; and San Antonio.
TaskRabbit has raised almost $40 million in financing, and revenue nearly quintupled this year, Ms. Busque said. She would not disclose sales figures but said the company typically charges users 18 percent on top of its freelancers' fees. Small businesses, she said, are her fastest-growing group of customers.
THE COMPETITION Agent Anything, Exec., Fancy Hands, PAForADay and Zaarly.
*** CHALLENGE You want to delegate complex, highly specialized tasks, but it's hard to find people whose expertise matches your needs.
ONE SOLUTION SkillPages connects skilled workers with those who want to hire them. The site showcases an array of specialists - beekeepers, tree surgeons, witches, clog dancers - along with professionals with more conventional business skills, like payroll administrators, social media marketers and typists.
Iain Mac Donald decided to start SkillPages after seeking a tree cutter online to do work in his yard. "This guy arrives with a huge truck, and he could have taken down a forest," Mr. Mac Donald said. "He was going to charge me $3,000. It just wasn't right."
Mr. Mac Donald figured there had to be a way to help make better matches. To that end, SkillPages identifies specialists whom users' families and friends may already know through social networks like Facebook, LinkedIn and Twitter. Users can also view work samples online and contact members directly.
Based in Ireland, SkillPages went live in 2011 and opened an office in Palo Alto, Calif., this year. The company's 35 employees handle traffic from more than nine million users worldwide, 1.5 million of them in North America. The company has received $18.5 million in financing, said Mr. Mac Donald, the chief executive, declining to disclose sales figures.
SkillPages' basic services are free. To make money, it sells advertising space and offers premium memberships with stand-alone Web sites. Next year, Mr. Mac Donald plans to offer a paid matchmaking service for talent-seeking companies. He is also building a 'targeted offers' program that will let niche vendors present deals on products and services to members with relevant expertise. The vendors will pay SkillPages a bounty for each sale.
THE COMPETITION Guru, oDesk and Elance also focus on skilled work. LinkedIn added a 'skills' component to its profiles last year.
*** CHALLENGE Your business moved. In days of yore, you would just update the address in the local Yellow Pages. But now that information appears on myriad Web sites like Yelp, Citysearch, Yahoo and Foursquare. How do you adjust them all?
ONE SOLUTION Yext gives business owners a single dashboard for updating directory information and posting special offers across 57 listing sites. After Hurricane Sandy, about 2,300 users logged on to post closings and other storm-related messages, according to Yext's chief executive, Howard Lerman. "My favorite was one guy who put up a 24-hour elevator rescue hot line," he said.
Founded in 2006 in New York City, Yext, in its first incarnation, drove sales leads to other businesses on a pay-per-call basis. In August, Mr. Lerman sold that service, which he said was profitable and generating eight-figure revenue. He wanted to refocus on expanding Yext's fledgling directory information product, which came out in 2011. "I'm perfectly happy with the word 'gamble,' " he said. "You should only take big bets in technology."
Yext has raised $27 million in financing so far for its listings service, which passed the 100,000-subscriber mark this month and generates more than $30 million in annual revenue, according to Mr. Lerman. The full service costs $42 a month and also notifies users when new reviews of their companies appear on listings sites.
"To go to all of those sites individually and try to manage your information or update stuff would take hours and hours and hours," Mr. Lerman said. "Yext is all about businesses owning their own data."
THE COMPETITION Localeze, Express Update and CityGrid.
Are Droids Taking Our Jobs?
(Andrew McAfee is a principal research scientist at MIT's Center for Digital Business. He co-authored the book Race Against the Machine with Erik Brynjolfsson, in which they examined how white-collar jobs may be replaced by artificial intelligence in the same way that many blue-collar jobs have been replaced by machinery. He recently gave an insightful TED talk entitled "Are Droids Taking Our Jobs?" in which he pointed out both the reasons to be optimistic about the future and the reasons for concern.)
What type of technology will replace white-collar jobs in the future?
I think it is going to be the suite of technologies that we label artificial intelligence and/or machine learning. So machines that can do things like understand what we are saying, talk back to us, get an accurate answer to a question, translate between human languages, do a lot of these things that we used to absolutely need people for. Technologies to me are demonstrating that they are adequate and in many cases excellent at these things.
In your TED talk you refer to algorithms that can write news stories. Are journalists at risk?
Journalism is one of those cases where the low end is at risk and the high end seems safe for now. The long stories I read, online and in print, that require research, synthesis and investigation, I don't see computers doing in the immediate term.
Output will go up, quality will go up and prices will go down. This is an unambiguously good thing
However, they are already writing perfectly good earnings summaries, summaries of sporting events, a lot of things where you can give an algorithm basic facts and it will put together a perfectly good narrative of those - that is happening right now.
How will humans benefit from being made redundant?
You're asking two different questions. You're asking, how will society benefit and then you're asking, what happens to the people who are displaced by this technology. I think we have to unpack those two. On the question of how society will benefit, I find it a pretty easy question to answer. They will benefit in the same way they did from previous waves of really powerful technology. Output will go up, quality will go up and prices will go down. From any kind of economics point of view that is an unambiguously good thing. It means that more people have access to more stuff and that we get to live in many ways like the rich got to live a few generations ago.
I find it really hard to overemphasise how beneficial and how powerful that is going to be. It is why I tried to end my TED talk on a really optimistic note - we are going to have access to some amazing stuff. Some of that is going to be available to us almost no matter what our wealth and income levels are. A cute way to say it is that Warren Buffett doesn't have any more Wikipedia than I do. He can buy more Google stock than I can, but he has hardly any more access to Google's resources than I do. This is pretty amazing.
The second part of your question is, what happens to people who are displaced? I am personally concerned about that issue. In previous waves of technology there has been a lot of temporary unemployment but it has always turned out that new companies, new industries and new needs come along for which we need people and the people who are displaced find new things to do for a living. So history would teach us to be very confident. The reason I am less confident this time around is that when I look at the total bundle of skills that a person might offer a workforce or an employer, all previous waves of technology have encroached on a pretty small percentage of that bundle of skills and almost not at all on the mental bundle.
Wikipedia is free but health care is still very far from free in my country
What I see happening now is the encroachment on that skill bundle by digital technology. That makes me less confident that history is going to be the same this time around and that all those waves of displaced people are just going to find new jobs and new homes for their talent.
Do you see it becoming a more equal society economically?
No, the trend is pretty clear. The UK and the US are becoming more unequal over time. That trend is really clear and it's bad news. I can't find a good story to tell about rising inequality, especially when the people at the bottom aren't just holding steady or growing more slowly than the people at the top - in many ways they are heading in the wrong direction. When I look at technological improvement, the kind of things that are coming, I see that trend accelerating, and that makes me nervous.
So are you optimistic or pessimistic about the future?
There is going to be enough stuff to go around, that I'm very confident about. Because of these amazing technologies, materially, we're going to have a very affluent society. The question is, are people going to have enough access to all that wealth? Because even though Wikipedia is free, health care is still very far from free in my country, housing is still expensive, and education in many areas is still really expensive, so even though we're getting a lot more stuff and some prices are going down, that doesn't mean we are suddenly freed from the need to have wealth and to have a job and income. Navigating this transition is going to be the big challenge we face. It's maybe not something we face in the next year but certainly over the next twenty years.
Work and Income
Here's my big hazy generalization about what the left gets wrong about the economy. The left is rightly concerned about the real wages and incomes of ordinary working people. But the left fallaciously analogizes from a bargaining dynamic at a single firm to the entire economy.
The way a given worker or class of workers improves his real wages is by persuading his boss to give him a nominal raise that outpaces the growth in the cost of living. But the way the economy as a whole works is that my income is your cost of living. If Slate doubled my salary, I'd be thrilled. But if everyone's boss doubled everyone's salary starting on Monday, we'd just have one-off inflation. As it happens, a little inflation would do the macroeconomy some good at the moment. But the point still holds that while "you get a raise" is the way to raise your living standards, "everyone gets a raise" is not the way to raise everyone's living standards.
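A toy calculation makes the point concrete. The sketch below (plain Python, with made-up salary and price-index numbers chosen purely for illustration) contrasts one person getting a raise with everyone getting the same raise:

    # Hypothetical numbers for illustration only.
    price_level = 1.0            # index of the cost of living
    my_wage = 50_000             # my nominal salary

    # Case 1: only my salary doubles; everyone else's incomes, and hence
    # prices, stay put, so my purchasing power doubles.
    real_wage_if_only_i_get_raise = (my_wage * 2) / price_level

    # Case 2: every salary doubles; since my income is your cost of living,
    # the price level (roughly) doubles too - one-off inflation.
    real_wage_if_everyone_gets_raise = (my_wage * 2) / (price_level * 2)

    print(real_wage_if_only_i_get_raise)     # 100000.0 - twice the real wage
    print(real_wage_if_everyone_gets_raise)  # 50000.0  - real wage unchanged

In the second case the doubled price level eats the doubled paycheck, which is the one-off inflation described above.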
To raise real wages across the economy rather than for some favored group of insiders, what you need to do is make things cheaper. And that generally entails an accumulated series of events that make selected groups of people's nominal incomes lower. If everyone could be chauffeured around cheaply by autonomous cars, then the people who currently earn a living driving cabs and buses and trucks will lose out. If computers take over routine diagnostic work, then doctors will lose out. If online courses that serve tens of thousands of students displace community colleges, then community college instructors will lose out.
The Internet, as best as I can tell, has been a total disaster for the relative earnings of journalists. But by the same token, in a small way the widespread availability of free online content has raised the real wages of everyone else. Newspaper and magazine subscriptions just weren't ever a very large part of anyone's consumption bundles.
But the way to raise real incomes across the board is for that same wave of technological change that's transformed the media to start transforming the health care, education and transportation sectors. Cheaper health care and college and better transportation amount to a raise for every janitor, short-order cook, yoga instructor, and nurse in America.
A BA isn't worth much anymore
There's a growing perception out there that a college degree no longer delivers the value that it used to.
Too many college kids are living in Mom's basement, or working at Starbucks. Like most personal finance columnists, I get the letters from them: what do I do? How do I fix this? For many, the answer is grad school. But I get the letters from grad students too. A while back, I found myself talking to a professor whose school has a number of impressive-sounding graduate programs that were originally conceived as add-ons for a professional degree in law or medicine or business. They are now attracting a number of students who just go for the standalone degree. He didn't understand what the career path was for these kids, and he wasn't sure that they did either.
"It sounds good, so they can persuade their parents to pay for it," he said, a touch guiltily.
A new paper from Paul Beaudry, David Green, and Benjamin Sand argues that these worried kids--and their worried parents--are not just imagining things. The phenomenon is all too real. Skilled workers with higher degrees are increasingly ending up in lower-skilled jobs that don't really require a degree--and in the process, they're pushing unskilled workers out of the labor force altogether.
The paper charts the average cognitive content of the work that college graduates are doing. In the 1990-2000 period it spiked, as the IT revolution created new opportunities for "thought work". Then it started to fall. A brief recovery around 2006 was pretty much squashed by the financial crisis. Meanwhile, the amount of routine work these graduates are doing has risen.
The authors think they have an explanation: during the great IT boom, the returns to cognitive skill rose. Since then, the process has gone into reverse: demand for cognitive tasks is falling. Perhaps this is because installing robots consumes more resources than maintaining them, or perhaps it's simply that the robots are doing an increasing number of those cognitive tasks. But whatever the reason, we no longer want or need so many skilled workers doing non-routine tasks with a big analytical component. The workers who can't get those jobs are taking less skilled ones. The lowest-skilled workers are dropping out entirely, many of them probably ending up on disability.
This is, of course, highly speculative: it's one paper. But it would explain a lot. Six months ago, I made quite a splash with a Newsweek story arguing that we may be overinvesting in college. There were basically three parts to this argument: first, that a lot of college attendance is signalling activity rather than skill acquisition; second, that more students with BAs are ending up in jobs that don't require them; and third, that a substantial number of kids don't finish, washing out with a lot of debt and no commensurate earning power to pay it.
My many critics responded that the wage premium for a college graduate is higher than ever. But this is consistent with the story that Beaudry, et al are telling: lower skilled workers are increasingly falling out of the higher paying jobs altogether as college graduates move down the skill ladder. So while college graduates are having trouble getting college-style jobs, the unskilled workers are doing even worse. This is not necessarily evidence that the college degree is producing the wage--it might be that folks capable of getting into college would be able to get that barista job even if they didn't go.
Obviously, if Beaudry et al are right, this is ferociously depressing news. It suggests that we're pushing more and more people into (more and more expensive) college programs, even as the number of jobs in which they can use those skills has declined. A growing number of students may be in a credentialling arms race to gain access to routine service jobs. Or maybe the productivity of our nation's wait staff is spiking as more skilled workers flood into these jobs.
Unfortunately, there's no obvious policy response to this. It's easier to create more college-educated workers through government policy than it is to create jobs for them. It's not even obvious what the personal response should be--except that if you're planning to major in English, you should maybe see if you can't get a job at Starbucks instead.
Work and IQ
FOR 20 years the champions of emotional intelligence have been telling us that EQ is more important than IQ at work. They are wrong. And no amount of assertion or anecdote can refute the evidence.
Smarter, brighter, cleverer people do better at (most) work. They acquire the knowledge and qualifications to get the good jobs. They climb further and faster. They are paid more. And they often enjoy the acclaim, respect and support of others. Certainly, soft skills help, but they amount to little without the horsepower of intelligence.
There are exceptions to the rule: the less brainy, low-wattage school failure who goes on to make it; the very clever person who becomes an also-ran.
How is it that averagely intelligent people can do so well? Intelligence is what it takes to get a job, but to succeed takes culture 'savvy' and motivation.
There are basically three reasons. First, success in some jobs does not seem to be clearly related to intelligence. Selling is one example: you have to be motivated and resilient rather than super-smart. And many entrepreneurs are, by their own admission, not that clever - energy, determination and an eye for opportunities are more important. You need to be hungry and dedicated, and these characteristics are not related to intelligence.
Second, some people can 'inherit' jobs way above their ability. Nepotism remains rife in many organisations and people can still be 'gifted' jobs by family or friends. Given a good structure and lots of support, they can appear to do well.
Third, while the Peter Principle asserts that most people are promoted to their level of incompetence, there are situations where that incompetence is never really revealed - for example, in a monopoly.
What the intellectually unremarkable yet extremely successful all have in common are considerable motivation and social skills. They have to persuade by charm rather than analysis. They have to enlist support for aspects of the job they don't do well. And many have to work very hard to maintain their position - they are dedicated plodders.
Most people recognise the never-promoted manager. These people do have various and worrying shortcomings. They tend to be slow and change-averse. Many like to micro-manage, choosing in effect to work one or more levels down where they feel more comfortable. Some become rather dependent on their teams and easy prey for any manipulative workers with bigger brains.
What of the super-smart who don't make it? Many of us can point to school chums with a fine future behind them. Top of the class, effortlessly brilliant, they ran away with all the prizes. Yet in terms of their working life, they didn't fulfil their promise. In fact, some seem to have had astonishingly mediocre careers. Why?
Again, there are several explanations. The first is that they couldn't, didn't or wouldn't accept the rules. They could not do 'corporate man'. They were insubordinate, disrespectful to seniors (and procedures) and had a taste for subversive anarchism - particularly if they were the artistic and creative type, who often seem passive-aggressive and quite unable to adapt to corporate life.
Second, bright people have a habit of asking good questions that upset those in power. As bright women have learnt, faster and better than men, there is certainly a time when it pays to hide your intelligence.
Third, bright people don't always make good people decisions. Being 'psychologically minded' and insightful does not seem to have much relation to cognitive ability. Choosing the wrong company, the wrong job and the wrong career path can soon nullify all the benefits of a big brain.
Fourth, and this is the argument of the emotional intelligence lobby, some bright people fail because they are not very good at social relationships. You have to work with and through other people. You have to motivate and inspire them. Intelligence alone does not guarantee this. People can, after all, be 'too clever by half', too cognitive. They may be dismissive of those around them who are too slow to understand concepts. They are good with ideas, bad with people.
Leadership has been simply defined as 'the ability to form, direct and energise a high-performing team that is superior to its competitors'. This is more the world of emotional than intellectual engagement. It is not much use accomplishing a better analysis of your company's problems if you can't bring your team with you.
So both cognitive and emotional intelligence are necessary for success. Alone, neither is sufficient. Motivation and knowledge can compensate . . . but only up to a point.
Future of Work
This second paragraph from a Lawrence Summers op-ed in The Washington Post provides a nice and concise rundown of what the American political elite is worried about. So nice and so concise that it helpfully illustrates how contradictory these elite worries are, though unfortunately he takes the column in a different direction:
Meanwhile, profound changes are redefining the global order. Emerging economies, led by China, are converging toward the West. Beyond the current economic downturn lies the even more serious challenge of the rise of technologies, which may increase average productivity but which also displace large numbers of workers. The combination of an aging population and the rising costs of health care and education will put pressure on future budgets.
I understand why people worry about technological unemployment. And I understand why people worry about rising entitlement spending burdens. What I don't understand is why people worry about them both simultaneously. In the technological unemployment world, we'll be able to give everyone a 2013 level of consumption goods with a radically diminished workforce, raising the question of what everyone is going to actually do. To optimists, this simply amounts to ushering in an era of utopian socialism, but to pessimists it smacks of decadence and decay.
The other worry is the opposite of this one. It's that in the future a very large share of our population will be elderly nonworkers and a very large share of our workforce will be dedicated to taking care of elderly nonworkers ("skyrocketing health care costs"), and that consequently younger people's living standards will diminish or stagnate.
Either of those things could happen, but they can't both happen. A world of widespread technological unemployment is a world in which productivity-enhancing technology is allowing us to care for the elderly in some as-yet-unknown low-cost manner. If that doesn't happen, and health care costs continue to skyrocket, we'll simply be seeing a structural shift in the patterns of employment. At some point we stopped needing very many farmers to produce enough food to eat, so workers went to factories, and we increased our consumption of manufactured goods. If robots and Chinese people can make the manufactured goods, then workers will go to hospitals and nursing homes, and we'll be able to cope with the needs of an aging population. Alternatively, if robots can also take care of the elderly, then it really isn't obvious where the labor demand will come from, but there's going to be no entitlement crisis - just a clash of social values between moralists and utopians about how to build a leisure society.
Future of Work
Advances in 3D printing, new human-robot interactions, extreme customization and shale energy are just some of the elements that will shape the future of manufacturing. As Yogi Berra said, "the future is no longer what it used to be". But he also said that sometimes it is just "deja vu all over again".
The future of manufacturing, like its past, involves astonishing changes. After all, etymologically, the term literally means handmade or handicraft. The word stuck, even though production processes changed so much that it came to mean almost the opposite. These changes, while unpredictable in their detail, seem to follow certain broad directions.
What impressed Adam Smith back in 1776 was the fact that manufactures could constantly reorganize labor, dividing tasks between people into narrower chunks that could then be either better mastered by the worker or more easily substituted by a machine. A Boeing 747 has over 6 million different parts. With the division of labor we can use much more knowledge than can be mastered by any individual. Through this process productivity increases, allowing us to do more with less.
At a more abstract level, what has been happening is simply a consequence of the laws of thermodynamics. We want to create products that satisfy our needs because nature does not provide them in the shape, quantities and locations we want them in. So we have to reorder the world. But order is not what the world tends to move into on its own - quite the contrary. So to create order, we need information about what that order is supposed to look like and knowledge about how to get there. But to create order you also need to do work, you need to use energy. That is why so many of the technological revolutions of times past have been related to mastering energy: from waterpower, to the coal-powered steam engine, the electric motor and the gasoline-powered internal combustion engine. But knowledge about how to reorder matter - from chemistry, biology and solid-state physics - and about encoding and manipulating information allows us to use even less matter and energy in achieving our goals. In going from the wax candle in 1800 to the fluorescent light bulb of 1992, the number of lumen-hours per unit of work increased by a factor of more than 44,000. The Apple II Plus with 48k of RAM cost more than a Mac Pro does today, even though it had 1/125,000th the RAM and ran at a clock frequency more than 300 times slower.
More and more information is getting packed into less matter. As a consequence, more of the work goes into manipulating information rather than matter. Jobs move from the shop floor to the design floor. A Boeing 747 or an iPhone is made mostly out of fairly common materials that are worth at most just a few dollars a pound. However, they both go for over $1,000 per pound. The bulk of the value is in the information content, not the raw materials. And that is where the jobs and the livelihoods are going.
As this happens, the nature of work changes causing job losses for some while opportunities open up for others. The future is always some combination between the promise of new possibilities and the threat to existing ones. Again this is not new. Back in 1811, skilled craftsmen - the so-called Luddites - attacked the new automated power looms designed to cost-effectively replace their handlooms and their jobs. Today, similar fears about outsourcing generate much anxiety in advanced countries. A Google search finds the phrase "jobs that cannot be outsourced" in over 470,000 different documents. In fact, in his 2013 State of the Union Address, President Obama presented a manufacturing plan to "bring jobs back" to the U.S., a phrase that suggests a return to a better past.
The truth is that new jobs are not 'coming back' but forward. The world is changing as technology advances and diffuses throughout the globe. This is also not new. But for the first time since the Industrial Revolution, the last decade has seen growth in the so-called advanced countries account for less than 50 percent of world growth, down from over 80 percent in the 1970s and 1980s. They will be down to less than 30 percent through 2014. The rest of the world is simply catching up faster than before. This can be made into good news. First, the income per capita of the new fast growers is on average less than 20 percent of that of the advanced countries. So the world is becoming less unequal. More importantly, the fast growers will need more machines, materials and knowhow and these have to come from somewhere.
The opportunity for advanced countries lies in building advanced tools needed to make more tools, and supplying the programming, finance, logistics and marketing required to intelligently manipulate matter. In this way, manufacturing will continue to pack more information and knowledge into less matter using less energy, making the world to order.
Jobs are constantly shifting and not always out of manufacturing. It used to be that farmers made their own fertilizer with dung and plowed land with their own animals. Today, fertilizers, tractors and fuel are made with manufacturing jobs that have displaced agricultural work. Even service jobs have moved into manufacturing. Penicillin destroyed thousands of jobs in Alpine sanatoria, to the delight of the sick. Accountants used to work with paper and pencil. And only yesterday, airline staff printed boarding passes at airport counters. Machines are eliminating these jobs. And these machines have to be made and programmed by people too.
So I guess it is deja vu all over again, after all. Just as before, manufacturing will make more with less. It will pack more information and knowledge into less matter using less energy while making more effective products. Jobs will keep moving from manipulating matter to playing with information and ideas, as tasks will keep moving towards design, programming, finance, logistics, marketing, commerce and repairs and into making sure that this much deeper division of labor and tasks works smoothly. And as always, the future of manufacturing will just get better.
Who They'll Need
What's the crucial career strength that employers everywhere are seeking -- even though hardly anyone is talking about it? A great way to find out is by studying this list of fast-growing occupations, as compiled by the U.S. Bureau of Labor Statistics.
Sports coaches and fitness trainers. Massage therapists, registered nurses and physical therapists. School psychologists, music tutors, preschool teachers and speech-language pathologists. Personal financial planners, chauffeurs and private detectives. These are among the fields expected to employ at least 20% more people in the U.S. by 2020.
Did you notice the common thread? Every one of these jobs is all about empathy.
In our fast-paced digital world, there's lots of hand-wringing about the ways that automation and computer technology are taking away the kinds of jobs that kept our parents and grandparents employed. Walk through a modern factory, and you'll be stunned by how few humans are needed to tend the machines. Similarly, travel agents, video editors and many other white-collar employees have been pushed to the sidelines by the digital revolution's faster and cheaper methods.
But there's no substitute for the magic of a face-to-face interaction with someone else who cares. Even the most ingenious machine-based attempts to mimic human conversation (hello, Siri) can't match the emotional richness of a real conversation with a real person.
Visit a health club, and you'll see the best personal trainers don't just march their clients through a preset run of exercises. They chat about the stresses and rewards of getting back in shape. They tease, they flatter -- maybe they even flirt a little. They connect with their clients in a way that builds people's motivation. Before long, clients keep coming back to the gym because they want to spend time with a friend, and to do something extra to win his or her respect.
It's the same story in health care or education. Technology can monitor an adult's glucose levels or a young child's counting skills quite precisely. Data by itself, though, is just a tool. The real magic happens when a borderline diabetic or a shy preschooler develops enough faith and trust in another person to embark on a new path. What the BLS data tells us is that even in a rapidly automating world, we can't automate empathy.
Last week, when the BLS reported that the U.S. economy added 175,000 jobs in May, analysts noted that one of the labor market's bright spots involved restaurants and bars. Waiters, cooks and bartenders accounted for a full 16% of the month's job growth. As the Washington Post's Neil Irwin put it, "A robot may be able to assemble a car, but a cook still grills burgers."
Actually, it's the people in the front of the restaurant -- and behind the bar -- that should command our attention. The more time we spend in the efficient but somewhat soulless world of digital connectivity, the more we will cherish a little banter with wait-staff and bartenders who know us by name. We will pay extra to mingle with other people who can keep the timeless art of conversation alive.
Future of Work
Given his calm and reasoned academic demeanor, it is easy to miss just how provocative Erik Brynjolfsson's contention really is. Brynjolfsson, a professor at the MIT Sloan School of Management, and his collaborator and coauthor Andrew McAfee have been arguing for the last year and a half that impressive advances in computer technology - from improved industrial robotics to automated translation services - are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.
That robots, automation, and software can replace people might seem obvious to anyone who's worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee's claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries.
Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity - the amount of economic value created for a given unit of input, such as an hour of labor - is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the 'great decoupling.' And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.
It's a startling assertion because it threatens the faith that many economists place in technological progress. Brynjolfsson and McAfee still believe that technology boosts productivity and makes societies wealthier, but they think that it can also have a dark side: technological progress is eliminating the need for many types of jobs and leaving the typical worker worse off than before. Brynjolfsson can point to a second chart indicating that median income is failing to rise even as the gross domestic product soars. "It's the great paradox of our era," he says. "Productivity is at record levels, innovation has never been faster, and yet at the same time, we have a falling median income and we have fewer jobs. People are falling behind because technology is advancing so fast and our skills and organizations aren't keeping up."
Brynjolfsson and McAfee are not Luddites. Indeed, they are sometimes accused of being too optimistic about the extent and speed of recent digital advances. Brynjolfsson says they began writing Race Against the Machine, the 2011 book in which they laid out much of their argument, because they wanted to explain the economic benefits of these new technologies (Brynjolfsson spent much of the 1990s sniffing out evidence that information technology was boosting rates of productivity). But it became clear to them that the same technologies making many jobs safer, easier, and more productive were also reducing the demand for many types of human workers.
Anecdotal evidence that digital technologies threaten jobs is, of course, everywhere. Robots and advanced automation have been common in many types of manufacturing for decades. In the United States and China, the world's manufacturing powerhouses, fewer people work in manufacturing today than in 1997, thanks at least in part to automation. Modern automotive plants, many of which were transformed by industrial robotics in the 1980s, routinely use machines that autonomously weld and paint body parts - tasks that were once handled by humans. Most recently, industrial robots like Rethink Robotics' Baxter (see 'The Blue-Collar Robot,' May/June 2013), more flexible and far cheaper than their predecessors, have been introduced to perform simple jobs for small manufacturers in a variety of sectors. The website of a Silicon Valley startup called Industrial Perception features a video of the robot it has designed for use in warehouses picking up and throwing boxes like a bored elephant. And such sensations as Google's driverless car suggest what automation might be able to accomplish someday soon.
A less dramatic change, but one with a potentially far larger impact on employment, is taking place in clerical work and professional services. Technologies like the Web, artificial intelligence, big data, and improved analytics - all made possible by the ever increasing availability of cheap computing power and storage capacity - are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center's intelligence systems lab and a former economics professor at Stanford University, calls it the 'autonomous economy.' It's far more subtle than the idea of robots and automation doing human jobs, he says: it involves "digital processes talking to other digital processes and creating new processes," enabling us to do many things with fewer people and making yet other human jobs obsolete.
It is this onslaught of digital processes, says Arthur, that primarily explains how productivity has grown without a significant increase in human labor. And, he says, "digital versions of human intelligence" are increasingly replacing even those jobs once thought to require people. "It will change every profession in ways we have barely seen yet," he warns.
McAfee, associate director of the MIT Center for Digital Business at the Sloan School of Management, speaks rapidly and with a certain awe as he describes advances such as Google's driverless car. Still, despite his obvious enthusiasm for the technologies, he doesn't see the recently vanished jobs coming back. The pressure on employment and the resulting inequality will only get worse, he suggests, as digital technologies - fueled with "enough computing power, data, and geeks" - continue their exponential advances over the next several decades. "I would like to be wrong," he says, "but when all these science-fiction technologies are deployed, what will we need all the people for?"
New Economy?
But are these new technologies really responsible for a decade of lackluster job growth? Many labor economists say the data are, at best, far from conclusive. Several other plausible explanations, including events related to global trade and the financial crises of the early and late 2000s, could account for the relative slowness of job creation since the turn of the century. "No one really knows," says Richard Freeman, a labor economist at Harvard University. That's because it's very difficult to 'extricate' the effects of technology from other macroeconomic effects, he says. But he's skeptical that technology would change a wide range of business sectors fast enough to explain recent job numbers.
Employment trends have polarized the workforce and hollowed out the middle class.
David Autor, an economist at MIT who has extensively studied the connections between jobs and technology, also doubts that technology could account for such an abrupt change in total employment. "There was a great sag in employment beginning in 2000. Something did change," he says. "But no one knows the cause." Moreover, he doubts that productivity has, in fact, risen robustly in the United States in the past decade (economists can disagree about that statistic because there are different ways of measuring and weighing economic inputs and outputs). If he's right, it raises the possibility that poor job growth could be simply a result of a sluggish economy. The sudden slowdown in job creation "is a big puzzle," he says, "but there's not a lot of evidence it's linked to computers."
To be sure, Autor says, computer technologies are changing the types of jobs available, and those changes "are not always for the good." At least since the 1980s, he says, computers have increasingly taken over such tasks as bookkeeping, clerical work, and repetitive production jobs in manufacturing - all of which typically provided middle-class pay. At the same time, higher-paying jobs requiring creativity and problem-solving skills, often aided by computers, have proliferated. So have low-skill jobs: demand has increased for restaurant workers, janitors, home health aides, and others doing service work that is nearly impossible to automate. The result, says Autor, has been a 'polarization' of the workforce and a 'hollowing out' of the middle class - something that has been happening in numerous industrialized countries for the last several decades. But "that is very different from saying technology is affecting the total number of jobs," he adds. "Jobs can change a lot without there being huge changes in employment rates."
What's more, even if today's digital technologies are holding down job creation, history suggests that it is most likely a temporary, albeit painful, shock; as workers adjust their skills and entrepreneurs create opportunities based on the new technologies, the number of jobs will rebound. That, at least, has always been the pattern. The question, then, is whether today's computing technologies will be different, creating long-term involuntary unemployment.
At least since the Industrial Revolution began in the 1700s, improvements in technology have changed the nature of work and destroyed some types of jobs in the process. In 1900, 41 percent of Americans worked in agriculture; by 2000, it was only 2 percent. Likewise, the proportion of Americans employed in manufacturing has dropped from 30 percent in the post-World War II years to around 10 percent today - partly because of increasing automation, especially during the 1980s.
While such changes can be painful for workers whose skills no longer match the needs of employers, Lawrence Katz, a Harvard economist, says that no historical pattern shows these shifts leading to a net decrease in jobs over an extended period. Katz has done extensive research on how technological advances have affected jobs over the last few centuries - describing, for example, how highly skilled artisans in the mid-19th century were displaced by lower-skilled workers in factories. While it can take decades for workers to acquire the expertise needed for new types of employment, he says, "we never have run out of jobs. There is no long-term trend of eliminating work for people. Over the long term, employment rates are fairly stable. People have always been able to create new jobs. People come up with new things to do."
Still, Katz doesn't dismiss the notion that there is something different about today's digital technologies - something that could affect an even broader range of work. The question, he says, is whether economic history will serve as a useful guide. Will the job disruptions caused by technology be temporary as the workforce adapts, or will we see a science-fiction scenario in which automated processes and robots with superhuman skills take over a broad swath of human tasks? Though Katz expects the historical pattern to hold, it is "genuinely a question," he says. "If technology disrupts enough, who knows what will happen?"
Dr. Watson
To get some insight into Katz's question, it is worth looking at how today's most advanced technologies are being deployed in industry. Though these technologies have undoubtedly taken over some human jobs, finding evidence of workers being displaced by machines on a large scale is not all that easy. One reason it is difficult to pinpoint the net impact on jobs is that automation is often used to make human workers more efficient, not necessarily to replace them. Rising productivity means businesses can do the same work with fewer employees, but it can also enable the businesses to expand production with their existing workers, and even to enter new markets.
Take the bright-orange Kiva robot, a boon to fledgling e-commerce companies. Created and sold by Kiva Systems, a startup that was founded in 2002 and bought by Amazon for $775 million in 2012, the robots are designed to scurry across large warehouses, fetching racks of ordered goods and delivering the products to humans who package the orders. In Kiva's large demonstration warehouse and assembly facility at its headquarters outside Boston, fleets of robots move about with seemingly endless energy: some newly assembled machines perform tests to prove they're ready to be shipped to customers around the world, while others wait to demonstrate to a visitor how they can almost instantly respond to an electronic order and bring the desired product to a worker's station.
A warehouse equipped with Kiva robots can handle up to four times as many orders as a similar unautomated warehouse, where workers might spend as much as 70 percent of their time walking about to retrieve goods. (Coincidentally or not, Amazon bought Kiva soon after a press report revealed that workers at one of the retailer's giant warehouses often walked more than 10 miles a day.)
Despite the labor-saving potential of the robots, Mick Mountz, Kiva's founder and CEO, says he doubts the machines have put many people out of work or will do so in the future. For one thing, he says, most of Kiva's customers are e-commerce retailers, some of them growing so rapidly they can't hire people fast enough. By making distribution operations cheaper and more efficient, the robotic technology has helped many of these retailers survive and even expand. Before founding Kiva, Mountz worked at Webvan, an online grocery delivery company that was one of the 1990s dot-com era's most infamous flameouts. He likes to show the numbers demonstrating that Webvan was doomed from the start; a $100 order cost the company $120 to ship. Mountz's point is clear: something as mundane as the cost of materials handling can consign a new business to an early death. Automation can solve that problem.
Meanwhile, Kiva itself is hiring. Orange balloons - the same color as the robots - hover over multiple cubicles in its sprawling office, signaling that the occupants arrived within the last month. Most of these new employees are software engineers: while the robots are the company's poster boys, its lesser-known innovations lie in the complex algorithms that guide the robots' movements and determine where in the warehouse products are stored. These algorithms help make the system adaptable. It can learn, for example, that a certain product is seldom ordered, so it should be stored in a remote area.
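A minimal sketch of that slotting idea - not Kiva's actual algorithm, just the frequency-based heuristic the paragraph describes, written in plain Python with invented product names - might look like this:

    from collections import Counter

    def assign_zones(order_history, zones):
        """Place the most frequently ordered products in the nearest zones.
        order_history: list of SKUs, one entry per unit ordered.
        zones: zone names ordered from nearest to farthest from the packers."""
        ranked = [sku for sku, _ in Counter(order_history).most_common()]
        slots_per_zone = max(1, len(ranked) // len(zones))
        placement = {}
        for i, sku in enumerate(ranked):
            zone_index = min(i // slots_per_zone, len(zones) - 1)
            placement[sku] = zones[zone_index]
        return placement

    orders = ["phone-case"] * 40 + ["paper-towels"] * 25 + ["snow-globe"] * 2
    print(assign_zones(orders, ["near", "middle", "remote"]))
    # {'phone-case': 'near', 'paper-towels': 'middle', 'snow-globe': 'remote'}

A production system would weigh many more signals - seasonality, item size, which products tend to be ordered together - but the principle is the same: popularity decides proximity.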
Though advances like these suggest how some aspects of work could be subject to automation, they also illustrate that humans still excel at certain tasks - for example, packaging various items together. Many of the traditional problems in robotics - such as how to teach a machine to recognize an object as, say, a chair - remain largely intractable and are especially difficult to solve when the robots are free to move about a relatively unstructured environment like a factory or office.
Techniques using vast amounts of computational power have gone a long way toward helping robots understand their surroundings, but John Leonard, a professor of engineering at MIT and a member of its Computer Science and Artificial Intelligence Laboratory (CSAIL), says many familiar difficulties remain. "Part of me sees accelerating progress; the other part of me sees the same old problems," he says. "I see how hard it is to do anything with robots. The big challenge is uncertainty." In other words, people are still far better at dealing with changes in their environment and reacting to unexpected events.
For that reason, Leonard says, it is easier to see how robots could work with humans than on their own in many applications. "People and robots working together can happen much more quickly than robots simply replacing humans," he says. "That's not going to happen in my lifetime at a massive scale. The semiautonomous taxi will still have a driver."
One of the friendlier, more flexible robots meant to work with humans is Rethink's Baxter. The creation of Rodney Brooks, the company's founder, Baxter needs minimal training to perform simple tasks like picking up objects and moving them to a box. It's meant for use in relatively small manufacturing facilities where conventional industrial robots would cost too much and pose too much danger to workers. The idea, says Brooks, is to have the robots take care of dull, repetitive jobs that no one wants to do.
It's hard not to instantly like Baxter, in part because it seems so eager to please. The 'eyebrows' on its display rise quizzically when it's puzzled; its arms submissively and gently retreat when bumped. Asked about the claim that such advanced industrial robots could eliminate jobs, Brooks answers simply that he doesn't see it that way. Robots, he says, can be to factory workers as electric drills are to construction workers: "It makes them more productive and efficient, but it doesn't take jobs."
The machines created at Kiva and Rethink have been cleverly designed and built to work with people, taking over the tasks that the humans often don't want to do or aren't especially good at. They are specifically designed to enhance these workers' productivity. And it's hard to see how even these increasingly sophisticated robots will replace humans in most manufacturing and industrial jobs anytime soon. But clerical and some professional jobs could be more vulnerable. That's because the marriage of artificial intelligence and big data is beginning to give machines a more humanlike ability to reason and to solve many new types of problems.
In the tony northern suburbs of New York City, IBM Research is pushing super-smart computing into the realms of such professions as medicine, finance, and customer service. IBM's efforts have resulted in Watson, a computer system best known for beating human champions on the game show Jeopardy! in 2011. That version of Watson now sits in a corner of a large data center at the research facility in Yorktown Heights, marked with a glowing plaque commemorating its glory days. Meanwhile, researchers there are already testing new generations of Watson in medicine, where the technology could help physicians diagnose diseases like cancer, evaluate patients, and prescribe treatments.
IBM likes to call it cognitive computing. Essentially, Watson uses artificial intelligence techniques, advanced natural-language processing and analytics, and massive amounts of data drawn from sources specific to a given application (in the case of health care, that means medical journals, textbooks, and information collected from the physicians or hospitals using the system). Thanks to these innovative techniques and huge amounts of computing power, it can quickly come up with advice - for example, the most recent and relevant information to guide a doctor's diagnosis and treatment decisions.
Despite the system's remarkable ability to make sense of all that data, it's still early days for Dr. Watson. While it has rudimentary abilities to learn from specific patterns and evaluate different possibilities, it is far from having the type of judgment and intuition a physician often needs. But IBM has also announced it will begin selling Watson's services to customer-support call centers, which rarely require human judgment that's quite so sophisticated. IBM says companies will rent an updated version of Watson for use as a customer service agent that responds to questions from consumers; it has already signed on several banks. Automation is nothing new in call centers, of course, but Watson's improved capacity for natural-language processing and its ability to tap into a large amount of data suggest that this system could speak plainly with callers, offering them specific advice on even technical and complex questions. It's easy to see it replacing many human holdouts in its new field.
Digital Losers
The contention that automation and digital technologies are partly responsible for today's lack of jobs has obviously touched a raw nerve for many worried about their own employment. But this is only one consequence of what Brynjolfsson and McAfee see as a broader trend. The rapid acceleration of technological progress, they say, has greatly widened the gap between economic winners and losers - the income inequalities that many economists have worried about for decades. Digital technologies tend to favor superstars, they point out. For example, someone who creates a computer program to automate tax preparation might earn millions or billions of dollars while eliminating the need for countless accountants.
New technologies are encroaching into human skills in a way that is completely unprecedented, McAfee says, and many middle-class jobs are right in the bull's-eye; even relatively high-skill work in education, medicine, and law is affected. "The middle seems to be going away," he adds. "The top and bottom are clearly getting farther apart." While technology might be only one factor, says McAfee, it has been an underappreciated one, and it is likely to become increasingly significant.
Not everyone agrees with Brynjolfsson and McAfee's conclusions - particularly the contention that the impact of recent technological change could be different from anything seen before. But it's hard to ignore their warning that technology is widening the income gap between the tech-savvy and everyone else. And even if the economy is only going through a transition similar to those it's endured before, it is an extremely painful one for many workers, and that will have to be addressed somehow. Harvard's Katz has shown that the United States prospered in the early 1900s in part because secondary education became accessible to many people at a time when employment in agriculture was drying up. The result, at least through the 1980s, was an increase in educated workers who found jobs in the industrial sectors, boosting incomes and reducing inequality. Katz's lesson: painful long-term consequences for the labor force do not follow inevitably from technological changes.
Brynjolfsson himself says he's not ready to conclude that economic progress and employment have diverged for good. "I don't know whether we can recover, but I hope we can," he says. But that, he suggests, will depend on recognizing the problem and taking steps such as investing more in the training and education of workers.
"We were lucky and steadily rising productivity raised all boats for much of the 20th century," he says. "Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that's no longer true." He adds, "It's one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit." In other words, in the race against the machine, some are likely to win while many others lose.
Google Hiring: No More Brainteasers
“We found that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.”
That was just one of the many fascinating revelations that Laszlo Bock, Google’s senior vice president for people operations, shared with me in an interview that was part of the New York Times’ special section on Big Data published Thursday.
Bock’s insights are particularly valuable because Google focuses its data-centric approach internally, not just on the outside world. It collects and analyzes a tremendous amount of information from employees (people generally participate anonymously or confidentially), and often tackles big questions such as, “What are the qualities of an effective manager?” That was the question at the core of its Project Oxygen, which I wrote about for the Times in 2011.
I asked Bock in our recent conversation about other revelations about leadership and management that had emerged from its research.
The full interview is definitely worth your time, but here are some of the highlights:
The ability to hire well is random. “Years ago, we did a study to determine whether anyone at Google is particularly good at hiring,” Bock said. “We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess, except for one guy who was highly predictive because he only interviewed people for a very specialized area, where he happened to be the world’s leading expert.”
Forget brain-teasers. Focus on behavioral questions in interviews, rather than hypotheticals. Bock said it’s better to use questions like, “Give me an example of a time when you solved an analytically difficult problem.” He added: “The interesting thing about the behavioral interview is that when you ask somebody to speak to their own experience, and you drill into that, you get two kinds of information. One is you get to see how they actually interacted in a real-world situation, and the valuable ‘meta’ information you get about the candidate is a sense of what they consider to be difficult.”
Consistency matters for leaders. “It’s important that people know you are consistent and fair in how you think about making decisions and that there’s an element of predictability. If a leader is consistent, people on their teams experience tremendous freedom, because then they know that within certain parameters, they can do whatever they want. If your manager is all over the place, you’re never going to know what you can do, and you’re going to experience it as very restrictive.”
GPAs don’t predict anything about who is going to be a successful employee. “One of the things we’ve seen from all our data crunching is that G.P.A.’s are worthless as a criteria for hiring, and test scores are worthless — no correlation at all except for brand-new college grads, where there’s a slight correlation,” Bock said. “Google famously used to ask everyone for a transcript and G.P.A.’s and test scores, but we don’t anymore, unless you’re just a few years out of school. We found that they don’t predict anything. What’s interesting is the proportion of people without any college education at Google has increased over time as well. So we have teams where you have 14 percent of the team made up of people who’ve never gone to college.”
That was a pretty remarkable insight, and I asked Bock to elaborate.
“After two or three years, your ability to perform at Google is completely unrelated to how you performed when you were in school, because the skills you required in college are very different,” he said. “You’re also fundamentally a different person. You learn and grow, you think about things differently. Another reason is that I think academic environments are artificial environments. People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment. One of my own frustrations when I was in college and grad school is that you knew the professor was looking for a specific answer. You could figure that out, but it’s much more interesting to solve problems where there isn’t an obvious answer. You want people who like figuring out stuff where there is no obvious answer.”
German POWs in England After WW2
By the autumn of 1946 the war had been over for a year but several hundred thousand enemy prisoners of war were still harvesting potatoes and sugar beet on Britain’s farms.
These PoWs, easily identified by their chocolate-brown uniforms, made up almost a quarter of Britain’s agricultural workforce, as farmers struggled to feed the country. However, despite their numbers and the hundreds of camps across the country, there are few detailed studies of the PoWs’ experiences.
“It’s quite a morally complex subject,” says Sophie Cummings, the curator of a new exhibition about German and Italian prisoners of war in Britain. The last German PoWs were not sent home until 1948. Although it was argued that this was because of administrative difficulties and lack of housing in postwar Germany, there were concerns that they were in fact being used as forced labour.
“It’s the victors who write history. For a lot of Germans who went home this was something they chose to forget,” says Cummings. “Many camps were built over as housing estates.”
Cummings’s exhibition opens at Lydiard Park, a stately home near Swindon, where the gardens and parkland were turned into a dozen hospital prison camps between 1944 and 1948. It had previously been a hospital for US servicemen, but its role as a prison camp was little known until the daughter of one of its interpreters donated a collection of documents. More than 200 PoWs at a time and from all theatres of conflict were treated for shrapnel wounds or burns following air accidents, often by fellow prisoners doubling up as medical staff.
U-boat crews were among the first PoWs, but significant numbers were held in Britain only after the threat of invasion had faded and following the capture of thousands of Italian soldiers in the deserts of North Africa.
“At least 60 per cent of these men are skilled agricultural workers, used to toiling in fields from dawn to dusk on a diet of bread and a few olives,” wrote one correspondent to The Times. “To make no use of them, when our land is crying out for labour, and the Government is threatening to call up even more of our farm workers to the Armed Forces, is surely a tragic waste of good material.”
Many Italians volunteered as “co-operators” after Italy surrendered in 1943 and became a regular sight cycling from their hostels to the fields and working alongside schoolboys and Land Girls. They were gradually granted permission to visit cinemas and spend their token pay.
By 1943, 35,000 of the 80,000 Italian PoWs in Britain were involved in agriculture, growing vegetables, harvesting crops and cutting firewood, as well as military labour and maintenance of roads and railways. “Many people they came into contact with remember their love of opera,” says Cummings. “They excelled at gardening and left behind beautiful gardens and huts converted into ornate chapels.”
Farmers were more reluctant to use German prisoners but 70,000 were at work by March 1945. After the war 400,000 Germans became an essential source of labour and rules were relaxed allowing families to invite prisoners to tea or to celebrate Christmas. One German soldier said in 1947: “‘My’ family gave me a written invitation, money for the fare, they told me it wouldn’t be Christmas for them if I wouldn’t be with them. . . And all the people on the trains? No sign of hatred, of curiosity, only smiles, politeness, hearty conversation, a lot of small gifts — I cannot express my feelings about this.”
Although high-value Nazi prisoners taken to the London Cage in Kensington Palace Gardens for interrogation were often harshly treated, Cummings says that lower-level prisoners at camps such as Lydiard Park were generally looked after, in accordance with the Geneva Convention.
They received rations similar to a British soldier’s; a typical camp menu comprised a breakfast of bread, margarine and tea, lunch of pork and potatoes and a supper of milk and soup. A small shop at Lydiard Park sold matches, writing paper, soap and sweets. “The emphasis was on democratic re-education and there were classes to learn trades and languages.” Libraries and sporting clubs flourished. Bert Trautmann, the former paratrooper who after the war became Manchester City’s hugely popular goalkeeper despite initial protest, was perhaps the most famous prisoner to enjoy football matches with locals in Cheshire. Like him, more than 20,000 Germans, some of whom had married local women, applied to stay on as civilian workers.
At Lydiard Park, the camp’s senior officer wrote fondly of Werner Wachsmuth, a surgeon to the German Army and chief medical officer to prisoners: “I will miss him as a very able surgeon, a very loyal and able administrator and as a friend. I shall hope to see him again under happier conditions.”
Henry Ford and $5 a Day Wage
A century ago, the Ford Motor founder shocked the world of business by doubling wages to $5 a day. No altruist, he was playing a long game—one today’s short-sighted CEOs can’t fathom.
On January 5, 1914, the business world witnessed a revolutionary, shocking act. Henry Ford, founder, chief executive officer, and dictator of the Ford Motor Company, unilaterally raised—doubled!—the wages of thousands of production workers to $5 per eight-hour day, from about $2.38. Ford’s company was the unorthodox leader of what was rapidly growing into an iconic American industry. And Henry Ford was already regarded as an eccentric, an outsider, somewhat strange. “Crazy Henry,” the neighbors dubbed him when he tinkered in his garage. As Ford grew into his role, he did little to disabuse the public of that sobriquet, chartering a ship to Europe and sailing with pacifists in a vain effort to end World War I; backing an anti-Semitic newspaper, the Dearborn Independent; and taking a paternalistic interest in the social lives of his employees.
But the pay hike was something else. By paying a wage that was significantly above what the market required, Ford was betraying his fellow business owners and putting the whole of American enterprise in jeopardy. The Wall Street Journal editorial page, then, as now, a hotbed of revanchism, sniffed: “To double the minimum wage, without regard to length of service, is to apply Biblical or spiritual principles where they do not belong.” Ford may have been seeking a place in heaven, the Journal warned, but this action would more likely consign him to hell. Ford has “in his social endeavor committed blunders, if not crimes. They may return to plague him and the industry he represents, as well as organized society.”
Of course, Ford was motivated more by self-interest than by altruism. Turnover was huge in the growing auto industry, as workers hopped from factory to factory in search of better wages. The nation was in the midst of a rising wave of labor activism that frequently turned to violence. International networks of communists, socialists, and various other types of radical syndicalists were organized and active in America’s largest cities—and occasionally tossed bombs at business owners. Raising wages proactively was clearly a way to buy some short-term labor peace.
But Ford was playing a deeper, longer game. The Ford Motor Company was in the business of building an expensive durable good. The first cars he had built in number, the 1903 Model N, cost about $3,000, and so were accessible only to that era’s one percent. Henry Ford recognized that the automobile would be more successful as a volume business than as a niche product. “I would build a motorcar for the great multitudes,” he proclaimed. Through relentless innovation, vertical integration, and the obsessive development of an assembly line, Ford had already managed to bring the cost of the Model T, the first democratic car, down to about $500. And the company was moving about 250,000 cars a year. But per capita income was only $354 in 1913. The U.S. didn’t have a developed consumer credit industry. People paid for things with the wages they earned and their savings.
So this was Ford’s theory: Companies had an interest in ensuring that their employees could afford the products they produced. Put another way, employers had a role to play in boosting consumption. While paying higher wages than you absolutely needed to might lower profits temporarily, it would lead to a more sustainable business and economy over time. If the motorcar was going to be a mass-produced product for typical Americans, not a plaything for the rich, Ford would strive to pay his workers enough so they could afford the products they worked on all day.
Ford, of course, was right. And the rest is industrial history. The $5 day didn’t kill Ford, or American capitalism, as many capitalists had warned. By 1916, profits doubled and sales continued to boom. “The payment of $5 a day for an eight-hour day was one of the finest cost-cutting moves we ever made,” he said. By 1921, Ford had half the U.S. car market and, thanks to falling costs and rising wages, the price of a Model T stood at about half the level of per-capita income. Ford pioneered a massive new industry whose wages set the tone for the country and turned Detroit into a high-wage metropolis.
More significantly, Ford’s heretical proclamation was the first in a series of acts of individual and collective action, from the private and public sectors, that improved the lot of the typical worker in America and helped pave the way for the broad-based prosperity of the second half of the 20th century. In the 1920s, acting out of a mixture of paternalism and self-interest, many large companies started pensions, began offering health-care benefits and clinics, and boosted wages. Government did its part, too. Legislation passed in the 1930s empowered unions and enforced a federal minimum wage. In the 1940s and 1950s, an entente between big labor and big government—the five-year, no strike, annual-raise, expanded-pension deal between General Motors and the United Auto Workers in 1950 became known as the Treaty of Detroit—led to further increases in living standards.
Of course, corporate America has largely forgotten Ford’s insight. Worse, it has made a concerted effort in recent years to drive down wages.
Were Henry Ford to come back to earth today, 100 years after he promulgated the $5 day, he’d be shocked and dismayed.
Until 1975, wages generally amounted to 50 percent of gross domestic product. Labor essentially held its own through the 1980s and 1990s; in 2001, wages still constituted 49 percent of GDP. But by 2012 they were 43.5 percent—a modern-day low. Median household income fell in both 2010 and 2011, and the median income for working-age households fell 12.4 percent from 2000 to 2011, to $55,640, a period in which the U.S. economy grew by 18 percent. The chart of wages as a percentage of GDP and corporate profits over the last several years looks like the gaping maw of a hippopotamus.
There’s something deep in our contemporary corporate and political culture, in the public and private sectors, that supports the proposition that employers should pay as little as possible at all times, at every point in the economic cycle. In the aggregate, U.S. corporations are in a better position to pay higher wages than they have been at any time in recent history. But bosses have been choosing not to raise wages even when they can. “I always try to communicate to our people that we can never make enough money,” as Caterpillar CEO Doug Oberhelman put it. “We can never make enough profit.” Caterpillar notched record profits in 2012 and then in early 2013 bludgeoned its unions into accepting a six-year wage freeze.
Henry Ford was one of the largest employers of his day, and he stood foursquare for higher wages, paying more than the market forced him to. He established new benchmarks and standards. But today’s CEOs think nothing of paying non-subsistence wages and are shocked when anyone has the temerity to push back.
Here’s the thing. The human cost of low wages is obvious and has been extensively documented. But there are important economic and business costs to low wages that are far less clear and little understood. Many of America’s largest employers pay as little as possible, driving down the consuming power of their workers, and then wonder why their customers are unable to spend.
Choosing A CEO
Hiring a chief executive who has done it before may seem the safe choice but it is unlikely to be the best one, according to research published in the management journal MIT Sloan Management Review.
Chief executives who move from one top job to another show a three-year return on assets that is 48% lower on average than that achieved by first-time chief executives, said Burak Koyuncu, an assistant professor at Neoma Business School in France, who co-wrote the report.
“At first [co-author Monika Hamori and I] thought the experienced chief executives might not be doing as well because they were being appointed to companies that were struggling . . . but we checked, and that was not related to it at all.”
Instead, they have attributed the gap in performance to the “experience trap” — the tendency for those who have been successful with an approach in the past to use it again. “People who have been a chief executive before bring their luggage from their old job and then try to act on their previous experience. But what worked in the past will not always work in the new context,” said Koyuncu. “Even if it is the same industry, the culture and everything else will be different.”
In fact, Koyuncu’s study of the S&P 500 index in the US found that experienced chief executives who moved to very different companies did not fall into the trap to the same extent as those who joined similar businesses, possibly because they were less inclined to rely on old methods. This suggests that companies should consider recruiting from outside their industry, he said.
Another way to minimise the effect is to give the new chief executives time to unlearn old habits. The study found that those who spent a year in an interim role before taking up the reins did better than those who stepped straight across. “When hiring, try to find a way to make sure people get to understand your company before they become chief executive. You could put them in an interim position such as chief operating officer, or bring them on the board a year before. That will give them the chance to see the environment without having the power to change anything.”
The simplest approach, however, is for recruiters and employers to stop relying on past performance. Koyuncu is not optimistic that this will change much: “One of the interesting things about executive search firms and compensation consultants is that they usually play it safe.” Choosing candidates with a track record makes it easy to justify their decision — and to explain away any subsequent lack of success. “If things go wrong, they will tell the company that the board wasn’t letting the chief executive perform or that the management team was not as supportive as it should have been — there’s always an excuse.”
Patrick Tame, chief executive of the search firm Beringer Tame, highlighted another problem. “There is generally a risk-averse culture when it comes to hiring at any level, but this is amplified when it comes to the chief executive,” he said. “The chief executive can make or break a company’s fortunes, so risk aversion is prevalent.”
Companies that are more open-minded tend to have strong boards with a high degree of confidence in the rest of the management team, he said. “Privately owned businesses are far more likely to make bold choices than others, which is why they are also more likely to achieve rapid growth. With listed companies, the focus is more on not upsetting the share price.”
This generally means going for someone who is a known quantity, said Koyuncu. “I feel companies care more about the stock price than financial performance, and in that case maybe bringing in someone with prior chief executive experience would be better, for a few months anyway. But sustaining that stock market reaction in the long term is not easy.”
Another challenge is that not relying on past performance means spending much more time on interviews and other assessment methods. Not all candidates for a chief executive role are happy to go through such a rigorous process, said Tame, who recently conducted a search involving many tests. “A lot of people did not even start the process because they did not want to be tested. They felt they should be appointed on their record.”
On the other hand, a recent survey by the search firm Egon Zehnder suggested that many senior executives acknowledge there is more to potential than their history. A poll by the firm found that a “huge majority” of the 800 it questioned felt that past performance was not the most important predictor of future success, said Andrew Roscoe, the firm’s UK managing partner. “It seems to be that everyone is thinking that jobs are getting more complex, the world is changing faster and people need to be able to move quickly to keep up,” he said.
Finding other ways of working out which candidates will succeed is, of course, much trickier. Roscoe and his team now use a four-element model that tests qualities such as curiosity and insight to assess candidates’ potential. It may sound unusual to assess experienced chief executives in such a way but it does make sense, he said. “Proven chief executives come to the table with more experience and more to talk about, but that does not mean they are the best people to lead a company through a changing future.”
Koyuncu’s findings do not mean that a proven chief executive is automatically the worst person for a job at the top.
“This is just a statistical average, so there will be plenty of successful counterexamples,” he said.
The Case For Higher Minimum Wage
Nearly five years into the recovery from the Great Recession, the American economy remains fundamentally broken. Inequality is getting worse: Ninety-five percent of income gains since 2009 have gone to the top one percent of earners. Private employers have added more than 8 million jobs, but nearly two-thirds are low-wage positions. The American worker's share of the national income is as low as it's been in the six decades since World War II. But even as most Americans struggle just to tread water, corporate profits have soared to record highs.
Worse: The bottom rung of the economy is growing crowded; 3.8 million Americans – the equivalent population of the city of Los Angeles – now labor at or below the minimum wage. And that wage itself has lost more than 12 percent of its value since it was last hiked to $7.25 in 2007, due to inflation. In a more prosperous era, the stereotype of a minimum-wage worker was a teenager flipping burgers, earning a little beer money on the side. But in the new American economy, dominated by low-wage service jobs, fewer than one in four minimum-wage workers are teens. More than half are 25 or older. "The demographics have shifted," says Rep. George Miller, ranking Democrat on the House labor committee. "These are now important wage earners in their families."
As a matter of public policy, the solution is obvious. There are few government interventions that can match the elegance of a higher minimum wage. It boosts the fortunes of the working poor and the economy at large, with minimal trade-offs. Raising the minimum wage does little or nothing to dampen job growth. The Congressional Budget Office estimates that an increase to $10.10 would trim payrolls by less than one-third of one percent, even as it lifts nearly 1 million Americans out of poverty.
Outside of Washington, D.C., raising the minimum wage is not a partisan issue. Supported by more than 70 percent of Americans, the policy achieves both liberal and conservative goals: It alleviates poverty even as it underscores the value of hard work. It reduces corporate welfare even as it lessens dependence on the social safety net. Today, taxpayers are shelling out nearly $250 billion a year on welfare programs for the working poor. Nearly 40 percent of food stamps are paid out to households with at least one wage earner.
And yet, the Republican Party is going all out to portray a mandatory pay hike as just more job-killing nanny-state overreach. "You've gotta totally wipe out this notion of fairness," said Rush Limbaugh. "That's not what a job is. It isn't charity."
The GOP's mysterious determination to wrong-foot itself with the American electorate on the minimum wage is handing the Democratic Party a potent political weapon – one that could make the difference in holding the Senate in November.
For Democrats, the politics of a higher minimum wage are as solid as the economics. The issue unites progressives and independents even as it drives a wedge between mainstream Republicans and Tea Party extremists. In his January State of the Union address, President Obama threw down the gauntlet, calling on the GOP to join Democrats in increasing the minimum wage to $10.10 an hour. "Say yes," Obama said. "Give America a raise."
The current federal minimum wage is $7.25 an hour. That represents a pay cut, in real terms, of more than 30 percent from 1968's bottom wage. That decline in the value of the minimum wage has been a key driver of income inequality. "And unlike inequality that's been brought about by technological change or globalization," says Arindrajit Dube, labor economist at the University of Massachusetts Amherst, "we could have prevented it just by pegging the minimum wage to the cost of living."
There is no natural level for the minimum wage. Where it is set is purely a policy decision. In previous decades, the minimum wage kept pace with advances in productivity; as workers created more value for a company, they gained, too. Had the minimum wage tracked productivity gains since 1968, it would now stand above $20 an hour. More telling: Had workers on the lowest rung kept pace with the gains that have accrued to the one percent, it would have vaulted past $30 in 2007.
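The arithmetic behind those counterfactuals is simple indexing: take the 1968 minimum wage (about $1.60 an hour) and grow it by whichever benchmark you choose. A minimal Python sketch follows; the cumulative growth multipliers are rough placeholders standing in for the official CPI and productivity series, not measured data.

```python
# Minimal sketch of wage indexing. The growth multipliers below are rough
# placeholders standing in for official CPI/productivity series, not data.
def indexed_wage(base_wage, cumulative_growth):
    """Return what the wage would be had it tracked the chosen benchmark."""
    return base_wage * cumulative_growth

BASE_1968 = 1.60  # the federal minimum wage in 1968, in nominal dollars

# Hypothetical cumulative multipliers from 1968 to the early 2010s:
benchmarks = {
    "cost of living (CPI)": 6.6,    # placeholder factor
    "average productivity": 13.0,   # placeholder factor
}

for name, growth in benchmarks.items():
    print(f"Pegged to {name}: ${indexed_wage(BASE_1968, growth):.2f}/hour")
# Pegged to cost of living (CPI): $10.56/hour
# Pegged to average productivity: $20.80/hour
```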
But there are other more wide-reaching effects of setting the minimum wage below what it takes to scrape by. A family of four trying to live on the earnings of a minimum-wage worker – $15,080 a year – falls more than $8,000 below the poverty line. As a result, today's minimum-wage workers are really expensive for the rest of us. They have to rely on taxpayers to supplement their subpoverty wages.
Essentially, a low minimum wage adds up to a massive stealth subsidy for corporate America. A recent University of California, Berkeley study reveals that the nation's largest fast-food chains earn $7 billion a year in inflated profits because the rest of us pick up the tab for the food stamps, housing vouchers, tax credits and Medicaid benefits that the businesses refuse to cover by paying adequate wages. Big-box stores get an even sweeter deal. A federal analysis of a Walmart Supercenter in Wisconsin found that safety-net subsidies ran approximately $5,500 per low-wage associate. If that's representative, every Supercenter in America is enjoying a rolling bailout of nearly $1 million a year.
Taxpayer subsidies to the working poor make welfare queens of some of the world's most profitable corporations. "The large restaurant chains, the Walmarts – they hold themselves up as captains of the free-enterprise system," Rep. Miller says, "but their whole business plan is dependent on using the social safety net."
One of the most expensive programs that taxpayers fund is the Earned Income Tax Credit – which doles out $60 billion in welfare payments to poor working parents every year at tax time. The EITC lifts millions out of poverty. But thanks to the inadequacy of the minimum wage, it also creates a perverse incentive. The EITC subsidizes poverty-wage work, so businesses can – and do – drive wages even further below the poverty line.
More than one-third of the EITC is pocketed by employers through artificially low labor costs, according to a Princeton economic analysis. Worse: The EITC actually hurts many single workers without kids, who don't qualify for the subsidy and are made strictly worse off by its existence.
A rising minimum wage is a tide that lifts all ships. This is common sense: If a shift worker gets a raise and is now making what the line manager earns, the line manager is also going to get a bump in pay. Raise the minimum wage, and the bottom 20 percent of wage earners soon enjoy larger paychecks, says Dube of UMass.
A $10.10 minimum wage would boost the incomes of 27.8 million workers, according to an analysis by the Economic Policy Institute. Far from the image of a teen flipping burgers at Jack in the Box, the median worker who would benefit is a full-time working woman in her thirties, responsible for half of her family's income.
Because these workers spend all the money they make, the $35 billion in extra wages they would earn as $10.10 is phased in would get pumped right back into the U.S. economy – doing far more to stimulate growth than if the same dollars were bloating some billionaire's bank account.
At $10.10, a full-time worker would earn $21,000 a year. If not a living wage, that's at least enough to pull a family of three above the poverty line. According to Dube's math, this boost in wages would drive a 10 percent reduction in poverty – pushing the official poverty rate back down to where it was before the collapse of Bear Stearns.
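The annual figures above follow from straightforward arithmetic on a full-time schedule of roughly 40 hours a week, 52 weeks a year. The short sketch below reproduces that math; the poverty thresholds are assumed, approximate 2013-era values used only for illustration.

```python
# The hourly-to-annual arithmetic behind the figures above. The poverty
# thresholds are assumed, approximate values for illustration, not official data.
HOURS_PER_WEEK = 40
WEEKS_PER_YEAR = 52

def annual_pay(hourly_wage):
    """Full-time, full-year earnings at a given hourly wage."""
    return hourly_wage * HOURS_PER_WEEK * WEEKS_PER_YEAR

POVERTY_LINE_FAMILY_OF_FOUR = 23_550    # assumed ~2013 threshold, illustration only
POVERTY_LINE_FAMILY_OF_THREE = 19_530   # assumed ~2013 threshold, illustration only

for wage in (7.25, 10.10):
    pay = annual_pay(wage)
    print(f"${wage:.2f}/hour -> ${pay:,.0f} a year "
          f"(family-of-four shortfall: ${POVERTY_LINE_FAMILY_OF_FOUR - pay:,.0f})")
# $7.25/hour -> $15,080 a year (family-of-four shortfall: $8,470)
# $10.10/hour -> $21,008 a year (family-of-four shortfall: $2,542)

print(f"$10.10 full time clears the assumed family-of-three line: "
      f"${annual_pay(10.10):,.0f} vs ${POVERTY_LINE_FAMILY_OF_THREE:,}")
```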
Skeptics of minimum-wage hikes have long argued that increased wages cost jobs for those who need them the most. As House Speaker John Boehner put it, "When you raise the price of employment, guess what happens? You get less of it." His line is echoed by many of the party's potential 2016 presidential contenders. Texas Sen. Ted Cruz calls it "wrongheaded"; Kentucky Sen. Rand Paul claims it will hurt the "least-skilled people in our society"; and Florida Sen. Marco Rubio declares that "raising the minimum wage does not grow the middle class."
For other Republicans, blocking an increase in the minimum wage isn't radical enough; they argue America must repeal the wage floor altogether. Texas Rep. Joe Barton recently declared that the minimum wage has "outlived its usefulness." In a sign of how far the national GOP has tilted to the extreme right, such outré notions are now being advanced by senators long regarded as moderates. Last June, Lamar Alexander – the GOP's ranking member on the Senate labor committee – announced, "I don't believe" in the minimum wage, insisting that employers should be able to get away with paying $2 an hour.
Such arguments may have intuitive appeal, but in recent years the conventional wisdom has been upended. The minimum wage is among the most exhaustively researched subjects in economics. Social scientists have scrutinized bordering counties that run along state lines – think Washington and Idaho – measuring what happens when one state boosts its minimum wage and the other doesn't. The results are in: A 2013 meta-analysis of minimum-wage studies by the Center for Economic and Policy Research concludes that higher minimum wages "have no discernible effect on employment." To the degree that mainstream economists still debate the topic, says Dube, it's whether the jobs impact is "fairly small or something close to zero."
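The border-county studies rest on a simple difference-in-differences comparison: track employment in a county whose state raised its wage and in the adjacent county across the state line, then subtract one change from the other. A toy Python sketch with invented numbers shows the logic; it is not the studies' actual data or code.

```python
# Toy illustration of the border-county comparison the studies use: a simple
# difference-in-differences on made-up employment counts (not real data).
def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Change in the wage-raising county minus the change in its neighbour."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical low-wage employment counts in two adjacent counties on a state line:
effect = diff_in_diff(
    treated_before=1_000, treated_after=1_005,   # county whose state raised the wage
    control_before=1_000, control_after=1_002,   # neighbouring county, no change
)
print(f"Estimated employment effect: {effect:+d} jobs")  # +3 jobs: close to zero
```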
Minimum-wage foes – prominently Tyler Cowen, a free-market economist who directs the Koch-funded Mercatus Center – like to point to a controversial 2009 study by University of California, Irvine economist David Neumark, which argues that high minimum wages are disadvantageous to teen job seekers. Yet even Neumark himself does not oppose minimum-wage hikes. "It doesn't mean we shouldn't do it," he said, announcing his study. "If 10 workers lost their jobs but 1,000 families were lifted out of poverty, we'd probably say that was a pretty good trade-off."
The trade-offs of a $10.10 minimum wage came into sharp relief in February, when the CBO projected that such an increase could reduce payrolls by 0.3 percent, or 500,000 jobs. If accurate, that jobs number is nothing to scoff at. But for a sense of perspective, consider that the CBO also estimated that the GOP-led sequester killed 750,000 jobs last year, providing zero benefit to the economy. In contrast, the CBO minimum-wage report calculates that for every disadvantaged job seeker, 33 workers would receive a fatter paycheck. Taken as a group, the nation's low-wage workforce would have an extra $31 billion to spend every year, stimulating the economy. "The bottom line from the CBO report," says Larry Katz, a Harvard economist, "is that for the vast majority of Americans, an increase to $10.10 is a big win."
Nearly all minimum-wage jobs – greater than 85 percent – are now found in restaurants, retail, nursing homes and office buildings. Jobs loading up the deep-fat fryer, changing bedpans and mopping floors can't be shipped to Bangladesh or cheaply automated. The dark reality of the American economy today is that globalization has already done a number on us. "These are the jobs that are left, and they're left for a reason," says Dube. "Barring teleportation," he says, laughing, these jobs will have to be filled in America even at higher wages.
Counterintuitively, those higher paychecks can create benefits for the businesses that write them. Better pay leads to quicker hiring, reduced turnover and happier workers. The success of high-wage discount retailers like Costco demonstrates that livable wages and low prices aren't mutually exclusive. But even if every penny of increased labor costs were passed on to shoppers, the results wouldn't give anyone sticker shock. A UC Berkeley study found Walmart could finance a pay hike to $12 an hour for its nearly 1 million low-wage associates by boosting prices just 1.1 percent – at a cost to the average shopper of just $12.49 a year, or the price of a bag of Cat Chow.
Minimum-wage hikes haven't always been held back by partisanship. George W. Bush signed the last increase into law in 2007. But with the national GOP wildly out of step with the American public on this issue, Democrats are pressing their advantage. This is not a new playbook. The minimum wage proved its worth as an off-year wedge issue as recently as 2006, when Claire McCaskill ran blistering ads in her Missouri Senate campaign against Republican Jim Talent, describing him as the kind of politician who "votes 11 times against increasing the minimum wage but takes six congressional pay raises." On Election Day, boosted by unusually high turnout, McCaskill secured a 50,000-vote victory.
Seeking to shore up the most vulnerable incumbents in the Senate, labor activists are now pushing state minimum-wage ballot initiatives in Alaska, Arkansas and South Dakota. Quite apart from the obvious economic benefits, the political goal is to give the party's base voters – who often sit out nonpresidential elections – some skin in the game on Election Day.
Proving that the issue can be used to play offense as well as defense, Kentucky Democrat Alison Grimes has turned the minimum wage into the driving issue of her candidacy against Senate Majority Leader Mitch McConnell. The Republican is facing a well-funded primary challenge from the far right, and has chosen to prove his conservative mettle by denouncing a minimum-wage increase as the "last thing we should do."
Noting that 250,000 Kentucky women would benefit from a raise to $10.10 an hour, Grimes has countered that voting for a minimum-wage increase would be her first priority. In early polling, the untested Democrat has leapt to a four-point advantage over Kentucky's 30-year incumbent.
The political battle lines have been drawn. But is $10.10 really the best that America can do by its poorest workers? The experience of other advanced democracies suggests that the minimum wage could rise far higher still. In Australia, the minimum wage is now greater than US$16 an hour, yet the unemployment rate Down Under – 5.8 percent – is significantly lower than our own.
Nationwide, there is one high-profile campaign to push the minimum wage significantly above $10.10. Ironically, this leadership is coming from the conservative end of the spectrum. Ron Unz, a Republican multimillionaire from Silicon Valley, is advancing a ballot measure to hike California's minimum wage to $12 an hour.
Unz is best known as a foe of illegal immigration, and he says he was initially attracted to the minimum wage as a means to put U.S. citizens back to work in the kinds of jobs Americans supposedly won't do anymore. But Unz has since embraced livable wages on the economic merits alone – arguing that no American should be forced to subsidize the labor costs of profitable corporations. Unz has especially harsh words for those, like Florida's freshman senator, who would increase the Earned Income Tax Credit instead of forcing Walmart to pay honest wages. "Why should all taxpayers pay for massive, hidden government subsidies?" he asks. "But that's what Marco Rubio and fellow Republicans are calling for: an increase in welfare spending!"
In the past, conservative opposition to higher minimum wages was premised on the fear that they would drive an increase in joblessness, creating greater dependency on the welfare state, Unz says. But now that hard economic data prove the opposite case – that higher hourly wages don't kill job growth and simultaneously reduce reliance on Uncle Sugar – Unz believes there's no reason this policy shouldn't unite both bleeding-heart liberals and Mitt Romney conservatives, who fret about the freeloading of the 47 percent of Americans who don't pay income taxes.
"There are a lot of conservative reasons," Unz says, "to increase the minimum wage."
Do Your Staff Hate You?
Hate’s a strong word, perhaps excessively harsh, but for many employees it’s a genuine emotion they feel towards their boss. Going to work is torture, not because they dislike their job or their organisation, but because they detest their manager. And their manager, to make matters worse, is oblivious.
So how can you tell your employees are allergic to your management style?
There are obvious markers such as high staff turnover, absenteeism, conflict and poor performance. If those are widespread within your team, you should be examining the degree to which you’re contributing to the problem. But often that realisation is too little too late. Here, then, are seven subtle signs your employees would rather be managed by anyone other than you.
They don't share information. Your employees’ first priority is protecting themselves. This means they withhold information because they fear you’ll either shoot the messenger or react irrationally. This applies to you if you find yourself thinking or saying at work: “Why didn’t anyone tell me this earlier?”
You don’t receive any compliments. The thing about positive feedback is that it isn’t always just about the manager providing it to employees. The best managers often have grateful workers – without any hint of sucking up – letting them know they’re inspirational and valued.
They’re out the door by 5.00pm. Or whatever time they’re supposed to finish and not a minute later. Ten minutes before the end, they’re already packing up. And, if you look carefully, they’re winding down even when there’s thirty minutes to go.
They avoid eye contact. When having conversations with employees, they’re reluctant to look at you directly. They appear nervous and fidgety. This could be because you’re intimidating, or maybe you’re yet to build a relationship with them, which means there’s still an element of discomfort and mistrust.
You’re not invited to social events. Everyone heads out for a drink after work, or perhaps even just a team lunch, without extending an invitation to you. Why? Because, if you were there, it’d be difficult to talk about you behind your back.
You think you’re flawless. The most talented managers – those with plenty of evidence to back up their smarts – are nonetheless humble enough to know there’s always more to learn. So they frequently seek feedback and devour books and courses in an attempt to constantly improve. If you can’t see gaps in your management repertoire, hubris is probably blocking your vision.
No one’s asked you to be their mentor. Brilliant managers, by default, have people eagerly wanting to learn from them. Their challenge is one of turning mentees away. If your challenge is the opposite, and no one’s really expressed a desire to be mentored by you, it’s a good idea to explore why that’s the case.
Just one of the above signs is not enough to indicate you’re the problem. If several are present, however, chances are you’re the issue.
Anyway, you might be thinking, “so what?”, which is a fair question to ask, albeit an oblique one. Who cares if employees hate you? All that matters is that they get the job done, right? ‘Management is not a popularity contest’ is a refrain heard all the time.
In a sense, that cliché is correct. There’ll be times when you’ll have difficult conversations or you’ll communicate bad news of a type that will inevitably piss people off. And yet there’s a plethora of research showing that employees perform better and are more loyal when they perceive their manager as charismatic, humorous, likeable and emotionally intelligent.
The Future of Work 4
In Coningsby, a Benjamin Disraeli novel published in 1844, a character impressed with the technological spirit of the age remarks, “I see cities peopled with machines. Certainly Manchester is the most wonderful city of modern times.”
Today, of course, Manchester is mainly associated with urban decline. There is a simple economic explanation for this, and one that can help guide cities and nations as they prepare for another technological revolution.
Although new technologies have become available everywhere, only some cities have prospered as a result. As the late economist and historian David Landes famously noted, “Prosperity and success are their own worst enemies.” Prosperous places may indeed become self-satisfied and less interested in progress. But manufacturing cities such as Manchester and Detroit did not decline because of a slowdown in technology adoption. On the contrary, they consistently embraced new technologies and increased the efficiency and output of their industries. Yet they declined. Why?
The reason is that they failed to produce new employment opportunities to replace those that were being eroded by technological change. Instead of taking advantage of technological opportunities to create new occupations and industries, they adopted technologies to increase productivity by automating their factories and displacing labor.
The fate of manufacturing cities such as Manchester and Detroit illustrates an important point: long-run economic growth is not simply about increasing productivity or output—it is about incorporating technologies into new work. Having nearly filed for bankruptcy in 1975, New York City has become a prime case of how to adapt to technological change. Whereas average wages in Detroit were still slightly higher than in New York in 1977, they are now less than 60 percent of New York's. While Detroit successfully adopted computers and industrial robots to substitute for labor, New York adapted by creating new employment opportunities in professional services, computer programming and software engineering.
Long-run economic growth entails the eclipse of mature industries by new ones. My own research with Thor Berger of Lund University suggests that, to stave off stagnation, cities need to manage the transition into new work.
Such technological resilience requires an understanding of the direction of technological change. Unfortunately, economic history does not necessarily provide obvious guidance for policy makers who want to predict how technological progress will reshape labor markets in the future. For example, although the industrial revolution created the modern middle class, the computer revolution has arguably caused its decline.
To understand how technology will alter the nature of work in the years ahead, we need to look at the tasks computers are and will be able to perform. Whereas computerization historically has been confined to routine tasks involving explicit rule-based activities, it is now spreading to domains commonly defined as nonroutine. In particular, sophisticated algorithms are becoming increasingly good at pattern recognition, and are rapidly entering domains long confined to labor. What this means is that a wide range of occupations in transportation and logistics, administration, services and sales will become increasingly vulnerable to automation in the coming decades. Worse, research suggests that the next generation of big data–driven computers will mainly substitute for low-income, low-skill jobs, exacerbating already growing wage inequality.
If jobs for low-skill workers disappear, those workers will need to find jobs that are not susceptible to computerization. Such work will likely require a higher degree of human social intelligence and creativity—domains where labor will hold a comparative advantage, despite the diffusion of big data–driven technologies.
The reason why Bloom Energy, Tesla Motors, eBay and Facebook all recently emerged in (or moved to) Silicon Valley is straightforward: the presence of adaptable skilled workers who are willing to relocate to the companies with the most promising innovations. Importantly, local universities, such as Stanford and U.C. Berkeley, have incubated ideas, educated workers and fostered technological breakthroughs for decades. Since Frederick Terman, the dean of Stanford's School of Engineering, encouraged two of his students, William Hewlett and David Packard, to found Hewlett-Packard in 1938, Stanford alumni have created 39,900 companies and about 5.4 million jobs.
For cities to prosper, they need to promote investment in relevant skills to attract new industries and enable workers to shift into new occupations. Big-data architects, cloud services specialists, iOS developers, digital marketing specialists and data scientists provide examples of occupations that barely existed only five years ago, resulting from recent technological progress. According to our estimates, people working in digital industries are on average much better educated and for any given level of education they are more likely to have a science, technology, engineering or mathematics (STEM) degree. By contrast, workers with professional degrees are seen less often in new industries, reflecting the fact that new work requires adaptable cognitive abilities rather than job-specific skills. It is thus not surprising that we find San Jose, Santa Fe, San Francisco and Washington, D.C., among the places that have most successfully adapted to the digital revolution.
The cities that invest in the creation of an adaptable labor force will remain resilient to technological change. Policies to promote technological resilience thus need to focus on increasing the supply of technically skilled individuals and on encouraging entrepreneurial risk-taking. For example, the National Science Foundation recently provided a grant to North Carolina Central University to integrate entrepreneurship into scientific education. More such initiatives are needed. Furthermore, immigration policies need to be made attractive to high-skill workers and entrepreneurs. Cities like New York and London owe much of their technological dynamism to their ability to attract talent.
Meanwhile, it is important to bear in mind that policies designed to support output created by old work are not a recipe for prosperity. Whereas General Motors has rebounded since its 2009 bailout, Detroit filed for Chapter 9 bankruptcy in 2013. Instead of propping up old industries, officials should focus on managing the transition of the workforce into new work. The places that do so successfully will be at the frontier. As argued by Jane Jacobs in more colorful terms: “Our remote ancestors did not expand their economies much by simply doing more of what they had already been doing…. They expanded their economies by adding new kinds of work. So do we.”
Standup Comedy and Job Interviews
A GSOH isn't just an attractive quality on dating websites. Skills gleaned from the world of stand-up comedy could also be the key to career success. Here's how...
1. Adopt a persona
Paralysed by nerves in presentations? That doesn't mean you can't become brilliant at it. "Many comedians are shy in real life," says Lynne Parker of Funny Women, a company that promotes female comedy. "Like them, conjure up a confident persona to mentally step into when you have to speak in front of an audience. A prop can help, like a particular pair of shoes that you wear to 'become' this character and overcome your fear."
2. Know your stuff
Good stand-ups appear totally spontaneous, but the reality is quite different. Just as comedians spend hours honing their routines, so the harder you work on a pitch, the more relaxed it will sound. "Develop your material from your experience, using stories that relate to the job you're applying for and practise until you know your stuff so well you don't need a script," advises Lynne. "A good comedian does this until it's totally embedded – that way, they can focus on their delivery and on responding to their audience."
3. Tailor your material
A good performer checks out their audience before going on stage and knows what material to use, and when. Before an interview, get the lowdown on who you'll be talking to, via the company's website. Consider what you're saying, how you're saying it and what it will mean to them. Is it all women? Or a youngish crowd? What will they want to know?
4. Connect with your audience
The sooner an audience feels connected with a comedian, the quicker they engage with the material. So, when you're in a meeting, remember people's names and use them. Listen to what they tell you about themselves – rather than thinking about what you can wow them with next – and look for ways to engage them. "Develop a 10-second intro that quickly identifies commonality with your audience," suggests Laura. "For example, 'Having spent five years as a manager…' or 'As a fellow Essex girl…'"
5. Defuse tension
If you feel tense in an interview, humour can help win people over. "Being able to make people laugh is tremendously empowering," says Lynne. "Used appropriately, comedy can overcome obstacles and defuse confrontational or awkward situations." So, if you drop your pen when you sit down for an interview, be yourself – if a suitable joke comes to mind, share it and defuse the tension.
6. Own the room
Comedians often pause before starting their routine and make eye contact with people in the crowd. Why? To increase anticipation for what's to come. Business and coaching psychologist Mike Guttridge says: "Owning the room – or what facilitators call 'managing the space' – is always good. During a presentation, move about, especially if you are feeling nervous, make eye contact with people – and smile!"
Beat Procrastination
Each morning, we get a brief window of time during which we're most mentally capable of getting stuff done, said behavioral scientist Dan Ariely in a recent Ask Me Anything on Reddit. And yet most of us waste that time.
Generally speaking, Ariely said, the two hours after we become fully awake are, potentially, our most productive. But what's the first thing you do when your brain shakes off the fogginess of sleep? I know what I do: I ease into the day by attending to my most mindless tasks first, like replying to emails or playing around on Twitter. Ariely really wishes we would knock this off.
One of the saddest mistakes in time management is the propensity of people to spend the two most productive hours of their day on things that don't require high cognitive capacity (like social media). If we could salvage those precious hours, most of us would be much more successful in accomplishing what we truly want.
One way to fight against this tendency is to decide the night before what you want to accomplish in the morning, so you can jump right into your day. There is a time for mindlessness, but maybe save it for later.
The Third Industrial Revolution
Some economists are offering radical thoughts on the job-destroying power of this new technological wave. Carl Benedikt Frey and Michael Osborne, of Oxford University, recently analysed over 700 different occupations to see how easily they could be computerised, and concluded that 47% of employment in America is at high risk of being automated away over the next decade or two. Messrs Brynjolfsson and McAfee ask whether human workers will be able to upgrade their skills fast enough to justify their continued employment. Other authors think that capitalism itself may be under threat.
The global eclipse of labour
This special report will argue that the digital revolution is opening up a great divide between a skilled and wealthy few and the rest of society. In the past new technologies have usually raised wages by boosting productivity, with the gains being split between skilled and less-skilled workers, and between owners of capital, workers and consumers. Now technology is empowering talented individuals as never before and opening up yawning gaps between the earnings of the skilled and the unskilled, capital-owners and labour. At the same time it is creating a large pool of underemployed labour that is depressing investment.
The effect of technological change on trade is also changing the basis of tried-and-true methods of economic development in poorer economies. More manufacturing work can be automated, and skilled design work accounts for a larger share of the value of trade, leading to what economists call “premature deindustrialisation” in developing countries. No longer can governments count on a growing industrial sector to absorb unskilled labour from rural areas. In both the rich and the emerging world, technology is creating opportunities for those previously held back by financial or geographical constraints, yet new work for those with modest skill levels is scarce compared with the bonanza created by earlier technological revolutions.
All this is sorely testing governments, beset by new demands for intervention, regulation and support. If they get their response right, they will be able to channel technological change in ways that broadly benefit society. If they get it wrong, they could be under attack from both angry underemployed workers and resentful rich taxpayers. That way lies a bitter and more confrontational politics.
Technology isn’t working
IF THERE IS a technological revolution in progress, rich economies could be forgiven for wishing it would go away. Workers in America, Europe and Japan have been through a difficult few decades. In the 1970s the blistering growth after the second world war vanished in both Europe and America. In the early 1990s Japan joined the slump, entering a prolonged period of economic stagnation. Brief spells of faster growth in intervening years quickly petered out. The rich world is still trying to shake off the effects of the 2008 financial crisis. And now the digital economy, far from pushing up wages across the board in response to higher productivity, is keeping them flat for the mass of workers while extravagantly rewarding the most talented ones.
Between 1991 and 2012 the average annual increase in real wages in Britain was 1.5% and in America 1%, according to the Organisation for Economic Co-operation and Development, a club of mostly rich countries. That was less than the rate of economic growth over the period and far less than in earlier decades. Other countries fared even worse. Real wage growth in Germany from 1992 to 2012 was just 0.6%; Italy and Japan saw hardly any increase at all. And, critically, those averages conceal plenty of variation. Real pay for most workers remained flat or even fell, whereas for the highest earners it soared.
It seems difficult to square this unhappy experience with the extraordinary technological progress during that period, but the same thing has happened before. Most economic historians reckon there was very little improvement in living standards in Britain in the century after the first Industrial Revolution. And in the early 20th century, as Victorian inventions such as electric lighting came into their own, productivity growth was every bit as slow as it has been in recent decades.
In July 1987 Robert Solow, an economist who went on to win the Nobel prize for economics just a few months later, wrote a book review for the New York Times. The book in question, “The Myth of the Post-Industrial Economy”, by Stephen Cohen and John Zysman, lamented the shift of the American workforce into the service sector and explored the reasons why American manufacturing seemed to be losing out to competition from abroad. One problem, the authors reckoned, was that America was failing to take full advantage of the magnificent new technologies of the computing age, such as increasingly sophisticated automation and much-improved robots. Mr Solow commented that the authors, “like everyone else, are somewhat embarrassed by the fact that what everyone feels to have been a technological revolution...has been accompanied everywhere...by a slowdown in productivity growth”.
This failure of new technology to boost productivity (apart from a brief period between 1996 and 2004) became known as the Solow paradox. Economists disagree on its causes. Robert Gordon of Northwestern University suggests that recent innovation is simply less impressive than it seems, and certainly not powerful enough to offset the effects of demographic change, inequality and sovereign indebtedness. Progress in ICT, he argues, is less transformative than any of the three major technologies of the second Industrial Revolution (electrification, cars and wireless communications).
Yet the timing does not seem to support Mr Gordon’s argument. The big leap in American economic growth took place between 1939 and 2000, when average output per person grew at 2.7% a year. Both before and after that period the rate was a lot lower: 1.5% from 1891 to 1939 and 0.9% from 2000 to 2013. And the dramatic dip in productivity growth after 2000 seems to have coincided with an apparent acceleration in technological advances as the web and smartphones spread everywhere and machine intelligence and robotics made rapid progress.
Have patience
A second explanation for the Solow paradox, put forward by Erik Brynjolfsson and Andrew McAfee (as well as plenty of techno-optimists in Silicon Valley), is that technological advances increase productivity only after a long lag. The past four decades have been a period of gestation for ICT during which processing power exploded and costs tumbled, setting the stage for a truly transformational phase that is only just beginning (signalling the start of the second half of the chessboard).
That sounds plausible, but for now the productivity statistics do not bear it out. John Fernald, an economist at the Federal Reserve Bank of San Francisco and perhaps the foremost authority on American productivity figures, earlier this year published a study of productivity growth over the past decade. He found that its slowness had nothing to do with the housing boom and bust, the financial crisis or the recession. Instead, it was concentrated in ICT industries and those that use ICT intensively.
That may be the wrong place to look for improvements in productivity. The service sector might be more promising. In higher education, for example, the development of online courses could yield a productivity bonanza, allowing one professor to do the work previously done by legions of lecturers. Once an online course has been developed, it can be offered to unlimited numbers of extra students at little extra cost.
Similar opportunities to make service-sector workers more productive may be found in other fields. For example, new techniques and technologies in medical care appear to be slowing the rise in health-care costs in America. Machine intelligence could aid diagnosis, allowing a given doctor or nurse to diagnose more patients more effectively at lower cost. The use of mobile technology to monitor chronically ill patients at home could also produce huge savings.
Such advances should boost both productivity and pay for those who continue to work in the industries concerned, using the new technologies. At the same time those services should become cheaper for consumers. Health care and education are expensive, in large part, because expansion involves putting up new buildings and filling them with costly employees. Rising productivity in those sectors would probably cut employment.
The world has more than enough labour. Between 1980 and 2010, according to the McKinsey Global Institute, global nonfarm employment rose by about 1.1 billion, of which about 900m was in developing countries. The integration of large emerging markets into the global economy added a large pool of relatively low-skilled labour which many workers in rich countries had to compete with. That meant firms were able to keep workers’ pay low. And low pay has had a surprising knock-on effect: when labour is cheap and plentiful, there seems little point in investing in labour-saving (and productivity-enhancing) technologies. By creating a labour glut, new technologies have trapped rich economies in a cycle of self-limiting productivity growth.
Fear of the job-destroying effects of technology is as old as industrialisation. It is often branded as the lump-of-labour fallacy: the belief that there is only so much work to go round (the lump), so that if machines (or foreigners) do more of it, less is left for others. This is deemed a fallacy because as technology displaces workers from a particular occupation it enriches others, who spend their gains on goods and services that create new employment for the workers whose jobs have been automated away. A critical cog in the re-employment machine, though, is pay. To clear a glutted market, prices must fall, and that applies to labour as much as to wheat or cars.
Where labour is cheap, firms use more of it. Carmakers in Europe and Japan, where it is expensive, use many more industrial robots than their counterparts in emerging countries, though China is beginning to invest heavily in robots as its labour costs rise. In Britain a bout of high inflation caused real wages to tumble between 2007 and 2013. Some economists see this as an explanation for the unusual shape of the country’s recovery, with employment holding up well but productivity and GDP performing abysmally.
Productivity growth has always meant cutting down on labour. In 1900 some 40% of Americans worked in agriculture, and just over 40% of the typical household budget was spent on food. Over the next century automation reduced agricultural employment in most rich countries to below 5%, and food costs dropped steeply. But in those days excess labour was relatively easily reallocated to new sectors, thanks in large part to investment in education. That is becoming more difficult. In America the share of the population with a university degree has been more or less flat since the 1990s. In other rich economies the proportion of young people going into tertiary education has gone up, but few have managed to boost it much beyond the American level.
At the same time technological advances are encroaching on tasks that were previously considered too brainy to be automated, including some legal and accounting work. In those fields people at the top of their profession will in future attract many more clients and higher fees, but white-collar workers with lower qualifications will find themselves displaced and may in turn displace others with even lesser skills.
Lift out of order
A new paper by Peter Cappelli, of the University of Pennsylvania, concludes that in recent years over-education has been a consistent problem in most developed economies, which do not produce enough suitable jobs to absorb the growing number of college-educated workers. Over the next few decades demand in the top layer of the labour market may well centre on individuals with strong abstract-reasoning, creative and interpersonal skills that are beyond most workers, including graduates.
Most rich economies have made a poor job of finding lucrative jobs for workers displaced by technology, and the resulting glut of cheap, underemployed labour has given firms little incentive to make productivity-boosting investments. Until governments solve that problem, the productivity effects of this technological revolution will remain disappointing. The impact on workers, by contrast, is already blindingly clear.
The hole in the middle
JUST ACROSS THE road from Gothenburg’s main railway station, at the foot of a pair of hotels, a line of taxis is waiting to pick up passengers. The drivers, all men, many of them immigrants, chat and lean against their vehicles, mostly Volvos. One of them, an older man with an immaculate cab, ferries your correspondent to Volvo’s headquarters on the other side of the river. Another car is waiting there, a gleaming new model with unusual antennae perched on two corners of its roof. An engineer gets in and drives the car onto a main commuter route. Then he takes his hands off the wheel.
Volvo, like many car manufacturers, is putting a lot of work into automated vehicle technology. Such efforts have been going on for some time and were responsible for the development of power steering, automatic transmissions and cruise control. In the 2000s carmakers added features such as automated parallel parking and smart cruise, which can maintain a steady distance between vehicles. In 2011 Google revealed it was developing fully autonomous cars, using its detailed street maps, an array of laser sensors and smart software. It recently unveiled a new prototype that can be configured to have no driver controls at all, save an on/off button. Traditional car manufacturers are taking things more slowly, but the trend is clear.
In many ways driverless cars would be a great improvement on the driven variety. Motoring accidents remain one of the leading causes of death in many countries. Automated driving promises huge improvements in both fuel efficiency and journey times and will give erstwhile drivers the chance to do other things, or nothing, during their trip.
Yet its effect on the labour market would be problematic. Only ten years ago driving a car was seen as the sort of complex task that was easy for humans but impossible for computers. Driving taxis, delivery vans or lorries has been one of the few occupations in which people without qualifications could earn a decent wage. Driverless vehicles could put an end to such work.
The apocalypse of the horsemen
Before the horseless carriage, drivers presided over horse-drawn vehicles. When cars became cheap enough, the horses and carriages had to go, which eliminated jobs such as breeding and tending horses and making carriages. But cars raised the productivity of the drivers, for whom the shift in technology was what economists call “labour-augmenting”. They were able to serve more customers, faster and over greater distances. The economic gains from the car were broadly shared by workers, consumers and owners of capital. Yet the economy no longer seems to work that way. The big losers have been workers without highly specialised skills.
The squeeze on workers has come from several directions, as the car industry clearly shows. Its territory is increasingly encroached upon by machines, including computers, which are getting cheaper and more versatile all the time. If cars and lorries do not need drivers, then both personal transport and shipping are likely to become more efficient. Motor vehicles can spend more time in use, with less human error, but there will be no human operator to share in the gains.
At the same time labour markets are hollowing out, polarising into high- and low-skill occupations, with very little employment in the middle. The engineers who design and test new vehicles are benefiting from technological change, but they are highly skilled and it takes remarkably few of them to do the job. At Volvo much of the development work is done virtually, from the design of the cars to the layout of the production line. Other workers, like the large numbers of modestly skilled labourers that might once have worked on the factory floor, are being squeezed out of such work and are now having to compete for low-skill and low-wage jobs.
Labour has been on the losing end of technological change for several decades. In 1957 Nicholas Kaldor, a renowned economist, set out six basic facts about economic growth, one of which was that the shares of national income flowing to labour and capital held roughly constant over time. Later research indicated that the respective shares of labour and capital fluctuate, but stability in the long run was seen as a good enough assumption to keep it in growth models and textbooks. Over the past 30 years or so, though, that has become ever harder to maintain as the share of income going to labour has fallen steadily the world over.
Recent work by Loukas Karabarbounis and Brent Neiman, of the University of Chicago, puts the global decline in labour’s share since the early 1980s at roughly five percentage points, to just over half of national income. This seems to hold good within sectors and across many countries, including fast-growing developing economies like China, suggesting that neither trade nor offshoring is primarily responsible. Instead, the two scholars argue, at least half of the global decline in the share of labour is due to the plummeting cost of capital goods, particularly those associated with computing and information technology.
By one reckoning the price of cloud-computing power available through Amazon’s web services has fallen by about 50% every three years since 2006. Google officials have said that the price of the hardware used to build the cloud is falling even faster, with some of the cost savings going to cloud providers’ bottom lines rather than to consumers—for now, at any rate. The falling cost of computing power does not translate directly into substitution of capital for labour, but as the ICT industry has developed software capable of harnessing these technologies, the automation of routine tasks is becoming irresistible.
From the end of the second world war to the mid-1970s productivity in America, measured by output per person, and inflation-adjusted average pay rose more or less in tandem, each roughly doubling over the period. Since then, and despite a slowdown in productivity growth, pay has lagged badly behind productivity. From 2000 to 2011, according to America’s Bureau of Labor Statistics, real output per person rose by nearly 2.5% a year, whereas real pay increased by less than 1% a year.
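To see how quickly those growth rates diverge, a rough compounding calculation helps. The sketch below simply compounds the article's figures over 2000 to 2011; treating "less than 1%" as exactly 1% is an assumption used here as a generous upper bound, not a reported number.

```python
# Rough compounding of the growth rates cited above (2000-2011).
# 2.5% a year is the article's productivity figure; 1.0% a year is an
# assumed ceiling for the "less than 1%" real-pay figure.
years = 11
productivity_index = 1.025 ** years   # output per person
pay_index = 1.010 ** years            # real pay (upper bound)

print(f"Cumulative productivity gain: ~{productivity_index - 1:.0%}")  # about 31%
print(f"Cumulative real-pay gain:     ~{pay_index - 1:.0%}")           # about 12%
```

Even with the generous 1% assumption, cumulative pay gains over the decade come to well under half of the cumulative productivity gains.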
The counterpart to this eclipse of labour is the rise and rise of capital. In a landmark book that became an unlikely bestseller, Thomas Piketty, an economist at the Paris School of Economics and an authority on inequality, argues that economics should once again focus on distribution, as it did in the 19th and early 20th centuries. In those days the level of wealth in rich economies often approached seven times annual national income, so income earned from wealth played an enormous part in the economy and caused social strains that sometimes threatened the capitalist system. In the decades following the first world war old fortunes were wiped out by taxation, inflation and economic collapse, so by 1950 wealth in rich economies had typically fallen to just two or three times the level of annual national income. But since then it has begun to creep up again.
Mr Piketty acknowledges that inequality today is different from what it was 100 years ago. Today’s great fortunes are largely in the hands of the working rich—entrepreneurs who earned billions by coming up with products and services people wanted—rather than the idle gentry of the early industrial era. Yet even if the source of the new wealth is less offensive than that of the old, the eclipse of labour could still become a disruptive social force. Wealth is generally distributed less equally than income; many of those getting an income from work own little or no wealth. And Mr Piketty reckons that as wealth plays a bigger part in an economy, it will tend to become more concentrated.
The decline in the role of wealth in the early part of the 20th century, Mr Piketty observes, coincided with a levelling out of the wealth distribution, as for the first time in modern economic history a broad, property-owning middle class emerged. That middle class has been a stabilising force in politics and society over the past 70 years, he reckons. If it were to disappear, politics could become more contentious again.
Labour in America would have lost out to capital even more dismally except for soaring pay among a small group of high earners, according to a study in 2013 by Michael Elsby, of the University of Edinburgh, Bart Hobijn, of the Federal Reserve Bank of San Francisco, and Aysegul Sahin, of the Federal Reserve Bank of New York. The typical worker has fallen behind even more than a straightforward look at the respective shares of labour and capital suggests.
One explanation for that is the changing nature of many jobs. In recent years economists such as David Autor and Daron Acemoglu of the Massachusetts Institute of Technology have pioneered a new way of looking at work: analysing occupations in terms of the tasks they involve. These can be manual or cognitive, routine or complex. The task content determines how skilled a worker must be to qualify for work in a particular occupation. Mr Autor argues that rapid improvement in ICT has enabled firms to reduce the number of workers engaged in routine tasks, both cognitive and manual, which are comparatively easy to programme and automate.
A manufacturing worker whose job consists of a clear set of steps—say, joining two sheets of metal with a series of welds—is highly vulnerable to being displaced by robots that can do the job faster, more precisely and at lower cost. So, too, is a book-keeper who enters standard data sets and performs simple calculations. Such routine work used to be done by people with mid-level skills for mid-range pay. Over the past generation, however, technology has destroyed large swathes of work in the middle of the skill and wage distribution, in a process economists call labour-force polarisation.
The hole in the middle
As recently as the 1980s demand from employers in rich countries was most buoyant for workers with a college education, less so for those with fewer qualifications and least so for those who had at best attended high school. But from the early 1990s that pattern changed. Demand still grew fastest for skilled workers and more slowly for less-skilled workers, but the share of employment in the middle actually shrank. In the 2000s the change became more pronounced: employment among the least-skilled workers soared whereas the share of jobs held by middle- and high-skill workers declined. Work involving complex but manual tasks, like cleaning offices or driving trucks, became more plentiful. Both in America and in Europe, since 2000 low-skill, low-productivity and low-wage service occupations have gained ground.
Highly skilled work, on the other hand, has become increasingly concentrated in jobs requiring complex cognitive or interpersonal tasks: managing a business, developing a new product or advising patients. As non-routine work has become more prized, supply and demand in the labour market have become increasingly unbalanced. Many cognitively complex jobs are beyond the abilities even of people with reasonable qualifications. The wage premium for college graduates has held steady in recent decades, but that is mainly because of the rising premium earned by holders of advanced degrees. The resulting competition for lower-level work has depressed wage growth, leading to stagnant pay for typical workers.
Technology has created a growing reservoir of less-skilled labour while simultaneously expanding the range of tasks that can be automated. Most workers are therefore being forced into competition both against each other and against machines. No wonder their share of the economic pie has got smaller, in developing economies as well as in the rich world.
Unpaid Interns
In Britain, in 2014, we are compelled to debate whether people should work for free. Unpaid internships have become a pillar of the modern British class system, discriminating on the basis of wealth rather than talent. The system acts as a filter for entire professions, helping to transform them into closed shops for the uber-privileged. Not only are these internships exploitative; they effectively allow the children of the well-to-do to buy up positions in the upper echelons of British society. But, finally, it is possible – just possible – that this key means of rigging Britain in favour of a small elite faces its reckoning. On Tuesday, Labour shadow minister Liam Byrne will return to his old school to set out the case for dealing with this national scandal. Despite some internal resistance, Labour’s leadership are moving towards backing a four-week limit on unpaid internships.
According to the Sutton Trust, more than one in three graduate interns are working for nothing. At any given time, the charity estimates, 21,000 are working unpaid, although a 2010 estimate by the thinktank IPPR put the figure at 100,000. For those unable to rely on the Bank of Mum and Dad, such unabashed exploitation can be completely unaffordable. Unpaid internships are often gateways to professions – like, for example, law, the media, the tragically professionalised political world – and are all too frequently located in London, one of the most expensive cities on Earth. The Sutton Trust estimates that a single person in London will have to cough up £5,556 for the privilege of undertaking an unpaid internship for six months; in Manchester it is not much cheaper, at £4,728.
For a generation facing a worse lot in life than their parents, this is a time of desperation. Hundreds of thousands of young people are out of work; many others have been driven into insecure or zero-hour employment; and around half of recent graduates are trapped in non-graduate work. Such desperation is lucrative for many employers. They know that those with the means will do whatever they can to get their foot in a door which has been slammed in the faces of so many others. After all, more than half of employers surveyed refuse to give jobs to graduates with no prior work experience.
The public has little doubt that unpaid internships are a wealth bar. According to polling by the Social Mobility and Child Poverty Commission, 74% of Britons believe that a young person in their family could not afford to take up an unpaid internship. Yes, there are many reasons why the apex of society is such a stitch-up for the pampered and privileged, but the internship filter is certainly one of them. More than half of the top 100 media professionals attended a fee-paying school, even though just 7% of Britons overall did; and 43% of newspaper columnists were educated in the private sector. This is not just an unjust waste of talent, leaving aspiring journalists from more humble backgrounds unable to pursue their dream. It helps to ensure that the media reflects the opinions, prejudices and priorities of a gilded elite.
Many unpaid interns wish to remain anonymous out of a fear of damaging their careers, but their experiences are telling. Take one woman who won a month-long internship with a leading Sunday newspaper. “Because the internship was unpaid and I’m from Leicester, not Chelsea, I could only afford to stay for one week and got very little out of it,” she says. She now works in press management. Freddie Foot from Bristol recently graduated with an international development degree. “The current climate seems to imply that to get your foot in the door you have to do one of these internships,” he says. “The issue is that unless your parents live in London – where most of these jobs are – or you can take three months off unpaid, it is basically an impossibility.”
When Matthew Cole moved to London, he lived in a “makeshift DIY bedroom partition in a lounge” in a building that should have been condemned, and worked paid jobs to try to support his unpaid labour. “However, when you are exhausted by the work you do to pay the rent and eat, it’s very hard to find the energy or time to work for free on anything, internship or otherwise.”
Apologists for unpaid internships – proof that you can find people who will defend almost anything – sometimes mount the following defence: if the non-privileged are real go-getters, they will spend their every remaining hour slogging away in bar jobs to support themselves. What a society they condone, where those without money must work themselves half to death in order to even be considered for a job in a top profession.
These unpaid internships should be illegal – and by that, I mean under existing law. As Intern Aware, a group that has done more than anybody to fight this national scourge, point out, under employment law if you “work set hours, do set tasks and contribute value to an organisation” you are a worker and are entitled to a minimum wage. And yet a YouGov survey found that more than eight out of 10 businesses that used unpaid interns admitted their interns undertook useful tasks.
HMRC, the department responsible for enforcing the law, has been “totally ineffective”, says Intern Aware’s Ben Lyons. So the group took matters into its own hands, encouraging former unpaid interns to take their employers to court to recoup wages they should have been paid. Ex-interns from Harrods, Sony and a leading London tourist attraction are among those who were successful. Such cases serve as useful warnings, but they are no solution. “If the primary reason you’re doing an internship is to get a reference or get a new job, you won’t do that,” says Lyons. “There’s no real way under the existing law that the vast majority of interns will come forward.”
Change may now be afoot, however. As well as a hardening of the Labour line on internships, this debate is coming to the House of Lords – with some cross-party support for reform. There are other battles that must be fought: expensive post-graduate qualifications are now often a must for entry into many professions but are too costly for many; there’s a need for scholarships to support those from underrepresented backgrounds; and we have to tackle the social and economic inequality that lies at the root of the gap in educational attainment.
Yet a curbing of unpaid internships would be a real blow to Britain’s entrenched class system. What an opportunity: it must not be missed.
Why Is Everyone So Busy?
THE predictions sounded like promises: in the future, working hours would be short and vacations long. “Our grandchildren”, reckoned John Maynard Keynes in 1930, would work around “three hours a day”—and probably only by choice. Economic progress and technological advances had already shrunk working hours considerably by his day, and there was no reason to believe this trend would not continue. Whizzy cars and ever more time-saving tools and appliances guaranteed more speed and less drudgery in all parts of life. Social psychologists began to fret: whatever would people do with all their free time?
This has not turned out to be one of the world’s more pressing problems. Everybody, everywhere seems to be busy. In the corporate world, a “perennial time-scarcity problem” afflicts executives all over the globe, and the matter has only grown more acute in recent years, say analysts at McKinsey, a consultancy firm. These feelings are especially profound among working parents. As for all those time-saving gizmos, many people grumble that these bits of wizardry chew up far too much of their days, whether they are mouldering in traffic, navigating robotic voice-messaging systems or scything away at e-mail—sometimes all at once.
Why do people feel so rushed? Part of this is a perception problem. On average, people in rich countries have more leisure time than they used to. This is particularly true in Europe, but even in America leisure time has been inching up since 1965, when formal national time-use surveys began. American men toil for pay nearly 12 hours less per week, on average, than they did 40 years ago—a fall that includes all work-related activities, such as commuting and water-cooler breaks. Women’s paid work has risen a lot over this period, but their time in unpaid work, like cooking and cleaning, has fallen even more dramatically, thanks in part to dishwashers, washing machines, microwaves and other modern conveniences, and also to the fact that men shift themselves a little more around the house than they used to.
The problem, then, is less how much time people have than how they see it. Ever since a clock was first used to synchronise labour in the 18th century, time has been understood in relation to money. Once hours are financially quantified, people worry more about wasting, saving or using them profitably. When economies grow and incomes rise, everyone’s time becomes more valuable. And the more valuable something becomes, the scarcer it seems.
Individualistic cultures, which emphasise achievement over affiliation, help cultivate this time-is-money mindset. This creates an urgency to make every moment count, notes Harry Triandis, a social psychologist at the University of Illinois. Larger, wealthy cities, with their higher wage rates and soaring costs of living, raise the value of people’s time further still. New Yorkers are thriftier with their minutes - and more harried - than residents of Nairobi. London’s pedestrians are swifter than those in Lima. The tempo of life in rich countries is faster than that of poor countries. A fast pace leaves most people feeling rushed. “Our sense of time”, observed William James in his 1890 masterwork, “The Principles of Psychology”, “seems subject to the law of contrast.”
When people see their time in terms of money, they often grow stingy with the former to maximise the latter. Workers who are paid by the hour volunteer less of their time and tend to feel more antsy when they are not working. In an experiment carried out by Sanford DeVoe and Julian House at the University of Toronto, two different groups of people were asked to listen to the same passage of music—the first 86 seconds of “The Flower Duet” from the opera “Lakmé”. Before the song, one group was asked to gauge their hourly wage. The participants who made this calculation ended up feeling less happy and more impatient while the music was playing. “They wanted to get to the end of the experiment to do something that was more profitable,” Mr DeVoe explains.
The relationship between time, money and anxiety is something Gary S. Becker noticed in America’s post-war boom years. Though economic progress and higher wages had raised everyone’s standard of living, the hours of “free” time Americans had been promised had come to nought. “If anything, time is used more carefully today than a century ago,” he noted in 1965. He found that when people are paid more to work, they tend to work longer hours, because working becomes a more profitable use of time. So the rising value of work time puts pressure on all time. Leisure time starts to seem more stressful, as people feel compelled to use it wisely or not at all.
The harried leisure class
That economic prosperity would create feelings of time poverty looked a little odd in the 1960s, given all those new time-saving blenders and lawnmowers. But there is a distinct correlation between privilege and pressure. In part, this is a conundrum of wealth: though people may be earning more money to spend, they are not simultaneously earning more time to spend it in. This makes time - that frustratingly finite, unrenewable resource - feel more precious.
Daniel Hamermesh of the University of Texas at Austin calls this a “yuppie kvetch”. In an analysis of international time-stress data, with Jungmin Lee, now of Sogang University in Seoul, he found that complaints about insufficient time come disproportionately from well-off families. Even after holding constant the hours spent working at jobs or at home, those with bigger paychecks still felt more anxiety about their time. “The more cash-rich working Americans are, the more time-poor they feel,” reported Gallup, a polling company, in 2011. Few spared a moment to feel much sympathy.
So being busy can make you rich, but being rich makes you feel busier still. Staffan Linder, a Swedish economist, diagnosed this problem in 1970. Like Becker, he saw that heady increases in the productivity of work-time compelled people to maximise the utility of their leisure time. The most direct way to do this would be for people to consume more goods within a given unit of time. To indulge in such “simultaneous consumption”, he wrote, a chap “may find himself drinking Brazilian coffee, smoking a Dutch cigar, sipping a French cognac, reading the New York Times, listening to a Brandenburg Concerto and entertaining his Swedish wife — all at the same time, with varying degrees of success.” Leisure time would inevitably feel less leisurely, he surmised, particularly for those who seemed best placed to enjoy it all. The unexpected product of economic progress, according to Linder, was a “harried leisure class”.
The explosion of available goods has only made time feel more crunched, as the struggle to choose what to buy or watch or eat or do raises the opportunity cost of leisure (ie, choosing one thing comes at the expense of choosing another) and contributes to feelings of stress. The endless possibilities afforded by a simple internet connection boggle the mind. When there are so many ways to fill one’s time, it is only natural to crave more of it. And pleasures always feel fleeting. Such things are relative, as Albert Einstein noted: “An hour sitting with a pretty girl on a park bench passes like a minute, but a minute sitting on a hot stove seems like an hour.”
The ability to satisfy desires instantly also breeds impatience, fuelled by a nagging sense that one could be doing so much else. People visit websites less often if they are more than 250 milliseconds slower than a close competitor, according to research from Google. More than a fifth of internet users will abandon an online video if it takes longer than five seconds to load. When experiences can be calculated according to the utility of a millisecond, all seconds are more anxiously judged for their utility.
New technologies such as e-mail and smartphones exacerbate this impatience and anxiety. E-mail etiquette often necessitates a response within 24 hours, with the general understanding that sooner is better. Managing this constant and mounting demand often involves switching tasks or multi-tasking, and the job never quite feels done. “Multi-tasking is what makes us feel pressed for time,” says Elizabeth Dunn, a psychology professor at the University of British Columbia in Vancouver, Canada. “No matter what people are doing, people feel better when they are focused on that activity,” she adds.
Yet the shortage of time is a problem not just of perception, but also of distribution. Shifts in the way people work and live have changed the way leisure time is experienced, and who gets to experience it. For the past 20 years, and bucking previous trends, the workers who are now working the longest hours and juggling the most responsibilities at home also happen to be among the best educated and best paid. The so-called leisure class has never been more harried.
Racing to the top
Writing in 1962, Sebastian de Grazia, a political scientist, cast a withering eye across the great American landscape, dismayed by all the relentless industry and consumption. “If executives are so powerful a force in America, as they indubitably are, why don’t they get more of that free time which everybody else, it seems, holds to be so precious?” Perhaps it is fortunate de Grazia did not live to see the day when executives would no longer break for lunch.
Thirty years ago low-paid, blue-collar workers were more likely to punch in a long day than their professional counterparts. One of the many perks of being a salaried employee was a fairly manageable and predictable work-week, some long lunches and the occasional round of golf. Evenings might be spent curled up with a Sharper Image catalogue by a toasty fire.
But nowadays professionals everywhere are twice as likely to work long hours as their less-educated peers. Few would think of sparing time for nine holes of golf, much less 18. (Golf courses around the world are struggling to revamp the game to make it seem speedy and cool.) And lunches now tend to be efficient affairs, devoured at one’s desk, with an eye on the e-mail inbox. At some point these workers may finally leave the office, but the regular blinking or chirping of their smartphones kindly serves to remind them that their work is never done.
A Harvard Business School survey of 1,000 professionals found that 94% worked at least 50 hours a week, and almost half worked more than 65 hours. Other research shows that the share of college-educated American men regularly working more than 50 hours a week rose from 24% in 1979 to 28% in 2006. According to a recent survey, 60% of those who use smartphones are connected to work for 13.5 hours or more a day. European labour laws rein in overwork, but in Britain four in ten managers, victims of what was once known as “the American disease”, say they put in more than 60 hours a week. It is no longer shameful to be seen swotting.
All this work has left less time for play. Though leisure time has increased overall, a closer look shows that most of the gains took place between the 1960s and the 1980s. Since then economists have noticed a growing “leisure gap”, with the lion’s share of spare time going to people with less education.
In America, for example, men who did not finish high school gained nearly eight hours a week of leisure time between 1985 and 2005. Men with a college degree, however, saw their leisure time drop by six hours during the same period, which means they have even less leisure than they did in 1965, say Mark Aguiar of Princeton University and Erik Hurst of the University of Chicago. The same goes for well-educated American women, who not only have less leisure time than they did in 1965, but also nearly 11 hours less per week than women who did not graduate from high school.
What accounts for this yawning gap between the time-poor haves and the time-rich have-nots? Part of it has to do with structural changes to the labour market. Work opportunities have declined for anyone without a college degree. The availability of manufacturing and other low-skilled jobs has shrunk in the rich world. The jobs that are left tend to be in the service sector. They are often both unsatisfying and poorly paid. So the value of working hours among the under-educated is fairly low by most measures, and the rise in “leisure” time may not be anything to envy.
Yet the leisure-time gap between employees with more and less education is not merely a product of labour-market changes. Less well-educated men also spend less time searching for work, doing odd jobs for money and getting extra training than unemployed educated men, and they do less work around the house and spend less time with their children.
But this does not explain why so many well-educated and better-paid people have less leisure time than they did in the 1960s. Various factors may account for this phenomenon. One is that college-educated workers are more likely to enjoy what they do for a living, and identify closely with their careers, so work long hours willingly. Particularly at the top, a demanding job can be a source of prestige, so the rewards of longer hours go beyond the financial.
Another reason is that all workers today report greater feelings of job insecurity. Slow economic growth and serious disruptions in any number of industries, from media to architecture to advertising, along with increasing income inequality, have created ever more competition for interesting, well-paid jobs. Meanwhile in much of the rich world, the cost of housing and private education has soared. Workers can also expect to live longer, and so need to ensure that their pension pots are stocked with ample cash for retirement. Faced with sharper competition, higher costs and a greater need for savings, even elite professionals are more nervous about their prospects than they used to be. This can keep people working in their offices at all hours, especially in America, where there are few legal limits on the working hours of salaried employees.
This extra time in the office pays off. Because knowledge workers have few metrics for output, the time people spend at their desks is often seen as a sign of productivity and loyalty. So the stooge who is in his office first thing in the morning and last at night is now consistently rewarded with raises and promotions, or saved from budget cuts. Since the late 1990s, this “long-hours premium” has earned overworkers about 6% more per hour than their full-time counterparts, says Kim Weeden at Cornell University. (It also helps reinforce the gender-wage gap, as working mothers are rarely able to put in that kind of time in an office.)
Ultimately, more people at the top are trading leisure for work because the gains of working - and the costs of shirking - are higher than ever before. Revealingly, inequalities in leisure have coincided with other measures of inequality, in wages and consumption, which have been increasing steadily since the 1980s. While the wages of most workers, and particularly uneducated workers, have either remained stagnant or grown slowly, the incomes at the top—and those at the very top most of all—have been rising at a swift rate. This makes leisure time terribly expensive.
So if leisureliness was once a badge of honour among the well-off of the 19th century, in the words of Thorstein Veblen, an American economist at the time, then busyness - and even stressful feelings of time scarcity - has become that badge now. To be pressed for time has become a sign of prosperity, an indicator of social status, and one that most people are inclined to claim. This switch, notes Jonathan Gershuny, the director of Oxford University’s Centre for Time Use Research, is only natural in economies where the most impressive people seem to have the most to do.
The American is always in a hurry
Though professionals everywhere complain about lacking time, the gripes are loudest in America. This makes some sense: American workers toil some of the longest hours in the industrial world. Employers are not required to offer their employees proper holidays, but even when they do, their workers rarely use the lot. The average employee takes only half of what is allotted, and 15% don’t take any holiday at all, according to a survey from Glassdoor, a jobs website. Nowhere is the value of work higher and the value of leisure lower. This is the country that invented take-away coffee, after all.
Some blame America’s puritanical culture. Americans are “always in a hurry,” observed Alexis de Tocqueville more than 150 years ago. But the reality is more complicated. Until the 1970s, American workers put in the same number of hours as the average European, and a bit less than the French. But things changed during the big economic shocks of the 1970s. In Europe labour unions successfully fought for stable wages, a reduced work week and more job protection. Labour-friendly governments capped working hours and mandated holidays. European workers in essence traded money for more time—lower wages for more holiday. This raised the utility of leisure, because holidays are more fun and less costly when everyone else is taking time off too. Though European professionals are working longer hours than ever before, it is still fairly hard to find one in an office in August.
In America, where labour unions have always been far less powerful, the same shocks led to job losses and increased competition. In the 1980s Ronald Reagan cut taxes and social-welfare programmes, which increased economic inequality and halted the overall decline in working hours. The rising costs of certain basics - pensions, health care and higher education, much of which is funded or subsidised in Europe - make it rational to trade more time for money. And because American holidays are more limited, doled out grudgingly by employers (if at all), it is harder to co-ordinate time off with others, which lowers its value, says John de Graaf, executive director of Take Back Your Time, an advocacy organisation in America.
The returns on work are also potentially much higher in America, at least for those with a college degree. This is because taxes and transfer payments do far less to bridge the gap between rich and poor than in other wealthy nations, such as Britain, France and Ireland. The struggle to earn a place on that narrow pedestal encourages people to slave away for incomparably long hours. “In America the consequences of not being at the top are so dramatic that the rat race is exacerbated,” says Joseph Stiglitz, a Nobel prize-winning economist. “In a winner-takes-all society you would expect this time crunch.”
So rising wages, rising costs, diminishing job security and more demanding, rewarding work are all squeezing leisure time—at least for the fortunate few for whom work-time is actually worth something. But without a doubt the noisiest grumbles come from working parents, not least the well-educated ones. Time-use data reveals why these people never have enough time: not only are they working the longest hours, on average, but they are also spending the most time with their children.
American mothers with a college degree, for example, spend roughly 4.5 hours more per week on child care than mothers with no education beyond high school. This gap persists even when the better-educated mother works outside the home, as she is now likely to do, according to research from Jonathan Guryan and Erik Hurst of the University of Chicago, and Melissa Kearney of the University of Maryland. As for fathers, those with a job and a college degree spend far more time with their children than fathers ever used to, and 105% more time than their less-educated male peers. These patterns can be found around the world, particularly in relatively rich countries.
If their leisure time is so scarce, why are these people spending so much of it doting on their sprogs, shepherding them from tutors to recitals to football games? Why aren’t successful professionals outsourcing more of the child-rearing? There are several reasons for this. The first is that people say they find it far more meaningful than time spent doing most other things, including paid work; and if today’s professionals value their time at work more than yesterday’s did, presumably they feel the time they spend parenting is more valuable still. Another reason is that parents - and above all educated parents - are having children later in life, which puts them in a better position emotionally and financially to make a more serious investment. When children are deliberately sought, sometimes expensively so, parenting feels more rewarding, even if this is just a confirmation bias.
A mother’s work
The rise in female employment also seems to have coincided with (or perhaps precipitated) a similarly steep rise in standards for what it means to be a good parent, and especially a good mother. Niggling feelings of guilt and ambivalence over working outside the home, together with some social pressures, compel many women to try to fulfil idealised notions of motherhood as well, says Judy Wajcman, a sociology professor at the London School of Economics and author of a new book, “Pressed for Time: The Acceleration of Life in Digital Capitalism”.
The struggle to “have it all” may be a fairly privileged modern challenge. But it bears noting that even in professional dual-income households, mothers still handle the lion’s share of parenting - particularly the daily, routine jobs that never feel finished. Attentive fathers handle more of the enjoyable tasks, such as taking children to games and playing sports, while mothers are stuck with most of the feeding, cleaning and nagging. Though women do less work around the house than they used to, the jobs they do tend to be the never-ending ones, like tidying, cooking and laundry. Well-educated men chip in far more than their fathers ever did, and more than their less-educated peers, but still put in only half as much time as women do. And men tend to do the discrete tasks that are more easily crossed off lists, such as mowing lawns or fixing things round the house. All of this helps explain why time for mothers, and especially working mothers, always feels scarce. “Working mothers with young children are the most time-scarce segment of society,” says Geoffrey Godbey, a time-use expert at Penn State University.
Parents also now have far more insight into how children learn and develop, so they have more tools (and fears) as they groom their children for adulthood. This reinforces another reason why well-off people are investing so much time in parenthood: preparing children to succeed is the best way to transfer privilege from one generation to the next. Now that people are living longer, parents are less likely to pass on a big financial bundle when they die. So the best way to ensure the prosperity of one’s children is to provide the education and skills needed to get ahead, particularly as this human capital grows ever more important for success. This helps explain why privileged parents spend so much time worrying over schools and chauffeuring their children to résumé-enhancing activities. “Parents are now afraid of doing less than their neighbours,” observes Philip Cohen, a sociology professor at the University of Maryland who studies contemporary families. “It can feel like an arms race.”
No time to lose
Leisure time is now the stuff of myth. Some are cursed with too much. Others find it too costly to enjoy. Many spend their spare moments staring at a screen of some kind, even though doing other things (visiting friends, volunteering at a church) tends to make people happier. Not a few presume they will cash in on all their stored leisure time when they finally retire, whenever that may be. In the meantime, being busy has its rewards. Otherwise why would people go to such trouble?
Alas, time, ultimately, is a strange and slippery resource, easily traded, visible only when it passes and often most highly valued when it is gone. No one has ever complained of having too much of it. Instead, most people worry over how it flies, and wonder where it goes. Cruelly, it runs away faster as people get older, as each accumulating year grows less significant, proportionally, but also less vivid. Experiences become less novel and more habitual. The years soon bleed together and end up rushing past, with the most vibrant memories tucked somewhere near the beginning. And of course the more one tries to hold on to something, the swifter it seems to go.
Writing in the first century, Seneca was startled by how little people seemed to value their lives as they were living them—how busy, terribly busy, everyone seemed to be, mortal in their fears, immortal in their desires and wasteful of their time. He noticed how even wealthy people hustled their lives along, ruing their fortune, anticipating a time in the future when they would rest. “People are frugal in guarding their personal property; but as soon as it comes to squandering time they are most wasteful of the one thing in which it is right to be stingy,” he observed in “On the Shortness of Life”, perhaps the very first time-management self-help book. Time on Earth may be uncertain and fleeting, but nearly everyone has enough of it to take some deep breaths, think deep thoughts and smell some roses, deeply. “Life is long if you know how to use it,” he counselled.
Nearly 2,000 years later, de Grazia offered similar advice. Modern life, that leisure-squandering, money-hoarding, grindstone-nosing, frippery-buying business, left him exasperated. He saw that everyone everywhere was running, running, running, but to where? For what? People were trading their time for all sorts of things, but was the exchange worth it? He closed his 1962 tome, “Of Time, Work and Leisure”, with a prescription:
Lean back under a tree, put your arms behind your head, wonder at the pass we’ve come to, smile and remember that the beginnings and ends of man’s every great enterprise are untidy.
9-Second CVs
Teenagers who have spent hours crafting their CVs as they seek that elusive first job had better look away now.
Employers spend on average less than nine seconds scanning a candidate’s CV before moving on to the next, because of the huge volume of applications per vacancy.
One expert likened the process to the casual dating app Tinder, renowned for the speed at which users scroll through profiles before they find one — or more — they like.
The findings are based on a survey of 500 employers by OnePoll last month for National Citizen Service, a volunteering programme run by charities and others and set up at the behest of David Cameron.
It found the average time that employers spend scanning a CV was 8.8 seconds and half said they did so in 6 seconds, after applications per role shot up from 46 last year to 93. Piers Linney, who appears in the BBC series Dragons’ Den and runs an IT business, Outsourcery, said that young people looking for their first job needed to make their applications stand out rather than list exam grades.
“The process of reviewing CVs has become almost ‘Tinderised’, with each CV given just a few seconds to stand out against the competition before being kept or cast aside,” he said.
The survey found that most employers thought the ideal CV should be kept to two pages and most preferred the Arial font, using point size 11.
What they most wanted was evidence of skills and interests beyond school achievements, such as initiative, drive, communication skills and motivation. Despite employers being forbidden to ask applicants their date of birth, people over the age of 50 are deleting “O levels” from their CVs in an attempt to be considered for vacancies, as the outdated exams show their age.
In Praise of Networking
(Economist)
THE purported theme of the World Economic Forum (WEF) changes every year. At this year’s gathering, on January 21st-24th, it will be “the new global context”. Last time, it was “reshaping the world”. But the forum’s real theme is less ponderous, and more constant: the power of networking. Many people protest that they would rather devote their time to real work than to schmoozing. But the fact that more than 2,500 of the world’s busiest people fly out to the small Swiss resort of Davos for the shindig each year is proof that schmoozing gets results. As a veteran of the WEF once put it, “contacts ultimately mean contracts.”
Networking is not just for the elites. A study of staff at a range of German workplaces, carried out over three years by Hans-Georg Wolff and Klaus Moser of the University of Erlangen-Nuremberg, found a positive correlation between the amount of effort the workers said they put into building contacts—inside and outside their offices—and their pay rises and career satisfaction. “Networking can be considered an investment that pays off in the future,” it concludes. Indeed, Reid Hoffman has become a billionaire by investing in a series of companies that have brought networking to the masses—Friendster, SocialNet and LinkedIn.
How does one make the most of a networking opportunity, whether it is in a charming village in the Swiss Alps or in the conference hall of a soulless hotel next to a motorway? A few people are natural networkers. Bill Clinton is the superman of this world. He wraps people in his psychic embrace, persuading them, momentarily, that they are the most important person in the world to him. A few business leaders are also naturals. For example, Goldman Sachs’s boss, Lloyd Blankfein, has a knack of making people feel he has taken them into his confidence. But most people are more like Hillary than Bill; they have to work at it.
The first principle for would-be networkers is to abandon all shame. Be flagrant in your pursuit of the powerful and the soon-to-be-powerful, and when you have their attention, praise them to the skies. Academic research has found that people’s susceptibility to flattery is without limit and beyond satire. In a study published in 1997, B.J. Fogg and Clifford Nass of Stanford University invited people to play a guessing game with a computer, which gave them various types of feedback as they played. Participants who received praise rated both the computer and themselves more highly than those who did not—even those who had been warned beforehand that the machine would compliment them regardless of how well they were doing. Yes, even blatantly insincere, computer-generated flattery works.
But shamelessness needs to be balanced with subtlety. Pretend to disagree with your interlocutor before coming around to his point of view; that gives him a sense of mastery. Discover similar interests or experiences. People are so drawn to those like themselves that they are more likely to marry partners whose first or last names resemble their own. Go out of your way to ask for help. Lending a helping hand allows a powerful person to exercise his power while also burnishing his self-esteem. In his time in the Senate, in 2005-08, Barack Obama asked about a third of his fellow senators for help and advice.
The second principle is that you must have something to say. Success comes from having a well-stocked mind, not just a well-thumbed Rolodex. It is tempting to treat the conference’s official topic as a bit of a joke. Wrong. The more seriously you take it, the more you will succeed in your, and the gathering’s, true purpose. Go to the main sessions and ask sensible questions. Reward the self-styled “thought leaders” in each session by adding them to your Twitter “follow” list. But don’t get carried away. It is a mistake to lecture people on your own pet subjects, as this columnist has discovered. It is an even bigger mistake to question the shibboleths of the global elite. There is a case to be made that homogeneous organisations can do better than ones with diverse workforces, for example. But don’t go there. The aim is to fit in by saying the right things, not to challenge the received wisdom.
The third principle is that you need to work hard at networking. Swot up in advance on the most important people who will be at an event. If you manage to meet them, follow up with an e-mail and a suggestion to meet again. Mukesh Ambani, the boss of Reliance Industries, one of India’s largest conglomerates, makes sure that he is briefed on people he is about to meet, and asks them about their interests. Mark Tucker, the boss of AIA, one of Asia’s biggest insurers, follows up conversations with detailed e-mails, sent at all times of the day and night. Julia Hobsbawm of Editorial Intelligence, a firm which coaches executives on how to network, says that it is like exercise and dieting. You need to incorporate it into your daily routine.
Brown-nosing ahead
Although successful networkers must be calculating, ruthless and shameless, they do better when they somehow make it all seem spontaneous, accidental even. One trick is to engineer “chance” meetings that get you closer to your prey. If he is a fitness fanatic, for example, be working out in the hotel gym when he arrives for his early-morning session. Another is to make sure the people you socialise with happen to be able to introduce you to people who are useful. One of the best guidebooks on this subject, by Keith Ferrazzi, is called “Never Eat Alone”.
The perfect solution is to make networking a fundamental part of your job, perhaps by becoming some sort of ambassador for the company or maybe even by founding a global network of your own. In 1971 Klaus Schwab was a 32-year-old business professor, who might have spent his life publishing obscure academic articles. But instead Mr Schwab organised a meeting of European executives, which grew into the WEF. It now has an annual budget of $200m, and the bosses of the world’s biggest firms pay tens of thousands of dollars each to rub shoulders with him.
In Praise of Not Networking
Davos sounds like it was more hellish than ever this year, what with the hypocrisy of delegates flying into town on their private jets to discuss climate change, only 17 per cent of participants being women, and reporters spotting billionaires shopping for $90 million apartments while killing time in line for official “mindfulness” sessions.
But one constructive thing did happen as a result of the lavish and ludicrous Swiss shindig last week: Harvard Business Review published a surprising piece about the venture capitalist Rich Stromback, who, despite being the “unofficial expert on the Davos party scene”, having attended the World Economic Forum for the past ten years, pronounced that virtually all networking was a waste of time.
The blog quoted and paraphrased him at length, including his argument that “99 per cent of Davos is information or experience you can get elsewhere”; that people shouldn’t worry too much about first impressions; that the key to networking is to stop networking; and that nobody wants to have a “networking conversation”, especially those who are at the highest levels of business and politics.
But he had me at “99 per cent”. The idea of conventional corporate networking has long made me uncomfortable, not least when I once attended Davos and came back with only one business card.
In the past I’ve put this aversion down to coming from a large family (when you never had a bedroom of your own as a child, the desire to be left alone is fierce), intense Englishness (choosing between poverty and asking a mate for a favour, I’d very reluctantly pick the latter with a heavy heart) and my chosen trade.
After all, journalism is serviced by an entire parallel industry — public relations — that exists, in part, to persuade journalists to network with people they don’t really want to network with.
But Mr Stromback has helped me to realise that it’s not only me: the overwhelming majority of all networking everywhere is a waste of time. Which is not to say that the 1 per cent of worthwhile networking is not absolutely essential. Most jobs are found through personal connections and word of mouth, with studies finding a positive correlation between how much effort people put into building contacts and happiness and pay.
Meeting media students, you can always spot the ones who are going to make it: the ones who make efforts to become actual friends with people in the industry, who realise that success is not just about having good exam results and a sparkling CV, but also about being affable and liked.
However, conventional corporate networking, as it is propounded by self-proclaimed networking experts, seems to work against this. Not sure what I mean?
Well, take an article published on the subject in The Economist last week (article above), which, when it wasn’t advocating wilful brown-nosing and sycophancy (“Be flagrant in your pursuit of the powerful and the soon-to-be-powerful, and when you have their attention, praise them to the skies”), was recommending disingenuousness (“Pretend to disagree with your interlocutor before coming around to his point of view; that gives him a sense of mastery”), blandness (“The aim is to fit in by saying the right things, not to challenge the received wisdom”) and stalking (“One trick is to engineer ‘chance’ meetings that get you closer to your prey. If he is a fitness fanatic, for example, be working out in the hotel gym when he arrives for his early-morning session.”).
I can barely bring myself to look in the mirror after an hour in the gym, let alone engage in light networking with a complete stranger. And I really do doubt the point of such an inane pursuit of the rich and powerful.
There are only so many people with whom you can have a meaningful relationship anyway (one study famously put it at 150); you are likely to put off more people than you attract with such desperate behaviour; quality is plainly more important than quantity; and, as Stromback points out, if you are good at what you do, and are a vaguely rounded human being, you will “network” naturally anyway, drifting towards people who are interesting and find you interesting in return. It’s human nature.
Indeed, I’m increasingly convinced that those who advance such networking advice do so from rather peculiar perspectives.
Take Julia Hobsbawm, for example, one of our country’s foremost proponents of networking, honorary professor of networking at Cass Business School, presenter of radio documentaries on the subject and giver of endless advice on the subject, such as “the faster you connect with someone, the sooner you will exchange valuable information with each other”.
Charming woman, I’m sure. But she would say that, wouldn’t she? Many of her businesses, from PR to awards, festivals and conferences, depend on knowing lots of people.
I really don’t think the rest of us need to try so hard, although the momentary hesitation I experienced while writing the preceding paragraph, as I worried for a microsecond that it might not be wise to criticise someone who is so “connected”, brings me to yet another problem with conventional corporate networking: it puts forward the false idea that we need to get on with everyone, when such a thing is not possible — or even desirable.
For when it comes to success, the enemies you make are as important as your friends.
I’m thinking here not only of the rather famous Winston Churchill quote (“You have enemies? Good. That means you’ve stood up for something, sometime in your life”), but also of Malcolm Gladwell’s argument that the best entrepreneurs in the world are not only open to experience, and conscientious, but also disagreeable. His examples included the famously cantankerous Steve Jobs and Ingvar Kamprad, the Ikea founder.
If you look around, you’ll find that nearly everyone worth admiring has gathered enemies as efficiently as friends. And that is the thing about networking: not only is 99 per cent of it a waste of time, but the people you actively do not network with will influence your life as much as the few with whom you do. Pick them carefully.
CVs - The Common Mistakes
I've sent out hundreds of resumes over my career, applying for just about every kind of job. I've personally reviewed more than 20,000 resumes. And at Google we sometimes get more than 50,000 resumes in a single week.
I have seen A LOT of resumes.
Some are brilliant, most are just ok, many are disasters. The toughest part is that for 15 years, I've continued to see the same mistakes made again and again by candidates, any one of which can eliminate them from consideration for a job. What's most depressing is that I can tell from the resumes that many of these are good, even great, people. But in a fiercely competitive labor market, hiring managers don't need to compromise on quality. All it takes is one small mistake and a manager will reject an otherwise interesting candidate.
I know this is well-worn ground on LinkedIn, but I'm starting here because -- I promise you -- more than half of you have at least one of these mistakes on your resume. And I'd much rather see folks win jobs than get passed over.
In the interest of helping more candidates make it past that first resume screen, here are the five biggest mistakes I see on resumes.
Mistake 1: Typos. This one seems obvious, but it happens again and again. A 2013 CareerBuilder survey found that 58% of resumes have typos.
In fact, people who tweak their resumes the most carefully can be especially vulnerable to this kind of error, because such mistakes often result from going back again and again to fine-tune a resume just one last time. And in doing so, a subject and verb suddenly don't match up, or a period is left in the wrong place, or a set of dates gets knocked out of alignment. I see this in MBA resumes all the time. Typos are deadly because employers interpret them as a lack of detail-orientation, as a failure to care about quality. The fix?
Read your resume from bottom to top: reversing the normal order helps you focus on each line in isolation. Or have someone else proofread closely for you.
Mistake 2: Length. A good rule of thumb is one page of resume for every ten years of work experience. Hard to fit it all in, right? But a three or four or ten page resume simply won't get read closely. As Blaise Pascal wrote, "I would have written you a shorter letter, but I did not have the time." A crisp, focused resume demonstrates an ability to synthesize, prioritize, and convey the most important information about you. Think about it this way: the *sole* purpose of a resume is to get you an interview. That's it. It's not to convince a hiring manager to say "yes" to you (that's what the interview is for) or to tell your life's story (that's what a patient spouse is for). Your resume is a tool that gets you to that first interview. Once you're in the room, the resume doesn't matter much. So cut back your resume. It's too long.
Mistake 3: Formatting. Unless you're applying for a job such as a designer or artist, your focus should be on making your resume clean and legible. At least ten point font. At least half-inch margins. White paper, black ink. Consistent spacing between lines, columns aligned, your name and contact information on every page. If you can, look at it in both Google Docs and Word, and then attach it to an email and open it as a preview. Formatting can get garbled when moving across platforms. Saving it as a PDF is a good way to go.
Mistake 4: Confidential information. I once received a resume from an applicant working at a top-three consulting firm. This firm had a strict confidentiality policy: client names were never to be shared. On the resume, the candidate wrote: "Consulted to a major software company in Redmond, Washington." Rejected! There's an inherent conflict between your employer's needs (keep business secrets confidential) and your needs (show how awesome I am so I can get a better job). So candidates often find ways to honor the letter of their confidentiality agreements but not the spirit. It's a mistake. While this candidate didn't mention Microsoft specifically, any reviewer knew that's what he meant. In a very rough audit, we found that at least 5-10% of resumes reveal confidential information. Which tells me, as an employer, that I should never hire those candidates ... unless I want my own trade secrets emailed to my competitors.
The New York Times test is helpful here: if you wouldn't want to see it on the home page of the NYT with your name attached (or if your boss wouldn't!), don't put it on your resume.
Mistake 5: Lies. This breaks my heart. Putting a lie on your resume is never, ever, ever, worth it. Everyone, up to and including CEOs, gets fired for this. (Google "CEO fired for lying on resume" and see.) People lie about their degrees (three credits shy of a college degree is not a degree), GPAs (I've seen hundreds of people "accidentally" round their GPAs up, but never have I seen one accidentally rounded down -- never), and where they went to school (sorry, but employers don't view a degree granted online for "life experience" as the same as UCLA or Seton Hall). People lie about how long they were at companies, how big their teams were, and their sales results, always goofing in their favor.
There are three big problems with lying: (1) You can easily get busted. The Internet, reference checks, and people who worked at your company in the past can all reveal your fraud. (2) Lies follow you forever. Fib on your resume and 15 years later get a big promotion and are discovered? Fired. And try explaining that in your next interview. (3) Our Moms taught us better. Seriously.
So this is how to mess up your resume. Don't do it! Hiring managers are looking for the best people they can find, but the majority of us all but guarantee that we'll get rejected.
The good news is that -- precisely because most resumes have these kinds of mistakes -- avoiding them makes you stand out.
Now, on to your questions:
1. Should I have keywords and jargon on my resume?
Yes, alas, but put them in their own section. A major part of why we have unemployment - and why finding a job is so hard - is that resumes are awful at conveying who you really are and companies stink at screening resumes. Too many companies rely on clumsy software products that sort and filter resumes based on keywords. And too many recruiters do the same thing, looking for fancy schools or company names instead of at what you actually did. (Google applications are screened by real, live people.) Crummy as that is, it's reality. So for now, if you're in a technical field, have a section where you list all your programming languages. If you're in other professions, you may want to extract the buzzwords from the job posting and have a "skills" section (doesn't matter what you call it) where you can park your laundry list of jargon. Don't waste space on verbs. Just have a list. Save your compelling writing for the bullet points under each job. Lifehacker has some other good suggestions for getting past the machines. And I'm optimistic that somewhere out there someone is building a MUCH better system for inferring who you really are and understanding what employers really need.
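Purely as an illustration of the kind of crude keyword screen described above - not Google's process, nor any particular vendor's product - here is a minimal, hypothetical sketch; the keyword list and resume text are invented.

```python
# A toy sketch of naive keyword screening. The keywords and resumes are
# hypothetical; real applicant-tracking systems are more elaborate, but the
# basic failure mode is the same: no keyword, no interview.
REQUIRED_KEYWORDS = {"python", "sql", "stakeholder management"}

def passes_keyword_screen(resume_text: str) -> bool:
    """Return True only if every required keyword appears somewhere in the resume."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

resume_with_skills_section = (
    "Skills: Python, SQL, stakeholder management.\n"
    "Experience: Built reporting dashboards used by 40 analysts."
)
resume_without_skills_section = "Experience: Built reporting dashboards used by 40 analysts."

print(passes_keyword_screen(resume_with_skills_section))     # True
print(passes_keyword_screen(resume_without_skills_section))  # False
```

Which is exactly why parking a plain laundry list of jargon in a skills section gets you past this sort of filter, while your compelling writing stays in the bullet points where a human will read it.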
2. Should I pay someone to write my resume?
Nope. See my post here on how to write a resume that will get you noticed. Even better, find someone like you who already has the job you want. If you're a veteran, find someone from your service who works in the job and company you want. If you're a student, find an alumna/-us who has your dream job (your career center will have resume books you can mine). Emulate their resume. (Notice I didn't say "copy" ... big difference!) Look at how they described their experiences and accomplishments. They wrote things in a way that got noticed. They got it right. Do what they did. Don't waste your money on something you can get for free.
3. Should I include organizations where I worked more than 20 years ago?
You don't need to. For a competent hiring manager, your early experience isn't relevant. No one cares that I worked at an Olive Garden 20+ years ago. So on my resume I can pick some arbitrary cut-off point, have a "Prior experience" section, and summarize that I worked at a range of jobs in restaurants, non-profits, and manufacturing.
4. Do resumes predict performance?
I haven't seen anything to suggest they do. Resumes are a very poor information source. Work sample tests are actually the best predictor of performance, followed by tests of cognitive ability and structured interviews. I’ve got three chapters explaining how you can become a world class interviewer in my book WORK RULES!, coming out in April, if you’re interested in learning more.
5. The best people don't always have the best resumes. Excluding someone because of a typo is stupid and you're a horrible person for doing that.
Ok, (a) that's not a question. And (b), I confess that I do occasionally overlook an error, for example if the person writing the resume isn't a native English speaker. But (c), from the recruiter's perspective, if they have a choice between two equally impressive resumes, I think we can agree that the one that says "professional booger" instead of "blogger" is probably not going to get a call.
6. Shouldn't HR departments and recruiters work harder to find the best people? Why put the blame on the job seeker?
I want everyone to have the best possible chance of landing their dream job. That means controlling the parts of the application process you can. You can control every single word on your resume. You can't control the quality of the person reading it. But I will tell you that at recruiting firms they only get paid for filling jobs, so they do look hard at applications. What they see is in your control.
7. I'm a mom (or dad) coming back into the workforce after time off with my child. How do I explain the time off?
Don't apologize and don't hide. Put down that you took time off for your family. If you volunteered or did part-time work, list that too, but own your decision. Parents who have left the workforce and are coming back in are one of the biggest untapped sources of talent for recruiters. We get that at Google, and more and more other companies are starting to see it too.
8. Hey! You had a typo in your post!
Yes, but I promise you my resume is pristine! ;)
Best Resumes
There's a ton of unfairness in the job search process. As a candidate, you can’t control whether a company requires a work visa, whether some executive’s kid has an inside track on your dream job, or whether your interviewer has some private or unconscious bias that will hurt your chances. I’ll write about some of these -- especially unconscious bias -- in the future.
For now, I want to focus on the most controllable element of a job search: your resume. The sole purpose of a resume is to get you past that first screen and into an interview. In my last post, “The Biggest Mistakes I See on Resumes, and How to Correct Them,” I covered the all-too-common mistakes that knock applicants out of consideration at many companies. Let’s assume you’ve read that post and scrubbed your resume so it’s concise, error-free, legible, and honest. You’re already better off than at least half the applicants out there.
But how do you make your accomplishments stand out? There’s a simple formula. Every one of your accomplishments should be presented as:
Accomplished [X] as measured by [Y] by doing [Z]
In other words, start with an active verb, numerically measure what you accomplished, provide a baseline for comparison, and detail what you did to achieve your goal. Consider the following two descriptions of the same work, and ask yourself which would look better on a resume:
Studied financial performance of companies and made investment recommendations
Improved portfolio performance by 12% ($1.2M) over one year by refining cost of capital calculations for information-poor markets and re-weighting portfolio based on resulting valuations
The addition of the “12% improvement” makes the statement more powerful. Adding “($1.2M)” anticipates the reviewer’s question about whether 12% is a big deal or not. If you improved investment results by 12%, but that meant going from $100 to $112, that’s not too impressive. But adding $1.2M to the starting portfolio value of $10 million is huge. Explaining how you did it adds credibility and gives insight into your strengths.
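To see the template and the arithmetic in one place, here is a small illustrative sketch (Python; not from the original post - the function name and formatting are my own). The figures simply restate the $10 million / 12% / $1.2M example above.

```python
# Illustrative only: compose a resume bullet from the
# "Accomplished [X] as measured by [Y] by doing [Z]" template,
# and check the arithmetic behind "12% ($1.2M)".

def accomplishment(x: str, y: str, z: str) -> str:
    """Fill the X / Y / Z template with concrete text."""
    return f"{x} as measured by {y} by {z}"

starting_portfolio = 10_000_000                       # $10M baseline
improvement_pct = 0.12                                # the 12% gain
dollar_gain = starting_portfolio * improvement_pct    # = $1,200,000

bullet = accomplishment(
    "Improved portfolio performance",
    f"a {improvement_pct:.0%} (${dollar_gain / 1e6:.1f}M) gain over one year",
    "refining cost-of-capital calculations for information-poor markets",
)
print(bullet)
```

The point of printing the dollar figure alongside the percentage is the same one made above: 12% of $100 is trivial, while 12% of $10 million is $1.2 million, so the baseline is what turns a percentage into evidence.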
Several examples inspired by actual resumes will show you what I mean. The first bullet is typical: not bad, but certain not to stand out. The second is a much better version of a similar accomplishment from a different resume. Where there was still room to improve, my own suggested rewrite follows as a final version.
College student who is a leader in her sorority
Managed sorority budget
Managed $31,000 Spring 2014 budget and invested idle funds in appropriate high-yielding capital notes
Managed $31,000 Spring 2014 budget and invested $10,000 in idle funds in appropriate high-yielding capital notes returning 5% over the year
College student participating in a leadership program
Member of Management Leadership for Tomorrow (MLT)
Selected as one of 230 for this 18-month professional development program for high-achieving diverse talent
Selected as one of 230 participants nationwide for this 18-month professional development program for high-achieving diverse talent based on leadership potential, ability to contribute to this MLT cohort, and academic success
Finance or consulting professional
Responsible for negotiating service contracts with XYZ
Negotiated 30% ($500k) reduction in costs with XYZ to perform post-delivery support
Negotiated 30% ($500k) reduction in costs with XYZ to perform post-delivery support by designing and using results from an online auction of multiple vendors
Sales support associate
Achieved annual business plan commitments for volumes, model mix, wholesale revenue, selling expenses and brand
As a team member, contributed to 21% increase in advertiser spend by achieving 158% of target number of customer contacts (80 contacts per week) and 192% of target interaction depth (20 minutes per customer)
Candidate with skill-based resume
Skills: Excellent customer service skills. Friendly and positive attitude
Skills: Excellent customer service skills and positive attitude as demonstrated by receiving employee of the month in four consecutive months in 2014
Logistics expert
Reduce cost of goods sold strategy: Five years of line and supply chain management experience at XYZ distribution centers and managing outsourced third-party logistics providers
Achieved 30% logistics cost savings by reducing returns, use of overtime, excess and obsolete inventory and targeted outsourcing
Achieved 30% logistics cost savings ($900k) over five years by reducing returns (-8%), use of overtime (-7%), and excess and obsolete inventory (-5%), and through targeted outsourcing (-10%)
Marketing manager
Studied the branding and marketing strategies of XYZ. Analyzed the pricing strategies of XYZ in comparison to competitors
Led cross-functional 10-member team to develop and implement global advertising strategy for $X million XYZ brand
Led cross-functional 10-member team to develop and implement global advertising strategy for $X million XYZ brand resulting in 25-point increase in brand recall, 12% improvement in net promoter score, and contributing to 18% year-over-year sales improvement ($XM)
Veteran transitioning to the civilian sector
Worked as a trainer with deploying units to ready their medical personnel for combat action and trauma medicine
One of three officers selected to lead comprehensive redesign of the XYZ training program for X,000 Marines and sailors, increasing measured unit proficiency by 20% [This one is great -- I wouldn’t change a thing!]
You might feel like it’s hard to measure your work, but there is almost always something you can point to that differentiates you from others. Back when I was waiting tables at the Olive Garden, I would have written, “Exhibited the spirit of Hospitaliano by achieving 120% of dessert sales targets (compared to an average of 98%) and averaging 26% in tips per night.”
Well, maybe I wouldn’t have mentioned the Hospitaliano....
And even if your accomplishments don’t seem that impressive to you, recruiters will nevertheless love the specificity. “Served 85 customers per day with 100% accuracy” sounds good, even if the customers are people you rang up at a grocery store. It’s even more impressive if you can add, “…compared to an average of 70 customers at 90% accuracy for my peers.” Providing data helps. Making it meaningful with a comparison helps even more.
Niebuhr said to change the things you can control. I agree. You can’t control the biases and attention span of whoever reviews your resume. You do control what’s on the page in front of him or her. Use the formula “accomplished [X] as measured by [Y] by doing [Z]” and recruiters will take notice.
Tell A Story
The age-old art of storytelling — something humans have done since they could first communicate. So why has it become this year’s buzzword? And what is its new value?
In these days of tougher-than-ever job searches, competition for crowdfunding and start-ups looking to be the next Google or Facebook, it’s not enough just to offer up the facts about you or your company to prospective employers or investors. Or even to your own workers.
You need to be compelling, unforgettable, funny and smart. Magnetic, even. You need to be able to answer the question that might be lingering in the minds of the people you’re trying to persuade: What makes you so special?
You need to have a good story.
“As human beings, we know that stories work, but when we get in a business relationship, we forget this,” said Keith Quesenberry, a lecturer at the Center for Leadership Education at Johns Hopkins University.
Learning — or relearning — how to tell stories requires some skill. And consultants are lining up to teach it — sometimes for a hefty fee.
But not every story is well told. Most of us know a compelling tale when we hear one, but “it’s difficult for people to articulate why they like what they like,” said Paul J. Zak, a professor of neuroeconomics.
Mr. Quesenberry decided to see if he could understand what drew people to a particular story. Along with a co-author and his graduate students, he dissected two years’ worth of Super Bowl commercials using Freytag’s Pyramid, named after a German novelist who saw common patterns in the plots of novels and stories and developed a diagram to analyze them.
It probably sounds familiar from middle-school English class: Act 1, scene setting; Act 2, rising action; Act 3, the turning point; Act 4, the falling action; and Act 5, the denouement or release. Variations of this include fewer or more stages, but they all follow the same pattern.
The team coded each Super Bowl commercial for its number of acts before it aired. Some had only one act; others went up to five.
Mr. Quesenberry found that consumers rated commercials with more acts more highly, which can increase the likelihood that they will be shared on social media.
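To picture the analysis being described, here is a hypothetical sketch - not the study's data or code, just an invented illustration of checking whether act counts track viewer ratings (statistics.correlation requires Python 3.10 or later).

```python
# Hypothetical illustration of the content analysis described above:
# each commercial is coded for its number of dramatic acts (1-5), then
# act count is correlated with an average viewer rating. Numbers are invented.
from statistics import correlation  # Python 3.10+

commercials = [  # (acts, average viewer rating)
    (1, 5.2), (2, 5.9), (2, 5.5), (3, 6.4), (4, 7.1), (5, 7.8), (5, 8.0),
]

acts = [a for a, _ in commercials]
ratings = [r for _, r in commercials]

print(f"Correlation between act count and rating: {correlation(acts, ratings):.2f}")
```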
In fact, this year he predicted that because the commercial called “Puppy Love,” advertising Budweiser, had the full number of plot points and told a complete story, it would win in the ratings.
And sure enough, it was the viewers’ favorite in a USA Today poll. Having adorable puppies and horses probably didn’t hurt.
So does this translate into sales? Well, a Budweiser spokesman said that the company experienced “a marked improvement in share trends” after the puppy commercial.
While stories in commercials aren’t new, as Mr. Quesenberry said, we “keep rediscovering this and have to remind ourselves of the point of stories in a business context.” And while there is a formula, stories fail if they’re perceived as formulaic. Walking that line is tricky.
Andrew Linderman tries to teach people how to find that balance. A story coach, he works with companies including American Express, PBS and Random House, charging $1,800 to $3,500 for workshops and $500 to $5,000 for one-on-one training (less for nonprofits and start-ups). For $40, you can also take one of his two-hour classes, Storytelling for Entrepreneurs.
“The specifics of storytelling are relatively easy to articulate,” he said. “It’s the nuances that make a story distinct.”
In a recent class in New York City, about 15 students wrote and told a story about a business experience — and struggled to figure out why their three-minute presentations often fell flat.
Rajesh Singh, 23, of Queens, who came with a friend with whom he plans to start a web development company, said his problem was, “I see patterns, but I’m not making others see the patterns.”
The reason many stories don’t work, Mr. Linderman explained, is that as adults, we tend to judge, analyze and explain an experience, rather than tell it.
“My boss is a jerk” is a judgment, he said. Showing with specifics why a boss is a jerk is much more effective.
“Good stories are detailed, honest and personal,” he said. And stories we particularly like usually involve some sort of vulnerability “without emoting too much and going off the rails.”
Will Mahony, 24, of Brooklyn, attended the class with his business partner because they plan to release an app next year and want to find investors.
“There are so many apps and the marketplace is so crowded,” Mr. Mahony said. “We took the class because we want to nail down our story and our pitch — that is, intertwine the ‘ask’ within the story without being too forward.”
His first pitch wasn’t too successful, he said, but with some advice it improved. “It’s about balancing your personal story — incorporating your values, tying it together with a vision of the future, and telling how investors can get involved and also benefit themselves.”
Narativ, another company that teaches storytelling, grew out of the AIDS crisis. One of the company’s founders found that people with AIDS “were not afraid of dying, but they were afraid of leaving nothing behind,” said Jerome Deroy, Narativ’s chief executive. So they were encouraged to tell their stories, some of which were eventually filmed and used to raise research funds for AIDS.
Now Narativ, which charges $6,000 to $25,000 for speeches and trainings for corporations — prices vary for nonprofits — works with organizations all over the world.
Storytelling isn’t just for pitches. It can help board members understand a company’s goals or get employees in large companies to better relate to workers in other departments. For example, Mr. Deroy recounted a story told by a phone operator, Jose, who worked at a large health insurance company.
A client was refusing to wear a respirator for sleep apnea; it turned out that the client had seen his father die with a respirator and so feared using one. With Jose’s help, the client overcame his aversion, Mr. Deroy said, and a video of Jose’s tale became a hit on the company’s internal website as well as a basis for an advertising campaign.
PowerPoints are the bane of storytellers, but here are a few bullet points to keep in mind when developing a good story:
■ Know who your audience is.
■ Have a beginning, middle and end. (That sounds obvious, but people often forget that.)
■ Use concrete details and personal experience.
■ Don’t self-censor.
■ Don’t try to memorize a story so it sounds rehearsed. It’s not about perfection. It’s about connecting.
It’s that simple. And that complicated. You can have a multimillion-dollar movie that bombs and a brilliant five-act story in 30 seconds.
Labor unions in a post-industrial age
(Seth Godin)
The us/them mindset of the successful industrialist led to the inevitable and essential creation of labor unions. If, as Smith and Marx wrote, owning the means of production transfers maximum value to the factory owner, the labor union provided a necessary correction to an inherently one-sided relationship.
Industrialism is based on doing a difficult thing (making something) ever cheaper and more reliably. The union movement is the result of a group of workers insisting that they be treated fairly, despite the fact that they don't own the means of production. Before globalism, unions had the ability to limit the downward spiral of wages.
But what happens when the best jobs aren't on the assembly line, but involve connection, creation and art? What happens when making average stuff isn't sufficient to be successful? When interactions and product design and unintended (or intended) side effects are at least as important as Frederick Taylor measuring every motion and pushing to get it done as cheaply as possible?
Consider what would happen if a union used its power (collective bargaining, slowdowns, education, strikes) to push management to take risks, embrace change and most of all, do what's right for customers in a competitive age...
What if the unionized service workers demanded the freedom to actually connect with those that they are serving, and to do it without onerous scripts and a focus on reliable mediocrity?
What would have happened to Chrysler or GM if the UAW had threatened to strike in 1985 because the design of cars was so mediocre? Or if the unions had pushed hard for more and better robots, together with extensive education to be sure that their workers were the ones designing and operating them?
Or, what if the corrections union, instead of standing up for the few bad apples, pushed the system to bring daylight and humanity to their work, so that more dollars would be available for their best people?
There's a massive cultural and economic shift going on. Senior management is slowly waking up to it, as are some unions. This sort of shift feels risky, almost ridiculous, but it's a possible next step as the workers realize that their connection to the market and the internet gives them more of the means of production than ever before.
Without a doubt, there's a huge challenge in ensuring that the people who do the work are treated with appropriate respect, dignity and compensation. It's not happening nearly enough. But in an economy that rewards the race to the top so much more heavily than cutting costs a few dollars, unions have a vested interest in pushing each of their members to reject the industrial sameness that seems so efficient but ultimately leads to a race to the bottom, and jobs (their jobs), lost.
I am pretty sure I’ll never be old enough to hit the rocking chair. They keep putting up the retirement age. I’m just not ageing fast enough. On the other hand, Patrick Pichette, chief financial officer at Google, is clocking off at the age of 52. He accompanied last week’s “shock” announcement with an extensive explanation. He’d been up Kilimanjaro, you see, when his wife suggested it might be time to step away from his relentless life as a Silicon Valley high-flyer. And, perhaps it was the altitude, he couldn’t think of a reason to disagree.
“It has been a frenetic pace for about 1,500 weeks now,” he wrote, not about Kilimanjaro, about work. “Always on — even when I was not supposed to be. Especially when I was not supposed to be.” We must let the irony that it’s Silicon Valley’s fault we’re all “on” even when we’re not supposed to be escape us. We must focus on the “shock”.
Here is a chap who describes himself as a member of “the noble fraternity of worldwide insecure overachievers”, which is almost as irritating as a colleague who listed “pathologically ambitious” as an attribute on her CV. And he’s just stopped. Gone. Walked away from a reported $5m-a-year salary to go . . . backpacking.
Google’s share price dropped by 3%. Experts talked of a brain drain. But Pichette isn’t the first tech squillionaire to go fishing.
My question is: why the shock? He might be walking away from $5m-a-year but he’s doing it with all his previous millions-a-year. It would be ridiculous to keep slogging away, wouldn’t it?
Frequently, I’ve asked unimaginably wealthy people why they carry on working. “I don’t know what I’d do if I didn’t work,” they say. Or: “I just love my job.” The real reason is fear. People drop dead when they retire. Or they have to look after their grandchildren. Nightmare.
I’m going to set up a new business advising squillionaires what to do if they give up work. I will map out their long life ahead step by step. First, they can go backpacking like Pichette. Inca trail? Tick. Uluru? Tick. Maybe pay some Sherpas to carry them up Everest.
Of course, you can’t do that for ever. A couple of years, tops. Then six months learning to surf somewhere tropical. Exhausting. So, next, sitting in a cottage in the Brecon Beacons drinking fine wine, reading fine books, going for fine walks. Five years. And then, to recover, another five years doing exactly the same thing except with mozzarella and sunshine in Umbria.
That’s 12 years, so now we have a crisis. What is the point of my existence, they will say. Well, I shall counter, what is the point of anyone’s existence? Organising financial deals at Google? So I shall pack them off to volunteer for a worthy African non-governmental organisation. Five years. Then some more backpacking, followed by a late-life fitness fad such as marathon running or a cross-Channel swim. And then some more mozzarella.
By which time, they will be old enough to chill out and enjoy the minimal, joyful trappings of a proper retirement.
I shall charge a fortune for my services. I shall make squillions. And the very minute I do, I’ll retire. Probably.
The high public cost of low wages
An advocacy group just published a study with an eye-catching number: $9,434,067,497. That's the amount of welfare that restaurant workers, as a group, receive each year.
The report is meant to underscore the high public cost of low wages, and its findings include:
• About one fifth of all restaurant workers live in poverty.
• Tipped employees, meanwhile, or those making $2.13 an hour plus gratuities, experience poverty at two and a half times the rate of the overall U.S. workforce.
• Almost half of all restaurant workers' families are on at least one public-assistance program.
• Worst of all, this means each location of Olive Garden costs taxpayers $196,970 a year.
You Are Being Replaced
FORGET Skynet. Hypothetical world-ending artificial intelligence makes headlines, but the hype ignores what's happening right under our noses. Cheap, fast AI is already taking our jobs; we just haven't noticed.
This isn't dumb automation that can rapidly repeat identical tasks. It's software that can learn about and adapt to its environment, allowing it to do work that used to be the exclusive domain of humans, from customer services to answering legal queries.
These systems don't threaten to enslave humanity, but they do pose a challenge: if software that does the work of humans exists, what work will we do?
In the last three years, UK telecoms firm O2 has replaced 150 workers with a single piece of software. A large portion of O2's customer service is now automatic, says Wayne Butterfield, who works on improving O2's operations. "Sim swaps, porting mobile numbers, migrating from prepaid onto a contract, unlocking a phone from O2" – all are now automated, he says.
Humans used to manually move data between the relevant systems to complete these tasks, copying a phone number from one database to another, for instance. The user still has to call up and speak to a human, but now an AI does the actual work.
The AI is trained by watching and learning while humans do simple, repetitive database tasks. With enough training data, the AIs can then go to work on their own. "They navigate a virtual environment," says Jason Kingdon, chairman of Blue Prism, the start-up that developed O2's artificial workers. "They mimic a human. They do exactly what a human does. If you watch one of these things working it looks a bit mad. You see it typing. Screens pop up, you see it cutting and pasting."
One of the world's largest banks, Barclays, has also dipped a toe into this specialised AI. It used Blue Prism to deal with the torrent of demands that poured in from its customers after UK regulators demanded that it pay back billions of pounds of mis-sold insurance. It would have been expensive to rely entirely on human labour to field the sudden flood of requests. Having software agents that could take some of the simpler claims meant Barclays could employ fewer people.
The back office work that Blue Prism automates is undeniably dull, but it's not the limit for AI's foray into office space. In January, Canadian start-up ROSS started using IBM's Watson supercomputer to automate a whole chunk of the legal research normally carried out by entry-level paralegals.
Legal research tools already exist, but they don't offer much more than keyword searches. This returns a list of documents that may or may not be relevant. Combing through these for the argument a lawyer needs to make a case can take days.
ROSS returns precise answers to specific legal questions, along with a citation, just like a human researcher would. It also includes its level of confidence in its answer. For now, it is focused on questions about Canadian law, but CEO Andrew Arruda says he plans for ROSS to digest the law around the world.
Since its artificial intelligence is focused narrowly on the law, ROSS's answers can be a little dry. Asked whether it's OK for 20 per cent of the directors present at a directors' meeting to be Canadian, it responds that no, that's not enough. Under Canadian law, no directors' meeting may go ahead with less than 25 per cent of the directors present being Canadian. ROSS's source? The Canada Business Corporations Act, which it scanned and understood in an instant to find the answer.
By eliminating legal drudge work, Arruda says that ROSS's automation will open up the market for lawyers, reducing the time they need to spend on each case. People who need a lawyer but cannot afford one would suddenly find legal help within their means.
ROSS's searches are faster and broader than any human's. Arruda says this means it doesn't just get answers that a human would have had difficulty finding, it can search in places no human would have thought to look. "Lawyers can start crafting very insightful arguments that wouldn't have been achievable before," he says. Eventually, ROSS may become so good at answering specific kinds of legal question that it could handle simple cases on its own.
Where Blue Prism learns and adapts to the various software interfaces designed for humans working within large corporations, ROSS learns and adapts to the legal language that human lawyers use in courts and firms. It repurposes the natural language-processing abilities of IBM's Watson supercomputer to do this, scanning and analysing 10,000 pages of text every second before pulling out its best answers, ranked by confidence.
Lawyers are giving it feedback too, says Jimoh Ovbiagele, ROSS's chief technology officer. "ROSS is learning through experience."
Massachusetts-based Nuance Communications is building AIs that solve some of the same language problems as ROSS, but in a different part of the economy: medicine. In the US, after doctors and nurses type up case notes, another person uses those notes to try to match the description with one of thousands of billing codes for insurance purposes.
Nuance's language-focused AIs can now understand the typed notes, and figure out which billing code is a match. The system is already in use in a handful of US hospitals.
Kingdon doesn't shy away from the implications of his work: "This is aimed at being a replacement for a human, an automated person who knows how to do a task in much the same way that a colleague would."
But what will the world be like as we increasingly find ourselves working alongside AIs? David Autor, an economist at the Massachusetts Institute of Technology, says automation has tended to reduce drudgery in the past, and allowed people to do more interesting work.
"Old assembly line jobs were things like screwing caps on bottles," Autor says. "A lot of that stuff has been eliminated and that's good. Our working lives are safer and more interesting than they used to be."
Deeper inequality?
The potential problem with new kinds of automation like Blue Prism and ROSS is that they are starting to perform the kinds of jobs that can be the first rung on the corporate ladder, which could result in deepening inequality.
Autor remains optimistic about humanity's role in the future it is creating, but cautions that there's nothing to stop us engineering our own obsolescence, or that of a large swathe of workers that further splits rich from poor. "We've not seen widespread technological unemployment, but this time could be different," he says. "There's nothing that says it can't happen."
Kingdon says the changes are just beginning. "How far and fast? My prediction would be that in the next few years everyone will be familiar with this. It will be in every single office."
Once it reaches that scale, narrow, specialised AIs may start to offer something more, as their computational roots allow them to call upon more knowledge than human intelligence could.
"Right now ROSS has a year of experience," says Ovbiagele. "If 10,000 lawyers use ROSS for a year, that's 10,000 years of experience."
Which jobs will go next?
Artificial intelligence is already on the brink of handling a number of human jobs (see main story). The next jobs to become human-free might be:
Taxi drivers: Uber, Google and established car companies are all pouring money into machine vision and control research. It will be held back by legal and ethical issues, but once it starts, human drivers are likely to become obsolete.
Transcribers: Every day hospitals all over the world fire off audio files to professional transcribers who understand the medical jargon doctors use. They transcribe the tape and send it back to the hospital as text. Other industries rely on transcription too, and slowly but surely, machine transcription is starting to catch up. A lot of this is driven by data on the human voice gathered in call centres.
Financial analysts: Kensho, based in Cambridge, Massachusetts, is using AI to instantly answer financial questions which can take human analysts hours or even days to answer. By digging into financial databases, the start-up can answer questions like: "Which stocks perform best in the days after a bank fails?" Journalists at NBC can already use Kensho to answer questions about breaking news, replacing a human researcher.
Tall People Are Worth More
Employers pay tall people more than short people — and it seems that they are absolutely right to do so.
It has long been known that there is a “height premium” in employment. An increase in male stature corresponds to a 2.3 percentage point rise in salary, while the earnings difference between the bottom and top quartiles in height is roughly equivalent to that associated with an extra year or two of schooling.
The question for economists has been whether this demonstrates unconscious bias on the part of the employers, or whether it is because tall people really are better.
Now researchers from Ohio State University have found that the premium paid for height can be entirely explained by the fact that taller people are on average cleverer and have better social skills. This is explained, they said, by the fact that, on a population level, tall people are likely to have been better nourished as children, something that would help their brain development.
“Height captures two things — that you are growing up in a good healthy environment, and because you are able to grow up in a healthy environment you are able to develop as fully as you are allowed and access your true cognitive capacity,” said Andreas Schick, who led the study.
However, the researchers said that this would only explain half the difference. “There is more to the brain than cognitive ability,” Dr Schick said. “There is also social ability.” The team used the British National Childhood Development Study, which follows children born in 1958, to see if they could explain the other half.
The British study included teacher assessments of various social skills. “When these were considered, the whole relationship was completely explained,” said Dr Schick.
He emphasised, however, that this was not necessarily bad news for short people. The study was considering population averages, and what was crucial for an individual was not height relative to other people but height relative to the height their genes could achieve. If your parents are short you are more likely to be short, but that does not mean your brain is underdeveloped.
Rise of the Robots
From the self-checkout aisle of the grocery store to the sports section of the newspaper, robots and computer software are increasingly taking the place of humans in the workforce. Silicon Valley executive Martin Ford says that robots, once thought of as a threat to only manufacturing jobs, are poised to replace humans as teachers, journalists, lawyers and others in the service sector.
"There's already a hardware store [in California] that has a customer service robot that, for example, is capable of leading customers to the proper place on the shelves in order to find an item," Ford tells Fresh Air's Dave Davies.
In his new book, Rise of the Robots, Ford considers the social and economic disruption that is likely to result when educated workers can no longer find employment.
On robots in manufacturing
Any jobs that are truly repetitive or rote — doing the same thing again and again — in advanced economies like the United States or Germany, those jobs are long gone. They've already been replaced by robots years and years ago.
So what we've seen in manufacturing is that the jobs that are actually left for people to do tend to be the ones that require more flexibility or require visual perception and dexterity. Very often these jobs kind of fill in the gaps between machines. For example, feeding parts into the next part of the production process or very often they're at the end of the process — perhaps loading and unloading trucks and moving raw materials and finished products around, those types of things.
But what we're seeing now in robotics is that finally the machines are coming for those jobs as well, and this is being driven by advances in areas like visual perception. You now have got robots that can see in three dimensions and that's getting much better and also becoming much less expensive. So you're beginning to see machines that are starting to have the kind of perception and dexterity that begins to approach what human beings can do. A lot more jobs are becoming susceptible to this and that's something that's going to continue to accelerate, and more and more of those jobs are going to disappear and factories are just going to relentlessly approach full-automation where there really aren't going to be many people at all.
There's a company here in Silicon Valley called Industrial Perception which is focused specifically on loading and unloading boxes and moving boxes around. This is a job that up until recently would've been beyond the robots because it relies on visual perception often in varied environments where the lighting may not be perfect and so forth, and where the boxes may be stacked haphazardly instead of precisely and it has been very, very difficult for a robot to take that on. But they've actually built a robot that's very sophisticated and may eventually be able to move boxes about one per second and that would compare with about one per every six seconds for a particularly efficient person. So it's dramatically faster and, of course, a robot that moves boxes is never going to get tired. It's never going to get injured. It's never going to file a workers' compensation claim.
On a robot that's being built for use in the fast food industry
Essentially, it's a machine that produces very, very high quality hamburgers. It can produce about 350 to 400 per hour; they come out fully configured on a conveyor belt ready to serve to the customer. ... It's all fresh vegetables and freshly ground meat and so forth; it's not frozen patties like you might find at a fast food joint. These are actually much higher quality hamburgers than you'd find at a typical fast food restaurant. ... They're building a machine that's actually quite compact that could potentially be used not just in fast food restaurants but in convenience stores and also maybe in vending machines.
On automated farming
In Japan they've got a robot that they use now to pick strawberries and it can do that one strawberry every few seconds and it actually operates at night so that they can operate around the clock picking strawberries. What we see in agriculture is that's the sector that has already been the most dramatically impacted by technology and, of course, mechanical technologies — it was tractors and harvesters and so forth. There are some areas of agriculture now that are almost essentially, you could say, fully automated.
On computer-written news stories
Essentially it looks at the raw data that's provided from some source, in this case from the baseball game, and it translates that into a real narrative. It's quite sophisticated. It doesn't simply take numbers and fill in the blanks in a formulaic report. It has the ability to actually analyze the data and figure out what things are important, what things are most interesting, and then it can actually weave that into a very compelling narrative. ... They're generating thousands and thousands of stories. In fact, the number I heard was about one story every 30 seconds is being generated automatically and that they appear on a number of websites and in the news media. Forbes is one that we know about. Many of the others that use this particular service aren't eager to disclose that. ... Right now it tends to be focused on those areas that you might consider to be a bit more formulaic, for example sports reporting and also financial reporting — things like earnings reports for companies and so forth.
On computers starting to do creative work
Right now it's the more routine formulaic jobs — jobs that are predictable, the kinds of jobs where you tend to do the same kinds of things again and again — those jobs are really being heavily impacted. But it's important to realize that that could change in the future. We already see a number of areas, like [a] program that was able to produce [a] symphony, where computers are beginning to exhibit creativity — they can actually create new things from scratch. ... [There is] a painting program which actually can generate original art; not to take a photograph and Photoshop it or something, but to actually generate original art.
Shadow Work - Your Third Job
Technology has knocked the bottom rung out of the employment ladder, which has sent youth unemployment around the globe skyrocketing and presented us with a serious economic dilemma. While many have focused on the poor state of our educational system or the “jobless” recovery, another, overlooked factor behind this trend is the phenomenon of “shadow work.” I define shadow work as all the unpaid jobs we do on behalf of businesses and organizations: We are pumping our own gas, scanning our own groceries, booking our travel and busing our tables at Starbucks. Shadow work is a new concept, so as yet, no one has compiled economic data on how many jobs we, the consumers, have taken over from (erstwhile) employees. Yet it is surely a force shrinking the job market, and the unemployment it creates is structural. Thanks in part to this new phenomenon, widespread joblessness could become entrenched in the social landscape.
Consider what you now do yourself: You can bank on your cell phone, check yourself out at CVS or the grocery store without ever speaking to an employee, book your own flights and print your boarding pass at the airport without ever talking to a ticket agent—and that’s just in the last few years. Imagine what’s coming next.
In the modern economy, there is no bigger issue than jobs and the cost of maintaining a staff. For the vast majority of businesses, schools and nonprofits, personnel is the largest budget item. This includes, of course, both salaries and benefits. (The latter were once called “fringe benefits,” though the term “fringe” disappeared when the category outgrew anything resembling a fringe.) Hiring, training and supervising employees augment the cost of personnel, and another outlay kicks in when workers retire—pensions, annuities and, for some employers, the gigantic healthcare costs that pile up from retirement until the end of life, which has become a lengthy period as life spans stretch into the 80s and 90s.
In recent years, salaries in real dollars have either remained static or dropped for most of the labor force. But the galloping cost of benefits—one rule of thumb pegs them at 40 percent of salary—has put steady pressure on employers. Health care expenses, in particular, have driven up this line item. In the United States, health care has become an enormous, seemingly uncontrollable sector, swelling relentlessly and growing far faster than the rest of the economy—much as cancer grows, without relationship to neighboring cells.
Short of a seismic change like universal single-payer health insurance with price controls on drugs and procedures, the upward pressure on employee benefits will continue. The upshot is a strong incentive to replace full-time employees with part-time, outsourced, overseas or contract workers, who receive no benefits. Better yet, simply lay people off—or hand off jobs to customers as shadow work.
Politicians and pundits who shake their heads at the stubbornness of high unemployment rates are either overlooking or ignoring the obvious. Our economic and political system is stacked to reward businesses for discarding employees, not hiring them.
There are three main strategies for cutting payroll; two are well known.
Downsizing is a classic: Lay off workers and shift their jobs to the remaining, shrunken staff. Not long ago, a nonprofit education newsletter in Boston replaced all three of its full-time staff members with one new full-time editor and a part-time assistant, who were expected to carry on—with half the previous staffing level. The remaining employees have no choice but to work more. Supposedly they feel grateful to still have jobs. Downsizing is an in-house breed of shadow work created by thinning out both senior people and support staff.
Second, automation replaces employees with machines. This has gone on for centuries, at least since the Industrial Revolution and probably longer. Automation pervades manufacturing and many service industries. Robots do not draw salaries, or belong to labor unions, or receive fringe benefits. They need maintenance, but don’t require vacation time, sick time, maternity leaves or, best of all, health insurance. Robots are impeccable “team players” with no personal agendas. They’ll work round the clock and on weekends at their regular hourly rate and never ask for raises. Hence, whenever financially feasible, businesses will substitute robotics for people.
The third, less-recognized way to cut staff is to outsource jobs as shadow work. In this model, the customers do the work, operating hand-in-hand with robots to complete transactions. The new check-in kiosk in the hotel lobby, for example, means one less person behind the desk. This pincer movement spins off unemployment that may be permanent, because technology, not the business cycle, drives it. Historically, automation has eliminated jobs at the point of production, e.g., in factories. Shadow work instead deletes jobs at the point of sale, e.g., at drugstore checkouts. As noted, there are no hard data on this yet, but one thing we do know is that points of sale vastly outnumber points of production. The development of ever-more-sophisticated technologies only fuels the growth of new forms of shadow work, as it enables consumers to do more kinds of jobs, and more cheaply. ATMs arrived decades ago, but today, customers can do much of their banking with a handheld device.
Shadow work is squeezing out entry-level jobs that have launched countless careers. These jobs at the base of the economic pyramid pay little but lay the foundation for everything that rises above them—and as with any structure, when the foundation crumbles, the superstructure may collapse as well. Entry-level jobs provide more than a paycheck. They are the sidewalk of the workplace, the platform that allows entry to all the businesses on Main Street.
Consider my father, who in 1937 began on the bottom rung of the ladder as a messenger in a small-town bank in New Jersey. In 1963 he became president, chairman of the board and CEO after having been promoted through the ranks as a teller, bookkeeper, loan officer and executive vice president. He understood every facet of banking by the time he took the helm of the company. Contrast this with the preparation many banking executives get today: an M.B.A. with specialization in finance and a penchant for high-risk derivatives. If these bankers had gone out on hundreds of mortgage appraisals like my dad, seeing the actual houses for which they were lending money and meeting real, live borrowers, would the 2008 banking crisis have happened?
Starter positions, including summer and part-time gigs, are where young people learn how to hold down a real job. (That means a job with wages—not a volunteer job, not an unpaid internship, not an NGO project in a developing country.) This is where they learn to show up on time, appropriately dressed and groomed, with a professional attitude, and learn habits like cooperation, punching a clock, service with a smile. But how does an aspiring banker work his way up from the teller’s window if ATMs and shadow-working customers have displaced tellers? How does a secretary become the office manager and later an executive if shadow work eliminates support staff—so there are no secretaries?
For those without education and skills, these low-level positions often are their careers. If such jobs vanish, a throng of unemployed young people will find themselves with little money and too much spare time. This is a dangerous development in any society. Unrest and violence throughout the Arab world have erupted from streets teeming with young men lacking jobs—angry youth who congregate online through social media. Such mobs can become unruly. In 2003, the dissolution of the Iraqi army put 400,000 young men out of work, triggering a bloody insurgency that still continues. In today’s global village, where citizens network and congregate in political flashmobs, we cannot risk creating an immense underclass of idle youth.
Yet this is exactly what we are doing. Young people aged 15 to 24 make up 17 percent of the global population but 40 percent of the unemployed, according to the World Economic Forum. In 2013, their rate of joblessness (12.6 percent) was about triple the worldwide adult rate of 4.5 percent. Youth unemployment has reached 17.1 percent in North America, 21.4 percent in the European Union, 14.3 percent in Latin America and the Caribbean, 27.9 percent in North Africa, 26.5 percent in the Middle East and 9 percent in East Asia.
In some countries, the better-educated may be psychologically deflated as well, as they’ve been told that education guarantees a successful career but are finding that’s not true. Many hold college and graduate degrees and even have “hard” skills like computer training, yet must move back in with their parents. In the United States, students float from internship to internship to degree program, as worthwhile salaried jobs remain scarce. Emily, at age 28, has moved in with her retired parents in downtown Boston. After earning a college degree in psychology from the University of Virginia, she hopped from one internship to another in fields including advertising and market research. She is now completing studies to become a licensed physician assistant, hoping that this recognized “hard” skill will finally begin a career for her.
Unfortunately, shadow work may be one obstacle that keeps this generation economically disabled. They can get stuck doing internships for years and find that they’ve only been spinning their wheels doing shadow work instead of building a career.
Yet all is not lost. While shadow work eliminates some jobs, it spins off others. For example, let’s reconsider the robotic gasoline pumps that have replaced pump jockeys, those unskilled teenagers who once filled gas tanks. The advent of self-service pumps also creates new jobs, like designing, manufacturing, installing and maintaining the robotic pumps. Furthermore, the charge-card data on gasoline sales gets uploaded via satellite to financial institutions, a process that needs technical and business oversight and employees to do it. Similarly, while Orbitz and Kayak.com reduce travel agencies’ business by transferring shadow work to customers, such websites also produce jobs for web designers, software engineers, online marketers and advertising executives.
Skilled jobs of this kind require education and technical training. Their salaries are a distinct upgrade over pump-jockey pay. But to cash in on the opportunities, we must renew our educational establishment, gearing it to the kinds of expertise the emerging workplace rewards. The information economy favors job applicants with technical skills and facility in the digital realm. “People” skills will also be rewarded in growth sectors like home health care. Even there, however, knowing the ropes of the health-insurance system and the complexities of pharmaceutical treatment will become more and more salient. Paradoxically, careers that depend on physical expertise, from hair styling to orthopedic surgery, may be among the most secure, as new software is unlikely to supplant the services such professionals provide.
The future of entry-level work remains a conundrum. Those who lack education or technical training could find themselves permanently frozen out of the job market, as robotic technology automates much of manufacturing and repetitive tasks, and shadow-working consumers take on jobs that are easily learned, like scanning bar-coded purchases. There will be fewer chances to start a career without some kind of skill to offer employers, as “on-the-job training” becomes something done not by salaried staff, but by you—and other shadow-working customers.
A World Without Work
1. Youngstown, U.S.A.
The end of work is still just a futuristic concept for most of the United States, but it is something like a moment in history for Youngstown, Ohio, one its residents can cite with precision: September 19, 1977.
For much of the 20th century, Youngstown’s steel mills delivered such great prosperity that the city was a model of the American dream, boasting a median income and a homeownership rate that were among the nation’s highest. But as manufacturing shifted abroad after World War II, Youngstown steel suffered, and on that gray September afternoon in 1977, Youngstown Sheet and Tube announced the shuttering of its Campbell Works mill. Within five years, the city lost 50,000 jobs and $1.3 billion in manufacturing wages. The effect was so severe that a term was coined to describe the fallout: regional depression.
Youngstown was transformed not only by an economic disruption but also by a psychological and cultural breakdown. Depression, spousal abuse, and suicide all became much more prevalent; the caseload of the area’s mental-health center tripled within a decade. The city built four prisons in the mid-1990s—a rare growth industry. One of the few downtown construction projects of that period was a museum dedicated to the defunct steel industry.
This winter, I traveled to Ohio to consider what would happen if technology permanently replaced a great deal of human work. I wasn’t seeking a tour of our automated future. I went because Youngstown has become a national metaphor for the decline of labor, a place where the middle class of the 20th century has become a museum exhibit.
“Youngstown’s story is America’s story, because it shows that when jobs go away, the cultural cohesion of a place is destroyed,” says John Russo, a professor of labor studies at Youngstown State University. “The cultural breakdown matters even more than the economic breakdown.”
In the past few years, even as the United States has pulled itself partway out of the jobs hole created by the Great Recession, some economists and technologists have warned that the economy is near a tipping point. When they peer deeply into labor-market data, they see troubling signs, masked for now by a cyclical recovery. And when they look up from their spreadsheets, they see automation high and low—robots in the operating room and behind the fast-food counter. They imagine self-driving cars snaking through the streets and Amazon drones dotting the sky, replacing millions of drivers, warehouse stockers, and retail workers. They observe that the capabilities of machines—already formidable—continue to expand exponentially, while our own remain the same. And they wonder: Is any job truly safe?
Futurists and science-fiction writers have at times looked forward to machines’ workplace takeover with a kind of giddy excitement, imagining the banishment of drudgery and its replacement by expansive leisure and almost limitless personal freedom. And make no mistake: if the capabilities of computers continue to multiply while the price of computing continues to decline, that will mean a great many of life’s necessities and luxuries will become ever cheaper, and it will mean great wealth—at least when aggregated up to the level of the national economy.
But even leaving aside questions of how to distribute that wealth, the widespread disappearance of work would usher in a social transformation unlike any we’ve seen. If John Russo is right, then saving work is more important than saving any particular job. Industriousness has served as America’s unofficial religion since its founding. The sanctity and preeminence of work lie at the heart of the country’s politics, economics, and social interactions. What might happen if work goes away?
The U.S. labor force has been shaped by millennia of technological progress. Agricultural technology birthed the farming industry, the industrial revolution moved people into factories, and then globalization and automation moved them back out, giving rise to a nation of services. But throughout these reshufflings, the total number of jobs has always increased. What may be looming is something different: an era of technological unemployment, in which computer scientists and software engineers essentially invent us out of work, and the total number of jobs declines steadily and permanently.
This fear is not new. The hope that machines might free us from toil has always been intertwined with the fear that they will rob us of our agency. In the midst of the Great Depression, the economist John Maynard Keynes forecast that technological progress might allow a 15-hour workweek, and abundant leisure, by 2030. But around the same time, President Herbert Hoover received a letter warning that industrial technology was a “Frankenstein monster” that threatened to upend manufacturing, “devouring our civilization.” (The letter came from the mayor of Palo Alto, of all places.) In 1962, President John F. Kennedy said, “If men have the talent to invent new machines that put men out of work, they have the talent to put those men back to work.” But two years later, a committee of scientists and social activists sent an open letter to President Lyndon B. Johnson arguing that “the cybernation revolution” would create “a separate nation of the poor, the unskilled, the jobless,” who would be unable either to find work or to afford life’s necessities.
The job market defied doomsayers in those earlier times, and according to the most frequently reported jobs numbers, it has so far done the same in our own time. Unemployment is currently just over 5 percent, and 2014 was this century’s best year for job growth. One could be forgiven for saying that recent predictions about technological job displacement are merely forming the latest chapter in a long story called The Boys Who Cried Robot—one in which the robot, unlike the wolf, never arrives in the end.
The end-of-work argument has often been dismissed as the “Luddite fallacy,” an allusion to the 19th-century British brutes who smashed textile-making machines at the dawn of the industrial revolution, fearing the machines would put hand-weavers out of work. But some of the most sober economists are beginning to worry that the Luddites weren’t wrong, just premature. When former Treasury Secretary Lawrence Summers was an MIT undergraduate in the early 1970s, many economists disdained “the stupid people [who] thought that automation was going to make all the jobs go away,” he said at the National Bureau of Economic Research Summer Institute in July 2013. “Until a few years ago, I didn’t think this was a very complicated subject: the Luddites were wrong, and the believers in technology and technological progress were right. I’m not so completely certain now.”
2. Reasons to Cry Robot
What does the “end of work” mean, exactly? It does not mean the imminence of total unemployment, nor is the United States remotely likely to face, say, 30 or 50 percent unemployment within the next decade. Rather, technology could exert a slow but continual downward pressure on the value and availability of work—that is, on wages and on the share of prime-age workers with full-time jobs. Eventually, by degrees, that could create a new normal, where the expectation that work will be a central feature of adult life dissipates for a significant portion of society.
After 300 years of people crying wolf, there are now three broad reasons to take seriously the argument that the beast is at the door: the ongoing triumph of capital over labor, the quiet demise of the working man, and the impressive dexterity of information technology.
• Labor’s losses. One of the first things we might expect to see in a period of technological displacement is the diminishment of human labor as a driver of economic growth. In fact, signs that this is happening have been present for quite some time. The share of U.S. economic output that’s paid out in wages fell steadily in the 1980s, reversed some of its losses in the ’90s, and then continued falling after 2000, accelerating during the Great Recession. It now stands at its lowest level since the government started keeping track in the mid‑20th century.
A number of theories have been advanced to explain this phenomenon, including globalization and its accompanying loss of bargaining power for some workers. But Loukas Karabarbounis and Brent Neiman, economists at the University of Chicago, have estimated that almost half of the decline is the result of businesses’ replacing workers with computers and software. In 1964, the nation’s most valuable company, AT&T, was worth $267 billion in today’s dollars and employed 758,611 people. Today’s telecommunications giant, Google, is worth $370 billion but has only about 55,000 employees—less than a tenth the size of AT&T’s workforce in its heyday.
• The spread of nonworking men and underemployed youth. The share of prime-age Americans (25 to 54 years old) who are working has been trending down since 2000. Among men, the decline began even earlier: the share of prime-age men who are neither working nor looking for work has doubled since the late 1970s, and has increased as much throughout the recovery as it did during the Great Recession itself. All in all, about one in six prime-age men today are either unemployed or out of the workforce altogether. This is what the economist Tyler Cowen calls “the key statistic” for understanding the spreading rot in the American workforce. Conventional wisdom has long held that under normal economic conditions, men in this age group—at the peak of their abilities and less likely than women to be primary caregivers for children—should almost all be working. Yet fewer and fewer are.
Economists cannot say for certain why men are turning away from work, but one explanation is that technological change has helped eliminate the jobs for which many are best suited. Since 2000, the number of manufacturing jobs has fallen by almost 5 million, or about 30 percent.
Young people just coming onto the job market are also struggling—and by many measures have been for years. Six years into the recovery, the share of recent college grads who are “underemployed” (in jobs that historically haven’t required a degree) is still higher than it was in 2007—or, for that matter, 2000. And the supply of these “non-college jobs” is shifting away from high-paying occupations, such as electrician, toward low-wage service jobs, such as waiter. More people are pursuing higher education, but the real wages of recent college graduates have fallen by 7.7 percent since 2000. In the biggest picture, the job market appears to be requiring more and more preparation for a lower and lower starting wage. The distorting effect of the Great Recession should make us cautious about overinterpreting these trends, but most began before the recession, and they do not seem to speak encouragingly about the future of work.
• The shrewdness of software. One common objection to the idea that technology will permanently displace huge numbers of workers is that new gadgets, like self-checkout kiosks at drugstores, have failed to fully displace their human counterparts, like cashiers. But employers typically take years to embrace new machines at the expense of workers. The robotics revolution began in factories in the 1960s and ’70s, but manufacturing employment kept rising until 1980, and then collapsed during the subsequent recessions. Likewise, “the personal computer existed in the ’80s,” says Henry Siu, an economist at the University of British Columbia, “but you don’t see any effect on office and administrative-support jobs until the 1990s, and then suddenly, in the last recession, it’s huge. So today you’ve got checkout screens and the promise of driverless cars, flying drones, and little warehouse robots. We know that these tasks can be done by machines rather than people. But we may not see the effect until the next recession, or the recession after that.”
Some observers say our humanity is a moat that machines cannot cross. They believe people’s capacity for compassion, deep understanding, and creativity is inimitable. But as Erik Brynjolfsson and Andrew McAfee have argued in their book The Second Machine Age, computers are so dexterous that predicting their application 10 years from now is almost impossible. Who could have guessed in 2005, two years before the iPhone was released, that smartphones would threaten hotel jobs within the decade, by helping homeowners rent out their apartments and houses to strangers on Airbnb? Or that the company behind the most popular search engine would design a self-driving car that could soon threaten driving, the most common occupation among American men?
In 2013, Oxford University researchers forecast that machines might be able to perform half of all U.S. jobs in the next two decades. The projection was audacious, but in at least a few cases, it probably didn’t go far enough. For example, the authors named psychologist as one of the occupations least likely to be “computerisable.” But some research suggests that people are more honest in therapy sessions when they believe they are confessing their troubles to a computer, because a machine can’t pass moral judgment. Google and WebMD already may be answering questions once reserved for one’s therapist. This doesn’t prove that psychologists are going the way of the textile worker. Rather, it shows how easily computers can encroach on areas previously considered “for humans only.”
After 300 years of breathtaking innovation, people aren’t massively unemployed or indentured by machines. But to suggest how this could change, some economists have pointed to the defunct career of the second-most-important species in U.S. economic history: the horse.
For many centuries, people created technologies that made the horse more productive and more valuable—like plows for agriculture and swords for battle. One might have assumed that the continuing advance of complementary technologies would make the animal ever more essential to farming and fighting, historically perhaps the two most consequential human activities. Instead came inventions that made the horse obsolete—the tractor, the car, and the tank. After tractors rolled onto American farms in the early 20th century, the population of horses and mules began to decline steeply, falling nearly 50 percent by the 1930s and 90 percent by the 1950s.
Humans can do much more than trot, carry, and pull. But the skills required in most offices hardly elicit our full range of intelligence. Most jobs are still boring, repetitive, and easily learned. The most-common occupations in the United States are retail salesperson, cashier, food and beverage server, and office clerk. Together, these four jobs employ 15.4 million people—nearly 10 percent of the labor force, or more workers than there are in Texas and Massachusetts combined. Each is highly susceptible to automation, according to the Oxford study.
Technology creates some jobs too, but the creative half of creative destruction is easily overstated. Nine out of 10 workers today are in occupations that existed 100 years ago, and just 5 percent of the jobs generated between 1993 and 2013 came from “high tech” sectors like computing, software, and telecommunications. Our newest industries tend to be the most labor-efficient: they just don’t require many people. It is for precisely this reason that the economic historian Robert Skidelsky, comparing the exponential growth in computing power with the less-than-exponential growth in job complexity, has said, “Sooner or later, we will run out of jobs.”
Is that certain—or certainly imminent? No. The signs so far are murky and suggestive. The most fundamental and wrenching job restructurings and contractions tend to happen during recessions: we’ll know more after the next couple of downturns. But the possibility seems significant enough—and the consequences disruptive enough—that we owe it to ourselves to start thinking about what society could look like without universal work, in an effort to begin nudging it toward the better outcomes and away from the worse ones.
To paraphrase the science-fiction novelist William Gibson, there are, perhaps, fragments of the post-work future distributed throughout the present. I see three overlapping possibilities as formal employment opportunities decline. Some people displaced from the formal workforce will devote their freedom to simple leisure; some will seek to build productive communities outside the workplace; and others will fight, passionately and in many cases fruitlessly, to reclaim their productivity by piecing together jobs in an informal economy. These are futures of consumption, communal creativity, and contingency. In any combination, it is almost certain that the country would have to embrace a radical new role for government.
3. Consumption: The Paradox of Leisure
Work is really three things, says Peter Frase, the author of Four Futures, a forthcoming book about how automation will change America: the means by which the economy produces goods, the means by which people earn income, and an activity that lends meaning or purpose to many people’s lives. “We tend to conflate these things,” he told me, “because today we need to pay people to keep the lights on, so to speak. But in a future of abundance, you wouldn’t, and we ought to think about ways to make it easier and better to not be employed.”
Frase belongs to a small group of writers, academics, and economists—they have been called “post-workists”—who welcome, even root for, the end of labor. American society has “an irrational belief in work for work’s sake,” says Benjamin Hunnicutt, another post-workist and a historian at the University of Iowa, even though most jobs aren’t so uplifting. A 2014 Gallup report of worker satisfaction found that as many as 70 percent of Americans don’t feel engaged by their current job. Hunnicutt told me that if a cashier’s work were a video game—grab an item, find the bar code, scan it, slide the item onward, and repeat—critics of video games might call it mindless. But when it’s a job, politicians praise its intrinsic dignity. “Purpose, meaning, identity, fulfillment, creativity, autonomy—all these things that positive psychology has shown us to be necessary for well-being are absent in the average job,” he said.
The post-workists are certainly right about some important things. Paid labor does not always map to social good. Raising children and caring for the sick is essential work, and these jobs are compensated poorly or not at all. In a post-work society, Hunnicutt said, people might spend more time caring for their families and neighbors; pride could come from our relationships rather than from our careers.
The post-work proponents acknowledge that, even in the best post-work scenarios, pride and jealousy will persevere, because reputation will always be scarce, even in an economy of abundance. But with the right government provisions, they believe, the end of wage labor will allow for a golden age of well-being. Hunnicutt said he thinks colleges could reemerge as cultural centers rather than job-prep institutions. The word school, he pointed out, comes from skholē, the Greek word for “leisure.” “We used to teach people to be free,” he said. “Now we teach them to work.”
Hunnicutt’s vision rests on certain assumptions about taxation and redistribution that might not be congenial to many Americans today. But even leaving that aside for the moment, this vision is problematic: it doesn’t resemble the world as it is currently experienced by most jobless people. By and large, the jobless don’t spend their downtime socializing with friends or taking up new hobbies. Instead, they watch TV or sleep. Time-use surveys show that jobless prime-age people dedicate some of the time once spent working to cleaning and childcare. But men in particular devote most of their free time to leisure, the lion’s share of which is spent watching television, browsing the Internet, and sleeping. Retired seniors watch about 50 hours of television a week, according to Nielsen. That means they spend a majority of their lives either sleeping or sitting on the sofa looking at a flatscreen. The unemployed theoretically have the most time to socialize, and yet studies have shown that they feel the most social isolation; it is surprisingly hard to replace the camaraderie of the water cooler.
Most people want to work, and are miserable when they cannot. The ills of unemployment go well beyond the loss of income; people who lose their job are more likely to suffer from mental and physical ailments. “There is a loss of status, a general malaise and demoralization, which appears somatically or psychologically or both,” says Ralph Catalano, a public-health professor at UC Berkeley. Research has shown that it is harder to recover from a long bout of joblessness than from losing a loved one or suffering a life-altering injury. The very things that help many people recover from other emotional traumas—a routine, an absorbing distraction, a daily purpose—are not readily available to the unemployed.
The transition from labor force to leisure force would likely be particularly hard on Americans, the worker bees of the rich world: Between 1950 and 2012, annual hours worked per worker fell significantly throughout Europe—by about 40 percent in Germany and the Netherlands—but by only 10 percent in the United States. Richer, college-educated Americans are working more than they did 30 years ago, particularly when you count time working and answering e-mail at home.
In 1989, the psychologists Mihaly Csikszentmihalyi and Judith LeFevre conducted a famous study of Chicago workers that found people at work often wished they were somewhere else. But in questionnaires, these same workers reported feeling better and less anxious in the office or at the plant than they did elsewhere. The two psychologists called this “the paradox of work”: many people are happier complaining about jobs than they are luxuriating in too much leisure. Other researchers have used the term guilty couch potato to describe people who use media to relax but often feel worthless when they reflect on their unproductive downtime. Contentment speaks in the present tense, but something more—pride—comes only in reflection on past accomplishments.
The post-workists argue that Americans work so hard because their culture has conditioned them to feel guilty when they are not being productive, and that this guilt will fade as work ceases to be the norm. This might prove true, but it’s an untestable hypothesis. When I asked Hunnicutt what sort of modern community most resembles his ideal of a post-work society, he admitted, “I’m not sure that such a place exists.”
Less passive and more nourishing forms of mass leisure could develop. Arguably, they already are developing. The Internet, social media, and gaming offer entertainments that are as easy to slip into as is watching TV, but all are more purposeful and often less isolating. Video games, despite the derision aimed at them, are vehicles for achievement of a sort. Jeremy Bailenson, a communications professor at Stanford, says that as virtual-reality technology improves, people’s “cyber-existence” will become as rich and social as their “real” life. Games in which users climb “into another person’s skin to embody his or her experiences firsthand” don’t just let people live out vicarious fantasies, he has argued, but also “help you live as somebody else to teach you empathy and pro-social skills.”
But it’s hard to imagine that leisure could ever entirely fill the vacuum of accomplishment left by the demise of labor. Most people do need to achieve things through, yes, work to feel a lasting sense of purpose. To envision a future that offers more than minute-to-minute satisfaction, we have to imagine how millions of people might find meaningful work without formal wages. So, inspired by the predictions of one of America’s most famous labor economists, I took a detour on my way to Youngstown and stopped in Columbus, Ohio.
4. Communal Creativity: The Artisans’ Revenge
Artisans made up the original American middle class. Before industrialization swept through the U.S. economy, many people who didn’t work on farms were silversmiths, blacksmiths, or woodworkers. These artisans were ground up by the machinery of mass production in the 20th century. But Lawrence Katz, a labor economist at Harvard, sees the next wave of automation returning us to an age of craftsmanship and artistry. In particular, he looks forward to the ramifications of 3‑D printing, whereby machines construct complex objects from digital designs.
The factories that arose more than a century ago “could make Model Ts and forks and knives and mugs and glasses in a standardized, cheap way, and that drove the artisans out of business,” Katz told me. “But what if the new tech, like 3-D-printing machines, can do customized things that are almost as cheap? It’s possible that information technology and robots eliminate traditional jobs and make possible a new artisanal economy … an economy geared around self-expression, where people would do artistic things with their time.”
In other words, it would be a future not of consumption but of creativity, as technology returns the tools of the assembly line to individuals, democratizing the means of mass production.
Something like this future is already present in the small but growing number of industrial shops called “makerspaces” that have popped up in the United States and around the world. The Columbus Idea Foundry is the country’s largest such space, a cavernous converted shoe factory stocked with industrial-age machinery. Several hundred members pay a monthly fee to use its arsenal of machines to make gifts and jewelry; weld, finish, and paint; play with plasma cutters and work an angle grinder; or operate a lathe with a machinist.
When I arrived there on a bitterly cold afternoon in February, a chalkboard standing on an easel by the door displayed three arrows, pointing toward bathrooms, pewter casting, and zombies. Near the entrance, three men with black fingertips and grease-stained shirts took turns fixing a 60-year-old metal-turning lathe. Behind them, a resident artist was tutoring an older woman on how to transfer her photographs onto a large canvas, while a couple of guys fed pizza pies into a propane-fired stone oven. Elsewhere, men in protective goggles welded a sign for a local chicken restaurant, while others punched codes into a computer-controlled laser-cutting machine. Beneath the din of drilling and wood-cutting, a Pandora rock station hummed tinnily from a Wi‑Fi-connected Edison phonograph horn. The foundry is not just a gymnasium of tools. It is a social center.
Alex Bandar, who started the foundry after receiving a doctorate in materials science and engineering, has a theory about the rhythms of invention in American history. Over the past century, he told me, the economy has moved from hardware to software, from atoms to bits, and people have spent more time at work in front of screens. But as computers take over more tasks previously considered the province of humans, the pendulum will swing back from bits to atoms, at least when it comes to how people spend their days. Bandar thinks that a digitally preoccupied society will come to appreciate the pure and distinct pleasure of making things you can touch. “I’ve always wanted to usher in a new era of technology where robots do our bidding,” Bandar said. “If you have better batteries, better robotics, more dexterous manipulation, then it’s not a far stretch to say robots do most of the work. So what do we do? Play? Draw? Actually talk to each other again?”
You don’t need any particular fondness for plasma cutters to see the beauty of an economy where tens of millions of people make things they enjoy making—whether physical or digital, in buildings or in online communities—and receive feedback and appreciation for their work. The Internet and the cheap availability of artistic tools have already empowered millions of people to produce culture from their living rooms. People upload more than 400,000 hours of YouTube videos and 350 million new Facebook photos every day. The demise of the formal economy could free many would-be artists, writers, and craftspeople to dedicate their time to creative interests—to live as cultural producers. Such activities offer virtues that many organizational psychologists consider central to satisfaction at work: independence, the chance to develop mastery, and a sense of purpose.
After touring the foundry, I sat at a long table with several members, sharing the pizza that had come out of the communal oven. I asked them what they thought of their organization as a model for a future where automation reached further into the formal economy. A mixed-media artist named Kate Morgan said that most people she knew at the foundry would quit their jobs and use the foundry to start their own business if they could. Others spoke about the fundamental need to witness the outcome of one’s work, which was satisfied more deeply by craftsmanship than by other jobs they’d held.
Late in the conversation, we were joined by Terry Griner, an engineer who had built miniature steam engines in his garage before Bandar invited him to join the foundry. His fingers were covered in soot, and he told me about the pride he had in his ability to fix things. “I’ve been working since I was 16. I’ve done food service, restaurant work, hospital work, and computer programming. I’ve done a lot of different jobs,” said Griner, who is now a divorced father. “But if we had a society that said, ‘We’ll cover your essentials, you can work in the shop,’ I think that would be utopia. That, to me, would be the best of all possible worlds.”
5. Contingency: “You’re on Your Own”
One mile to the east of downtown Youngstown, in a brick building surrounded by several empty lots, is Royal Oaks, an iconic blue-collar dive. At about 5:30 p.m. on a Wednesday, the place was nearly full. The bar glowed yellow and green from the lights mounted along a wall. Old beer signs, trophies, masks, and mannequins cluttered the back corner of the main room, like party leftovers stuffed in an attic. The scene was mostly middle-aged men, some in groups, talking loudly about baseball and smelling vaguely of pot; some drank alone at the bar, sitting quietly or listening to music on headphones. I spoke with several patrons there who work as musicians, artists, or handymen; many did not hold a steady job.
“It is the end of a particular kind of wage work,” said Hannah Woodroofe, a bartender there who, it turns out, is also a graduate student at the University of Chicago. (She’s writing a dissertation on Youngstown as a harbinger of the future of work.) A lot of people in the city make ends meet via “post-wage arrangements,” she said, working for tenancy or under the table, or trading services. Places like Royal Oaks are the new union halls: People go there not only to relax but also to find tradespeople for particular jobs, like auto repair. Others go to exchange fresh vegetables, grown in urban gardens they’ve created amid Youngstown’s vacant lots.
When an entire area, like Youngstown, suffers from high and prolonged unemployment, problems caused by unemployment move beyond the personal sphere; widespread joblessness shatters neighborhoods and leaches away their civic spirit. John Russo, the Youngstown State professor, who is a co-author of a history of the city, Steeltown USA, says the local identity took a savage blow when residents lost the ability to find reliable employment. “I can’t stress this enough: this isn’t just about economics; it’s psychological,” he told me.
Russo sees Youngstown as the leading edge of a larger trend toward the development of what he calls the “precariat”—a working class that swings from task to task in order to make ends meet and suffers a loss of labor rights, bargaining rights, and job security. In Youngstown, many of these workers have by now made their peace with insecurity and poverty by building an identity, and some measure of pride, around contingency. The faith they lost in institutions—the corporations that have abandoned the city, the police who have failed to keep them safe—has not returned. But Russo and Woodroofe both told me they put stock in their own independence. And so a place that once defined itself single-mindedly by the steel its residents made has gradually learned to embrace the valorization of well-rounded resourcefulness.
Karen Schubert, a 54-year-old writer with two master’s degrees, accepted a part-time job as a hostess at a café in Youngstown early this year, after spending months searching for full-time work. Schubert, who has two grown children and an infant grandson, said she’d loved teaching writing and literature at the local university. But many colleges have replaced full-time professors with part-time adjuncts in order to control costs, and she’d found that with the hours she could get, adjunct teaching didn’t pay a living wage, so she’d stopped. “I think I would feel like a personal failure if I didn’t know that so many Americans have their leg caught in the same trap,” she said.
Among Youngstown’s precariat, one can see a third possible future, where millions of people struggle for years to build a sense of purpose in the absence of formal jobs, and where entrepreneurship emerges out of necessity. But while it lacks the comforts of the consumption economy or the cultural richness of Lawrence Katz’s artisanal future, it is more complex than an outright dystopia. “There are young people working part-time in the new economy who feel independent, whose work and personal relationships are contingent, and say they like it like this—to have short hours so they have time to focus on their passions,” Russo said.
Schubert’s wages at the café are not enough to live on, and in her spare time, she sells books of her poetry at readings and organizes gatherings of the literary-arts community in Youngstown, where other writers (many of them also underemployed) share their prose. The evaporation of work has deepened the local arts and music scene, several residents told me, because people who are inclined toward the arts have so much time to spend with one another. “We’re a devastatingly poor and hemorrhaging population, but the people who live here are fearless and creative and phenomenal,” Schubert said.
Whether or not one has artistic ambitions as Schubert does, it is arguably growing easier to find short-term gigs or spot employment. Paradoxically, technology is the reason. A constellation of Internet-enabled companies matches available workers with quick jobs, most prominently including Uber (for drivers), Seamless (for meal deliverers), Homejoy (for house cleaners), and TaskRabbit (for just about anyone else). And online markets like Craigslist and eBay have likewise made it easier for people to take on small independent projects, such as furniture refurbishing. Although the on-demand economy is not yet a major part of the employment picture, the number of “temporary-help services” workers has grown by 50 percent since 2010, according to the Bureau of Labor Statistics.
Some of these services, too, could be usurped, eventually, by machines. But on-demand apps also spread the work around by carving up jobs, like driving a taxi, into hundreds of little tasks, like a single drive, which allows more people to compete for smaller pieces of work. These new arrangements are already challenging the legal definitions of employer and employee, and there are many reasons to be ambivalent about them. But if the future involves a declining number of full-time jobs, as in Youngstown, then splitting some of the remaining work up among many part-time workers, instead of a few full-timers, wouldn’t necessarily be a bad development. We shouldn’t be too quick to excoriate companies that let people combine their work, art, and leisure in whatever ways they choose.
Today the norm is to think about employment and unemployment as a black-and-white binary, rather than two points at opposite ends of a wide spectrum of working arrangements. As late as the mid-19th century, though, the modern concept of “unemployment” didn’t exist in the United States. Most people lived on farms, and while paid work came and went, home industry—canning, sewing, carpentry—was a constant. Even in the worst economic panics, people typically found productive things to do. The despondency and helplessness of unemployment were discovered, to the bafflement and dismay of cultural critics, only after factory work became dominant and cities swelled.
The 21st century, if it presents fewer full-time jobs in the sectors that can be automated, could in this respect come to resemble the mid-19th century: an economy marked by episodic work across a range of activities, the loss of any one of which would not make somebody suddenly idle. Many bristle that contingent gigs offer a devil’s bargain—a bit of additional autonomy in exchange for a larger loss of security. But some might thrive in a market where versatility and hustle are rewarded—where there are, as in Youngstown, few jobs to have, yet many things to do.
6. Government: The Visible Hand
In the 1950s, Henry Ford II, the CEO of Ford, and Walter Reuther, the head of the United Auto Workers union, were touring a new engine plant in Cleveland. Ford gestured to a fleet of machines and said, “Walter, how are you going to get these robots to pay union dues?” The union boss famously replied: “Henry, how are you going to get them to buy your cars?”
As Martin Ford (no relation) writes in his new book, Rise of the Robots, this story might be apocryphal, but its message is instructive. We’re pretty good at noticing the immediate effects of technology’s substituting for workers, such as fewer people on the factory floor. What’s harder is anticipating the second-order effects of this transformation, such as what happens to the consumer economy when you take away the consumers.
Technological progress on the scale we’re imagining would usher in social and cultural changes that are almost impossible to fully envision. Consider just how fundamentally work has shaped America’s geography. Today’s coastal cities are a jumble of office buildings and residential space. Both are expensive and tightly constrained. But the decline of work would make many office buildings unnecessary. What might that mean for the vibrancy of urban areas? Would office space yield seamlessly to apartments, allowing more people to live more affordably in city centers and leaving the cities themselves just as lively? Or would we see vacant shells and spreading blight? Would big cities make sense at all if their role as highly sophisticated labor ecosystems were diminished? As the 40-hour workweek faded, the idea of a lengthy twice-daily commute would almost certainly strike future generations as an antiquated and baffling waste of time. But would those generations prefer to live on streets full of high-rises, or in smaller towns?
Today, many working parents worry that they spend too many hours at the office. As full-time work declined, rearing children could become less overwhelming. And because job opportunities historically have spurred migration in the United States, we might see less of it; the diaspora of extended families could give way to more closely knitted clans. But if men and women lost their purpose and dignity as work went away, those families would nonetheless be troubled.
The decline of the labor force would make our politics more contentious. Deciding how to tax profits and distribute income could become the most significant economic-policy debate in American history. In The Wealth of Nations, Adam Smith used the term invisible hand to refer to the order and social benefits that arise, surprisingly, from individuals’ selfish actions. But to preserve the consumer economy and the social fabric, governments might have to embrace what Haruhiko Kuroda, the governor of the Bank of Japan, has called the visible hand of economic intervention. What follows is an early sketch of how it all might work.
In the near term, local governments might do well to create more and more-ambitious community centers or other public spaces where residents can meet, learn skills, bond around sports or crafts, and socialize. Two of the most common side effects of unemployment are loneliness, on the individual level, and the hollowing-out of community pride. A national policy that directed money toward centers in distressed areas might remedy the maladies of idleness, and form the beginnings of a long-term experiment on how to reengage people in their neighborhoods in the absence of full employment.
We could also make it easier for people to start their own, small-scale (and even part-time) businesses. New-business formation has declined in the past few decades in all 50 states. One way to nurture fledgling ideas would be to build out a network of business incubators. Here Youngstown offers an unexpected model: its business incubator has been recognized internationally, and its success has brought new hope to West Federal Street, the city’s main drag.
Near the beginning of any broad decline in job availability, the United States might take a lesson from Germany on job-sharing. The German government gives firms incentives to cut all their workers’ hours rather than lay off some of them during hard times. So a company with 50 workers that might otherwise lay off 10 people instead reduces everyone’s hours by 20 percent. Such a policy would help workers at established firms keep their attachment to the labor force despite the declining amount of overall labor.
Spreading work in this way has its limits. Some jobs can’t be easily shared, and in any case, sharing jobs wouldn’t stop labor’s pie from shrinking: it would only apportion the slices differently. Eventually, Washington would have to somehow spread wealth, too.
One way of doing that would be to more heavily tax the growing share of income going to the owners of capital, and use the money to cut checks to all adults. This idea—called a “universal basic income”—has received bipartisan support in the past. Many liberals currently support it, and in the 1960s, Richard Nixon and the conservative economist Milton Friedman each proposed a version of the idea. That history notwithstanding, the politics of universal income in a world without universal work would be daunting. The rich could say, with some accuracy, that their hard work was subsidizing the idleness of millions of “takers.” What’s more, although a universal income might replace lost wages, it would do little to preserve the social benefits of work.
The most direct solution to the latter problem would be for the government to pay people to do something, rather than nothing. Although this smacks of old European socialism, or Depression-era “make-work,” it might do the most to preserve virtues such as responsibility, agency, and industriousness. In the 1930s, the Works Progress Administration did more than rebuild the nation’s infrastructure. It hired 40,000 artists and other cultural workers to produce music and theater, murals and paintings, state and regional travel guides, and surveys of state records. It’s not impossible to imagine something like the WPA—or an effort even more capacious—for a post-work future.
What might that look like? Several national projects might justify direct hiring, such as caring for a rising population of elderly people. But if the balance of work continues to shift toward the small-bore and episodic, the simplest way to help everybody stay busy might be government sponsorship of a national online marketplace of work (or, alternatively, a series of local ones, sponsored by local governments). Individuals could browse for large long-term projects, like cleaning up after a natural disaster, or small short-term ones: an hour of tutoring, an evening of entertainment, an art commission. The requests could come from local governments or community associations or nonprofit groups; from rich families seeking nannies or tutors; or from other individuals given some number of credits to “spend” on the site each year. To ensure a baseline level of attachment to the workforce, the government could pay adults a flat rate in return for some minimum level of activity on the site, but people could always earn more by taking on more gigs.
Although a digital WPA might strike some people as a strange anachronism, it would be similar to a federalized version of Mechanical Turk, the popular Amazon sister site where individuals and companies post projects of varying complexity, while so-called Turks on the other end browse tasks and collect money for the ones they complete. Mechanical Turk was designed to list tasks that cannot be performed by a computer. (The name is an allusion to an 18th-century Austrian hoax, in which a famous automaton that seemed to play masterful chess concealed a human player who chose the moves and moved the pieces.)
A government marketplace might likewise specialize in those tasks that required empathy, humanity, or a personal touch. By connecting millions of people in one central hub, it might even inspire what the technology writer Robin Sloan has called “a Cambrian explosion of mega-scale creative and intellectual pursuits, a generation of Wikipedia-scale projects that can ask their users for even deeper commitments.”
There’s a case to be made for using the tools of government to provide other incentives as well, to help people avoid the typical traps of joblessness and build rich lives and vibrant communities. After all, the members of the Columbus Idea Foundry probably weren’t born with an innate love of lathe operation or laser-cutting. Mastering these skills requires discipline; discipline requires an education; and an education, for many people, involves the expectation that hours of often frustrating practice will eventually prove rewarding. In a post-work society, the financial rewards of education and training won’t be as obvious. This is a singular challenge of imagining a flourishing post-work society: How will people discover their talents, or the rewards that come from expertise, if they don’t see much incentive to develop either?
Modest payments to young people for attending and completing college, skills-training programs, or community-center workshops might eventually be worth considering. This seems radical, but the aim would be conservative—to preserve the status quo of an educated and engaged society. Whatever their career opportunities, young people will still grow up to be citizens, neighbors, and even, episodically, workers. Nudges toward education and training might be particularly beneficial to men, who are more likely to withdraw into their living rooms when they become unemployed.
7. Jobs and Callings
Decades from now, perhaps the 20th century will strike future historians as an aberration, with its religious devotion to overwork in a time of prosperity, its attenuations of family in service to job opportunity, its conflation of income with self-worth. The post-work society I’ve described holds a warped mirror up to today’s economy, but in many ways it reflects the forgotten norms of the mid-19th century—the artisan middle class, the primacy of local communities, and the unfamiliarity with widespread joblessness.
The three potential futures of consumption, communal creativity, and contingency are not separate paths branching out from the present. They’re likely to intertwine and even influence one another. Entertainment will surely become more immersive and exert a gravitational pull on people without much to do. But if that’s all that happens, society will have failed. The foundry in Columbus shows how the “third places” in people’s lives (communities separate from their homes and offices) could become central to growing up, learning new skills, discovering passions. And with or without such places, many people will need to embrace the resourcefulness learned over time by cities like Youngstown, which, even if they seem like museum exhibits of an old economy, might foretell the future for many more cities in the next 25 years.
On my last day in Youngstown, I met with Howard Jesko, a 60-year-old Youngstown State graduate student, at a burger joint along the main street. A few months after Black Friday in 1977, as a senior at Ohio State University, Jesko received a phone call from his father, a specialty-hose manufacturer near Youngstown. “Don’t bother coming back here for a job,” his dad said. “There aren’t going to be any left.” Years later, Jesko returned to Youngstown to work, but he recently quit his job selling products like waterproofing systems to construction companies; his customers had been devastated by the Great Recession and weren’t buying much anymore. Around the same time, a left-knee replacement due to degenerative arthritis resulted in a 10-day hospital stay, which gave him time to think about the future. Jesko decided to go back to school to become a professor. “My true calling,” he told me, “has always been to teach.”
One theory of work holds that people tend to see themselves in jobs, careers, or callings. Individuals who say their work is “just a job” emphasize that they are working for money rather than aligning themselves with any higher purpose. Those with pure careerist ambitions are focused not only on income but also on the status that comes with promotions and the growing renown of their peers. But one pursues a calling not only for pay or status, but also for the intrinsic fulfillment of the work itself.
When I think about the role that work plays in people’s self-esteem—particularly in America—the prospect of a no-work future seems hopeless. There is no universal basic income that can prevent the civic ruin of a country built on a handful of workers permanently subsidizing the idleness of tens of millions of people. But a future of less work still holds a glint of hope, because the necessity of salaried jobs now prevents so many from seeking immersive activities that they enjoy.
After my conversation with Jesko, I walked back to my car to drive out of Youngstown. I thought about Jesko’s life as it might have been had Youngstown’s steel mills never given way to a steel museum—had the city continued to provide stable, predictable careers to its residents. If Jesko had taken a job in the steel industry, he might be preparing for retirement today. Instead, that industry collapsed and then, years later, another recession struck. The outcome of this cumulative grief is that Howard Jesko is not retiring at 60. He’s getting his master’s degree to become a teacher. It took the loss of so many jobs to force him to pursue the work he always wanted to do.
The Uber Economy Needs New Job Classification
Consider your average Uber driver. He clearly works for Uber. He is indispensable to the operations of the company. It sets his pay and tells him how to do his job. It fires him if he falls below certain strict standards. But he also clearly works for himself. He has no boss. He chooses his own hours. He accepts and rejects work at will.
According to American employment law, though, our driver must be one or the other, a 1099 contractor or a W2 employee. And the gulf between the two in terms of mandated government protections and benefits is as wide as the line between them is blurry. As such, thousands of on-demand-economy workers and scads of lawyers are at war in court to determine which camp our average driver should fall into.
The presumption in that legal war is that one side will win — our driver, and thousands and thousands of other workers like him, will either remain contractors or become employees. But more and more politicians and labor experts are arguing that neither the employee nor the contractor designation really fits. It might be time for a new standard that splits the difference between the two — a “dependent contractor,” as some labor experts call it — that would be better for businesses, consumers, and all those workers themselves.
In the meantime, of course, the employee-or-not-employee war drags on. Last month, the California Labor Commission ruled that one former Uber driver named Barbara Ann Berwick was indeed an employee, awarding her $4,152 in reimbursement for business expenses. (Uber has appealed the ruling.) A number of other legal initiatives, including a major class-action suit, are also ongoing.
Perhaps partly in response to those legal challenges, some businesses are preemptively making their contractors employees. Take Shyp, a kind of Uber-for-mail that will pick up your items, package them, and send them for you. “As we gear up for national expansion, it’s a good time to look at our current model and see how we can improve it,” Shyp’s chief executive officer, Kevin Gibbon, wrote in an open letter this month. “After careful consideration, we’ve decided to transition Shyp couriers, the individuals who complete pickups at our customers’ homes and offices, to W2 employees. This move is an investment in a longer-term relationship with our couriers, which we believe will ultimately create the best experience for our customers.”
But there are significant downsides to both the employee and the contractor designation — for the worker, for the customer, and for the business mediating between the two. "The jury in this case will be handed a square peg and asked to choose between two round holes," Vince Chhabria, a California judge, wrote in a case concerning Uber's competitor Lyft earlier this year. "The test the California courts have developed over the 20th century for classifying workers isn't very helpful in addressing this 21st-century problem."
Take the contractor designation first. The downsides for the workers are obvious. Per-gig pay that sometimes works out to less than the minimum wage. No benefits. Responsibility for costs incurred while at work. The need to pay payroll taxes out of pocket. And there are downsides for consumers and businesses, too. The contractor designation prevents start-ups like Uber and TaskRabbit from giving their workers too much supervision or direction, making their services uneven and their workforces occasionally unreliable.
Then consider the employee designation. There, the downsides for businesses are obvious. Companies generally have to provide employees with minimum-wage protections, overtime, workers’ compensation, health insurance, and unemployment insurance. Were contractors for businesses like Uber, Lyft, Homejoy, Handy, TaskRabbit, and dozens of others to become employees, that would mean far more limited scalability and much more costly overhead for those businesses. There would be downsides for consumers and workers as well: higher prices for the former, far less workplace flexibility for the latter.
But there are ways to correct some of the worst problems associated with the contractor model while avoiding the worst problems associated with the employee model. State, local, and federal governments could create a system that allowed for flexible, low-commitment work that still provided at least $7.25 an hour and basic benefits.
The government could, for instance, levy a surcharge on businesses using "dependent contractors," a new worker designation. Those funds could pay for unemployment insurance, workers’ compensation, and other minimal benefits. At the same time, Uncle Sam could require on-demand businesses to reimburse workers for basic expenses and pay out at least the local minimum wage for any hours spent on the job.
That would require the businesses to better track their workers' hours. It would probably raise prices for consumers, too. But it would ameliorate one of the worst problems with the 1099 economy, namely pay that works out to less than the minimum wage, without fundamentally quashing the business model. And that low pay is a big, big problem: One recent survey found that while "greater schedule flexibility" is the top reason for workers to join on-demand firms, "insufficient pay" is the most common reason for them to quit. Businesses might end up with more ability to control their workers and standardize their services, too.
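To make the arithmetic of that idea concrete, here is a minimal Python sketch of how such a minimum-wage top-up might be computed. The function name, the sample figures, and the choice to net expenses out of earnings are all illustrative assumptions, not details drawn from any actual proposal or statute.

```python
# A hedged sketch of the minimum-wage top-up idea described above: if a
# dependent contractor's per-gig earnings, net of basic expenses, work out to
# less than the local minimum wage for hours spent on the job, the platform
# would owe the difference. All names and figures are illustrative only.

LOCAL_MINIMUM_WAGE = 7.25  # dollars per hour, the federal floor cited above

def weekly_top_up(gig_earnings, reimbursable_expenses, hours_worked):
    """Return (effective hourly rate, top-up owed) for one week of gigs."""
    net_earnings = gig_earnings - reimbursable_expenses
    effective_rate = net_earnings / hours_worked
    shortfall = max(0.0, LOCAL_MINIMUM_WAGE * hours_worked - net_earnings)
    return effective_rate, shortfall

# Example: $220 in fares, $40 of gas and tolls, 30 hours logged on the app.
rate, owed = weekly_top_up(220.0, 40.0, 30.0)
print(f"effective rate ${rate:.2f}/hr, top-up owed ${owed:.2f}")
# -> effective rate $6.00/hr, top-up owed $37.50
```

Whatever the exact formula, the mechanism depends on the hour-tracking mentioned above: without a reliable count of time on the job, there is no base against which to measure the shortfall.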
Another, more holistic solution would be for the government to create a system of prorated fundamental benefits, paid for by deductions from all workers’ paychecks, no matter how big or small. Those benefits could include sick leave, maternity and paternity leave, retirement savings, unemployment insurance, and workers’ compensation. Nick Hanauer and David Rolf have a fleshed-out version of such a plan in Democracy, and argue that a "shared security system" would help guarantee a pathway to the middle class without crushing these nascent businesses or precluding the rise of flexible, part-time work.
With 1099 businesses flourishing alongside the legal threats to their business models, a number of politicians and labor experts have started pushing for Washington to think about developing new rules or a third major worker designation. Sarah Leberstein, a senior staff attorney with the National Employment Law Project in New York, said that she would rather see the on-demand economy reclassify its contractors as employees. But were that not to happen, the state and federal governments should get creative, she said. "Given how rapidly work is changing, and given that there is so much dependent on classification of a worker as an employee, we should find other ways to guarantee workers baseline standards and social safety net protections outside that designation," she said. And some politicians are starting to listen to the calls for change: Senator Mark Warner of Virginia, for instance, has urged Congress to address the issue.
There is one giant hitch: Any of these changes would require legislatures to act. Right now, at least on a federal level, that seems as remote a possibility as Uber suddenly deciding to make all its workers full-timers with health insurance and 401(k) matching. But the black-and-white decisions before the courts — and the current system of designations afforded workers and businesses — might not be the right ones. States and local governments should start functioning as the laboratories of democracy that they are, trying out third designations and negotiating terms with 1099 businesses. New kinds of work need new kinds of laws. Think of it as Uber for worker designations.
McJobs and UberJobs
The French enjoy nothing more than resisting the forces of Anglo-Saxon capitalism. On June 25th French taxi drivers paralysed Paris in protest against Uber, a ride-sharing service, and attacked a few Uber cars for good measure. On June 29th police arrested two of Uber’s managers in France for “illicit activity”. But from Uber’s point of view, all this is but a minor inconvenience: Paris is just one of 300 cities it serves. Far more worrying is what is happening in the company’s own backyard in San Francisco.
On June 3rd the California Labour Commissioner ruled that Uber owes a former driver, Barbara Ann Berwick, $4,152, mostly in expenses, on the ground that she was an employee rather than, as Uber claims, an independent contractor. Uber is appealing against the ruling. But it is a harbinger of things to come: San Francisco courts are also hearing two more cases that hinge on the same question. If the rulings go against the company, its labour costs may rise significantly, as it is forced to pay drivers’ social security and other benefits as well as their expenses. Its valuation, which is currently above $40 billion, may suffer.
Uber is not the only big American company whose business model may be upended by employment law. Last year the National Labour Relations Board’s general counsel said he would treat McDonald’s as a joint employer, together with franchisees, of staff in the chain’s franchised restaurants. This opinion will soon be tested in a case brought by ten employees who claim that they were sacked by a franchisee in Virginia on racial grounds.
Both Uber and McDonald’s are up against powerful interest groups that are capable of both fighting prolonged legal battles and playing on the public’s heartstrings. Uber has to confront state governments which stand to gain sizeable tax revenues if on-demand workers are classified as employees. McDonald’s has to wrestle with the Service Employees International Union, which has been trying for years to unionise fast-food restaurants.
The legal situation seems to be murky in both cases. A pro-Uber lawyer could argue that the firm is essentially little more than a marketmaker that provides a forum for buyers and sellers of rides to come together. Its drivers own their vehicles and choose their working hours. They are free to work for rivals, such as Lyft. An anti-Uber lawyer could retort that the company exercises considerable control over its workers. It screens them for criminal records, and weeds out those who get poor reviews from passengers. Likewise, a pro-McDonald’s lawyer could argue that it is the franchisees who hire and fire workers, and who run the business from day to day. An anti-McDonald’s lawyer could point to the detailed rules that the company lays down on how workers in franchised restaurants are trained and how they should serve customers.
The fundamental problem is that in America, as in many other rich countries, employment law has failed to keep up with the changing realities of modern work. Its labour rules are rooted in a landmark piece of legislation, the Fair Labour Standards Act, passed in 1938 during Franklin Roosevelt’s presidency. In those days a far larger proportion of American men worked in manufacturing; most women did not work; and the difference between employees, who worked full-time for a company, and contractors, who were typically tradesmen such as plumbers, seemed much clearer. The post-war growth of franchising, and the expansion of companies like Amway and Avon that used freelance door-to-door sellers, began to blur the distinction. Now, the “on-demand” economy is all but obliterating it, by letting people sell their labour and rent out their assets—from cars to apartments—in a series of short-term assignments arranged by smartphone app.
That the law is so dated suggests that judges should exercise as light a touch as possible. The franchise model has thrived because it allows local entrepreneurs to join forces with a global goliath to scale up their businesses quickly while operating them according to local labour-market conditions. Forcing McDonald’s to become a co-employer would expose those franchisees to co-ordinated union action and make it much more difficult for them to respond to local circumstances.
The benefits of flexibility
The case for a light touch is even more compelling when it comes to Uber and its peers. The most important thing to remember about the on-demand economy is that it has been a dramatic success not just for consumers but also for workers seeking flexibility. That is why Uber’s number of drivers has been doubling every six months for the past couple of years. Some on-demand companies will choose to classify their workers as employees: for instance, Instacart, a grocery-delivery service, has invited some of its freelancers to become part-time employees, in the belief that this will make it easier to train and supervise them. But other firms should be free to decide otherwise. Uber’s drivers, and their peers at on-demand firms, would get expenses and other benefits if they were declared employees—but they would have less flexibility over working hours and, more important, the increased cost of employing them might mean fewer jobs.
America needs to update its employment law to take into account the fact that FDR is no longer president. This will involve some careful balancing. Policymakers need to recognise that people want to work more flexible hours and that technology has made it possible to create spot markets in surplus labour and idle assets. But they must also recognise the state’s need to raise taxes to pay for public services and benefits. Given the dysfunctional nature of America’s politics, such updating will take a good deal of time and will probably involve many false starts. Until then judges should leave open as many options as possible. The last thing the country needs is for over-strict interpretations of outdated laws to kill exciting new businesses and sabotage jobs. Do that and you end up like France.
Male Care Workers
More male care workers are needed to look after the personal needs of the growing number of men who are living longer, according to Care England, the organisation that represents independent care providers.
Its chief executive called for the government to do more to encourage men into adult social care, particularly in jobs directly helping the elderly.
Martin Green said that because more men lived longer, there was a growing need for more males to be employed to help with their personal care. “We have an ageing population and a lot of people who receive care into old age now are men,” Professor Green said. “The majority of carers are women. When it comes to personal care in particular, some men prefer this to be done by a male rather than female.”
A report published this year found that 82 per cent of the estimated 1.45 million people working in the adult care sector were female.
The Skills for Care report found that among those providing direct care to people, the percentage of women was slightly higher at 83 per cent.
The report said that the large gender bias in favour of women was a matter of concern, along with high staff turnover, low rates of pay and a reliance on non-British workers. It said that about a quarter of those employed in the sector were on zero-hours contracts and it estimated that a senior care worker’s annual pay was £15,700, while a care worker received £14,000.
Professor Green told the Today programme on Radio 4 that “entrenched societal perceptions” stopped men from looking at care work. “The problem is people always see caring roles as being female roles. We need to make society understand that everyone has the potential to be carer,” he said. He called on the government to make sure that every school understood that care career paths were for men as well as women and to portray more men in government information on care roles.
A spokesman for the Department of Health said: “We would encourage more people, including men, to join the social care workforce. There are a wide range of opportunities for both men and women and we have published guidance on how care companies can attract more men to the profession.”
Social Skills Needed
For all the jobs that machines can now do — whether performing surgery, driving cars or serving food — they still lack one distinctly human trait. They have no social skills.
Yet skills like cooperation, empathy and flexibility have become increasingly vital in modern-day work. Occupations that require strong social skills have grown much more than others since 1980, according to new research. And the only occupations that have shown consistent wage growth since 2000 require both cognitive and social skills.
The findings help explain a mystery that has been puzzling economists: the slowdown in the growth even of high-skill jobs. The jobs hit hardest seem to be those that don’t require social skills, throughout the wage spectrum.
“As I’m speaking with you, I need to think about what’s going on in your head — ‘Is she bored? Am I giving her too much information?’ — and I have to adjust my behavior all the time,” said David Deming, associate professor of education and economics at Harvard University and author of a new study. “That’s a really hard thing to program, so it’s growing as a share of jobs.”
Some economists and technologists see this trend as cause for optimism: Even as technology eliminates some jobs, it generally creates others. Yet to prepare students for the change in the way we work, the skills that schools teach may need to change. Social skills are rarely emphasized in traditional education.
“Machines are automating a whole bunch of these things, so having the softer skills, knowing the human touch and how to complement technology, is critical, and our education system is not set up for that,” said Michael Horn, co-founder of the Clayton Christensen Institute, where he studies education.
Preschool classrooms, Mr. Deming said, look a lot like the modern work world. Children move from art projects to science experiments to the playground in small groups, and their most important skills are sharing and negotiating with others. But that soon ends, replaced by lecture-style teaching of hard skills, with less peer interaction.
Work, meanwhile, has become more like preschool.
Jobs that require both socializing and thinking, especially mathematically, have fared best in employment and pay, Mr. Deming found. They include those held by doctors and engineers. The jobs that require social skills but not math skills have also grown; lawyers and child-care workers are an example. The jobs that have been rapidly disappearing are those that require neither social nor math skills, like manual labor.
Math and Science Are Not Enough
The jobs that have grown most consistently in the last two decades have been those that require high math skills and high social skills.
Despite the emphasis on teaching computer science, learning math and science is not enough. Jobs that involve those skills but not social skills, like those held by bookkeepers, bank tellers and certain types of engineers, have performed worst in employment growth in recent years for all but the highest-paying jobs. In the tech industry, for instance, it’s the jobs that combine technical and interpersonal skills that are booming, like being a computer scientist working on a group project.
“If it’s just technical skill, there’s a reasonable chance it can be automated, and if it’s just being empathetic or flexible, there’s an infinite supply of people, so a job won’t be well paid,” said David Autor, an economist at the Massachusetts Institute of Technology. “It’s the interaction of both that is virtuous.”
Mr. Deming’s conclusions are supported by previous research, including that of Mr. Autor. Mr. Autor has written that traditional middle-skill jobs, like clerical or factory work, have been hollowed out by technology. The new middle-skill jobs combine technical and interpersonal expertise, like physical therapy or general contracting.
James Heckman, a Nobel Prize-winning economist, did groundbreaking work concluding that noncognitive skills like character, dependability and perseverance are as important as cognitive achievement. They can be taught, he said, yet American schools don’t necessarily do so.
These conclusions have been put into practice outside academia. Google researchers, for example, studied the company’s employees to determine what made the best manager. They assumed it would be technical expertise. Instead, it was people who made time for one-on-one meetings, helped employees work through problems and took an interest in their lives.
Mr. Deming’s study quantifies these types of skills. Using data about the tasks and abilities that occupations require from a Department of Labor survey called O*NET, he measured the economic return of social skills, after controlling for factors like cognitive skill, years of education and occupation.
Mr. Deming explains it in terms of the economic notion of comparative advantage.
Say two workers are publishing a research paper. If one excels at data analysis and the other at writing, they would be more productive and create a better product if they collaborated. But if they lack interpersonal skills, the cost of working together might be too high to make the partnership productive.
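Deming's point can be made concrete with a small numeric sketch. The skill scores and coordination costs below are invented purely for illustration and are not taken from his study; the sketch only shows that pairing the stronger analyst with the stronger writer pays off when the cost of working together stays low.

```python
# A minimal numeric sketch of the comparative-advantage framing described above.
# All figures are made up for illustration; they are not from Deming's study.

def paper_quality(analysis_skill, writing_skill):
    """Quality of a finished paper given who does the analysis and the writing."""
    return analysis_skill + writing_skill

# Worker A is strong at data analysis, Worker B at writing (0-10 scale).
a_analysis, a_writing = 9, 4
b_analysis, b_writing = 4, 9

# Working alone, each produces a whole paper with only their own skills.
solo_a = paper_quality(a_analysis, a_writing)   # 13
solo_b = paper_quality(b_analysis, b_writing)   # 13

def collaboration_value(coordination_cost):
    # Collaborating, each does the task they are best at, but coordination
    # (meetings, negotiation, resolving disagreements) costs something.
    return paper_quality(a_analysis, b_writing) - coordination_cost

for cost in (1, 3, 6):
    joint = collaboration_value(cost)
    verdict = "worth it" if joint > max(solo_a, solo_b) else "not worth it"
    print(f"coordination cost {cost}: joint quality {joint} -> collaboration {verdict}")
```

When the coordination cost is low, the collaborative paper beats anything either worker could produce alone; when interpersonal friction pushes that cost high enough, the partnership stops being productive, which is exactly the trade-off social skills relieve.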
Women seem to have taken particular advantage of the demand for social skills. The decline in routine jobs has hit women harder than men. Yet women have more successfully transitioned into collaborative jobs like managers, doctors and professors.
That might be because, starting in infancy, females traditionally excel at things like social perceptiveness, emotional intelligence and working with others, Mr. Deming and other researchers say.
These conclusions do not mean traditional education has become unnecessary, researchers say — in fact, traditional school subjects are probably more necessary than ever to compete in the labor market. But some schools are experimenting with how to add social skills to the curriculum.
At many business and medical schools, students are assigned to small groups to complete their work. So-called flipped classrooms assign video lectures before class and reserve class for discussion or group work. The idea is that traditional lectures involve too little interaction and can be done just as well online.
The Minerva Schools in San Francisco, a start-up college, takes that approach. The idea is to transmit facts outside of class, said its dean, Stephen Kosslyn, and use class to teach effective communication and interaction. “It involves creativity, judgment, all that stuff that is hard for a machine to be programmed to do,” he said.
Another way to teach these skills is through group activities like sports, band or drama, said Deborah Slaner Larkin, chief executive of the Women’s Sports Foundation. Students learn important workplace skills, she said: trusting one another, bringing out one another’s strengths and being coachable.
Someday, nearly all work could be automated, leaving humans to revel in never-ending leisure time. But in the meantime, this research argues, students should be prepared for the actual world of work. Maybe high schools and colleges should evaluate students the way preschools do — whether they “play well with others.”
Knocker Uppers
As the clocks go forward for the start of British Summer Time, many of us will rue the loss of an hour in bed. But how did people get to work on time before alarm clocks?
Until the 1970s in some areas, many workers were woken by the sound of a tap at their bedroom window. On the street outside, walking to their next customer's house, would be a figure wielding a long stick.
The "knocker upper" was a common sight in Britain, particularly in the northern mill towns, where people worked shifts, or in London where dockers kept unusual hours, ruled as they were by the inconstant tides.
"They used to come down the street with their big, long poles," remembers Paul Stafford, a 59-year-old artist who was raised above a shop in Oldham. "I would sleep with my brother in the back room upstairs and my parents slept in the front. "[The knocker upper] wouldn't hang around either, just three or four taps and then he'd be off. We never heard it in the back, though it used to wake my father in the front." While the standard implement was a long fishing rod-like stick, other methods were employed, such as soft hammers, rattles and even pea shooters.
But who woke the knocker uppers? A tongue-twister from the time tackled this conundrum:
We had a knocker-up, and our knocker-up had a knocker-up
And our knocker-up's knocker-up didn't knock our knocker up
So our knocker-up didn't knock us up
'Cos he's not up.
"The knocker uppers were night owls and slept during the day instead, waking at about four in the afternoon," says author Richard Jones.
One problem knocker uppers faced was making sure workers did not get woken up for free. "When knocking up began to be a regular trade, we used to rap or ring at the doors of our customers," Mrs Waters, a knocker upper in the north of England, told an intrigued reporter from Canada's Huron Expositor newspaper in 1878. "The public complained of being disturbed... by our loud rapping or ringing; and the knocker-up soon found out that while he knocked up one who paid him, he knocked up several on each side who did not," she continued. The solution they hit on was modifying a long stick, with which to tap on the bedroom windows of their clients, loudly enough to rouse those intended but softly enough not to disturb the rest.
The trade spread rapidly across the country, particularly in areas where poorly paid workers were required to work shifts but could not afford their own watches.
Knocking up was so commonplace Dickens made passing reference to it in his novel Great Expectations. The orphan Pip takes up the tale in chapter six: "As I was sleepy before we were far away from the prison-ship, Joe took me on his back again and carried me home.
"He must have had a tiresome journey of it, for Mr Wopsle, being knocked up, was in such a very bad temper that if the Church had been thrown open, he would probably have excommunicated the whole expedition, beginning with Joe and myself."
Robert Paul, the man who discovered the body of the Ripper's first victim, Mary Nichols, described how the policeman he informed saw no reason to let it detain him from his knocking up duties, Mr Jones says. "I saw [a policeman] in Church-row, just at the top of Buck's-row, who was going round calling people up," Mr Paul told the inquest. "And I told him what I had seen, and I asked him to come, but he did not say whether he should come or not. He continued calling the people up, which I thought was a great shame, after I had told him the woman was dead."
Knocker uppers were not only confined to industrial cities. Caroline Jane Cousins - affectionately known as Granny Cousins - was born in Dorset in 1841 and became Poole's last knocker upper, waking brewery workers each morning until retiring in 1918. Another well known knocker upper was Mrs Bowers, of Greenfield Terrace in Sacriston, County Durham. She was a familiar sight out on the streets with her dog Jack. She woke each day at 1am and left her warm bed to wake the miners on the early shift. She began knocking up during World War One and continued for many years, according to Beamish, the Living Museum of the North.
The trade also ran in families. Mary Smith, who used a pea shooter, was a well-known knocker upper in east London and her daughter, also called Mary, followed in her mother's footsteps. The latter is widely believed to have been one of the capital's last knocker uppers, according to Mr Jones.
With the spread of electricity and affordable alarm clocks, however, knocking up had died out in most places by the 1940s and 1950s.
Yet it still continued in some pockets of industrial England until the early 1970s, immortalised in songs performed by the likes of folk singer Joe Stead.
"Through cobbled streets, cold and damp, the knocker-upper man is creeping.
"Tap, tapping on each window pane, to keep the world from sleeping..."
Handwritten Letters To Impress
In this age of email, there is still nothing like the pleasure of receiving a handwritten letter.
One young entrepreneur is capitalising on fondness for old-fashioned communication by charging companies to write handwritten correspondence to customers on their behalf. Charlotte Pearce, 24, originally from Worcestershire, launched Inkpact after meeting executives who felt that email was not a suitable medium for some types of business correspondence. “They said it was difficult to send more than ten letters at a time. So I thought I could help by getting together a team of people to write the letters,” she said.
By the time Ms Pearce graduated from the University of Southampton she had set up her own company. She wrote the first batch of 50 letters herself, with help from her cousin, before realising that her own handwriting “wasn’t great”.
She now employs more than 100 writers throughout the UK, who range from students to single mothers who want to get paid to write from home. The company has proved as popular with its writers as with the businesses seeking out Inkpact’s services. Each day between five and ten people contact her asking to be a writer, she claimed.
All letters are written with a fountain pen and writers are given advice on how to create different strokes and learn the different scripts and characters.
When Inkpact first started two years ago no technology was used, but now writers log in to an online system to see what needs to be written, to whom and by when. Letters include small note cards and A4 pieces of paper. Each letter takes less than 15 minutes to write.
Ms Pearce’s clients range from small self-employed businessmen to vast multinationals. “Last year we sent tens of thousands of letters and this year it will be hundreds of thousands,” she said.
She attributes her success to creating something “personal and physical”, adding: “Everyone is so tech focused. We sit in a tech sphere like everyone else, but the end result is hand-written. We take online offline.”
Prices range from £4.50 to £9.50 per card and each comes with a wax seal.
China Manufacturing Long Term
After three decades of dramatic growth, China’s manufacturing engine has largely stalled. With rising salaries, labor unrest, environmental devastation and intellectual property theft, China is no longer an attractive place for Western companies to move their manufacturing. Technology has also eliminated the labor cost advantage, so companies are looking for ways to bring their high-value manufacturing back to the United States and Europe.
China is well aware that it has lost its advantage, and its leaders want to use the same technologies that have leveled the playing field to give the country a new strategic edge. In May 2015, China launched a 10-year plan, called Made in China 2025, to modernize its factories with advanced manufacturing technologies, such as robotics, 3-D printing and the Industrial Internet. And then, in July 2015, it launched another national plan, called Internet Plus, “to integrate mobile Internet, cloud computing, big data and the Internet of Things with modern manufacturing.”
China has made this a national priority and is making massive investments. Just one province, Guangdong, committed to spending $150 billion to equip its factories with industrial robots and create two centers dedicated to advanced automation. But no matter how much money it spends, China simply can’t win with next-generation manufacturing. It built its dominance in manufacturing by offering massive subsidies, cheap labor and lax regulations. With technologies such as robotics and 3-D printing, it has no edge.
After all, American robots work as hard as Chinese robots. And they also don’t complain or join labor unions. They all consume the same electricity and do exactly what they are told. It doesn’t make economic sense for American industry to ship raw materials and electronics components across the globe to have Chinese robots assemble them into finished goods that are then shipped back. That manufacturing could be done locally for almost the same cost. And with shipping eliminated, what once took weeks could be done in days and we could reduce pollution at the same time.
Most Chinese robots are also not made in China. An analysis by Dieter Ernst of the East-West Center showed that 75 percent of all robots used in China are purchased from foreign firms (some with assembly lines in China), and China remains heavily dependent on the import of core components from Japan. By Ernst’s count, there are 107 Chinese companies producing robots but many have low quality and safety and design standards. He anticipates that fewer than half of them will survive.
The bigger problem for China is its workforce. Even though China is graduating far more than 1 million engineers every year, the quality of their education is so poor that they are not employable in technical professions. This was documented by my research teams at Duke and Harvard. Western companies already have great difficulty in recruiting technical talent in China. This will get worse because advanced manufacturing requires management and communication skills and the ability to operate complex information-based factories. Ernst predicts that the increasing scarcity of specialized skills may be the Achilles’ heel of China’s push into advanced manufacturing and services.
Even if China solves its skills problem, builds its own high-quality industrial robots, and develops innovative industrial processes, it won’t be able to maintain its advantage for long. We could simply import the Chinese robots and copy its industrial innovations. I doubt that even Donald Trump’s immigration walls would keep the foreign robots out.
There is little doubt in my mind that over the next five to 10 years, manufacturing will return, en masse, to the United States. It will once again become a local industry. Yes, it won’t employ the numbers of workers that old-line manufacturing did, but advanced manufacturing will create hundreds of thousands of high-skilled, high-paying jobs. With its massive investments, China is only accelerating the demise of its export-oriented manufacturing industry.
Are robots really going to steal our jobs?
“The reality is that we are facing a jobless future: one in which most of the work done by humans will be done by machines. Robots will drive our cars, manufacture our goods, and do our chores, but there won’t be much work for human beings.” That’s the dire warning of software entrepreneur and Carnegie Mellon engineer Vivek Wadhwa.
Former Microsoft CEO Bill Gates agrees: Technology “will reduce demand for jobs, particularly at the lower end of skill set,” he has predicted. Gates has also proposed taxing robots to support the victims of technological unemployment. “In the past,” software entrepreneur Martin Ford declared last year, “machines have always been tools that have been used by people.” But now, he fears, they’re “becoming a replacement or a substitute for more and more workers.” A much-cited 2013 study from the Oxford Martin Programme on Technology and Employment struck an even more dire note, estimating that 47 percent of today’s American jobs are at risk of being automated within the next two decades.
The conventional wisdom among technologists is well-established: Robots are going to eat our jobs. But economists tend to have a different perspective.
Over the past two centuries, they point out, automation has brought us lots more jobs—and higher living standards too. “Is this time different?” the Massachusetts Institute of Technology economist David Autor said in a lecture last year. “Of course this time is different; every time is different. On numerous occasions in the last 200 years scholars and activists have raised the alarm that we are running out of work and making ourselves obsolete.... These predictions strike me as arrogant.”
“We are neither headed toward a rise of the machine world nor a utopia where no one works anymore,” said Michael Jones, an economist at the University of Cincinnati, last year. “Humans will still be necessary in the economy of the future, even if we can’t predict what we will be doing.” When the Boston University economist James Bessen analyzed computerization and employment trends in the U.S. since 1980, his study concluded that “computer use is associated with a small increase in employment on average, not major job losses.”
Who is right, the terrified technologists or the totally chill economists?
THIS TIME IS ALWAYS DIFFERENT
In 1589, Queen Elizabeth I refused to grant a patent to William Lee for his invention of the stocking frame knitting machine, which sped up the production of wool hosiery. “Thou aimest high, Master Lee,” she declared. “Consider thou what the invention could do to my poor subjects. It would assuredly bring to them ruin by depriving them of employment, thus making them beggars.” In the early 19th century, English textile workers calling themselves Luddites famously sought to protect their livelihoods by smashing industrial weaving machines.
The economist John Maynard Keynes warned in 1930 that the “means of economising the use of labour [is] outrunning the pace at which we can find new uses for labour,” resulting in the “new disease” of “technological unemployment.” In 1961, Time warned: “Today’s new industries have comparatively few jobs for the unskilled or semiskilled, just the class of workers whose jobs are being eliminated by automation.” A 1989 study by the International Metalworkers Federation forecasted that within 30 years, as little as 2 percent of the world’s current labor force “will be needed to produce all the goods necessary for total demand.” That prediction has just two years left to come true.
This year the business consultancy McKinsey Global Institute issued a report that analyzed the potential impact of automation on individual work activities rather than entire occupations. The McKinsey researchers concluded that only 5 percent of occupations are fully automatable using currently available technologies. On the other hand, the report also estimated that “about half of all the activities people are paid to do in the world’s workforce could potentially be automated by adapting currently demonstrated technologies”—principally the physical work that takes place in highly structured and predictable environments along with routine data collection and processing.
In March, the consultancy PricewaterhouseCoopers concluded 38 percent of jobs in the U.S. are at high risk of automation by the early 2030s. Specifically, jobs in transportation and storage, retail and wholesale trade, food service and accommodation, administrative and support services, insurance and finance, and manufacturing are particularly vulnerable.
And that 2013 study from Oxford’s Martin Programme on Technology and Employment? Economist Bessen points out that of the 37 occupations it identified as fully automatable—including accountants, auditors, bank loan officers, messengers, and couriers—none has been completely automated since the study was published. Bessen further notes that of the 271 jobs listed in the 1950 Census, only one has truly disappeared for reasons that can largely be ascribed to automation: the elevator operator. In 1900, 50 percent of the population over age 10 was gainfully employed. (Child labor was not illegal in most states back then, and many families needed the extra income.) In 1950, it was 59 percent of those over age 16. Now the civilian labor participation rate stands at 63 percent.
Of course, the jobs that people do today—thanks largely to high productivity made possible by technological progress—are vastly different than those done at the turn of the 20th century.
ARE WE WORKING LESS?
In a 2015 essay titled “Why Are There Still So Many Jobs?,” MIT economist Autor points out that most new workplace technologies are designed to save labor. “Whether the technology is tractors, assembly lines, or spreadsheets, the first-order goal is to substitute mechanical power for human musculature, machine-consistency for human handiwork, and digital calculation for slow and error-prone ‘wetware,’” he writes. Routinized physical and cognitive activities—spot welding car chassis on an assembly line or processing insurance claim paperwork at a desk—are the easiest and first to be automated.
If the technologists’ fears are coming true, you’d expect to see a drop in hours worked at middle-skill, middle-wage jobs— the ones politicians often refer to as “good jobs.” And indeed, in 2013, Autor and David Dorn of the Center for Monetary and Financial Studies in Madrid found a significant decrease in hours worked in construction, mining, and farm work between 1980 and 2005; the researchers concluded that this was because the routine manual and cognitive activities required by many of those middle-class occupations were increasingly being performed by ever cheaper and more capable machines and computers. They also found a 30 percent increase in hours spent working at low-skill jobs that require assisting or caring for others, from home health aides to beauticians to janitors.
But this year a better-designed study by two more economists—Jennifer Hunt of Rutgers and Ryan Nunn of Brookings—challenged that conclusion. Instead of focusing on the average wages of each occupation, Hunt and Nunn sorted hourly workers into categories by their real wages, reasoning that the averages in certain jobs could mask important trends.
Hunt and Nunn found that men experienced downward wage mobility in the 1980s, due largely to deunionization and the decline in manufacturing. Beginning around 1990, the percentage of both men and women in their lower-wage category declined, while rising in the higher-wage group.
After adjusting for business cycle fluctuations, they found that there was a small increase in the percentage of workers in their best-compensated category (people earning more than $25.18 an hour) between 1979 and 2015, with very little change in the other groups—certainly nothing that looked like the radical polarization Autor and others fear.
So far, robots don’t seem to be grabbing human jobs at an especially high rate. Take the much-touted finding by MIT economist Daron Acemoglu and Boston University economist Pascual Restrepo in a working paper released in March. Since 1990, they say, each additional industrial robot in the U.S. results in 5.6 American workers losing their jobs. Furthermore, the addition of one more robot per thousand employees cuts average wages by 0.5 percent. The pair defined a robot as a programmable industrial machine that operates in three dimensions—think of spot welding and door handling robots on an automobile assembly line.
In total, Acemoglu and Restrepo report that the number of jobs lost due to robots since 1990 is somewhere between 360,000 and 670,000. By contrast, last year some 62.5 million Americans were hired in new jobs, while 60.1 million either quit or were laid off from old ones, according to the Bureau of Labor Statistics. The impact of robots, in other words, is quite small, relatively speaking. Moreover, when the researchers included a measure of the change in computer usage at work, they found a positive effect, suggesting that computers tend to increase the demand for labor.
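A rough back-of-the-envelope calculation, using only the figures quoted in this passage, shows why that impact looks small next to ordinary labor-market churn; the sketch below simply restates the arithmetic.

```python
# Back-of-the-envelope comparison of the robot job-loss estimates quoted above
# with ordinary labor-market churn. All figures come from the passage itself:
# Acemoglu & Restrepo's cumulative range since 1990, and BLS hires for one year.

robot_job_losses = (360_000, 670_000)   # cumulative estimate, 1990 onward
hires_last_year = 62_500_000            # Americans hired into new jobs in one year
separations_last_year = 60_100_000      # quits and layoffs in the same year

for losses in robot_job_losses:
    share_of_hires = losses / hires_last_year
    print(f"{losses:,} jobs lost to robots is about {share_of_hires:.1%} of one year's hires")

# Even the high-end cumulative estimate over roughly 25 years amounts to
# about one percent of the hiring that happens in a single year.
```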
In 2015, economists Georg Graetz of Uppsala University and Guy Michaels of the London School of Economics analyzed the effects of industrial robots on employment in 17 different countries between 1993 and 2007. In contrast to the Acemoglu and Restrepo study, “We find a negative effect of robots on low-skilled workers’ employment,” says Michaels in an interview, “but no significant effect on overall employment.” Their study also found that the increases in the number of robots boosted annual economic growth by 0.37 percent.
WHERE DID THE JOBS GO? LOOK AROUND!
IN A 2011 television interview, President Barack Obama worried that “a lot of businesses have learned to become much more efficient with a lot fewer workers.” To illustrate his point, Obama noted, “You see it when you go to a bank and you use an ATM, you don’t go to a bank teller.” But the number of bank tellers working in the U.S. has not gone down. Since 1990, their ranks have increased from around 400,000 to 500,000, even as the number of ATMs rose from 100,000 to 425,000. In his 2016 study, Bessen explains that the ATMs “allowed banks to operate branch offices at lower cost; this prompted them to open many more branches, offsetting the erstwhile loss in teller jobs.” Similarly, the deployment of computerized document search and analysis technologies hasn’t prevented the number of paralegals from rising from around 85,000 in 1990 to 280,000 today. Bar code scanning is now ubiquitous in retail stores and groceries, yet the number of cashiers has increased to 3.2 million today, up from just over 2 million in 1990, outpacing U.S. population growth over the same period.
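A quick calculation, sketched below, compares those occupation counts with U.S. population growth over the same stretch. The occupation figures are the ones cited above; the population figures, roughly 250 million in 1990 and 325 million today, are approximations added here for illustration.

```python
# Compare growth in the cited occupations with U.S. population growth since 1990.
# Occupation counts come from the article; population figures are rough
# approximations added for this illustration.

occupations = {
    "bank tellers": (400_000, 500_000),
    "paralegals":   (85_000, 280_000),
    "cashiers":     (2_000_000, 3_200_000),
}
population_1990, population_now = 250_000_000, 325_000_000

pop_growth = population_now / population_1990 - 1
print(f"U.S. population growth since 1990: {pop_growth:.0%}")
for name, (then, now) in occupations.items():
    growth = now / then - 1
    verdict = "outpaces" if growth > pop_growth else "trails"
    print(f"{name}: {growth:.0%} growth ({verdict} population growth)")
```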
This illustrates why most economists are not particularly worried about the notion of widespread technological unemployment. When businesses automate to boost productivity, they can cut their prices, thus increasing the demand for their products, which in turn requires more workers. Furthermore, the lower prices allow consumers to take the money they save and spend it on other goods or services, and this increased demand creates more jobs in those other industries. New products and services create new markets and new demands, and the result is more new jobs.
You can think of this another way: The average American worker today would only have to work 17 weeks per year to earn the income his counterpart brought in 100 years ago, according to Autor’s calculations—the equivalent of roughly 13 hours of work per week at a standard 40-hour workweek. Most people prefer to work more, of course, so they can afford to enjoy the profusion of new products and services that modern technology makes available, including refrigerators, air conditioners, next-day delivery, smartphones, air travel, video games, restaurant meals, antibiotics, year-round access to fresh fruits and vegetables, the internet, and so forth.
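For readers who want to check the arithmetic, the conversion is straightforward; the short sketch below assumes a standard 40-hour workweek, an assumption layered on top of Autor's 17-weeks-per-year figure.

```python
# Convert "17 weeks of full-time work per year" into a weekly average.
# The 40-hour workweek is an assumption added for this illustration.

weeks_needed = 17
hours_per_week_full_time = 40

hours_per_year = weeks_needed * hours_per_week_full_time  # 680 hours
averaged_per_week = hours_per_year / 52                   # ~13 hours
print(f"{hours_per_year} hours per year, or about {averaged_per_week:.0f} hours per week")
```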
But if technologically fueled productivity improvements boost job growth, why are U.S. manufacturing jobs in decline? In a new study published in April, Bessen finds that as markets mature, comparatively small changes in the price of a product do not call forth a compensating increase in consumer demand. Thus, further productivity gains bring reduced employment in relatively mature industries such as textiles, steel, and automobile manufacturing. Over the past 20 years, U.S. manufacturing output increased by 40 percent while the number of Americans working in manufacturing dropped from 17.3 million in 1997 to 12.3 million now. On the other hand, Bessen projects that the ongoing automation and computerization of the nonmanufacturing sector will increase demand for all sorts of new services. In fact, he forecasts that in service industries, “faster technical change will...create faster employment growth.”
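A stripped-down way to see the logic of both points, rising employment where demand is price-sensitive and falling employment in mature markets, is a toy constant-elasticity demand calculation. The sketch below illustrates the general mechanism only; it is not Bessen's actual model, and the 10 percent productivity gain and the elasticity values are arbitrary numbers chosen for the example.

```python
# Toy illustration: when productivity rises, each unit of output needs less labor,
# but lower prices draw out more demand. Whether employment rises or falls depends
# on how elastic demand is. This is a simplified sketch, not Bessen's model.

def employment_change(productivity_gain: float, price_elasticity: float) -> float:
    """Proportional change in employment, assuming prices fall one-for-one with
    unit labor cost and demand follows a constant-elasticity curve Q ~ P**(-e)."""
    price_ratio = 1 / (1 + productivity_gain)           # prices fall with unit cost
    output_ratio = price_ratio ** (-price_elasticity)   # demand response to cheaper goods
    labor_ratio = output_ratio / (1 + productivity_gain) # labor needed = output / productivity
    return labor_ratio - 1

for elasticity in (0.5, 1.0, 2.0):  # inelastic (mature market), unit-elastic, elastic (young market)
    change = employment_change(productivity_gain=0.10, price_elasticity=elasticity)
    print(f"elasticity {elasticity}: employment changes by {change:+.1%}")
```

With inelastic demand the same productivity gain shrinks employment, and with elastic demand it expands it, which is the pattern Bessen describes for mature versus still-growing industries.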
Since the advent of the smartphone just 10 years ago, for example, an “app economy” has emerged that “now supports an astounding 1.66 million jobs in the United States,” Progressive Policy Institute economist Michael Mandel reports. According to the Entertainment Software Association, more than 220,000 jobs now depend on the game software industry. The IBISWorld consultancy estimates that 227,000 people work in web design, while the Biotechnology Innovation Organization says that U.S. bioscience companies employ 1.66 million people. Robert Cohen, a senior fellow at the Economic Strategy Institute, projects that business spending on cloud services will generate nearly $3 trillion more in gross domestic product and 8 million new jobs from 2015 to 2025.
In 2014, Siemens USA CEO Eric Spiegel claimed in a Washington Post op-ed that 50 percent of the jobs in America today didn’t exist 25 years ago—and that 80 percent of the jobs students will fill in the future don’t exist today. Imagine, for instance, the novel occupations that might come into being if the so-called internet of things and virtual/augmented reality technologies develop as expected.
In a report this year for the Technology CEO Council, Mandel and analyst Bret Swanson strike a similar note, arguing that the “productivity drought is almost over.” Over the past 15 years, they point out, productivity growth in digital industries has averaged 2.7 percent per year, whereas productivity in physical industries grew at just 0.7 percent annually.
According to the authors, the digital industries currently account for 25 percent of private-sector employment. “Never mind the evidence of the past 200 years; the evidence that we have of the past 15 years shows that more technology yields more jobs and better jobs,” says Swanson.
Mandel and Swanson argue that the information age has barely begun, and that the “increased use of mobile technologies, cloud services, artificial intelligence, big data, inexpensive and ubiquitous sensors, computer vision, virtual reality, robotics, 3D additive manufacturing, and a new generation of 5G wireless are on the verge of transforming the traditional physical industries.” They project that applying these information technologies to I.T.-laggard physical industries will boost U.S. economic growth from its current annual 2 percent rate to 2.7 percent over the next 15 years, adding $2.7 trillion in annual U.S. economic output by 2031, and cumulatively raising American wages by $8.6 trillion. This would increase U.S. GDP per capita from $52,000 to $77,000 by 2031.
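The per-capita figure is easy to sanity-check: compounding at 2.7 percent for roughly 15 years carries $52,000 to about $77,500. The short sketch below does the arithmetic, treating per-capita output as growing at the headline rate, a simplifying assumption added here.

```python
# Sanity check on the Mandel-Swanson projection: does 2.7 percent annual growth
# carry GDP per capita from $52,000 to roughly $77,000 by 2031?
# Treating per-capita output as growing at the headline rate is a simplification.

gdp_per_capita_now = 52_000
growth_rate = 0.027
years = 2031 - 2016  # roughly the 15-year horizon in the report

projected = gdp_per_capita_now * (1 + growth_rate) ** years
print(f"After {years} years at {growth_rate:.1%}: ${projected:,.0f}")
```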
THE UNKNOWN FUTURE
“ELECTRIFICATION TRANSFORMED BUSINESSES, the overall economy, social institutions, and individual lives to an astonishing degree—and it did so in ways that were overwhelmingly positive,” Martin Ford writes in his book Rise of the Robots. But why doesn’t Ford mourn all the jobs that electrification destroyed? What about the ice men? The launderers? The household help replaced by vacuum cleaners and dishwashers? The firewood providers? The candle makers?
To ask is to answer. Electricity may have killed a lot of jobs, but on balance it created many more. Developments in information technology will do the same.
Imagine a time-traveling economist from our day meeting with Thomas Edison, Henry Ford, and John D. Rockefeller at the turn of the 20th century. She informs these titans that in 2017, only 14 percent of American workers will be employed in agriculture, mining, construction, and manufacturing, down from around 70 percent in 1900. Then the economist asks the trio, “What do you think the other 86 percent of workers are going to do?”
They wouldn’t know the answer. And as we look ahead now to the end of the 21st century, we can’t predict what jobs workers will be doing then either. But that’s no reason to assume those jobs won’t exist.
“I can’t tell you what people are going to do for work 100 years from now,” Autor said last year, “but the future doesn’t hinge on my imagination.” Martin Ford and other technologists can see the jobs that might be destroyed by information technology; their lack of imagination blinds them to how people will use that technology to conjure millions of occupations now undreamt of.