Google is everywhere you look, even when you're not online.
But it is not only ubiquitous, it is also, apparently, free. It is tempting, therefore, to see it as being like air: necessary, and both morally and commercially neutral. Surely, then, there is nothing wrong with our leaders mingling with this company?
There is. Google is no more neutral than a party manifesto. It has an agenda based on one highly specialised interpretation of how the internet must work and evolve. Essentially, the company wants to rewrite copyright and intellectual property laws in the cause of making all the information in the world freely available. Google uses its near-monopoly in internet search to sell advertising, and the more material there is to search, the more advertising can be sold.
This is a utopian vision and, like all such visions, it involves destruction. If your music, your newspapers, your films, your television, your books are all free, then, in time, they will no longer be produced because there will be no economic justification. Already, Robert Levine points out, the music business is trapped in a downward spiral that will end in oblivion; newspapers, especially in America, are going the same way; and films and television are being pirated on a huge scale.
"It's amazing," Levine writes, "how easy the internet makes it to destroy a business without creating another one in its place." Governments, when they are not lunching Google, are struggling to come up with an answer.
But the technology moves too quickly for politicians - Vince Cable, only recently, had to abandon plans to force the blocking of filesharing websites as cumbersome and unworkable. When politicians gaze into the mire of the internet, everything becomes too difficult.
The irony in all this is that the internet is largely parasitic on the media it is so ruthlessly destroying. Blogs would cease to exist without the mainstream media; there would be no music or movies to pirate if there were no record companies or studios. On the internet wasteland of the utopians, only a few feeble amateur shoots would grow.

Or maybe not. Like Google, Levine's book has an agenda, but it is the opposite of Schmidt's. Levine, an American technology and music journalist, is on the side of the decently rewarded creators against the utopians. Free Ride is flatly written and hard going, but it is important, not least because it concludes by offering some possible solutions to the problem.

And the problem, at heart, is replicability. Anything digital can be cheaply replicated and, therefore, pirated. As fast as copyright owners think up ways of preventing this, the pirates think of ways of getting round their codes, paywalls or lawyers. I suspect the word "piracy" is part of the problem; it makes young geek hackers think they are Johnny Depp rather than common thieves.

One of the characters in this book is the elusive Christian Schmid, the founder of RapidShare. This, ingeniously, is a "locker service" - it poses as an innocent back-up system by enabling users to upload their material but, in fact, it is used to share copyright material. As a result, it generates 1% of global internet traffic - as much as Facebook. The law fights with RapidShare, but so far with no conclusive victory. Schmid, meanwhile, gets very rich indeed thanks to the creative efforts of others.

Levine relentlessly ploughs through this and many other twists and turns of the war between the pirates, the utopians and the copyright holders. It becomes clear after a while that there is no immediate or even likely solution. Plainly, a radical internet land grab that destroyed net neutrality - the principle that the net treats all information equally, whether it comes from you and me or from a giant corporation - and parcelled the territory out to commercial interests such as television or radio would be undesirable. The utopians are half right when they celebrate the freedom and universality of the internet.

But, equally plainly, the claims of utopians such as the science-fiction novelist Cory Doctorow, who gave a speech entitled How Copyright Threatens Democracy, miss a big point - copyright built and guarantees democracy. Furthermore, the utopians should be aware that, though they see themselves as freedom fighters, they are serving the interests of some of the biggest, most powerful companies in the world.

The solutions offered towards the end of this book are complicated and varied rather than plausible. Typically, they would involve a small, regular charge - say, for access to all of a company's films and television shows - that would create a pile of cash to be distributed to creators on the basis of the popularity of their works. This wouldn't stop piracy but, if the service was well designed, it would make it less attractive.

Where this will end is anybody's guess. In the immortal words of the great screenwriter William Goldman, "nobody knows anything". Neither the blasted heath of the utopians nor the walled gardens of those who would grab the territory of the internet seems an attractive prospect. But two things are clear: chancellors of the exchequer should not co-write articles with blatant commercial players (should one really have to say this?), and economic advice from Google comes with wires attached.
Ray Kurzweil
Androids and angels
Futurist and inventor Ray Kurzweil believes humans will soon be able to live forever with the help of computers. Barmy or brilliant?
IT USED to be that you would go into a dark tent where an old woman would gaze into a ball and tell you about the dark handsome stranger in your future.
In the 21st century, it seems, the tent is a rather eccentrically decorated office in the suburbs of Boston; the old woman, a professorial chap in a suit; and the handsome stranger, a network of hyper-intelligent computers that will take over the world.
It is hard not to think of Arnold Schwarzenegger while talking to futurist Ray Kurzweil. This is not because he looks like Arnie (he is pretty much the physical opposite), but because he keeps saying things that sound like the plot of the movie Terminator. Nanobots, self-aware computers and human cyborgs litter his conversations.
The media love Kurzweil because his predictions are so bold and expressed with utter certainty. He says that in the first half of this century there will be a "Singularity", a period of incredibly rapid technological change, triggered by the moment that computers become smart enough to improve themselves without human intervention.
He sets 2029 as the year computers will overtake humans - which is when things start getting really weird. Disease will be cured, death defeated, the universe will become the playground of immortal super-beings.
"Every aspect of human life will be irreversibly transformed," says Kurzweil, who is due in Melbourne this month. Computers will enter our bodies and brains. The pace of change will be incomprehensible unless we enhance ourselves with artificial intelligence boosts.
"What Ray does consistently," says Neil Gershenfeld from MIT university, "is take a whole bunch of steps everyone agrees on, and take principles for extrapolating that everyone agrees on - so they lead to things that nobody agrees on - things that seem crazy".
But Kurzweil has serious, respectable chops in the prediction business. While these days he makes a fortune on the lecture circuit, much of his early success came as an inventor, foreseeing needs that technology would soon be able to fulfil, then coming up with the gadgets to do it.
He started early. The way he tells it, a hospital in Spanish Harlem hired him to do statistics for a government program that provided preschool help for underprivileged families. He used electromechanical calculators with levers and gears to work out simple sums. But he was frustrated by their clunkiness.
"I discovered they actually had a computer, one of the 12 in New York City, in the building. I taught myself to program and wrote a program to do these [statistics]. They were surprised."
No wonder. He was 12 years old.
Kurzweil grew up in Queens, New York, born a few years after his parents had fled Hitler's Europe. They were a talented family. His grandmother was the first woman in Europe to get a PhD in chemistry, his father a successful concert pianist and conductor, his mother an artist.
"The family religion was the power of human ideas," Kurzweil says. Their idea of a sacred text was a da Vinci manuscript. He had an uncle who was a researcher at Bell Laboratories and introduced his nephew to early computers.
Kurzweil loved gadgets from an early age - aged eight he built his own mechanical puppet theatre and later he would build his own computers from surplus electronics. He was inspired by computers, "the idea that you could recreate the world, the thinking process."
At 17 he appeared on the TV game show I've Got a Secret and played a piece of piano music. One of the contestants guessed correctly that Kurzweil had programmed a computer that composed the piece itself, after analysing the patterns in 'real' music.
His path was set. Inventions he has had a hand in include the earliest flatbed scanner and a text-to-voice reader for the blind (used and endorsed by Stevie Wonder).
Gradually Kurzweil turned from inventor to futurist as his ambition and imagination flew further from practical products.
In the prediction business Kurzweil, though by no means infallible, seems to have had more hits than most. The one he is proudest of was predicting, a decade before the fact and to within a year, the date when the first chess computer would beat a human.
Then he latched on to the idea of the Singularity. It was not originally his - the science fiction writer Vernor Vinge was one of the first to propose it in detail, and the "father of the computer", John von Neumann, also pondered it.
Kurzweil developed it into a "Law of Accelerating Returns", predicting exponential growth in power in anything technological. Think of how amazingly far we've come, he says, and there's no reason things won't just get more and more amazing, more and more quickly, because each new technology is built on the shoulders of the one before, until that dizzying "Singularity" where we lose track altogether.
In bestselling books and $30,000-a-pop lectures, Kurzweil describes Singularity as "the destiny of the human-machine civilisation."
Critics point to strong signs that the increase in computer power is starting to slow down. The cost of R&D has gone up with every computer generation, so a faltering economy could cause the law to fail. Others say there are various physical limits on computer circuits that will halt progress, possibly in only 10 years' time. Still others wonder what is going to power this vast new computer network.
Kurzweil isn't worried. He says new discoveries will inevitably cause things to pick up. In 40 years, he points out, we have gone from computers that filled a room and cost millions of dollars to massively faster ones that fit in a pocket and cost $100. In the next 40 years we will go from a unit that fits in your pocket to one the size of a blood cell - all running on solar power.
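The arithmetic behind such claims is easy to check. Here is a minimal sketch of the compounding at work in the "Law of Accelerating Returns" - the 18-month doubling period is an illustrative assumption for this sketch, not a figure taken from Kurzweil:

```python
# Toy illustration of exponential "accelerating returns".
# Assumption: capability doubles every 18 months (illustrative only).

def fold_improvement(years: float, doubling_period_years: float = 1.5) -> float:
    """Multiplicative improvement after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for years in (10, 20, 40):
    print(f"{years} years -> about {fold_improvement(years):,.0f}x")
```

Forty years of such doublings multiplies out to roughly a hundred-million-fold improvement, which is the scale of the room-sized-mainframe-to-pocket comparison above - and the reason small disagreements about the doubling period swing the projected dates so widely.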
The future is closer than we think. We are already living in a society that depends on smart machines, he says.
"There's no question we don't have our hand on the plug - we could not turn off all the computers,"' he says. "Our civilisation's infrastructure would grind to a halt, all communication would stop, transportation would stop, financial markets would totally freeze up."
"We are increasingly dependent on machine intelligence as machines get more intelligent. But this is not an invasion from Mars. We have always been a human-machine civilisation, we create tools to extend our reach."
But there is a difference between machines getting better at their jobs, and their gaining artificial intelligence and self-awareness.
Kurzweil concedes that consciousness is still a mystery, in terms of how it comes about or how we can be sure it exists in a machine (or indeed a person). But he sidesteps this, saying that once a machine acts like a self-aware being, it might as well be, and people will perceive it as one - again by that magic date of 2029.
"[It's the] 'acts like a duck' test," he says. "If it really seems conscious and it's convincing, I will believe it is conscious. That's really the only test that matters."
When Watson the computer won the US game show Jeopardy! earlier this year, Kurzweil saw a glimpse of the future he had predicted. "That's viscerally impressive," he says.
Not everyone buys into Kurzweil's view of the future. In fact, he has many critics. One scientist called into a TV program he was on to complain that the producers had given time to this "highly sophisticated crackpot", a purveyor of "pseudo-religious predictions that have no basis in objective reality."
When Kurzweil dips into a subject such as biology or artificial intelligence, hard-core researchers tend to argue he has glossed over complexities that will make progress much slower. For example, computers may be getting ever faster at recalling data, but they still can't tell the difference between a dog and a cat.
One of Kurzweil's critics is the scientist and blogger PZ Myers ("pseudo-scientific dingbat" was a particularly stinging conclusion to a recent post).
"The heart of the Kurzweil method is to simply pick a date far enough in the future that we cannot predict what technological advances will occur, and also far enough forward that he isn't likely to be confronted with his failure by people who remember what he said, and all is good," Meyers wrote a year ago, during a spicy exchange over when we would "reverse-engineer" the brain to understand its inner workings.
Myers also picks at Kurzweil's fondness for the word "exponential", saying it can't be used as a magic wand to wave at any currently intractable problem.
But Kurzweil also has plenty of supporters who respect his intellect and his predictions - although some debate his optimistic timeline and utopian assumptions.
Well-respected Australian philosopher David Chalmers, from New York University, believes the idea of the Singularity, while it may not actually come to pass, is credible enough that we should take it seriously and consider its implications.
Kurzweil accuses his critics of lack of imagination - or of resisting the idea of technological immortality, because of a superstitious belief that we have to stay on good terms with death.
Kurzweil's palpable fear of death ("It's such a profoundly sad, lonely feeling that I really can't bear it") infuses his work to such a degree that you wonder if his predictions are wish-fulfillment.
The death of his father at 58 from heart disease and an early diagnosis of Type 2 diabetes have made Kurzweil extremely health-conscious. He has his blood taken and checked every month, and downs 200 pills of vitamins, minerals and other concoctions every day (he calls it "reprogramming my biochemistry"). He describes death as a "looming disaster", a "looming tragedy."
"People believe that this disaster is a good thing, that the goal of life is to become comfortable with death," he says. "People have come to rely on these philosophies as a way of coping with this fundamental anxiety that permeates human life."
Kurzweil intends never to die.
"That's the goal," he says. Healthy living will keep him alive for at least another 15 years ("I'm really not ageing very much," he says), at which point he believes he will be able to directly program his genes to recover youth. Then 'nanobots' will live inside the body and do a better job than flesh and blood in keeping us going. And finally man will merge with machine and 'upload' to a cyber-life.
"That's my plan," he says. "It's not guaranteed to work, but I am optimistic it will. Some people define humanity based on our limitations. I define humanity as that species which seeks to overcome its limitations, and has done so, time and time again."
Online Computer Courses
Parlez-vous Python? What about Rails or JavaScript? Foreign languages tend to wax and wane in popularity, but the language du jour is computer code.
The market for night classes and online instruction in programming and Web construction, as well as for iPhone apps that teach, is booming. Those jumping on board say they are preparing for a future in which the Internet is the foundation for entertainment, education and nearly everything else. Knowing how the digital pieces fit together, they say, will be crucial to ensuring that they are not left in the dark ages.
Some in this crowd foster secret hopes of becoming the next Mark Zuckerberg. But most have no plans to quit their day jobs - it is just that those jobs now require being able to customize a blog's design or care for and feed an online database.
"Inasmuch as you need to know how to read English, you need to have some understanding of the code that builds the Web," said Sarah Henry, 39, an investment manager who lives in Wayne, Pa. "It is fundamental to the way the world is organized and the way people think about things these days." Ms. Henry took several classes, including some in HTML, the basic language of the Web, and WordPress, a blogging service, through Girl Develop It, an organization based in New York that she had heard about online that offers lessons aimed at women in a number of cities. She paid around $200 and saw it as an investment in her future.
"I'm not going to sit here and say that I can crank out a site today, but I can look at basic code and understand it," Ms. Henry said. "I understand how these languages function within the Internet." Some see money to be made in the programming trend.
After two free computer science classes offered online by Stanford attracted more than 100,000 students, one of the instructors started a company called Udacity to offer similar free lessons. Treehouse, a site that promises to teach Web design, picked up financing from Reid Hoffman, a co-founder of LinkedIn, and other notable early investors.
General Assembly, which offers workroom space for entrepreneurs in New York, is adding seven classrooms to try to keep up with demand for programming classes, on top of the two classrooms and two seminar rooms it had already. The company recently raised money from the personal investment fund of the Amazon founder Jeff Bezos and DST Global, which backed Facebook.
The sites and services catering to the learn-to-program market number in the dozens and have names like Code Racer, Women Who Code, Rails for Zombies and CoderDojo. But at the center of the recent frenzy in this field is Codecademy, a start-up based in New York that walks site visitors through interactive lessons in various computing and Web languages, like JavaScript, and shows them how to write simple commands.
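For readers wondering what those first lessons actually look like, here is a hypothetical opening exercise in the spirit of such tutorials - written in Python, one of the languages name-checked above, rather than Codecademy's JavaScript, and invented here purely for illustration:

```python
# A typical first-lesson exercise: variables, arithmetic, printed output.
name = "Ada"
minutes_practised = 25

print("Hello, " + name + "!")
print("Minutes practised this week:", minutes_practised * 7)
```

An opening exercise rarely goes beyond this: declare a value, transform it, and display the result, with the site checking each line as it is typed.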
Since the service was introduced last summer, more than a million people have signed up, and it has raised nearly $3 million in venture financing.
Codecademy got a big break in January when Michael R. Bloomberg, the mayor of New York, made a public New Year's resolution to use the site to learn how to code. The site is free. Its creators hope to make money in part by connecting newly hatched programmers with recruiters and start-ups.
"People have a genuine desire to understand the world we now live in," said Zach Sims, one of the founders of Codecademy. "They don't just want to use the Web; they want to understand how it works."
The blooming interest in programming is part of a national trend of more people moving toward technical fields. According to the Computing Research Association, the number of students who enrolled in computer science degree programs rose 10 percent in 2010, the latest year for which figures are available.
Peter Harsha, director of government affairs at the association, said the figure had been steadily climbing for the last three years, after a six-year decline in the aftermath of the dot-com bust. Mr. Harsha said that interest in computer science was cyclical but that the current excitement seemed to be more than a blip and was not limited to people who wanted to be engineers.
"To be successful in the modern world, regardless of your occupation, requires a fluency in computers," he said. "It is more than knowing how to use Word or Excel but how to use a computer to solve problems."
That is what pushed Rebecca Goldman, 26, a librarian at La Salle University in Philadelphia, to sign up for some courses. She said she had found herself needing basic Web development skills so she could build and maintain a Web site for the special collections department she oversees.
"All librarians now rely on software to do our jobs, whether or not we are programmers," Ms. Goldman said. "Most libraries don't have an I.T. staff to set up a server and build you a Web site, so if you want that stuff done, you have to do it yourself."
The challenge for Codecademy and others catering to the hunger for technical knowledge is making sure people actually learn something, rather than dabble in a few basic lessons or walk away in frustration.
"We know that we're not going to turn the 99 percent of people interested in learning to code into the 1 percent who are really good at it," said Mr. Sims of Codecademy. "There's a big difference between being code-literate and being a good programmer."
Some who have set their sights on learning to program have found it to be a steep climb. Andrew Hyde, 27, who lives in Boulder, Colo., has worked at start-ups and is now writing a travel book. He said he leaped at the chance to take free coding classes online.
"If you're working around start-ups and watching programmers work, you're always a little bit jealous of their abilities," he said. But despite his enthusiasm, he struggled to translate the simple commands he picked up through Codecademy into real-world development. "It feels like we're going to be taught how to write the great American novel, but we're starting out by learning what a noun is," he said.
Mr. Sims said he was aware of such criticisms and that the company was working to improve the utility of its lessons.
Seasoned programmers say learning how to adjust the layout of a Web page is one thing, but picking up the skills required to develop a sophisticated online service or mobile application is an entirely different challenge. That is the kind of technical education that cannot be acquired by casual use for a few hours at night and on the weekends, they say.
"I don't think most people learn anything valuable," said Julie Meloni, who lives in Charlottesville, Va., and has written guides to programming. At best, she said, people will learn "how to parrot back lines of code," when they really need "knowledge in context to be able to implement commands."
Even so, Ms. Meloni, who has been teaching in the field for over a decade, said she found the groundswell of interest in programming, long considered too specialized and uncool, to be an encouraging sign. "I'm thrilled that people are willing to learn code," she said. "There is value here. This is just the first step."
Ethics
BACK in 2004, as Google prepared to go public, Larry Page and Sergey Brin celebrated the maxim that was supposed to define their company: “Don’t be evil.” But these days, a lot of people — at least the mere mortals outside the Googleplex — seem to be wondering about that uncorporate motto.

How is it that Google, a company chockablock with brainiac engineers, savvy marketing types and flinty legal minds, keeps getting itself in hot water? Google, which stood up to the Death Star of Microsoft? Which changed the world as we know it?
The latest brouhaha, of course, involves the strange tale of Street View, Google’s project to photograph the entire world, one street at a time, for its maps feature. It turns out Google was collecting more than just images: federal authorities have dinged the company for lifting personal data off Wi-Fi systems, too, including e-mails and passwords.
Evil? Hard to know. But certainly weird — and enough to prompt a small fine of $25,000 from the Federal Communications Commission and, far more damaging, howls from Congress and privacy advocates. A Google spokeswoman called the hack “a mistake” and disagreed with the F.C.C.’s contention that Google “deliberately impeded and delayed” the commission’s investigation.

Many people might let this one go, were it not for all those other worrisome things at Google. The company has been accused of flouting copyrights, leveraging other people’s work for its benefit and violating European protections of personal privacy, among other things. “Don’t be evil” no longer has its old ring. And Google, an underdog turned overlord, is no humble giant. It tends to approach any controversy with an air that ranges somewhere between “trust us” and “what’s good for Google is good for the world.”
But ascribing what’s going on here solely to the power or arrogance of a single company misses an important dimension of today’s high-technology business, where there are frequent assaults, real or perceived, on various business standards and practices.
Mark Zuckerberg has apologized multiple times for Facebook’s changing policies on privacy and data ownership. Last year, he agreed to a 20-year audit of Facebook’s practices.
Jeffrey P. Bezos has been criticized for how Amazon.com shares data with other companies, and what information it stores in its browser. And Apple, even before it drew fire for the labor practices at Foxconn in China, had trouble over the way it handled personal information in making music recommendations.
When such problems arise, executives often stare blankly at their accusers. When a company called Path was recently found to be collecting the digital address books of its customers, for instance, its founder characterized the process as an “industry best practice.” He reversed the policy after a storm of criticism.
WHAT’S going on, when business as usual in such a dynamic industry makes the regulators — and the public — nervous?
Part of Google’s problem may be no more than an ordinary corporate quandary. “With ‘Don’t be evil,’ Google set itself up for accusations of hypocrisy anytime they got near the line,” says Roger McNamee, a longtime Silicon Valley investor. “Now they are on the defensive, with their business undermined especially by Apple. When people are defensive they can do things that are emotional, not reasonable, and bad behavior starts.”
But “Don’t be evil” also represents the impossibility of a more nuanced social code, a problem faced by many Internet companies. Nearly every tech company of significance, it seems, is building technologies that are producing an entirely new kind of culture. EBay, in theory, can turn anyone on the planet into a merchant. Amazon Web Services gives everyone a cheap supercomputer. Twitter and Facebook let you publish to millions. And tools like Google Translate allow us to transcend old language barriers.
“You want a company culture that says, ‘We are on a mission to change the world; the world is a better place because of us,’” says Reid Hoffman, a co-founder of LinkedIn and a venture capitalist with Greylock Partners. “It’s not just ‘we create jobs.’ A tobacco company can do that.”
“These companies give away a ton of value, a public good, with free products like Google search, that transforms cultures,” Mr. Hoffman says. “The easy thing to say is, ‘If you try to regulate us, you’ll do more harm than good, you’re not good social architects.’ I’m not endorsing that, but I understand it.”
The executives themselves don’t know what their powerful changes mean yet, and they, like the rest of us, are dizzied by the pace of change. Sure, automobiles changed the world, but the roads, gas stations and suburbs grew over decades. Facebook was barely on the radar five years ago and now has a community of more than 800 million, doing things that no one predicted. When the builders of the technology barely understand the effect they are having, the regulators of the status quo can seem clueless.
Moreover, arrogance can come easily to phenomenally well-educated people who have always been at the top of the class. Success, though sometimes fickle, comes fast, and is registered in millions and billions of dollars. The world applauds, so it’s easy to see yourself as a person who can choose well for the world.
In the “people like us” haze of the rarefied realms of tech, it’s easy to forget that, well, not everyone is like us. Not everyone is comfortable with the idea of sharing personal information, of living in full view on the Web. And, of course, ordinary people have more downside risk than a 26-year-old Harvard dropout billionaire.
Another hazard is also one of the great strengths of the Silicon Valley: a tolerance of failure. Failing at an interesting project is seen as an important kind of learning. In the most famous case, Steve Jobs was driven from Apple, then failed in his NeXT Computer venture and for a while floundered at Pixar. But he picked up vital skills in management and technology along the way. There are a thousand lesser such stories.
If tech is building a new culture, with new senses of the private and the shared, the failure of overstepping boundaries is also the only way to learn where those boundaries have shifted. It is a self-serving point, but that doesn’t mean it’s entirely wrong. To the outsiders, it can look a lot as if the companies are playing “catch us if you can” by continually testing, and sometimes exceeding, boundaries.
IS there a better way? Mr. Hoffman says he thinks the tech industry has to acknowledge how much its products are shaping society. “We need something more than, ‘We’re good guys, trust us,’ ” he says. “There should be an industry group that discusses overall issues around data and privacy with political actors. Something that convinces them that you are good guys, but gives them a place to swoop in.”
Klout
Last spring Sam Fiorella was recruited for a VP position at a large Toronto marketing agency. With 15 years of experience consulting for major brands like AOL, Ford, and Kraft, Fiorella felt confident in his qualifications. But midway through the interview, he was caught off guard when his interviewer asked him for his Klout score. Fiorella hesitated awkwardly before confessing that he had no idea what a Klout score was.
The interviewer pulled up the web page for Klout.com—a service that purports to measure users’ online influence on a scale from 1 to 100—and angled the monitor so that Fiorella could see the humbling result for himself: His score was 34. “He cut the interview short pretty soon after that,” Fiorella says. Later he learned that he’d been eliminated as a candidate specifically because his Klout score was too low. “They hired a guy whose score was 67.”
Partly intrigued, partly scared, Fiorella spent the next six months working feverishly to boost his Klout score, eventually hitting 72. As his score rose, so did the number of job offers and speaking invitations he received. “Fifteen years of accomplishments weren’t as important as that score,” he says.
Much as Google’s search engine attempts to rank the relevance of every web page, Klout—a three-year-old startup based in San Francisco—is on a mission to rank the influence of every person online. Its algorithms comb through social media data: If you have a public account with Twitter, which makes updates available for anyone to read, you have a Klout score, whether you know it or not (unless you actively opt out on Klout’s website). You can supplement that score by letting Klout link to harder-to-access accounts, like those on Google+, Facebook, or LinkedIn. The scores are calculated using variables that can include number of followers, frequency of updates, the Klout scores of your friends and followers, and the number of likes, retweets, and shares that your updates receive. High-scoring Klout users can qualify for Klout Perks, free goodies from companies hoping to garner some influential praise.
But even if you have no idea what your Klout score is, there’s a chance that it’s already affecting your life. At the Palms Casino Resort in Las Vegas last summer, clerks surreptitiously looked up guests’ Klout scores as they checked in. Some high scorers received instant room upgrades, sometimes without even being told why. According to Greg Cannon, the Palms’ former director of ecommerce, the initiative stirred up tremendous online buzz. He says that before its Klout experiment, the Palms had only the 17th-largest social-networking following among Las Vegas-based hotel-casinos. Afterward, it jumped up to third on Facebook and has one of the highest Klout scores among its peers.
Klout is starting to infiltrate more and more of our everyday transactions. In February, the enterprise-software giant Salesforce.com introduced a service that lets companies monitor the Klout scores of customers who tweet compliments and complaints; those with the highest scores will presumably get swifter, friendlier attention from customer service reps. In March, luxury shopping site Gilt Groupe began offering discounts proportional to a customer’s Klout score.
Matt Thomson, Klout’s VP of platform, says that a number of major companies—airlines, big-box retailers, hospitality brands—are discussing how best to use Klout scores. Soon, he predicts, people with formidable Klout will board planes earlier, get free access to VIP airport lounges, stay in better hotel rooms, and receive deep discounts from retail stores and flash-sale outlets. “We say to brands that these are the people they should pay attention to most,” Thomson says. “How they want to do it is up to them.”
Not everyone is thrilled by the thought of a startup using a mysterious, proprietary algorithm to determine what kind of service, shopping discounts, or even job offers we might receive. The web teems with resentful blog posts about Klout, with titles like “Klout Has Gone Too Far,” “Why Your Klout Score Is Meaningless,” and “Delete Your Klout Profile Now!” Jaron Lanier, the social media skeptic and author of You Are Not a Gadget, hates the idea of Klout. “People’s lives are being run by stupid algorithms more and more,” Lanier says. “The only ones who escape it are the ones who avoid playing the game at all.” Peak outrage was achieved on October 26, when the company tweaked its algorithm and many people’s scores suddenly plummeted. To some, the jarring change made the whole concept of Klout seem capricious and meaningless, and they expressed their outrage in tweets, blog posts, and comments on the Klout website. “Not exactly fun having the Internet want to punch me in the face,” tweeted Klout CEO Joe Fernandez amid the uproar.
But not everyone wants to clock Fernandez. In fact, he appears to be at the forefront of a new and extremely promising online industry. Klout has received funding (a rumored $30 million of it) from venture capital behemoths like Kleiner Perkins Caufield & Byers and Venrock. It’s facing down competitors like Kred and PeerIndex, racing to establish something akin to the Nielsen ratings for online social interactions. Klout may be ridiculed by those who find it obnoxious or silly or both, but it is aiming to become one of the pillars of social media.
Klout scores are compiled using proprietary algorithms that purport to quantify online influence. Size matters: Large followings on Twitter or Facebook can boost your rating. But it’s more important to have a high percentage of posts that are liked or retweeted. And just interacting with someone who has lots of Klout can jack up your score.
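The recipe above is enough to improvise a toy version of such a score. The sketch below is purely illustrative: the inputs follow the paragraph above, but every weight, cap and normalisation is invented, and Klout's proprietary algorithm is certainly more elaborate:

```python
import math

def toy_influence_score(followers: int, updates_per_week: float,
                        reactions_per_update: float,
                        friend_scores: list[float]) -> float:
    """A 1-100 toy score; every weight and cap here is invented."""
    audience   = min(math.log10(followers + 1) / 8, 1.0)             # raw reach
    activity   = min(math.log10(updates_per_week + 1) / 2, 1.0)      # posting cadence
    engagement = min(math.log10(reactions_per_update + 1) / 3, 1.0)  # likes/retweets/shares per post
    network    = sum(friend_scores) / len(friend_scores) / 100 if friend_scores else 0.0
    raw = 100 * (0.3 * audience + 0.1 * activity + 0.4 * engagement + 0.2 * network)
    return max(1.0, min(100.0, raw))

# A modest account: 1,200 followers, 20 posts a week, a few reactions per post,
# friends whose own scores average in the mid-50s.
print(round(toy_influence_score(1200, 20, 4, [40, 55, 70])))  # prints 38
```

Even in this toy form, the engagement term carries the most weight and a well-scored circle of friends lifts the network term, mirroring the two points above: engagement rate beats raw audience size, and interacting with high-Klout users jacks up your own score.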
For a guy whose company seems to encourage loudmouthed self-promoters, Fernandez himself is remarkably soft-spoken and self-effacing. When I meet him in Klout’s offices, beneath a freeway overpass in San Francisco’s South of Market district, he flops down in an armchair, wearing a faded plaid shirt and a pair of raggedy sneakers. His hair is unkempt, his smile goofy, his manner friendly and open. He frequently asserts that Klout has succeeded only because he “hired people much smarter than me.”
Fernandez’s humility is key to his appeal. “If the CEO of Klout was a type-A guy, I think many of us would take offense when he talks about scoring us or judging us,” says David Pakman, a partner at Venrock. “But Joe’s not like that. He’s uniquely suited to this role.”
Fernandez got the idea for Klout in 2007, when at the age of 30 he had surgery to correct a jaw misalignment that had plagued him for years. Doctors wired his jaw shut for three months. “It was mentally and emotionally way tougher than I thought it would be,” Fernandez says. “I couldn’t talk to anyone. Even my mother couldn’t understand what I was saying.” He resorted to posting on the still-young Facebook and Twitter as his only means of communication. He posted his opinions on videogames, suggested neighborhoods to check out, and recommended restaurants—even though he wasn’t eating solid food. Every time a family member or friend responded to one of his updates, he relished his ability to sway their behavior. And as he looked over his feed he saw countless other people doing the same thing, recommending products or activities to an enthusiastic audience. Fernandez began to envision social media as an unprecedented eruption of opinions and micro-influence, a place where word-of-mouth recommendations—the most valuable kind—could spread farther and faster than ever before.
Fernandez’s vision was helped along by a series of biographical confluences. He had studied computer science at the University of Miami, going on to help run a pair of analytics companies—one in education, the other in real estate—that worked with massive, unwieldy streams of information. So he was familiar with the concept of finding patterns and value in large amounts of data. And as the child of a casino executive who specialized in herding rich South American gamblers into comped Caesars Palace suites, Fernandez saw up close and from a young age the power of free perks as a marketing tool.
With his jaw still clamped shut, recovering in his Lower East Side apartment, Fernandez opened an Excel file and began to enter data on everyone he was connected to on Facebook and Twitter: how many followers they had, how often they posted, how often others responded to or retweeted those posts. Some contacts (for instance, his young cousins) had hordes of Facebook friends but seemed to wield little overall influence. Others posted rarely, but their missives were consistently rebroadcast far and wide. He was building an algorithm that measured who sparked the most subsequent online actions. He sorted and re-sorted, weighing various metrics, looking at how they might shape results. Once he’d figured out a few basic principles, Fernandez hired a team of Singaporean coders to flesh out his ideas. Then, realizing the 13-hour time difference would impede their progress, he offshored himself. For four months, he lived in Singapore, sleeping on couches or in his programmers’ offices. On Christmas Eve of 2008, back in New York a year after his surgery, Fernandez launched Klout with a single tweet. By September 2009, he’d relocated to San Francisco to be closer to the social networking companies whose data Klout’s livelihood depends on. (His first offices were in the same building as Twitter headquarters.)
Fernandez says that he sees Klout as a form of empowerment for the little guy. Large companies have always attempted to woo influential people. It’s why starlets get showered with free clothes and athletes get paid to endorse sports drinks. It’s also why, once blogging took off, popular scribes like mommy blogger Dooce started receiving free washing machines. But Fernandez says that, until the dawn of social media, there was no way to pinpoint society’s hidden influencers. These include friends and family members whose recommendations directly impact our buying decisions, as well as quasi-public figures best known for their Twitter updates—like, say, San Francisco sommelier Rick Bakas, whose 71,000-plus followers hang on his every wine-pairing suggestion. “This is the democratization of influence,” says Mark Schaefer, an adjunct marketing professor at Rutgers and author of the book Return on Influence. “Suddenly regular people can carve out a niche by creating content that moves quickly through an engaged network. For brands, that’s buzz. And for the first time in history, we can measure it.”
Calvin Lee is a graphic designer in Los Angeles with a Klout score of 74. He has received 63 Klout perks, scoring freebies like a Windows phone, an invitation to a VH1 awards show, and a promotional hoodie for the movie Contraband. To keep his score up, Lee tweets up to 45 times a day—an average of one every 32 minutes. “People like food porn,” he notes, “so I try to post a lot of pictures of things I eat.”
Lee once took a vacation during which he had no access to the Internet. This made him uncomfortable. “I was worried that brands couldn’t get in touch with me. It’s easy for them to forget about you. And I knew my Klout score would go down if I stopped tweeting for too long.” When he was loaned an Audi A8 for a few days as a Klout perk, Lee knew exactly where he wanted to drive it. He road-tripped from LA up to San Francisco, eventually arriving at the Klout offices and shaking hands with Joe Fernandez. Naturally he tweeted and hashtagged the entire journey.
It’s easy to understand why marketers would want to reach maniacs like Lee. “We want to create powerful brand advocates,” says Tom Norwalk, president and CEO of the Seattle Convention and Visitors Bureau, who arranged a two-day, all-expenses-paid trip for 30 high-Klout visitors. “We hope these folks will tweet and Instagram to their many followers.” Virgin America has offered free flights, Capital One has dispensed bonus loyalty points, and Chevrolet has loaned out its new Sonic subcompact for long weekends.
But there’s more to the Klout score than a thirst for freebies. Throughout our lives, we are tagged with scores, some of them far more crucial to our well-being than anything Fernandez has handed out. Credit scores are maddeningly opaque and can be used against us in infinitely more harmful ways than a Klout score ever could. Our health records are used by huge organizations to segment and sift us behind closed doors. And yet there is something uniquely infuriating about the Klout score. “They’re calculating a Q score for everybody, and it turns out there’s a lot of emotion tied up in that,” Schaefer says. And the fact that Klout users’ status is so explicitly linked to material gain makes it an even more freighted situation, he says. “This is the intersection of self-loathing with brand opportunity.”
Almost immediately after Fernandez sent his Christmas Eve tweet debuting Klout—long before there were any perks to win or advantages to gain—the company was deluged with users just curious to see how they measured up. “I didn’t think about the ego component of having a number next to your name,” Fernandez says. When we see ourselves ranked, “we’re trained to want to grow that score.”
When I began researching this story, my own score was a mere 31. So I asked Klout product director Chris Makarsky how I might boost it. His first suggestion was to improve the “cadence” of my tweets. (For a moment, I thought he meant I should tweet in iambic pentameter. But he just meant that I should tweet a lot more.) Second, he pushed me to concentrate on one topic instead of spreading myself so thin. Third, he emphasized the importance of developing relationships with high-Klout people who might respond to my tweets, propagate them, and extend my influence to whole new population groups. Finally, he advised me to keep things upbeat. “We find that positive sentiment drives more action than negative,” he warned.
Using these tips, I managed to boost my Klout to 46 before it plateaued. From that point, I just couldn’t jolt the needle any higher. And, to my sheepish frustration, I wasn’t being offered any good perks (which seem to kick in when scores hit 50). It became clear that if I wanted more Klout, I’d need to game the system harder. I could glom on to influential Twitterati and connive to get retweeted by them. I could dramatically accelerate the frequency of my tweets, posting late into the night. And I could commit myself to never taking a break: Makarsky made it clear that a two-week vacation from social media might cause my score to nose-dive. The thought of running on this hamster wheel forever was positively exhausting, and it made me wonder whether Klout was really measuring my influence or just my ability to be relentless, to crowd-please, and to brown-nose. Consider that the only perfect 100 Klout score belongs to Justin Bieber, while President Obama’s score is currently at 91. We might not wish to glorify a metric that deems a teen pop star more influential than the leader of the free world.
In the depths of my personal bout with Klout status anxiety, I installed a browser plug-in that allows me to see the Klout scores of everyone in my Twitter feed. At first, I marveled at the folks with scores soaring up into the seventies and eighties. These were the “important” people—big media personalities and pundits with trillions of followers. But after a while I noticed that they seemed stuck in an echo chamber that was swirling with comments about the few headline topics of the social media moment, be it the best zinger at the recent GOP debate or that nutty New York Times story everybody read over the weekend.
Over time, I found my eyes drifting to tweets from folks with the lowest Klout scores. They talked about things nobody else was talking about. Sitcoms in Haiti. Quirky museum exhibits. Strange movie-theater lobby cards from the 1970s. The un-Kloutiest’s thoughts, jokes, and bubbles of honest emotion felt rawer, more authentic, and blissfully oblivious to the herd. Like unloved TV shows, these people had low Nielsen ratings—no brand would ever bother to advertise on their channels. And yet, these were the people I paid the most attention to. They were unique and genuine. That may not matter to marketers, and it may not win them much Klout. But it makes them a lot more interesting.
Just Because You're Not On Facebook ...
What can online social networks know about people who are friends of members but have no user profile of their own? Researchers from the Interdisciplinary Center for Scientific Computing of Heidelberg University studied this question. Their work shows that network-analysis and machine-learning tools can exploit the relationships between members, and the patterns of members' connections to non-members, to infer relationships among the non-members themselves. Using simple contact data, it is possible, under certain conditions, to correctly predict that two non-members know each other with approximately 40 percent probability.
For several years scientists have been investigating what conclusions can be drawn from a computational analysis of input data by applying suitable learning and prediction algorithms. In a social network, information not disclosed by a member, such as sexual orientation or political preferences, can be "calculated" with a very high degree of accuracy if enough of his or her friends have provided such information about themselves. "Once confirmed friendships are known, predicting certain unknown properties is no longer that much of a challenge for machine learning," says Prof. Dr. Fred Hamprecht, co-founder of the Heidelberg Collaboratory for Image Processing (HCI).
Until now, studies of this type were restricted to users of social networks, i.e. persons with a posted user profile who agreed to the given privacy terms. "Non-members, however, have no such agreement. We therefore studied their vulnerability to the automatic generation of so-called shadow profiles," explains Prof. Dr. Katharina Zweig, who until recently worked at the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University.
In an online social network, it is possible to infer information about non-members, for instance through so-called friend-finder applications. When new Facebook members register, they are asked to make available their full list of e-mail contacts, including contacts who are not Facebook members themselves. "This very basic knowledge of who is acquainted with whom in the social network can be tied to information about who users know outside the network. In turn, this association can be used to deduce a substantial portion of relationships between non-members," explains Ágnes Horvát, who conducts research at the IWR.
To make their calculations, the Heidelberg researchers used a standard machine-learning procedure based on structural properties from network analysis. As the data needed for the study was not freely obtainable, the researchers worked with anonymised real-world Facebook friendship networks as a basic test data set. The partitioning between members and non-members was simulated using a broad range of models, and these partitions were used to validate the study results. Using standard computers, the researchers were able to calculate in just a few days which non-members were most likely to be friends with each other.
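The study's exact features and classifier aren't spelled out here, but the simplest structural signal gives the flavour of the approach: two non-members who both appear in the uploaded address books of many of the same members are likely to know each other. A minimal sketch, with invented toy data standing in for the anonymised networks the researchers used:

```python
from collections import Counter
from itertools import combinations

# Simulated friend-finder uploads: each member reveals an address book,
# some of whose entries are not members of the network themselves.
uploads = {
    "member_a": ["nonmember_x", "nonmember_y", "member_b"],
    "member_b": ["nonmember_x", "nonmember_y"],
    "member_c": ["nonmember_y", "nonmember_z"],
}

# For every pair of non-members, count how many members list both of them:
# a crude "common neighbour" score, the most basic link-prediction feature.
common_contacts = Counter()
for address_book in uploads.values():
    nonmembers = sorted(c for c in address_book if c.startswith("nonmember"))
    for pair in combinations(nonmembers, 2):
        common_contacts[pair] += 1

# Rank candidate friendships between non-members.
for pair, score in common_contacts.most_common():
    print(pair, score)
# ('nonmember_x', 'nonmember_y') is listed together by two members,
# making it the strongest predicted non-member friendship.
```

The actual study fed a whole family of such structural properties into a machine-learning classifier and validated the predictions against simulated member/non-member partitions; the common-neighbour count above is only the most elementary of those features.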
The Heidelberg scientists were astonished that all the simulation methods produced the same qualitative result. "Based on realistic assumptions about the percentage of a population that are members of a social network and the probability with which they will upload their e-mail address books, the calculations enabled us to accurately predict 40 percent of the relationships between non-members." According to Dr. Michael Hanselmann of the HCI, this represents a 20-fold improvement compared to simple guessing.
"Our investigation made clear the potential social networks have for inferring information about non-members. The results are also astonishing because they are based on mere contact data," emphasises Prof. Hamprecht. Many social network platforms, however, have far more data about users, such as age, income, education, or where they live. Using this data, a corresponding technical infrastructure and other structural properties of network analysis, the researchers believe that the prediction accuracy could be significantly improved. "Overall our project illustrates that we as a society have to come to an understanding about the extent to which relational data about persons who did not provide their consent may be used," says Prof. Zweig.
Why Is Everyone On The Internet So Angry?
With a presidential campaign, health care and the gun control debate in the news these days, one can't help getting sucked into the flame wars that are Internet comment threads. But psychologists say this addictive form of vitriolic back and forth should be avoided — or simply censored by online media outlets — because it actually damages society and mental health.
These days, online comments "are extraordinarily aggressive, without resolving anything," said Art Markman, a professor of psychology at the University of Texas at Austin. "At the end of it you can't possibly feel like anybody heard you. Having a strong emotional experience that doesn't resolve itself in any healthy way can't be a good thing."
If it's so unsatisfying and unhealthy, why do we do it?
A perfect storm of factors comes together to engender the rudeness and aggression seen in the comments sections of Web pages, Markman said. First, commenters are often virtually anonymous, and thus unaccountable for their rudeness. Second, they are at a distance from the target of their anger — be it the article they're commenting on or another comment on that article — and people tend to antagonize distant abstractions more easily than living, breathing interlocutors. Third, it's easier to be nasty in writing than in speech, hence the now somewhat outmoded practice of leaving angry notes (back when people used paper), Markman said.
And because comment-section discourses don't happen in real time, commenters can write lengthy monologues, which tend to entrench them in their extreme viewpoint. "When you're having a conversation in person, who actually gets to deliver a monologue except people in the movies? Even if you get angry, people are talking back and forth and so eventually you have to calm down and listen so you can have a conversation," Markman told Life's Little Mysteries.
Chiming in on comment threads may even give one a feeling of accomplishment, albeit a false one. "There is so much going on in our lives that it is hard to find time to get out and physically help a cause, which makes 'armchair activism' an enticing [proposition]," a blogger at Daily Kos opined in a July 23 article.
And finally, Edward Wasserman, Knight Professor in Journalism Ethics at Washington and Lee University, noted another cause of the vitriol: bad examples set by the media. "Unfortunately, mainstream media have made a fortune teaching people the wrong ways to talk to each other, offering up Jerry Springer, Crossfire, Bill O'Reilly. People understandably conclude rage is the political vernacular, that this is how public ideas are talked about," Wasserman wrote in an article on his university's website. "It isn't."
Communication, the scholars say, is really about taking someone else's perspective, understanding it, and responding. "Tone of voice and gesture can have a large influence on your ability to understand what someone is saying," Markman said. "The further away from face-to-face, real-time dialogue you get, the harder it is to communicate."
In his opinion, media outlets should cut down on the anger and hatred that have become the norm in reader exchanges. "It's valuable to allow all sides of an argument to be heard. But it's not valuable for there to be personal attacks, or to have messages with an extremely angry tone. Even someone who is making a legitimate point but with an angry tone is hurting the nature of the argument, because they are promoting people to respond in kind," he said. "If on a website comments are left up that are making personal attacks in the nastiest way, you're sending the message that this is acceptable human behavior."
For their part, people should seek out actual human beings to converse with, Markman said — and we should make a point of including a few people in our social circles who think differently from us. "You'll develop a healthy respect for people whose opinions differ from your own," he said.
Working out solutions to the kinds of hard problems that tend to garner the most comments online requires lengthy discussion and compromise. "The back-and-forth negotiation that goes on in having a conversation with someone you don't agree with is a skill," Markman said. And this skill is languishing, both among members of the public and our leaders.
Apple AirPlay and the Window of Obsolescence
Anytime there’s a new operating system, there’s somebody who complains. Usually, the new OS “breaks” some older piece of software. Or maybe your printer won’t work after the upgrade. Something.
“AirPlay does not seem to work on any of Apple’s computers much older than one year,” wrote one unhappy reader. “Please have a look at the sea of negative comments mentioning this on Apple’s pertinent upgrade download page. The negatives are a resounding thumbs down on this upgrade.”
Well, I’m not sure on that last point — three million people downloaded Mountain Lion in the first four days, a faster adoption than for any other Mac OS in history, and the users have given it a cumulative 4.5 out of 5 stars on the Mac App Store. But the grumbling is real.
AirPlay is a fairly amazing feature. As I described it, “AirPlay mirroring requires an Apple TV ($100), but lets you perform a real miracle: With one click, you can send whatever is on your Mac’s screen — sound and picture — to your TV. Wirelessly. You can send photo slide shows to the big screen. Or present lessons to a class. Or play online videos, including services like Hulu that aren’t available on the Apple TV alone.”
Only one problem: AirPlay requires a recent Mac — 2011 or newer. The reason, Apple says, is that AirPlay requires Intel’s QuickSync video compression hardware, which only the latest chips include. Apple lists the models it works on.
Now there are, fortunately, alternatives for older Macs (and older Mac OS X versions), though such third-party software is more complicated, and the video can be choppy.
So there we are: a new OS feature that requires certain hardware, and the Mac faithful are not pleased. Not pleased at all.
“So what we have here is decision by Apple to not support a key feature for the majority of their users, for seemingly the sole reason of pushing hardware upgrades,” wrote one reader. “Both of my Macs are under a year old but I have to wonder, what upcoming features will I lose out on in 18 months’ time? That isn’t a way to treat faithful fans (or any installed base), and makes the transition from Jobs to Cook look ominous to say the least.”
“10.8 is Apple’s most offensive slap at customer loyalty ever,” wrote another.
I appreciate that these readers are unhappy. But it’s not AirPlay that’s the problem. It’s not even Apple that’s the problem. (New software features that require certain hardware aren’t anything new. When Windows Vista came out, Media Center didn’t work unless you had a TV tuner card. The handwriting recognition in Windows works only on PCs’ touch screens. And so on.)
No, it’s the way the entire computer industry works.
When you look at it one way, the tech industry is about constant innovation, steady progress. Of course some things will become obsolete. Of course there will be new features that your older computer can’t exploit.
But if you look at it another way, the whole thing is a scam to make us keep buying new stuff over and over again. A new phone every two years. A new computer every four. Bad for our wallets, bad for the environment.
In AirPlay’s case, that window of “obsolescence,” if we’re calling it that, was supershort — one year. If you bought your Mac only two years ago, you can’t use AirPlay.
What should Apple have done, then? It had this great technology, finished and ready to ship. How could it have avoided this “slap at customer loyalty”? Should it have sat on AirPlay and not released it, to avoid that perception? If so, how long? Three years? Four years?
Or is the problem not with the tech companies, but with ourselves? Should we just accept that this is how the game is played? That when you buy a new computer, you should buy it for what it does now, and learn not to resent the fact that, inevitably, there will be better, faster, cheaper computers in the future?
It’s hard to imagine either of those approaches becoming satisfying, either to the industry players or their customers. So until there’s a resolution to this stalemate, the status quo will prevail: upgrade/grumble, upgrade/grumble. On this one, friends, there’s no solution in sight.
Google Easter Eggs
Google just got really awesome.
Awesome meaning you probably won’t get any more work done today because you’ll be playing Six Degrees of Kevin Bacon for the next four hours.
Here’s what you do: Go to Google and type “Bacon number” followed by the name of any actor.
It’s a so-called Easter egg — a hidden time-killing gem recently introduced by Google.
Google engineer Yossi Matias told the Hollywood Reporter that it’s an obvious example of how Google is able to pinpoint the connections between people.
“It’s interesting that this small-world phenomena when applied to the world of actors actually shows that in most cases, most actors aren’t that far apart from each other,” Matias told the Reporter. “And most of them have a relatively small Bacon number.”
Barack Obama’s “Bacon number” is just one. See if you can figure that out before typing it into Google. (See also The Oracle of Bacon.)
Internet Pirates Will Always Win
STOPPING online piracy is like playing the world’s largest game of Whac-A-Mole.
Hit one, countless others appear. Quickly. And the mallet is heavy and slow.
Take as an example YouTube, where the Recording Industry Association of America almost rules with an iron fist, but doesn’t, because of deceptions like the one involving a cat.
YouTube, which is owned by Google, offers a free tool to the movie studios and television networks called Content ID. When a studio legitimately uploads a clip from a copyrighted film to YouTube, the Google tool automatically finds and blocks copies of the product.
To get around this roadblock, some YouTube users started placing copyrighted videos inside a still photo of a cat that appears to be watching an old JVC television set. The Content ID algorithm has a difficult time seeing that the video is violating any copyright rules; it just sees a cat watching TV.
Sure, it’s annoying for those who want to watch the video, but it works. (Obviously, it’s more than annoying for the company whose product is being pirated.)
Then there are those — possibly tens of millions of users, actually — who engage in peer-to-peer file-sharing on sites that use the BitTorrent protocol.
Earlier this year, after months of legal wrangling, authorities in a number of countries won an injunction against the Pirate Bay, probably the largest and most famous BitTorrent piracy site on the Web. The order required internet service providers to block access to the site.
In retaliation, the Pirate Bay wrapped up the code that runs its entire Web site, and offered it as a free downloadable file for anyone to copy and install on their own servers. People began setting up hundreds of new versions of the site, and the piracy continues unabated.
Thus, whacking one big mole created hundreds of smaller ones.
Although the recording industries might believe they’re winning the fight, the Pirate Bay and others are continually one step ahead. In March, a Pirate Bay collaborator, who goes by the online name Mr. Spock, announced in a blog post that the team hoped to build drones that would float in the air and allow people to download movies and music through wireless radio transmitters.
“This way our machines will have to be shut down with aeroplanes in order to shut down the system,” Mr. Spock posted on the site. “A real act of war.” Some BitTorrent sites have also discussed storing servers in secure bank vaults. Message boards on the Web devoted to piracy have in the past raised the idea that the Pirate Bay has Web servers stored underwater.
“Piracy won’t go away,” said Ernesto Van Der Sar, editor of Torrent Freak, a site that reports on copyright and piracy news. “They’ve tried for years and they’ll keep on trying, but it won’t go away.” Mr. Van Der Sar said companies should stop trying to fight piracy and start experimenting with new ways to distribute content that is inevitably going to be pirated anyway.
According to Torrent Freak, the top pirated TV shows are downloaded several million times a week. Unauthorized movies, music, e-books, software, pornography, comics, photos and video games are watched, read and listened to via these piracy sites millions of times a day.
The copyright holders believe new laws will stop this type of piracy. But many others believe any laws will just push people to find creative new ways of getting the content they want.
“There’s a clearly established relationship between the legal availability of material online and copyright infringement; it’s an inverse relationship,” said Holmes Wilson, co-director of Fight for the Future, a nonprofit technology organization that is trying to stop new piracy laws from disrupting the Internet. “The most downloaded television shows on the Pirate Bay are the ones that are not legally available online.”
The hit HBO show “Game of Thrones” is a quintessential example of this. The show is sometimes downloaded illegally more times each week than it is watched on cable television. But even if HBO put the shows online, the price it could charge would still pale in comparison to the money it makes through cable operators. Mr. Wilson believes that the big media companies don’t really want to solve the piracy problem.
“If every TV show was offered at a fair price to everyone in the world, there would definitely be much less copyright infringement,” he said. “But because of the monopoly power of the cable companies and content creators, they might actually make less money.”
The way people download unauthorized content is changing. In the early days of music piracy, people transferred songs to their home or work computers. Now, with cloud-based sites, like Wuala, uTorrent and Tribler, people stream movies and music from third-party storage facilities, often to mobile devices and TVs. Some of these cloud-based Web sites allow people to set up automatic downloads of new shows the moment they are uploaded to piracy sites. It’s like piracy-on-demand. And it will be much harder to trace and to stop.
It is only going to get worse. Piracy has started to move beyond the Internet and media and into the physical world. People on the fringes of tech, often early adopters of new devices and gadgets, are now working with 3-D printers that can churn out actual physical objects. Say you need a wall hook or want to replace a bit of hardware that fell off your luggage. You can download a file and “print” these objects with printers that spray layers of plastic, metal or ceramics into shapes.
And people are beginning to share files that contain the schematics for physical objects on these BitTorrent sites. Although 3-D printing is still in its infancy, it is soon expected to become as pervasive as illegal music downloading was in the late 1990s.
Content owners will find themselves stuck behind ancient legal walls when trying to stop people from downloading objects online as copyright laws do not apply to standard physical objects deemed “noncreative.”
In the arcade version of Whac-A-Mole, the game eventually ends — often when the player loses. In the piracy arms-race version, there doesn’t seem to be a conclusion. Sooner or later, the people who still believe they can hit the moles with their slow mallets might realize that their time would be better spent playing an entirely different game.
Passwords
Researchers examined 3.4 million PINs, all released in security breaches, to form a snapshot of the modern human psyche.
You’ve probably never asked yourself what your bank card PIN says about you, but the answer is “a lot”. There are 10,000 four-digit PIN combinations but researchers at DataGenetics, the data analysis firm, discovered that about 20 per cent of us use one of three. And more than half of those use a single one. Can you guess it? 1234. Followed by 1111 and 0000.
If you’re wondering what this means about your personality, it’s this — that you’re staggeringly unoriginal (in your banking PIN) and probably not the type who goes in for burglar alarms.
Researchers trawled through 3.4 million PINs, all released in security breaches over the years, to uncover this data. The results form a snapshot of the modern human psyche. James Bond features as the 23rd most common with 0007, and George Orwell’s Nineteen Eighty-Four inspired the PIN that comes in at 26th. Unsurprisingly, the snigger-worthy (if you’re 8) 6969 comes tenth.
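Curious readers can replicate the core of this analysis in a few lines. The sketch below, in Python, assumes a hypothetical plain-text file of leaked PINs, one per line; it simply tallies frequencies and reports the most popular choices, plus the share that look like years of birth.

```python
from collections import Counter

# Frequency analysis of leaked PINs. "leaked_pins.txt" is a
# hypothetical file with one four-digit PIN per line.
with open("leaked_pins.txt") as f:
    pins = [p for p in (line.strip() for line in f)
            if len(p) == 4 and p.isdigit()]

counts = Counter(pins)
total = len(pins)

# The most popular choices and their share of the dataset.
for pin, n in counts.most_common(10):
    print(f"{pin}: {n / total:.2%}")

# Share of PINs that look like a 20th-century year of birth.
year_like = sum(n for pin, n in counts.items() if pin.startswith("19"))
print(f"19xx PINs: {year_like / total:.2%}")
```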
The first puzzling passcode that the researchers encountered was 2580 at 22nd. It was only when making a phone call that one of them realised these are the numbers straight down the middle of a telephone keypad.
If you’ve chosen a year of birth or anniversary, you’re one of the crowd. Every combination of PINs starting with 19 is in the database’s top fifth.
And the least common PIN, used by only 25 people out of the 3.4 million? If your PIN is 8068, give yourself a pat on the back — you’re special. The rest of you, get to the bank and change yours.
Avatars
You could soon exist in a thousand places at once. So what would you all do – and what would it be like to meet a digital you?
One morning in Tokyo, Alex Schwartzkopf furrows his brow as he evaluates a grant proposal. At the same time, Alex Schwartzkopf is thousands of kilometres away in Virginia, chatting with a colleague. A knock at the door causes them to look up. Alex Schwartzkopf walks in.
Schwartzkopf is one of a small number of people who can be in more than one place at once and, in principle, do thousands of things at the same time. He and his colleagues at the US National Science Foundation have trained up a smart, animated, digital doppelgänger - mimicking everything from his professional knowledge to the way he moves his eyebrows - that can interact with people via a screen when he is not around. He can even talk to himself.
Many more people could soon be getting an idea of what it's like to have a double. It's becoming possible to create digital copies of ourselves to represent us when we can't be there in person. They can be programmed with your characteristics and preferences, are able to perform chores like updating social networks, and can even hold a conversation.
These autonomous identities are not duplicates of human beings in all their complexity, but simple and potentially useful personas. If they become more widespread, they could transform how people relate to each other and do business. They will save time, take onerous tasks out of our hands and perhaps even modify people's behaviour. So what would it be like to meet a digital you? And would you want to?
It might not feel like it, but technology has been acting autonomously on our behalf for quite a while. Answering machines and out-of-office email responders are rudimentary representatives. Limited as they are, these technologies obey explicit instructions to impersonate us to others.
One of the first attempts to take this impersonation a step further took place in the late 1990s at Xerox's labs in Palo Alto, California. Researchers were trying to create an animated quasi-intelligent persona, to live on a website. It would do things like talk to that person's virtual visitors and relay messages to and from them. But it was unsophisticated and certainly far from capable of intelligent conversation, says Tim Bickmore of Northeastern University in Boston, who worked on the project, so it was not commercialised.
The consensus has long been that the roadblock to creating a convincing persona is artificial intelligence. It still hasn't advanced sufficiently to reproduce complex human behaviour, and it would take years of training for an AI to resemble a person. Yet it has lately become clear that fully representing a human is unnecessary in today's digital environments. While we cannot program machines to think, getting them to do specific tasks is not a problem, says Joseph Paradiso, an engineer at the Massachusetts Institute of Technology.
Faceted identity
To understand why and where this could be useful, consider the way that a person's identity is represented on the internet. The typical user has a fragmented digital self, broken up into social media profiles, professional websites, comment boards, Twitter and so on. Of course, people have always presented themselves differently depending on context - be it the workplace or a bar - but Danah Boyd, a social media researcher at Microsoft Research in Cambridge, Massachusetts, argues that digital communication enhances this because it inherently gives a narrow view of a person.
People manage these subsets of their identity like puppets, leaving them dormant when they're not needed. What researchers and companies have realised is that some of these puppets could be programmed to act autonomously. You don't need to copy a whole person, just a facet, and it doesn't require impressive AI and months of training.
For example, the website rep.licants.org, developed by artist Matthieu Cherubini, allows you to create a copy of your "social media self", which can take over Facebook and Twitter accounts when required. You prime it with data such as your location, age and topics that interest you, and it analyses what you've already posted on your various social networks. Armed with this knowledge, it then posts on your behalf.
In principle, such services could one day perform a similar job to the ghostwriters who manage the social media profiles of busy celebrities and politicians today. In fact, some people already automate their social media selves: some add-ons to a Twitter account can be programmed to send out messages such as a thank-you note if somebody follows you. As far as the recipients are concerned, the messages were sent by a real person.
Your professional persona can be replicated, too. The Australian company MyCyberTwin allows users to create copies of themselves that can engage visitors in a text conversation, accompanied by a photo or cartoon representation. These copies perform tasks such as answering questions about your work, like an interactive CV. "A single CyberTwin could be talking with millions of people at the same time," says John Zakos, who co-founded the firm. MyCyberTwin also uses tricks to add a touch of humanity. Users are asked to fill in a 30-question personality test, which means that the digital persona may act introverted or extroverted, for example.
In a few years, this simple persona could be extended to become an avatar - a visual animation of you. Avatars have long been associated with niche uses such as gaming or virtual worlds like Second Life, but there are signs that they could become more widespread. In the past year or two, Apple has filed a series of patents related to using animated avatars in social networking and video conferencing. Microsoft, too, is interested. It has been exploring how its Kinect motion-tracking device could map a user's face so it can be reproduced and animated digitally. The firm also plans to extend the avatars that millions of people use in its Xbox gaming system into Windows and the work environment.
So could avatars be automated too? It already happens in gaming: many people employ intelligent software to control their avatars when they're not around. For example, some World of Warcraft players program their avatars to fight for status or to farm gold.
To similar ends, in 2007 the National Science Foundation began Project Lifelike, an experiment to build an intelligent, animated avatar of Schwartzkopf, who at the time was a program director. The hope was to make the avatar good enough to train new employees.
Jason Leigh, a computer scientist at the University of Illinois at Chicago, used video capture of Schwartzkopf's face to create a dynamic, photorealistic animation. He also added a few characteristic quirks. For example, if Schwartzkopf's copy was speaking intensely, his eyebrows would furrow, and he would occasionally chew his nails. "People's personal mannerisms are almost as distinguishing as their signature," Leigh says.
These tricks combined to make the copy seem more, well, human, which helped when Leigh introduced people to Schwartzkopf's doppelgänger. "They had a conversation with it as if it were a real person," he recalls. "Afterwards, they thanked it for the conversation."
The Project Lifelike researchers are now building a copy of the astronaut Jim Lovell, who flew on Apollo 13 and will answer questions at Chicago's Adler Planetarium, and one of Alan Turing, who will field questions at the Orlando Science Center in Florida. Others are working on ways to create doppelgängers that will persist after people die.
Dear doctor
Meanwhile, Bickmore and his team are developing animated avatars of doctors and other healthcare providers. One of the nurse avatars they created is designed to discharge people from hospitals. In tests, he found 70 per cent of patients preferred talking to the copy rather than a real nurse, because they felt less self-conscious. Doctors, meanwhile, could use avatars to streamline their work. "A doctor might want to make a copy, for example, if they are the pre-eminent expert in a field," Bickmore says.
Admittedly, some of these avatars did take a lot of time to train. Schwartzkopf spent months teaching his digital self about his job. But it depends on the sophistication of the task, says Jeremy Bailenson, who directs the Virtual Human Interaction Lab at Stanford University in California.
One way to shortcut this process is to give an avatar specific behaviours adapted for the purpose, says Bailenson. "We've demonstrated that it doesn't matter how good the AI is. What matters is the belief in the social presence." Along with collaborator Jim Blascovich, he created an avatar to teach students via a screen in a lecture theatre. The pair designed it to peer at each student for 2 seconds at a time. They called it "supergaze", and found it made all the difference. When the students thought of the avatar as an unthinking, unfeeling AI, they stopped paying attention - even if it was programmed with the necessary knowledge. But with the supergaze, they were more likely to respond as if there was a human in control.
The point, says Bailenson, is that AI is not the stumbling block many researchers once thought it was. He argues that people will engage with a screen avatar if its abilities suit the task in hand, and if there is the small possibility that a human is operating it.
As with doctors, academics could spread their workload too. "This would allow you to teach as many sections as your department desires," Bailenson says. With several copies operating simultaneously, a teacher could jump between them at will, inhabiting any one without ever letting on to the students.
Of course, many people might be reluctant to set loose autonomous facets of themselves. What happens if they say something inappropriate, or even evolve on their own? The experience of British writer Jon Ronson provides a hint. Earlier this year, a Twitter account under the name @jon_ronson began issuing tweets, raising the hackles of the real Ronson, who tweets as @jonronson. It was an impersonation, operated by an algorithm.
Ronson discovered that it was created by a British company called Philter Phactory, which makes autonomous bots called Weavrs. These can operate Twitter accounts and other social media on a person's behalf. The company's selling point is that Weavrs can be used to trawl the web for interesting links about certain topics, then post status updates or share videos and articles about them.
The Ronson-bot's chatter was anodyne, expressing, among other things, a love for midnight snacking. In a film Ronson made about the experience, he described how he felt unsettled and angry, because he had no control over this copy. Someone had mimicked his digital persona without his knowledge and there was nothing he could do to stop them.
Toon army
Many actors and performers have digital personas, sometimes created against their will. It seems laws will need to be adapted to define who can control people's digital selves (see "Double jeopardy").
Some people are more troubled by the effects on society as a whole. Jaron Lanier, an author and Microsoft researcher, worries about technologies that claim to amplify our efficiency. The promise that technology will free our time for leisure is an old one, and it has never been kept: we simply find new chores to keep us busy instead. Create 10,000 selves, he says, and we will create a world that demands a million. And in principle, doppelgängers could be cheaper to employ than real people. "If you're a history professor and you can operate 10,000 of these things, why does the university have to hire any other history professors?" Lanier asks.
For individuals, however, seeing copies of themselves acting outside their own bodies might have positive side effects. For example, when Bailenson subtly morphed people's avatars to be slightly more attractive, he found it gave them a confidence boost that persisted afterwards. Half an hour after the experiment, volunteers were asked to identify the most attractive person they thought they could successfully date: people made bolder choices when, unbeknown to them, their copy was slightly prettier or more handsome than reality. The same was true of a slight increase in height. Conversely, in another experiment, virtual doubles that were presented as fatter than their real counterparts successfully motivated participants to exercise.
Clearly, then, creating virtual selves could have unintended consequences. Meeting our digital counterparts will not be like meeting ourselves, at least not at first. But they might be a convincing facet, and could even give you insights into how other people see you. The several Alex Schwartzkopfs could be the start of a whole new population explosion.
Double jeopardy
Earlier this year, the rapper Tupac Shakur appeared on stage at a music festival in California. This was a surprise, because Shakur had been dead for more than 15 years. He was projected as a hologram.
Digital versions of performers and athletes routinely appear in movies, advertisements and games. Yet these advances raise ownership issues that lawyers are only now beginning to tackle.
When the actress Sigourney Weaver had her face and emotional expressions digitised to star inside the virtual world of the movie Avatar, the information was stored in a database. Who owns the rights to that face? To what extent are manipulations - such as putting words into Weaver's or Tupac's mouth - acceptable without consent?
Some existing laws can be adapted to deal with these issues, says Simon Baggs, a partner at Wiggin, a media law firm based in London. When a photo of the racing driver Eddie Irvine was manipulated suggestively in an advert, he successfully sued. Meanwhile, some actors and athletes have already launched lawsuits under trademark law after their likenesses were used without their permission in advertisements and games.
If more of us start creating digital selves (see main story), other laws could be more suitable. Design rights, for example, could protect an animated avatar of your face and body against misuse.
Still, each country's laws vary. Individuals in the US have far greater rights over their own image than they do in the UK, for example. Baggs says this could attract lawsuits to certain countries, much as the UK's strict defamation laws have attracted libel tourism.
5 people who are making a killing off of piracy
Piracy gets a bad reputation from most of the content creators in the world, but not everyone agrees. In fact, some people have managed to make piracy, and a relaxed attitude toward copyright, work in their favor. Here are five such individuals who don’t pirate themselves (at least as far as we know) but have managed to turn this fact of internet life into good business.
Psy
Have you heard about this Gangnam Style thing? The answer to that is almost certainly an emphatic “yes.” Gangnam Style, from South Korean pop/rap star Psy, has taken the internet by storm, and part of that success has come by taking a relaxed attitude toward copyright infringement.
There have been remixes, mashups, re-postings, and of course, torrents of Gangnam Style ever since it first caught on. Rather than go after people infringing on the content, Psy has leaned back and watched the advertising dollars roll in. The Gangnam Style video on YouTube is now the most viewed ever. It currently has nearly 1 billion views. It is estimated that Psy will pull in about $8.1 million this year thanks to his internet popularity. Popularity he wouldn’t have had if no one had been able to share the song.
Shahrzad Rafati
Where most content producers see pirated video as a negative for revenue, Shahrzad Rafati saw an opportunity. Rafati started Broadband TV in 2005 with the aim of monetizing and legitimizing pirated video. The Canadian company searches video sharing services, looking for copyrighted material. Instead of removing it, Broadband TV re-brands the content, includes relevant ads, and reposts. What once was infringing video has become a revenue source.
Rafati realized that people uploading videos aren’t doing it to be malicious. They just really enjoy the content and want to share it. A heavy-handed approach to enforcement won’t work long-term. Broadband TV currently works with some big-name partners like the NBA, Warner Brothers, Sony, and YouTube. The company also has a YouTube channel called VISO where users can watch sports, movie trailers, and news programs.
Louis CK
The internet’s favorite comedian, Louis CK, didn’t get that way by being a stickler for the rules. When CK decided to produce his own comedy special in 2011, he went straight to the fans. For the paltry sum of $5, everyone was invited to download a 720p H.264 video of the hour-long performance. What set this apart was the open approach.
Louis CK posted a statement on his website explaining that he decided to forgo any DRM on the video. He knew full well that it would be pirated, but he didn’t like the idea of causing legitimate buyers any trouble. So the file went live, and it was indeed pirated. However, the internet apparently liked the cut of Louis CK’s jib, and he’s made millions of dollars on the show.
Cory Doctorow
Geek icon Cory Doctorow has been writing books that appeal to tech-oriented folks for years. He’s also an editor at technology site Boing Boing, where he writes often about copyright and intellectual property. Whether or not you agree with his opinions, he really puts his money where his mouth is. All of Cory Doctorow’s books are available as free downloads, and you can even create derivative works with most of them.
Doctorow uses a Creative Commons license, and goes out of his way to spell out exactly what you, the downloader, can do with the work. He even invites folks to convert the PDFs and plain text files he posts to easier-to-read formats, like MOBI and EPUB. It seems to be working for Doctorow, who just published a book with sci-fi great Charles Stross. Doctorow’s books are available for purchase through Tor Books as well.
Peter Mountford
Some authors might be horrified to learn that their novel was being translated into other languages and pirated without their consent, but not Peter Mountford. He recently found that his book, A Young Man’s Guide to Late Capitalism, was being translated into Russian. After a little digging, he discovered the project was a preamble to releasing the book on the Russian black market (for ebooks).
Mountford decided to help in the translation effort. He actually contacted the translator, and began working with him to translate some of the more complicated passages of his own book. There are relatively few ebook titles officially available in Russia, so the only way to make a splash is through the black market. Other authors have seen their popularity swell by leveraging piracy among Russian readers, so Mountford was happy to try. A Young Man’s Guide to Late Capitalism ultimately won the 2012 Washington State Book Award, though that was likely for other reasons.
While piracy can certainly harm content creators and rights holders, it is not an insurmountable force. With some strategy, and possibly a bit of compliance (or at least acceptance), it can be harnessed and controlled. Then, with foresight and luck, it can even be used as a positive force.
The Drawback of Free
I’ve made it pretty clear that I don’t like RSS readers. When you subscribe to your favorite sites and read all their articles in a single, text-heavy interface, you’re eliding the beauty and variety of design on the Web. You’re also turning news reading into a chore. Or, at least, that’s what I felt—with its prominent, hectoring count of all my unread posts, opening up Google Reader was as stressful as dealing with my email inbox. And I want the Web to be fun, not stressful. So when Google announced this week that it would soon kill Google Reader, I wasn’t bothered in the least. I might even have said a few mean things about it on Twitter.
But enough about me. Let’s talk about you. You didn’t just love Google Reader. No, your feelings about it were much deeper - you relied on Google Reader, making it a central part of your daily workflow, a key tool for organizing stuff you had to read for work or school. Now it’s gone, and you feel lost. Sure, there are alternatives, and transferring all your feeds to one of these will probably take just a few minutes. But that won’t be the end of it. You’ll still have to learn the quirks of your new software. You’ll still have to get the rhythm down. And most of all, you’ll still worry about abandonment. Google says it killed Reader because the software’s usage was on the decline. But Google Reader was the most popular RSS reader on the Web. If people were quitting Reader, aren’t they likely to quit the alternatives, too?
I feel for you. I really do. While I didn’t use Reader, Reader-lovers’ plight could happen to any of us. Every day, our computers, phones, and tablets harangue us to try new stuff—new apps, new sites, and new services that will supposedly make this or that thing so much more awesome than before. You’re aware of the dangers of committing too hastily, so before you get too invested, you diligently check reviews and solicit opinions from tech pundits like myself. But when all those assessments converge, who can blame you for getting in too deep? Back in 2005, when Google launched Reader, the company talked about it like they’d keep it around forever. And they probably thought they would. You took them at their word, and now you’ve been burned.
Reader’s death illustrates a terrible downside of cloud software—sometimes your favorite, most indispensable thing just goes away. Yes, software would get discontinued back in the days when we relied on desktop apps, but when desktop software died it wasn’t really dead. If you’re still a fan of ancient versions of WordPerfect or Lotus 1-2-3, you can keep using them on your aging DOS box. But when cloud software dies, it goes away for good. If the company that’s killing it is decent, it may let you export your data. But you’ll never, ever be able to use its code again.
That’s why we should all consider Reader’s death a wake-up call—a reminder that any time you choose to get involved with a new app, you should think about the long haul. It’s not a good idea to hook up with every great app that comes along, even if it’s terrifically innovative and mind-bogglingly cheap or even free. Indeed, you should be especially wary if something seems too cheap. That’s because software is expensive. To build and maintain the best software requires engineering and design talent that will only stick around when a company has an obvious way to make money.
If you want to use programs that last, it’s not enough to consider how well they work. You’ve also got to be sure that there’s a solid business model attached to the code.
And if a particular tool is indispensable to you—your project management software, for instance—you might want to think about choosing one of those incredibly old-fashioned software companies that will allow you to pay for its stuff. Just paying for software doesn’t guarantee its longevity—companies that accept your money can always go out of business. But companies that take your money are at least signaling to you that their software is just as important to them as it is to you. On the other hand, companies that don’t take your money and won’t even say how the product you love will ever make money—hey, they’re fun for a romp, but don’t be surprised when they ditch town in the middle of the night. (I’m looking at you, TweetDeck, Tr.im, Memolane, Posterous and all those Yahoo apps!)
This calculus becomes especially difficult with software made by Google, a company that doesn’t charge for much of anything, isn’t transparent about how its products make money, and is fond of experimenting with lots and lots of new products (and, lately, of killing off stuff that’s not part of its central mission).
There are many free Google products that you can use without worrying they’ll go away. Search is Google’s main moneymaker, so there’s no risk there. Chrome is pretty safe, too, as it boosts traffic to the rest of Google’s menagerie of products. Gmail displays ads alongside your email, but its real value to Google is the way it pushes you to stay logged in to your Google account, allowing the company to learn more about your travels across the Web (and, thus, feeds into its ad revenue). So there’s no risk Google will kill Gmail. Similarly, the free version of Google’s online productivity software, Docs, doesn’t make much money on its own, but it’s crucial to Google’s Chromebook project, so it’s probably OK. And then there’s Google+, the search company’s beleaguered social network. Sure, it’s a ghost town—in fact, according to BuzzFeed, the supposedly declining Google Reader drove more traffic across the Web than Google+. But Google’s efforts to deeply integrate its products into Google+ show that it is heavily invested in making it work. So if you’re that one guy using Google+, don’t fret.
There are several other Google products I’m not so sure about. At the top of my list is Google Voice, which assigns you a phone number that rings all the rest of your phones. I’m deeply invested in Google Voice—I use it as my primary phone number, so if it were to shut down, I’d have to send new digits to all my friends and professional contacts. Why am I worried that it will vanish? Because Google Voice has no clear business model. Google won’t take my money for it. It’s been years since the company updated the service in any significant way. And Voice loyalists have alerted me to troubling warning signs—last year, for instance, Google moved a link to Voice from the drop-down menu labeled More in its navigation bar to a deeper, nested menu labeled Even More. Was this change significant? I have no idea. But it’s not encouraging - just troubling enough that I should probably begin to look for alternatives from companies that will take my money.
Or how about Google Scholar, the academic papers search engine? Sure, this extremely useful service is in keeping with Google’s larger mission to make the world’s knowledge universally accessible. But Google does not display any ads in Scholar, and I can’t think of any other way it’s padding Google’s bottom line. So might it, too, disappear some day?
And Orkut. Oh man, why on earth is Orkut still around? Seriously, Orkut users, you’re asking for trouble. Pack up and move elsewhere. You may have heard that Orkut is still popular in Brazil. It’s not; Facebook has eclipsed it even in Brazil. The end is nigh.
Reader’s death provides useful lessons even for software not made by Google. In particular, it reminds me that when I really get into software that does have a pay option, I should pay for it. I did this with Workflowy, the outliner that I raved about last year. I use it so heavily that I couldn’t justify not paying the $5 a month for it; if heavy users like me don’t pay, no one will, and then Workflowy will likely go away. I also pay for Freshbooks, and I send Google money to get extra storage in Gmail. If I were a more diligent Evernoter, I’d pay that company, too.
I encourage you to do the same, if you can afford it. Free stuff online is great, but nothing is free forever. If you care for something, open your wallet.
When Algorithms Go Wrong
Frustrating errors on everything from tax bills to T-shirts show why automated systems can’t replace real brains.
How will the future see us? It is a fair guess that our descendants will giggle a bit. Theses will be written on the overuse of the word “inappropriate”, and scholars will marvel at the suffix “-gate” on every other news story. And as a digitally preserved ghost, I hereby join my great-grandchildren in a moment of levity as we look back at how the IT revolution fostered an interlude of Brainless Automation. That would be in the period before the Great Employment Revolution of 2026, when the civilised world exploded in frustration and admitted that while silicon chips ease some drudgery, the cheapest and most versatile of quality-control data filters are where they always were: lodged between pairs of human ears.
The Brainless Automation Story of the Week emerged when the Amazon website advertised T-shirts bearing the words “Keep Calm and Rape a Lot”, “Keep Calm and Hit/Choke/Grope Her”, etc.
It was of course a variant on the now hackneyed joke drawn from the lovely typeface and bland command of unused wartime posters saying “Keep Calm and Carry On”. You can’t pass a card shop or gift shop without spotting dozens, some quite witty. Or they were till they got boring.
But these slogans caused outrage. They are offensive and unfunny. What nasty, tasteless brain created them? What can the company (Solid Gold Bomb) have been thinking? The answer is, of course, that nobody created them and the company was thinking nothing. It had delegated its brain to a machine.
The mortified middlemen at Amazon explained, struggling to put them “in a queue for deletion”, that the product was computer-generated by an algorithm at an American company. Each T-shirt ordered is printed on demand; no stock and shelving systems there, only “a scripted computer process running against hundreds of thousands of dictionary words. We’re sorry for the ill feeling this has caused”.
How does this work? Ask any IT geek. On the T-shirt company’s computer will be a graphics file with the font and layout programmed for a lookalike poster, crown and all. An automated programme — the algorithm — has a list of verbs and an instruction to finish the sentence with “off”, “them”, “us”, “him”, “her”, “a lot”. It is possible that the computer can check online which words are most often repeated on the youthful target market’s news sites: drink, rape, grope ...
Even the link to Amazon can happen without human intervention once you have a computer-to-computer trading relationship. A further program within the marketing site itself may notice that you, as a user, are interested in stories with those words in. So until the shocked customers saw the T-shirt ads onscreen, it is entirely possible that no human eye had ever rested on them; no human intelligence thought: “Oops!”
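To see how little machinery this takes, here is a minimal sketch of such a generator in Python. The template and word lists are illustrative (and deliberately innocuous), not Solid Gold Bomb's actual code; the point is that nothing in the pipeline ever asks whether a combination is acceptable.

```python
import itertools

# A naive slogan generator: a template, two word lists, no review step.
TEMPLATE = "KEEP CALM AND {verb} {obj}"
VERBS = ["DANCE", "KNIT", "SING", "JOG"]        # real systems drew on huge dictionary lists
OBJECTS = ["ON", "A LOT", "WITH US", "ALL DAY"]

def slogans():
    for verb, obj in itertools.product(VERBS, OBJECTS):
        # Every combination becomes a product listing; nothing checks
        # whether the result is offensive, or even makes sense.
        yield TEMPLATE.format(verb=verb, obj=obj)

for s in slogans():
    print(s)   # in production, each line would be rendered onto a shirt image
```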
Algorithms such as this are everywhere, mimicking decisions that were once made by people. Some work well: you can now renew your car tax disc online in two minutes instead of searching through kitchen drawers for three different documents and walking to the Post Office in the rain. The website checks your insurance and MOT in seconds, takes your money and sends the disc. Fine. But too much delegation to electronic brains causes annoyance, shading into disaster (remember the tax credits chaos, which had families queuing at food banks after the government attempted to claw back computerised overpayments?).
Or consider the common problem when a bank’s anti-fraud algorithm abruptly stops your cards because they are being used abroad. Complain (if you have change for a Turkish phone box) that you told them you were going, you booked the flight on that card and bought currency at the airport. So QED — you’re in Turkey. They sadly say, “Sorry, it’s automatic”. And so it is: no human brain made the connection between your obvious trip and the attempt to pay abroad.
The touching thing is always how apologetic the humans are when you finally get to them, like downtrodden wives apologising for a drunken husband. My late mother-in-law was panicked by a council tax letter threatening bailiffs and distraint. I knew she owed nothing, and finally spoke to a nice Yorkshire lady who said: “Ooh yes, our computer sends out those letters, they’re ever so nasty.”
No harm occurred, but misery, even suicide, can follow similar threats. And threats from big public authorities and government are increasingly automated. It’s easy, cheap and inflexible: the electronic jobsworth. My Granny used to announce, when in dispute with officialdom, “I Will Write Them a Human Letter”. Embarrassing though she was, she had a pretty good strike rate. I doubt she would do so well now. Probably be in Holloway.
Sometimes brainless automation is merely absurd, like those “cool” Japanese T-shirts that say “Naive Bear Paradise” or “Beach Thirst Sofa”. Google’s free automatic translator has brought great joy — once, in some bureaucratic Brazilian hellhole, my husband asked his iPad to render “I need clearance to sail my boat south tomorrow” in Portuguese. When he held it up to the officials, they not only laughed helplessly but called down colleagues from other floors to share the hysteria. He has never found out what he said. Probably just as well.
We all cherish letters addressed to Dear Mr Esq or Mrs Notknown, though it is depressing to receive, as numerous families lately have in Ireland, tax demands addressed to people long dead. We quite like it when companies make fools of themselves: my niece, when smaller, had her Mizz magazine (cheerful sub-teen stuff) automatically replaced by a “nearest equivalent” by the Tesco computer. A shriek from the living room alerted her parents to an older teenage magazine with a double-page spread of male genitalia entitled “Know the enemy”. I like to think of online supermarkets upsetting socialites and cardinals by confusing the Tatler with The Tablet, and sending House & Garden to frustrated Penthouse readers.
So we joke, shrug fatalistically when “computer says no”, slam down the phone on automated cold-calls. But as governments grow more authoritarian and false economies on staff more common, algorithms dominate everything from passport biometrics to NHS online diagnosis. And the dangers multiply. The moral is, I suppose, Keep Calm and Hire Humans.
Captcha, ReCaptcha and Luis von Ahn
ONLY a few weeks into graduate school and aged just 22, Luis von Ahn helped crack one of the thorniest problems bedevilling the web. It was the year 2000 and free, web-based e-mail services were booming. But spammers were creating thousands of accounts automatically and using them to blast out messages. When the accounts were shut down, they simply created new ones. At the same time, sites selling tickets to concerts and sporting events were being besieged by programs that bombarded them with orders, snapping up the best seats for resale at a higher price. Websites needed a way to distinguish between human visitors and automated ones.
Mr von Ahn had just arrived at Carnegie Mellon University in Pittsburgh when he and his PhD adviser, Manuel Blum, came up with just such a method. The solution had three requirements: it had to be a test that humans could pass easily, that computers could not pass, and that a computer could nonetheless generate and grade automatically. The original idea was to show web users an image, for example of a cat or a roller coaster, and ask them to identify it. A correct answer would indicate that the entity at the other end of the internet connection was indeed human, granting access to the web-mail service or ticketing site. But it turned out that people were not very good at identifying images reliably.
So the pair came up with another idea: displaying a distorted sequence of letters and asking people to read them and type them into a box. This proved to be a much more reliable test of whether a visitor to a website was human or not (something that is known, in computer-science terminology, as a Turing test, in honour of Alan Turing, a British computer scientist). The result was the CAPTCHA, which stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”. Yahoo and other web-mail providers implemented the system, and it immediately made life harder for spammers.
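The mechanics are easy to sketch. The toy example below uses Python's Pillow imaging library to jitter each character's position and scribble noise lines over the result; it is an illustration of the principle, not the scheme Yahoo actually deployed, and real CAPTCHAs distort the letters far more aggressively.

```python
import random
import string

from PIL import Image, ImageDraw, ImageFont

def make_captcha(text, size=(200, 70)):
    img = Image.new("L", size, color=255)            # white greyscale canvas
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    x = 10
    for ch in text:
        # Jitter each character's vertical position to frustrate OCR.
        draw.text((x, 25 + random.randint(-10, 10)), ch, fill=0, font=font)
        x += 30
    for _ in range(6):
        # Noise lines make automated segmentation harder still.
        draw.line([(random.randint(0, size[0]), random.randint(0, size[1])),
                   (random.randint(0, size[0]), random.randint(0, size[1]))],
                  fill=120)
    return img

challenge = "".join(random.choices(string.ascii_lowercase, k=5))
make_captcha(challenge).save("captcha.png")          # the expected answer is `challenge`
```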
Mr von Ahn went on to get his doctorate—and a phone call from Bill Gates of Microsoft offering him a job, which he turned down. He has since created a series of internet-based systems that bring many people together to perform useful work by dividing tasks into tiny pieces, often presented as a simple test or game, and aggregating the results. A decade ago Mr von Ahn called his approach “human computation” (the title of his thesis) and “games with a purpose” - precursors to the modern techniques of “crowdsourcing” and “gamification”.
For example, he noticed that search engines were bad at finding images, because pictures on web pages are rarely labelled with neat captions. So he created the ESP Game, in which two players in different locations are simultaneously shown the same image in their web browsers, and asked to type words describing what is in it. Each round of the game ends when both players use the same word, so the aim is to use the most obvious descriptive terms. In so doing, the players tag each image, and signal which words best describe it. The technology was acquired by Google in 2005 to help label images for its search engine.
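The agreement rule at the heart of the game is simple. Here is a toy version in Python, under the assumption that the two players' guesses arrive alternately; the details are mine, not Google's or Mr von Ahn's code.

```python
def play_round(player_a_guesses, player_b_guesses):
    """Toy ESP Game round: ends when both players have typed a common word."""
    seen_a, seen_b = set(), set()
    for a, b in zip(player_a_guesses, player_b_guesses):
        seen_a.add(a)
        seen_b.add(b)
        agreed = seen_a & seen_b
        if agreed:
            return agreed.pop()   # this word becomes a tag for the image
    return None                   # round timed out with no agreement

# Both players are looking at the same picture of a dog chasing a ball.
print(play_round(["dog", "grass", "ball"], ["park", "ball", "tree"]))  # -> "ball"
```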
Shall we play a game?
Mr von Ahn grew up in Guatemala, the son of two doctors. He stumbled into computers indirectly. In the mid-1980s, at the age of eight, he wanted to play video games. But instead of giving him a Nintendo console his mother bought a PC. To play games on it, he resorted to typing in programs from computer magazines and working out how to crack the copy-protection schemes on games sold on floppy discs. The young Luis also spent some time at a confectionery factory owned by his family. He was fascinated by the machines that made and wrapped the sweets, and was soon taking some of them apart and reassembling them. His love of engineering endured, but not his sweet tooth. “I got to play around the whole time—but now I can’t stand the taste of mint,” he says.
In Guatemala, nearly all students are tested before entering high school. The top 20 nationwide, of whom Mr von Ahn was one, are sent to a special school. He went on to study mathematics in America, at Duke University, switching to computer science for his postgraduate studies because it was more practical. “You talk to a mathematician, and he tells you that he’s one in three people in the world who understand the problem and it’s not been solved for 200 years,” says Mr von Ahn. “A computer scientist says: ‘I solved an open problem yesterday’.”
In late 2006 Mr von Ahn had just started teaching at Carnegie Mellon when he received a call from the MacArthur Foundation, saying that he was being awarded one of its coveted “genius” grants of $500,000. Around the same time, he did a back-of-the-envelope calculation to get a sense of CAPTCHA’s popularity, and realised that about 200m squiggly words were being recognised and typed into computers every day by internet users around the world. At about ten seconds apiece, that amounted to around half a million hours daily. This improved the security of the internet, but at the cost of making people perform a task whose results were immediately discarded. Surely the recipient of a genius grant ought to be able to find a way to make more productive use of their efforts?
Driving home from a meeting in Washington, DC, in his blue Volkswagen Golf, he was struck by an idea. Instead of showing users random letters, why not show them words from scans of old printed texts that automated document-digitisation systems, based on optical character recognition, could not understand? Such words were, by definition, incomprehensible to computers, but might be legible to humans. They could be shown to people as part of a modified CAPTCHA test, based on two words. One, the control word, is a known word; the other is an illegible word from a scanned document. The user reads and types in the two words, and is granted access provided the control word is correctly identified. And when a few users separately provide the same interpretation of the scanned word, it is fed back to the digitisation system.
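In outline, the server-side logic might look like the sketch below. The threshold, names and data structures are illustrative assumptions, not reCAPTCHA's actual implementation; the essential asymmetry is that access is decided on the control word alone, while readings of the unknown word are pooled until enough users agree.

```python
from collections import Counter, defaultdict

AGREEMENT_NEEDED = 3                    # hypothetical threshold
votes = defaultdict(Counter)            # unknown-word id -> tally of readings
digitised = {}                          # unknown-word id -> accepted reading

def submit(control_word, control_answer, unknown_id, unknown_answer):
    # Only the known control word decides whether the visitor is human.
    if control_answer.strip().lower() != control_word.lower():
        return False                    # failed the test: deny access
    # The visitor passed, so record their reading of the scanned word.
    tally = votes[unknown_id]
    tally[unknown_answer.strip().lower()] += 1
    reading, count = tally.most_common(1)[0]
    if count >= AGREEMENT_NEEDED and unknown_id not in digitised:
        digitised[unknown_id] = reading  # feed back to the digitisation system
    return True                          # grant access regardless of the unknown word
```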
People performing online security checks could thus be put to work digitising old books and newspapers, without even realising that they were doing so. Mr von Ahn called his new idea reCAPTCHA, and when the New York Times began to use the technology to digitise its archive, he spun it out into a separate company. In 2009 it too was acquired by Google, for use in its ambitious book-digitisation project. (The slogan for reCAPTCHA is “Stop spam, read books”.) Mr von Ahn went to work at the internet giant for a year. Paradoxically, one of his tasks while at Google was to shut down the ESP Game. It had served its purpose, labelling enough images to train an image-recognition system based on artificial intelligence, which could then perform the task automatically.
"More than 1 billion people have helped digitise the printed word by using reCAPTCHA.”
Other similar projects followed. Verbosity, for example, got players to create a compendium of common-sense facts, such as “milk is white”, which people know but computers do not. But none of Mr von Ahn’s other projects have come close to reCAPTCHA when it comes to doing useful work. The system now handles 100m words a day, equivalent to about 2.5m books a year. If Google were to pay people America’s minimum wage to read and type in those illegible words, it would cost it around $500m a year.
Although still in his early 30s, Mr von Ahn has already made a unique contribution to computer science and artificial intelligence, by harnessing what he calls “the combined power of humans and computers to solve problems that would be impossible for either to solve alone.” Put simply, the idea is to “take something that already happens and try to get something else out of it,” he says. His work exploits the internet’s ability to reduce co-ordination and transaction costs, so that the efforts of hundreds of millions of people can be aggregated effectively. Mr von Ahn estimates that more than 1 billion people have helped digitise the printed word by using reCAPTCHA.
Found in translation
The notion of “human computation” has spawned its own academic field. But did Mr von Ahn really set out to generate useful results from mundane tasks, or did he stumble on the concept with CAPTCHA and then apply it in other domains? “It is a combination of both,” he says. He recently came across a plan, devised when he was 13 and saved by his mother, for a power company that would operate a free gym and generate electricity from people lifting weights, cycling and so forth. This was, he now realises, a precursor of his computer-science work, which does the same for mental activity.
Mr von Ahn has won many prizes, including a presidential award for excellence in science. He has a gaggle of patents to his name. His old blue Volkswagen has been replaced by a blue Porsche. And he continues to apply his distinctive approach to new problems. His latest project is a company, which he co-founded last year, called Duolingo. It helps people learn a foreign language (the game), while also providing a translation service (the useful work). People are shown a word or phrase which they do their best to translate; others then vote on the best translation. Duolingo already has 3m users, who use it, on average, for 30 minutes a day.
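The crowd-translation step can be pictured as a simple vote tally. The sketch below is purely illustrative, with invented data; Duolingo's real system is undoubtedly more sophisticated.

```python
from collections import Counter

# Users submit candidate translations; others vote; the top-voted
# candidate becomes the working translation.
votes = {
    "El gato duerme.": Counter({
        "The cat is sleeping.": 14,
        "The cat sleeps.": 9,
        "Cat sleeping.": 1,
    }),
}

def best_translation(sentence):
    return votes[sentence].most_common(1)[0][0]

print(best_translation("El gato duerme."))   # -> "The cat is sleeping."
```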
Although it is outwardly similar to reCAPTCHA, Duolingo marks a further development of Mr von Ahn’s model, because it also exploits “big data”. The firm collects huge volumes of data and performs experiments to determine what works best when learning a new language. For example, should one teach adjectives before adverbs? Even experts do not know, because there has never been a large-scale, empirical study. Thanks to Duolingo, there now is.
Among its early findings is that the best way to teach a language varies according to the student’s mother tongue. When teaching English, for example, pronouns such as “him”, “her” and “it” are usually introduced early on. But this can confuse Spanish speakers, who do not have an equivalent for “it”. The answer is to delay the introduction of the word “it”, which makes Duolingo users less likely to give up in frustration. Mr von Ahn hopes to apply this technique—using data to improve pedagogy—in other disciplines.
“Education seems like it should be an equaliser, but really it is not,” he says. “If you have money, you can get a good education; if not, you don’t.” He has seen at first hand what access to high-quality education can do, and wants to use technology to make it more widely available. This is, he realises, a far more ambitious aim than blocking spam or scanning books, useful as those are. But there is a clear thread running through his work, even as he tries to apply his approach to tackling bigger societal problems, not just technical ones. Whether people are in the gym, logging on to e-mail or learning a new language, he wants to enable them to “do something useful—and harness the power that they generate.”
AI
Hector Levesque thinks his computer is stupid—and that yours is, too. Siri and Google’s voice searches may be able to understand canned sentences like “What movies are showing near me at seven o’clock?,” but what about questions—“Can an alligator run the hundred-metre hurdles?”—that nobody has heard before? Any ordinary adult can figure that one out. (No. Alligators can’t hurdle.) But if you type the question into Google, you get information about Florida Gators track and field. Other search engines, like Wolfram Alpha, can’t answer the question, either. Watson, the computer system that won “Jeopardy!,” likely wouldn’t do much better.
In a terrific paper just presented at the premier international conference on artificial intelligence, Levesque, a University of Toronto computer scientist who studies these questions, has taken just about everyone in the field of A.I. to task. He argues that his colleagues have forgotten about the “intelligence” part of artificial intelligence.
Levesque starts with a critique of Alan Turing’s famous “Turing test,” in which a human, through a question-and-answer session, tries to distinguish machines from people. You’d think that if a machine could pass the test, we could safely conclude that the machine was intelligent. But Levesque argues that the Turing test is almost meaningless, because it is far too easy to game. Every year, a number of machines compete in the challenge for real, seeking something called the Loebner Prize. But the winners aren’t genuinely intelligent; instead, they tend to be more like parlor tricks, and they’re almost inherently deceitful. If a person asks a machine “How tall are you?” and the machine wants to win the Turing test, it has no choice but to confabulate. It has turned out, in fact, that the winners tend to use bluster and misdirection far more than anything approximating true intelligence. One program worked by pretending to be paranoid; others have done well by tossing off one-liners that distract interlocutors. The fakery involved in most efforts at beating the Turing test is emblematic: the real mission of A.I. ought to be building intelligence, not building software that is specifically tuned to pass some sort of arbitrary test.
To try to get the field back on track, Levesque is encouraging artificial-intelligence researchers to consider a different test that is much harder to game, building on work he did with Leora Morgenstern and Ernest Davis (a collaborator of mine). Together, they have created a set of challenges called the Winograd Schemas, named for Terry Winograd, a pioneering artificial-intelligence researcher at Stanford. In the early nineteen-seventies, Winograd asked what it would take to build a machine that could answer a question like this:
The town councillors refused to give the angry demonstrators a permit because they feared violence. Who feared violence?
a) The town councillors
b) The angry demonstrators
Levesque, Davis, and Morgenstern have developed a set of similar problems, designed to be easy for an intelligent person but hard for a machine merely running Google searches. Some are more or less Google-proof simply because they are about made-up people, who, by definition, have few Google hits:
Joan made sure to thank Susan for all the help she had given. Who had given the help?
a) Joan
b) Susan
(To make things harder to game, an alternative formulation substitutes “received” for “given.”)
One can’t simply count the number of Web pages in which people named Joan or Susan gave other people help. Instead, answering this question demands a fairly deep understanding of the subtleties of human language and the nature of social interaction.
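A toy sketch in Python shows why the trick works (the data format and scoring harness are my own, for illustration; only the example comes from the paper). Flipping the special word flips the correct answer while leaving the rest of the sentence untouched, so any system that ignores that word, or leans only on surface statistics, can do no better than chance across the pair.

# A toy representation of the schema above; the format is invented.
schema = {
    "sentence": "Joan made sure to thank Susan for all the help she had {w}.",
    "question": "Who had {w} the help?",
    "candidates": ("Joan", "Susan"),
    "variants": {"given": "Susan", "received": "Joan"},  # special word -> answer
}

def evaluate(guess, schema):
    """Score a guesser on both variants of a schema."""
    hits = 0
    for word, answer in schema["variants"].items():
        sentence = schema["sentence"].format(w=word)
        if guess(sentence, schema["candidates"]) == answer:
            hits += 1
    return hits / len(schema["variants"])

# A guesser that ignores the special word answers both variants the same
# way, so it can score at most 0.5 on the pair.
print(evaluate(lambda sentence, candidates: candidates[1], schema))  # 0.5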
Others are Google-proof for the reason that the alligator question is: alligators are real, but the particular fact in question isn’t one that people usually comment on. For example:
The large ball crashed right through the table because it was made of Styrofoam. What was made of Styrofoam? (The alternative formulation replaces Styrofoam with steel.)
a) The large ball
b) The table
Sam tried to paint a picture of shepherds with sheep, but they ended up looking more like golfers. What looked like golfers?
a) The shepherds
b) The sheep
These examples, which hinge on the linguistic phenomenon known as anaphora, are hard both because they require common sense—which still eludes machines—and because they get at things people don’t bother to mention on Web pages, and that don’t end up in giant data sets.
More broadly, they are instances of what I like to call the Long-Tail Problem: common questions can often be answered simply by trawling the Web, but rare questions can still stymie all the resources of a whole Web full of Big Data. Most A.I. programs are in trouble if what they’re looking for is not spelled out explicitly on a Web page. This is part of the reason for Watson’s most famous gaffe—mistaking Toronto for a city in the United States.
The same problem comes up in image search, in two ways: many kinds of pictures are rare, and many kinds of labels are rare. There are millions of pictures of cats labelled “cat”; but a Google Image search for “scuba diver with a chocolate cigarette” yields almost nothing of relevance (dozens of pictures of cigars, pin-up girls, beaches, and chocolate cakes)—even though any human could readily summon a mental image of an appropriately adorned diver. Or take the phrase “right-handed man.” The Web is filled with pictures of right-handed men engaged in unmistakably right-handed actions (like throwing a baseball), which any human working in a photo archive could rapidly sort out. But very few of those pictures are labeled as such. A search for “right-handed man” instead returns a grab bag of sports stars, guitars, golf clubs, key chains, and coffee mugs. Some are relevant, but most are not.
Levesque saves his most damning criticism for the end of his paper. It’s not just that contemporary A.I. hasn’t solved these kinds of problems yet; it’s that contemporary A.I. has largely forgotten about them. In Levesque’s view, the field of artificial intelligence has fallen into a trap of “serial silver bulletism,” always looking to the next big thing, whether it’s expert systems or Big Data, but never painstakingly analyzing all of the subtle and deep knowledge that ordinary human beings possess. That’s a gargantuan task—“more like scaling a mountain than shoveling a driveway,” as Levesque writes. But it’s what the field needs to do.
In short, Levesque has called on his colleagues to stop bluffing. As he puts it, “There is a lot to be gained by recognizing more fully what our own research does not address, and being willing to admit that other … approaches may be needed.” Or, to put it another way, trying to rival human intelligence, without thinking about all the intricacies of the human mind at its best, is like asking an alligator to run the hundred-metre hurdles.
Russia and Spam
For years, Igor A. Artimovich had been living in a three-room apartment he shared with his wife in St. Petersburg, sitting for long hours in front of his Lenovo laptop in his pajamas, drinking sugary coffee.
If he was known at all to Western security analysts who track the origins of spam, and in particular the ubiquitous subset of spam e-mails that promote male sexual enhancement products, it was only by the handle he used in Russian chat rooms, Engel.
His pleasant existence, living in obscurity, changed this summer when a court in Moscow linked Mr. Artimovich and three others with one of the world’s most prolific spambots, or illegal networks of virus-infected computers that send spam.
The ruling provided a peek into the shrouded world of the Viagra-spam industry, a multimillion-dollar illegal enterprise with tentacles stretching from Russia to India. Around the world every day, millions of people open their e-mail in-boxes to find invitations to buy Viagra or some other drug, potion or device to enhance sexual performance.
Who sends these notes and how they make money had remained a mystery to most recipients. The court put names and faces to a shadowy global network of infected computers known outside Russia as Festi and inside the country as Topol-Mailer, named after an intercontinental ballistic missile, the Topol-M. It was powerful enough to generate, at times, up to a third of all spam e-mail messages circulating globally.
Prosecutors say Mr. Artimovich was one of two principal programmers who controlled the network of infected computers in a group that included a former signals intelligence officer in the Federal Security Service, or F.S.B., the successor agency to the K.G.B.
Once spammers control the virus-infected machines, they can use software embedded on home and business computers to send e-mail continuously. The owner of an infected computer usually never knows the PC has been compromised.
More often than not these days, those infected computers are in India, Brazil and other developing countries where users cannot afford virus protection. But the high-end programming of viruses often takes place in Russia.
While the business model has been well understood — it was the subject of an extensive study by the University of California, San Diego — the individuals behind one of the largest spam gangs using it have largely avoided official scrutiny, until recently.
The Tushino Court in Moscow convicted two people of designing and controlling the Festi botnet, and two others of paying for its services, but none of them specifically of distributing spam. Instead, the court convicted the group of using the Festi network in 2010 to turn thousands of browsers simultaneously to the Web page of the online payment system of Aeroflot, the Russian national airline, crashing it in what is known as a distributed denial of service attack.
The spambot problem has vexed Western law enforcement officials, who complain the Russians ignore losses to global businesses that pay about $6 billion annually for spam filters, and to companies like Pfizer for sales lost to counterfeit pills.
Computer security experts have long been intrigued by the possibility that the Russian government has turned to so-called black hat hackers for political tasks, offering protection from prosecution in exchange. Direct evidence has been lacking, though the Festi case adds to the circumstantial evidence.
Russian authorities deny creating or turning a blind eye to botnets used to attack the Web sites of dissidents, or banks and government institutions in neighboring countries like Estonia or Georgia.
Valery V. Yaschenko, a deputy director of the Kremlin-linked Institute for Problems of Information Security, said the Russian government “condemns the practice of using strangers’ computers for attacks, or for any reason.”
For years, spam has been a very good business for Russian criminal gangs. An estimated $60 million a year is pulled in through these networks. Despite the Russian prosecutors’ victory this summer, similar networks remain active as tools for fraud and hacker attacks. Computer security experts say that suggests either the wrong men were convicted or the controlling codes were passed to somebody else.
Stefan Savage, a professor in the systems and networking group at the University of California, San Diego, studied the Festi scheme, in part by making test purchases.
The spam opened links to sites called “Canada Pharmacy” or “Canadian Pharmacy,” though they were in fact Russian-based companies that had privileges to process online payments from Visa through banks in Azerbaijan and Iceland. The sales were responsible for about a fifth of the $300 million global industry of selling fake drugs online, mostly to Americans, Mr. Savage said in an interview.
What arrived in the mail was Viagra counterfeited in India, where intellectual property rights on pharmaceutical industry products are loosely enforced. Mr. Savage tested the pills in a gas spectrometer; they were close enough chemically to real Viagra that they most likely functioned safely, and as intended, for tens of thousands of American men.
The Internet has experienced the ill effects. About 70 percent of all e-mail sent globally is still spam, according to Symantec, the antivirus company. Most of it violates a number of American laws, including the Can-Spam Act of 2003, which requires unsolicited e-mails to have a valid return address. The Ryan Haight Online Pharmacy Consumer Protection Act of 2008, named for a teenager who died from an overdose of Vicodin bought on the Internet, outlaws online sales of drugs without a doctor’s prescription. But there are still plenty of offers coming from abroad.
For a three-month period last year, the Festi botnet was bursting with activity. It generated about a third of all global spam for those months, Paul Wood, the cybersecurity intelligence manager at Symantec, said in an interview.
Why Russian authorities allowed Festi to function for years is unclear. Russians had little incentive to invest law enforcement resources in a crime that primarily affected Americans. But the illegal computer networks like Festi that are so useful for sending spam are also capable of crashing Web sites by flooding them with an overwhelming number of visits — the distributed denial of service attacks.
It was used last year inside Russia to crash opposition Web sites during the presidential election. The Festi network was the tool of choice in a prominent denial of service attack on LiveJournal, one of the blog-hosting services used by the Russian dissident and blogger Aleksei Navalny, according to Hacker, a Russian magazine focused on cybersecurity issues.
In one of the few crackdowns, the Russian court case singled out four men: Pavel Vrublevsky, the owner of an online payment settlement business called ChronoPay, who for years has denied accusations of ties to Viagra spam schemes; Maksim Permakov, an employee of Mr. Vrublevsky and a former F.S.B. agent; Igor Artimovich, a former employee of Sun Microsystems in Russia; and his brother Dmitry Artimovich, a freelance programmer.
All denied the charges and have said through their lawyers that they intend to appeal the sentences, which range from two to two and a half years in prison, except for Mr. Permakov, who conceded his role in using Festi and cooperated with investigators in exchange for a suspended sentence.
Prosecutors argued that Igor Artimovich designed Festi. They say the executives at ChronoPay hired him to crash the Aeroflot site because they were angry at losing a tender for Aeroflot’s business. The police say the executives asked Mr. Artimovich to settle the score. Analysts of Russian cybercrime say a line had been crossed by attacking a Russian site.
In an interview before his sentencing, Mr. Artimovich said he was working on code under contract with ChronoPay, but for an antivirus program, not a virus. He said the police planted evidence on his laptop hard drive after his arrest.
Mr. Vrublevsky, in an interview, denied any role in creating Festi and noted that in court a witness testified that the F.S.B., which investigated the case, had forged evidence.
Festi was not the first Russian botnet to combine pharmaceutical spam with politics. In 2007, a large-scale cyberattack was launched against Estonia, taking aim at sites of government agencies, banks and anti-Russian groups, and a futuristic North Atlantic Treaty Organization center for cyberwarfare was built in Tallinn in response. But when the center’s analysis of this attack and subsequent cyberstrikes on Georgia finally wrapped up, evidence pointed not to some similar, hushed bunker of military men somewhere in Russia, but to a server in St. Petersburg best known for its links to cybercrime, including penis-enlargement spam, and run by a hacker nicknamed Flyman.
The 2009 NATO report on the attacks on Russia’s neighbors noted pointedly of the St. Petersburg server’s suspicious activity that “the Russian authorities have remained remarkably passive in prosecuting the organization.”
The Story of Skype
"I don't care about Skype!" millionaire Jaan Tallinn tells me, taking off his blue sunglasses and finding a seat at a cozy open-air restaurant in the old town of Tallinn, Estonia. "The technology is 10 years old—that's an eternity when it comes to the Internet Age. Besides, I have more important things going on now."
Tallinn has five children, and he calls Skype his sixth. So why does he no longer care about his creation?
On August 29, 2003, Skype went live for the first time. By 2012, according to Telegeography, Skype accounted for a whopping 167 billion minutes of cross-border voice and video calling in a year—which itself was a stunning 44 percent growth over 2011. That increase in minutes was "more than twice that achieved by all international carriers in the world, combined." That is to say, Skype today poses a serious threat to the largest telcos on the planet. It also made Jaan Tallinn and other early Skypers rich.
But something changed along the way. Skype is no longer the upstart that refused to put signs on its offices, that dodged international lawyers, and that kept a kiddie pool in the boardroom. This is the real story of how a global brand truly began, told in more detail than ever before by those who launched it.
In 2000, as dot-com fever swept America, an entertainment and news portal called Everyday.com brought together a sextet of European revolutionaries.
It began with two people from the Swedish telecom Tele2—a Swede named Niklas Zennström and a Dane named Janus Friis. Zennström was Tele2 employee no. 23; Friis worked his way up in customer service for a Danish operator.
The Swedish owner of Tele2, Jan Stenbeck, was determined to launch the Everyday portal and launch it quickly. As the Swedes were having trouble, Stefan Öberg, the Marketing Director in Tele2's Estonian office, proposed finding some Estonians for the job. In May 1999, Tele2 published an ad in a daily newspaper calling for competent programmers and offering the hefty sum of 5,000 Estonian kroons (about $330) a day—more than an average Estonian earned in a month at the time.
The work went to Jaan Tallinn, Ahti Heinla, and Priit Kasesalu—Estonian schoolmates and tech fans. They had been into Fidonet, a computer network which preceded the Internet, since the Soviet era. They started a small company, Bluemoon, which made computer games such as Kosmonaut. (In 1989, Kosmonaut became the first Estonian game to be sold abroad.) The game earned its creators $5,000, which at the time was a large sum for any Estonian. But by the turn of the century, the three friends were down to their last penny and Bluemoon was facing bankruptcy.
Short of money, they applied for and got the Tele2 job. The PHP programming language needed for the work was new to them, but the team learned it in a weekend and completed their test assignment much faster than Tele2 requested.
The last of the Skype sextet, Toivo Annus, was hired in Tallinn to manage the development of Everyday.com. The site would soon be complete, with Zennström and Friis working in Luxembourg and Amsterdam, and Annus and the Bluemoon trio working from Tallinn.
Tele2 was thrilled with the Estonians, but the Everyday.com portal failed commercially. Zennström and Friis left Tele2 and lived in Amsterdam for a while. The homeless Friis stayed in Zennström's guest room, and they turned the kitchen into a temporary office.
Together, Zennström and Friis pored over new business ideas. As the US was fascinated at the time with the scandal surrounding Napster, Zennström and Friis planned something similar. But where Napster infuriated the music and movie industries, Zennström and Friis hoped to cooperate with them. They didn't have the slightest doubt about where their new product should be created—in Tallinn, obviously. Kazaa was born.
Kazaa
Kazaa's P2P file-sharing program allowed files to be transferred directly from one computer to another without an intermediary server, thus solving one of Napster's problems. Jaan Tallinn developed the program in a nine-floor, Soviet-style brick building on Sõpruse Puiestee in the Tallinn suburb of Mustamäe. The apartment was actually Jaan Tallinn's home, and at the time, Tallinn was a work-at-home dad. (He only sold the apartment in 2012 and told me that he contemplated attaching a memorial plaque to the wall stating, "Kazaa was created here.")
Kazaa, ready for service in September 2000, swiftly became the most downloaded program on the Internet. The service picked up users at the rate of one per second. Heinla, Tallinn, and Kasesalu were sipping fine wine in their headquarters and thinking, "So this is what it feels like to have half of the world's Internet traffic go through your software."
But on the business side, Zennström and Friis failed to seal a deal with US film and music companies. Kazaa was sued for enabling piracy. "Stolen" music, films, and pornography were being distributed via the application, and the Kazaa owners soon found themselves hiding from an army of ferocious US lawyers.
Zennström repeatedly dodged court summonses. One time, he went to see a play at a Stockholm theater and was approached by a stranger. The individual handed Zennström's wife a bunch of flowers and held out an envelope containing a summons for Zennström. The Swede made a run for it; the summons was never duly delivered. He was similarly pursued in London, this time by motorcycle, but service again failed.
When Zennström went to Tallinn for visits with his team, he did so by ferry as he was too scared to fly (by now he's clearly gotten over this, as he owns a private jet and all). And once there, he remained nervous about visitors. "When someone came in through the door and we weren't certain who it was, Niklas would hide under the table," an Estonian coworker reminisced.
The Bluemoon boys began encrypting all of their correspondence and their hard drives. E-mails were not stored for longer than six months. No one wanted to know more than they absolutely needed to know. Zennström changed his phone number as often as he changed his socks.
Charges were never pressed against the coders Heinla, Tallinn, and Kasesalu, but they were involved in the Kazaa proceedings as "an important source of information." A California court requested that the men be questioned and that business secrets concerning Kazaa be confiscated. At first the Estonian government rejected the request, but after a second appeal, the trio was interrogated in the presence of US lawyers.
For the Estonians, the Kazaa proceedings were like playing with fire—a little dangerous but still exciting—and their names began to pop up in the international press.
Afraid of being arrested, Zennström and Friis avoided flying to the US for several years, even though Kazaa had been promptly sold (at least on paper) to Australian businessmen, and its headquarters had been moved to the island nation of Vanuatu. The duo failed to make peace with the US for several years, and their ultimate redemption cost Friis and Zennström big money. The two eventually contributed to a more than $100 million payout for the music and movie industries.
Zennström and Friis planned to create a new service that would allow the sharing of home Wi-Fi. But then Annus and Friis had their "eureka!" moment—they could make voice calls cheap and easy by sharing data peer to peer just as Kazaa did. They even talked about creating Wi-Fi phones, an idea that would later be implemented in Skype. In the spring of 2003, an early alpha was coded and shared for testing with about 20 people.
The name of the project originated from the words "sky" and "peer." Following the example of Napster and others, the name was shortened to "Skyper." But because the domain Skyper.com was already taken, the 'r' was shaved off. "Skype" it was.
Talking to a computer felt silly at the time—as silly as talking to your hand did when mobile phones first appeared. Feedback on the initial version of Skype was not exactly enthusiastic. The sound was glitchy, for instance. But when testers realized that they could now speak via computer to people on the other side of the world for free, attitudes changed.
Zennström and Friis never wanted to be big-time pirates or a thorn in anyone's side. However, Kazaa turned out to have done Skype a huge service. The Robin Hoods of the music and film business now pounced on the telecoms that were making hundreds of millions a year by selling long-distance calls.
Dodging international police was deeply rooted in the new product, too. Right from the beginning, Skype conversations were encrypted and impossible to intercept. This would eventually change, but at the time it made Skype a perfect tool for criminals. When paid services were later offered (e.g., Skype Out), the creators soon discovered that Skype became a tool for laundering money from stolen credit cards. The company had a hard time combating this.
Others were getting into the Internet telephony game, though, and Skype's success was by no means assured. Even Estonian telecom Elion had its Netifon, which seemed better than Skype at first glance. After all, Netifon allowed users to make calls from a computer to a mobile phone. A few years later, however, Elion shut down the service as it was full of bugs and the setup was too complicated for users.
Skype did have a major advantage. Unlike other services, Skype slipped easily through firewalls. The program left no footprints on the Internet, the sound was improved dramatically, and the service worked like a charm. "Right from the start we set out to write a program simple enough to be installed and used by a soccer mom with no knowledge of firewalls, IP addresses, or other technological terms," said one of the early Skypers, Lauri Tepandi.
In 2003, Skype was listed in the commercial register of Luxembourg. Seven people controlled the company's shares: Zennström, Friis, the Bluemoon boys, Annus, and Geoffrey Prentice, an American dealmaker who drew up all of Skype's important transactions.
But they weren't making money. In the summer of 2003, Skype's development came to a halt as the company was unable to pay its developers. The question of whether to charge users a monthly fee so soon after launch would remain unresolved for some time. Zennström had unpaid bills, and the Bluemoon boys hatched a plan to name their asset management company "Borealis Kinks"—an anagram of "Niklas is broke."
On paper, the Skype business plan was not convincing enough for potential investors. As the dot-com bubble burst, the Internet appeared "dead" from an investment perspective. Yet telecoms were still going strong. Potential investors were nervous—not only about losing their money in ventures like Skype but also about having to pay legal costs. Napster was often brought up.
William Draper, an American venture capitalist, was one of the few to say that it was the right time to invest in P2P technology. His "emissary" Howard Hartenbaum was sent to Europe to make a deal with Zennström and Friis. Hartenbaum wanted to invest in the team, whatever they set out to do. It didn't matter if they had a product or not. They didn't need to prove anything. They had already earned Hartenbaum's and Draper's unwavering trust with Kazaa.
In the end, Draper, Hartenbaum, and some other early angels soon fuelled Skype with its first millions—and recouped their investment a thousand times over within three years.
Skype went live for the first time on August 29, 2003. The Skype team, consisting of about 20 people, celebrated this in Stockholm by watching Startup.com, a documentary about the bursting of the technology bubble.
On its first day, Skype was downloaded by 10,000 people. Within a couple of months, it already had one million users.
Funding
Suddenly, every venture capitalist began lusting after Skype. Zennström left many of them out in the cold, but $18 million was provided for Skype by a consortium of venture capitalists, including Index Ventures, Bessemer Venture Partners, Mangrove Capital, and Draper Fisher Jurvetson.
Steve Jurvetson, an investor of Estonian descent, was part of the group. "I remember wondering: how can they be so good?" he told me, speaking about the Estonian core of Skype. "How can such a small group do so much so quickly, compared to typical development efforts in, for example, Microsoft? I had the impression that maybe coming out of a time of Soviet occupation, when computers were underpowered, you had to know how to really program, effectively, parsimoniously, being very elegant in sculpting the programming code to be tight, effective, and fast. [That's] not like in Microsoft, which has a very lazy programming environment, where programs are created that have memory leaks and all sorts of problems, that crash all the time and no one really cares—because it's Microsoft!"
Jurvetson, who had already cashed in on Hotmail, was fascinated by Skype's talented team. (Nowadays, he's busy financing anything Elon Musk lays his hands on.) Jurvetson attended Skype's Supervisory Board meetings in Tallinn and, on one occasion, brought his father Tõnu with him. (Tõnu reminisced on the Radisson hotel's rooftop terrace about his departure from Estonia 60 years before, when he had escaped the Soviet invasion during World War II.)
In the end, the VC investments were repaid "only" 40 times over, and Jurvetson was right about the talent of the Skype team. From its $8 million, Jurvetson's firm made $300 million in less than two years.
Despite the inclination to avoid international regulation, little by little, Zennström and Friis learned to "boogie" with various countries' legislation. What was prohibited in the US could be entirely legal elsewhere. Everywhere, law enforcement wanted access. In response, Skype kept a low profile; even though Skype had been an international company since 2004, its eventual offices in Estonia, London, and Luxembourg didn't even have name plates.
The headquarters in Luxembourg was part of a multi-story building not easily found by outsiders. It had no sign. A couple of floors up, you'd find an apartment where there was "an accountant working in the living room and another in the bathroom." Private conference calls were often made in a dark bathroom, since the fan started whirring as soon as the lights were switched on.
In London, where Zennström and Friis were then based, Skype was set up in an office with a glass wall. Behind the glass, at the other end of the corridor, a modeling agency was up and running. As the upper and lower parts of the glass wall were transparent and the middle part blurry, the mostly male employees watched mini-skirted models hurry past, seeing only heads and legs.
Corporate headquarters were officially based in Luxembourg from the beginning. One reason was that the tiny duchy had the lowest VAT rate (15 percent) in the EU, which is why all of Skype's sales passed through Luxembourg. However, Luxembourg was important to Skype for another reason: the country guaranteed a serene working environment. The duchy protects its companies and does not deliver claims or court papers. And Robert Miller, Skype's later legal adviser, would periodically go through the company's mailboxes in London and Luxembourg, removing the angry letters sent by telecom companies and government offices and shredding them without reading their contents.
"Miller is one of the few lawyers who doesn't hinder business," said Taavet Hinrikus, a former Skype employee who now works at a startup called Transferwise. "Most lawyers are all about the can'ts and shouldn'ts. Robert found ways we could... As a startup, you're a pirate anyway. It's impossible to obey every law! But when your company is the size of Microsoft, you can no longer afford not to."
However, Zennström claims that there were never any actual legal threats to Skype, except in a few countries like UAE and China. "From the beginning, I was very keen to comply with legislation and regulation, and we managed to keep Skype categorized as an electronic information provider—just like an e-mail provider rather than a telecom provider—for a long time," he wrote in an e-mail while on summer vacation sailing the Mediterranean. "That's why, for example, we never bundled Skype In and Skype Out."
Out of the kiddie pool
Until 2005, Skype operated casually. A network of people, who in many ways were free to come and go as they pleased, worked there as consultants and dealt with things they had never come across before. Ideas were programmed into a product the moment they popped into someone's mind; some coder had an idea in the morning and by the same evening it might already have 10,000 users.
When the price list was being drafted for making Skype calls to telephone networks (Skype Out), its creators didn't bother with market research. Instead, two Skype employees devised the list in one night, using nothing but Excel.
Every week, five to ten employees joined the company. The screening system was simple and very much the product of Toivo Annus. Pass the test assignment? You're hired! Wage negotiations were often redundant—if you deserved to be in the company, you'd be paid what you needed.
Here's how casual the corporate culture was: in Silicon Valley, an American named Eileen Burbidge ditched Yahoo to come and work for Skype. She worked for free in London for eight months (she got paid later, though) and said that Skype was "the best time of her life".
"My first day I learned that I'd have to finalize my contract and terms with Niklas," Burbidge said. "Neither of us were concerned about it at that time. We were both much more interested in just getting me started and working. It was my fault for not raising this 'small admin issue' for months."
Like Jurvetson, Burbidge said that the Estonian team was able to work at the speed of light. "Having just come from 11 straight years of working in Silicon Valley, I was super impressed and actually amazed that these technical leaders seemed not to have any ego at all, didn't care about titles, didn't care about roles or pointing fingers and were all insanely committed to seeing the 'project which had turned into a company' succeed," she said. "They had a sense of responsibility and discipline that I had never witnessed before."
Used to American small talk, Burbidge quickly realized it would not work in this environment. "I was used to greeting people with a 'ping' or a 'you there?' followed by a 'how are you?,' 'having a good day?,' 'am I interrupting?,' or 'can I ask you a question?' But for Toivo all of this was superfluous and simply needless cycles. He would just reply with one word: 'Ask.'"
Skype's internal IT was just as casual. The location of the company's servers, who was being paid for them, and how much was only vaguely known to one person—system administrator Edgar Maloverjan, also known as Ets.
If the development team needed something, Ets would go to the store and wave his company credit card around. If a server had to be restarted, Ets would call the company's business partner. Sometimes they would say: "I can't—I dunno which server's yours! Besides, there's a game on."
When Ets needed server space, he Googled "data center" and "Luxembourg" and found a small service provider called Datacenter Luxembourg S.A. (Zennström and Friis sometimes joked that they didn't see why Skype needed servers at all, as everything was supposed to be peer-to-peer.) On one occasion, he even delivered servers from Sweden to Denmark in the boot of his car. It didn't matter where or how—what mattered was that Skype was working.
But as the company grew, it became more professional. Eventually the product was being developed by people the Estonians had never heard of.
The service was connected to telephone networks through the work of a Brit who happened to share his name with the pop star Michael Jackson. And Skype's visuals, trademark, language and look were all created by an ambitious young Danish designer, Malthe Sigurdsson. (He was later nominated as one of London's five most stylish men. When the Dane first arrived in Tallinn in 2003, however, Annus booked him into one of the city's worst accommodations—Tähetorni Hotel.)
Skype soon became all grown-up, and it had plenty of suitors lining up to court it. In the summer of 2005, Jaan Tallinn was often in London participating in talks with eBay, discussions that were being held at Morgan Stanley investment bank's offices. On one occasion someone jokingly said, "Hey, Jaan, are you gonna sell Skype?" Tallinn replied, "Yes, and I'll be bringing a big suitcase with me to take the money home in."
They turned out to be the words of a prophet.
First sale
The news of Skype being sold to eBay broke in September 2005. Skype was sold for $2.6 billion. The Bluemoon boys and Annus each got about $42 million, Friis and Zennström more than 10 times as much. Another 100 people in the Tallinn office and 40 in London also had company options.
Ross Mayfield, an American advisor to the President of Estonia, visited Skype's Tallinn office that day and had no idea what was going on around him. "I was struck by how the team had their heads down working like a normal workday," Mayfield said. "In Silicon Valley, everyone would be celebrating and counting their stock options. The core team had a no-nonsense focus on the work showing them the way and a real sense of purpose."
A couple of days before the news, the Estonian forces gathered at Annus' place for a briefing. Everyone had another go at calculating their options. "The atmosphere was ambivalent," remembered Kaido Kärner, a Skype engineer. "On the one hand, you had all this money. On the other, Skype used to be a value in itself. Now it was someone's property."
Jurvetson, the company's investor, did not agree to the sale and said Skype's value should be allowed to grow a little more. Its founders, primarily Zennström and Friis, were the ones who made the decision to sell.
"We kept getting offers," Jaan Tallinn said. "The question was when to start taking them seriously. Each new offer was slightly higher than the last." The fear that Skype might be past its sell-by date sped up the sale. "MSN and Yahoo had fixed their flaws," Tallinn said. "Google launched Talk the same year and started a rumor that the service enabled you to call phones for free. Calls to phones were and still are practically the only source of income for Skype.
"We saw how the risks kept on increasing; the offer was really good and would probably stop there. Besides, in summer 2005, for the first time, there was a moment when our user base started decreasing, which unsettled us quite a lot. We thought we might have a bug or something."
"In 2005, we knew that Yahoo, AOL, Microsoft, and Google were all getting into our market," Zennström added. "We had 20 million active users at the start of the year, while they each had over 100 million active users. Hence it was impossible to assess whether we were big enough to continue to be number one or if we would get crushed by one of them. Therefore we initiated strategic discussions with all of them about partnering, but it led to the same result: they wanted to either acquire us or compete. Because there were a lot of interested parties, we managed to get a very good price, so coupled with the high risk that they could all crush us, it led to the decision to sell."
Clash of cultures
When the deal was done, the Americans sent their manager Brian Sweeney to Tallinn to find out what they had bought. He arrived at the Skype office and, to his surprise, everyone was quietly typing away at their computers.
A call came in from the US. "What's going on there?"
Sweeney replied, "Seems like nothing's going on..."
But Estonia grew on Sweeney. The office reminded him of the early years of eBay when people were enthusiastic about their work and there was no jibber-jabber or showing off. But the sale—and continued growth—did change Skype over the next few years. The Tallinn employees eventually felt a strong divide growing between them and Skype's other offices, especially London.
One issue concerned staffing. Estonia had plenty of great engineers, but it had no brand managers. Instead of showing apprentices in Tallinn how to become masters, Skype raised an army of managers in London, while the coders remained back in Tallinn.
"In the end, I spent half of my time pointlessly arguing with these people [in London], trying to make them understand that this camel's only got one hump," said engineer Kaido Kärner, who lost his motivation to work as a result of the quarrel. "They'd been working for the company for two weeks and thought they knew how things should be done."
Zennström said in an e-mail, "We faced both engineering vs. non-engineering and also Estonian vs. Anglo-Saxon culture and communication challenges. It is always easier to have one office and one nationality, but I think our mix, while harder to manage, built a stronger company and culture."
To create a sense of solidarity, the whole of Skype's international staff was invited to let their hair down in Estonia. At a fancy costume party at Sagadi Manor, Zennström dressed up as a pirate. At another event, Annus turned up as a blue monkey (inspired by software called Bonzi Buddy), holding a carton of juice and a bottle of Viru Valge vodka.
Then in 2006, the Americans were invited to what became the craziest party in Skype's history. It took place in Pärnu at the Strand Hotel. The more conservative management from the American eBay now met the liberal Estonian startup Skype en masse. The days were filled with "corporate bullshit bingo," as some Skypers called it, where the company's plans for development were outlined. In the evening, however, it was party time. Even the Californians sometimes think back on those nights.
As the bar closed, everyone spontaneously gathered by the pool and jumped in, fully clothed. Zennström was pouring vodka for everyone—first behind the bar, afterwards on top of it. "What happens in Estonia stays in Estonia," the usually reserved Zennström promised the guests.
Those eBay representatives who had gone back to their rooms before the pool party started turned on their TVs and saw a live broadcast of the party. They were shocked.
The owner of the hotel worked out the damages the next morning and Skype covered them. Skype users would get their own special emoticon to celebrate the party.
"We were young, most of us single, with no kids or anything—and if we knew how to work, we knew how to party too," one Skyper remembers.
But the parties didn't fix the broader cultural issues. The straightforward Annus left. The new Skypers loved Microsoft Outlook, which Annus had banned. As he put it, "If we're still sending e-mails, why did we even make Skype in the first place?"
In 2007, Jaan Tallinn sent the company's management and all of the employees a heartfelt letter called "Jaan Tallinn's million-dollar manifesto" that pointed out in detail all of Skype's technological and commercial blunders. He also promised to contribute a million dollars of his own money, provided the problems were solved.
"The people who had a start-up background all saw that things were getting out of hand and no longer being fully done," Tallinn told me. "People were focusing on things that were nice to talk about at meetings instead of what was good for users—and also that Skype kept issuing glitchy plugins that hadn't been properly developed."
Skype and eBay never meshed well. In 2011, Microsoft bought Skype for $8.5 billion.
Microsoft steps in
For the second time, Zennström and Friis cashed in on selling Skype. That's because, instead of giving eBay the critical base technology that kept Skype going (the P2P system known as "Global Index"), Zennström's and Friis's company Joltid still owned it—they simply licensed it to Skype. The whole situation devolved into threats of litigation until a 2009 settlement gave Zennström and Friis a chunk of Skype ownership, which made them even more money when Microsoft bought the company.
Almost none of the people who were there when Skype started are still with the company. Decisions are no longer made in Tallinn or London but in Redmond. Soon, Skype will probably become Microsoft Skype or Microsoft Talk, the product of a massive multinational rather than a scrappy startup.
Recent revelations from Edward Snowden, the NSA leaker now granted asylum in Russia, have also shone a light on Skype's new willingness to help law enforcement. Snowden revealed, for instance, that in February 2011 eBay opened up the "spy-proof" Skype to US intelligence agencies. In order to clear up the technological and legal nuances of snooping, a secret project called Chess was conducted in Skype—a scheme only a few people in the company were aware of. That cooperation has apparently extended to Microsoft.
Taking all of this into consideration, it is no wonder many of the employees of the original Skype consider the company's upcoming tenth birthday its funeral. I call Steve Jurvetson on the other side of the Atlantic. He struggles for half an hour but cannot get Skype to work. I call his mobile. "Did Microsoft mess Skype up?" I ask him. "I wouldn't be surprised, Microsoft has messed up almost everything," he replies.
"Being owned by a large company with other business interests across the globe is a negative for Skype. A big multinational, like eBay or Microsoft, needs to accommodate business partners and governments across the globe, which limits Skype's ability to pursue growth aggressively in ways that threaten the entrenched government or business interests. For example, Skype for Wi-Fi-enabled cell phones has been delayed by pressure from wireless carriers who see their voice revenue at risk."
Will Skype even keep its Tallinn office? Microsoft has been known to shut local European offices (like those of the Norwegian Fast), but it has also kept and developed some national units (like the Danish Axapta). In 2011, Steve Ballmer said in Tallinn that the company was short of engineers not just in Redmond but elsewhere in the world as well. There's a group of development centers situated in similar time zones along a north-south strip running from Norway to Israel. Some creators of Skype are certain the Tallinn office will be closed; others say Microsoft might start developing other products here too. Much depends on Estonia's attitude toward foreign engineers—and right now, the country is not too open to them.
During the lowest point of the recent global recession, a rumor spread in Tallinn that the Estonians from Skype and the people formerly involved in a giant forestry company called Sylvester were the only ones still knee-deep in cash. Skype employees half-jokingly say that Annus, Heinla, Kasesalu, and Tallinn won the jackpot.
Jaan Tallinn doesn't agree, saying that he had some extra cash as early as 1999 when Tele2 paid him generously for developing the Everyday.com portal. In any event, he says that money was not a goal in itself for any of the four Estonians who helped start Skype. Having become multi-millionaires, they didn't get cocky or vain. Almost none of them bought an expensive sports car. Instead, the money was a bit of a nuisance, as it needed to be invested wisely.
The changes it brought were most visible in Kasesalu, who has cut off all his hair, shed a great deal of weight, and picked up a driver's license. The change was so drastic that his friends started asking, "Priit, are you seriously ill?"
"My life has changed quite a bit," Tallinn says now. "After Peter Thiel, I'm the second richest person in the world investing in the survival of the human race. Time is now much more valuable than money."
Annus left Skype as soon as the company was sold to eBay in September 2005. Tallinn and Heinla continued for another couple of years before resigning. Kasesalu still plays for the Skype team to this day. Zennström and Friis, of course, have made fortunes.
Today in Tallinn, those working for Skype are not exactly in high spirits. The coffee and furniture are fancier than ever at the office in Mustamäe. But the inner fire and sense of cooperation that Toivo Annus inspired in people have faded. According to one source, the company's employee surveys confirm that the number planning to quit is growing all the time.
Taavet Hinrikus, whose company Transferwise aims to revolutionize how money is transferred, said it would be a blessing if Skype eliminated its office in Tallinn. In that case, talented Estonians could do things that would be far more useful to the country than the tax revenue of a few hundred high earners.
Zennström, however, is an optimist. He wrote, "The fact that they (Microsoft) have closed down MSN Messenger tells us that they are committed to Skype, which is one of the strongest brands in their portfolio. I hope you will e-mail me in another 10 years and want to do another story about Skype's second 10-year history."
Open Data
The phrase “open data” is heard increasingly around Whitehall and is even on the agenda of G8 meetings. But what is open data, what difference does it make and why should we care? Why is this abstract, even geeky, term getting so much attention?
This week two London summits will try to answer these questions. The first, organised by the Open Data Institute (ODI), which was founded exactly a year ago, will set out some of the results of open-data innovation. The second is an Open Government Partnership Summit involving senior figures from 60 countries.
Open data is information freely available on the Web for anyone to use without restriction. Over the past few years, governments in the UK and overseas have opened up more and more information in this way on websites such as data.gov.uk. Subjects can range from reported crime to train timetables, details of government spending to copies of contracts, infection rates in hospitals to the quality of bathing water. The data is non-personal, or made anonymous so that individuals cannot be identified from it.
This raw material can be used to build new services, and anyone can get in on the act. One start-up agency backed by the ODI, Mastodon C, took prescription data published each month by GPs in England and showed that if doctors had prescribed the generic version of statins rather than the more expensive, patented version, the NHS could have saved more than £200 million in a year. They now have contracts to do more analysis. The challenge is to use this kind of insight to change behaviour. But it is clear that open data can increase transparency, improve efficiency and create economic value.
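Once the data is open, the core of such an analysis can be remarkably simple. Here is a sketch in Python with pandas; the rows, column names and prices are invented for illustration, and Mastodon C's real pipeline is necessarily more careful about matching chemically equivalent drugs and doses.

import pandas as pd

# Invented sample rows in the spirit of the monthly GP prescribing files;
# the real data uses different column names and has millions of rows.
rx = pd.DataFrame({
    "drug":       ["Branded statin A", "Branded statin B",
                   "Generic atorvastatin", "Generic simvastatin"],
    "is_generic": [False, False, True, True],
    "items":      [120_000, 80_000, 950_000, 1_400_000],   # prescriptions
    "cost_gbp":   [3_100_000, 2_400_000, 1_200_000, 1_100_000],
})

generic = rx[rx["is_generic"]]
branded = rx[~rx["is_generic"]]

# Average cost per prescription when dispensed generically.
generic_rate = generic["cost_gbp"].sum() / generic["items"].sum()

# Potential saving if every branded item had been the generic equivalent.
saving = branded["cost_gbp"].sum() - branded["items"].sum() * generic_rate
print(f"Potential saving: £{saving:,.0f}")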
Open data can also increase public participation. The police.uk website, which shows reported crimes and their outcomes, has had more than 53 million visits from 22 per cent of all family households in England and Wales since its launch two years ago.
Public Health England’s Longer Lives site also attracted national attention when it published data about mortality rates. It found wide variations between local authorities, with two and a half times as many premature deaths in Manchester as in Richmond upon Thames. It also showed how local authorities with similar levels of deprivation were doing against each other. This could allow services to be matched to need much more effectively at a local level.
In the past year the ODI has helped ten start-up companies that are building new businesses out of open data. Many are working with the public sector. Carbon Culture is helping councils in Cardiff and Kensington and Chelsea to reduce their carbon footprints by publishing real-time information about energy consumption in their buildings.
Spend Networks (which launches in November) will help public sector buyers to save money by creating the first comprehensive and publicly available website comparing the cost of equivalent or identical items and services across government. This will open public sector dealings to public scrutiny — the “disinfectant of sunlight” as the Prime Minister once described it.
Apps created by Transport API, another ODI start-up, use travel data from Transport for London to allow commuters to alter journeys to avoid disruption and get to work on time. This is estimated to have saved the London economy up to £58 million a year.
The supply of useful data from the private sector is also gathering pace. Telefonica Dynamic Insights has agreed to make fully anonymous mobile network data publicly available for the first time. This data, combined with public sector information, can demonstrate to voters and government officials the results of their policy proposals. As a first step, the team has built a website that shows the effects of closing various fire stations in the current reorganisation of the London Fire Brigade. Until now, most of these sorts of decisions were made behind closed doors.
We founded the ODI with a mission to use open data to transform public services, improve policy-making, help new open-data businesses and boost economic and social value in both the public and private sectors. At our summit tomorrow we will announce new ODI centres across the globe.
Open data is an idea whose time has come. Anyone can access open government data, and anyone can use that data to provide a more transparent, efficient and profitable application or service.
Quora
One sign of a good education is the recognition that questions are often more interesting than answers. A website called Quora increasingly illustrates this principle. It was launched in 2010 by a couple of twentysomething Facebook executives, with an initial valuation of $86m — and now a figure in the billions. Stephen Fry and the actor Ashton Kutcher were early fans, but the initial response from many British users was rather muted.
This seems to have changed in recent months. Quora has ripened into one of the most consistently engaging and enlightening spaces on the internet. Its premise is simple. Using their real names, people post questions for others to answer. You vote an answer up or down according to whether or not you like it; the most popular rise to the top.
Some questions, such as “What are some of the most epic photographs ever taken?”, offer nothing more than standard and enjoyable internet procrastination. Others, such as “What’s it like to be a geek in prison?” or “How does Apple keep secrets so well?”, allow people with specialist knowledge of a subject to explain. One of the most-viewed questions is “What’s it like to be a drug dealer?”. The “best” answer is a fascinating 3,000-word account from a self-professed “mid-level trafficker” who sold more than $1m-worth of marijuana and cocaine after leaving university — and got away with it.
The best questions are the ones with no right answer. In response to “What is/are the most startling discoveries made by the human race?”, users have argued for electricity, money, the wheel, antimatter, antibiotics, the heliocentric model and cooking. The current leader is mathematics.
What is the most underrated piece of art? Quora thinks it is Harry Beck’s 1931 map of the London Underground. This could be because most of Quora’s users are in America, though as many as a third are thought to live in India.
It is by no means a perfect place. The rhetorical skill of a respondent may bring an answer more “upvotes” than a carefully researched but badly written one receives. The highest-voted answer to “Why did almost all societies believe that women were inferior to men?” is several paragraphs of powerfully argued sophistry.
But a glance at Quora is easy to accommodate in one’s day and always worth it — if only for its brilliant insights and bottomless originality. You choose which topics to follow. On Friday people were giving answers to questions including “Why is glass so chemically stable?”, “What is the one question you wish no one would ever ask you?”, “Is it healthier to eat a banana when it’s fresh or when it’s ripe?”, “Would a child grown in total isolation develop language?” and “How close are we to a real Iron Man suit?”. The replies are alternately earnest and frivolous.
My favourite answer to “What are the most environmentally wasteful designs?” is “grass lawns”.
The respondent writes: “Billions of gallons of water and millions of pounds of fertiliser, herbicides and pesticides are used annually to trick non-native grass species into growing where they don’t naturally thrive”, explaining that these adulterants enter rivers and cause algae bloom, establishing dead zones.
It is the sort of clever, surprising answer for which the site is so good.
Cheating Hacks For Video Games
Zero is a customer service representative for one of the biggest video game cheat providers in the world. To him, at first, I was just another customer. He told me that the site earns approximately $1.25 million a year, which is how it can afford customer service representatives like him to answer questions over TeamSpeak. His estimate is based on the number of paying users online at any given time, the majority of whom, like me, paid for cheats for one game at $10.95 a month. Some pay more for a premium package with cheats for multiple games.
As long as there have been video games, there have been cheaters. For competitive games like Counter-Strike, battling cheaters is an eternal, Sisyphean task. In February, Reddit users raised concerns about lines of code in Valve Anti-Cheat (VAC), used for Counter-Strike and dozens of other games on Steam, that looked into users’ DNS cache. In a statement, Gabe Newell admitted that Valve doesn’t like talking about VAC because “it creates more opportunities for cheaters to attack the system.” But with online surveillance such a charged issue lately, he made an exception.
Newell explained that there are paid cheat providers that confirm players paid for their product by requiring them to check in with a digital rights management (DRM) server, similar to the way Steam itself has to check in with a server at least once every two weeks. For a limited time, VAC was looking for a partial match to those (non-web) cheat DRM servers in users’ DNS cache.
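Valve has not published that code, but the check Newell describes is easy to sketch. In the toy version below every domain name is invented; the point is only the partial matching of DNS-cache entries against a list of known cheat-DRM hosts, not VAC's actual logic:

```python
# Illustrative sketch only: scan a (mock) DNS cache for partial matches
# against known cheat-DRM domains. All domain names here are invented.
CHEAT_DRM_DOMAINS = {"auth.examplecheat.net", "drm.cheatvendor.example"}

def suspicious_entries(dns_cache: list[str]) -> list[str]:
    hits = []
    for entry in dns_cache:
        # A "partial match" also flags subdomains of a known cheat-DRM host.
        if any(entry == d or entry.endswith("." + d) for d in CHEAT_DRM_DOMAINS):
            hits.append(entry)
    return hits

cache = ["cdn.steamstatic.com", "auth.examplecheat.net", "mail.example.org"]
print(suspicious_entries(cache))  # ['auth.examplecheat.net']
```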
I knew that cheats existed, but I was shocked that enough people paid for them to warrant DRM. I wanted to find out how the cheating business worked, so I became a cheater myself.
That’s how I found Zero. After we finished talking, he reminded me to send him the $25 I promised him. I did not at any point say anything that could possibly even suggest that I would pay him for any reason. I asked him if he meant that was something I promised him or something that I should just do. “Both,” he said. “I also advise you not to use this information against me. That wouldn't be wise.”
How I became a cheating scumbag
Bohemia Interactive (Arma, DayZ) believes that only 1 percent of online players are willing to spend money to cheat on top of an already expensive hobby. Even by that estimate, Counter-Strike: Global Offensive alone had a potential 25,000 cheaters out of a total of 2.5 million unique players last month. Put on your green accountant visor, add up the player-bases of all the other popular multiplayer games cheat providers are servicing (Call of Duty, Battlefield, Rising Storm), and you’ll see a massively profitable market.
I wanted to cheat in CS:GO. I was good, once, when I had a high school student's endless free time to pour into Counter-Strike 1.3. These days, if I can play with friends, it’s fun. If I jump onto a random server I’m cannon fodder.
I Googled “Counter-Strike: Global Offensive cheats,” and quickly ended up at a user-friendly cheat provider. Based on the size of its community and traffic, it’s one of the biggest. I'm going to call it Ultra Cheats, a fake name, to protect the anonymity of the sources I talked to. Those sources, like Zero, have also had their online handles altered.
Ultra Cheats didn’t accept credit cards (other sites did), so I used PayPal to buy a one-month subscription for CS:GO cheats for $10.95. This gave me access to the site’s VIP forums, where I could talk to other members, administrators, and cheat coders, and download Ultra Cheats’ cheat loader, which checks in with its DRM server. It also gave me access to around-the-clock technical and customer support via TeamSpeak.
I followed a simple list of steps, including disabling Windows’ default anti-virus protection. I launched a new copy of CS:GO on a fresh Steam account belonging to “Perry C. Gamble,” “loaded” the cheat using the cheat loader, and entered a match. For the first time, I wasn't just another player, but a kind of god.
The most obvious of my new superhuman abilities was spying on other players through walls. In CS:GO, wallhacking is incredibly useful. Faceoffs around corners come down to millisecond reactions. My ability to see exactly when the enemy was coming, or to know exactly where he was hiding when I was coming, was unfair to say the least.
It was also super fun. Maybe the most fun I've had with Counter-Strike in years. I was finally getting kills, more than one in a round, but I wasn't crushing everyone else. It was like a little boost that got me back into my high school fighting shape.
I wanted to see how far I could push it. I was paying for this. I wanted to feel powerful and get my money’s worth. I turned on auto-aim and auto-trigger, which fires your weapon automatically when you point your cursor at an enemy.
I played with these options and others for a handful of matches. They didn't seem as useful as wallhacking, or they simply didn't work as well, but I was vote-kicked out of a match before I could make an educated decision. Halfway into my next match, two hours total since I started cheating, I was VAC-banned from CS:GO.
Counter-terrorists win?
VAC bans are usually irreversible. Perry C. Gamble would never play another match of CS:GO unless he opened another Steam account and bought another copy of the game. That’s where the charm of cheating wore off for me. It was fun while it lasted, but I couldn’t imagine paying another $15 for a new copy of CS:GO plus the ongoing $10.95 a month Ultra Cheats membership just to get easy kills.
John Gibson, president of Tripwire Interactive (Rising Storm, Killing Floor) told me plenty of cheaters feel differently. “We see a spike in hackers after we have a sale on one of our games,” he said. “Their last 10 Steam accounts have been banned, and the game is on sale for $3, so they’ll buy 10 copies for $30 on 10 different accounts and they’ll keep cheating.”
I told Gibson that I found that behavior mind-boggling. He isn’t confused by it. He’s just angry. “Give me five minutes alone with a hacker or a hack writer,” he laughed. “That’s what I think about that mindset.”
Newell called cheating “a negative-sum game, where a minority benefits less than the majority is harmed.” It’s obvious Valve and other developers take the issue seriously, but talking to Gibson made me realize it’s also personal. Before he would even talk to me, I had to prove that I wrote for PC Gamer. He’s been burned before. One of his first experiences with a hacker was someone who pretended to be a journalist with a fake, up-to-date gaming blog. He leveraged his early access to Tripwire and other developers’ games to provide hacks and pirate games.
He’s in jail now—for stealing credit card data, not cheating.
Gibson told me that, legally, it’s not worth going after sites like Ultra Cheats. Most of them are based out of Russia, China (Ultra Cheats is registered in Beijing), or other places where extradition is, in Gibson's words, “questionable.” At the very least, Tripwire would have to pay another lawyer in that country, making it prohibitively expensive and complicated.
Criminal justice systems, perhaps understandably, aren't preoccupied with people cheating in online games. “Especially when it’s international,” Gibson said. “Then you’re talking about the FBI and Interpol. If someone stole $10 million in diamonds, call them. If someone is hacking your game, they don’t care.”
If Tripwire, Valve, or other developers want to reduce the number of cheaters, they have to do it themselves. Note that it’s “reduce” and not “eliminate.” Like Newell, Gibson knows that this isn't a battle he can finish. “It’s like the Wild West,” he said. “It’s more about managing the risk and hacks without inconveniencing your legitimate players too much.”
Tripwire’s anti-cheat strategy is three-pronged. The first is technical, using both VAC and Punkbuster. This is one topic Gibson was secretive about, but he said Tripwire uses both because “they handle things in different ways.”
"If Tripwire, Valve, or other developers want to reduce the number of cheaters, they have to do it themselves."
The second is being a proactive developer. When Tripwire notices a loophole, it closes it as fast as possible. When Red Orchestra 2 first launched, it didn't do a whole lot of server-side validation on hit detection. The game was plagued by hacks that allowed your machine to tell the server you shot someone in the head even when you were clear across the map. “Very quickly we put up an update that basically verified, within a reasonable margin of error, that they kind of have to be where you say you shoot them at,” Gibson said. “If they’re not, then we know that it’s a hack and we ignore that shot.”
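Gibson's fix amounts to a server-side sanity check on every claimed hit. Here is a minimal sketch of that idea, with an invented margin and two-dimensional positions for brevity (Tripwire's real check is more involved):

```python
import math

# Server-side hit validation, in the spirit of Gibson's description: the
# client claims a hit at some position; the server only accepts it if the
# victim actually was near that position, within a margin that allows for lag.
MARGIN = 2.0  # metres; an illustrative value, not Tripwire's

def accept_hit(claimed_hit_pos, victim_server_pos) -> bool:
    dx = claimed_hit_pos[0] - victim_server_pos[0]
    dy = claimed_hit_pos[1] - victim_server_pos[1]
    return math.hypot(dx, dy) <= MARGIN

print(accept_hit((10.0, 5.0), (10.5, 5.5)))   # True: a plausible shot
print(accept_hit((10.0, 5.0), (90.0, 40.0)))  # False: "clear across the map"
```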
The third is having an engaged server admin community and giving them the tools to be the third line of defense. “That’s a huge thing for us,” Gibson said. “Hackers come in, it’s obvious fairly quickly that they’re hacking, the server admin bans them from the server and problem solved.”
Punkbuster also allows server admins to take screenshots of what players see. If the server admin captures evidence of cheating, he or she can submit the proof to PBBans, a global database of hackers, making it very difficult for that hacker to join any Punkbuster servers.
This also allows server admins to pass along evidence of cheating to Tripwire, which can use the information to close more loopholes.
Overall, Gibson thinks this strategy works very well. “I have over 1,275 hours in Red Orchestra 2 and Rising Storm,” he said. “I’ve been on a server with about two hackers in all that time.” I asked him if Tripwire downloads paid cheats as part of its efforts to prevent them. “We’re a proactive dev,” he chuckled. “Infer from that what you will.”
Gross Income
After being banned from Counter-Strike, I spent several weeks poking around the Ultra Cheats forums hoping that someone would talk to me about how the site was managed. I only got real attention once I admitted that I was writing a piece for PC Gamer. I bounced from admin to admin until I got to Slayer, Ultra Cheats’ manager and lead coder.
Slayer didn't want to talk at first. “I don’t think any good for Ultra Cheats would come from this,” he said. I promised him I wouldn’t use any real handles or even the site’s real name, and that I wanted him to respond to quotes from developers like Gibson. I suspect the notion that he’d get a reaction from a game developer is what got him on board.
Like Gibson, he needed confirmation that I was really writing for PC Gamer, and he was more thorough about it. I gave him my real email address and name (not Perry C. Gamble’s), Twitter, and an email confirmation from an editor.
Gibson was worried about hackers posing as journalists. Slayer was worried about giving legal ammunition to parties that want Ultra Cheats gone, and competing cheat providers.
We set a date to talk over Skype, but when the time came Slayer wouldn’t agree to a voice call, just text, because he was worried about me recording him as well as “other reasons.” To my surprise, he brought along another Ultra Cheats administrator, Prophet, and they’d only talk to me together. I guessed that this was to keep one another from saying anything they might regret.
They said part of Ultra Cheats’ money comes from a different site that it operates in Brazil (a huge gaming market) and reseller sites, which sell Ultra Cheats’ product under a different brand in exchange for a cut of sales.
Slayer said that Zero’s $1.25 million a year was a little inflated, but that I could come up with a rough estimate of Ultra Cheats’ annual revenue by gauging the size of the community.
On March 20, over 2,500 members logged into the Ultra Cheats’ forums, almost all of whom are plainly listed as paying for standard or more expensive cheat packages. At an average of $10 per user a month, Ultra Cheats makes $300,000 a year. Add to this the fact that the forum has almost 150,000 members overall (though we don’t know how many are active, paying users), the Brazil site, and resellers, and it’s not hard to imagine Ultra Cheats breaking a million dollars a year. Slayer declined to share the exact number of their active users.
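The back-of-the-envelope maths is simple to reproduce. The sketch below uses only figures quoted in this piece (Bohemia's 1 percent, last month's CS:GO player count, and the forum numbers above); everything else is arithmetic:

```python
# Rough estimates using only the figures quoted in this article.
csgo_players = 2_500_000   # unique CS:GO players last month
cheat_rate = 0.01          # Bohemia Interactive's 1 percent estimate
print(int(csgo_players * cheat_rate))            # 25000 potential cheaters

members_online = 2_500     # Ultra Cheats forum logins on March 20
avg_fee_per_month = 10     # dollars, rounded down from $10.95
print(members_online * avg_fee_per_month * 12)   # 300000 dollars a year
```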
He said coders supply cheats on the site in exchange for a cut of the sale. These “vendors,” as Slayer calls them, take in about half the profits of the whole operation. Both Prophet and Slayer said that they get paid “enough,” but not enough to quit their day jobs. “More than minimum wage,” they said. Customer support, technical support, and other people like Zero who help run the site get paid as well, but less. Zero didn’t want to say how much he makes, but admitted that he has a day job and that free cheats attracted him to the position.
“I do this because I really think of the community and staff as a big family,” Prophet said.
The rest of the money goes to “the ownership entity,” which Slayer and Prophet refused to talk about in any way. All they would say is that the entity controls the PayPal account I paid (and hence all Ultra Cheats' money) and that only Slayer knows anything about it. Anything between this ownership entity and the rest of Ultra Cheats goes through him. For all I know, this ownership entity doesn’t even exist and Slayer and Prophet were the actual owners.
Rage cheaters and closet cheaters
Gibson said that if you cheat, you always get detected eventually. After talking to cheaters, I’m not sure that developers are as effective at preventing cheats as they think. According to Slayer, there are two kinds of cheaters: rage hackers and closet hackers. A rage hacker is someone who uses cheats to their fullest potential, even employing features that kill everyone on the server instantly. They're the ones you notice and hate.
Zero said that if it wasn't for hacking, games wouldn't be fun. He said cheating is a rush, similar to the one he got when he used to deface websites. “In life, you’re always going to have rebels,” he said. “It’s like coming up to someone and asking, 'Why do you rape or kill?' But in this case it’s cheating.”
Since he compared cheating to the worst crimes a human can inflict on another human, I asked him if that means he thinks it’s a bad thing. He didn't answer. I asked him how he would feel if he was in a game with another player who was using cheats against him. “Doesn't matter to me because he’s probably one of our customers,” he said.
Slayer agrees with Gibson that anti-cheats like VAC and Punkbuster, which work similarly to anti-virus software, are effective at catching ragers and detecting “public” cheats quickly. “But their methods are so reverse-engineered it’s not even funny,” he said. Punkbuster’s signature scans are easily dumped using knowledge freely available on public forums; if you know the methods they employ, you can get around them easily.
"In life, you’re always going to have rebels. It’s like coming up to someone and asking, 'Why do you rape or kill?' But in this case it’s cheating.' "
“Punkbuster is basically defeated,” Slayer said. “If I write cheats and give them away on a public forum I can have my cheat up and running in 20 seconds because I found out exactly what they detected. If I was smart I would build that into my cheat and have my cheat fix itself on the fly, which isn’t a stretch. Call of Duty dropped Punkbuster for a reason.”
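Slayer's claim is easier to follow with a toy model of what a signature scan does. The sketch below (invented bytes, not Punkbuster's actual method) flags a cheat by matching its exact contents, which is precisely why changing a single byte yields an undetected build:

```python
import hashlib

# Toy signature scan: flag a binary whose hash appears on a blocklist.
KNOWN_CHEAT_HASHES = {
    hashlib.sha256(b"cheat v1.0 payload").hexdigest(),
}

def is_detected(binary: bytes) -> bool:
    return hashlib.sha256(binary).hexdigest() in KNOWN_CHEAT_HASHES

print(is_detected(b"cheat v1.0 payload"))  # True: signature matches
print(is_detected(b"cheat v1.1 payload"))  # False: one changed byte, new hash
```

Real scanners match smaller byte patterns rather than whole-file hashes, but the evasion logic is the same: find what is being matched, then change it.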
I asked Slayer why Valve, for example, doesn't download his cheats, track the server, block it, and come after him. If it wasn't obvious already, I wasn't a Computer Science major. Slayer is, and my questions amused him. “You could do that, but what if I cycle my server IP every day, or every hour? Or I could reasonably and securely move DRM to the client with checks on a less regular basis, or I could just spoof what VAC sees :). To be honest Emanuel, I can rent a server using a prepaid credit card via a VPN in another country and you will NEVER find who rented it.”
Closet hackers hide the fact that they cheat. I'm proof that cheaters do get caught—Steam banned me after a little more than two hours of aggressive, blatant cheating—but members of the Ultra Cheats community told me that I was simply doing it wrong. In one of the most friendly, polite exchanges I’ve ever had with online strangers, especially in the gaming sphere, they gave me tips on how to cheat without being detected.
“Play like you’re not hacking,” one user who’s been cheating in CS:GO with the same Steam account for over 250 hours told me. “Play as you would normally, only you’re able to see through walls. Act.”
That means don’t stare at walls, don’t use an aimbot (since it moves the camera erratically and results in unreasonable kills), and make sure someone kills you in every match. He also believed you’re less likely to get banned if you buy in-game items and get some hours in before you start cheating. He suggested that next time, I should launch the game and let it idle for a few hours before I do anything.
Another cheater suggested I practice cheating in free-to-play games. “That’s what I love about free games,” he said. “You can just keep coming back and there’s nothing they can do about it.”
If you’re a good closet hacker, you also won’t get caught by statistical anti-cheats like FairFight, used in Titanfall and other Electronic Arts games, or by Overwatch, a peer-review layer of CS:GO’s anti-cheat strategy in which approved players view flagged replay footage and vote on whether another player was cheating.
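Statistical anti-cheat works on the player's numbers rather than the player's machine. FairFight's actual models are proprietary; the sketch below shows the general idea with an invented metric, flagging anyone whose headshot ratio is an extreme outlier for the population:

```python
from statistics import mean, stdev

# Flag players whose headshot ratio is an extreme outlier relative to the
# population: the core idea behind statistical anti-cheat. Real systems
# use richer, proprietary models; the threshold here is illustrative.
def flag_outliers(ratios: dict[str, float], z_threshold: float = 4.0) -> list[str]:
    values = list(ratios.values())
    mu, sigma = mean(values), stdev(values)
    return [p for p, r in ratios.items() if sigma and (r - mu) / sigma > z_threshold]

population = {f"player{i}": 0.15 + 0.01 * (i % 7) for i in range(200)}
population["closet_cheater"] = 0.90  # headshots on 90% of kills
print(flag_outliers(population))     # ['closet_cheater']
```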
Tripwire closes loopholes as fast as possible, but Ultra Cheats is fast too. If a vendor’s cheat stops working, Ultra Cheats stops selling it and the money stops flowing. Detected cheats come back online within hours, days at the most.
And these are only the cheats that we know about. “Anti-cheat can’t detect what it can’t get its hands on,” as Slayer said. Between that and the proficient closet cheaters, I can guarantee that you’ve played with way more cheaters than you think.
Supply and demand
If closet cheaters aren't trying to crush other players, why do they turn to cheats in the first place? Prophet started cheating so he could play with his kids. He’s “over 50,” and suffers from a serious visual impairment. He says that without ESP (extrasensory perception), part of the wallhacking cheat that highlights enemy players with bright red boxes, he wouldn't be able to keep up. “If I did not use cheats I would not be playing at all,” he said.
Slayer said that they've heard from a few other people with disabilities who use cheats this way. “It enables them to enjoy a game like you or I would normally, without cheats,” he said. But even if there weren't players with disabilities cheating to “rise to a normal level of play,” as Prophet calls it, the reality is some players will always feel that they want special assistance.
If matchmaking worked perfectly and everyone always felt like a capable player up against equally skilled opponents, maybe there would be fewer of the closet cheaters that make Ultra Cheats a profitable business. When matchmaking works, you won't win every game, but you'll never feel dominated. It’s like a friendly neighborhood basketball game. When it doesn't work, it feels like being mercilessly dunked on by LeBron James. That's not fun.
At that point some players dedicate a significant amount of time to get better. Others quit. A small minority turns to cheats. Even Slayer admits that what he does isn’t good for games, but as long as there are enough of the latter he’ll provide supply where there’s demand.
Ultimately, the most effective anti-cheat strategy is to make cheating feel unnecessary. That means either more sophisticated, accurate matchmaking or some kind of handicap system, which some fighting games (Street Fighter IV, Smash Bros.) already implement.
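Both remedies rest on standard mechanisms. Skill-based matchmaking, for instance, is commonly built on an Elo-style rating; the sketch below uses illustrative parameters and is not drawn from any particular game:

```python
# Elo-style skill rating: a standard building block for skill-based
# matchmaking. K and the starting ratings are illustrative choices.
K = 32

def expected_score(rating_a: float, rating_b: float) -> float:
    # Probability that A beats B, given the rating gap.
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool):
    ea = expected_score(rating_a, rating_b)
    score = 1.0 if a_won else 0.0
    delta = K * (score - ea)
    return rating_a + delta, rating_b - delta

# A fair match moves ratings little; an upset moves them a lot.
print(update(1500, 1500, a_won=True))  # roughly (1516, 1484)
print(update(1500, 1900, a_won=True))  # underdog wins: a big swing
```

Pairing players with similar ratings is what keeps games feeling like the friendly neighborhood pickup game rather than a mismatch.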
Similar solutions in other games won’t stop ragers. Nothing will. But they'll get caught, eventually. For closet cheaters, it might offer a legitimate way to play with others and undercut the paid cheats business.
Until then, “this cycle is unstoppable,” as Slayer said. “If we didn't do it, someone else would.”
Net Neutrality
The net neutrality debate is based on a mental model of the internet that hasn’t been accurate for more than a decade. We tend to think of the internet as a massive public network that everyone connects to in exactly the same way. We envision data traveling from Google and Yahoo and Uber and every other online company into a massive internet backbone, before moving to a vast array of ISPs that then shuttle it into our homes. That could be a neutral network, but it’s not today’s internet. It couldn’t be. Too much of the traffic is now coming from just a handful of companies.
Craig Labovitz made this point last month, when he testified before a Congressional committee on the proposed Comcast-Time Warner merger. Ten years ago, internet traffic was “broadly distributed across thousands of companies,” Labovitz said in his prepared statement to the committee. But by 2009, half of all internet traffic originated with fewer than 150 large content and content-distribution companies, and today, half of the internet’s traffic comes from just 30 outfits, including Google, Facebook, and Netflix.
Because these companies are moving so much traffic on their own, they’ve been forced to make special arrangements with the country’s internet service providers that can facilitate the delivery of their sites and applications. Basically, they’re bypassing the internet backbone, plugging straight into the ISPs. Today, a typical webpage request can involve dozens of back-and-forth communications between the browser and the web server, and even though internet packets move at the speed of light, all of that chatter can noticeably slow things down. But by getting inside the ISPs, the big web companies can significantly cut back on the delay. Over the last six years, they’ve essentially rewired the internet.
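The arithmetic behind that slowdown is worth making explicit. The round-trip counts and times below are illustrative rather than measured, but they show why cutting the latency of each round trip matters more than raw bandwidth:

```python
# Illustrative: why dozens of round trips dominate page-load time even
# though individual packets move at (nearly) the speed of light.
def page_load_ms(round_trips: int, rtt_ms: float) -> float:
    return round_trips * rtt_ms

# The same page request over a long backbone path vs. a peered/CDN path:
print(page_load_ms(round_trips=40, rtt_ms=80))  # 3200 ms via distant servers
print(page_load_ms(round_trips=40, rtt_ms=15))  # 600 ms from inside the ISP
```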
Google was the first. As it expanded its online operation to a network of private data centers across the globe, the web giant also set up routers inside many of the same data centers used by big-name ISPs so that traffic could move more directly from Google’s data centers to web surfers. This type of direct connection is called “peering.” Plus, the company set up servers inside many ISPs so that it could more quickly deliver popular YouTube videos, webpages, and images. This is called a “content delivery network,” or CDN.
“Transit network providers” such as Level 3 already provide direct peering connections that anyone can use. And companies such as Akamai and Cloudflare have long operated CDNs that are available to anyone. But Google made such arrangements just for its own stuff, and others are following suit. Netflix and Facebook have built their own CDNs, and according to reports, Apple is building one too.
The Google Edge
Does this give companies like Google and Netflix a potential advantage over the next internet startup? Sure it does. But this isn’t necessarily a bad thing. In fact, this rewiring has been great for consumers. It has allowed millions to enjoy House of Cards, YouTube, and Kai the hatchet-wielding hitchhiker. It’s the reason why the latest version of high-definition video, Ultra HD 4K, is available for streaming over the internet and not on some new disk format.
Plus, although Google does have an edge over others, not every company needs that edge. Most companies don’t generate enough traffic to warrant a dedicated peering connection or CDN. And if the next internet startup does get big enough, it too can arrange for a Google-like setup. Building the extra infrastructure is expensive, but making the right arrangements with a Comcast or a Verizon is pretty cheap - at least for now.
The problem today isn’t the fast lanes. The problem is whether the ISPs will grow so large that they have undue control over the market for fast speeds—whether they can independently decide who gets access to what connection at what price.
Why Do Trolls?
Why do people think it’s okay to say racist, inflammatory, or otherwise socially inappropriate things online? Research in communication and psychology has investigated people’s perceptions, rationale, and behavior and identified several factors that determine an individual’s likelihood of posting offensive content.
1. Anonymity. Some people are under the impression that you can say anything online and get away with it. Online forums, comment sections of news media sites, and other social media sites such as Reddit and Twitter allow people to make up screen names or handles that are not linked to their real world identity. The online disinhibition effect suggests that this anonymity may drive more deviant behavior, because it is easy to avoid consequences.
This is one reason that some online news sites now require users to sign in and comment using Facebook. The idea is that Facebook’s user agreement requires the use of one’s real name, and the news sites hope that people will be more conscientious about what they post via Facebook given that it’s tied to their true identity. (This is not to say people don’t make fake, pseudonymous Facebook accounts, but rather this is the guiding logic.) The problem with that is…
2. Perceived obscurity. Even if people are using their own Facebook accounts tied to their offline identities and know they are not anonymous, they may still have feelings of obscurity. That is, they believe that their expressions are still relatively private. If Henry is commenting on the site of his hometown paper in Kentucky, for example, he may feel less obscure than if he is commenting on a story on the Washington Post website. Even though their comments are tied to their names, posters think the people that matter in their lives won’t really notice these comments, and the people who will see these comments are just faceless masses that they won’t encounter offline.
3. Perceived majority status. The spiral of silence theory suggests that when people think they are in the majority in a certain setting, they will more freely express their opinion than those who are in the minority, who fear social ostracism if they express an unpopular opinion. Thus, although individuals may not make racist comments offline, they may feel it’s okay to do so in a particular online setting because they think their opinion is the prevalent one.
4. Social identity salience. The social identity model of deindividuation effects, commonly referred to as the SIDE model, suggests that social identity sometimes means more than our individual identity online. Sarah might be a nice, civil person offline, but when she goes online to talk about her favorite soccer team, she may behave like other hooligan fans and hurl insults at opponents and their fans. This occurrence is also often seen in political discussions, where people start responding like a group member based on political, national, ethnic, religious, or other identity or affiliation (e.g., a Libertarian, an Israeli-American, a Catholic).
This is the process of deindividuation, which in more drastic forms is known as “mob mentality”: You stop seeing yourself as an individual person and act more in line with the group. As a result, the group’s behavior can become more extreme than it would have been, as everyone is shifting to conform to the group even if they’re not as passionate or opinionated as other group members.
5. Surrounded by “friends.” On sites like Facebook, people may perceive their online environment to be full of people like them, because they are part of the same social network. Thus, individuals feel confident self-expressing because they anticipate getting support or agreement from their network members. John might post an angry, vitriolic political message because he assumes that his network members feel the same way. He might even do so to get “likes” or other expressions of agreement from his friends.
Often, however, our social media networks are more heterogeneous than we think. Privacy settings determine how broadly our posts may be read, and sometimes the audience includes “friends of friends”—i.e., people we may not even know. Further, our comments can easily be shared outside of our immediate network. Thus, although we feel we are surrounded by people who agree with us, there actually may be many people who disagree or find our comments hurtful, insulting, or offensive.
6. Desensitization. Over time, we may get desensitized to the online environment. Whereas once we would have thought about the consequences of what we post online, now we just post without thinking about it. We may see so many nasty comments that we think making one ourselves is not a big deal. If we get used to using a certain social media site like Facebook to express our daily experiences and frustrations, we start to lose our filter. It is also easier to type something offensive or mean into a screen rather than say it to someone’s face.
7. Personality traits. Some individuals are outspoken by nature. Others tend to think that they are morally superior to others. And some just enjoy making other people uncomfortable or angry. Any of these traits may drive individuals to express themselves online without a filter. Personality traits such as self-righteousness and social dominance orientation (where you think some social groups, typically yours, are inherently better than others) are related to expressing intolerance. Others are “hard core” believers who will express their opinions no matter what, because they believe their opinion is infallible.
8. Perceived lack of consequences. Social exchange theory suggests that we analyze the costs and benefits in our communication and relationships. All of the factors above feed the belief that the benefits of expressing oneself outweigh any costs. Anonymity and obscurity suggest you won’t be held personally responsible. Perceived majority status, social identity salience, or being surrounded by friends means you believe that even if some people are upset or angry, you have more (or more important) people on your side of the argument, so you are winning more friends than you’re losing. Personality traits and desensitization may make offending or losing friends seem like no real consequence: those friends aren’t worth keeping if they can’t handle the “truth,” and they aren’t really friends if they don’t agree with you or tolerate you.
Robo Brain
Robo Brain – a large-scale computational system that learns from publicly available Internet resources – is currently downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals. The information is being translated and stored in a robot-friendly format that robots will be able to draw on when they need it.
To serve as helpers in our homes, offices and factories, robots will need to understand how the world works and how the humans around them behave. Robotics researchers have been teaching them these things one at a time: How to find your keys, pour a drink, put away dishes, and when not to interrupt two people having a conversation. This will all come in one package with Robo Brain.
"Our laptops and cell phones have access to all the information we want. If a robot encounters a situation it hasn't seen before it can query Robo Brain in the cloud," said Ashutosh Saxena, assistant professor of computer science at Cornell University. Saxena and colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, say Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behavior.
If a robot sees a coffee mug, it can learn from Robo Brain not only that it's a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full, as opposed to when it is being carried from the dishwasher to the cupboard.
The system employs what computer scientists call "structured deep learning," where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Robo Brain knows that chairs are something you can sit on, but that a human can also sit on a stool, a bench or the lawn.
The Robo Brain stores what it has learned in a form mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines (formally called nodes and edges). The nodes could represent objects, actions or parts of an image, and each one is assigned a probability – how much you can vary it and still be correct. In searching for knowledge, a robot's brain makes its own chain and looks for one in the knowledge base that matches within those limits. "The Robo Brain will look like a gigantic, branching graph with abilities for multi-dimensional queries," said Aditya Jami, a visiting researcher at Cornell, who designed the large-scale database for the brain. Perhaps something that looks like a chart of relationships between Facebook friends, but more on the scale of the Milky Way Galaxy.
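Robo Brain's internals are not public beyond descriptions like Jami's, but the node-and-edge structure can be pictured with a toy version. Every fact and weight below is invented for illustration, not Robo Brain's actual schema:

```python
# Toy probabilistic knowledge graph in the spirit of Robo Brain's design:
# nodes (objects, actions) joined by weighted edges. All data is invented.
edges = [
    ("coffee_mug", "is_a", "container", 0.99),
    ("coffee_mug", "grasp_by", "handle", 0.95),
    ("coffee_mug", "carry", "upright_when_full", 0.90),
    ("chair", "affords", "sit_on", 0.99),
    ("stool", "affords", "sit_on", 0.85),
    ("lawn", "affords", "sit_on", 0.60),
]

def query(relation: str, obj: str, min_prob: float = 0.5):
    """Find subjects linked to `obj` by `relation`, above a confidence limit."""
    return [(s, p) for s, r, o, p in edges if r == relation and o == obj and p >= min_prob]

# "What can a human sit on?"
print(query("affords", "sit_on"))
# [('chair', 0.99), ('stool', 0.85), ('lawn', 0.6)]
```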
Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing. The Robo Brain website will display things the brain has learned, and visitors will be able to make additions and corrections.
More On Trolls
Online abuse teaches us two things: that we should all be nicer to each other – and that we must grow thicker skins.
Brenda Leyland, the Leicestershire mother who killed herself five days ago, was not a “troll”. She was, rather, a common-or-garden internet conspiracy theorist. I’ve come across hundreds, maybe thousands, of Brendas, all of whom believe there has been a vast official cover-up of some traumatic event and who feel it is their duty to bring it to the attention of a supine or bamboozled public. Some go for power theories, such as suggesting that George Bush blew up the World Trade Centre, others for health and child-related conspiracies such as the one over MMR or, in Leyland’s case, the fate of Madeleine McCann.
In her head, Madeleine’s parents were the key to the conspiracy, so when she typed into her computer phrases such as “I’d waterboard Kate and Gerry”, it was not fundamentally different from a 9/11 theorist writing: “I’d waterboard Dick Cheney and Donald Rumsfeld.”
There is an understandable fashion among mainstream journalists for “outing” anonymous so-called trolls. The logic runs that they deserve to be confronted because of the distress we think they cause. But the problem, as Carol Midgley wrote in yesterday’s Times, is that we have no idea who they really are, or their state of mind. In this instance the woman turned out to be very fragile indeed.
I am pretty sure that if the Sky News crime reporter Martin Brunt had his time over again he would decide not to doorstep an unknown and unknowable minor social miscreant like Brenda. In a very soft and calm voice she told him that she thought she was entitled to make the comments she had, and then she went off and committed suicide.
Who knew? Brunt will be very upset. And now, of course, the critics have come for him. A Facebook page was set up arguing that he should be fired. And a little voice whispered inside my head: “And what if he kills himself?” Will we go after the anti-Brunt Facebookers?
It is nearly two years since the Jacintha Saldanha suicide. She was the nurse hoaxed by a pair of Aussie radio pranksters into putting their phone call through to a colleague treating the Duchess of Cambridge for morning sickness. Ridiculed, she killed herself. Everyone then turned on the young DJs. Had she not left a note saying that she blamed them? So they were bashed up by everyone. Then it transpired that Ms Saldanha had a history of suicide attempts and their crime became a proximate, not an absolute, cause. So they were left alone for a bit.
Sometimes public life in the modern world appears to be one pitchfork procession after another. The mob collects outside the town hall with flaming torches, yelling and waving sharp implements. A whiskered man on the steps shouts: “Ludwig, you will take the first group and search the wood!” Mob: “Aarghh!” “And I will take the second group and search by the lake!” Mob: “Aarghh!” And off they go.
Off we go. Again and again and again. And nothing makes us more liable to shout for the capture of the monster than our confusion and fear about what people are doing in cyberspace. Increasingly we demand that the law or some other intermediary step in to prevent the daily unpleasantness and discourtesy that people give vent to. As Gerry McCann himself put it in an interview this week: “Something needs to be done about the abuse on the internet. I think we probably need more people to be charged.”
Lord above, but I understand why he says this. And Lord above, I also know that madness this way lies. Because unless we are very clear and limited in what we seek to ban or proscribe online, then we will find ourselves running around the woods and lakes of the internet ignoring really important issues until we discover that we’ve been chasing a chimera. If we don’t save legal intervention for the worst cases — the real stalking and serious harassment — then we will go crazy. And waste a lot of money.
Every huge brouhaha gives rise to Brendas. The Jimmy Savile affair has created a small army of Twitter child-abuse vigilantes, who tweet about nothing else and widen the circle of their celebrity accusations with every week that passes. The laws of libel are broken a thousand times a day and these “campaigners”, as they see themselves, regard any contradiction as a certain sign of paedophile tendencies. Shall we prosecute them all for their stupidity? Or just the ones who have more than, say, 100 Twitter followers? Or shall we demand that internet service providers understand the context of every communication made using their service and intervene far more liberally?
It isn’t practical and it isn’t really desirable. So two things follow. One, we must be nicer to each other and two — paradoxically — we must toughen up.
Be nicer. Brenda Leyland could not imagine Gerry and Kate McCann as real people who would suffer and suffer over the years. She could not empathise, or she chose not to.
Even so, do we imagine for a moment that she would have said her meanest things to their faces? Of course not. She lived quite near them. Her hatred was abstract. When she was “outed”, many people will have seen her simply as a hard-faced troll and failed entirely to see her as a person too.
My advice to people posting online, whether on Facebook or, more particularly, on Twitter or in chat rooms, has always been to try to remember that they are publishing, just as I am now. Would you say these words in this voice to the person about whom you are writing? To your father or your daughter? No? Then don’t do it about others, even in what you imagine is a closed circle of like-minded people.
But this advice is slow to penetrate. And in the meantime the rest of us have to apply a firmer harm test, for our own sakes. Are these words really going to kill me? Can I not cope by blocking the offender or by realising that they are probably inadequate and isolated? Or am I, objectively, conniving at being driven round the bend?
I ask myself this about some silly accuser nearly every day. Me and Zionism. Me and child abusers. Me and Rupert Murdoch. My hide has got thicker and thicker. So when the right-wing controversialist Katie Hopkins tweets, as she did this week, to ask: “How many more must die before the McCanns accept that their negligence is at the heart of all their grief?” I don’t reach for my pitchfork. I reach for my mute button. Goodbye, Katie. And the rest is silence.
You're Continually Being Experimented On
On your way to this article, you probably took part in several experiments. You may have helped a search engine test a new way of displaying its results or an online retailer fine-tune an algorithm for recommending products. You may even have helped a news website decide which of two headlines readers are most likely to click on.
In other words, whether you realize it or not, the Web is already a gigantic, nonstop user-testing laboratory. Experimentation offers companies a powerful way to understand what customers want and how they are likely to behave, but it also seems that few people realize quite how much of it is going on.
This became clear in June, when Facebook experienced a backlash after publishing a study on the way negative emotions can spread across its network. The study, conducted by a team of internal researchers and academics, involved showing some people more negative posts than they would otherwise have seen, and then measuring how this affected their behavior. They in fact tended to post more negative content themselves, revealing a kind of “emotional contagion” (see “Facebook’s Emotion Study Is Just Its Latest Effort to Prod Users”).
Businesses have performed market research and other small experiments for years, but the practice has reached new levels of sophistication and complexity, largely because it is so easy to control the user experience on the Web, and then track how people’s behavior changes (see “What Facebook Knows”).
So companies with large numbers of users routinely tweak the information some of them see, and measure the resulting effect on their behavior—a practice known in the industry as A/B testing. Next time you see a credit card offer, for example, you might be one of a small group of users selected at random to see a new design. Or when you log onto Gmail, you may be one of a chosen subset that gets to use a new feature developed by Google’s engineers (see “Seeking Edge, Websites Turn to Experiments”).
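Mechanically, an A/B test is a random but sticky split of users between variants, followed by a comparison of how each group behaves. Here is a minimal sketch; the hashing trick keeps any one user in the same bucket on every visit, and the experiment name is invented:

```python
import hashlib

# Minimal A/B assignment: hash the user ID together with the experiment
# name, so the split is random across users but stable for any one user.
def assign(user_id: str, experiment: str, variants=("A", "B")) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign("user-42", "new-credit-card-offer"))  # same answer every time
print(assign("user-43", "new-credit-card-offer"))  # may land in the other bucket
```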
“When doing things online, there’s a very large probability you’re going to be involved in multiple experiments every day,” Sinan Aral, a professor at MIT’s Sloan School of Management, said during a break at a conference for practitioners of large-scale user experiments last weekend in Cambridge, Massachusetts. “Look at Google, Amazon, eBay, Airbnb, Facebook—all of these businesses run hundreds of experiments, and they also account for a large proportion of Web traffic.”
At the Sloan conference, Ron Kohavi, general manager of the analysis and experimentation team at Microsoft, said each time someone uses the company’s search engine, Bing, he or she is probably involved in around 300 experiments. The insights that designers, engineers, and product managers can glean from these experiments can be worth millions of dollars in advertising revenue, Kohavi said.
Kohavi’s group has developed a platform to allow other parts of the company to perform their own user experiments. The company’s flagship productivity software, Office, would likely benefit from more user experimentation, he said.
Facebook’s emotion study, published in the Proceedings of the National Academy of Sciences, went further than most everyday Web experiments, which influence people’s behavior in subtle ways and measure only minuscule differences. But MIT’s Aral notes that eliciting an emotional response does not in itself make an experiment unethical. “I was very surprised to see that people were upset about that,” he said, pointing out that many television ads and newspaper headlines are arguably just as emotionally manipulative.
Yet, perhaps just as importantly, the Facebook study may also have revealed how few people realize they are being prodded and probed at all.
“I found the backlash quite paradoxical,” said Alessandro Acquisti, a professor at Carnegie Mellon University who studies attitudes towards privacy and security. Although he thinks user experiments need to be conducted with care and oversight, he felt the Facebook study was actually remarkably transparent. “If you’re really worried about experimentation, you should look at how it’s being used opaquely every day you go online,” Acquisti said.
Some practitioners say experimenters need to think carefully about how they present their work to users. Duncan Watts, a principal researcher at Microsoft (and previously a professor of sociology at Columbia University), said this was a problem with the Facebook study. “When people hear the word ‘experiment’ and they hear the word ‘manipulation’ they think of lab rats,” he said. “Inside this community we have a very different interpretation. We think of a systematic test of a hypothesis by randomizing assignment to different treatment conditions.”
To assuage concerns among its users, Facebook said this month that it would introduce a new process for reviewing potentially sensitive research, although it did not say what that would mean.
But even if experiments continue to have only subtle effects, some may find their scope, scale, and growing sophistication unsettling. “What’s happened in the last few years—and this to me is crucial—is that it’s becoming very specific to a person based on their personal information,” said Acquisti. “It’s becoming ubiquitous, and it’s becoming much more measurable.”
Facebook 'Like' Addiction
A "like" on Facebook is a treat. You get a little red pop-up on your notifications icon, you see the little box on the lower-left corner of your screen describing the like, and you get that warm, albeit fleeting sense of pride. Someone liked your post. Your post! You savor it. You inevitably want more likes. You wait.
To keep its 1.3 billion users clicking and posting (and stalking), Facebook scatters numbers everywhere. While it collects many metrics that users never see, it tells users plenty of others, too. Facebook tells you the number of friends you have, the number of likes you receive, the number of messages you get, and even tracks the timestamp to show how recently an item entered the news feed.
And these numbers, programmer and artist Ben Grosser argues, directly influence user behavior; they are at the root of Facebook addiction. In October 2012, he set out to find exactly what Facebook’s metrics were doing to users after noticing how much he depended on them.
"There were times when I was more focused on the numbers than the content itself," he remembers. "I was more interested in how many likes I had instead of who liked it. I realized every time I logged in I looked at those numbers. Why was I caring? Why do I care so much?"
In response, he built a browser extension called The Facebook Demetricator, which, when installed and activated, hid all numbers on Facebook. Instead of seeing the little red pop-up showing the number of notifications you have, you'd simply see the icon take on a lighter blue color. Instead of seeing the number of likes a post received, you'd see the phrase "people like this."
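The real Demetricator is a browser extension that rewrites Facebook's pages as they load; the substitution it performs can be illustrated in a few lines of Python applied to plain text:

```python
import re

# Illustration of the Demetricator's substitution: strip the counts,
# keep the social signal. (The real tool rewrites Facebook's page markup.)
def demetricate(text: str) -> str:
    text = re.sub(r"\b[\d,]+ people like this", "people like this", text)
    text = re.sub(r"\b[\d,]+ (comments|shares|notifications)", r"\1", text)
    return text

print(demetricate("1,372 people like this. 48 comments."))
# -> "people like this. comments."
```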
Since he released the extension two years ago, more than 5,000 users have adopted it, sending Grosser feedback on how it influenced their understanding of the social network. Grosser used their observations to write a paper examining the impact of metrics, published Monday in the journal Computational Culture.
His findings are illuminating. Sure, Facebook addiction is probably the oldest social-network-related epidemic, but Grosser's tool allowed users to experience a pressure-free Facebook. This experience demonstrated that metrics changed user behavior by encouraging competition (the more likes, the better), emotional manipulation (deleting posts when there weren't enough likes), reaction (liking more recent posts instead of older ones), and homogenization (liking because others liked).
Put simply, the numbers made users feel compelled to want more numbers. For example, friend count is seen as a mark of status because Facebook places a small “+1” next to the “Add Friend” button. Even if the user isn’t aware of it, the number encourages her to make more connections, because she’s shown that adding a friend is a positive action. The result is an ingrained need for more of everything on Facebook.
The more competition, the more the numbers matter, creating a vicious cycle that eventually creates what Grosser calls the "graphopticon" model, where the many watch the many. You log on, see how many likes other posts have received, and feel compelled to like as well. Facebook users therefore contribute data while pressuring others to do the same.
Which is why many users of Grosser's Demetricator tool found it difficult to leave the numbers behind.
"People realized when the numbers were gone, they had been using them to decide whether to like something," he tells me. "I certainly didn't expect these tendencies of people saying, 'I literally don't know what to do [without knowing the metrics].'"
Some Demetricator users rejected the tool completely, seeing it as going against the benefits of using Facebook in the first place.
"A huge category of response was, 'Why would I want to [hide the numbers]? The numbers are the whole point,'" Grosser says. "Some people really like their metrics."
It's not just Facebook where users have been conditioned to appreciate more notifications and likes. On Instagram, the hashtag #100likes is applied to photos that, well, achieve at least 100 likes, a mark of success used mainly among teens competing to enter "The 100 Club." Twitter prominently places the follower count at the top of users' profiles, and tracks the number of retweets and favorites. Even Ello, which has promised it won't involve advertisers, displays timestamps and the number of views.
Not all users of the Demetricator found the lack of numbers paralyzing. Instead, most continued using the tool, finding it "enjoyable that their emotional well-being was restored to some extent," Grosser says. No numbers, no pressure.
But whatever the outcome, the influence of the Demetricator is clear: The numbers made a difference in the way people used Facebook, often affecting their behavior. Which, Grosser says, isn't necessarily a bad thing. His project served only as an experiment reminding users to think about why metrics matter.
"I think it's a problem when we don't know what those likes mean, when we start focusing on wanting more likes," he says. "If we aren't aware of how these numbers are telling us to interact, then it's a problem."
And perhaps that's where users are already headed. As users become more aware of news posts and advertisements catered to their preferences, they seek alternatives to avoid that control.
Or they'll just keep liking those likes. Because even if we are conditioned to want them, likes still don't have to mean anything at all. They can remain treats, tidbits of gratification used only on social media. It's an idea Grosser had toyed with before, in another project called "Reload the Love," which allowed users to see their notification count go up, just for the satisfaction of seeing the engagement. He found then, just as he found with the Demetricator, that the smallest numbers deeply affected the way users felt.
The First MUD
In 1978, in a computer lab at Essex University, two brilliant young students invented the future of video games. Roy Trubshaw and Richard Bartle were the creators of the Multi-User Dungeon – or simply MUD – a text-based adventure that ran on a giant DEC PDP-10 mainframe. They programmed the game in their spare time, accessing the computer labs in the evenings. If they hadn’t made it, massively multiplayer online adventures like EverQuest and World of Warcraft may never have happened.
There had been other fantasy adventure games before MUD, of course. Will Crowther’s Colossal Cave Adventure arrived in 1975, while work on Zork, developed by a bunch of MIT students in the university’s dynamic modelling group, began in 1977. These single-player programs were, in turn, heavily inspired by the pencil and paper role-playing game Dungeons and Dragons, which had been hugely popular in student circles since its publication in 1974.
Although Trubshaw wasn’t a fan of D&D, Bartle played a lot, buying a copy from Ian Livingstone’s Games Workshop store for £6.10 as soon as it was available in the UK. Trubshaw, who started the programming for MUD alone, originally planned to create a virtual world rather than a game. However, when Bartle got involved, he was already a keen computer games player and wanted participants to play together in a similar way to D&D. Consequently, when the first version of MUD uploaded to the university system in autumn 1978, it allowed multiple users to log into a mainframe and go on fantasy quests together.
Bartle had been making games and programming computers since the mid-1970s. His formative experiences in coding were courtesy of the DEC PDP-10 owned by British Petroleum’s petrochemical works in Brough, East Yorkshire, near his home town of Hornsea. “In order to say sorry for filling the air with toxic fumes they let the local schools use their computer,” he told attendees at the recent GameCity festival. “We had to fill in these coding sheets, writing in letters using an actual pen. Then we’d send them off somewhere and someone typed them in.”
Later, he attended Essex University to study mathematics, but quickly changed to computer science, a decision guided as much by intellectual pride as it was by interest. “There were 200 people studying maths at Essex and two of them were better than me,” he says. “But on the computer science course, there were none better than me so I switched to that and did my PhD in Artificial Intelligence.” According to Bartle, there were only three universities in Britain doing AI back then - Essex, Sussex and Edinburgh. The rest were apparently shut down because they’d been told by a professor of applied mathematics named Dr James Lighthill that AI was a useless subject that would never be important.
Before Essex, Bartle had been experimenting with internet connectivity on BP’s computer, using an ancient 110 baud modem (“it could transmit roughly 11 characters a second. You had to be very efficient with your coding”); the programs he created were stored on paper tape. But Essex had a comparatively advanced set-up. “The computer was the size of a room,” he says. “It had false floor panels under it that were filled with 29 carbon dioxide canisters. If there was a fire they’d all go off at once to put it out really quickly. It would have put out all the operators, too, but they were cheaper than computers.”
Experimenting with this giant system, Roy Trubshaw discovered a mechanism for sharing code across separate teletype machines – an early version of the computer terminal – using an area of memory they weren’t supposed to be writing to. In short, it allowed several people to access the same program running on the mainframe at the same time. From here, the duo decided to create a fantasy adventure; Trubshaw wrote the physics, Bartle wrote the game code. The result was MUD.
They called it a multi-user dungeon, because of Zork. “The version we all played ran in [the programming language] Fortran and was just called Dungen because you could only use six character words. Back then we thought all games would be called dungeons, so ours was a multi-user dungeon. Turned out they were all going to be called adventures so we should have called it MUA.”
The duo ran the game over the university network, which was connected to British Telecom’s Experimental Packet Switching System, which could also be accessed by other UK universities. Bartle and Trubshaw used this to link in to the University of Kent, and from there establish a connection with the US-led ARPAnet, an early precursor to today’s global internet. “People had never played any sort of shared world before,” says Bartle. “You can’t imagine what it was like, you were playing a game and suddenly another real person would enter.”
Very quickly, keen computer hobbyists and hackers found out about the game and started dialing in to it from outside the university. The system couldn’t cope – Essex only had six modems and these were quickly overstretched. “The gamers clubbed together to buy the university a bank of 12 modems,” says Bartle. The computing press started paying attention – Bartle wrote a cover feature on the game for Practical Computing, explaining the creation of MUD and defining his hopes for the future of the genre:
What I would like to see - and it’s a long, long way off - is some local or national network with good graphics, sound effects and a well designed set of worlds of varying degrees of difficulty. In this true meritocracy, you will forever be encountering new situations, new difficulties, new solutions, and above all new people. Everyone starts off on an equal footing in this artificial world.
He was, of course, imagining the actual future of the massively multiplayer online role-playing game; the possibilities were always there in Bartle’s mind. But there was one thing he and Trubshaw never did. They never sought to copyright their game or their technology. Instead they shared it freely.
“We encouraged people to write their own MUDs,” he says. “We made MUD because the real world sucked. We weren’t supposed to be at university - Roy was from Wolverhampton, I was from Yorkshire and sounded like I should be working on a farm. It wasn’t a great atmosphere; we were looked down on because other people were at university for intellectual subjects not mind-numbing technology. We raged against that.”
“You shouldn’t have to be what the world defines you to be. You should be who you really are - you should get to become yourself. MUD was a political statement, we made a world where people could go and shed what was holding them back.”
MUD did indeed proliferate. Other programmers at other universities took the basics of the network code and game design and evolved them. Through the 80s and 90s, several variations were developed and adopted, including AberMud, TinyMud (which was geared more toward the social side of virtual worlds than the gaming side) and DikuMud.
The last of these, built by a group of students at the University of Copenhagen, was the most stable and easiest to install – it was written in the common programming language C and could run on any Unix system, so it spread easily. It also neatly tied together all of the conventions of quest-based multiplayer role-playing games: players took on a specific class of character – fighter, wizard, thief, etc – then “leveled up” by killing enemies with a range of weapons and spells, collecting experience points and loot along the way.
For Bartle, this structure was itself a comment on the stifling class system: in MUD, progression was based on merit, not parentage. “If you saw someone was at a certain level, it said something about them - about their skill and strength of character,” says Bartle. “It was a way for players to understand their place in the hierarchy and to see that they could always progress - there were no glass ceilings. But it wasn’t really a meritocracy either because, if you didn’t care about leveling up your character, you didn’t need to, you could still play. It was about freedom.”
Politics aside, the raw structure of MUD would influence most subsequent graphical multiplayer online games such as Ultima Online, EverQuest, and World of Warcraft. And it was that initial decision not to protect MUD as an IP that secured its place as a key progenitor. As Bartle explains, “By the time the games companies got interested in making multiplayer online games in the late 90s, there were 100 MUD-experienced designers for every one who was experienced in one of the other multi-user games that had been invented, because it was all free.”
Bartle is still at Essex University, where he is now a professor and senior lecturer in game design; he also consults on game development. He retains an abiding belief that games are positive and empowering. While society often frets about their negative effects, he sees in them a model for tolerance and ethical behaviour.
“The original hacker ethic was, you can do what you like as long as you don’t hurt anyone else. That fed into games and it has propagated outwards,” he says. “The more games you play, the more sense you have of things like fairness - if you play an unfair game it’s no fun, it’s not a good game. I think that makes you more resistant to examples of unfairness in the real world. You may start to think, why shouldn’t gay people get married, what the hell, it doesn’t affect me?
“I hope that some of the culture that came out of games has affected the real world.”
The Third Industrial Revolution
MOST PEOPLE ARE discomfited by radical change, and often for good reason. Both the first Industrial Revolution, starting in the late 18th century, and the second one, around 100 years later, had their victims, who lost their jobs to Cartwright’s power loom and later to Edison’s electric lighting, Benz’s horseless carriage and countless other inventions that changed the world. But those inventions also immeasurably improved many people’s lives, sweeping away old economic structures and transforming society. They created new economic opportunity on a mass scale, with plenty of new work to replace the old.
A third great wave of invention and economic disruption, set off by advances in computing and information and communication technology (ICT) in the late 20th century, promises to deliver a similar mixture of social stress and economic transformation. It is driven by a handful of technologies—including machine intelligence, the ubiquitous web and advanced robotics—capable of delivering many remarkable innovations: unmanned vehicles; pilotless drones; machines that can instantly translate hundreds of languages; mobile technology that eliminates the distance between doctor and patient, teacher and student. Whether the digital revolution will bring mass job creation to make up for its mass job destruction remains to be seen.
Powerful, ubiquitous computing was made possible by the development of the integrated circuit in the 1950s. Under a rough rule of thumb known as Moore’s law (after Gordon Moore, one of the founders of Intel, a chipmaker), the number of transistors that could be squeezed onto a chip has been doubling every two years or so. This exponential growth has resulted in ever smaller, better and cheaper electronic devices. The smartphones now carried by consumers the world over have vastly more processing power than the supercomputers of the 1960s.
Moore’s law is now approaching the end of its working life. Transistors have become so small that shrinking them further is likely to push up their cost rather than reduce it. Yet commercially available computing power continues to get cheaper. Both Google and Amazon are slashing the price of cloud computing to customers. And firms are getting much better at making use of that computing power. In a book published in 2011, “Race Against the Machine”, Erik Brynjolfsson and Andrew McAfee cite an analysis suggesting that between 1988 and 2003 the effectiveness of computers increased 43m-fold. Better processors accounted for only a minor part of this improvement. The lion’s share came from more efficient algorithms.
The beneficial effects of this rise in computing power have been slow to come through. The reasons are often illustrated by a story about chessboards and rice. A man invents a new game, chess, and presents it to his king. The king likes it so much that he offers the inventor a reward of his choice. The man asks for one grain of rice for the first square of his chessboard, two for the second, four for the third and so on to 64. The king readily agrees, believing the request to be surprisingly modest. They start counting out the rice, and at first the amounts are tiny. But they keep doubling, and soon the next square already requires the output of a large ricefield. Not long afterwards the king has to concede defeat: even his vast riches are insufficient to provide a mountain of rice the size of Everest. Exponential growth, in other words, looks negligible until it suddenly becomes unmanageable.
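The arithmetic behind the parable is easy to check. Here is a minimal Python sketch; the 25 mg weight per grain is an assumed round figure, used only to turn counts into tonnage:

```python
# One grain on square 1, doubling on every subsequent square.
total = 0
for square in range(1, 65):
    total += 2 ** (square - 1)
    if square in (32, 64):
        tonnes = total * 0.025 / 1e6    # assumed 25 mg per grain; 1e6 g per tonne
        print(f"square {square}: {total:,} grains (~{tonnes:,.0f} tonnes)")
```

Halfway down the board the king owes a merely industrial hundred tonnes or so; by square 64 the pile weighs hundreds of billions of tonnes.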
Messrs Brynjolfsson and McAfee argue that progress in ICT has now brought humanity to the start of the second half of the chessboard. Computing problems that looked insoluble a few years ago have been cracked. In a book published in 2005 Frank Levy and Richard Murnane, two economists, described driving a car on a busy street as such a complex task that it could not possibly be mastered by a computer. Yet only a few years later Google unveiled a small fleet of driverless cars. Most manufacturers are now developing autonomous or near-autonomous vehicles. A critical threshold seems to have been crossed, allowing programmers to use clever algorithms and massive amounts of cheap processing power to wring a semblance of intelligence from circuitry.
Evidence of this is all around. Until recently machines have found it difficult to “understand” written or spoken language, or to deal with complex visual images, but now they seem to be getting to grips with such things. Apple’s Siri responds accurately to many voice commands and can take dictation for e-mails and memos. Google’s translation program is lightning-fast and increasingly accurate, and the company’s computers are becoming better at understanding just what its cameras (as used, for example, to compile Google Maps) are looking at.
At the same time hardware, from processors to cameras to sensors, continues to get better, smaller and cheaper, opening up opportunities for drones, robots and wearable computers. And innovation is spilling into new areas: in finance, for example, crypto-currencies like Bitcoin hint at new payment technologies, and in education the development of new and more effective online offerings may upend the business of higher education.
This wave, like its predecessors, is likely to bring vast improvements in living standards and human welfare, but history suggests that society’s adjustment to it will be slow and difficult. At the turn of the 20th century writers conjured up visions of a dazzling technological future even as some large, rich economies were limping through a period of disappointing growth in output and productivity. Then, as now, economists hailed a new age of globalisation even as geopolitical tensions rose. Then, as now, political systems struggled to accommodate the demands of growing numbers of dissatisfied workers.
Troll Hunters
We’ve come up with the menacing term “troll” for someone who spreads hate and does other horrible things anonymously on the Internet. Internet trolls are unsettling not just because of the things they say but for the mystery they represent: what kind of person could be so vile? One afternoon this fall, the Swedish journalist Robert Aschberg sat on a patio outside a drab apartment building in a suburb of Stockholm, face to face with an Internet troll, trying to answer this question.
The troll turned out to be a quiet, skinny man in his 30s, wearing a hoodie and a dirty baseball cap—a sorry foil to Aschberg’s smart suit jacket, gleaming bald head, and TV-trained baritone. Aschberg’s research team had linked the man to a months-long campaign of harassment against a teenage girl born with a shrunken hand. After meeting her online, the troll tormented her obsessively, leaving insulting comments about her hand on her Instagram page, barraging her with Facebook messages, even sending her taunts through the mail.
Aschberg had come to the man’s home with a television crew to confront him, but now he denied everything. “Have you regretted what you’ve done?” Aschberg asked, handing the man a page of Facebook messages the victim had received from an account linked to him. The man shook his head. “I haven’t written anything,” he said. “I didn’t have a profile then. It was hacked.”
This was the first time Aschberg had encountered an outright denial since he had started exposing Internet trolls on his television show Trolljägarna (Troll Hunter). Usually he just shoots them his signature glare—honed over decades as a muckraking TV journalist and famous for its ability to bore right through sex creeps, stalkers, and corrupt politicians—and they spill their guts. But the glare had met its match. After 10 minutes of fruitless back and forth on the patio, Aschberg ended the interview. “Some advice from someone who’s been around for a while,” he said wearily. “Lay low on the Internet with this sort of stuff.” The man still shook his head: “But I haven’t done any of that.”
“He’s a pathological liar,” Aschberg grumbled in the car afterward. But he wasn’t particularly concerned. The goal of Troll Hunter is not to rid the Internet of every troll. “The agenda is to raise hell about all the hate on the Net,” he says. “To start a discussion.” Back at the Troll Hunter office, a whiteboard organized Aschberg’s agenda. Dossiers on other trolls were tacked up in two rows: a pair of teens who anonymously slander their high school classmates on Instagram, a politician who runs a racist website, a male law student who stole the identity of a young woman to entice another man into an online relationship. In a sign of the issue’s resonance in Sweden, a pithy neologism has been coined to encompass all these forms of online nastiness: näthat (“Net hate”). Troll Hunter, which has become a minor hit for its brash tackling of näthat, is currently filming its second season.
It is generally no longer acceptable in public life to hurl slurs at women or minorities, to rally around the idea that some humans are inherently worth less than others, or to terrorize vulnerable people. But old-school hate is having a sort of renaissance online, and in the countries thought to be furthest beyond it. The anonymity provided by the Internet fosters communities where people can feed on each other’s hate without consequence. They can easily form into mobs and terrify victims. Individual trolls can hide behind dozens of screen names to multiply their effect. And attempts to curb online hate must always contend with the long-standing ideals that imagine the Internet’s main purpose as offering unfettered space for free speech and marginalized ideas. The struggle against hate online is so urgent and difficult that the law professor Danielle Citron, in her new book Hate Crimes in Cyberspace, calls the Internet “the next battleground for civil rights.”
That Sweden has so much hate to combat is surprising. It’s developed a reputation not only as a bastion of liberalism and feminism but as a sort of digital utopia, where Nordic geeks while away long winter nights sharing movies and music over impossibly fast broadband connections. Sweden boasts a 95 percent Internet penetration rate, the fourth-highest in the world, according to the International Telecommunication Union. Its thriving tech industry has produced iconic brands like Spotify and Minecraft. A political movement born in Sweden, the Pirate Party, is based on the idea that the Internet is a force for peace and prosperity. But Sweden’s Internet also has a disturbing underbelly. It burst into view with the so-called “Instagram riot” of 2012, when hundreds of angry teenagers descended on a Gothenburg high school, calling for the head of a girl who spread sexual slander about fellow students on Instagram. The more banal everyday harassment faced by women on the Internet was documented in a much-discussed 2013 TV special called Men Who Net Hate Women, a play on the Swedish title of the first book of Stieg Larsson’s blockbuster Millennium trilogy.
Internet hatred is a problem anywhere a significant part of life is lived online. But the problem is sharpened by Sweden’s cultural and legal commitment to free expression, according to Mårten Schultz, a law professor at Stockholm University and a regular guest on Troll Hunter, where he discusses the legal issues surrounding each case. Swedes tend to approach näthat as the unpleasant but unavoidable side effect of having the liberty to say what you wish. Proposed legislation to combat online harassment is met with strong resistance from free speech and Internet rights activists.
What’s more, Sweden’s liberal freedom-of-information laws offer easy access to personal information about nearly anyone, including people’s personal identity numbers, their addresses, even their taxable income. That can make online harassment uniquely invasive. “The government publicly disseminates a lot of information you wouldn’t be able to get outside of Scandinavia,” Schultz says. “We have quite weak protection of privacy in Sweden.”
Yet the rich information ecosystem that empowers Internet trolls also makes Sweden a perfect stalking ground for those who want to expose them. In addition to Aschberg, a group of volunteer researchers called Researchgruppen, or Research Group, has pioneered a form of activist journalism based on following the crumbs of data anonymous Internet trolls leave behind and unmasking them. In its largest troll hunt, Research Group scraped the comments section of the right-wing online publication Avpixlat and obtained a huge database of its comments and user information. Starting with this data, members meticulously identified many of Avpixlat’s most prolific commenters and then turned the names over to Expressen, one of Sweden’s two major tabloids. In December 2013, Expressen revealed in a series of front-page stories that dozens of prominent Swedes had posted racist, sexist, and otherwise hateful comments under pseudonyms on Avpixlat, including a number of politicians and officials from the ascendant far-right Sweden Democrats. It was one of the biggest scoops of the year. The Sweden Democrats, which have their roots in Sweden’s neo-Nazi movement, have long attempted to distance themselves from their racist past, adopting a more respectable rhetoric of protecting “Swedish culture.” But here were their members and supporters casting doubt on the Holocaust and calling Muslim immigrants “locusts.” A number of politicians and officials were forced to resign. Expressen released a short documentary of its reporters acting as troll hunters, knocking on doors and confronting Avpixlat commenters with their own words.
Make the Unknown Known
Martin Fredriksson is a cofounder of Research Group and its de facto leader. He is a lanky 34-year-old with close-cropped hair and a quietly intense demeanor, though he is prone to outbursts on Twitter that hint at his past as a militant anti-racism activist. I met Fredriksson at the tiny one-room office of Piscatus, the public records service for journalists that he oversees as his day job. Robert Aschberg, the chair of Piscatus’s board, has known Fredriksson for years and jokes that he is a brilliant researcher and an excellent journalist, but “you can’t have him in furnished rooms.” The extreme sparseness of the office bore him out. One of the only decorations was a Spice Girls poster.
Fredriksson hunched over his computer’s dual screens and logged in to the intranet he had created to coördinate Research Group’s unmasking of Avpixlat users. Research Group typically works in a decentralized manner, with members pursuing their own projects and collaborating with others when needed. The group currently has 10 members, all volunteers, including a psychology graduate student, a couple of journalism students, a grade school librarian, a writer for an online IT trade publication, and a porter in a hospital. The little organizing that occurs typically happens in Internet relay chat rooms and on a wiki. But analyzing the Avpixlat database, which contained three million comments and over 55,000 accounts, required a centralized, systematized process. An image on the main page of the intranet pokes fun at the immensity of the task. Two horses have their heads stuck in a haystack. “Find anything?” asks one. “Nope,” says the other.
Research Group was founded during the exhaustive process of unmasking a particularly frightening Internet troll. That episode began in 2005, when Fredriksson and his close friend Mathias Wåg learned that an anonymous person was requesting public information about Wåg from the government. As a return address, the requester used a post office box in Stockholm. That kept Fredriksson and Wåg in the dark at first. But the next year, they obtained a copy of a prison magazine in which a notorious neo-Nazi named Hampus Hellekant, who was in prison for murdering a union organizer, had listed the same post office box. In 2007, after Hellekant was released, pseudonymous posts began to appear on Swedish neo-Nazi forums and websites, soliciting information about Wåg and other leftist activists.
For three years, Fredriksson and some like-minded investigators tracked Hellekant’s every move, online and off. “He was functioning more or less as the intelligence service for the Nazi movement,” Fredriksson says. Their counterintelligence operation involved a mix of traditional journalistic techniques and innovative data analysis. One unlikely breakthrough came courtesy of Hellekant’s habit of illegally parking his car all over Stockholm. Fredriksson’s team requested parking ticket records from the city. They were able to match the car’s location on certain days with time and GPS metadata on image files Hellekant posted under a pseudonym. In 2009 they sold the story of Hellekant’s post-prison activities to a leftist newspaper, and Research Group was born.
Since then, its members have investigated the men’s rights movement, Swedish police tactics, and various right-wing groups. Until the Avpixlat story they had mostly published their findings quietly on their website or partnered with small left-wing news organizations. “The official story is that we pick subjects about democracy and equality,” says Fredriksson. “But the real reason is that we just have special interests—we just try to focus on stuff that interests us as people.”
By the time Research Group came together, Fredriksson’s interest in Nazi hunting and talent for investigative reporting had landed him a job with Aschberg. Fredriksson had scraped data from a mobile payment platform with woefully inadequate security in order to investigate the donors to a neo-Nazi website. He also happened to get the records of scores of users who had made payments to Internet porn sites. Aschberg used the data on his show Insider, Sweden’s answer to NBC’s Dateline, where he exposed government officials who had bought Internet porn on their official cell phones. Then he hired Fredriksson as a researcher on Insider: he functioned as the technical brains behind many of Aschberg’s confrontations. Today Fredriksson does not work for Troll Hunter, and the show has no formal connection to Research Group. But Fredriksson’s legacy is clear in the technical detective work that the show often uses to expose its targets.
Fredriksson might accurately be called a “data journalist,” as his specialty is teasing stories from huge spools of information. But the bland term doesn’t do justice to his guerrilla methods, which can make the pursuit of information as thrilling as the hunt for a serial killer in a crime novel. When Fredriksson gets interested in a project, he seizes it obsessively. Aschberg speaks of him in awe, as a potent but alien force. “He’s very special,” he says. “He’s one of those guys who can sit for 24 hours and drink sodas and just work.”
Fredriksson is a member of a generation of Swedes known as “Generation 64,” who grew up tinkering with Commodore 64s in the 1980s and went on to revolutionize Sweden’s IT industry. His upbringing also coincided with the rise of a neo-Nazi movement in the 1990s, when he was a teenage punk rocker. He and his friends constantly clashed with a gang of skinheads in his small hometown in southern Sweden. “I was very interested in politics. I came to the conclusion that if I wanted to do politics I’d have to deal with the Nazi threat in some way,” he says. He joined the controversial leftist group Antifascistisk Aktion (AFA), which openly endorses the use of violence against neo-Nazis. In 2006 he was sentenced to community service for beating a man during a fight between a group of neo-Nazis and antiracists. “He said it was me. It actually wasn’t, but it just as well could have been,” Fredriksson says. He says he eventually came to believe that violence is wrong, and today his weapon of choice is information, not his fists. He is more interested in understanding hate than destroying it, although he wouldn’t mind if one led to the other.

Research Group challenges the traditional divide between activism and journalism: it is guided by the values of its members, many of whom come from leftist circles. In the early 2000s, Fredriksson was heavily involved in Sweden’s free culture movement, which abhorred copyright laws, embraced piracy, and coded the first version of the legendary Pirate Bay’s BitTorrent tracker. Whenever Research Group is in the news, critics seize on its members’ leftist ties to discredit them as agenda-driven propagandists. But their methods are meticulous, and their facts are undeniable. “Our history will always be there,” says Fredriksson. “People will always say, ‘Oh, 10 years ago you did that.’ Whereas I live in the now. The only way for me to build credibility is to just publish valid stuff again and again, and hope I’m not wrong.”
However, his idiosyncratic background sometimes leads him from the path of traditional journalistic inquiry into murky ethical territory. “I like to pick up stones and see what’s under them,” he says. “I like to go wherever I want to go and just look at stuff.”
The mass unmasking of Avpixlat commenters in 2013 was an accidental consequence of this curiosity. Avpixlat is an influential voice in Sweden’s growing right-wing populist movement, which is driven by a xenophobic panic that Muslim immigrants and Roma are destroying the country. The site fixates on spreading stories of rapes and murders committed by immigrants, which it contends are being covered up by the liberal establishment. (“Avpixlat” means “de-pixelate,” as in un-censoring an image that’s been digitally obscured.) Initially, Fredriksson wanted to study how it functioned as a source of näthat. Avpixlat, and especially its unruly comments section, has become notorious as a launching pad for rampaging online mobs. “They provoke, they incite people to harass politicians and journalists,” says Annika Hamrud, a journalist who has written extensively about the Swedish right wing. When the site picked up the story of how a shop owner in a small town put up a sign welcoming Syrian refugees to Sweden, she explains, he was bombarded with online abuse. Wåg, Fredriksson’s friend and colleague, calls Avpixlat “the finger that points the mob where to go.”

Fredriksson’s idea was to create a database of Avpixlat comments in order to investigate how its cybermobs mobilized. Avpixlat uses the popular commenting platform Disqus, which is also used by mainstream publications in Sweden and around the world. Fredriksson planned to scrape Disqus comments from Avpixlat and as many other Swedish websites as possible. He would then compare the handles of commenters on mainstream websites with those on Avpixlat. The extent of the overlap would suggest how dominant Avpixlat users were throughout the Web, and how responsible they were for the general proliferation of näthat.
Fredriksson hacked together a simple script and began to scrape Avpixlat’s comments using Disqus’s public API (the application programming interface, which lets online services share data). As he built his database, he noticed something odd. Along with each username and its associated comments, he was capturing a string of encrypted data. He recognized the string as the result of a cryptographic function known as an MD5 hash, which had been applied to every e-mail address that commenters used to register their accounts. (The e-mail addresses were included to support a third-party service called Gravatar.) Fredriksson realized he could figure out Avpixlat commenters’ e-mail addresses, even though they were encrypted, by applying the MD5 hash function to a list of known addresses and cross-referencing the results with the hashes in the Avpixlat database. He tested this theory on a comment he’d made on Avpixlat with his own Disqus account. He encrypted his e-mail address and searched the Avpixlat database for the resulting hash. He found his comment. “By that time I knew I had stumbled on something which the newspapers would be very interested in,” he says.

He kept his scrapers running on Avpixlat and other websites that used Disqus, including American sites like CNN, eventually assembling a database of 30 million comments. But the goal was no longer a general survey of näthat. He wanted to answer an even more fundamental question: who are the real people behind Avpixlat’s hateful comments? “It had been like this great unknown for many years,” Fredriksson says. “It was this huge blank spot on the map that we could just fill out. Make the unknown known.”
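The cross-referencing trick is easy to sketch in outline. The Python below uses invented addresses and fakes one "scraped" hash so the example is self-contained; the point is only that MD5 can be run over a candidate list and matched against captured hashes:

```python
import hashlib

def gravatar_md5(email: str) -> str:
    """Gravatar-style hash: MD5 of the trimmed, lowercased address."""
    return hashlib.md5(email.strip().lower().encode()).hexdigest()

# In reality the hashes arrived with the scraped comments; here one is
# constructed from an invented address so the example runs on its own.
scraped = {gravatar_md5("official@example.se"): "angry_commenter_42"}

# Candidate addresses gathered from public records (also invented).
for email in ["someone@example.com", "official@example.se"]:
    handle = scraped.get(gravatar_md5(email))
    if handle:
        print(f"{handle} registered with {email}")
```

Because each address hashes in microseconds, even a candidate list of hundreds of millions of addresses can be checked against tens of thousands of captured hashes in minutes on ordinary hardware.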
In order to begin the process of unmasking Avpixlat’s users, Research Group needed a huge list of e-mail addresses to check against the Avpixlat commenter database, especially those of people whose participation in a racist right-wing website would be newsworthy. Sweden’s liberal public-records laws proved invaluable again. Research Group filed public information requests and collected thousands of e-mail addresses of parliament members, judges, and other government officials. For good measure, Fredriksson threw in a list of a few million e-mail addresses he’d found floating around on the Web. All told, Research Group assembled a list of more than 200 million addresses—more than 20 times the population of Sweden—to check against the database of 55,000 Avpixlat accounts.
Fredriksson gives lectures about online research, and he has found it’s easier to unmask people than many believe. “Anonymity online is possible, but it’s frail,” he says. He clicked on one Avpixlat user who had used his account to complain a lot about Muslims. He entered the user’s e-mail address into Google and found that the man had listed the address and his full name on the roster of his local boating club: “There he is.” If users’ e-mail addresses didn’t suffice, a researcher would begin wading through their comments, which sometimes numbered in the thousands, to glean clues to their identity.
Research Group toiled away for 10 months on the Avpixlat data, eventually identifying around 6,000 commenters, of whom only a handful were ever publicly named. A few months into the research, Fredriksson approached Expressen, whose investigative reporting on the Swedish far right he admired. The newspaper bought the story.
Payback
Research Group was so focused on analyzing the database that it did not seriously consider what the public fallout from the revelations might be. When the story came out, it sparked a firestorm. Angry Internet users, who saw the exposé as an assault on freedom of speech, began to distribute addresses of Research Group members as payback, a favored tactic of online intimidation known as “doxxing.” A Research Group member named My Vingren moved from her apartment after strange men visited one night. The address of Fredriksson’s parents was circulated. Debate about the ethics of the story raged, and even political opponents of the Sweden Democrats voiced reservations. Particularly egregious to some critics was that while many of Expressen’s targets were politicians, some were private citizens, including businesspeople and a professor. “I was this close to having a stress reaction,” Fredriksson says.
Fredriksson stands by Research Group’s work on the database. He does not believe anonymity should be protected if it’s used to spread hate. “I think there are legitimate causes for anonymity,” he says. “But I think the Internet is a wonderful thing—I’ve been part of spreading culture among the masses—and personally, I get pissed off when the Internet is abused by some people.” Still, he’s ambivalent about Expressen’s exposure of private citizens. Research Group left it up to Expressen to choose what to report. If it had been his choice, he says, he would only have exposed politicians. “It could have been a much stronger story if they had stuck to public figures,” he says.
Research Group emerged from the furor slightly shell-shocked but proud, with a newfound standing as a serious journalistic force. A few months later, the Swedish Association of Investigative Journalists gave the group and Expressen an award for the scoop. This past September, Expressen published a new series based on the data, exposing more Sweden Democrats. One had called a black man a chimpanzee, while another had suggested that Muslims were genetically predisposed to violence. For these stories, Research Group was nominated for the Stora Journalistpriset, Sweden’s most prestigious journalism prize.
The stories came out a week before Sweden’s general election and had, by all appearances, no effect on the outcome. In fact, the Sweden Democrats won 13 percent of the vote, doubling their previous result to become the third-largest party in Sweden. Some even suggested that Expressen had helped the Sweden Democrats by making them seem like victims. Fredriksson says he’s simply happy to have helped push their public persona a little closer to what he believes they stand for in their heart of hearts: the ugly id that’s visible in Avpixlat’s comments sections every day. “I say, well, we just showed that they are racist, and people are apparently liking that,” he says. “So, good for them.”
Easy Hacking
Retailers are putting their customers’ credit card details at the mercy of computer hackers because they have failed to upgrade an obsolete version of the Windows operating system on their machines, a leading online security researcher has warned.
A significant number of stores continue to run Windows XP even though Microsoft stopped providing security updates for the software almost nine months ago, James Lyne, the head of research at Sophos, said.
Windows XP, which was released in 2001, was one of the most popular operating systems in the world and many companies built their computer networks around it. However, Microsoft said last April that it would stop providing software updates for XP and concentrate on developing new software instead. When operating systems stop receiving security updates, they become more vulnerable to new types of cyberattack.
“Most retailers in the UK are either completely unprepared or unaware of the danger,” Mr Lyne said. “Or they are over-confident. For a very small amount of money, it is possible to get your hands on kit that can wreak havoc in their systems. And, because XP is not being updated, it is way easier to infect with malware.”
Mr Lyne demonstrated how to perform a hacking attack on Windows XP that was able to extract a string of credit card details in less than a minute. He set up a website, reallysaferetail.com, for which he bought an SSL [Secure Sockets Layer] security certificate for £30 online. These certificates activate a padlock icon that appears next to the address bar on a browser, indicating to a user that their connection is secure.
From a cloud computing service based in Ireland, he first infected the Windows XP system with malware, or malicious software, by exploiting one of its security holes. He then instructed the malware to download everything that was held on the computer’s memory in what is known as a RAM [random-access memory] scraper attack, something that is very difficult to detect. Mr Lyne was then able to search for credit card numbers in the downloaded file.
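The final search step is the least exotic part: card numbers are 13 to 19 digits long and carry a built-in checksum, the Luhn algorithm, so filtering a dump for them takes only a few lines. A sketch of that generic technique in Python, with "dump.bin" standing in for the scraped memory image:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Checksum carried by real payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:              # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

# "dump.bin" is an invented name for the captured memory image.
data = open("dump.bin", "rb").read().decode("latin-1")
for m in re.finditer(r"\b\d{13,19}\b", data):
    if luhn_valid(m.group()):
        print("possible card number:", m.group())
```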
He said that it was also possible to attack non-Windows XP machines, although it was far harder to carry out the initial infection with malware.
Web Pages Aren't Forever
The Web dwells in a never-ending present. It is—elementally—ethereal, ephemeral, unstable, and unreliable. Sometimes when you try to visit a Web page what you see is an error message: “Page Not Found.” This is known as “link rot,” and it’s a drag, but it’s better than the alternative. More often, you see an updated Web page; most likely the original has been overwritten. (To overwrite, in computing, means to destroy old data by storing new data in their place; overwriting is an artifact of an era when computer storage was very expensive.)
Or maybe the page has been moved and something else is where it used to be. This is known as “content drift,” and it’s more pernicious than an error message, because it’s impossible to tell that what you’re seeing isn’t what you went to look for: the overwriting, erasure, or moving of the original is invisible.
For the law and for the courts, link rot and content drift, which are collectively known as “reference rot,” have been disastrous. In providing evidence, legal scholars, lawyers, and judges often cite Web pages in their footnotes; they expect that evidence to remain where they found it as their proof, the way that evidence on paper—in court records and books and law journals—remains where they found it, in libraries and courthouses. But a 2013 survey of law- and policy-related publications found that, at the end of six years, nearly fifty per cent of the URLs cited in those publications no longer worked. According to a 2014 study conducted at Harvard Law School, “more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information.”
The overwriting, drifting, and rotting of the Web is no less catastrophic for engineers, scientists, and doctors. Last month, a team of digital library researchers based at Los Alamos National Laboratory reported the results of an exacting study of three and a half million scholarly articles published in science, technology, and medical journals between 1997 and 2012: one in five links provided in the notes suffers from reference rot. It’s like trying to stand on quicksand.
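Link rot, at least in its crudest form, is mechanical to measure: request every cited URL and record what comes back. A minimal sketch using Python's requests library, with placeholder URLs; content drift, where the page answers cheerfully with the wrong content, is far harder to catch automatically:

```python
import requests

# Placeholder citations; a real audit would read these from footnotes.
urls = [
    "https://example.com/cited-page",
    "https://example.org/old-footnote",
]

for url in urls:
    try:
        r = requests.head(url, allow_redirects=True, timeout=10)
        status = r.status_code            # 200 = alive, 404 = link rot, etc.
    except requests.RequestException as exc:
        status = f"unreachable ({exc.__class__.__name__})"
    print(url, "->", status)
```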
The footnote, a landmark in the history of civilization, took centuries to invent and to spread. It has taken mere years nearly to destroy. A footnote used to say, “Here is how I know this and where I found it.” A footnote that’s a link says, “Here is what I used to know and where I once found it, but chances are it’s not there anymore.” It doesn’t matter whether footnotes are your stock-in-trade. Everybody’s in a pinch. Citing a Web page as the source for something you know—using a URL as evidence—is ubiquitous. Many people find themselves doing it three or four times before breakfast and five times more before lunch. What happens when your evidence vanishes by dinnertime?
Quick-and-Dirty Home Security With a Smartphone
In these days of non-stop hacking, phishing and data breaches, it's easy to forget that regular old burglars are still lurking around to steal from your home. That's why I'm a big fan of home security systems, especially ones that let you watch your home from a distance and alert you automatically when someone breaks in.
Of course, you don't always need a security system that covers an entire house. Maybe you want to watch a specific drawer or jewelry box, your cubicle at work, the door to your room (if you have roommates or a snooping contractor), or what's happening in your hotel room while you're out. In those and similar cases, you might be able to set up a quick security system for free with the right tools.
Getting started
The main thing you need to get going is a smartphone (a tablet with a rear camera can work, too), a stand to keep the gadget upright and a monitoring app. Set up your gadget on the stand and point the camera in the direction you want to watch. Then start the app. When the app detects movement, it can alert you via e-mail or text, take pictures of the thief and even sound an alarm to scare them off.
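Under the hood, most of these apps rely on some variant of frame differencing: compare successive camera frames and treat a large enough change as motion. A rough sketch of that generic idea in Python with OpenCV; this is not the code of any app mentioned below, and the thresholds are arbitrary:

```python
import cv2

cap = cv2.VideoCapture(0)                  # default camera; index 0 is an assumption
ok, prev = cap.read()
assert ok, "no camera frame available"
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev, gray)         # pixel-wise change since the last frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:      # arbitrary "enough pixels changed"
        print("motion detected")           # a real app would alert, photograph, alarm
    prev = gray
```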
If you already have an old smartphone or tablet lying around, you can be up and running in no time. You can also use your main smartphone if you want to guard something overnight while you're sleeping, like a hotel room door.
So, let's take a look at the apps that turn your smartphone into a security camera. At the moment, few monitoring apps work for both Android and Apple, so I'll tell you about one for each.
Android users first
Android users can grab the free Salient Eye app. The name is a little odd, but the features are top notch.
It uses your phone's camera to sense motion, and alerts you via e-mail or text. Then it starts capturing photos of the thief and uploads them online to a free cloud storage account. A few seconds later, it triggers an audible alarm that, hopefully, scares the thief away. To shut off the alarm requires a password.
For the notification and uploading features, you will need to have your gadget connected to a cellular or Wi-Fi network. Salient Eye can still capture images and sound the alarm without a connection, but if the thief steals the gadget, then it doesn't do as much good, aside from the fact they'll be running around blaring an alarm.
In situations where you know someone is coming in, like a cleaning person, you can turn off the alarm so you can see what they're doing but not alert them. You can also turn off notifications if you just want to use it as a motion-activated alarm.
The developer claims Salient Eye can work up to 10 hours on battery alone, so it will even work in places you can't access an outlet, like a drawer or jewelry box, or on a camping trip. There's also a paid remote-control app that lets you turn it on and off from a distance.
For Apple folks
Apple users will want to download the free Manything app to both their old iPhone and new iPhone or iPad. Like Salient Eye, Manything uses your gadget's camera to detect motion and trigger an alert.
Unlike Salient Eye, it can capture video in addition to still photos, and it streams video live to the iPhone or iPad you have with you. It also stores up to 12 hours of video in a free cloud account.
Manything has other fancy features like adjustable motion sensitivity, programmable motion zones so it can watch very specific areas, easy time-lapse creation and a built-in remote control. If you're really feeling adventurous, Manything has IFTTT support for triggering updates to social media or even triggering Internet-connected home appliances like some LED light bulbs.
Your only limit is your imagination. Use Manything to record activity around a bird feeder or know when a child leaves their room at night. Or just stick to using it for security.
As I said, Manything is free, but it has some paid options that let you use it with more than one gadget at a time or get more than 12 hours of video recording storage in the cloud.
More security
As I said, a smartphone security camera is good for quick security or a limited area. If you want to upgrade your home security, though, I recommend wireless security cameras.
In addition to streaming video to you on the go and sending motion alerts, these have additional features like night vision, two-way audio and sometimes even pan and tilt control.
Whatever you end up buying, make sure that it has an encrypted signal and that you change the default password right away. Otherwise, hackers will have no trouble finding it on the Internet, logging in and using the camera to spy on you.
The algorithms that run our lives
Software is deciding who gets a loan, who counts as a citizen and what prices you pay online. Who will step in when the machines get out of hand?
"AMAZON is all kinds of broken." If you caught that tweet on 12 December last year, and were quick, you might have grabbed some exceptional bargains. For an hour only, Amazon was selling an odd mix of items – cellphones, video games, fancy-dress costumes, mattresses – for one penny.
The surprise price drop cost sellers dearly. Goods usually marked £100 went for a 99.99 per cent discount. Hundreds of customers leapt at the chance, often buying in bulk. Even though Amazon reacted quickly and cancelled many orders, it was unable to recall those its automated system had already dispatched from warehouses. Once set in motion, the process was hard to stop. Thanks to a software glitch, a handful of independent traders using Amazon's Marketplace lost stock worth tens of thousands of dollars. Some faced bankruptcy.
We only notice when algorithms go wrong. Most of the time they get on with business out of sight and out of mind. And business is booming. Automated processes are no longer simply tools at our disposal: they often make the decisions themselves. Much of the news we read, the music we listen to and the products we buy are served up automatically, based on statistical guesswork about what we want. Invisible chaperones shape our online experiences. Systems we can't examine and don't understand determine the route we take to work, the rates we get for mortgages, and the price we see for airfares.
Many are proprietary and all are complex, pushing them beyond public scrutiny. How can we be sure they're playing fair? A new wave of algorithm auditors are on the case, intent on pulling back the curtain on the hidden workings and hunting for undue bias or discrimination. But is this the fix? Do algorithms need better policing, or must we accept their nature as a price we pay for our automated world?
There's nothing inherently mysterious about them: an algorithm is simply a set of instructions for getting something done. The trouble is that algorithms get nested inside or bolted on to others, interacting in ever more complex ways. It can also be hard to predict how algorithms will behave with real-world data once released into the wild.
The scope of their influence is often unclear. Some people swear blind that they've seen the price of flights on one website jump after checking out a rival site, for example. Others think that's bunk, an urban myth for our times. Such debates highlight the shadowy nature of today's systems.
Not only are most algorithms secret recipes, sometimes even the developers who wrote them are in the dark. When Aniko Hannak at Northeastern University in Boston, Massachusetts, looked closely at how many of us have our search results skewed by factors like location and browsing history, she noted things even Google didn't know: for example, that around 12 per cent of searches get personalised. Google engineers thanked her. They'd never made such measurements and hadn't known the exact impact of their personalisation algorithms.
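Measuring personalisation from the outside is conceptually straightforward: issue the same query from different profiles and compare the result lists. A toy comparison in Python with invented results; Hannak's actual methodology controls for many more confounds:

```python
def jaccard(a, b):
    """Overlap between two result lists: 1.0 means identical sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Invented top-five results for one query from two different profiles.
profile_a = ["url1", "url2", "url3", "url4", "url5"]
profile_b = ["url1", "url2", "url6", "url4", "url7"]
print(f"result overlap: {jaccard(profile_a, profile_b):.0%}")  # 43% here
```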
Exposing hidden algorithms can cause outrage. That's what Christian Sandvig and his colleagues at the University of Michigan, Ann Arbor, found when they lifted the lid on Facebook's newsfeed algorithms, which decide which posts from friends and family we actually see. The team compared filtered and unfiltered feeds and found that Facebook's algorithms hid posts deemed uninteresting, according to unspecified criteria.
Around two-thirds of the participants in Sandvig's study had no idea that algorithms were deciding what they saw. Many were shocked and upset when posts from close friends or family were excluded. Some had been blaming themselves or their friends for the algorithms' work. "If you post something and it doesn't get any comments or likes, people assume that either their friends don't like the topic, or their friends don't like them," says Sandvig.
Even for news, it's a popularity contest. During last year's Ferguson riots in Missouri, for example, Facebook's newsfeeds were filled with posts about the Ice Bucket Challenge rather than the unrest, because the challenge posts had hundreds of thousands of likes.
What Sandvig's team did for Facebook, Hannak and her colleagues are doing for other online activity. Hannak is interested in how algorithms can tailor prices to different shoppers. In one recent study, the researchers looked at how online retailers such as Walmart, Office Depot and Expedia varied prices according to factors including a user's choice of browser, operating system and purchase history.
They found many instances of what they consider price discrimination though they are not sure of the rationale. Often the difference was small. Android users, for example, saw higher prices on about 6 per cent of items, though only by a few cents. In other cases, price quotes varied by up to $100. The greatest differences were typically seen between users who were logged in to a site and those who were not.
Crash damage
Hannak's group now wants to understand exactly how location influences search results. They are simulating hundreds of Android phones and spreading them across Ohio using faked GPS coordinates. They'll also be looking to see whether people from rich and poor neighbourhoods get different search results when hunting for financial services.
Evidence of that may already have come to light. Some think hidden algorithms played a part in the 2008 sub-prime mortgage crash. Between 2000 and 2007, US lenders like Countrywide Home Loans and DeepGreen doled out home loans at an unprecedented rate via automated online applications. "Everyone was saying what a great innovation it was," says Dan Power at the University of Northern Iowa in Cedar Falls. "Everyone was very high on these fast web-based loans. No one anticipated the problem."
The problem was granting so many high-risk loans without human oversight. Americans from minority groups suffered most in the resulting crash. Automated processes crunched through vast amounts of data to identify high-risk borrowers – who are charged higher interest rates – and targeted them to sell mortgages. "Those borrowers turned out to be disproportionately African American and Latino," says Seeta Gangadharan of the Open Technology Institute, a public policy think tank based in Washington DC. "Algorithms played a role in that process."
The exact degree to which algorithms were to blame remains unclear. But banks like Wells Fargo and Bank of America settled with several cities, including Baltimore, Chicago, Los Angeles and Philadelphia, for hundreds of millions of dollars over claims that their sub-prime lending had disproportionately affected minorities. Although the decision-making process big banks used to target and sell sub-prime loans may not have been new in itself, the reach and speed of those decisions when algorithms were the driving force was new. "It's the scale factor," says Gangadharan. "This was a problem that affected many people in the US and we have seen the effects fall along race and class lines in devastating ways."
Automated systems are replacing human discretion in ever more important decisions. In 2012, the US State Department started using an algorithm to randomly select the winners of the green card lottery. The system was buggy, however: it awarded visas only to people who applied on the first day, says Josh Kroll, a Princeton University computer scientist who is investigating the event. Those visas were rescinded, but it's a good example of how hidden algorithms can have a life-changing effect.
In a similar example, the documents leaked by Edward Snowden revealed that the National Security Agency uses algorithms to decide whether a person is a US citizen. According to US law, only non-citizens can have their communications monitored without a warrant. In the absence of information about an individual's birthplace or parents' citizenship, the NSA algorithms use other criteria. Is this person in contact with foreigners? Do they appear to have accessed the internet from a foreign country? Depending on what you do online, your citizenship might change overnight. "One day you might be a citizen, another you might be a foreigner," says John Cheney-Lippold, at the University of Michigan in Ann Arbor. "It's a categorical assessment based on an interpretation of your data, not your passport or your birth certificate."
Algorithms are also used to police voter fraud. Several US states use software called Crosscheck to remove duplicate entries from electoral registers. But people have been deleted simply for having the same name. As with the sub-prime algorithms, minorities are again hit hardest. The names it scrubs are disproportionately those of black, Asian and Hispanic voters, who are more likely to share names – such as Jackson, Kim or Garcia.
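The failure mode is visible in miniature. A toy sketch of a naive name-only join over two invented voter rolls; real matching systems use more fields, but when those extra fields are ignored or missing, common names collide exactly like this:

```python
# Invented voter records: (first name, last name, date of birth).
state_a = [("Maria", "Garcia", "1970-03-02"), ("James", "Jackson", "1982-11-09")]
state_b = [("Maria", "Garcia", "1991-07-21"), ("Ann", "Kim", "1964-05-30")]

# A naive join on name alone flags two different people as one voter.
flagged = [(a, b) for a in state_a for b in state_b if a[:2] == b[:2]]
for a, b in flagged:
    print("flagged as duplicate:", a, "vs", b)   # note the differing birth dates
```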
The next scandal may be prison sentencing. Judges and lawyers in Missouri can use a website to make an "Automated Sentencing Application". The system calculates incarceration costs for defendants, and weighs that against the likelihood the defendant will reoffend, based on prior criminal history and behavioural and demographic factors. Some think this will lead to minorities being given harsher sentences. Proxies like address, income and education level make it almost impossible to avoid racial bias. Similar systems are appearing across the US. "I think it's terrifying," says Sorelle Friedler, a computer scientist at Haverford College in Pennsylvania.
The scales are falling from our eyes as the impact of algorithms is felt in almost every area of our lives. What should we do about it? In many of these examples, the problem is not the algorithms themselves, but the fact that they over-amplify an existing bias in the data.
Higher standards
"People who work with algorithms are comfortable with the idea that they might produce these unintended results," says Sandvig. But for a growing number of people, that's not good enough. Christo Wilson, who works with Hannak at Northeastern University, thinks that large technology companies like Google and Facebook ought to be considered as public services that huge numbers of people rely on. "Given that they have a billion eyeballs, I think they have a responsibility to hold themselves to a higher standard," he says.
Wilson thinks that automated systems might be made more trustworthy if users can control exactly how their results are personalised – such as leaving gender out of the equation or ignoring income bracket and address. It would also help us learn how these systems work, he says.
Others are calling for a new regulatory framework governing algorithms, much like we have for the financial industry, for example. A recent report commissioned by the White House recommends that policy-makers pay more attention to what the algorithms do with the data they collect and analyse. To ensure accountability, however, there would need to be independent auditors who inspect algorithms and monitor their impact. We cannot leave it to governments or industry alone to respond to the problems, says Gangadharan.
“The big question now for me is who are the watchdogs,” says Sandvig. For now, he suggests it should be the researchers who are beginning to reveal algorithms' broader effects. Wilson, for example, is looking into setting up dummy credit profiles to better understand price-fixing systems. But independent auditors face tough obstacles. For a start, digging around inside many automated services violates their terms of use, which prohibit attempts to analyse how they work. Under the US Computer Fraud and Abuse Act, such snooping may even be illegal. And while public scrutiny is important, the details of proprietary algorithms need to be kept safe from competitors or hackers, for example.
What's more, most automated systems are too complex for humans to inspect by hand. So some researchers have developed algorithms that check other algorithms. Kroll is working on a system that would let an auditor verify that an algorithm did what it was supposed to with what it was given. In other words, it would provide a foolproof way of checking that the outcome of the green card lottery, for example, was in fact random. Or that a driverless car's algorithm for avoiding pedestrians treats both people walking and people in wheelchairs with the same caution.
Friedler has a different approach. By understanding the biases inherent in the underlying data, she hopes to eliminate bias in the algorithm. Her system looks for correlations between arbitrary properties – like height or address – and demographic groupings like race or gender. If the correlation is expected to lead to unwanted bias, then it would make sense to normalise the data. It is essentially affirmative action for algorithms, she says.
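A crude version of that first step is just a correlation screen over the training data. A sketch with invented numbers; Friedler's published method is considerably more careful about what counts as an illegitimate proxy:

```python
from statistics import correlation  # requires Python 3.10+

# Invented records: a "neutral" feature (postcode, encoded as a number)
# and membership in a protected group (0/1, purely for illustration).
postcode = [101, 102, 101, 103, 101, 102, 103, 101]
group    = [1,   0,   1,   0,   1,   0,   0,   1]

r = correlation(postcode, group)
print(f"correlation between postcode and group: {r:.2f}")
# A strong correlation marks the feature as a proxy: it should be
# normalised or dropped before a model is trained on it.
```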
That's fine for cases where discrimination is clear, where a system is found to be unfair or illegal. But what if there is disagreement about how an algorithm ought to behave? Many would say Facebook's filtering of its newsfeed keeps it readable. Some would argue that highly personalised price adjustment can benefit both customers and retailers. What's acceptable to some won't be for others.
As Sandvig notes, unlike for financial systems, there are no standards of practice governing algorithms. But how we want them to behave may turn out to be a harder question for society to answer than we think. Maybe we'll need an algorithm for that.
Preserving Data
The advantage of dried calf skin over a PDF document is that, while vellum might smell funny, take skill to make and be difficult to email, it will be readable in a thousand years.
Today anyone can (potentially) read Magna Carta. The same cannot be said of a document produced 25 years ago in Windows 3.0 and saved on a floppy disk.
Now researchers have begun to work on a way of ensuring we can preserve electronic documents, in the way that we save their physical counterparts.
“We don’t want our digital lives to fade away,” said Vint Cerf, vice-president at Google. “If we want to preserve them the same way we preserve books, we need to make sure that the digital objects we create will be rendered far into the future.”
Already some technologies were being lost. “Old digital imagery, where we have old coding techniques, [is] not necessarily recognised in modern software. Try to pull up these images on screen and they are effectively lost,” Mr Cerf told the annual meeting of the American Association for the Advancement of Science in San José. “If there are pictures that you really really care about then creating a physical instance is probably a good idea. Print them out, literally.”
Working with computer scientists at Carnegie Mellon University, he has begun developing a more sophisticated response: a system not just for keeping the digital information but for keeping the systems on which it runs.
“It’s not just a matter of preserving bits on some medium. We can always do that,” he said. “How do you preserve the software that created the bits, so they can be properly interpreted 1,000 years from now? Until now there has not been a very good answer to that question.”
Mr Cerf is considered one of the founding fathers of the internet. His proposed solution is to create a series of virtual machines — software replicas of old operating systems and the applications that run on them — that can then be emulated by any computer in the future.
In effect he wants the creation of a vast digital archive that historians can use to tell them precisely how to open any document, whether it was created in version 2.01 of Excel 97 running on Windows Millennium Edition, or on an iPad Air using the latest version of iWork.
This is done by recording the specifics of the operating system, and then taking a snapshot of the memory use of that system when the relevant application is running.
“If we create a big archive of this stuff, you should be able to run on any machine capable of operating virtual machines,” he said. He is now looking at setting up a series of these archives around the world, including at the Library of Alexandria.
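As a rough illustration (not Cerf's actual design), the archive amounts to a catalogue that pairs each historical format with the virtual-machine snapshot able to render it; every name in this sketch is invented:

```python
from dataclasses import dataclass

# Purely illustrative sketch of the catalogue such an archive implies:
# each historical format maps to a virtual-machine snapshot that can
# render it. All image names below are invented.

@dataclass(frozen=True)
class VMSnapshot:
    os_image: str        # software replica of the original operating system
    app_image: str       # the application that created the document
    memory_image: str    # snapshot taken while the application was running

CATALOGUE = {
    ("xls", "Excel 97"):    VMSnapshot("windows_me.img", "excel97.img", "excel97.mem"),
    ("pages", "iWork '13"): VMSnapshot("ios7_ipad_air.img", "iwork13.img", "iwork13.mem"),
}

def renderer_for(extension: str, creator: str) -> VMSnapshot:
    """Tell a future historian which emulated machine opens the file."""
    return CATALOGUE[(extension, creator)]

print(renderer_for("xls", "Excel 97"))
```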
The alternative, he said, is that despite living in a time that will soon produce more data a year than was created in the entire 20th century, we could leave our descendants with little clue as to how we spent our lives.
“We stand to lose a lot of our history. If you think about the quantity of documentation from our daily lives which is captured in digital form, like our interactions by email, people’s tweets, all of the worldwide web, then if you wanted to see what was on the web in 1994 you’d have trouble doing that.”
Hooking Customers
CICERO once said that “Nature has planted in our minds an insatiable desire to see the truth.” These days it would be truer to talk of an insatiable desire to check our e-mail and Twitter accounts, and to play a few games of Candy Crush Saga (as a British parliamentarian was recently caught doing during a committee meeting). It is reckoned that four-fifths of smartphone owners check their devices within 15 minutes of waking up, and that the typical user does so 150 times a day.
This time it is not nature but man that has done the planting. Internet entrepreneurs devote a lot of thought to getting people hooked on their products. How else can they survive in a world in which hundreds of new ones are launched every day? And smartphones and tablets have helped greatly: what could be more habit-forming than devices that are always evolving, always there and always buzzing with fresh diversions?
“Hooked”, a new book by Nir Eyal, a technology writer, gives an overview of one of the most interesting battles in modern business: the intense competition to create new digital products that monopolise people’s attention. Peter Drucker, a management guru, once said the aim of a business is to create a customer. For today’s digital firms the aim is to create an uber-user: a tapping, scrolling devotee who keeps coming back for more whenever he has a spare moment. Habit-forming products help companies squeeze more money or information out of their customers. Some video-game makers get players hooked and then charge them for virtual products; often these are just cosmetic changes to how the game looks, but sometimes players can buy boosts to their in-game powers that help them win. Google specialises in useful apps, from Gmail to Google Maps, that gently squeeze data from users, the better to serve them ads.
Such products also offer protection from competition: once you have incorporated Twitter into your daily routine and devoted time to developing a following, you will be reluctant to switch to a rival. Although companies must make their products pretty simple to use, so as to persuade people to take them up, they also need to find mechanisms that encourage them to invest a lot of time in the product. Getting started on Twitter or Facebook is simple; but the more you tweet, the better and more popular your Twitter account becomes, and the more you search for friends and family on Facebook the more useful it is.
How do these companies turn you into a user? The biggest challenge is to get their hook into you in the first place: that is, persuade you to install their app or click on their link rather than choose one of the many alternatives. The best way to do this is through social pressure: create a buzz that gets people talking about your product. But it will become habit-forming only if it satisfies an inner need. People keep visiting Facebook because they are keen to keep in with their pals. They keep checking Twitter and their e-mail because they are worried about being out of the loop if they don’t.
The makers of habit-forming products have clearly read the works of B.F. Skinner, the father of “radical behaviourism”, who found that training subjects by rewarding them in a variable, unpredictable way works best. That is why the number of monsters one has to vanquish in order to reach the next level in a game often varies. Faithful Twitter users are rewarded with more replies to their tweets, and more ego-boosting followers, but not according to any predictable formula. These variable rewards come in three forms. The reward of the tribe: people who use Twitter or Pinterest are rewarded with social validation when their tweets are retweeted or their pictures are pinned. The reward of the hunt: users quickly scroll through their feeds in search of the latest gossip or funny cat pictures. And the reward of self-fulfilment: people are driven to achieve the next level on a video game, or an empty e-mail inbox.
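The mechanics are simple to sketch. Here is a toy comparison, under an arbitrary payout rate, of a fixed schedule with the variable one Skinner found more potent:

```python
import random

# Toy comparison of reinforcement schedules. A fixed schedule rewards
# every nth action; the variable schedule pays out unpredictably at the
# same long-run rate. The rate (1 in 5) is arbitrary.

def fixed_schedule(actions: int, n: int = 5) -> list[bool]:
    return [(i + 1) % n == 0 for i in range(actions)]

def variable_schedule(actions: int, n: int = 5) -> list[bool]:
    # The player can never predict which tap brings the reward.
    return [random.random() < 1 / n for _ in range(actions)]

print(sum(fixed_schedule(1000)), "rewards, entirely predictable")
print(sum(variable_schedule(1000)), "rewards, never predictable")
```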
Should the makers of habit-forming products be praised as innovative entrepreneurs? Or shunned as the immoral equivalents of drug pushers? Ian Bogost, a designer of video games, describes them as nothing less than the “cigarette of this century”. Paul Graham, a Silicon Valley investor, worries that humans have not had time to develop societal “antibodies to addictive new things”. Mr Eyal pushes back against such hyperbole. Creating a habit-forming product is in fact very hard. There have been plenty of digital products, such as Farmville, that were crazes for a while but went out of fashion. There is an important distinction between a habit and an addiction: only about 1% of people who regularly play slot machines, one of the most habit-forming technologies ever created, can reasonably be described as addicted. The proportion is surely lower for Twitter and the like. In any case, Mr Eyal notes, unlike smoking and playing slot machines, some apps help inculcate good habits, such as dieting or exercising.
That said, it is hard to read “Hooked” without feeling a bit queasy. Companies are getting at once more sophisticated and more shameless. If any other business were found to be employing people with the title of “behaviour designers”, they would be seen as exploitative and downright creepy. The internet is becoming ever more powerful and pervasive. And every new technological leap makes it easier for behaviour designers to weave digital technology into consumers’ daily habits. As smartphones become loaded with ever more sensors, and with software that can interpret their users’ emotional states, the scope for manipulating minds is growing. The world is also on the cusp of a wearable revolution which will fix Google Glass to people’s skulls and put smart T-shirts onto their torsos: the irresistible, all-knowing machines will be ever more ubiquitous. And the trouble with insatiable desires is that the struggle to sate them leaves everyone as exhausted as they are unfulfilled.
Policing Facebook et al
ONCE UPON A time, governing the Facebook community was relatively simple, because users—mostly American college students—shared at least some cultural context for what was and wasn’t acceptable. But now Facebook’s 1.39 billion users span a range of ages, ethnicities, religions, gender identities, and nationalities, and Facebook’s ability to create a space that meets everyone’s definition of “safe” increasingly has been called into question.
Which is why today, Facebook updated its community guidelines, spelling out in unprecedented detail what constitutes unacceptable behavior. Yet the unwieldy specificity of the new guidelines only proves that Facebook’s policies and procedures surrounding user activity will never be a finished product. As the world’s largest social network, Facebook certainly can learn a lot from the past, but it can never fully anticipate the future.
“It’s a challenge to maintain one set of standards that meets the needs of a diverse global community,” Facebook executives wrote in a news release announcing the update. “For one thing, people from different backgrounds may have different ideas about what’s appropriate to share—a video posted as a joke by one person might be upsetting to someone else, but it may not violate our standards.”
The new guidelines address everything from hate speech to nudity, making it clear that things like revenge porn, graphic images that glorify violence, and posts that threaten self-harm or harm to others are explicitly prohibited. As always, anyone who violates these rules runs the risk of having their posts — or their accounts — blocked. According to the post, the updates to Facebook’s community guidelines aren’t so much tweaks to the guidelines as fully formed explanations of how Facebook historically has assessed controversial content. For instance, though Facebook always has had a policy against nudity, the new guidelines get very specific about what type of nudity that rule refers to.
“We remove photographs of people displaying genitals or focusing in on fully exposed buttocks,” the new policy reads. “We also restrict some images of female breasts if they include the nipple, but we always allow photos of women actively engaged in breastfeeding or showing breasts with post-mastectomy scarring.”
This and other lines in the community guidelines reveal just how reactive many of these rules are. Facebook’s anti-nudity policies have, in the past, provoked the ire of breast cancer survivors whose post-surgery photos repeatedly were removed. Banned photos of breastfeeding have been similarly controversial.
These impassioned responses essentially have forced Facebook to add nuance to its blanket policies. “Our policies can sometimes be more blunt than we would like and restrict content shared for legitimate purposes,” Facebook’s updated nudity policy reads. “We are always working to get better at evaluating this content and enforcing our standards.”
Facebook’s new guidelines also address last year’s “real name” scandal, in which the company drew fire from the drag queen community for its policy requiring users to set up accounts with their real names. After two weeks of fighting, Facebook rethought its policy, with Chris Cox, the company’s chief product officer, writing a lengthy apology on Facebook. “We’re going to fix the way this policy gets handled so everyone affected here can go back to using Facebook as you were,” he wrote.
Now, Facebook says users are free to sign up with their “authentic identities,” or the names they go by on a daily basis. And yet, just as these rules are adjustments to past policies, they, too, will likely be subject to adjustment down the line. The fact is, keeping social media “safe” for all users, with their many definitions of what that word means, is an unachievable task, and yet Facebook is not alone in chasing it. Twitter has faced similar dissent from its community regarding the policing of harassment on the platform, leading CEO Dick Costolo to recently pledge to begin kicking trolls off of Twitter “left and right, and making sure that when they issue their ridiculous attacks, nobody hears them.”
But for Facebook, Twitter, and indeed, all social networks, the central challenge of accomplishing this mission is that they cannot solve it the way they solve other problems, which is to say, with technology. Today, Facebook still relies on people to report a violation, and other people within Facebook to determine whether that piece of content truly does violate its guidelines. That’s because no amount of expertly crafted algorithms can substitute for human sensitivity and judgment. But with billions of pieces of content being created every day from all corners of the globe, no amount of rules can adequately address every little judgment call these human moderators are forced to make. And so, the guidelines will always be a work in progress, forever begging to be rewritten.
Passwords 2
When ten million passwords were leaked on to the internet, they appeared to confirm that attempts by internet security experts to make us improve our password strength had met with some success, even if, in the specific case of the leaked passwords, that success was also completely pointless.
While many of the passwords were still single words, such as “password”, there was also a clear attempt by many to make them harder to crack. The problem was, people seemed to do so in the same way.
“Users are becoming slightly more conscious of what makes a password strong,” explained a blog post by WP Engine, an internet company that performed the analysis. “For instance, adding a number or two at the end of a text phrase. That makes it better, right?”
Actually, no. They found that almost half a million passwords did this — and in 20 per cent of those all people did was put the numeral “1” at the end.
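WP Engine has not released its code, but a toy version of the check is easy to write; the sample passwords here are invented:

```python
from collections import Counter

# Toy re-run of the WP Engine observation: count passwords that are just
# a word with one digit bolted on, and see which digit people favour.
passwords = ["password", "monkey1", "dragon1", "letmein2", "sunshine1", "qwerty"]

suffixed = [p for p in passwords if p[:-1].isalpha() and p[-1].isdigit()]
favourite = Counter(p[-1] for p in suffixed).most_common(1)

print(f"{len(suffixed)} of {len(passwords)} are a word plus one digit")
print("favourite trailing digit:", favourite)  # [('1', 3)]
```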
Perhaps this is why some companies are now trying to move beyond passwords entirely.
Apple is phasing in thumbprint security, while it emerged this week that Yahoo! is giving users the option to associate their mobile phone with an account, and have a single-use password texted to it each time they want to log on.
Although the service is voluntary, Dylan Casey, an executive at Yahoo!, said that it was “the first step to eliminating passwords”. He said it was an acknowledgement that it was increasingly hard for people to remember all the passwords they had. “I don’t think we as an industry has done a good enough job of putting ourselves in the shoes of the people using our products,” he said.
It would certainly be a more sensible strategy than that of the people who improved upon “password” by using “wasspord” or, marginally more sophisticatedly, by tran5p05ing numb3r5 f0r l3tt3r5.
“We are, for the most part, predictably unimaginative when it comes to choosing passwords, despite a decade of warnings from password strength checkers during sign-ups,” said WP Engine. “We love shortcuts, and so do password crackers.”
Still, at least the tendency of human beings to be predictable threw up some interesting sociological facts. By looking at the usernames associated with the passwords, it was possible to infer age and sex.
For instance, John.Smith1982@gmail.com is probably a male born in 1982.
WP Engine found that people born in the 1980s and 1990s were more likely to use the word “love” in a password, that women used it twice as much as men and that — comfortingly in a selfie age — iloveyou appeared ten times as often as iloveme.
And what about qaz2ws and adgjmptw? The first is from the two leading diagonal columns on a keyboard. Adgjmptw, meanwhile, is the 20th most common keyboard pattern found – and it is produced by pushing the numbers 2 through to 9 on an alphanumeric keypad.
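The derivation is easy to verify: map each digit of a phone keypad to the first letter printed on its key.

```python
# Why "adgjmptw": pressing 2 through 9 once each on a phone keypad
# yields the first letter printed on every key.
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

print("".join(KEYPAD[d][0] for d in "23456789"))  # adgjmptw
```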
Virtual Crimes?
"Who are we when we live without consequences?" That's the question a detective poses angrily to the owner of the Hideaway, a virtual clubhouse catering to paedophiles in The Nether, Jennifer Haley's charged, compact play about online morality. The "Nether" of the title, a fully immersive relative of today's internet, lets them indulge in fantasies of molestation and murder – without ever meeting an actual child.
To Morris, the detective, the corruption of the Hideaway's clientele is real, even if their victims are not. Yet for all the pent-up fury with which she argues her case, she cannot induce Poppa, the Hideaway's charismatic proprietor, to admit he has done anything wrong.
Indeed, he insists that in his closely regulated realm – an idyllic and stunningly realised colonial manse – customers can expend their urges entirely safely. Everyone consents, no one comes to harm, and they are happy, as they could never be in the real world. What's wrong with that?
Outside the comfort zone
This is not a comfortable scenario, and The Nether is not a comfortable play; it takes courage to write anything that presents paedophiles in terms other than outright condemnation. Yet Haley's script does not require us to condone paedophilia, only to recognise that paedophiles have desires, motives and emotions too.
And it is remarkable how every character retains their complex humanity no matter what virtual atrocities we know they've committed.
It reminds us that if we regard them as sick – a favourite tabloid descriptor – we must think of them as needing cures, too. For those wary of what they might see, by the way, the play's power relies on suggestion, not shock.
The Nether's audience is confronted by unanswered and perhaps unanswerable moral propositions at every turn of its 85 minutes. Is freedom of speech absolute? Should you ever be prosecuted for the contents of your imagination? Does moral corruption lead to physical degradation? If no one is being forced to act against their will, should such actions nonetheless be forbidden?
In the frictionless Nether, stripped of real-world complications, the arguments are pared to their essentials, sometimes leaving the characters and the audience scrabbling for purchase. For example, Morris is desperate to learn whether there was ever "a real girl" beneath the Hideaway's simulacrum of 11-year-old Iris. The implication is that Poppa would then clearly be complicit in abuse, just as those who trade images of abuse today are complicit in their manufacture.
But such assurance is not easy to come by: every time the audience is tempted to think that we are approaching certitude, Haley's script snatches it away. By assuming the technology, and abstracting the problem, she allows the audience to focus on the Socratic method at the play's core and weigh up the protagonists' arguments in a relatively cool-headed manner.
Freedom's limits
And while Haley has picked the most extreme case around which to build her play, the way it probes the limits of freedom, morality and society has much broader relevance. You need look no further than the image-based forum 4chan, or perhaps even Twitter, to appreciate that.
But on the other hand, this approach also means it's hard to relate The Nether to the realities of our world.
We know that some paedophiles have a compulsive urge to abuse because of a brain injury, for example. Others have formed support groups to help them resist their urges, reportedly with some success. So it is hard to imagine that what works for one will work for all.
And our virtual reality technologies have not yet reached the point where online activities can convincingly substitute for real ones, which means that living without consequences remains the stuff of science fiction.
Almost science fact
But The Nether is only just science fiction. Debate began some years ago about the legality and morality of simulated child pornography: does it slake or fuel paedophilic lust? Virtual children have been used to ensnare paedophiles, although there have also been suggestions that childlike sex robots could be used to treat them.
Given recent rapid advances in haptics and virtual reality, combined with ready extrapolation from today's "dark net", the world depicted in the play, where some sites don't appear in any search index and are sealed off from outside scrutiny, may not be that far off.
If that future does arrive, how will we deal with it? The Nether is a stark reminder that, lacking the evidence and perhaps the will, we have ducked many of the toughest questions about how to behave in virtual spaces. The Nether, dark though it is, makes a compelling case that we should now engage fully: its critique of our times is breathtakingly powerful. Let's hope it doesn't prove equally prescient about times to come.
Rise of the Robots
From the self-checkout aisle of the grocery store to the sports section of the newspaper, robots and computer software are increasingly taking the place of humans in the workforce. Silicon Valley executive Martin Ford says that robots, once thought of as a threat to only manufacturing jobs, are poised to replace humans as teachers, journalists, lawyers and others in the service sector.
"There's already a hardware store [in California] that has a customer service robot that, for example, is capable of leading customers to the proper place on the shelves in order to find an item," Ford tells Fresh Air's Dave Davies.
In his new book, Rise of the Robots, Ford considers the social and economic disruption that is likely to result when educated workers can no longer find employment.
On robots in manufacturing
Any jobs that are truly repetitive or rote — doing the same thing again and again — in advanced economies like the United States or Germany, those jobs are long gone. They've already been replaced by robots years and years ago.
So what we've seen in manufacturing is that the jobs that are actually left for people to do tend to be the ones that require more flexibility or require visual perception and dexterity. Very often these jobs kind of fill in the gaps between machines. For example, feeding parts into the next part of the production process or very often they're at the end of the process — perhaps loading and unloading trucks and moving raw materials and finished products around, those types of things.
But what we're seeing now in robotics is that finally the machines are coming for those jobs as well, and this is being driven by advances in areas like visual perception. You've now got robots that can see in three dimensions, and that's getting much better and also becoming much less expensive. So you're beginning to see machines that are starting to have the kind of perception and dexterity that begins to approach what human beings can do. A lot more jobs are becoming susceptible to this and that's something that's going to continue to accelerate, and more and more of those jobs are going to disappear and factories are just going to relentlessly approach full automation where there really aren't going to be many people at all.
There's a company here in Silicon Valley called Industrial Perception which is focused specifically on loading and unloading boxes and moving boxes around. This is a job that up until recently would've been beyond the robots because it relies on visual perception often in varied environments where the lighting may not be perfect and so forth, and where the boxes may be stacked haphazardly instead of precisely and it has been very, very difficult for a robot to take that on. But they've actually built a robot that's very sophisticated and may eventually be able to move boxes about one per second and that would compare with about one every six seconds for a particularly efficient person. So it's dramatically faster and, of course, a robot that moves boxes is never going to get tired. It's never going to get injured. It's never going to file a workers' compensation claim.
On a robot that's being built for use in the fast food industry
Essentially, it's a machine that produces very, very high quality hamburgers. It can produce about 350 to 400 per hour; they come out fully configured on a conveyor belt ready to serve to the customer. ... It's all fresh vegetables and freshly ground meat and so forth; it's not frozen patties like you might find at a fast food joint. These are actually much higher quality hamburgers than you'd find at a typical fast food restaurant. ... They're building a machine that's actually quite compact that could potentially be used not just in fast food restaurants but in convenience stores and also maybe in vending machines.
On automated farming
In Japan they've got a robot that they use now to pick strawberries and it can do that one strawberry every few seconds and it actually operates at night so that they can operate around the clock picking strawberries. What we see in agriculture is that's the sector that has already been the most dramatically impacted by technology and, of course, mechanical technologies — it was tractors and harvesters and so forth. There are some areas of agriculture now that are almost essentially, you could say, fully automated.
On computer-written news stories
Essentially it looks at the raw data that's provided from some source, in this case from the baseball game, and it translates that into a real narrative. It's quite sophisticated. It doesn't simply take numbers and fill in the blanks in a formulaic report. It has the ability to actually analyze the data and figure out what things are important, what things are most interesting, and then it can actually weave that into a very compelling narrative. ... They're generating thousands and thousands of stories. In fact, the number I heard was about one story every 30 seconds is being generated automatically and that they appear on a number of websites and in the news media. Forbes is one that we know about. Many of the others that use this particular service aren't eager to disclose that. ... Right now it tends to be focused on those areas that you might consider to be a bit more formulaic, for example sports reporting and also financial reporting — things like earnings reports for companies and so forth.
On computers starting to do creative work
Right now it's the more routine formulaic jobs — jobs that are predictable, the kinds of jobs where you tend to do the same kinds of things again and again — those jobs are really being heavily impacted. But it's important to realize that that could change in the future. We already see a number of areas, like [a] program that was able to produce [a] symphony, where computers are beginning to exhibit creativity — they can actually create new things from scratch. ... [There is] a painting program which actually can generate original art; not to take a photograph and Photoshop it or something, but to actually generate original art.
Moore's Law Keeps Going
Personal computers, cellphones, self-driving cars—Gordon Moore predicted the invention of all these technologies half a century ago in a 1965 article for Electronics magazine. The enabling force behind those inventions would be computing power, and Moore laid out how he thought computing power would evolve over the coming decade. Last week the tech world celebrated his prediction, because it has held true with uncanny accuracy for the past 50 years.
It is now called Moore’s law, although Moore (who co-founded the chip maker Intel) doesn’t much like the name. “For the first 20 years I couldn’t utter the term Moore’s law. It was embarrassing,” the 86-year-old visionary said in an interview with New York Times columnist Thomas Friedman at the gala event, held at the Exploratorium science museum. “Finally, I got accustomed to it where now I could say it with a straight face.” He and Friedman chatted in front of a rapt audience, with Moore cracking jokes the whole time and doling out advice, like how once you’ve made one successful prediction, you should avoid making another. In the background Intel’s latest gadgets whirred quietly: collision-avoidance drones, dancing spider robots, a braille printer—technologies all made possible via advances in processing power anticipated by Moore’s law.
Of course, Moore’s law is not really a law like those describing gravity or the conservation of energy. It is a prediction that the number of transistors (a computer’s electrical switches used to represent 0s and 1s) that can fit on a silicon chip will double every two years as technology advances. This leads to incredibly fast growth in computing power without a concomitant expense and has led to laptops and pocket-size gadgets with enormous processing ability at fairly low prices. Advances under Moore’s law have also enabled smartphone verbal search technologies such as Siri—it takes enormous computing power to analyze spoken words, turn them into digital representations of sound and then interpret them to give a spoken answer in a matter of seconds.
Another way to think about Moore’s law is to apply it to a car. Intel CEO Brian Krzanich explained that if a 1971 Volkswagen Beetle had advanced at the pace of Moore’s law over the past 34 years, today “you would be able to go with that car 300,000 miles per hour. You would get two million miles per gallon of gas, and all that for the mere cost of four cents.”
Moore anticipated the two-year doubling trend based on what he had seen happen in the early years of computer-chip manufacture. In his 1965 paper he plotted the number of transistors that fit on a chip since 1959 and saw a pattern of yearly doubling that he then extrapolated for the next 10 years. (He later revised the trend to a doubling about every two years.) “Moore was just making an observation,” says Peter Denning, a computer scientist at the Naval Postgraduate School in California. “He was the head of research at Fairchild Semiconductor and wanted to look down the road at how much computing power they’d have in a decade. And in 1975 his prediction came pretty darn close.”
But Moore never thought his prediction would last 50 years. “The original prediction was to look at 10 years, which I thought was a stretch,” he told Friedman last week. “This was going from about 60 elements on an integrated circuit to 60,000—a 1,000-fold extrapolation over 10 years. I thought that was pretty wild. The fact that something similar is going on for 50 years is truly amazing.”
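Both figures are simple powers of two, as a quick check of the arithmetic shows:

```python
# Moore's original 1965 extrapolation: a doubling every year for a decade.
print(2 ** 10)                # 1024: the "1,000-fold" jump from ~60 to ~60,000 elements

# The revised law: a doubling every two years, sustained over 50 years.
print(f"{2 ** (50 // 2):,}")  # 33,554,432-fold growth in transistors per chip
```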
Just why Moore’s law has endured so long is hard to say. His doubling prediction turned into an industry objective for competing companies. “It might be a self-fulfilling law,” Denning explains. But it is not clear why it is a constant doubling every couple of years, as opposed to a different rate or fluctuating spikes in progress. “Science has mysteries, and in some ways this is one of those mysteries,” Denning adds. Certainly, if the rate could have gone faster, someone would have done it, notes computer scientist Calvin Lin of the University of Texas at Austin.
Many technologists have forecast the demise of Moore’s doubling over the years, and Moore himself states that this exponential growth can’t last forever. Still, his law persists today, and hence the computational growth it predicts will continue to profoundly change our world. As he put it: “We’ve just seen the beginning of what computers are going to do for us.”
The Racist Internet
EARLIER THIS WEEK, Google Maps suffered the latest in a series of embarrassing occurrences. It was discovered that when searching for “n***a house” and “n***a king,” Maps returned a surprising location: the White House. A search for “slut’s house” led to an Indiana women’s dorm. Initially, you may have suspected this to be the work of a lone vandal, or even a coordinated campaign. But Google Maps gave racist, degrading results not because it was compromised, but because the internet itself is racist and degrading.
That revelation comes from a Google statement posted yesterday evening. “Certain offensive search terms were triggering unexpected maps results, typically because people had used the offensive term in online discussions of the place,” wrote Jen Fitzpatrick, VP of Engineering & Product Management. “This surfaced inappropriate results that users likely weren’t looking for.”
It’s as remarkable as it is disheartening. What it shouldn’t be, though, is surprising.
“This is not new by any means,” says University of Michigan professor and co-editor of Race After the Internet Lisa Nakamura. “This is just a higher-profile example of something that’s been happening a really long time, which is that user-generated content used to answer questions reflects those pervasive attitudes that most people don’t want to think about, but are really common.” And in this case, are exceptionally racist.
That also makes it different from other prominent Maps gaffes, like the image of a giant Android mascot urinating on an Apple logo that surfaced in late April. That previous embarrassment was the result of abuse of Google’s Map Maker tool, which allowed users to create entries for far-flung places in an effort to crowdsource its digital cartography. That was manufactured menace: lone actors tweaking the system.
That sort of prank is also easier to prevent; the abuse those deliberate spammers caused prompted Google to shut down its Map Maker tool on May 12, a Google spokesperson confirmed to WIRED, a nuclear option that shuttered a service that had been running since 2008.
The type of invective that led to this more recent Google Maps grotesqueness, though, isn’t something you can simply flip a switch to turn off, because it’s woven into the fabric of the internet itself. Essentially, we’re making internet algorithms racist.
Bomb Dot Com
What happened with the White House and other victims of untoward Google Maps results is most reminiscent of the popular practice known as “Googlebombing.” You may remember how in the mid-2000s a search for “santorum” led to a site that attempted to define the then-Senator’s name as a sex act, or that a search for “miserable failure” returned George W. Bush’s official page as a top result.
A Googlebomb was simple enough to produce; practitioners simply gamed Google’s search-rank algorithm through heavy tactical linking of specific phrases. The more a site is linked to, generally speaking, the higher Google will rank it. It’s a system that’s easily gamed if you have enough time and HTML on your hands.
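A drastically simplified sketch (nothing like Google's real pipeline) shows why the tactic works: if a page's score is just its inbound-link count, a coordinated campaign trivially lifts the target. The sites below are invented:

```python
from collections import Counter

# Drastically simplified link-based ranking: a page's score is just its
# inbound-link count. Enough to show how a coordinated linking campaign
# moves a target to the top. All sites are invented.
links = [
    ("blog-a", "target.example"), ("blog-b", "target.example"),
    ("blog-c", "target.example"), ("news-site", "innocent.example"),
]

scores = Counter(destination for _, destination in links)
print([page for page, _ in scores.most_common()])
# ['target.example', 'innocent.example']
```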
What happened with Google Maps appears to have the same technical foundation as those concerted campaigns. As search guru Danny Sullivan points out at SearchEngineLand, a Google update late last year applied similar ranking logic to Maps, incorporating mentions of locations across the Web to help more accurately surface them in searches, and to provide richer descriptions when they appear. In theory, this helps customers find shops and services near them that might otherwise be labeled too vaguely to be helpful. Which is nice!
In practice, though, it also means that if enough people online refer to a specific place using vile epithets, even one of the most recognizable landmarks in the United States can be reduced to racist garbage. And it’s important to understand that while the technical function of producing the recent racist results is similar to how a Googlebomb works, there’s one very big fundamental difference: A Googlebomb is calculated. A group of people decided they wanted to game the “santorum” results and made it happen. In the case of the White House and other offensive Maps searches, the algorithm wasn’t subject to a coordinated effort; it just gathered up all the data the internet could provide, and the internet provided trash.
This is also what makes what happened to Google Maps different from Flickr’s similar algorithmic issues. Recently, the photo site launched new auto-tagging features that intelligently label photos based on their contents. Unfortunately, it was discovered that it had labeled photos of concentration camps as “jungle gyms,” and at least two photos of human beings (one man, one woman) as “ape.” Those errors, embarrassing and unfortunate as they are, stem not from a critical mass of offending users but from an algorithm that’s more easily confused than advertised or intended.
Garbage In, Garbage Out
It’s easy to forget how much grossness lurks online, especially now that our browsing habits are largely dictated by social channels like Twitter and Facebook (the latter of which works overtime, literally, to scrub its feeds of filth). We rarely see tabs that aren’t presented to us either by friends or trusted sources; our paths to content are so rigidly defined that it’s almost impossible to find yourself lost in a bad part of town.
That effect isn’t limited to social networks, either. “The fact that most people believe Google results are a reflection of reality is the real problem,” says S. Shyam Sundar, Co-Director of Penn State University’s Media Effects Research Laboratory. “It’s like believing that TV news is an unbiased mirror of society.” Google doesn’t show us the world; just a curated version that it thinks we want to see.
Meanwhile, somewhere not far from your Chrome cul-de-sac there are enough ongoing conversations in which the White House is casually, consistently, and pervasively called this horrible thing, that the world’s largest arbiter of information also identifies it as such. Lone, manipulative racists can certainly cause damage, but at least they can be dismissed as outliers. Google doesn’t do outliers; it does zeitgeist. And the zeitgeist of the internet, the real one, the one that you don’t normally see, turns out to be disgusting.
“The Web, from the very beginning, has been a haven for the explicitly racist speech by white supremacists,” says Charlton McIlwain, New York University Associate Professor of Media, Culture, and Communication. “But it has also become a haven for people who fancy themselves as egalitarian to express the kind of racial resentments, anger, and mistrust that they know is not publicly acceptable.” Both McIlwain and Nakamura note that if anything, the problem has gotten worse in recent years; McIlwain points to research that shows a “dramatic increase in racist speech online” since Obama’s 2008 election, while Nakamura attributes the uptick to the resentment that comes from “a feeling of entitlement that straight white men had for a long time that they don’t have anymore.”
Google plans to fix this recent flare-up the same way it did the Googlebombing of the aughts: by making them disappear. “Building upon a key algorithmic change we developed for Google Search, we’ve started to update our ranking system to address the majority of these searches,” explained Fitzpatrick in her statement. “Simply put, you shouldn’t see these kinds of results in Google Maps, and we’re taking steps to make sure you don’t.”
So even though these results might technically be what the internet dictates, Google will do its best to obscure them from view. One could argue that’s just as well; just because you live near a cesspool doesn’t mean you ever have to visit. But everyone we spoke with agreed that pretending the internet’s a more civilized place than it really is ultimately does more harm than good.
Facing up to the darker realities of the online experience, McIlwain says, “helps us realize that the Web itself is fraught with the same kinds of racial problems, controversies, and politics that we’ve dealt with offline for most of our country’s history.” Sundar believes that “the solution is to increase media literacy among internet users, not to give more editorial control to Google.”
A big part of that literacy? Understanding that while Google calls its surfacing of racism a “mess up,” it’s arguably the opposite. “People see [these results] as a glitch, a malfunction, but it’s not,” says Nakamura. “If anything, it works too well.”
Hacking Your Home
Smart devices are so easy to hack we could soon be robbed through our fridges
Never mind those hackers who have figured out a way to drive off with your car — what about the threat from your hi-tech fridge?
You will wander bleary-eyed into the kitchen one morning, and there on the refrigerator’s internet-connected touchscreen will be an announcement you haven’t seen before. In place of “Good morning, Jim!” or “Your egg tray is empty” will be a far less friendly message: “Pay up or we’ll jam your security locks. Then we’ll set off your fire alarms. Then we’ll blow up your boiler.”
Your fridge, in short, will be holding you to ransom. The once-seductive promise of an interconnected network of household appliances you can control from your iPhone has turned into something different: the threat that it can be controlled from someone else’s iPhone. Someone who is probably 16 years old and has sent you a message from Bulgaria.
The revelation last week that a pair of American researchers had hacked into the controls of a speeding Jeep Cherokee and succeeded in driving it off the road is merely the latest in a long line of potentially ruinous additions to the arsenals of cybercrime. The incident has led to the recall of 1.4m cars by the Chrysler group in America.
The FBI has calculated that malicious hackers earned more than £12m last year from the most notorious type of what has become known as “ransomware” — virus-like software installed on computer hard drives. It shuts down access to files and offers users an ultimatum: if you don’t transfer money to elusive accounts that are usually beyond the reach of European or American investigators, the contents of your hard drives are destroyed. You want to save all your nice family photos and the text of that memoir you’ve nearly completed? Send £1,000 to this account in Bulgaria.
They call it the “internet of things” — the fast-spreading vogue for smart watches, phones, cars and any other consumer item run by an onboard computer. Yet the gadgets we cannot live without are turning into the internet of targets. The hackers who once devoted their code-cracking exploits to desktop computers are turning their sights elsewhere.
In last week’s case of the Jeep Cherokee, the American hackers in question turned out to have honourable intentions: Charlie Miller and Chris Valasek are security experts who tipped off the car’s manufacturer, Chrysler, before they showed reporters from Wired magazine exactly how they commandeered the Jeep.
Chrysler has already issued a software fix for the flaw that enabled the two hackers to control the car from a laptop more than a mile away. Yet the threat appeared far from over when a British company, NCC, revealed that another flaw in the modern generation of computerised car infotainment systems had allowed its researchers to seize remote control of a car via DAB (digital audio broadcasting) radio.
“We took over the infotainment system and from there reprogrammed certain pieces of the vehicle so we could send control commands,” NCC’s Manchester-based research director, Andy Davis, told the BBC.
It was, in short, a difficult week for guardians of cybersecurity, compounded by the revelation that the confidential customer files of the Ashley Madison dating site had been seized by hackers who threatened to expose a world of hurt for up to 37m wannabe adulterers.
Clearly, it surprises few people these days that even the most secure of websites — from Nasa, the Pentagon and the White House downwards — have trouble fending off cyberattacks. Barely a week passes by without a giant of commerce lamenting a data breach.
What is new is that the devices we have come to take for granted — the things we love precisely because they keep us plugged in to our friends, our followers, our ever-present virtual lives — have become gateways to mayhem.
Modern cars are filled with what security experts call “attack surfaces”, vulnerable to outside manipulation — from remote locking key fobs to satellite radios, Bluetooth connections and even tyre pressure monitors that send wireless signals to the dashboard.
Security experts argue that while many aspects of vehicle technology — particularly relating to safety — have made giant leaps forward in recent decades, the programming protocols used in most car computer systems were created in the 1980s, and have not been refined much since.
Notably, they lack sophisticated message-filtering technology, which means that when the computer in your new sports car is told to slam on the brakes, it does not know if the message came from collision sensors under the bonnet, or a DAB signal that a teenager in Romania just hacked.
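What the missing safeguard might look like can be sketched with a standard keyed message-authentication code; this illustrates the general technique, not any manufacturer's protocol:

```python
import hashlib
import hmac

# Sketch of the message authentication most in-car networks lack: sender
# and receiver share a key, and every command carries a tag that cannot
# be forged without that key. An illustration of the general technique,
# not any manufacturer's actual protocol.
KEY = b"key shared by the brake sensors and the brake controller"

def send(command: bytes) -> tuple[bytes, bytes]:
    return command, hmac.new(KEY, command, hashlib.sha256).digest()

def accept(command: bytes, tag: bytes) -> bool:
    expected = hmac.new(KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)  # constant-time comparison

cmd, tag = send(b"BRAKE")
assert accept(cmd, tag)                  # genuine sensor message accepted
assert not accept(b"BRAKE", b"forged!")  # spoofed radio-borne command rejected
```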
In recent years, malicious hackers have descended on these attack surfaces in droves, allowing a new generation of savvy techno-thieves to skip the “breaking windows and fumbling for wires underneath the dashboard” scenario beloved of so many Hollywood films. Instead they carry discreet gadgets that scan the radio signals controlling entry systems, wait for a nearby car door to pop open, then climb in and drive away.
Just listen to Samy Kamkar, an American hacker best known for creating the world’s fastest spreading computer virus. Kamkar’s Samy worm was released onto the MySpace site in 2005, and infected 1m users in less than a day, posting the words “Samy is my hero” on the profile page of victims.
He has since become a security consultant, probing corporate software for flaws and publicising his findings. He has discovered how to hack into remote-controlled garage doors, and recently told The Washington Post: “I’ve pretty much found attacks for every car I’ve looked at. I haven’t been able to start every car, but in my testing I’ve been able to unlock any car.”
It does not stop with cars (or fridges). American researchers have exposed security flaws in a wide range of computerised medical tools, from x-ray and CT scanning machines, used to detect cancerous tumours, to devices that dispense medicines through drips.
The threat may well prove more alarming to patients than genuinely disruptive to hospitals, but nerves were not exactly calmed by an episode of Homeland, the popular television series. The show’s main character, played by the British actor Damian Lewis, helped to bump off a US vice-president by remotely interfering with his pacemaker.
Other institutions are equally nervous. In 2011 a group of security researchers — the title that is usually applied to benevolent hackers — revealed that they knew how to break into the systems that controlled the sliding automatic doors of many American prisons. The Mexican drug lord El Chapo need not have built a tunnel to escape from his maximum security cell — he just needed a flunkey with an app on his iPhone to open the doors.
Last week attention focused on the good guys of computer hacking, the security specialists known as “white hats”. So who are the “black hats”, the malicious hackers who are in it for money or mischief or both?
Law enforcement agencies view the threat as worldwide — that innocent-looking Filipino granny could just as easily be a hacker as the pale-faced Brooklyn teenager who never comes out of his basement. America has repeatedly accused the Chinese government of using hackers against US targets; many of the most dangerous criminal hacking gangs have originated in eastern Europe.
Other stereotypes have formed for good reason. When a hacker seized control of the tram network in the Polish city of Lodz in 2008 — causing several derailments — the alleged culprit turned out to be a 14-year-old boy.
When a group of hackers calling itself the Lizard Squad attacked the gaming networks of Microsoft and Sony on Christmas Day last year, the trail led to a Finnish teenager named Julius Kivimaki, also known as “Ryan” and “Zeekill”. Kivimaki, 17, was eventually charged with more than 50,000 offences, including breaches of data protection laws, illegally accessing company secrets, online harassment and payment fraud. Earlier this month a Finnish court found him guilty, but imposed only a two-year suspended sentence and fined him €6,558 (about £4,600).
Security experts were stunned by the court’s leniency, which appears to have been influenced by Kivimaki’s age. “The danger in a decision such as this is that it emboldens young malicious hackers by reinforcing the already popular notion that there are no consequences for cybercrimes committed by individuals under the age of 18,” said Brian Krebs, a former Washington Post reporter who became a computer security expert after Chinese hackers invaded his home network in 2001. Kivimaki celebrated his let-off by changing his Twitter profile to read: “Untouchable hacker god”. A Twitter account for the Lizard Squad declared: “All the people that said we would rot in prison don’t want to comprehend what we’ve been saying since the beginning, we have free passes.”
Many companies have found that the shrewdest way to deal with the hacking threat is to hire expert hackers to probe corporate defences and identify lingering flaws. The internet has long been awash with possibly apocryphal rumours that successful hackers who fall foul of the law are offered a deal by the government: work for us and you will not go to jail. If Hollywood is any guide, the halls of the CIA, the National Security Agency and the FBI are crawling with unkempt youths slouching to work on advanced cryptography projects.
Big software firms have also recruited armies of “white hat” hackers to identify bugs in their systems before the “black hats” break in. Companies such as Facebook, Yahoo! and Google run “bug bounty” programmes, offering handsome rewards to anyone who reports a potential flaw in their software.
That is all very well for the industry insiders who know all about malware and botnets. But what about Chrysler and BMW, Hotpoint and Bosch? All justifiably proud of their consumer-friendly, hi-tech products, but not exactly on the front lines of cybersecurity research.
The real trouble with all this is us. It is the consumer who wants that flashy fridge with the fancy screen displaying Good Morning Britain at breakfast. It is we who demand a connected world of appliances, computers and cars.
So when one of those appliances writes us a ransom note one morning (“Your money or your virtual life”), we really should not blame the fridge.
Terms and Conditions
It is not a job that anyone is likely to want, but reading the terms and conditions of Britain’s most popular websites would be almost a full-time occupation.
If the average Briton wanted to read the small print of every website they visit in a typical year, it would take 124 working days, according to analysis by The Times. This equates to roughly six months of full-time employment.
The staggering length of the sites’ terms and conditions is to blame: for the ten most-visited websites in Britain, they amount to more words than Romeo and Juliet, Macbeth, Hamlet and The Tempest combined.
The terms and conditions on Apple’s iTunes website alone come to more than 23,000 words. The small print on the Argos website is the briefest in the top ten, but still totals 5,500 words.
The average privacy policy in Britain is 3,692 words long, and the average terms of service adds another 6,506.
To make matters worse, most terms are barely comprehensible. PayPal’s terms and conditions, a 35,000-word opus, advises customers: “If your PayPal payment funded by a Special Funding Source is rescinded (including, without limitation, Reversed) at a later time for any reason, PayPal will keep the amount that represents the portion of that PayPal payment that was funded by your Special Funding Source and (provided that the Special Funding Source has not already expired) reinstate the Special Funding Source.”
The longest terms found by The Times were at HSBC. Customers who open a current account with the bank are expected to wade through a colossal 36,000 words of associated small print.
The average person reads at a pace of 250 words a minute and visits 1,462 websites in a year, according to an American study.
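Those figures make the newspaper's sums easy to check:

```python
# Back-of-the-envelope check of The Times's sums, using the figures quoted above.
words_per_site = 3_692 + 6_506   # average privacy policy plus average terms of service
sites_per_year = 1_462
reading_speed = 250              # words per minute

minutes = words_per_site * sites_per_year / reading_speed
working_days = minutes / 60 / 8  # eight-hour working days
print(round(working_days))       # 124
```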
Taming Online Toxicity
Like many online spaces, League of Legends, the most widely played online video game in the world today, is a breeding ground for abusive language and behavior. Fostered by anonymity and amplified within the heated crucible of a competitive team sport, this conduct has been such a problem for its maker, Riot Games, that the company now employs a dedicated team of scientists and designers to find ways to improve interactions between the game’s players.
During the past few years the team has experimented with a raft of systems and techniques, backed by machine learning, that are designed to monitor communication between players, punish negative behavior, and reward positive behavior. The results have been startling, says Jeffrey Lin, lead designer of social systems at Riot Games. The software has monitored several million cases of suspected abusive behavior. Ninety-two percent of players who have been caught using abusive language against others have not reoffended. Lin, who is a cognitive neuroscientist, believes that the team’s techniques can be applied outside the video-game context. He thinks Riot may have created something of an antidote for online toxicity, regardless of where it occurs.
The project began several years ago when the team introduced a governance system dubbed, in keeping with the game’s fantasy theme, the Tribunal. The game would identify potential cases of abusive language and create a “case file” of the interaction. These files were then presented to the game’s community of players (an estimated 67 million unique users), who were invited to review the in-game chat logs and vote on whether they considered the behavior acceptable. Overall, the system was highly accurate, Lin says. Indeed, 98 percent of the community’s verdicts matched those of the internal team at Riot.
Several million cases were handled in this somewhat labor-intensive manner. Soon Lin and the team began to see patterns in the language toxic players used. To help optimize the process, they decided to apply machine learning techniques to the data. “It turned out to be extremely successful in segmenting negative and positive language across the 15 official languages that League supports,” says Lin.
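Riot has not released its classifier, but the broad technique, training a text model on chat lines that the community has already labelled, can be sketched generically; the example data is invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Generic sketch: train a text classifier on chat lines the community
# has already labelled (as the Tribunal's verdicts effectively did).
# Riot's real system is unpublished; all data here is invented.
chat_lines = [
    "gg wp everyone",
    "nice play mid, well warded",
    "uninstall the game you idiot",
    "report this useless feeder",
    "good call on baron, saved us",
    "you are trash, quit the game",
]
labels = [0, 0, 1, 1, 0, 1]  # 1 = toxic, as community reviewers might vote

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(chat_lines, labels)

print(model.predict(["what a great team"]))  # expected: [0]
print(model.predict(["you useless idiot"]))  # expected: [1]
```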
The new version of the system, now policed by technology instead of by other players, made it more efficient to provide feedback and impose consequences for toxic behavior in the game. It can now deliver feedback to players within five minutes, where previously it could take up to a week.
Lin says the system dramatically improved what the company calls “reform rates.” A player who has previously received a penalty, such as a suspension from ranked matches, is considered reformed if he or she avoids subsequent penalties for a period of time. “When we added better feedback to the punishments and included evidence such as chat logs for the punishment, reform rates jumped from 50 percent to 65 percent,” he says. “But when the machine learning system began delivering much faster feedback with the evidence, reform rates spiked to an all-time high of 92 percent.”
One challenge the system faces is discerning context. As in any team sport, players often build camaraderie through joshing or sarcasm that, in another context, could be deemed unkind or aggressive. A machine usually fails to catch the sarcasm. In fact, that is perhaps the most significant barrier to fighting online abuse with machine learning. “It is pretty fair to say that AIs that understand language perform best when minimal contextual information is necessary to compute the correct response,” explains Chris Dyer, an assistant professor at Carnegie Mellon University who works on natural language processing. “Problems that require integrating a lot of information from the context in which an utterance is made are much harder to solve, and sarcasm is extremely context dependent.”
Currently, Lin and his team try to solve the problem with additional checks and balances. Even when the system identifies a player as having displayed toxic behavior, other systems are checked to reinforce or veto the verdict. For example, it will attempt to validate every single report a player files to determine his or her historical “report accuracy.” “Because multiple systems work in conjunction to deliver consequences to players, we’re currently seeing a healthy 1 in 5,000 false-positive rate,” says Lin.
To truly curb abuse, Riot designed punishments and disincentives to persuade players to modify their behavior. For example, it may limit chat resources for players who behave abusively, or require players to complete unranked games without incident before being able to play top-ranked games. The company also rewards respectful players with positive reinforcement.
Lin firmly believes that the lessons he and his team have learned from their work have broader significance, and researchers who have studied the project agree. “One of the crucial insights from the research is that toxic behavior doesn’t necessarily come from terrible people; it comes from regular people having a bad day,” says Justin Reich, a research scientist at Harvard’s Berkman Center who has been studying Riot’s work. “That means that our strategies to address toxic behavior online can’t be targeted just at hardened trolls; they need to account for our collective human tendency to allow the worst of ourselves to emerge under the anonymity of the Internet.”
Nevertheless, Reich believes Lin’s work demonstrates that toxic behavior is not a fixture of the Web, but a problem that can be addressed through a combination of engineering, experimentation, and community engagement. “The challenges we’re facing in League of Legends can be seen in any online game, platform, community, or forum, which is why we believe we’re at a pivotal point in the timeline of online communities and societies,” says Lin. “Because of this, we’ve been very open in sharing our data and best practices with the wider industry, and we hope that other studios and companies take a look at these results and realize that online toxicity isn’t an impossible problem after all.”
Fair Algorithms
Quanta Magazine spoke with Cynthia Dwork about algorithmic fairness, her interest in working on problems with big social implications, and how a childhood experience with music shaped the way she thinks about algorithm design today. An edited and condensed version of the interview follows.
QUANTA MAGAZINE: When did it become obvious to you that computer science was where you wanted to spend your time thinking?
CYNTHIA DWORK: I always enjoyed all of my subjects, including science and math. I also really loved English and foreign languages and, well, just about everything. I think that I applied to the engineering school at Princeton a little on a lark. My recollection is that my mother said, you know, this might be a nice combination of interests for you, and I thought, she’s right.
It was a little bit of a lark, but on the other hand it seemed as good a place to start as any. It was only in my junior year of college, when I first encountered automata theory, that I realized I might be headed not for a programming job in industry but toward a PhD. I had been exposed to material that I thought was beautiful. I just really enjoyed the theory.
You’re best known for your work on differential privacy. What drew you to your present work on “fairness” in algorithms?
I wanted to find another problem. I just wanted something else to think about, for variety. And I had enjoyed the sort of social mission of the privacy work — the idea that we were addressing or attempting to address a very real problem. So I wanted to find a new problem and I wanted one that would have some social implications.
So why fairness?
I could see that it was going to be a major concern in real life.
How so?
I think it was pretty clear that algorithms were going to be used in a way that could affect individuals’ options in life. We knew they were being used to determine what kind of advertisements to show people. We may not be used to thinking of ads as great determiners of our options in life. But what people get exposed to has an impact on them. I also expected that algorithms would be used for at least some kind of screening in college admissions, as well as in determining who would be given loans.
I didn’t foresee the extent to which they’d be used to screen candidates for jobs and other important roles. So these things—what kinds of credit options are available to you, what sort of job you might get, what sort of schools you might get into, what things are shown to you in your everyday life as you wander around on the internet—these aren’t trivial concerns.
Your 2012 paper that launched this line of your research hinges on the concept of “awareness.” Why is this important?
One of the examples in the paper is: Suppose you had a minority group in which the smart students were steered toward math and science, and a dominant group in which the smart students were steered toward finance. Now if someone wanted to write a quick-and-dirty classifier to find smart students, maybe they should just look for students who study finance because, after all, the majority is much bigger than the minority, and so the classifier will be pretty accurate overall. The problem is that not only is this unfair to the minority, but it also has reduced utility compared to a classifier that understands that if you’re a member of the minority and you study math, you should be viewed as similar to a member of the majority who studies finance. That gave rise to the title of the paper, “Fairness Through Awareness,” meaning cross-cultural awareness.
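Dwork's example can be made concrete with a toy head count. The numbers below are invented, but they show how the proxy classifier scores well overall while failing the minority group completely:

```python
# Toy version of the "finance as a proxy for smart" example. In the
# minority group the smart students study math; in the majority they
# study finance. All population figures are invented.
majority_smart_finance = 900  # smart majority students, studying finance
minority_smart_math = 100     # smart minority students, studying math

# Quick-and-dirty classifier: "smart" means "studies finance".
found = majority_smart_finance   # correctly flagged
missed = minority_smart_math     # every smart minority student missed

print(f"overall recall:  {found / (found + missed):.0%}")  # 90%
print(f"minority recall: {0 / minority_smart_math:.0%}")   # 0%

# An "aware" classifier would treat minority students who study math as
# similar to majority students who study finance, recovering both
# fairness and accuracy.
```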
In that same paper you also draw a distinction between treating individuals fairly and treating groups fairly. You conclude that sometimes it’s not enough just to treat individuals fairly — there’s also a need to be aware of group differences and to make sure groups of people with similar characteristics are treated fairly.
What we do in the paper is, we start with individual fairness and we discuss what the connection is between individual fairness and group fairness, and we mathematically investigate the question of when individual fairness ensures group fairness and what you can do to ensure group fairness if individual fairness doesn’t do the trick.
What’s a situation where individual fairness wouldn’t be enough to ensure group fairness?
If you have two groups that have very different characteristics. Let’s suppose for example that you are looking at college admissions and you’re thinking about using test scores as your admission criterion. If you have two groups that have very different performance on standardized tests, then you won’t get group fairness if you have one threshold for the standardized-test score.
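A small simulation makes the point. If the two groups' scores follow different distributions (the parameters here are invented), a single cutoff admits the groups at very different rates:

```python
# Two groups with different score distributions and one shared cutoff.
# The means, spread, and threshold are invented for illustration.
import random

random.seed(0)
group_a = [random.gauss(1100, 150) for _ in range(10_000)]
group_b = [random.gauss(1000, 150) for _ in range(10_000)]

THRESHOLD = 1200

def admit_rate(scores, cutoff):
    return sum(s >= cutoff for s in scores) / len(scores)

print(f"group A admitted: {admit_rate(group_a, THRESHOLD):.1%}")
print(f"group B admitted: {admit_rate(group_b, THRESHOLD):.1%}")
# One cutoff admits group A at roughly three times the rate of group B,
# even though each group contains equally many "top of group" students.
```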
This is related to the idea of “fair affirmative action” you put forward?
In this particular case, our approach would boil down, in some sense, to what’s done in several states, like Texas, where the top students from each high school are guaranteed admission to any state university, including the flagship in Austin. By taking the top students from each different school, even though the schools are segregated, you’re getting the top performers from each group.
Something very similar goes into our approach to fair affirmative action. There’s an expert on distributive justice at Yale, John Roemer, and one of the proposals he has made is to stratify students according to the educational level of the mother and then in each stratum sort the students according to how many hours they spend each week on homework and to take the top students from each stratum.
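Both the Texas rule and Roemer's proposal are instances of the same procedure: partition the population into strata and take the top performers within each stratum. A minimal sketch, with invented field names and data:

```python
# Stratified top-slice selection in the spirit of Roemer's proposal:
# group students by stratum (e.g., mother's education level), sort each
# stratum by weekly homework hours, take the top slice of each.
from collections import defaultdict

def stratified_top(students, fraction=0.5):
    strata = defaultdict(list)
    for s in students:
        strata[s["stratum"]].append(s)
    selected = []
    for group in strata.values():
        group.sort(key=lambda s: s["homework_hours"], reverse=True)
        k = max(1, int(len(group) * fraction))  # top slice per stratum
        selected.extend(group[:k])
    return selected

students = [
    {"name": "A", "stratum": "no_hs_diploma", "homework_hours": 6},
    {"name": "B", "stratum": "no_hs_diploma", "homework_hours": 3},
    {"name": "C", "stratum": "college_degree", "homework_hours": 20},
    {"name": "D", "stratum": "college_degree", "homework_hours": 15},
]
# Sorting everyone together by hours would pick C and D; stratifying
# picks the top student from each background.
print([s["name"] for s in stratified_top(students)])  # ['A', 'C']
```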
Why wouldn’t it work to sort the entire population of students by the amount of time they spend on their homework?
Roemer made a really interesting observation that I found very moving, and that is: If you have a student from a very low-education background, they may not even realize it’s possible to spend a large number of hours studying per week. It’s never been modeled for them, it’s never been observed, nobody does it. It may not have even occurred to the student. That really strikes a chord with me.
What is it that you find so moving about that?
I had an interesting experience in high school. I’d started playing the piano at the age of about six, and I dutifully did my half-hour of practice a day. I was fine. But one time—I guess freshman year of high school—I passed by the auditorium and I heard somebody playing a Beethoven sonata. He was a sophomore, and I realized that you didn’t have to be on the concert-giving scale to play much, much better than I was playing. I actually started practicing about four hours a day after that. But it had not occurred to me that anything like this was possible until I saw that someone who was just another student could do it. I think probably this is why Roemer’s writing struck such a chord with me. I’d had this experience in my own very enriched life.
Your father, Bernard Dwork, was a mathematician and a longtime faculty member at Princeton, so in a sense you had an example to follow—as a scholar if not as a piano player. Did his work inspire yours in any way?
I don’t remember his work directly inspiring my interest in computer science. I think growing up in an academic household as opposed to a nonacademic household gave me a model for being deeply interested in my work and thinking about it all the time. Undoubtedly I absorbed some norms of behavior so that it seemed natural to exchange ideas with people and go to meetings and listen to lectures and read, but I don’t think it was mathematics per se.
Did that lesson about practice and the piano influence your approach to your research? Or, to put it another way, did you have experiences that taught you what it would take to be successful in computer science?
When I finished my course requirements in graduate school and I started to wonder how I could do research, it turned out that a very famous computer scientist, Jack Edmonds, was visiting the computer science department. I asked him, “How did your greatest results happen? Did they just come to you?” He looked at me, and stared at me, and yelled, “By the sweat of my brow!”
Is that how your best results have come to you?
It’s the only way.
You’ve said that “metrics” for guiding how an algorithm should treat different people are some of the most important things computer scientists need to develop. Could you explain what you mean by a metric and why it’s so crucial to ensuring fairness?
I think requiring that similar people be treated similarly is essential to my notion of fairness. It’s clearly not the entire story surrounding fairness—there are obviously cases in which people with differences have to be treated differently, and in general it’s much more complex. Nonetheless, there are clearly also cases in which people who should be viewed as similar ought to be treated similarly. What a metric gives you is a way of stating a requirement about how similarly any two people must be treated, which is accomplished by limiting the amount by which their treatment can differ.
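The 2012 paper formalizes this as a Lipschitz condition: each individual is mapped to a distribution over outcomes, and the distance between any two individuals' outcome distributions may not exceed the task-specific metric distance between the individuals themselves. A minimal check of that condition, using statistical distance and made-up numbers:

```python
# Sketch of the "Fairness Through Awareness" Lipschitz condition: the
# gap between two individuals' outcome distributions must not exceed
# the metric distance d(x, y). Distributions and distances are invented.

def total_variation(p, q):
    """Statistical distance between two outcome distributions."""
    return 0.5 * sum(abs(p[o] - q[o]) for o in p)

def is_lipschitz_fair(p, q, d_xy):
    # Similar people (small d_xy) must receive similar treatment.
    return total_variation(p, q) <= d_xy

# Two applicants the metric deems very similar (d = 0.1):
p = {"admit": 0.70, "reject": 0.30}
q = {"admit": 0.65, "reject": 0.35}
print(is_lipschitz_fair(p, q, d_xy=0.1))  # True: treatments differ by 0.05

q = {"admit": 0.20, "reject": 0.80}
print(is_lipschitz_fair(p, q, d_xy=0.1))  # False: gap of 0.5 exceeds 0.1
```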
You mentioned previously that you consider this work on fairness a lot harder than your work on privacy, in large part because it’s so hard to come up with these metrics. What makes this so hard?
Imagine presenting the applications of two students to a college admissions officer. These students may be quite different from one another. Yet the degree to which they’d be desirable members of the student body could be quite similar. Somehow this similarity metric has to enable you to compare apples to oranges and come up with a meaningful response.
How does this challenge compare to your earlier work on differential privacy?
I think this is a much harder problem. If there were a magical way of finding the right metric—the right way of measuring differences between people—I’d think we had gotten somewhere. But I don’t think humans can agree on who should be treated similarly to whom. I certainly have no idea how to use machine learning and other statistical methods to get a good answer to it. I don’t see how to avoid dealing with the fact that you need different notions of similarity, even for the same people, but for different things. For example, discriminating in advertising for hair products can make perfect sense, whereas the same discrimination in advertising for financial products is completely illegal.
When you frame it like that, it seems like a monumental task. Maybe even impossible.
I view this as a “sunshine” situation; that is, the metric that’s being used should be made public and people should have the right to argue about it and influence how it evolves. I don’t think anything will be right initially. I think we can only do our best and—this is the point that the paper makes very strongly—advocate sunshine for the metric.