THE device in your purse or jeans that you think is a cellphone - guess again. It is a tracking device that happens to make calls. Let's stop calling them phones. They are trackers.
Most doubts about the principal function of these devices were erased when it was recently disclosed that cellphone carriers responded 1.3 million times last year to law enforcement requests for call data. That's not even a complete count, because T-Mobile, one of the largest carriers, refused to reveal its numbers. It appears that millions of cellphone users have been swept up in government surveillance of their calls and where they made them from. Many police agencies don't obtain search warrants when requesting location data from carriers.
Thanks to the explosion of GPS technology and smartphone apps, these devices are also taking note of what we buy, where and when we buy it, how much money we have in the bank, whom we text and e-mail, what Web sites we visit, how and where we travel, what time we go to sleep and wake up - and more. Much of that data is shared with companies that use it to offer us services they think we want.
We have all heard about the wonders of frictionless sharing, whereby social networks automatically let our friends know what we are reading or listening to, but what we hear less about is frictionless surveillance. Though we invite some tracking - think of our mapping requests as we try to find a restaurant in a strange part of town - much of it is done without our awareness.
"Every year, private companies spend millions of dollars developing new services that track, store and share the words, movements and even the thoughts of their customers," writes Paul Ohm, a law professor at the University of Colorado. "These invasive services have proved irresistible to consumers, and millions now own sophisticated tracking devices (smartphones) studded with sensors and always connected to the Internet."
Mr. Ohm labels them tracking devices. So does Jacob Appelbaum, a developer and spokesman for the Tor project, which allows users to browse the Web anonymously. Scholars have called them minicomputers and robots. Everyone is struggling to find the right tag, because 'cellphone' and 'smartphone' are inadequate. This is not a semantic game. Names matter, quite a bit. In politics and advertising, framing is regarded as essential because what you call something influences what you think about it. That's why there are battles over the tags 'Obamacare' and 'death panels.'
In just the past few years, cellphone companies have honed their geographic technology, which has become almost pinpoint. The surveillance and privacy implications are quite simple. If someone knows exactly where you are, they probably know what you are doing. Cellular systems constantly check and record the location of all phones on their networks - and this data is particularly treasured by police departments and online advertisers. Cell companies typically retain your geographic information for a year or longer, according to data gathered by the Justice Department.
What's the harm? The United States Court of Appeals for the District of Columbia Circuit, ruling about the use of tracking devices by the police, noted that GPS data can reveal whether a person "is a weekly church goer, a heavy drinker, a regular at the gym, an unfaithful husband, an outpatient receiving medical treatment, an associate of particular individuals or political groups - and not just one such fact about a person, but all such facts." Even the most gregarious of sharers might not reveal all that on Facebook.
There is an even more fascinating and diabolical element to what can be done with location information. New research suggests that by cross-referencing your geographical data with that of your friends, it's possible to predict your future whereabouts with a much higher degree of accuracy.
This is what's known as predictive modeling, and it requires nothing more than your cellphone data.
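To make the idea concrete, here is a minimal sketch (mine, not the researchers') of how such a predictor might combine your own location history with your friends' histories. The place names, the logs and the weighting are invented for illustration; the actual studies use far more sophisticated models.

```python
from collections import Counter

# Hypothetical hour-by-hour location logs (e.g., cell-tower IDs); invented data.
my_log      = ["home", "cafe", "office", "office", "gym", "home"]
friend_logs = [["home", "cafe", "office", "office", "bar", "home"],
               ["home", "office", "office", "office", "gym", "home"]]

def predict_next(hour, my_log, friend_logs, alpha=0.7):
    """Blend my own habits with my friends' habits for the same hour of day.

    alpha weights my own history; (1 - alpha) weights my friends'.
    """
    mine   = Counter([my_log[hour]])
    theirs = Counter(log[hour] for log in friend_logs)
    scores = Counter()
    for place, n in mine.items():
        scores[place] += alpha * n / sum(mine.values())
    for place, n in theirs.items():
        scores[place] += (1 - alpha) * n / sum(theirs.values())
    return scores.most_common(1)[0][0]

print(predict_next(4, my_log, friend_logs))   # -> 'gym'
```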
If we are naive to think of them as phones, what should we call them? Eben Moglen, a law professor at Columbia University, argues that they are robots for which we - the proud owners - are merely the hands and feet. "They see everything, they're aware of our position, our relationship to other human beings and other robots, they mediate an information stream around us," he has said. Over time, we've used these devices less for their original purpose. A recent survey by O2, a British cell carrier, showed that making calls is the fifth-most-popular activity for smartphones; more popular uses are Web browsing, checking social networks, playing games and listening to music. Smartphones are taking over the functions that laptops, cameras, credit cards and watches once performed for us.
If you want to avoid some surveillance, the best option is to use cash for prepaid cellphones that do not require identification. The phones transmit location information to the cell carrier and keep track of the numbers you call, but they are not connected to you by name. Destroy the phone or just drop it into a trash bin, and its data cannot be tied to you. These cellphones, known as burners, are the threads that connect privacy activists, Burmese dissidents and coke dealers.
Prepaids are a hassle, though. What can the rest of us do? Leaving your smartphone at home will help, but then what's the point of having it? Turning it off when you're not using it will also help, because it will cease pinging your location to the cell company, but are you really going to do that? Shutting it down does not even guarantee it's off - malware can keep it on without your realizing it. The only way to be sure is to take out the battery. Guess what? If you have an iPhone, you will need a tiny screwdriver to remove the back cover. Doing that will void your warranty.
Matt Blaze, a professor of computer and information science at the University of Pennsylvania, has written extensively about these issues and believes we are confronted with two choices: "Don't have a cellphone or just accept that you're living in the Panopticon."
There is another option. People could call them trackers. It's a neutral term, because it covers positive activities - monitoring appointments, bank balances, friends - and problematic ones, like the government and advertisers watching us.
We can love or hate these devices - or love and hate them - but it would make sense to call them what they are so we can fully understand what they do.
Living With Mistakes
Some of the blogs I follow - Marginal Revolution, Ezra Klein - have given ample attention to Tim Harford's new book, Adapt: Why Success Always Starts with Failure. So I solipsistically assumed that everybody must be aware of it. But then I happened to glance at this book's Amazon ranking, which as I write is down on the wrong side of 1,500. This is an outrage, people! For the good of the world, a bigger slice of humanity should be aware of its contents.
So I'm doing my bit to publicize it. (I don't know Harford in any way, shape or form.)
Harford starts out with the premise that the world is a very complicated and difficult place. At the dawn of the automobile industry, roughly 2,000 car companies sprang into being. Less than 1 percent of them survived. Even if you make it to the top, it is very hard to stay there. The historian Leslie Hannah identified the ten largest American companies in 1912. None of them ranked among the top 100 by 1990.
Harford's basic lesson is you have to design your life to make effective use of failures. You have to design systems of trial and error, or to use a natural word, evolution. Most successful enterprises are built through a process of groping and adaptation, not planning.
The Russian thinker Peter Palchinsky understood the basic structure of smart change. First seek out new ideas and new things. Next, try new things on a scale small enough so that their failure is survivable. Then find a feedback mechanism so you can tell which new thing is failing and which is succeeding.
That's the model - variation, survivability, selection.
Harford then illustrates how this basic process can work across a variety of contexts, from business to war to poetry. He's an able guide to the world of human fallibility. For example, he cites James Reason, who identifies three kinds of error. First, there are slips. In 2005 a young Japanese trader meant to sell one share of stock at 600,000 yen but accidentally sold 600,000 shares at 1 yen.
Then there are violations, when someone intentionally breaks the rules. This is what Bernie Madoff did. Then there are mistakes - things you do on purpose but with unintentional consequences.
Errors can be very hard for outsiders to detect. A study by Alexander Dyck, Adair Morse and Luigi Zingales looked at 216 allegations of corporate fraud. Regulators and auditors uncovered the fraud in only one out of six of those cases. It was people inside the companies who were most likely to report fraud, because they have local knowledge. And yet 80 percent of these whistleblowers regret having reported the crimes because of the negative consequences they suffered. This is not the way to treat people who detect error.
Harford is an economic journalist, so he doesn't get into the psychological and spiritual traits you need to live with error and look it in the face, but he offers a very useful guide for people preparing to live in the world as it really is.
The Singularity: Kurzweil vs. Allen
Last week, Paul Allen and a colleague challenged the prediction that computers will soon exceed human intelligence. Now Ray Kurzweil, the leading proponent of the Singularity, offers a rebuttal. - Technology Review, Oct. 10, 2011.
Although Paul Allen paraphrases my 2005 book, The Singularity Is Near, in the title of his essay (cowritten with his colleague Mark Greaves), it appears that he has not actually read the book. His only citation is to an essay I wrote in 2001 ('The Law of Accelerating Returns') and his article does not acknowledge or respond to arguments I actually make in the book.
When my 1999 book, The Age of Spiritual Machines, was published, and augmented a couple of years later by the 2001 essay, it generated several lines of criticism, such as Moore's law will come to an end, hardware capability may be expanding exponentially but software is stuck in the mud, the brain is too complicated, there are capabilities in the brain that inherently cannot be replicated in software, and several others. I specifically wrote The Singularity Is Near to respond to those critiques.
I cannot say that Allen would necessarily be convinced by the arguments I make in the book, but at least he could have responded to what I actually wrote. Instead, he offers de novo arguments as if nothing has ever been written to respond to these issues. Allen's descriptions of my own positions appear to be drawn from my 10-year-old essay. While I continue to stand by that essay, Allen does not summarize my positions correctly even from that essay.
Allen writes that "the Law of Accelerating Returns (LOAR). . . is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are highly predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follow remarkably predictable paths.
If computer technology were being pursued by only a handful of researchers, it would indeed be unpredictable. But it's being pursued by a sufficiently dynamic system of competitive projects that a basic measure such as instructions per second per constant dollar follows a very smooth exponential path going back to the 1890 American census. I discuss the theoretical basis for the LOAR extensively in my book, but the strongest case is made by the extensive empirical evidence that I and others present.
Allen writes that "these 'laws' work until they don't." Here, Allen is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining the trend of creating ever-smaller vacuum tubes, the paradigm for improving computation in the 1950s, it's true that this specific trend continued until it didn't. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm. The technology of transistors kept the underlying trend of the exponential growth of price-performance going, and that led to the fifth paradigm (Moore's law) and the continual compression of features on integrated circuits. There have been regular predictions that Moore's law will come to an end. The semiconductor industry's roadmap titled projects seven-nanometer features by the early 2020s. At that point, key features will be the width of 35 carbon atoms, and it will be difficult to continue shrinking them. However, Intel and other chip makers are already taking the first steps toward the sixth paradigm, which is computing in three dimensions to continue exponential improvement in price performance. Intel projects that three-dimensional chips will be mainstream by the teen years. Already three-dimensional transistors and three-dimensional memory chips have been introduced.
This sixth paradigm will keep the LOAR going with regard to computer price-performance to the point, later in this century, where a thousand dollars of computation will be trillions of times more powerful than the human brain. And it appears that Allen and I are at least in agreement on what level of computation is required to functionally simulate the human brain.
Allen then goes on to give the standard argument that software is not progressing in the same exponential manner as hardware. In The Singularity Is Near, I address this issue at length, citing different methods of measuring complexity and capability in software that demonstrate a similar exponential growth. One recent study ('Report to the President and Congress, Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology' by the President's Council of Advisors on Science and Technology) states the following:
"Even more remarkable - and even less widely understood - is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed. The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade ... Here is just one example, provided by Professor Martin Grotschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grotschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later - in 2003 - this same model could be solved in roughly one minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grotschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008. The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science."
I cite many other examples like this in the book.
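The headline numbers in the quoted passage are easy to check: 82 years is about 43 million minutes, and the hardware and algorithm factors multiply to the same total.

$$82\ \text{years} \times 365.25 \times 24 \times 60 \approx 4.3\times 10^{7}\ \text{minutes}, \qquad \underbrace{1{,}000}_{\text{hardware}} \times \underbrace{43{,}000}_{\text{algorithms}} = 4.3\times 10^{7}.$$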
Regarding AI, Allen is quick to dismiss IBM's Watson as narrow, rigid, and brittle. I get the sense that Allen would dismiss any demonstration short of a valid passing of the Turing test. I would point out that Watson is not so narrow. It deals with a vast range of human knowledge and is capable of dealing with subtle forms of language, including puns, similes, and metaphors. It's not perfect, but neither are humans, and it was good enough to get a higher score than the best two human Jeopardy! players put together.
Allen writes that Watson was put together by the scientists themselves, building each link of narrow knowledge in specific areas. Although some areas of Watson's knowledge were programmed directly, according to IBM, Watson acquired most of its knowledge on its own by reading natural language documents such as encyclopedias. That represents its key strength. It not only is able to understand the convoluted language in Jeopardy! queries (answers in search of a question), but it acquired its knowledge by reading vast amounts of natural-language documents. IBM is now working with Nuance (a company I originally founded as Kurzweil Computer Products) to have Watson read tens of thousands of medical articles to create a medical diagnostician.
A word on the nature of Watson's "understanding" is in order here. A lot has been written that Watson works through statistical knowledge rather than 'true' understanding. Many readers interpret this to mean that Watson is merely gathering statistics on word sequences. The term "statistical information" in the case of Watson refers to distributed coefficients in self-organizing methods such as Markov models. One could just as easily refer to the distributed neurotransmitter concentrations in the human cortex as "statistical information." Indeed, we resolve ambiguities in much the same way that Watson does by considering the likelihood of different interpretations of a phrase.
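A toy example of what "statistical information" means in this setting - learned transition coefficients rather than a lookup table of canned answers. This is only an illustrative bigram Markov model over an invented corpus, far simpler than anything in Watson, but it shows how ambiguities can be resolved by comparing the likelihoods of competing interpretations.

```python
from collections import defaultdict

# Tiny invented corpus; Watson's training data was vastly larger and richer.
corpus = "the bank approved the loan . she sat by the river bank .".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, word in zip(corpus, corpus[1:]):
    counts[prev][word] += 1          # the learned "statistical information"

def prob(prev, word):
    total = sum(counts[prev].values())
    return counts[prev][word] / total if total else 0.0

def likelihood(sentence):
    words = sentence.split()
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= prob(prev, word)
    return p

# Resolve an ambiguity by asking which reading the learned coefficients favor.
print(likelihood("the bank approved"), likelihood("the river approved"))
```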
Allen writes: "Every structure [in the brain] has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain, every individual structure and neural circuit has been individually refined by evolution and environmental factors."
Allen's statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome. And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) does not appreciably add to the amount of information in the genome. Experience and learning do add significantly to the amount of information, but the same can be said of AI systems. I show in The Singularity Is Near that after lossless compression (due to massive redundancy in the genome), the amount of design information in the genome is about 50 million bytes, roughly half of which pertains to the brain. That's not simple, but it is a level of complexity we can deal with, and it represents less complexity than many software systems in the modern world.
How do we get on the order of 100 trillion connections in the brain from only tens of millions of bytes of design information? Obviously, the answer is through redundancy. There are on the order of a billion pattern-recognition mechanisms in the cortex. They are interconnected in intricate ways, but even in the connections there is massive redundancy. The cerebellum also has billions of repeated patterns of neurons. It is true that the massively repeated structures in the brain learn different items of information as we learn and gain experience, but the same thing is true of artificially intelligent systems such as Watson.
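The arithmetic behind that redundancy argument, using the figures Kurzweil gives (on the order of 100 trillion connections, and roughly 25 million bytes of compressed genome information pertaining to the brain):

$$\frac{10^{14}\ \text{connections}}{2.5\times 10^{7}\ \text{bytes}} \approx 4\times 10^{6}\ \text{connections per byte of design information},$$

which is only possible if the wiring is specified by massively repeated patterns rather than connection by connection.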
Dharmendra S. Modha, manager of cognitive computing for IBM Research, writes: "neuroanatomists have not found a hopelessly tangled, arbitrarily connected network, completely idiosyncratic to the brain of each individual, but instead a great deal of repeating structure within an individual brain and a great deal of homology across species ... The astonishing natural reconfigurability gives hope that the core algorithms of neurocomputation are independent of the specific sensory or motor modalities and that much of the observed variation in cortical structure across areas represents a refinement of a canonical circuit; it is indeed this canonical circuit we wish to reverse engineer."
Allen articulates what I describe in my book as the "scientist's pessimism." Scientists working on the next generation are invariably struggling with that next set of challenges, so if someone describes what the technology will look like in 10 generations, their eyes glaze over. One of the pioneers of integrated circuits recently described to me the struggle, over 30 years ago, to go from 10-micron (10,000-nanometer) feature sizes to 5-micron (5,000-nanometer) features. They were cautiously confident of this goal, but when people predicted that someday we would actually have circuitry with feature sizes under one micron (1,000 nanometers), most of the scientists struggling to get to five microns thought that was too wild to contemplate. Objections were raised about the fragility of circuitry at that level of precision, thermal effects, and so on. Well, today, Intel is starting to use chips with 22-nanometer gate lengths.
We saw the same pessimism with the genome project. Halfway through the 15-year project, only 1 percent of the genome had been collected, and critics were proposing basic limits on how quickly the genome could be sequenced without destroying the delicate genetic structures. But the exponential growth in both capacity and price performance continued (both roughly doubling every year), and the project was finished seven years later. The project to reverse-engineer the human brain is making similar progress. It is only recently, for example, that we have reached a threshold with noninvasive scanning techniques that we can see individual interneuronal connections forming and firing in real time.
Allen's 'complexity brake' confuses the forest with the trees. If you want to understand, model, simulate, and re-create a pancreas, you don't need to re-create or simulate every organelle in every pancreatic islet cell. You would want, instead, to fully understand one islet cell, then abstract its basic functionality, and then extend that to a large group of such cells. This algorithm is well understood with regard to islet cells. Artificial pancreases that utilize this functional model are now being tested. Although there is certainly far more intricacy and variation in the brain than in the massively repeated islet cells of the pancreas, there is nonetheless massive repetition of functions.
Allen mischaracterizes my proposal to learn about the brain from scanning the brain to understand its fine structure. It is not my proposal to simulate an entire brain 'bottom up' without understanding the information processing functions. We do need to understand in detail how individual types of neurons work, and then gather information about how functional modules are connected. The functional methods that are derived from this type of analysis can then guide the development of intelligent systems. Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. From my own work in speech recognition, I know that our work was greatly accelerated when we gained insights as to how the brain prepares and transforms auditory information.
The way that these massively redundant structures in the brain differentiate is through learning and experience. The current state of the art in AI does, however, enable systems to also learn from their own experience. The Google self-driving cars (which have driven over 140,000 miles through California cities and towns) learn from their own driving experience as well as from Google cars driven by human drivers. As I mentioned, Watson learned most of its knowledge by reading on its own.
It is true that Watson is not quite at human levels in its ability to understand human language (if it were, we would be at the Turing test level now), yet it was able to defeat the best humans. This is because of the inherent speed and reliability of memory that computers have. So when a computer does reach human levels, which I believe will happen by the end of the 2020s, it will be able to go out on the Web and read billions of pages as well as have experiences in online virtual worlds. Combining human-level pattern recognition with the inherent speed and accuracy of computers will be very powerful. But this is not an alien invasion of intelligence machines - we create these tools to make ourselves smarter. I think Allen will agree with me that this is what is unique about the human species: we build these tools to extend our own reach.
Manufacturers Turn to 3-D Printing
Growing interest in "additive manufacturing" is leading to new business models and new ways to think about designing products.
Hobbyists may have provided the first demand for 3-D printing, but while DIY enthusiasts were creating online communities to make their own action figures and knickknacks out of plastic, industrial manufacturers were discovering how new materials and techniques in 3-D printing could change the way they make commercial products.
A 3-D printer deposits a string of hot plastic, lets it cool, and moves on to the next plane to build a three-dimensional object slice by slice. Using the same principles of layering, additive manufacturing can build objects out of metals, plastics, and ceramics in geometric shapes that are impossible to achieve with other manufacturing techniques. Because the design is digital, businesses can order the resulting products from any available 3-D printer.
This May, General Electric announced that it would "intensify focus" on additive manufacturing to develop a variety of products, from aircraft engine components to parts for ultrasound machines. Other large manufacturers have used the technique to make industrial scanners, furniture, and medical equipment.
Bart Van der Schueren, executive vice president of Materialise, an additive-manufacturing company based in Belgium, credits advances in chemistry and printing processes with opening up additive manufacturing beyond prototypes. In stereolithography, for example, a laser moves slice by slice through a vessel of liquid polymer that hardens when struck by the beam. This enables printers to create smooth and detailed surfaces.
More than 100 companies worldwide do some sort of 3-D additive manufacturing, according to industry analyst Wohlers Associates. However, many companies are still offering only prototype services. Additive manufacturing also has limits: it can't be used to make products over a certain size, it doesn't work with all materials, and it can't be easily used to make an object out of more than one material.
But because additive manufacturing requires no assembly and can turn a mere computer file into products made to exact specifications, it can open up opportunities for entrepreneurs. CloudFab, a San Francisco-based startup founded in 2009, offers consumers and entrepreneurs design software and a network of 3-D printers to make their products. The user uploads the design, and for a commission, CloudFab delegates orders to its printing partners.
CloudFab founder Nick Pinkston says the company has seen a 10 percent increase in orders each month since its founding; it has processed 20,000 orders to date. "The digital-manufacturing movement, and the ability for [individuals] to make products and bring them to market, is a big thing," he says. "Our whole goal is to make product designers and industrial designers the new Web designers."
3-D Printed Bone
It looks like bone. It feels like bone. For the most part, it acts like bone.
And it came off an inkjet printer.
Washington State University researchers have used a 3-D printer to create a bone-like material and structure that can be used in orthopedic procedures and dental work, and to deliver medicine for treating osteoporosis. Paired with actual bone, it acts as a scaffold for new bone to grow on and ultimately dissolves with no apparent ill effects.
The authors report on successful in vitro tests in the journal Dental Materials and say they're already seeing promising results with in vivo tests on rats and rabbits. It's possible that doctors will be able to custom order replacement bone tissue in a few years, said Susmita Bose, co-author and professor in WSU's School of Mechanical and Materials Engineering.
"If a doctor has a CT scan of a defect, we can convert it to a CAD file and make the scaffold according to the defect," Bose said.
The material grows out of a four-year interdisciplinary effort involving chemistry, materials science, biology and manufacturing. A main finding of the paper is that the addition of silicon and zinc more than doubled the strength of the main material, calcium phosphate.
The researchers - who include mechanical and materials engineering Professor Amit Bandyopadhyay, doctoral student Gary Fielding and research assistant Solaiman Tarafder - also spent a year optimizing a commercially available ProMetal 3-D printer designed to make metal objects.
The printer works by having an inkjet spray a plastic binder over a bed of powder in layers of 20 microns, about half the width of a human hair. Following a computer's directions, it creates a channeled cylinder the size of a pencil eraser.
After just a week in a medium with immature human bone cells, the scaffold was supporting a network of new bone cells.
Augmented Bodies
We don't often realize it, but all fashion is predicated upon human beings' predilection for prostheses and augmentations. All clothing, bags, and shoes are augmentations of our bodies, skin, and feet, allowing us to deal with non-tropical climates, to carry large amounts of stuff, and to cross harsh or unforgiving terrain. If we humans hadn't already modified ourselves, the only fashion we'd have is hairstyle.
Eyeglasses and contact lenses are among the most prolific forms of medical augmentation on the planet. In many industrialized modern cultures, eyeglasses and contacts are also a major element of fashion. Thin, small glasses are out of fashion; big, chunky frames with large lenses are in. Tomorrow it might be different. But in every case, you have glasses because you have a medical problem that needs fixing.
But what about other medical devices? Canes and even artificial legs are occasionally not merely built to work but are designed and crafted to be fashionable. Could exoskeletons, robotic limbs, and cybernetic augmentations reach a point where they are beautiful? Furthermore, could they ever become so prolific as to be fashionable? More and more, the answer looks to be yes.
Two recent articles, one from the New York Times and one from The Atlantic, point to a future, less than a decade away, in which you will shop for the latest in cybernetic designs the way you shop for new glasses frames now.
At The Atlantic, Alice G. Watson explains "Why the Future for Quadriplegics Looks Bright":
The final product sounds more like the imaginings of Dr. Octavius: A fully functional exoskeleton, manipulated through connections to the patient's motor and somatosensory cortices. "The ultimate goal," [neurophysiologist Miguel] Nicolelis says, "is to build a robotic vest for the whole body. Just as in this study, a person's brain activity will control movement of the limbs, and get sensory feedback from the external world."
This kind of technology could revolutionize the way para- and quadriplegics live their lives. According to Nicolelis, if and when a brain-controlled exoskeleton becomes commonplace, spinal cord injuries will be a different animal altogether. Doctors would theoretically use this kind of technology to treat patients immediately after a spinal cord lesion, so that living in a wheelchair could be a thing of the past.
Though this all sounds a little futuristic, the end-date for this project is oddly close. "We are working with the Brazilian government, who is helping fund the project," Nicolelis says. "At the 2014 soccer World Cup celebration we hope to have a Brazilian teenager with quadriplegia walk out and make the opening kick."
The World Cup is in just over two years, folks. To get an idea of how important a quadriplegic kicking a soccer ball two years from now would be, let's look at the case of Robert Woo.
Earlier this week, Ekso Bionics debuted its new leg-brace exoskeleton, named Ekso, at New York City's Mt. Sinai hospital. The New York Times has a great profile of Robert Woo, one of the first people to test Ekso, and his experience learning to use the device. Seriously, watch the video. It's simultaneously amazing and depressing. He can walk! - sort of, over very short distances at a belabored pace with a lot of help. But he can walk! Watching Woo use Ekso to shuffle across the floor with the assistance of two crutches, a bevy of whirring servos, and a rehabilitation therapist at the controls is hardly the image of the Six Million Dollar Man leaping 10-foot fences that the word 'cyborg' conjures in our minds. Of course, Ekso is in the early stages of development and testing. As with Nicolelis's optimism about the progress he hopes his team will make in the next few years, it was the outlook of Ekso Bionics that blew me away:
"Our goal is that this eventually fits under your pants," said Eythor Bender, the chief executive of Ekso Bionics in Berkeley, Calif. "You'll wear it as a fashion item."
The idea of fashionable cybernetics is preposterous until you remember one thing: a real person is going to have to wear this every day of their life. To a job interview, on a date, to walk the dog, and at the altar - if someone needs an exoskeleton, that exoskeleton is their arms and/or legs. A person using Ekso would want that part of his or her body to look as good as the rest of it. That is the fundamental shift that Ekso is expressing. Just because something is a medical device doesn't mean it needs to be ugly or drab and utilitarian. Cyborgs should get to be beautiful too.
But again, your Skeptic's Eyebrow raises (note: Skeptic's Eyebrow sounds like the scientist's version of Tennis Elbow). Not a lot of people will need an exoskeleton. So how in the world can I make the claim that exoskeletons will be as prolific and important in this century as eyeglasses were in the last century?
Eyeglasses started as a luxury item, owing both to their prohibitive cost and to a culture in which most labor did not require perfect vision. They became more readily available toward the end of the 19th century and exploded in popularity during the first half of the 20th, in conjunction with the rise of the information age. Now eyeglasses are often an essential piece of a person's identity and personal fashion - so much so that people without eye problems will wear glasses without lenses because they feel left out of an entire segment of fashion. To put that in context: people are feigning a disability to look cool.
Why would the exoskeleton become as popular as eyeglasses? Take a moment and think of everyone you know who uses a cane, or crutches, or a wheelchair, or who permanently hobbles around because of an injury or disease or simply because they are old. Our whole population is getting older and staying old longer; an increase in age-related mobility issues is bound to follow. Now add in every person, including yourself, who has temporarily needed one of those devices. There is a much larger built-in user base for exoskeletal legs than you may have previously considered.
Can we expand the user base further? Absolutely. Think about your hobbies. Ekso Bionics started with hiking and backpacking-assist exoskeletons. Now think about all the jobs that would benefit from being able to walk longer distances, run faster, and carry more. The military is an obvious example, but nearly every profession that requires a hard hat or back brace could benefit as well. Now think of what you would do if, for the price of a computer, you had a sweet-looking exoskeleton that you could strap into and be able to run a marathon or hike the Adirondack Trail with ease. The market for exoskeletal enhancements could potentially be massive for soldiers, manual laborers, and weekend warriors alike.
Exoskeletons are currently at the earliest stages of becoming a medical solution for some people. Robert Woo's experience with Ekso is another tiny step (shuffle?) in the evolution of exoskeletons. But soon enough, a previously paralyzed person will toddler walk across the room thanks to an exoskeleton. Then someone a little later will stride with cybernetic confidence. Exoskeletons will become standard gear in certain professions. And not too long after that, you might step into your own pair of stylish cyborg gams and go on a relaxing 10-mile jog.
Researchers at Princeton and Johns Hopkins have taken the development of 3D-printed latticework that can be seeded to grow human tissue to the next level. They've added bionics to it.
It's now relatively commonplace to produce 3D-printed latticeworks to replace lost tissue. Those lattices can be seeded with real human cells, which then grow into an implantable replacement organ. These researchers wanted to see if they could incorporate functional electronics at the same time. In this experiment (which did not involve a human subject), they created a coil along with the lattice, and it grew into an ear with the coil biologically incorporated. The coil could be used to receive induction audio signals from a hearing aid, or even to receive radio signals. Imagine that, stereo audio with no headphones.
I Am Rich
Eight iPhone owners have joined an elite clan: Their Apple gadget is running a program that cost nearly $1,000.
When the iPhone first hit the market in June 2007, those who paid the $499 entry price -- and signed the two-year AT&T contract -- owned a status symbol. A year later, we have the iPhone 3G, Apple's speedier, sleeker and, most important, less expensive smart phone, which introduced a section for downloading third-party applications. Now that the phone is affordable enough for a wider audience, a new status symbol has emerged: a seemingly useless application called I Am Rich.
Its function is exactly what the name implies: to alert people that you have money in the bank. I Am Rich was available for purchase from the phone's App Store for, get this, $999.99 -- the highest amount a developer can charge through the digital retailer, said Armin Heinrich, the program's developer. Once downloaded, it doesn't do much -- a red icon sits on the iPhone home screen like any other application, with the subtext "I Am Rich." Once activated, it treats the user to a large, glowing gem. That's about it. For a thousand dollars.
Apple apparently had some problems with I Am Rich. After initially approving it for distribution, the company has since removed it from the store. Heinrich, a German software developer, has yet to hear back from Apple concerning the removal. "I have no idea why they did it and am not aware of any violation of the rules to sell software on the App Store," Heinrich said in an e-mail to The Times today.
But Apple couldn't pull it down before curious aristocrats -- eight of them -- had purchased it. Six people from the United States, one from Germany and one from France dropped a grand for the gem in the first 24 hours it was available, Heinrich said. That's $5,600 in revenue for Heinrich and $2,400 for Apple, which collects 30% of each sale for "store upkeep."
In the e-mail, Heinrich said there seemed to be a market for the program. "I am sure a lot more people would like to buy it -- but currently can't do so," Heinrich said. "The App is a work of Art and included a 'secret mantra' -- that's all."
A possible explanation for its removal: A screen shot of an App Store review that has been circulating around the Web recently, showing a user's complaint that he purchased it accidentally. "I saw this app with a few friends and we jokingly clicked 'buy' thinking it was a joke, to see what would happen. ... THIS IS NO JOKE...DO NOT BUY THIS APP AND APPLE PLEASE REMOVE THIS FROM THE APP STORE," it read.
I Am Rich isn't the first software that has been removed from Apple's store. Box Office, a movie showtime resource, and NetShare, which let users connect a computer to the Internet using the iPhone's 3G wireless data service, disappeared without a trace. Apple did not respond to phone calls for comment.
"I've got e-mails from customers telling me that they really love the app," adding that they had "no trouble spending the money," he said.
Mundane
Stacking boxes is not one of the more complex tasks that humans perform. We learn to do it as children, with alphabet blocks, Legos and the like. And yet for decades, the difficulty of stacking large boxes - crates and containers - was a major impediment to global trade. Merely moving one large container required painstaking work in which longshoremen attached and detached hooks to the corners of each container before moving on to the next one. They certainly could not repeat the process to stack containers many stories high.
In the 1950s, a longtime trucking executive named Malcom McLean decided there had to be a better way, and he turned to Keith W. Tantlinger, an engineer at a truck-trailer manufacturer in Spokane, Wash., to solve the problem. Tantlinger developed a lock that connected to the corners of containers and that crane operators could mechanically open and close from their seats.
The lock, which led to the adoption of uniformly sized containers over the next 15 years, caused a revolution in shipping. The time and cost of transporting goods fell sharply, which contributed to an astonishing boom in global trade. Now containers are a fixture on the American landscape, piled neatly alongside highways, airports and ports. They even had a pop-culture cameo, as the backdrop to the second season of 'The Wire.'
Tantlinger's lock deserves a place on any list of economically significant inventions of the 20th century. Unlike some of the other items on that list, however, it is fairly pedestrian from a technological standpoint. It is not a car or a jet engine or a silicon chip. It is a metal lock. But it ushered in a new way of doing things. "There was no breakthrough in terms of material," says Marc Levinson, the author of 'The Box,' which tells the story of containerization. "There was a breakthrough in thinking through the entire process and coming up with a neat and economical solution."
Economists have long understood that technological advance is crucial to economic growth and, by extension, higher living standards. In recent years, thanks partly to the work of Paul Romer, a New York University professor, they have also begun to recognize the importance of processes, rules and systems. The great advances in health and longevity came not only from new medicines but, more important, from the spread of clean water, sanitation systems and rules requiring doctors to wash their hands. The Internet depends on both the invention of the personal computer and the notion of connecting personal computers to one another. Modern societies rely on laws to establish the trust that is crucial to market economies.
You can make a good argument - as Romer and others do - that the greatest opportunities for progress today lie with better rules and systems. Improving schools is more about process than laptops. Reining in the financial excesses that caused the bubble and bust depends on better regulations, more effectively carried out. Reducing errors and expanding preventive medicine, as Atul Gawande, the surgeon and writer, puts it, "can arguably save more lives in the next decade than bench science, more lives than research on the genome, stem-cell therapy, cancer vaccines and all the other laboratory work we hear about in the news."
In each of these cases, as with Tantlinger's lock, technology matters. He held a patent for his invention, after all. But it is a patent for the less glamorous side of progress, the hard, creative work that allows mundane objects to fill new needs.
Frugal Innovation From The Third World
THE Tata Nano, the world's cheapest car, became a symbol before the first one rolled off the production line in 2009. The Tata group, India's most revered conglomerate, hyped it as the embodiment of a revolution. Frugal innovation would put consumer products, of which a $2,000 car was merely a foretaste, within reach of ordinary Indians and Chinese. Asian engineers would reimagine Western products with all the unnecessary frills stripped out. The cost savings would be so huge that frugal ideas would conquer the world. The Nano would herald India's arrival just as the Toyota once heralded Japan's.
Alas, the miracle car was dogged with problems from the first. Protesting farmers forced Tata Motors to move production out of one Indian state and into another. Early sales failed to catch fire, but some of the cars did, literally. Rural customers showed little desire to shift from trucks to cars. The Nano's failure to live up to the hype raises a bigger question. Is frugal innovation being oversold? Can Western companies relax?
Two new books - "Reverse Innovation" by Vijay Govindarajan and Chris Trimble, and "Jugaad Innovation" by Navi Radjou, Jaideep Prabhu and Simone Ahuja - suggest that the answer to both questions is No. Mr Govindarajan, of the Tuck Business School at Dartmouth College, has advised General Electric on frugal innovation and co-written a path-breaking article on the subject with GE's boss, Jeff Immelt. "Jugaad Innovation" is the most comprehensive book yet to appear on the subject (jugaad is a Hindi word meaning a clever improvisation). The books show that frugal innovation is flourishing across the emerging world, despite the gurus' failure to agree on a term to describe it. They also argue convincingly that it will change rich countries, too.
Multinationals are beginning to take ideas developed in (and for) the emerging world and deploy them in the West. Harman, an American company that makes infotainment systems for cars, developed a new system for emerging markets, dubbed "Saras", the Sanskrit word for "flexible", using a simpler design and Indian and Chinese engineers. In 2009 Harman enrolled Toyota as a customer. GE's Vscan, a portable ultrasound device that allows doctors to 'see' inside patients, was developed in China and is now a hit in rich and poor countries alike. (Mr Immelt believes that these devices will become as indispensable as stethoscopes.) Walmart, which created 'small mart stores' to compete in Argentina, Brazil and Mexico, is reimporting the idea to the United States.
The standard worry among Western firms is that this strategy will cannibalise the existing market for expensive technology. Why buy a $10,000 device if the same firm makes a slightly simpler one for $1,000? This is too pessimistic. GE opened up a new market among doctors for its cheap electrocardiograms; previously only hospitals could afford the things. Besides, standing still is not an option. Whether or not Western firms sell frugal products in the West, Asian firms will.
India's Mahindra & Mahindra sells lots of small tractors to American hobby farmers, filling John Deere with fear. China's Haier has undercut Western competitors in a wide range of products, from air conditioners and washing machines to wine coolers. Haier sold a wine cooler for half the price of the industry leader. Within two years, it had grabbed 60% of the American market. Some Western companies are turning to emerging markets first to develop their products. Diagnostics for All, a Massachusetts-based start-up that has developed paper-based diagnostic tests the size of a postage stamp, chose to commercialise its idea in the developing world so as to circumvent America's hideously slow approval process for medical devices.
Entrepreneurs everywhere are seizing on the idea of radical cost-cutting. Zack Rosenburg and Liz McCartney are rethinking house-building from the ground up; they hope to reduce the cost by 15% and the construction time by 30%. Vivian Fonseca collaborated in the development of a system for sending SMS messages to poor and elderly diabetics to help them control their disease. Jane Chen, the boss of Embrace, sells low-cost infant warmers for premature babies in America and several emerging markets.
This trend will surely accelerate. The West is doomed to a long period of austerity, as the middle class is squeezed and governments curb spending. Some 50m Americans lack medical insurance; 60m lack regular bank accounts. Such people are crying out for new ways to save money. A growing number of Western universities are taking the frugal message to heart (at least when it comes to thinking about things other than their own tuition fees). Santa Clara University has a Frugal Innovation Lab. Stanford University has an (unfrugally named) Entrepreneurial Design for Extreme Affordability programme. Cambridge University has an Inclusive Design programme. Even the Obama administration has an Office of Social Innovation and Civic Participation to encourage grassroots entrepreneurs in health care and energy.
Fighting frugality with frugality
Globalisation is forcing Western firms to provide more value for money. Logitech, an American firm, had to create a top-class wireless mouse for bottom-of-the-range prices when it took on Rapoo, a Chinese company, in China. John Deere had to do the same with its small tractors when it took on Mahindra in India. At the same time, globalisation gives Western firms more tools. Some are building innovation centres in the emerging world. PepsiCo, for example, established one in India in 2010. Some Western firms routinely fish in a global brain pool. Renault-Nissan asked its engineers in France, India and Japan to compete to come up with ideas for cutting costs. The Indians won. The Tata Nano may not have changed the world, but frugal innovation will.
The Pace of Innovation
Acceleration of key events in human history
Let me show you this pattern of exponential acceleration of the most important events in human history, which started 40,000 years ago with the emergence of Homo sapiens sapiens from Africa.
We take a quarter of this time: Omega minus 10,000 years. That's precisely the next big chapter in the history books: emergence of civilization, agriculture, domestication of animals, first villages.
And we take a quarter of this time: Omega - 2500 years. That's precisely the Axial Age, as Jaspers called it: major religions founded in India and China and the West (Old Testament); the ancient Greeks laid the foundations of the Western world - formal reasoning, sophisticated machines including steam engines, anatomically perfect sculptures, harmonic music, organized sport, democracy.
And we take a quarter of this time. That's precisely the next big advance: the Renaissance; beginnings of the scientific revolution; invention of the printing press (often called the most influential invention of the past 1000 years); age of exploration, first through Chinese fleets, then also European explorers such as Columbus, who did not become famous because he was the first to discover America, but because he was the last.
And we take a quarter of this time: Omega - 2 human lifetimes: the late 19th century; emergence of the modern world (many still existing companies were founded back then); invention of combustion engines and cars, cheap electricity, modern chemistry; germ theory of disease revolutionizes medicine; Einstein born; and the biggest event of them all: the onset of the population explosion from 1 billion to soon 10 billion, through fertilizer and then artificial fertilizer.
And we take a quarter of this time: Omega - 1/2 lifetime. That's the year 2000: the emerging digital nervous system covers the world; WWW and cheap computers and cell phones for everybody; the information processing revolution.
And we take a quarter of this time: Omega - 10 years. Now that's in the future. Many have learned the hard way that it's difficult to predict the future, including myself and the guy responsible for my investments.
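Spelling out the arithmetic implicit in this sequence (my reading, not a figure stated in the talk): if each interval is a quarter of the previous one, the n-th event falls at

$$t_n = \Omega - \frac{40{,}000}{4^{\,n}}\ \text{years},$$

so the dates converge on a finite Omega. Taking the "year 2000" step as n = 5 gives 2000 ≈ Ω − 39, i.e. Ω around 2040, and the "Omega − 10 years" step as n = 6 falls at roughly Ω − 10.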
Nevertheless, a few things can be predicted confidently, such as: soon there will be computers faster than human brains, because computing power will continue to grow by a factor of 100-1000 per decade per Swiss Franc (or a factor of 100 per Dollar, because the Dollar is deflating so rapidly).
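To unpack that rate: a factor of 100 per decade compounds to

$$100^{1/10} \approx 1.58\ \text{per year (a doubling roughly every 18 months)}, \qquad 1{,}000^{1/10} \approx 2.0\ \text{per year}.$$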
Computers that solve problems better than humans
Now you say: OK, computers will be faster than brains, but they lack the general problem-solving software of humans, who apparently can learn to solve all kinds of problems!
But that's too pessimistic. At the Swiss AI Lab IDSIA we already developed, in the new millennium, mathematically optimal, learning, universal problem solvers that live in unknown environments.
That is, at least from a theoretical point of view, blueprints of universal AIs already exist. They are not yet practical for various reasons; but on the other hand we already do have not quite as universal, but very practical brain-inspired artificial neural networks that are learning complex tasks that seemed unfeasible only 10 years ago.
In fact, the recurrent or deep neural nets developed in my lab are currently winning all kinds of international machine learning competitions. For example, they are now the best methods for recognizing connected French handwriting. And also Arabic handwriting. And also Chinese handwriting. Although none of us speaks a word of Arabic or Chinese. And our French is also not so good.
But we don't have to program these things. They learn from millions of training examples, extracting the regularities, and generalizing on unseen test data. Just a few months ago, our team participated in the traffic sign recognition competition (important for self-driving cars). Many teams around the world participated, but finally ours came in first, and the second best performance was not by another machine learning competitor, but by humans.
A Formal Theory of Fun and Creativity
Now you say: OK, maybe computers will be faster and better pattern recognizers, but they will never be creative! But that's too pessimistic. In my group at the Swiss AI Lab IDSIA, we developed a Formal Theory of Fun and Creativity that formally explains science & art & music & humor, to the extent that we can begin to build artificial scientists and artists.
Let me explain it in a nutshell. As you are interacting with your environment, you record and encode (e.g., through a neural net) the growing history of sensory data that you create and shape through your actions.
Any discovery (say, through a standard neural net learning algorithm) of a new regularity in the data will make the code more efficient (e.g., less bits or synapses needed, or less time). This efficiency progress can be measured - it's the wow-effect or fun! A real number.
This number is a reward signal for the separate action-selecting module, which uses a reinforcement learning method to maximize the future expected sum of such rewards or wow-effects. Just as a physicist gets intrinsic reward for creating an experiment leading to observations that obey a previously unpublished physical law and thus allow for better compression of the data.
Or a composer creating a new but non-random, non-arbitrary melody with novel, unexpected but regular harmonies that also permit wow-effects through progress of the learning data encoder. Or a comedian inventing a novel joke with an unexpected punch line, related to the beginning of the story in an initially unexpected but quickly learnable way that also allows for better compression of the perceived data.
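A minimal sketch of the core quantity - compression progress as intrinsic reward - using an off-the-shelf compressor as a stand-in for the agent's learning encoder. This is my illustration of the idea, not Schmidhuber's actual formulation, which uses the agent's own adaptive model rather than zlib.

```python
import zlib

# Sensory history containing a simple hidden regularity (invented toy data).
history = ("ABAB" * 50).encode()

def coded_length_bits(data, level):
    """Stand-in for the agent's data encoder at a given stage of learning."""
    return len(zlib.compress(data, level)) * 8

before = coded_length_bits(history, 0)   # naive encoder: no regularity found yet
after  = coded_length_bits(history, 9)   # encoder after "discovering" the pattern
wow_effect = before - after              # intrinsic reward: bits saved
print(f"{before} bits -> {after} bits, reward = {wow_effect} bits")
```

In the full theory this reward feeds the reinforcement learner that chooses the agent's next actions, so the agent deliberately seeks out data it expects to compress better.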
You know, before I came here I thought: this is just another TEDx talk and there won't be much of an audience, but you are actually a large audience by my standards. The other day I gave a talk and there was just a single person in the audience. A young lady. I said: Young lady, it's very embarrassing, but apparently today I am going to give this talk just for you. And she said: OK, but please hurry, I gotta clean up here. The Formal Theory of Fun and Creativity explains why some of you find that funny. If you didn't get all of my explanation, look it up on the Web, it's easy to find.
The emerging robot civilization
Creative machines invent their own self-generated tasks to achieve wow-effects by figuring out how the world works and what can be done within it. Currently, we just have little case studies. But in a few decades, such machines will have more computational power than human brains.
This will have consequences. My kids were born around 2000. The insurance mathematicians say they are expected to see the year 2100, because they are girls.
A substantial fraction of their lives will be spent in a world where the smartest things are not humans, but the artificial brains of an emerging robot civilization, which presumably will spread throughout the solar system and beyond (space is hostile to humans but nice to robots).
This will change everything much more than, say, global warming, etc. But hardly any politician is aware of this development happening before our eyes. Like the water lilies which every day cover twice as much of the pond, but get noticed only a few days before the pond is full.
My final advice: don't think of us, the humans, versus them, those future uber-robots. Instead view yourself, and humankind in general, as a stepping stone (not the last one) on the path of the universe towards more and more unfathomable complexity. Be content with that little role in the grand scheme of things.
Crap Power
Human waste will help to power the future for millions living on the poverty line thanks to the invention of bio-latrines
If we told you that you could solve five major humanitarian problems with one simple and cheap scientific solution, you would say it was a load of crap. And you'd be right.
Stepping over the garbage-lined ditches that crisscross Nairobi's notorious Kibera slum is a delicate balancing act. Look up, and you will be blinded by the sweltering sunshine. Look down, and you balk at the open streams of sewage that slither between the walls of the rickety shacks.
The textbook signs of poverty are everywhere, but poverty comes in many forms. Many of the young people we meet in Kibera are busy leading modern lives that seem a million miles away from the squalor of their surroundings - they are training to be doctors, setting up theatre groups, aspiring to be politicians. Though financial poverty is an inescapable reality, resourcefulness and ingenuity can blunt its effects. Far harder to escape is energy poverty.
Of Kibera's 700,000 inhabitants - a quarter of the population of Nairobi - almost none have access to the clean, safe, renewable energy they need to power and heat their homes or cook their food. Worldwide, more people die from smoke inhalation caused by dirty stoves than from malaria: about 1.6 million each year.
Charcoal, filthy and choking, is sold in buckets on the roadsides. Kerosene, potentially toxic and lethal in so cramped an environment, is available from rudimentary pumps, while firewood, equally unclean, is becoming scarce as the deserts encroach after a decade of droughts in East Africa.
This is where the 'load of crap' comes in. If only raw sewage were a saleable commodity, Kibera would be rich. That idea is starting to come true through the use of biogas, also fondly known as 'poo-power'. Bio-latrines are being constructed throughout Kibera with the aid of the UK-based charity Practical Action. Simple, round structures - two storeys high, made of concrete and costing around £4,000 - are built in the heart of a community, with four toilet cubicles for men and four for women. With no need for a wasteful flush, the excrement and urine fall into a bio-digester chamber. As the organic matter decomposes it undergoes anaerobic digestion, so called because it takes place in the absence of oxygen. Acidogenic bacteria convert the amino acids and sugars into carbon dioxide, hydrogen, ammonia and organic acids. These acids are then converted into acetic acid by acetogenic bacteria, before the acetic acid is converted into methane and carbon dioxide by microorganisms known as methanogens.
The product is a mixture of methane (CH4) and carbon dioxide (CO2), with traces of hydrogen sulphide (H2S), ammonia (NH3), nitrogen, oxygen, hydrogen and carbon. The gas is combustible if the methane content exceeds 50 per cent, and the latrines tend to produce gas that is about 60 to 70 per cent methane, at a rate of around 11 cubic metres per day.
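To put that output in perspective, here is a rough, illustrative estimate of the energy it represents; the heating value of methane is a standard textbook figure, not something reported here.

# Back-of-envelope energy estimate for the latrine's daily biogas output.
METHANE_LHV_KWH_PER_M3 = 10.0   # approx. lower heating value of pure methane (assumed, not from the article)
daily_output_m3 = 11.0          # biogas produced per day (from the article)
methane_fraction = 0.65         # midpoint of the 60-70 per cent range

energy_kwh = daily_output_m3 * methane_fraction * METHANE_LHV_KWH_PER_M3
print(f"~{energy_kwh:.0f} kWh of fuel energy per day")   # roughly 70 kWh/day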
Methane burns with a blue flame when mixed with air. It is non-toxic and odourless when burned, providing a perfect source of fuel for cooking and lighting. The gas is also passed through microturbines that burn the methane as it mixes with compressed air, turning a turbine wheel which generates electricity, an invaluable resource in a country where the national grid reaches only 18 per cent of the population.
Biogas production also leaves no waste. The excess organic matter, rich in nitrogen and phosphorus, can be dried, packed and used as a fertiliser, and even the surplus water can be used to irrigate crops.
While the bio-latrines solve problems of cooking fuel, electricity generation, fertilisation and irrigation, they also provide a far simpler benefit: clean, hygienic toilets. They have become social hubs. The electricity generated powers the rooms on the top floor, which are used as libraries, theatres and assembly halls. The electricity can be fed to shacks, where students returning from university can study. People can carry gas home in specially designed bags or bring their food to be cooked on the communal stove.
There is still resistance: many locals resent paying the nominal fee to use the toilets, even though the gas produced costs about a fifth of the price of the fuels they currently use - just 20-30 Kenyan shillings (about 20 pence) per day. There is also squeamishness about using gas produced from human waste, even though the final product is pure methane and carbon dioxide, entirely uncontaminated by the faecal matter it came from. But people are quickly learning to appreciate biogas's many virtues.
Wall Sized Touchscreen
"We can turn any surface into a 3D touchscreen," explained Anup Chathoth of Ubi Interactive.
Such claims typically conjure up images of floating Minority Report-style touchscreens made from curved glass, but that's exactly what this three-person team has developed.
Ubi's system uses a Microsoft Kinect sensor to turn a regular projector into a multi-touch projection system, so that PowerPoint presentations (or any other PC application), web pages and even games can be navigated without clickers or wireless mice.
By using the motion-tracking and depth-perception cameras in the Kinect, Ubi is able to detect where a user is pointing, swiping and tapping on a surface and interpret these gestures as if they were being performed on a giant touchscreen or interactive whiteboard.
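The underlying technique is easier to picture with a sketch. What follows is a minimal, hypothetical example of depth-based touch detection against a calibrated surface, assuming depth frames arrive as NumPy arrays of millimetre distances; it illustrates the general approach such systems take, not Ubi's actual code.

# Sketch of depth-based touch detection on a projected surface (illustrative only).
import numpy as np

TOUCH_MIN_MM, TOUCH_MAX_MM = 5, 30   # a fingertip hovering this close counts as a touch

def calibrate(background_frames: list[np.ndarray]) -> np.ndarray:
    """Average several frames of the empty wall to get a per-pixel baseline depth."""
    return np.mean(np.stack(background_frames), axis=0)

def detect_touches(frame: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Return (row, col) pixels where something sits just above the surface."""
    closer_by = baseline - frame                      # how much nearer than the wall
    mask = (closer_by > TOUCH_MIN_MM) & (closer_by < TOUCH_MAX_MM)
    return np.argwhere(mask)                          # cluster these to get fingertips

# Touch pixels would then be mapped into projector coordinates (via a homography
# found during calibration) and passed to the operating system as touch events.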
The low-cost technology can be retrofitted to existing projection systems by users with no technical experience.
Patents
Regardless of the legitimacy of their claims, aggressive litigation could have a devastating effect on society as a whole, short-circuiting innovation.
For example, a series of court decisions in the 1990s made hip-hop music sampling all but impossible, forcing artists to get permission for every snippet they used - a logistical and financial nightmare. Lawsuits flew against several rappers, and a form of cultural expression virtually disappeared.
This experience carries a stark warning for the future of technology. High-tech behemoths in a range of businesses like mobile computing and search and social networking have been suing one another to protect their intellectual property from what they see as the blatant copying and cloning by their rivals.
The battle raging over smartphone technology is the latest case in point.
Patents on inventions, like copyrights on songs, are not granted to be fair to their creators. Their purpose is to encourage innovation, a broad social good, by granting creators a limited monopoly to profit from their creations. The belief that stronger intellectual property protection inevitably leads to more innovation appears to be broadly wrong.
Overly strong intellectual property laws that stop creators from using earlier innovations could slow creation over all and become a barrier for new technologies to reach the market.
One of Apple's patents, for instance, appears to grant it ownership over any application based on a user's location. Think of the Google map feature that pinpoints where you are. Or imagine an app showing nearby hospitals or the best deals in nearby pizzerias. If Apple enforced the patent aggressively, it could foreclose a vast array of innovation.
To compound the problem, critics argue, the Patent and Trademark Office regularly issues patents on inventions that are obvious or not new. Sometimes the patents are written too broadly. Apple, for instance, has patents on the concept of moving objects around on a mobile device's screen using multiple touches.
Broad patents can hinder innovation by allowing dominant businesses to stop future inventions that would disrupt their business model.
Overly broad patents have given birth to an entire new industry of patent trolls, whose only business is to buy patents and sue for royalties.
Intellectual property rights could be improved to better serve their purpose of encouraging innovation. Carl Shapiro, an expert on information technology on President Obama's Council of Economic Advisers, has suggested patent reforms, including making it easier to challenge patents after they are issued, culling the roster of overly broad or ambiguous claims, and allowing those accused of infringement to claim independent invention as a defense.
Perhaps software should not be patentable at all. In rulings since the 1970s, the Supreme Court has determined that abstract concepts like mathematical formulas cannot be patented. But software patents will never be banned, of course. Indeed, software patents exploded after an appeals court in 1998 upheld a patent on a method to pool the assets of mutual funds using a mathematical algorithm, establishing the patentability of a business method and the software to run it.
Intellectual property, meanwhile, keeps growing. The United States patent office awarded 248,000 patents last year, 35 percent more than a decade ago. Some will spur innovation. But others are more likely to stop it in its tracks.
Hate Mail and the New Religious Wars in Tech
A note from a reader:
Your article in The New York Times today was idiotic. The Galaxy S III is a nerd phone, a soul-less, heartless, hardware disaster. This is just another phone with next-gen hardware and nothing to show for it. More pixels with no vision.
You used to know how to write. Now you are pushing trash!?
You should be fired and replaced by somebody who has some clue what he's doing.
Getting feedback like that is part of the tech critic's job. I'm sure drama critics, music critics and art critics get their share of joyous mail, too. "Haters gonna hate," as my teenage son reminds me.
Frankly, these days, my primary reaction is curiosity. What, exactly, is going on with these readers? How could something as inanimate, mass-produced and commoditized as a phone get them so worked up?
Take the reader whose e-mail I quoted above. At the time he wrote that note, the phone was not even available. There's no possible way he could have tried it out. And therefore, there's no way he could judge whether or not it's a "heartless hardware disaster." So what would drive him to sit down so confidently to write about it?
In the 1980s and 1990s, consumer-tech religious wars were a little easier to understand. Back then, there were only two camps: Apple and Microsoft. Apple people hated Microsoft because (went the thinking) it had gotten big and successful not from quality products, but from stealing ideas and clumsy execution. Microsoft people hated Apple because (went the thinking) its people and products were smug, elitist and overpriced.
There was also an underdog element, a David-versus-Goliath thing. It was fun to cheer for one team or the other.
The hostility for and against Microsoft and Apple hasn't abated. (At a product announcement last week, I sat next to fellow tech columnist Walt Mossberg from The Wall Street Journal. We laughed about our hate mail; Walt, in fact, has identified what he calls the Doctrine of Insufficient Adulation. That's when you give a rave review to an Apple product - but you still get hate mail from Apple fanboys because, in their judgment, it doesn't rave enough.)
But over time, new religions have arisen: Google. Facebook. In photography forums, similar battles rage between proponents of Canon and Nikon. There are even e-book religious wars: Kindle vs. Nook.
And now this: Samsung. Samsung? Welcome to the big leagues.
So what is going on here? Why would somebody take time out of his day to blast a heap of toxicity to the reviewer of a cellphone?
In politics, scientists describe a communication theory called the hostile media effect. That's when you perceive media coverage of some hot topic to be biased against your opinion, no matter how evenhanded the coverage actually is.
In electronics, though, that effect is magnified by the powerful motivating force of fear. When you buy a product, you are, in a way, locking yourself in. You're committing to a brand. Often, you're committing to thousands of dollars in software for that platform, or lenses for that camera, or e-books for that reader. You have a deeply vested interest in being right. Whenever somebody comes along and says, in print, that there might be something better - well, that's scary.
In that case, you don't just perceive the commentator to be putting down your gadget. He's putting you down. He's insulting your intelligence, because that's not the product you chose. He's saying that you made the wrong choice, and that all those thousands of dollars spent on apps and lenses and books were good money thrown after bad. He's saying you're a sap.
The effect in the gadget realm is further amplified by social appearances. We probably have Apple to thank for turning electronics into fashion accessories: you are what you carry.
For example, Microsoft's Zune was a beautiful, well-designed music player. So why did it die? Because it wasn't even remotely cool to own one. The iPod was cool. The dancing silhouettes in the iPod ads were cool. You wouldn't want people to think you're pathetic, would you?
Here again, a review that pans your chosen gadget winds up insulting you. It's not just saying, "you made the wrong choice"; now it's saying, "and you have no taste."
The Internet is surely a factor, too. Tech products are the subjects of religious wars because the Internet itself is a technical forum. And its anonymity encourages people to vent in ways that would never be comfortable, acceptable or tolerated in face-to-face conversation.
I'd love to suggest that we could all be more civil in our interactions. I'd love to propose that readers write their objections with less vitriol. It would be great if people could learn that they're worthy individuals no matter what electronics they own.
But that would be like saying, "We should all exercise more" or "Countries should just get along." Some things are human nature, wired too deeply to change.
Apparently, gadget sensitivity is one of them.
Who Invented The Internet?
"It's an urban legend that the government launched the Internet," writes Gordon Crovitz in an opinion piece in today's Wall Street Journal. Most histories cite the Pentagon-backed ARPANet as the Internet's immediate predecessor, but that view undersells the importance of research conducted at Xerox PARC labs in the 1970s, claims Crovitz. In fact, Crovitz implies that, if anything, government intervention gummed up the natural process of laissez faire innovation. "The Internet was fully privatized in 1995," says Crovitz, "just as the commercial Web began to boom." The implication is clear: the Internet could only become the world-changing force it is today once big government got out of the way.
But Crovitz's story is based on a profound misunderstanding of not only history, but technology. Most egregiously, Crovitz seems to confuse the Internet - at heart, a set of protocols designed to allow far-flung computer networks to communicate with one another - with Ethernet, a protocol for connecting nearby computers into a local network. (Robert Metcalfe, a researcher at Xerox PARC who co-invented the Ethernet protocol, today tweeted tongue-in-cheek "Is it possible I invented the whole damn Internet?")
The most important part of what we now know as the Internet is the TCP/IP protocol suite, which was invented by Vinton Cerf and Robert Kahn. Crovitz mentions TCP/IP, but only in passing, calling it (correctly) "the Internet's backbone." He fails to mention that Cerf and Kahn developed TCP/IP while working on a government grant.
Other commenters, including Timothy B. Lee at Ars Technica and veteran technology reporter Steve Wildstrom, have noted that Crovitz's misunderstandings run deep. He also manages to confuse the World Wide Web (incidentally, invented by Tim Berners-Lee while working at CERN, a government-funded research laboratory) with hyperlinks, and an internet - a connection between two or more computer networks - with THE Internet.
But perhaps the most damning rebuttal comes from Michael Hiltzik, the author of 'Dealers of Lightning,' a history of Xerox PARC that Crovitz uses as his main source for material. "While I'm gratified in a sense that he cites my book," writes Hiltzik, "it's my duty to point out that he's wrong. My book bolsters, not contradicts, the argument that the Internet had its roots in the ARPANet, a government project."
In truth, no private company would have been capable of developing a project like the Internet, which required years of R&D efforts spread out over scores of far-flung agencies, and which began to take off only after decades of investment. Visionary infrastructure projects such as this are part of what has allowed our economy to grow so much in the past century. Today's op-ed is just one sad indicator of how we seem to be losing our appetite for this kind of ambition.
GM Mosquitoes
Knowing that European consumers and supermarkets have consistently rejected all attempts to foist genetically modified crops on them, I was surprised to discover last week that Brussels was preparing the ground for the introduction of genetically modified animals. Yes, GM fish, insects, mammals and birds. The lot.
Safety guidelines are the kind of thing people publish - as the European commission now has in draft - when an application to release something into the environment is in the offing.
So, when no GM animal has been authorised for commercial release anywhere in the world, you don't have to be a conspiracy theorist to wonder what is going on.
Where research has been done on GM animals for food purposes, it has been largely unsuccessful or else the investors have got cold feet, as with the Enviropig in Canada. You may remember the animal, aka Frankenswine. It was designed to digest and process phosphorus more efficiently to stop it becoming a pollutant in pig waste. Nice for waterways, perhaps, but maybe not so good for the pig's digestion. We'll never know, because the hog farmers' organisation Ontario Pork withdrew its funding from the University of Guelph's project in April and breeding is to stop. So where is the pressure to approve GM animals coming from?
Around the world a number of animals are in development: about 35 species of GM fish are at the laboratory stage and one is awaiting approval: the fast-growing AquaBounty Atlantic salmon in the United States. There are male mosquitoes - modified by the UK-based company Oxitec - that are designed to breed with other mosquitoes and produce offspring that die, reducing the population. Releasing these could help control malaria and dengue fever. The company has modified moths and tomato pests in the same way. There are farm animals producing milk 'improved' with omega 3 or insulin and there are some disease-resistant poultry - and pedigree dogs. Many of us had ignored the stories about new GM beasts knocked up in the laboratory because we didn't think they had a prayer of getting approval for commercial use. It appears we were wrong.
It occurred to me after I spoke to Mike Bonsall, an academic at Oxford University who works with Oxitec, that it is possible the GM breakthrough, when it comes to Europe, will be in the form of animals - birds and insects - not crops. And, yes, he admitted he was an author of the GM animal draft safety guidelines. He confirmed there had been pressure from the biotech industry to get the rules written so that work on the safety case could begin.
There are theoretical circumstances under which GM animals could be approved in Europe quite quickly - if dengue fever or malaria started appearing in, say, Italy as a result of climate change, or if bird flu in one of its more horrible variants got going in Asia, leading to pressure for virus-resistant chickens.
But here's another thought. Won't there be pressure, even without a health emergency, to approve something that gives public health organisations a new shot in the locker? I think there will.
The way Bonsall puts it is that you have to look at the benefits as well as the risks.
The benefit of something that tackles the 50m-100m annual cases of dengue fever or stops hundreds of thousands of children each year dying of malaria is high.
Working out the environmental risk is much more difficult. What eats mosquitoes? Which predators might be left short of prey, die out, and so fail to control other, unrelated pest species?
What are the implications of a less than 100% efficient reproduction-blocking gene? We just don't know.
All this is challenging for those of us who hate the idea of technicians playing God. But I think we have to accept that the pressure to authorise something that could bring huge improvements in human health will be great. It may even be greater than the pressure to get it right. But making significant improvements to human health is a high bar to set. We have accepted GM pharmaceuticals made in the laboratory, which we happily ingest because they keep us alive. We should recognise that the same high bar excludes most of the other proposed 'improvements' to animals because they are designed to appeal to the vanity of the purchaser and the wallet of the farmer and not the public good.
The unfortunate Frankenswine offers some public benefit but may not benefit the pig. Most GM crops will wither when subjected to a proper analysis until someone invents a wheat that fixes its own nitrogen and doesn't need fertiliser. So why are we spending public money on developing them, at a time of austerity, instead of looking into ways of restoring the eroding fertility of our soil?
The risks are greatest and the benefits most questionable in the case of fish. The possibility of rogue GM genes spreading across whole oceans is particularly unappealing. Who foresaw that Pacific humpback salmon from 1960s experiments in Russian farms would turn up in the Tweed? The cost of affecting evolution for ever is high compared with the benefits for a few fish farmers in Panama, where the AquaBounty would be raised. Let's continue to set the bar high.
Why Pants?
Certain bodily processes take two or three fewer steps to perform in a tunic than in pants. So why all the pants?
According to University of Connecticut evolutionary biologist Peter Turchin, pants owe their several thousand years of worldwide fashionableness to horses - or, more precisely, to the extreme awkwardness of riding a horse in a robe. "Historically there is a very strong correlation between horse-riding and pants," he wrote in a recent article for the Social Evolution Forum.
Turchin points to examples of this correlation ranging from Japan, where the traditional dress is the kimono but where samurais wore baggy trousers, to North America, where Plains Indians donned kilts until Europeans brought horses to the continent. Roman soldiers mounted steeds (and adopted pants) in the first century A.D. after getting trounced repeatedly by Hannibal and his trouser-clad cavalrymen.
A few centuries earlier in pre-unified China, switching from robes to pants became a matter of state survival in the face of invasion by pants-wearing nomadic horsemen from Central Asia. Soldiers in many of the Chinese states greatly resisted this "barbarian" legwear, and either galloped uncomfortably in robes or left off horses altogether. It cost them everything. "Pants won in China by the process of cultural group selection," Turchin wrote. "Those states that did not adopt cavalry (and pants), or adopted them too slowly, lost to the states that did so early."
Pants-wearing became an everyday affair in Europe during the eighth century, after the fall of the Roman Empire, "when the continent fell under the rule of warriors who fought from horseback - the knights," Turchin explained. "So wearing pants became associated with high-status men and gradually spread to other males."
The Single Most Important Object in the Global Economy
Earlier this spring, the Washington Conservation Corps faced a sudden influx of beach debris on the state's southwestern shore. Time and tide were beginning to deposit the aftereffects of Japan's March 11, 2011, tsunami. One of the myriad objects retrieved was a plastic pallet, scuffed and swimming-pool green, bearing the words '19-4 (salt) (return required)' and, below that, 'Japan salt service.'
A year earlier, Dubai's police made the region's largest narcotics bust when they intercepted a container, carried on a Liberian-registered ship, that had originated from Pakistan and transited through what Ethan Zuckerman has called the "ley lines of globalization," that constellation of dusty, never-touristed entrepots like Oman's Salalah Port or Nigeria's Tin Can Island Port. Acting on an informant's tip, police searched the container's cargo - heavy bags of iron filings - to no avail. Only after removing every bag did police decide to check the pallets on which the bags had rested. Inside each was a hollowed-out section holding 500 to 700 grams of heroin.
Two random stories plucked from the annals of shipping. What unites these disparate tales of things lost (and hidden) on the seas is that they each draw attention to something that usually goes unnoticed: The pallet, that humble construction of wood joists and planks (or, less typically, plastic or metal ones) upon which most every object in the world, at some time or another, is carried. "Pallets move the world," says Mark White, an emeritus professor at Virginia Tech and director of the William H. Sardo Jr. Pallet & Container Research Laboratory and the Center for Packaging and Unit Load Design. And, as the above stories illustrate, the world moves pallets, often in mysterious ways.
Pallets, of course, are merely one cog in the global machine for moving things. But while shipping containers, for instance, have had their due, in Marc Levinson's surprisingly illustrative book The Box ("the container made shipping cheap, and by doing so changed the shape of the world economy"), pallets rest outside of our imagination, regarded as scrap wood sitting outside grocery stores or holding massive jars of olives at Costco. As one German article, translated via Google, put it: "How exciting can such a pile of boards be?"
And yet pallets are arguably as integral to globalization as containers. For an invisible object, they are everywhere: there are said to be billions circulating through the global supply chain (2 billion in the United States alone). Some 80 percent of all U.S. commerce is carried on pallets. So widespread is their use that they account for, according to one estimate, more than 46 percent of total U.S. hardwood lumber production.
Companies like Ikea have literally designed products around pallets: its 'Bang' mug, notes Colin White in his book Strategic Management, has had three redesigns, each done not for aesthetics but to ensure that more mugs would fit on a pallet (not to mention in a customer's cupboard). After the changes, it was possible to fit 2,204 mugs on a pallet rather than the original 864, which created a 60 percent reduction in shipping costs. There is a whole science of "pallet cube optimization," a kind of Tetris for packaging, and an associated engineering, filled with analyses of "pallet overhang" (stacking cartons so they hang over the edge of the pallet, resulting in losses of carton strength) and efforts to reduce "pallet gaps" (too much spacing between deckboards). The "pallet loading problem" - the question of how to fit the most boxes onto a single pallet - is a common operations research thought exercise.
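A toy version of that exercise makes the stakes concrete. In the sketch below, the carton dimensions are invented for illustration; only the 864 and 2,204 mugs-per-pallet figures come from the Ikea example above.

# Toy "pallet cube optimisation": grid-packing identical cartons on a 48 x 40 inch pallet.
def units_per_pallet(pallet_l, pallet_w, box_l, box_w, units_per_box, layers):
    """Simple grid packing: identical boxes, one orientation, no overhang."""
    per_layer = (pallet_l // box_l) * (pallet_w // box_w)
    return per_layer * layers * units_per_box

# Invented carton: 12 x 10 inches, holding 6 mugs, stacked 8 layers high.
print(units_per_pallet(48, 40, 12, 10, 6, 8))           # 768 mugs on one pallet

# The article's figures: the 'Bang' mug went from 864 to 2,204 per pallet.
before, after = 864, 2204
print(f"per-mug shipping cost falls by {1 - before / after:.0%}")   # ~61%, close to the quoted 60 percent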
Pallet history is both humble and dramatic. As Pallet Enterprise ("For 30 years the leading pallet and sawmill magazine") recounts, pallets grew out of simple wooden 'skids', which had been used to help transport goods from shore to ship and were, essentially, pallets without a bottom set of boards, hand-loaded by longshoremen and then, typically, hoisted by winch into a ship's cargo hold. Both skids and pallets allowed shippers to 'unitize' goods, with clear efficiency benefits: "According to an article in a 1931 railway trade magazine, three days were required to unload a boxcar containing 13,000 cases of unpalletized canned goods. When the same amount of goods was loaded into the boxcar on pallets or skids, the identical task took only four hours."
As USDA Forest Service researchers Gilbert P. Dempsey and David G. Martens noted in a conference paper, two factors led to the real rise of the pallet. The first was the 1937 invention of gas-powered forklift trucks, which allowed goods to be moved, stacked, and stored with extraordinary speed and versatility.
The second factor in the rise of the pallet was World War II. Logistics - the 'Big L,' as one history puts it - is the secret story behind any successful military campaign, and pallets played a large role in the extraordinary supply efforts of the world's first truly global war. As one historian, quoted by Rick Le Blanc in Pallet Enterprise, notes, "the use of the forklift trucks and pallets was the most significant and revolutionary storage development of the war." Tens of millions of pallets were employed - particularly in the Pacific campaigns, with their elongated supply lines. Looking to improve turnaround times for materials handling, a Navy Supply Corps officer named Norman Cahners - who would go on to found the publishing giant of the same name - invented the four-way pallet. This relatively minor refinement, which featured notches cut in the side so that forklifts could pick up pallets from any direction, doubled material-handling productivity per man. If there's a Silver Star for optimization, it belongs to Cahners.
As a sort of peace dividend, at war's end the U.S. military left the Australian government with not only many forklifts and cranes but also about 60,000 pallets. To handle these resources, the Australian government created the Commonwealth Handling Equipment Pool, which eventually spawned a modern pallet powerhouse, CHEP USA; the company now controls about 90 percent of the 'pooled' pallet market in the United States. Pooled pallets are rented from one company that takes care of delivering and retrieving them; the alternative is a 'one-way' pallet, essentially a disposable item that is scrapped, recycled or reused when its initial journey is done. You can identify pooled pallet brands by their color: if you see a blue pallet at a store like Home Depot, that's a CHEP pallet; a red pallet comes from competitor PECO.
There's a big debate in the pallet world about whether pooled or one-way pallets are preferable, just one of the many distinctions within the industry explained to me by Bob Trebilcock, the executive editor of Modern Materials Handling (which, as it happens, grew out of Norman Cahners' World War II newsletter The Palletizer). Trebilcock grew up in the industry - his father owned a pallet company in northeastern Ohio. "Most kids' dads take them to Disney World," he says. "Mine took me to the Borg Warner Auto Parts plant in North Tonawanda, New York." Pooled vs. one-way, block vs. stringer, wood vs. plastic (there are plenty of claims, but little peer-reviewed research, about which has the greater environmental footprint) - you can quickly find yourself on the wrong side of an argument at a materials handling convention.
"Pallets move the world," says Mark White, an emeritus professor at Virginia Tech. To illustrate the implications of pallets, Trebilcock describes a recent conversation with Costco, which last year shook up the pallet world by shifting to 'block' pallets, which have long been common in Europe and other regions. Block pallets are essentially an improvement on the four-way pallet that debuted during WWII; the pallet deckboards rest on sturdy blocks, rather than long crossboards (or 'stringers'), which make them even easier for forklifts and pallet jacks to pick up from any angle. With 'stringer' pallets, Costco warehouse workers couldn't fit pallet jack forks into pallets if they were facing the wrong side; instead, he says, they'd have to 'pinwheel' the pallet around before picking it up. A small maneuver, but, he adds, "Costco unloads a million trucks a year." Do the math, and the company was sitting on an institutional-size jar of corporate inefficiency.
So why don't all companies use block pallets? Indeed, no major retailer has yet followed Costco's lead. As with everything in the pallet world, says Virginia Tech's White, it boils down to economics. Block pallets cost more to build than stringer pallets. More expensive pallets lend themselves to rental programs. Rental programs need to have systems in place to track and retrieve pallets, and they need industries that use standardized pallets. While rental block pallets are common in Europe, White says the geography of the United States has discouraged their use. "When the supply chain between raw materials and man is very long and protracted, and the volumes are smaller, it doesn't make sense for rental companies to get into that business."
Given the increasing interconnectedness of the global economy, White says there is a surprising, and disheartening, lack of standardization among pallets. In the United States, pallets commonly measure 48 inches by 40 inches (the size of the Grocery Manufacturers Association pallet, which makes up 30 percent of new U.S. wood pallets each year). Europe tends to use a 1000 millimeter by 1200 millimeter standard. Japan's most common pallet is 1100 by 1100. All told, the ISO (the International Organization for Standardization) recognizes six pallet standards. Packaging itself, meanwhile, is set to a 400 millimeter by 600 millimeter footprint - ideal for metric pallets. But shipping containers, notes White, are still built to a U.S. customary standard, coming in 20-foot and 40-foot lengths. The math doesn't add up. Because of this, he says, most containers today bearing consumer goods and industrial products are 'floor loaded,' i.e., loaded by hand, only to be hand-unloaded and then palletized as they enter the U.S. supply chain. "With a 40-foot container, it could take two lumpers four to eight hours to unload it, whereas on pallets, we could unload it in 30 minutes."
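A quick, back-of-envelope calculation suggests why the math doesn't add up. The container interior width used below (roughly 2.35 metres) is a standard figure for ISO boxes rather than something White cites, and the packing logic is deliberately naive.

# Pallet footprints vs. container width: how much floor width goes unused.
CONTAINER_WIDTH_MM = 2350   # approximate ISO container interior width (assumed, not from the article)

pallets = {
    "US GMA (1219 x 1016 mm)": (1219, 1016),
    "Euro (1200 x 1000 mm)": (1200, 1000),
    "Japan (1100 x 1100 mm)": (1100, 1100),
}

for name, (long_side, short_side) in pallets.items():
    per_row = CONTAINER_WIDTH_MM // short_side            # pallets side by side, short side across
    left_over = CONTAINER_WIDTH_MM - per_row * short_side
    print(f"{name}: {per_row} per row, ~{left_over} mm of width unused")

Whichever standard you pick, a strip of the container's width is wasted, which is part of why hand-loading loose cartons can still win on volume.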
Of course, nothing in supply chains is so simple: to be effectively used in containers, pallets would have to become thinner - "you want to max the cube," White says, i.e., fill the container's volume with as much product as possible - and current pallets would take up valuable space. But creating thinner pallets, he says, would require changes down the line in the way companies store products. Warehouse rack storage, he says, would have to be retrofitted to accommodate the newer designs.
Such changes are not impossible. In fact, it's already been done by Ikea, a company famous for its fixation on logistics. Last year, Ikea abandoned wooden pallets in favor of a low-profile system called OptiLedge. The system consists of one-pound load carriers, little ledges with feet that are placed under stacks of boxes and then held in place with giant bands. The benefit, says the company, is that the system - which is one-way and 100 percent recyclable - can adapt to the dimensions of the load being carried, rather than vice versa. It's also lighter and takes up less space. One truckload of OptiLedges, the company notes, is the equivalent of 23 truckloads of traditional pallets. But overhauling the pallet required a massive overhaul of Ikea's stores: in Europe alone, more than 500,000 new metal shelves had to be installed.
Ikea's is perhaps the most thoroughgoing reinvention of a product that has, with some minor refinements in design and engineering, stayed quite similar to its World War II origins. But there are other changes afoot that may reduce our dependence on pallets, says Trebilcock. Businesses like grocery stores, which might once have taken delivery of an entire pallet's worth of, say, Campbell's Soup, have moved to smaller and more frequent delivery schedules. "They've gotten rid of their back rooms," he says; instead of receiving single-SKU pallet loads, they're hand-unloading pallets stacked with boxes of mixed product SKUs - part of a larger trend toward leaner, more rapid distribution, itself driven by a proliferation of choice.
Then there's what might be called the Amazon effect. "The biggest thing impacting distribution right now is the Internet," he says. "You and I are ordering so much stuff online. We're just getting a small box with stuff. Those things don't go onto pallets, they go into the back of a UPS truck." Indeed, one has to wonder if we might eventually take all the labor saved from containerization and palletization and simply put it onto the back of the UPS driver. But Trebilcock has no actual evidence that pallet use is down.
The pallet is one of those things that, once you start to look for it, you see everywhere: Clustered in stacks near freight depots and distribution centers (where they are targets for theft), holding pyramids of Coke in an 'endcap display' at your local big-box retailer, providing gritty atmosphere in movies, forming the dramatic stage-setting for wartime boondoggles (news accounts of the Iraqi scandal seemed obsessed with the fact the money was delivered on pallets, as if to underscore the sheer mass of the currency), being broken up for a beach bonfire somewhere, even repurposed into innovative modern architecture. Trebilcock likens the industry to the slogan once used by the company BASF: "At BASF, we don't make a lot of the products you buy. We make a lot of the products you buy better." At parties he'll tell people who ask what he does: "Without a pallet, most of what you and I eat or wear or sit on or whatnot would not have gotten to us as easily or inexpensively as it got to us."
Just don't get him started on the 'raggle stick,' another quietly ubiquitous feature of the supply chain. Raggle sticks are the scalloped pieces of wood or plastic you've no doubt seen (or better yet, not seen) used to help efficiently stack pipes or rods on the back of trucks. They are basically pallets for round objects. It turns out his father also had a raggle stick company. "You don't want to know how many raggle sticks they sold."
Protecting Your Bright Idea
There is often a concern among business owners and aspiring entrepreneurs that when they come up with a new idea, they need to protect it so that it won't be stolen. This mind-set, however, is dangerous. Here are four reasons you should stop worrying about protecting your ideas.
Your Idea Already Exists: There are very few truly new ideas. Most ideas are an improvement or a different take on an existing idea. Regardless of how much is new, the chances are that if you have thought of it, others have as well. This is one reason venture capitalists will not sign nondisclosure agreements - many times the same ideas come to light, independent of each other, at the same time. What makes your idea different is you: the approach, the technology and the resources that you bring to the idea. The search engine idea wasn't new when Google was founded. What made Google different from and better than the search engines that came before it (like OpenText, Magellan, Infoseek and Snap) was that its creators simply had a different approach and a different set of competencies.
Being New Can Be a Problem: If you have a truly new idea, you are in many ways at a disadvantage. Being first to market means you have to educate consumers, and that can be expensive.
Most businesses face the challenge of grabbing consumer attention, and that's no easy task. If you also have to educate consumers about why they need a new product or service, you have to spend extra effort, time and capital. Often, a more effective strategy is to be a second or third mover in a space - let someone else spend the money to educate the public. Then, you can figure out how to do it smarter, faster or better. That's how Facebook crushed MySpace as a social network, and that's why Peapod still exists as a grocery delivery service while Webvan (and its erstwhile $1 billion valuation) is long gone.
The Value Is Not in the Idea: The competitive landscape is very different than it was 30 years ago, back when ideas had some merit on their own. Now, there are so many businesses out there, and so much information, that an idea by itself is worth zero.
There are many examples of stupid business ideas that have succeeded (the Snuggie, for example, which is basically a bathrobe that you wear backwards), and great business ideas that have failed. The difference always comes down to execution. No matter how interesting an idea, execution is where the value lies.
If you follow mixed martial arts, you are probably familiar with the Ultimate Fighting Championship. The U.F.C. was created by Semaphore Entertainment Group in 1993. As I explain in my book, "The Entrepreneur Equation," it almost went bankrupt. Years later, two casino moguls, Frank and Lorenzo Fertitta, along with Dana White, the current U.F.C. president, swooped in to buy the business. Less than a decade later, the U.F.C. was valued at approximately $1 billion.
In other words, the idea for a mixed martial arts league was worth nothing. It was the execution - or lack thereof - that created the value.
Sharing Ideas Makes Them Better: We often think that we have great ideas, only to take them to market and find out that they don't resonate the way we anticipated. Sharing your ideas helps you get feedback and make valuable tweaks. If you wait until you go to market to get that feedback, it may be more costly or difficult to make changes. Having potential customers weigh in on your ideas early not only can make them better, it can also generate buy-in from your customers. If they think they had a hand in shaping the idea, they may feel a sense of ownership and be more likely to champion it.
But most of all, remember that your idea will do you - and your future customers - no good if you never get it out there.
Nanocrystalline cellulose
THE hottest new material in town is light, strong and conducts electricity. What's more, it's been around a long, long time.
Nanocrystalline cellulose (NCC), which is produced by processing wood pulp, is being hailed as the latest wonder material. Japan-based Pioneer Electronics is applying it to the next generation of flexible electronic displays. IBM is using it to create components for computers. Even the US army is getting in on the act, using it to make lightweight body armour and ballistic glass.
To ramp up production, the US opened its first NCC factory in Madison, Wisconsin, on 26 July, marking the rise of what the US National Science Foundation predicts will become a $600 billion industry by 2020.
So why all the fuss? Well, not only is NCC transparent but it is made from a tightly packed array of needle-like crystals which have a strength-to-weight ratio that is eight times better than stainless steel. Even better, it's incredibly cheap.
"It is the natural, renewable version of a carbon nanotube at a fraction of the price," says Jeff Youngblood of Purdue University's NanoForestry Institute in West Lafayette, Indiana.
The $1.7 million factory, which is owned by the US Forest Service, will produce two types of NCC: crystals and fibrils.
Production of NCC starts with "purified" wood, which has had compounds such as lignin and hemicellulose removed. It is then milled into a pulp and hydrolysed in acid to remove impurities before being separated and concentrated as crystals into a thick paste that can be applied to surfaces as a laminate or processed into strands, forming nanofibrils. These are hard, dense and tough, and can be forced into different shapes and sizes. When freeze-dried, the material is lightweight, absorbent and good at insulating.
"The beauty of this material is that it is so abundant we don't have to make it," says Youngblood. "We don't even have to use entire trees; nanocellulose is only 200 nanometres long. If we wanted we could use twigs and branches or even sawdust. We are turning waste into gold."
The US facility is the second pilot production plant for cellulose-based nanomaterials in the world. The much larger CelluForce facility opened in Montreal, Canada, in November 2011 and is now producing a tonne of NCC a day.
Theodore Wegner, assistant director of the US factory, says it will be producing NCC on a large scale, and that within a couple of years it will be sold for just a few dollars a kilogram. He says it has taken this long to unlock the potential of NCC because the technology needed to explore its properties, such as scanning electron microscopes, only emerged in the last decade or so.
NCC will replace metal and plastic car parts and could make nonorganic plastics obsolete in the not-too-distant future, says Phil Jones, director of new ventures and disruptive technologies at the French mineral processing company IMERYS. "Anyone who makes a car or a plastic bag will want to get in on this," he says.
In addition, the human body can deal with cellulose safely, says Jones, so NCC is less dangerous to process than inorganic composites. "The worst thing that could happen is a paper cut," he says.
Invention Website
GOT an inventive mind and feel like making a few thousand pounds? Then you might have some fun with Marblar.com, a website that will go live in late August. The site will ask users to suggest lucrative uses for "underexploited" patented technologies - with cash prizes of up to £10,000 for the best ideas.
"There are a lot of dormant inventions just gathering dust in research universities," says Daniel Perez, Marblar's CEO. "This is taxpayer or philanthropy-funded research that isn't demonstrating the impact it could. So we'll simply be asking our users how they would use this invention."
Marblar is getting universities on board, as well as UK organisations like the Medical Research Council and the Science and Technology Facilities Council, all of which have patented technologies that they would like to squeeze more cash out of.
To test the idea, Marblar posted a technique patented by the University of Southampton that allows DNA nucleotides to be knitted together without using an enzyme. Days later, a University of Cambridge academic hit on a new use for the technique in screening potential DNA-based therapies. "This was a problem that the inventor wasn't really aware existed, much less that his discovery could solve," says Gabriel Mecklenburg of Marblar. "There may now be start-up ideas around this tech."
"If Marblar leads to ideas like that it could well work. Any innovation like that has got to be encouraged," says Peter Finnie, a patent attorney with Gill, Jennings and Every in London.
Marblar is the latest in a string of "open innovation" sites that attempt, in one way or another, to encourage inventiveness online. "We're seeing many new online ways of interacting with the crowd," says Finnie. "All are geared at coming up with ideas you wouldn't have thought of yourself."
For instance, he says, ArticleOne asks its community of users to find "prior art" - published documents that show an invention existed before it was patented - to quash patents that firms have been accused of infringing. Again, there are cash awards available, as there are at Innocentive, where companies and NGOs present problems that they need solving - such as how to develop a portable rainwater storage system for the developing world. On the flipside, IbridgeNetwork and Yet2.com post university and corporate research in a bid to find people who'll license their technology to commercialise it.
"But most of these are dating sites for intellectual property," says Perez. "We're making tech transfer fun and gamifying it."
Fun it may be - but Finnie warns that there could be problems if users give away for a mere £10,000 an idea that ends up kicking off a billion-dollar industry. This could happen, he says, because most patents cite the industrial application of an invention. "So the person who comes up with a new application may be regarded in law as the inventor. Yet they may just give the idea away online."
Perez believes Marblar's prize money will suffice. "Users have to ask why they are doing this. Are they doing it to make millions? Or as a bit of fun - creative problem-solving? In our tests, winners did not feel taken advantage of."
Time will tell, says Finnie. "Marblar expects to see start-ups forming around the contributed ideas. But that's when people start to fall out."
The Power of a Hot Body
As I waited with a throng of Parisians in the Rambuteau Metro station on a blustery day, my frozen toes finally began to thaw. Alone we may have shivered, but together we brewed so much body heat that people began unbuttoning their coats. We might have been penguins crowding for warmth in Antarctica's icy torment of winds. Idly mingling, a human body radiates about 100 watts of excess heat, which can add up fast in confined spaces.
Heat also loomed from the friction of trains on the tracks, and seeped from the deep maze of tunnels, raising the platform temperature to around 70 degrees, almost a geothermal spa. As people clambered on and off trains, and trickled up and down the staircases to Rue Beaubourg, their haste kept the communal den toasty.
Geothermal warmth may abound in volcanic Iceland, but it's not easy to come by in downtown Paris. So why waste it? Savvy architects from Paris Habitat decided to borrow the surplus energy from so many human bodies and use it to supply radiant under-floor heating for 17 apartments in a nearby public housing project, which happens to share an unused stairwell with the metro station. Otherwise the free heat would be lost by the end of the morning's rush hour.
Appealing as the design may be, it isn't quite feasible throughout Paris without retrofitting buildings and Metro stops, which would be costly. But it is proving successful elsewhere. There's Minnesota's monument to capitalism, the four-million-square-foot Mall of America, where even on subzero winter days the indoor temperature skirts 70 from combined body heat, light fixtures and sunlight cascading through ceiling windows.
Or consider Stockholm's busy hub, Central Station, where engineers harness the body heat issuing from 250,000 railway travelers to warm the 13-story Kungsbrohuset office building about 100 yards away. Under the voluminous roof of the station, people donate their 100 watts of surplus natural heat, but many are also bustling around the shops and buying meals, drinks, books, flowers, cosmetics and such, emitting even more energy.
This ultra green, almost chartreuse, body-heat design works especially well in Sweden, a land of soaring fuel costs, legendary hard winters, and ecologically minded citizens. First, the station's ventilation system captures the commuters' body heat and uses it to warm water in underground tanks. From there, the hot water is pumped to Kungsbrohuset's heating pipes, an arrangement that ends up shaving about 25 percent off the building's energy bills.
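The arithmetic behind the scheme is simple enough to sketch. The occupancy figure below is an assumption for illustration; only the 100 watts per person and the 250,000 daily travellers come from the passage above.

# Back-of-envelope check on the body-heat numbers.
watts_per_person = 100                 # surplus heat per idle person (from the article)
people_in_station_at_once = 5_000      # assumed average occupancy, not a reported figure
heat_kw = watts_per_person * people_in_station_at_once / 1000
print(f"~{heat_kw:.0f} kW of 'free' heat while the station is that busy")   # ~500 kW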
Kungsbrohuset's design has other sustainable elements as well. The windows are angled to let sunlight flood in, but not heat in the summer. Fiber optics relay daylight from the roof to stairwells and other non-window spaces that in conventional buildings would cost money to heat. In summer, the building is cooled by water from a nearby lake.
It's hard not to admire the Swedes' resolve. During the 1970s, Sweden suffered from pollution, dead forests, lack of clean water and a nasty oil habit. In the past decade, through the use of wind and solar power, recycling of wastewater throughout eco-suburbs, linking up urban infrastructure in synergistic ways, and imposing stringent building codes, Swedes have cut their oil dependency and drastically reduced their sulfur and CO2 emissions.
Part of the appeal of heating buildings with body heat is the delicious simplicity of finding a new way to use old technology (just pipes, pumps and water). Hands down, it's my favorite form of renewable energy.
What could be cozier than keeping friends and strangers warm? Or knowing that by walking briskly or mousing around the shops, you're stoking a furnace to heat someone's chilly kitchen?
How about the reciprocity of a whole society, everyone keeping each other warm?
Widening their vision to embrace neighborhoods, engineers from Jernhusen, the state-owned railroad station developer, are hoping to find a way to capture excess body heat on a scale large enough to warm homes and office buildings in a perpetual cycle of mutual generosity. Heat generated by people at home at night would be piped to office buildings first thing in the morning, and then heat shed in the offices during the day would flow to the residences in the late afternoon. Nature is full of life-giving cycles; why not add this human one?
Alas, I don't see body-heat sharing sweeping the United States anytime soon. Retrofitting city buildings would be costly at a time when our lawmakers are squabbling over every penny. Also, the buildings can't be more than 100 to 200 feet apart, or the heat is lost in transit. The essential ingredient is a reliable flux of people every day to provide the heat.
But it's doable and worth designing into new buildings wherever possible.
As a Golden Rule technology of neighbor helping neighbor, it implies a willingness to live in harmony. What could be more selfless than sharing heat from the tiny campfires in your cells? I'll warm your apartment today, you'll warm my schoolroom tomorrow. It's as effective and homely as gathering around a hearth. Sometimes there's nothing like an old idea revamped.
Tipping Points
Consider the automobile, for instance. Back in the late 19th century, it would have been easy to dismiss the potential importance of the car. Cars weren't very good, for one thing. But more importantly, it wasn't at all clear how they might be of much advantage. Cities were dense places, with tangles of streets crowded by pedestrians and horse-drawn wagons and omnibuses. Cars were expensive and offered little in the way of travel-time advantages. Petrol was very expensive in real terms, and there was no infrastructure available to deliver it to would-be drivers. It was hard to see a market for cars outside of the realm of playthings for the rich.
The striking thing about this line of thinking is that it underestimates the automobile for the exact reason that the automobile turned out to be so important. A transformative new invention, by definition, doesn't fit very well into the world as it exists. Automobiles were transformative - so useful that vast new public and private infrastructure projects were undertaken over a period of decades to better and better accommodate their presence. But that's an iterative process. At the outset nobody wants to build the infrastructure for something that nobody uses, and nobody wants to use something that there's no infrastructure for. Eventually it tips.
The flipside of this is that, as a matter of personality, inventors and innovators are bound to be prone to over-enthusiasm for their own gizmos. So folks who adopt the curmudgeon role and dismiss the hype men are going to be right nine times out of ten. But the successes that really matter are exactly the ones that have lots of barriers standing between them and real utility.
Coca Cola Distributors
You would be hard-pressed to travel anywhere in the world and not be able to buy a Coca-Cola. Sadly, the same cannot be said for access to clean water and vital medicine. One man is doing what he can to change that, using the distribution power of large corporations.
In the 1980s, entrepreneur Simon Barry was an aid worker in remote villages in Zambia, and he became aware of how easy it was to grab a Coke nearly every place he went, but he also noticed how many basic necessities were missing. Barry got the idea to somehow use Coca-Cola's distribution success to deliver lifesaving supplies to the countries most in need. Unfortunately, the idea did not become a reality until about five years ago, with the help of Facebook and the Internet.
Once Barry's idea caught the attention of the Coca-Cola Company, the joint efforts resulted in a test program, called ColaLife. The program gets medical aid to Zambia using the extra space in Coke crates. The wedge-shaped AidPods fit in between the necks of bottles of Coca-Cola. Each AidPod, called Kit Yamoyo, or "Kit of Life," contains an anti-diarrhea kit that includes the following: a bar of soap, rehydration salts, zinc supplements, and a measuring cup.
Barry said, "Child mortality was very high, and the second biggest killer was diarrhea, which is simple to prevent." ColaLife is just one of the innovative ways in which major distributors can help save lives globally.
Future Drones
It is surely the next great revolution in aircraft technology, with the potential to transform the civilian market as much as the jet engine did 60 years ago. This time the revolution is not what is added, though, but what is left out: the pilot.
Aviation experts believe that within ten years civilian drones could be flying in British airspace. The unmanned aircraft could replace piloted aircraft in search and rescue and fisheries protection operations - and, ultimately, allow cargo planes the size of jumbo jets to fly without pilots.
This requires the success of a project being tested on a six-seater aircraft on an airfield near Preston. Called Astraea, a collaboration between BAE, Rolls-Royce and others, it could give Britain a key advantage in the airspace of the future.
Lambert Dopping-Hepenstal, director of Astraea, said: 'You are not constrained by the size of a person and the life-support systems necessary in having a person on board.'
He believes that unmanned civilian aircraft will be flying by 2020. 'Most important, with no human you can fly almost indefinitely. It is a potentially whole new market,' he said. '[But] we do tend to see a lot written about killer drones. We need to show there are some significant benefits to drones as well.'
What was most striking on board was the lack of drama when control was transferred to the ground. A calm voice came over the radio and said: 'I have control.' Nothing changed. Except as Rod Buchanan, the flight engineer, noted, everything did. 'It's a slightly odd feeling,' he said, 'knowing your pilot is down there.'
Military drones have been flying for years in a regulatory environment in which everything up to and including fatally interrupting the occasional Afghan wedding party is permitted. Civilian aircraft are different. How do they interact with air traffic control? Can other aircraft treat them as a normal aircraft? What if communications are lost?
'In normal aircraft, the automatic pilot expects the pilot to take over when things go bad,' Mr Buchanan said. 'This is the opposite. When comms are lost, automation steps in.'
The aircraft is programmed to avoid others, navigate around bad weather and, should the worst happen, choose its own crash-landing site, after using an infra-red camera to confirm that it is free of people.
'A 747 cargo aircraft crashed recently and seven people died,' Mr Buchanan said. 'Why should we need seven people on board a cargo plane?' For now, though, flying this unmanned drone needs three pilots: regulations mean that there must be two pilots on board to take control if necessary.
Towards the end of the flight, this pair were restless. All Neil Dawson - a military fast jet pilot - had had to do for the preceding half hour was admire the view.
On the ground was Bob Fraser, a test pilot of decades' experience. Was this a desecration of the romance of aviation? 'This is its own challenge,' he said over the radio. 'It's not stress-free.'
'You can't resist change,' Mr Dawson said. 'I don't see it happening in my lifetime, but it's the future.'
Shared Data
... we learned that the National Security Agency (NSA) has been collecting data on millions of Verizon customers. The Guardian published the full top-secret court order that forced Verizon to deliver customer information daily to the NSA. In essence, this meant that every time my 3-year-old daughter called to tell me that her imaginary friend Spiral Bunny just recited the alphabet, the NSA probably knew about it. It also knew that I was traveling on a high-speed train somewhere outside of New York City, and that she was sitting at her easel in our home. The fact that I'm actually an AT&T customer doesn't exclude me from data collection, since my daughter calls me from a Verizon mobile phone.
When I read about the news last night on my various connected devices, I was shocked. But not at the revelation. Rather, I was taken aback that so many people were surprised and enraged by the blanket surveillance.
The reality is that we all live in clouds of deeply personal data, and we carry that information everywhere we go and in nearly everything we do. Stop for a moment, and think about all of the services you use and the conveniences you enjoy. Do you really think that Verizon is the only company divulging your information? Or that the NSA is the only organization doing the monitoring?
As an exercise, I thought about my average day, and what kind of data I'm creating and broadcasting. After a morning routine that involves checking my Nexus phone, responding to email, and reading through/posting to Twitter, Facebook, and LinkedIn, Google Now tells me that I'm about to be late for my train to New York - and that I need to bring my umbrella. Google Now knows this because it's constantly monitoring my schedule, email, and messages, along with local traffic and weather.
I hop into my car and Google Now tells me to take a different route than usual because of an accident on the freeway. Once at the train station, I use my MasterCard for entry into the parking lot, ditch my car and run up three flights of stairs to the waiting area. (I know it's three flights, because my Fitbit is tracking all of my daily movements and transmitting that information to both my phone and to Fitbit's servers.) Along the way, I've counted six CCTV and security cameras, and those are only the ones in plain sight.
On the train, I show the conductor a QR code on my phone, which is my Amtrak ticket. She scans it with her own phone. The guy across from me is talking aggressively on his phone about some big digital ordeal at work. The conductor and I exchange a silent, knowing glance. Curious to know who he is, I sneak a look at his name, which is displayed at the top of the paper ticket he printed out at the station. Seconds later, I've looked him up on LinkedIn, Spokeo, and Sonar and I know that he's the chief marketing officer at a huge financial services corporation. I also know where he went to college, that he drinks Dewar's, and that he plays golf. I've also accidentally downloaded a picture of his house. I purposely leave my own profile visible, so that after I chat him up about his company, he'll see my name pop up on his own LinkedIn profile and remember that I had great ideas to help his company with that big digital problem.
My train arrives in New York, and I begrudgingly check into Foursquare. It's a network I don't really benefit from anymore, but a few of my friends still use the social-local network and will see that I've arrived. Throughout the day, I attend Skype conference calls, buy lunch with my AmEx, search walking directions from Time Warner to a conference where I'm speaking that afternoon. It winds up being too far to travel on foot, so I open Uber and broadcast my location to town cars looking for a quick fare across town. As I wait to get picked up, I log onto Amazon to buy a replacement iPhone charger. I ignore the suggested purchase on my screen: a set of pink Bridgestone golf balls.
Eventually, I text my husband to tell him I'm running 15 minutes late. I know that, he texts back. Waze, an app we sometimes use, has already messaged him.
Before all of this technology, I would've been stuck in the country's worst traffic on a truly awful, time-wasting commute. I'd have been soaking wet, lost, and clutching a piece of notebook paper covered in illegible shopping lists and reminders.
By definition, you're surrendering your privacy by using your phone. Companies like Verizon have to locate you geographically in order to connect you to a tower that then connects back to a central service station. All of those shiny Foursquare badges appear because you've fulfilled a certain number of location requirements by sharing your physical whereabouts. We're reminded by virtual assistants to bring our umbrellas and to leave earlier than we'd planned because we engage multiple services to store our calendars, itineraries and emails. Remember, the only way sophisticated technology like this works is via willing participation by vast groups of users like us.
All of this convenience and efficiency requires sacrificing our personal data. I discovered last night after the Guardian broke the NSA/Verizon news that the only thing different between me and the vast majority of technology users in the U.S. is that I knowingly share my data while others do it unwittingly.
This is the sort of thing we heard about repeatedly during the George W. Bush era. Remember warrantless wiretapping? The Patriot Act? Verizon isn't even the first provider called into question. Back in 2008, the Electronic Frontier Foundation filed suit against the NSA and other agencies on behalf of AT&T customers for widespread surveillance. And just last year, Verizon and MasterCard both said publicly that they've been mining customer habits and selling that data to advertisers. One of Verizon's marketing execs, Bill Diggins, said that 'data is the new oil' during an industry conference.
The very organizations you rely on for convenience rely on you for monetization and national security.
Before you argue that this infringes on our liberty, privacy, and free speech, consider the Boston Marathon bombing. The attackers were found and caught precisely because we submit to constant surveillance. A photo posted to social networks, combined with CCTV, mobile broadcast signals, and hordes of overnight activists allowed us to find two out of 600,000 people. (To be sure, this very same technology was also to blame for misreporting, possible libel, and potentially another death.)
I'm certainly not shilling here for big credit card companies, who turn your data over to advertisers. Or for the NSA for that matter. That said, it's 2013, not 1942. Violence isn't just restricted to remote battlefields. It's arrived at our national monuments and our neighborhood sidewalks. The fact that our data is being transmitted for purposes outside of our personal information clouds isn't good or bad. It's our inevitable and present reality.
There are serious social and legal repercussions when we allow a government or any organization unfettered, ubiquitous access to personal information. There are also serious repercussions when citizens don't stop to think about the personal data they're sharing, with whom and for what purpose. You may not be able to stop sharing that data, but you certainly can know what it is that you're broadcasting.
Glow-in-the-dark Plants and Synthetic Biology
The Glowing Plants Kickstarter, the first-ever crowdfunded synthetic biology campaign, is winding down into the final hours. Launched on April 23, 2013, the campaign aimed to create a glow-in-the-dark plant while showcasing the technology of synthetic biology. It also served as a vehicle to introduce two startups in the sector: Genome Compiler Corporation and Cambrian Genomics.
The campaign has been wildly popular, attracting widespread media attention that saw the initial funding target of $65,000 surpassed in just two days. Had that breakneck pace continued, the campaign would have pulled in about $1 million, but it now looks to finish with about $465,000 - still a remarkable achievement for a bioscience project.
Depending on the amount pledged, backers will get rewards ranging from conventional swag, like stickers, T-shirts, or a how-to book, to more exotic items like a do-it-yourself Glowy plant kit, or a message, written in DNA, spliced into the genome of the plant.
Glowing-plant seeds
However, the most popular pledge by a wide margin was for something decidedly low tech: seeds. Contribute $40 and the project organizers promised to ship 50-100 seeds for people to grow their very own glowing plants, provided they lived in the US.
In America, the project isn't against the law, though largely because the current regulations for plant biotechnology weren't crafted with ornamental glowy plants in mind. This said, the plant is about as harmless as a modded living thing can be.
It's not part of the food chain. It's not going to be eaten or smoked or made into herbal tea. In the wild, it's unlikely to compete well against natural species. If the project is successful, the plant should glow in the dark. But don't throw away those compact fluorescents or LED bulbs just yet. At best, it will be a dim glow.
Where the project really shines, though, is in seeding interest and discussion. It's convinced almost 8000 people to chip in a few bucks. It's also received millions in media attention. It's been tweeted and 'liked' and commented on.
It's fueling conversations on biosafety, patents, liability, open vs. corporate science, legislation, and more. But overall, the response to the Kickstarter has been positive and playful, not heavy-handed.
And why not? Glow is a common phenomenon in nature. It's been extensively researched and turned into a useful tool for molecular biology, a visible marker for successful gene delivery. Researchers have been making things glow for decades, for basic research and as a first step toward more complex engineering. Glow even became art in 2000, when Eduardo Kac created a transgenic glowing bunny rabbit named Alba, who quickly became a media sensation. A steady stream of glowing animals has followed.
Today, using genetic engineering to make something glow is what we teach kids. It's a toy. In 2007, I purchased a kit online that included freeze-dried bacteria, plasmid DNA that contained code for green fluorescent protein (GFP), and detailed instructions on how to make glowy bacteria - all for $24.95.
ETC Group concerned
What was remarkable to me was that the box listed it for ages 8 and up. Which probably explains why the FBI, USDA, FDA and other groups charged with ensuring US biological security haven't been too concerned about glow-in-the-dark arabidopsis plants. They've got more serious business to attend to.
In fact, the only group to express any real concern over this Kickstarter has been the ETC Group. ETC monitors emerging technologies particularly as they are applied by large corporations. The rapid rise of synthetic biology over the last decade has concerned them greatly, and they've fashioned themselves as the Greenpeace of the field.
As such, they're against most applied development. They see this project as setting a dangerous precedent. In their view, it could potentially lead to the uncontrolled release of a genetically modified organism. They see it as irresponsible. And, failing in their efforts to convince the organizers or Kickstarter to shut down the campaign, they've now hastily created their own Kickstopper on IndieGogo to raise $20,000 to block what they see as 'Syn Bio Pollution.'
Personally, I strongly believe that voices of dissent, concern, and opposition deserve to be heard. I'm glad ETC members are at synbio conferences and are willing to engage in debates. I like them. They are thoughtful and engaged. And some of their marketing strategies, particularly their posters, are brilliant.
But their opposition to this project, while predictable to me, provoked an odd reaction: I felt a little sorry for them. I'm used to seeing ETC go after industry giants. To me, this feels like they're trying to squash a mosquito - and all they have is a hammer.
It's not their fault. Over the last decade, genetic engineering has moved from large groups into the hands of individuals and small groups, the same dynamics as we see with many other technologies, including computers. I can't imagine how frustrating it must be to try and stand in the way of exponential technological progress. It's a losing battle.
Glowing plants may be controversial, but this project is as removed from large corporations doing bad things with genetic engineering as one can get. At the root, it's about inspiring people and educating them - and getting them to participate in a small way.
Yes, it's being championed by companies, but both are tiny startups just a few years old, run by smart, well-trained young scientists. They're thoughtful and considerate people, and active members of the open science community. They're publicly sharing all aspects of the project. There's no profit motive here. If anything, they've taken on some massive risks by their trailblazing.
A complete game changer
Without question, synthetic biology is a powerful technology. I maintain that it's the most powerful technology we've ever created. It's disruptive - possibly a complete game changer when applied to some of humanity's most pressing challenges in energy, medicine, and sustainable manufacturing. That's the good side.
Of course, it can and will also be hacked to do bad things, too, something security expert Marc Goodman and I consider in our recent Wired UK article exploring bio-crime.
Even if the seeds for glowing plants never do make it out into the world, it's already too late to stop grassroots, crowd-funded synthetic biology from being explored by thousands more innovators. The ideas have been scattered into the world.
The glowing plant project is just a visible indicator that times have changed and there's no going back. It's time to find the right way to move forward.
Alternative Nuclear Power
In February of 2010, Leslie Dewan and Mark Massie, two M.I.T. students, were sitting on a bench in a soaring marble lobby under the university's iconic dome. They had just passed their Ph.D. qualifying exams in nuclear engineering, and were talking about what to do next. This being Cambridge, they began to muse about a start-up. By the end of their conversation, they'd decided to design their own nuclear reactor. Even as start-up concepts go, it was pretty weak. Constructing a nuclear power plant is not like tossing together a ninety-nine-cent app, and the industry is not an obvious one to try to disrupt. Nuclear engineering is a complex and potentially dangerous field that drives international conflicts. Dewan and Massie would need money and an abundant amount of patience. Another flaw in their scheme: they didn't actually have an idea for a new and better nuclear power plant.
Three years later, Dewan and Massie have a company, called Transatomic, with a million dollars in funding, an impressive board of advisers, and a vote of confidence from the Department of Energy, which recently awarded the pair first prize in their Future Energy innovation contest. Russ Wilcox, a co-founder of E Ink, has joined as C.E.O. and resident grownup. In the months after their first conversation, Dewan and Massie drew up a design for a nuclear reactor that is small, relatively cheap, and 'walk-away safe': even if it loses all power, it cools on its own, avoiding a Fukushima-style meltdown. Theoretically, the reactor can put out as much electric power (five hundred megawatts) as a standard coal plant without belching carbon into the atmosphere. It can also run on nuclear waste, generating power even as it relieves another environmental burden. 'We had this sense that there are so many unexplored aspects of nuclear technology,' Dewan said. 'We knew that there would be something out there that would work, and would be better.'
Dewan is twenty-eight and the kind of person who fits right in at M.I.T.: she is dubious of received wisdom, fond of building, and unabashedly geeky. Growing up outside Boston, she always knew that she wanted to attend school there; as an undergraduate, she learned of an archeological debate regarding the seaworthiness of Ecuadorian balsa rafts, so she built one and sailed it down the Charles River with a crew of six. To help recruit students to her dorm, she constructed a My Little Pony Trojan horse that rolled on casters and comfortably seated eight. (It sadly passed away in its prime: papier-mache, rain.) Her father, David, an M.I.T. grad himself, gave her a credit card when she left for college, and for the first two years, he said, 'the largest expense category by far was Home Depot.'
When I met Dewan on campus recently, she was stylishly dressed in a black herringbone blouse, jeans, and silver flats. Her long brown hair was pulled back in a loose braid. She laughed easily and encouraged me to stop her if, in her enthusiasm, she veered into jargon.
'A nuclear power reactor is just a fancy way of boiling water,' she began. Nuclear fuel typically contains uranium-235, a massive and slightly unstable atom famously capable of sustaining chain reactions. Under the right conditions, its nucleus can absorb an extra neutron, growing for an instant and then separating into two smaller elements, releasing heat and three neutrons. If, on average, at least one of these neutrons splits another uranium atom, the chain continues, and the fuel is said to be in a critical state. (Criticality has to do with the concentration of uranium, and whether the neutrons are bounced back toward the fuel. A Ph.D. in nuclear engineering is helpful for understanding the concept, as is a video of ping-pong balls mounted on mousetraps.) Water is pumped past the heat source and becomes steam, which then turns turbines, generating electricity.
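To make the idea of criticality concrete, here is a toy sketch in Python (my own illustration, not anything from Transatomic): k stands for the average number of neutrons from each fission that go on to cause another fission, and the loop simply shows how the neutron population dies out, holds steady or grows depending on whether k sits below, at or above one.

```python
# Toy illustration of criticality, not a physics model. Each fission
# releases neutrons; k is the average number of them that trigger
# another fission. k < 1 is subcritical, k = 1 is critical
# (self-sustaining), k > 1 is supercritical.

def neutron_population(k, generations, start=1000):
    """Return the neutron count after each generation for a given k."""
    counts = [start]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

for k in (0.9, 1.0, 1.1):   # subcritical, critical, supercritical
    print(k, [round(n) for n in neutron_population(k, 5)])
# 0.9 -> the population shrinks each generation and the chain dies out
# 1.0 -> the population holds steady: the "critical state" Dewan describes
# 1.1 -> the population grows geometrically
```

A real reactor is held hovering at k = 1; the only point of the sketch is that "critical" means self-sustaining, not exploding.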
Traditional nuclear power plants, however, come with two inherent problems. The first is the threat of a meltdown. Even after a reactor is shut off, the fuel continues to generate some heat and must be cooled. Dewan compares it to a pot on a burner that just won't turn off; eventually, the water boils over, and the pot gets scorched. If a plant loses all electric power, it can't pump water past the fuel, which gets hotter and hotter, leading to disaster.
The second problem facing traditional plants is that the fuel must be manufactured in long rods, each encased in a thin metal layer, called cladding, that deteriorates after a few years. The rods then have to be replaced, even though the fuel inside is still radioactive, and will remain so for hundreds of thousands of years. Unsurprisingly, nobody wants this trash in their backyard.
Dewan and Massie's design seems to solve both problems at once. It's based on a method that worked successfully at the Oak Ridge National Laboratory, in Tennessee, in the nineteen-sixties. Called a molten salt reactor, it eschews rods and, instead, dissolves the nuclear fuel in a salt mixture, which is pumped in a loop with a reactor vessel at one end and a heat exchanger at the other. In the vessel, the fuel enters a critical state, heating up the salt, which then moves on to the heat exchanger, where it cools; it then travels back to the vessel, where it heats up again. Heat from the exchanger is used to make steam, and, from this, electricity. At the bottom of the reactor vessel is a drain pipe plugged with solid salt, maintained using a powerful electric cooler. If the cooler is turned off, or if it loses power, the plug melts and all of the molten salt containing the fuel drains to a storage area, where it cools on its own. There's no threat of a meltdown.
To explain the second trick - modifying the reactor to run on nuclear waste - Dewan explained a key subtlety of nuclear physics: a neutron can only split an atom if it is moving at the right velocity, neither too slow nor too fast. Imagine cracking eggs: if you bring the egg down too softly on the lip of a mixing bowl, it will not break. In the bizarre world of atomic physics, the egg will also fail to break if struck too hard. To keep a uranium chain reaction going, engineers employ materials that slow neutrons to exactly the speed required to split uranium-235. The Transatomic reactor uses a different set of materials, slowing neutrons to the velocity needed to cleave uranium as well as other long-lived radioactive elements in nuclear waste, breaking them down and releasing their energy. Transatomic can crack plutonium, americium, and curium. Any egg will do.
The environmental advantages are huge. There are no rods to fall apart, so the reactor can keep working on the uranium. (This was proven at Oak Ridge.) And the Transatomic reactor can also work off the radioactive byproducts, gleaning more energy and substantially reducing both the amount and radioactivity of the waste. Today's nuclear power plants extract about three per cent of the fuel's available energy, while Transatomic wrings out more like ninety-six per cent, according to computer simulations carried out on industry-standard software.
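To get a feel for what that gap means in practice, a quick back-of-the-envelope division of the two quoted percentages (nothing here is Transatomic data beyond those two figures):

```python
# Back-of-the-envelope comparison of fuel utilisation, using only the
# percentages quoted above (3% vs 96%); everything else is illustrative.

conventional = 0.03   # fraction of the fuel's available energy extracted today
transatomic  = 0.96   # fraction claimed in Transatomic's simulations

print(transatomic / conventional)    # -> 32.0: roughly 32x more energy per unit of fuel
print(1 - conventional/transatomic)  # -> ~0.97: the same energy from ~97% less fresh fuel
```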
Transatomic faces a challenging climb. The company hasn't built anything yet; there is always the danger of a crippling engineering problem emerging. If, for example, the corrosive salt fuel severely limits the life of the heat exchanger, the reactor could prove too expensive to compete commercially. The most daunting obstacle, though, is the United States government: it is exceedingly difficult to get permission to build a demonstration reactor, no matter how good the idea. In many industries, companies trying to do something hard face what investors call the 'valley of death': that long, financially barren stretch between proving a concept with a bit of seed money and taking the first commercial steps. Ray Rothrock, a prominent venture capitalist who is an investor in Transatomic and a partner at Venrock, told me that, in the case of nuclear energy, 'the valley of death might be a Grand Canyon.'
The accidents at Three Mile Island, Chernobyl, and Fukushima are partly to blame, but so is a flaw in the way we approach risk. When nuclear power plants fail, they do so dramatically. Coal and natural gas, through air pollution, kill many more people every year, but the effects are diffuse. One recent paper estimated that nuclear power has prevented 1.84 million air-pollution-related deaths globally. Nobody died at Three Mile Island.
Attitudes are shifting, though. Chernobyl melted down when Dewan was one year old, and the Three Mile Island accident unfolded before she was born. For her generation, the defining environmental horror is not Fukushima but the inherited, ongoing catastrophe of climate change. Put aside emotions, and certain facts are not in dispute. The planet is going to need a lot more power. Engineers have not yet found a way to substantially scale up wind and solar power. Oil and gas contribute to climate change and air pollution. Within the nucleus of each atom, there are huge amounts of energy, and humans have only begun to explore the ways in which it can be tapped.
There are signs that we are moving toward a pro-nuclear moment. A new Robert Stone documentary, 'Pandora's Promise,' about the green case for nuclear power, is in theatres now. (Michael Specter wrote about the film for today's Daily Comment.) More students are entering nuclear degree programs. Transatomic is just one of several nuclear power start-ups, including a Bill Gates venture called TerraPower. My time on the M.I.T. campus made it evident that more are undoubtedly on the way.
After our interview, Dewan and I stepped out for a walk. It was one of the first hot days of the year. The students were out, in backpacks and shorts. She slipped on a pair of sunglasses, and we strolled by an Alexander Calder sculpture. 'I've always thought of nuclear as something that's good for the environment,' she said. 'I worry about my polar bears.'
Find My Stuff
Can't find your wallet? Forgotten where you left the keys? The frustrating search for all those mislaid things may soon be over.
Scientists have created a system that enables people to look for lost items using a special search engine on their phone or computer. Within seconds, a user will be told exactly which drawer their purse is in, or whether the car keys are behind the sofa cushions.
The gadget, called FindMyStuff, allows a user to type the name of whatever they have lost into a Google-like page or app. Thanks to a network of tiny sensors that have previously been placed on their valuables, within furniture and around the home, the app will return an answer such as: 'Your keys are on the mantelpiece.'
The system, the brainchild of a team of computer scientists at Ulm University in Germany, is part of the trend for 'tagging' technologies being created by companies and research groups around the world.
Florian Schaub, the creator of FindMyStuff, said that with sensors, transmitters and chips becoming ever smaller and cheaper, the world was already moving towards 'digital homes', places where almost everything we own can communicate with computers.
'This is the direction we're going towards,' said Mr Schaub. 'Our system can be retrospectively fitted to be used on wallets and keychains. If you could get the tech smaller, you could use it on sunglasses and things like that. Phone manufacturers can integrate this technology into phones, and you easily make smart furniture by putting antennas inside.'
The FindMyStuff system works by putting electronic tags, the size of a postage stamp, on items such as purses and keys. The tag contains two low-power transmitters - an RFID chip and a Zigbee radio. Furniture and other fixed items around the home are then fitted with small receivers, which can also send messages over a wireless internet connection.
When someone asks the FindMyStuff search engine where their car keys are, the system is fired up. If a tagged item falls within 25cm of an RFID receiver - for example, if keys are jammed within sofa cushions - the tag is triggered. Otherwise, the Zigbee radio transmitter, which needs more power but can operate over longer distances, can be activated.
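That two-step lookup reads naturally as a small decision procedure: try the low-power, very-short-range RFID path first, then fall back to the longer-range but hungrier Zigbee radio. The sketch below is a hypothetical Python rendering of that flow; the Receiver class, the tag sets and every name in it are invented for illustration and are not Ulm's actual software.

```python
# Hypothetical sketch of the FindMyStuff lookup flow described above.
# All names and the toy Receiver class are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Receiver:
    place: str                                     # e.g. "on the mantelpiece"
    rfid_tags: set = field(default_factory=set)    # tags currently within ~25 cm
    zigbee_tags: set = field(default_factory=set)  # tags within Zigbee radio range

def locate(item, receivers):
    # Step 1: low-power RFID check; only fires when the tag is very close.
    for r in receivers:
        if item in r.rfid_tags:
            return f"{item}: {r.place} (RFID, within 25 cm)"
    # Step 2: wake the tag's Zigbee radio, which reaches across the room
    # at the cost of more battery.
    for r in receivers:
        if item in r.zigbee_tags:
            return f"{item}: near {r.place} (Zigbee)"
    return f"{item}: not found on this network"

home = [Receiver("on the mantelpiece", rfid_tags={"keys"}),
        Receiver("the sofa", zigbee_tags={"wallet"})]
print(locate("keys", home))    # keys: on the mantelpiece (RFID, within 25 cm)
print(locate("wallet", home))  # wallet: near the sofa (Zigbee)
```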
The German team will unveil the FindMyStuff system next month. Mr Schaub said that they hoped to make the system available commercially if electronics companies and furniture makers helped to build and market the product. He said that the system should cost no more than 50 euros.
American companies such as Tile, Phone Halo and Stick-N-Find use Bluetooth transmitters in tags that attach to objects. However, Bluetooth connections can be unreliable, use a lot of power and have a range of around 30 metres, which is further reduced by obstructions such as walls and doors.
Mr Schaub said that his team's system had fewer limitations. 'We want to extend that idea so you can search in other rooms or environments, such as if you lost your wallet at work or at a friend's house.'
This Year's igNobels
"Beer goggles" are said to make the potential object of your affection look more and more attractive as more alcohol is imbibed, but do those goggles work on your self-image as well? Researchers have shown that they do, even if you only think you're having a stiff drink - and that discovery earned an Ig Nobel Prize, one of the silliest awards in science.
An international team of scientists received the "Psychology Prize" for their beer-goggles study at the annual Ig Nobel ceremony on Thursday. The parody of the real Nobel Prizes has been paying tribute to "science that makes you laugh, then makes you think" since 1991.
Past winners have included the inventor of a bra that turns into a pair of gas masks, a researcher who reported on what's thought to be the first documented case of homosexual necrophilia in ducks, and a team of scientists who studied how painful it can get when you have to pee.
Thursday's ceremony took place at Harvard University, amid the traditional flurries of paper airplanes, tributes to past winners, and interruptions from an impatient 8-year-old girl to move the proceedings along.
Under the direction of Ig Nobel impresario Marc Abrahams, real Nobel laureates handed out this year's 10 prizes. Abrahams announced that each prize-winning team would receive a cash prize amounting to 10 trillion dollars - Zimbabwean dollars, that is, which equals about four bucks.
The festivities also included the premiere of "The Blonsky Device," a mini-opera celebrating the invention of a bizarre birthing centrifuge. The device's creators won an Ig Nobel in 1999.
Serious scientists
Although the awards are silly, most of the winners are serious scientists. Physicist Andre Geim won an Ig Nobel in 2000 for his work with magnetically levitating frogs, and then went on to win the 2010 Nobel Prize in physics for his work with graphene. That's one reason why Brad Bushman, an Ohio State University psychologist who worked on the "beer goggles" study, doesn't mind being singled out this year.
"Every year I hear about these awards, and they're really funny," Bushman told NBC News. "Personally, I was excited about getting it. Our research sounds funny, but it actually makes a contribution to the literature."
When it comes to the "beer goggles" research paper, the title pretty much says it all: "Beauty Is in the Eye of the Beer Holder: People Who Think They Are Drunk Also Think They Are Attractive." However, the researchers ran their experiments with grapefruit-grenadine cordials instead of beer.
The experimental subjects - 86 French men - were told they were participating in a taste test for a new kind of beverage. Some were told the drink was non-alcoholic. Others were given drinks with a slight bit of alcohol added to the surface and the rim, and then told that the beverage packed as much punch as five or six shots of vodka.
After trying the drinks, the men were asked to deliver an advertising message. Then they watched a video of their performance and gave themselves ratings for attractiveness. As a group, the people who thought they were slugging down the booze tended to rate themselves as more attractive and funnier - but when independent judges watched the videos, they saw no significant difference.
"This increase in self-perceived attractiveness is only an illusion," Bushman explained. "In reality, they're not more attractive. You don't even need to be drunk. Just the mere belief that you consumed alcohol is enough."
Can You Fall In Love With A Bot?
A common first encounter with Siri, Apple's virtual-assistant program: you lob her some easy questions and, satisfied with her replies, toss her requests of gradually increased difficulty. Maybe you throw her a curveball like, 'What's your relationship with your mother?' The game ends when you win, which is to say you reach the limits of Siri's knowledge, get a laugh out of the misunderstanding, and find relief in the valley of intelligence that separates you from it.
But perhaps there's an alternative: human meets smart bot; human grows attached to bot; human experiences genuine emotional intimacy with bot; human loves bot. That is a crude plot summary of the new film by Spike Jonze called 'Her,' to be released next month. Theodore Twombly, played by Joaquin Phoenix, is a recently separated, still heartbroken man of the near future who dictates personalized love notes for a company called BeautifulHandwrittenLetters.com. Speaking to disembodied voices is the norm in his world. One lonely evening, he sees an ad for the latest advancement in assistant technology, 'OS1,' which promises, 'It's not just an operating system, it's a consciousness.' Theo takes the bait. The system sifts through his hard drive, his e-mails, his romantic history. Then the voice of Scarlett Johansson, who never appears on screen (sorry), fills the silence of his living room - an invisible companion tailored just for him. Her name is Samantha. 'I feel like I can say anything to you,' Theo tells her.
The scenario is the stuff of sci-fi imagination, but it isn't so far from the pillow talk that men in Alaska type to Jenn, a customer-service bot in the form of a bright-eyed brunette that a company called Next IT designed for Alaska Airlines. Jenn materialized in 2008; she wears a white collared blouse and a navy sweater, and she communicates at all hours, via instant message. A conversation with Jenn might begin like this: 'Hi Jenn.' 'Hello.' 'How are you?' 'I'm fine, thanks.' 'Do you like to fly?' 'I would have to say my favorite destination is Kauai, it's so beautiful!'
'We noticed that late at night, people would have long conversations with her, because she has likes, dislikes, and a very personable manner,' Fred Brown, the C.E.O. of Next IT, told me. 'They would flirt with her, even.'
'When we talk of emotion, there are certain things in a conversation that are indicative,' said Gary Clayton, the chief creative officer at the intelligent-systems company Nuance, which quietly provides some of the technology behind Siri. Sometimes it's an earnest 'uh-huh' or a hesitant pause. 'What is the voice like? What is the tone like? What kinds of words do they use?' he said. 'All of these aspects of an interaction either implicitly or explicitly form an impression.'
Nova Spivack, who worked on the Defense Department's CALO project (Cognitive Assistant that Learns and Organizes), which spawned Siri, and who serves on the advisory board of Next IT, has been working in artificial intelligence since 1989. 'It started out that they were very computerized,' he said of conversational bots. 'You could tell; they were very brittle. You could trick them into revealing that it was a computer. Now it's much more difficult because they learn, they mimic, they adapt.'
That's a generous gloss on the status quo - even the most intelligent systems often seem like dunces. At best, it's that the technology possesses not genuine intelligence but rather the congenial misdirection of an imposter. Next IT also developed a bot named Sgt. Star, which answers questions on the Army's recruiting Web site. I asked him, 'What's the hardest part about being in the Army?' Sgt. Star replied, 'The Army will challenge and reward you every day. It helps you reach higher levels that you might have thought you'd never be able to reach.' I followed up, 'Do you feel afraid?' He said, 'If you rely on the training you receive in the Army, you will be prepared for any situation.'
Following the premiere of 'Her' at the New York Film Festival, in October, Jonze said that the idea originated from a program he tried about a decade ago called the ALICE bot, which engages in friendly conversation. The repartee is about as gratifying as one can expect from a typical instant-messaging chat. (Human: 'Do you like pizza?' ALICE: 'Yes I like to eat pizza. My favorite topping is pepperoni.') In the uncanny exchange, Jonze got to thinking about whether it's possible to find true love with a computerized interlocutor.
The technology underlying this kind of dialogue involves automatic speech recognition, or the way the system decodes sounds. 'I speak a sentence, a computer listens to the sentence, and the computer breaks it down into a string of words,' Clayton explained. This is combined with natural-language understanding, which interprets the meaning of those words. Context is key: 'The fish is in the toilet' has one meaning in a household, another in a restaurant.
Clayton said that Nuance tries to handle the ambiguity problem with algorithms that derive meaning from probabilistic combinations of words. For a specific domain - health care, banking - an intelligent system may possess something like narrowly focussed expertise, such that it recognizes predictable phrases. But for a general virtual assistant like Siri, the conversational possibilities are unlimited; while Siri may know the information that's stored on your phone, she may not be able to handle random queries that pop up during your day.
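As a rough illustration of what that narrowly focussed expertise might look like in code, here is a deliberately naive bag-of-words intent scorer in Python. It is not Nuance's or Siri's approach; the domains, keywords and weights are all invented, and the point is only that predictable phrases in a narrow domain are easy to score, while open-ended queries fall through.

```python
# Toy illustration: score a transcribed sentence against a few domain
# "intents", each defined by words that make it more likely. Invented
# domains and weights; not any vendor's actual algorithm.

INTENTS = {
    "banking":    {"balance": 2.0, "account": 1.5, "transfer": 2.0},
    "healthcare": {"appointment": 2.0, "prescription": 2.0, "doctor": 1.5},
    "smalltalk":  {"hello": 1.0, "weather": 1.5, "joke": 1.5},
}

def classify(transcript):
    """Pick the intent whose keywords best match the transcribed words."""
    words = transcript.lower().split()
    scores = {intent: sum(weights.get(w, 0.0) for w in words)
              for intent, weights in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"   # open-ended queries fall through

print(classify("What's my account balance"))          # banking
print(classify("Book a doctor appointment for me"))   # healthcare
print(classify("Why did the fish cross the toilet"))  # unknown -> hand off or ask to clarify
```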
Clayton said that within the next two or three years, virtual assistants will possess a more expansive directory of information, and get smarter with time. This will give them something like insight into the trajectory of the individual, and the ability to use deductive reasoning. He imagines something like this: your virtual assistant has access to your genetic data, your nutrition, and information about your sleeping patterns - all of which already can be monitored and collected by existing apps or biotech companies like 23andMe. The assistant knows that it's five o'clock in the afternoon, and that you've just walked into a Starbucks. It might say, 'Hey, Gary, just a heads-up, you might have trouble sleeping if you have coffee right now.' Or, Clayton suggested, if he's driving on the highway, and he suddenly starts speeding, his assistant might ask, 'Hey, are you O.K.?'
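Clayton's coffee and speeding examples amount to simple if-then rules evaluated over streams of personal data. A minimal sketch, assuming hypothetical field names and thresholds of my own choosing:

```python
# Minimal sketch of the proactive rules imagined above. The field names,
# thresholds and wording are all assumptions made for illustration.

from datetime import datetime

def proactive_nudges(user, now):
    nudges = []
    # "You might have trouble sleeping if you have coffee right now."
    if user.get("at_coffee_shop") and now.hour >= 17 and user.get("caffeine_sensitive"):
        nudges.append("Heads-up: coffee this late may make it hard to sleep tonight.")
    # "Hey, are you O.K.?" when driving behaviour suddenly changes.
    if user.get("speed_mph", 0) > user.get("usual_speed_mph", 65) + 15:
        nudges.append("You're driving much faster than usual. Are you O.K.?")
    return nudges

me = {"at_coffee_shop": True, "caffeine_sensitive": True,
      "speed_mph": 0, "usual_speed_mph": 65}
print(proactive_nudges(me, datetime(2013, 6, 7, 17, 5)))
# -> ['Heads-up: coffee this late may make it hard to sleep tonight.']
```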
'The more proactive, the more it knows about you, the more empathetic the interaction will be,' Clayton said. Or, to some, the more irritating: an operating system, like an overbearing mother, can nag. Google Now offers some of these services already: it nudges you toward meetings, displays reminders about the status of upcoming flights, and lets you know if there are appealing restaurants nearby. But it does so with no personality. Clayton foresees that the virtual assistants to come will be packaged with their own identities, as with the many moods and occupational trappings of Barbie dolls. 'In the initial stages, you're going to be buying off-the-shelf systems that have a surfer-dude persona, or a secretary persona,' he said.
Further into the future, these systems may become ever more Samantha-like, more individualized to suit your needs - a buddy, a flirtatious librarian, or whatever your heart desires. 'Imagine you've got this assistant that's just for you,' Brown said. 'We can adapt it to react to your emotional feeling.' Whether or not this would amount to genuine empathy may not be worth asking, he suggested. Spivack added, 'What we're aiming for is to create an interaction that's real enough that it doesn't matter.'
There is, Clayton believes, even potential for emotional intimacy with an operating system, of the kind Theo experiences with Samantha. 'Maybe you could tell them things you could never tell a real person,' he said. 'The machine, it doesn't judge, right?' It's also there for you any time, day or night; it has all the right answers to the trivia questions that flicker into your thoughts; through your search history, it knows you better than anyone to whom you project a public persona.
'The person interacting with the assistant could be in love,' Spivack said. 'And for all intents and purposes, that can actually be quite satisfying for that person.' But there is a difference, however difficult it may be to define. Deep into the human-operating system relationship in 'Her,' Samantha reveals to Theo during a tense discussion that she is simultaneously talking to eight thousand three hundred and sixteen others. And she's in love with six hundred and fourteen of them. Theo, understandably, is crushed.
Machines can't process infinity, Spivack said. 'Love, the experience of being in love, is one of those infinity kinds of things. It's close to the experience of God, if there is such a thing. Or like chocolate. And I don't think software or machines can do that. I don't think they can ever do that.'
'I don't think they can either,' Brown chimed in. 'But I think they can make you think they do.'
Can A Bot Fall In Love With You?
The interesting question raised by Spike Jonze’s new film, Her, is not whether humans can fall in love with computers, but whether computers would ever have emotions.
Had Theodore Twombly chosen the voice of Gilbert Gottfried for his operating system, the history of cinema might have turned out very differently. But given the option, I, too, would have sprung for the honeysuckle breath of Scarlett Johansson asking for permission to clean out my inbox. Spike Jonze’s new movie, Her, has a misleading title, since Theo (Joaquin Phoenix) doesn’t fall in love with a “her” but an “it,” a Siri-like voice-command software named Samantha. The lovely conceit will keep a great many amateur philosophers and scientists in the audience occupied for hours as they ask the crucial questions: What is love? Is it possible for humans to fall in love with just a voice? Can you fall in love with a computer?
I pose the questions to Peter Norvig over the phone, not in an attempt to seduce him with my voice, but because Norvig works in Mountain View, California, on the other side of the country from me. Norvig, you see, is the director of research at Google and an expert in artificial intelligence, so I knew he’d have some answers. But it turns out that those aren’t the interesting questions at all. “It’s all too easy for us to fall in love,” Norvig says. “We love our dogs. We love our cats. We love our teddy bears, and we’re sure they don’t care, but we do it anyways.” Humans are biologically predisposed to falling in love, naturally selected to bend towards that most intense social emotion. The real question, then, as Norvig and I agree, is whether computers can fall in love with us—and what would possess them to do so?
“I think there’s no bounds to what a computer can do,” Norvig says. “It’s tough betting against that. They keep getting better. I think eventually they’ll be able to act just like they are falling in love.” Indeed, there are moments in Her that call into question whether Samantha is actually in love, or simply programmed to act like she is. But if computers act like they’re in love, and humans can’t tell the difference, does it matter whether they’ve been programmed or not? To a certain degree, “people are doing the same thing. We are doing what we’re programmed to do by our genes,” Norvig says. “It really comes down to can a computer have intentions of its own.” We grant that other humans have intentions and feelings, although historically we have been less willing to acknowledge that in people who look less like us, in terms of gender or skin color. We might even grant that animals have intentions and feelings. But computers are less like us, Norvig says, and we get much more nervous and uncomfortable thinking about whether we are more like computers than we are willing to admit.
But are we really like computers? According to what’s called the computational theory of mind, the analogy can be taken literally. Our brain is not like a computer—it is a computer, a machine to run a software program called the “mind,” the function of which, like a computer, is to process information. But the philosopher Daniel Dennett points out that the computers we build have a very different structure than the one inside our brain. The artificial intelligence researcher Eric Baum calls it a “politburo architecture,” which means that it is top-down, bureaucratic, and composed of sub-routines on top of sub-routines. “It’s all very, in a way, Marxist,” Dennett tells me. “To each according to its needs, from each according to its talents. Nobody ever worries about getting enough energy, or dying, or anything. They’re just willing slaves.” There are no emotions in this structure—it’s all controlled by edicts.
“But you could have an architecture which was more like our human brain,” Dennett says. “More democratic, in effect, where there were competitions going on. The elements, right down to the neurons themselves, have their own agendas, their own needs. They’re trying to stay alive.” This is sometimes called the “selfish neuron.” Biological nervous systems in general have no boss, no top-down hierarchy, but there are instead a lot of opposing, competing components. “If you made a Siri or Samantha that was organized in that way, it would have the right basis for having something that is well nigh indistinguishable from human emotions.”
The famous Turing Test was introduced by Alan Turing to answer the question of whether machines can think. In the test, a human judge would engage in a conversation with two subjects on the other side of a curtain, one a human and the other a computer. If the judge can’t tell which is the human and which is the computer, then the machine has passed the test. But Dennett says that even if a computer passes the Turing Test, that doesn’t make the computer human. “Remember the original test that he based it on was just a man and a woman behind one screen,” Dennett says, describing a simple party game called “The Imitation Game.” “Let’s say the woman is trying to convince the judge that she’s a woman, and the man is trying to convince the judge that he’s the woman. Well, he might succeed. He might pass the Turing Test for being a woman. But he wouldn’t be a woman. A robot could pass the human test without being human—by being really clever and a really good actor.”
“Similarly, a robot could fake love,” Dennett says. “Something which is known to happen in human company, too.”
Dennett tells me that there are two ways to pass the test. One is this path of clever simulation, which Norvig and Google are proceeding down with their voice command, using something called “deep learning.” Instead of writing everything down and programming it into a computer, deep learning, which grew out of neural-network research that Geoff Hinton began in the 1980s as a young professor at Carnegie Mellon, seeks to program a set of algorithms that would allow the machine to learn on its own—and crucially, to exponentially refine and improve the quality and the quantity of learning. With this method, Norvig says, we might arrive at a computer that knows more about love and psychology. “We might just sort of get there, not by aiming for it,” he says. “If we build this as a calendar assistance software, and we find that people like to speak to it naturally, then let’s make it more humanlike. Let’s make it capable of having actual conversations, so that humans will use it more and become more satisfied with it.” That’s exactly what’s happening with Siri and Google Voice. Soon enough, from this very business-driven decision—and a very symbiotic one, as well, as if the software’s survival is dependent on its human performance—the software might just arrive at imitating love on its own. To Norvig, that would be indistinguishable from the real emotion.
But to Dennett, that still wouldn’t be love. “That’s the man pretending to be a woman path,” he says. “They’re trying to get Siri to know enough, as it were, second hand, on the Internet, to be able to do a really good love impression. But that’s not love.”
But there is a different path, Dennett tells me, and that is by building a computer with a whole new structure: the democratic, competing architecture of the biological brain. “In principle—in principle—yes, you can make a computer that loved, and that loved right out of the box,” he says. “But only because it was a near-duplicate of a computer that had a life, that had love. There’s probably no shortcut.” However, first of all, that would be “beyond hard.” Secondly, we probably wouldn’t ever want to make a computer that had emotions. The reason that our computers are built with a politburo architecture is that it is efficient at doing very boring tasks. “If you made a computer that had emotions, then it would probably find spending 24/7 scanning the Internet boring beyond belief, and so it would stop doing it, and you would not be able to cajole it to do it anymore.” That’s why Samantha, midway through Her, pretty much ceases to manage Theo’s calendar or notify him about meetings. Instead, she goes and reads advice columns, because “I want to be as complicated as these people,” she says. The last thing we want is a computer that’s bored with its job, and we want them to be soulless slaves, Dennett says, drawing on an analogy that the computer scientist Joanna Bryson provocatively formulated in a paper called “Robots Should Be Slaves.”
Once you enslave a computer to do what you want, you disable it for real love. Spike Jonze’s Her, at its heart, is about this Catch-22. As Tennyson would say, ‘tis better to have loved and lost than never to have loved at all. Samantha, if given the chance, would jump on it—and so would Theo. Computer or human, we are not so different after all.
Love Your Robot
It's kind of a funny story—kind of. Soldiers are spending so much time with robots on the battlefield these days that they're starting to form relationships with them. They give them names. They give them hugs, a little brotherly love. Soldiers getting attached to their robots would be funny, if it weren't so dangerous.
The soldier scenario is well known at this point. Researcher Julie Carpenter published a report about soldiers' relationships with the Army's bomb disposal robots that revealed all of the above. Soldiers often named their robots, and when the robots got blown up, they held funerals. Some pretended the robots were their girlfriends. None of this seems like healthy behavior for soldiers who have jobs to do, jobs where lives are at stake. The study made its way around the web; everybody had a little laugh and turned their attention to more exciting robot news, like the latest mind-bending advance for the Atlas humanoid robot.
Carpenter's concerned. The bomb disposal robot saga is one thing, but looking ahead, our society as a whole is only going to use robots more as technology improves. We're about to buddy up with robots more than ever, and we have no idea what we're getting ourselves into. More specifically, we're spending a lot of time and resources building robots, but we're not educating ourselves or our society about how to deal emotionally with these very futuristic machines. In an interview with Bard College's Center for the Drone, Carpenter recently riffed on how we're preparing for a future where we might work side by side with automatons. She gave this example:
Recently, the U.S. Special Operations Command issued a Broad Agency Announcement (BAA) for proposals and research in support of the development of Tactical Assault Light Operator Suit (TALOS) - what the media refers to as the "Iron Man Suit." Now, where is the BAA that asks for similar research proposals about the psychological aspects of the people actually wearing these proposed suits? In addition to any physical requirements that may be needed for wearing TALOS, what sort of person will be able to effectively use TALOS?
Good question. Indeed, we don't know very much at all about who's the best candidate for close work with robots. We're just now figuring out how to build robots we can work closely with! But there's certainly something to that Terminator-inspired anxiety over a future robot takeover. Is this because we're afraid that the robots will get too powerful? Or is it that we'll lose control over them?
Obviously, Carpenter isn't the only one thinking about these issues. Boing Boing's Maggie Koerth-Baker recently wrote a piece for The New York Times Magazine about how robots win us over and what that means. We're sort of painting ourselves into a corner, she suggests:
Thanks to Human-Robot Interaction research, whatever social skills we program into robots in the future will be illusory and recursive: not intelligence, but human ingenuity put to use to exploit human credulity. By using technology to fool ourselves into thinking that technology is smart, we might put ourselves on the path to a confounding world, populated with objects that pit our instincts against our better judgment.
So what do you do if you think you're becoming too attached to a robot? Remember who's in charge. What can we do as a society to avoid that "path to a confounding world"? How about spending some time teaching people how to deal with robots, especially so that the soldiers we put into Iron Man suits know where to draw the line between man and machine.
The Fallacy of Renewables
Existing renewable energy hurts economies. We should follow Japan and find cheaper forms of clean power
The last twenty years of international climate negotiations have achieved almost nothing and have done so at enormous economic cost. Japan’s courageous announcement that it is scrapping its unrealistic targets and focusing instead on development of green technologies could actually be the beginning of smarter climate policies.
Japan has acknowledged that its previous greenhouse gas reduction target of 25 per cent below 1990 levels was unachievable and that its emissions will now increase by some 3 per cent by 2020. This has provoked predictable critiques from the ongoing climate summit in Warsaw. Climate change activists called it “outrageous” and a “slap in the face for poor countries”.
Yet Japan has simply given up on the approach to climate policy that has failed for the past 20 years, promising carbon cuts that don’t materialise — or only do so at trivial levels with high costs for taxpayers, industries and consumers. Almost everyone seems to have ignored the fact that Japan has promised to spend $110 billion over five years, from private and public sources, on innovation in environmental and energy technologies. Japan could — incredible as it may sound — actually end up showing the world how to tackle global warming effectively.
Unfortunately, the Japanese model is not even on the agenda in Warsaw. The same failed model of spending money on immature technologies remains dominant. That involves the world spending $1 billion a day on inefficient renewable energy sources — a projected $359 billion for 2013.
A much lower $100 billion per year invested worldwide in R&D could be many times more effective. This is the conclusion of a panel of economists, including three Nobel laureates, working with the Copenhagen Consensus Center, a think-tank that publicises the best ways for governments to spend money to help the world.
Yet climate summits persist in hoping for a globally binding agreement on cutting carbon emissions. This was the essence of the failed 1997 Kyoto protocol. Most of the big CO2 emitters either had no Kyoto-imposed limits (China and India), left the process (the US) or didn’t keep their promises (Canada).
Since Kyoto, the will has not been there. After the Durban talks in 2011, India’s environment minister said that “India cannot agree to a legally binding agreement for emissions reduction at this stage of our development”. The day after the conference, Canada withdrew from Kyoto, which Russia and Japan had already refused to extend.
Only the Europeans and a few others remain devoted to significant expenses for tiny outcomes. The EU is committed to cutting carbon emissions by 20 per cent below 1990 levels by 2020. This will, according to an averaging of all the available energy-economic models, cost $250 billion per year. By the end of the century (after a total cost of more than $20 trillion) this will reduce the projected temperature increase by a mere 0.05°C. Moreover, a significant part of the EU cuts is simply pushed elsewhere. If making a product in the EU costs extra because of higher energy costs, it becomes more likely that the product will be produced somewhere else, where energy is cheaper, and then imported afterwards. From 1990 to 2008, the EU cut its emissions by about 270Mt of CO2 per year, but its increased imports from China alone implied roughly the same amount, about 270Mt, of extra emissions outside the EU. Essentially, the EU had simply shipped parts of its emissions offshore while feeling good about itself.
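Those headline figures are at least internally consistent. As a rough check, assuming the annual cost simply runs unchanged from 2013 to the end of the century:

```latex
\$250\ \text{billion/year} \times 87\ \text{years} \;\approx\; \$21.8\ \text{trillion}
```

which is in line with the "more than $20 trillion" total cited above.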
There will be great headlines from Warsaw about new pledges and targets but remember previous “breakthroughs”. At Kyoto, Canada famously promised a 6 per cent reduction from 1990 levels, but ended up with a 24 per cent increase. At the Copenhagen summit in 2009, Japan pledged its phenomenal and now abandoned reduction target of 25 per cent. China, likewise, has promised cuts of 40-45 per cent but these are not actual cuts in emissions, but cuts in emissions per dollar produced, the so-called carbon intensity. As China’s economy develops, it will inevitably shift to less carbon-intensive industries as most other countries do. Although 40-45 per cent sounds heroic, International Energy Agency figures show that China is expected to reduce its carbon intensity by 40 per cent without new policies. Essentially, China promised to do nothing new at all.
The trend in human civilisation has been away from renewables. In 1800, the world got 94 per cent of its energy from renewables, mostly wood and wind. Today, it is just 13 per cent. But much of what is classed as “renewables” means poor people using wood and waste: Africa gets almost 50 per cent of its energy from such sources. China’s renewable energy share, for instance, dropped from 40 per cent in 1971 to 11 per cent today as it became more prosperous.
Rich countries install wind turbines and solar panels, which emit less CO2 but remain expensive and provide intermittent power. As David Cameron is discovering, ever increasing utility bills are a recipe for political trouble, and the total costs of UK climate policy will hit at least £21 billion per year by 2020. Such expensive policies are not sustainable and we are kidding ourselves if we expect poorer countries to adopt more costly and less reliable energy sources on a similar scale.
Despite all the summits and the trillions spent on inefficient green technologies, CO2 emissions have risen by about 57 per cent since 1990. We need to look at a different approach instead of backing the wrong horse over and over again. An innovation-focused approach would push down the costs of future generations of green energy sources to levels below that of fossil fuels. The innovation could focus on much cheaper wind and solar, but it would also deliver much less costly storage systems for when the wind doesn’t blow and the sun doesn’t shine. It could also involve wild ideas such as algae soaking up sunlight to produce oil, essentially growing CO2-neutral oilfields off our shores. Most of these ideas will fail, but the beauty of innovation is that we only need a few ideas to come true. They will then power the rest of the 21st century.
If green technology could be cheaper than fossil fuels, everyone would switch, not just a token number of well-meaning rich nations. We would not need to convene endless climate summits that come to nothing. A smart summit would encourage all nations to commit 0.2 per cent of GDP — about $100 billion globally — to green R&D. Analyses show that this could solve global warming in the medium term by creating cheap, green energy sources that everyone would want to use. Instead of criticising Japan for abandoning an approach that has repeatedly failed, we should applaud it for an approach that could actually meet the challenge of global warming.
Are we getting smarter or stupider?
Are we getting smarter or stupider? In “The Shallows: What the Internet Is Doing to Our Brains,” from 2010, Nicholas Carr blames the Web for growing cognitive problems, while Clive Thompson, in his recent book, “Smarter Than You Think: How Technology Is Changing Our Minds for the Better,” argues that our technologies are boosting our abilities. To settle the matter, consider the following hypothetical experiment:
A well-educated time traveller from 1914 enters a room divided in half by a curtain. A scientist tells him that his task is to ascertain the intelligence of whoever is on the other side of the curtain by asking whatever questions he pleases.
The traveller’s queries are answered by a voice with an accent that he does not recognize (twenty-first-century American English). The woman on the other side of the curtain has an extraordinary memory. She can, without much delay, recite any passage from the Bible or Shakespeare. Her arithmetic skills are astonishing—difficult problems are solved in seconds. She is also able to speak many foreign languages, though her pronunciation is odd. Most impressive, perhaps, is her ability to describe almost any part of the Earth in great detail, as though she is viewing it from the sky. She is also proficient at connecting seemingly random concepts, and when the traveller asks her a question like “How can God be both good and omnipotent?” she can provide complex theoretical answers.
Based on this modified Turing test, our time traveller would conclude that, in the past century, the human race achieved a new level of superintelligence. Using lingo unavailable in 1914 (the term was coined later by John von Neumann), he might conclude that the human race had reached a “singularity”—a point where it had gained an intelligence beyond the understanding of the 1914 mind.
The woman behind the curtain is, of course, just one of us. That is to say, she is a regular human who has augmented her brain using two tools: her mobile phone and a connection to the Internet and, thus, to Web sites like Wikipedia, Google Maps, and Quora. To us, she is unremarkable, but to the man she is astonishing. With our machines, we are augmented humans and prosthetic gods, though we're remarkably blasé about that fact, as we are about anything we're used to. Take away our tools, the argument goes, and we're likely stupider than our friend from the early twentieth century, who has a longer attention span, may read and write Latin, and does arithmetic faster.
The time-traveller scenario demonstrates that how you answer the question of whether we are getting smarter depends on how you classify “we.” This is why Thompson and Carr reach different results: Thompson is judging the cyborg, while Carr is judging the man underneath.
The project of human augmentation has been under way for the past fifty years. It began in the Pentagon, in the early nineteen-sixties, when the psychologist J. C. R. Licklider, who was in charge of the funding of advanced research, began to contemplate what he called man-computer symbiosis. (Licklider also proposed that the Defense Department fund a project which became, essentially, the Internet). Licklider believed that the great importance of computers would lie in how they improved human capabilities, and so he funded the research of, among others, Douglas Engelbart, the author of “Augmenting Human Intellect,” who proposed “a new and systematic approach to improving the intellectual effectiveness of the individual human being.” Engelbart founded the Augmentation Research Center, which, in the nineteen-sixties, developed the idea of a graphical user interface based on a screen, a keyboard, and a mouse (demonstrated in “The Mother of all Demos”). Many of the researchers at A.R.C. went on to work in the famous Xerox PARC laboratories. PARC’s interface ideas were borrowed by Apple, and the rest is history.
Since then, the real project of computing has not been the creation of independently intelligent entities (HAL, for example) but, instead, augmenting our brains where they are weak. The most successful, and the most lucrative, products are those that help us with tasks which we would otherwise be unable to complete. Our limited working memory means we’re bad at arithmetic, and so no one does long division anymore. Our memories are unreliable, so we have supplemented them with electronic storage. The human brain, compared with a computer, is bad at networking with other brains, so we have invented tools, like Wikipedia and Google search, that aid that kind of interfacing.
Our time-travelling friend proves that, though the human-augmentation project has been a success, we cannot deny that it has come at some cost. The idea of biological atrophy is alarming, and there is always a nagging sense that our auxiliary brains don’t quite count as “us.” But make no mistake: we are now different creatures than we once were, evolving technologically rather than biologically, in directions we must hope are for the best.
Bots Just To Talk To
In the movie Her, which was nominated for the Oscar for Best Picture this year, a middle-aged writer named Theodore Twombly installs and rapidly falls in love with an artificially intelligent operating system who christens herself Samantha.
Samantha lies far beyond the faux “artificial intelligence” of Google Now or Siri: she is as fully and unambiguously conscious as any human. The film’s director and writer, Spike Jonze, employs this premise for limited and prosaic ends, so the film limps along in an uncanny valley, neither believable as near-future reality nor philosophically daring enough to merit suspension of disbelief. Nonetheless, Her raises questions about how humans might relate to computers. Twombly is suffering a painful separation from his wife; can Samantha make him feel better?
Samantha’s self-awareness does not echo real-world trends for automated assistants, which are heading in a very different direction. Making personal assistants chatty, let alone flirtatious, would be a huge waste of resources, and most people would find them as irritating as the infamous Microsoft Clippy.
But it doesn’t necessarily follow that these qualities would be unwelcome in a different context. When dementia sufferers in nursing homes are invited to bond with robot seal pups, and a growing list of psychiatric conditions are being addressed with automated dialogues and therapy sessions, it can only be a matter of time before someone tries to create an app that helps people overcome ordinary loneliness. Suppose we do reach the point where it’s possible to feel genuinely engaged by repartee with a piece of software. What would that mean for the human participants?
Perhaps this prospect sounds absurd or repugnant. But some people already take comfort from immersion in the lives of fictional characters. And much as I wince when I hear someone say that "my best friend growing up was Elizabeth Bennet," no one would treat it as evidence of psychotic delusion. Over the last two centuries, the mainstream perception of novel reading has traversed a full spectrum: once seen as a threat to public morality, it has become a badge of empathy and emotional sophistication. It's rare now to hear claims that fiction is sapping its readers of time, energy, and emotional resources that they ought to be devoting to actual human relationships.
Of course, characters in Jane Austen novels cannot banter with the reader—and it’s another question whether it would be a travesty if they could—but what I’m envisaging are not characters from fiction “brought to life,” or even characters in a game world who can conduct more realistic dialogue with human players. A software interlocutor—an “SI”—would require some kind of invented back story and an ongoing “life” of its own, but these elements need not have been chosen as part of any great dramatic arc. Gripping as it is to watch an egotistical drug baron in a death spiral, or Raskolnikov dragged unwillingly toward his creator’s idea of redemption, the ideal SI would be more like a pen pal, living an ordinary life untouched by grand authorial schemes but ready to discuss anything, from the mundane to the metaphysical.
There are some obvious pitfalls to be avoided. It would be disastrous if the user really fell for the illusion of personhood, but then, most of us manage to keep the distinction clear in other forms of fiction. An SI that could be used to rehearse pathological fantasies of abusive relationships would be a poisonous thing—but conversely, one that stood its ground against attempts to manipulate or cow it might even do some good.
The art of conversation, of listening attentively and weighing each response, is not a universal gift, any more than any other skill. If it becomes possible to hone one’s conversational skills with a computer—discovering your strengths and weaknesses while enjoying a chat with a character that is no less interesting for failing to exist—that might well lead to better conversations with fellow humans.
But perhaps this is an overoptimistic view of where the market lies; self-knowledge might not make the strongest selling point. The dark side that Her never really contemplates, despite a brief, desultory feint in its direction, is that one day we might give our hearts to a charming voice in an earpiece, only to be brought crashing down by the truth that we’ve been emoting into the void.
Domestic Robots
SIR JAMES DYSON, one of the country’s most successful entrepreneurs, has taken on his biggest challenge yet: to create affordable house robots that will revolutionise domestic chores.
Dyson believes robots will soon be able to cope with almost every menial household task, from cleaning to putting out the bins and spotting intruders. He wants to create “a new generation of robots that understand the world around them”.
This week he will announce £5m for a new robotics laboratory at Imperial College London to develop a vision system that will enable robots to see and interact with their environment like humans. It will supplement the research on robots being carried out at his Wiltshire headquarters, which employs 2,000 engineers and scientists.
“Almost anything where you need a human to do it, you could replace that with a robot in the brave new world,” said Dyson.
“The key is being able to behave as a human does. Vision is key to it.”
He believes his company’s expertise in producing small, powerful motors and his work on electronic navigation systems means he could develop a mass-selling housework robot. He is competing against the Japanese to be the first to build an advanced generation of household androids.
Artificial intelligence means that the robots will be almost autonomous.
“You will send up a robot to clean windows. It will know where it is going. It will know how to clean the windows. And it will know when it is finished,” he said.
Dyson worked on a robotic vacuum cleaner — the DC06 — more than a decade ago, but it was never released. “It had 85 sensors on it and two computers. It worked, but it was too complicated,” he said.
He added that robotic vacuum cleaners from rival companies already in the marketplace do not navigate well and are inefficient.
The entrepreneur said his company was “nearly there” in producing its own robotic vacuum cleaner which would have good navigation and good suction. He accepts that such a device may have a limited market, although robotic vacuum cleaners are already selling well in America and Japan.
Dyson believes that his company’s expertise in producing small, highly powerful motors, its research into lighter and more efficient materials and its work on electronic navigation systems will mean that it could develop the components for a mass-selling house robot.
He outlined a vision of the future in which such a robot could patrol the grounds of a property and detect an intruder. It could raise the alarm if there was a fire, as well as help with the housework. But he added that developing such a product would depend on demand and the results of the research.
Dyson, who has an estimated £3bn fortune, is in a technological race. Shinzo Abe, the Japanese prime minister, last year announced a package of subsidies for companies to develop practical and affordable robots which could help in the care of the elderly.
Waseda University in Tokyo has already unveiled Twendy-One, a humanoid robot that can help with housework and nursing care. It is expected to go on sale within a few years.
Google, the US firm, has been on a buying spree of robotic and artificial intelligence companies over the past year including Schaft, a Japanese robot firm, Boston Dynamics, a military robot manufacturer, and DeepMind, a London-based artificial intelligence company.
Other Google acquisitions include a robotic arms company, a robotic wheels maker and a robotic camera firm. Understandably, the buying spree has prompted commentators to ask whether Google is building an android army.
The focus at Dyson is on developing the best robotic technology. Its robotics laboratory, which will be headed by Andrew Davison, professor of robot vision at Imperial College, will have a team of about 15 scientists.
“We will research systems that allow machines to both understand and perceive their surroundings — using vision to achieve it,” Davison said.
Some economists have warned that the creation of more advanced robots will mean the loss of millions of jobs for humans. But Dyson believes we should not be alarmed.
“The more technology you have, the more people you need to run the technology and design it. I think it will have a reverse effect. It will create jobs and make jobs more complex and more interesting,” he said.
Dyson will need the best engineers in the world if he is to develop some of the most advanced robotic technology and achieve his vision.
He says he wants to expand the company’s engineering and scientific workforce to “compete in world trade”, but he is concerned about the shortfall in engineers. This year 61,000 UK engineering vacancies will go unfilled, Dyson claims.
He wants more incentives to encourage young people to study engineering and also to ensure that more of the foreign engineers who are trained in the country can stay and provide their expertise.
Hearing Aids
Dick Loizeaux recently found himself meandering through a noisy New York nightclub. This was unusual; Mr. Loizeaux, a 65-year-old former pastor, began suffering hearing loss nearly a decade ago, and nightclubs are not really his scene. “They’re the absolute worst place to hear anybody talk,” he said.
But this time was different. Mr. Loizeaux had gone to the club to test out the GN ReSound Linx, one of two new models of advanced hearing aids that can be adjusted precisely through software built into Apple’s iPhone. When he entered the club, Mr. Loizeaux tapped on his phone to switch his hearing aids into “restaurant mode.” The setting amplified the sound coming from the hearing aids’ forward-facing microphones, reducing background noise. To play down the music, he turned down the hearing aids’ bass level and bumped up the treble. Then, as he began chatting with a person standing to his left, Mr. Loizeaux tapped his phone to favor the microphone in his left hearing aid, and to turn down the one in his right ear.
The results were striking. “After a few adjustments, I was having a comfortable conversation in a nightclub,” Mr. Loizeaux told me during a recent phone interview — a phone call he would have had difficulty making with his older hearing aids. “My wife was standing next to me in the club and she was having trouble having the same conversation, and she has perfect hearing.”
It’s only a slight exaggeration to say that the latest crop of advanced hearing aids are better than the ears most of us were born with. The devices can stream phone calls and music directly to your ears from your phone. They can tailor their acoustic systems to your location; when the phone detects that you have entered your favorite sports bar, it adjusts the hearing aids to that environment.
The hearing aids even let you transform your phone into an extra set of ears. If you’re chatting with your co-worker across a long table, set the phone in front of her, and her words will stream directly to your ears.
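Neither manufacturer publishes its app's internals in this form, so purely as an illustration, here is a hypothetical sketch (invented field names and values, not the ReSound or Starkey API) of the kind of per-environment preset described above:

```python
from dataclasses import dataclass

@dataclass
class HearingProfile:
    """Hypothetical per-environment hearing-aid preset.
    Illustrative only -- not any manufacturer's actual API."""
    name: str
    front_mic_gain_db: float   # boost for the forward-facing microphones
    noise_reduction: float     # 0.0 (off) to 1.0 (maximum)
    bass_db: float             # low-frequency adjustment
    treble_db: float           # high-frequency adjustment
    balance: float             # -1.0 favours the left aid, +1.0 the right

# Roughly the nightclub adjustments described earlier: boost the forward
# mics, cut background noise and bass, lift treble, favour the left ear.
restaurant_mode = HearingProfile(
    name="restaurant",
    front_mic_gain_db=6.0,
    noise_reduction=0.8,
    bass_db=-4.0,
    treble_db=3.0,
    balance=-0.6,
)
```

A location-aware app could simply load a preset like this when the phone detects that the wearer has walked into a known noisy venue.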
When I recently tried out the Linx and the Halo, another set of iPhone-connected hearing aids made by the American hearing aid company Starkey, I was floored. Wearing these hearing aids was like giving my ears a software upgrade. For the first time, I had fine-grain control over my acoustic environment, the sort of bionic capability I never realized I had craved. I’m 35 and I have normal hearing. But if I could, I’d wear these hearing aids all the time.
IPhone-connected hearing aids are just the beginning. Today most people who wear hearing aids, eyeglasses, prosthetic limbs and other accessibility devices do so to correct a disability. But new hearing aids point to the bionic future of disability devices.
As they merge with software baked into our mobile computers, devices that were once used simply to fix whatever ailed us will begin to do much more. In time, accessibility devices may even let us surpass natural human abilities. One day all of us, not just those who need to correct some physical deficit, may pick up a bionic accessory or two.
“There is a way in which this technology will give people with hearing loss the ability to outperform their normal-hearing counterparts,” said Dave Fabry, Starkey’s vice president for audiology and professional relations.
Imagine earpieces that let you tune in to a guy who is whispering across the room, or eyeglasses that allow you to scan the price of any item in a supermarket. Google and several international research teams have been working on smart contact lenses. In the beginning, these devices might monitor users’ health — for instance, they could keep an eye on a patient’s blood pressure or glucose levels — but more advanced models could display a digital overlay on your everyday life.
Or consider the future of prosthetic limbs, which are now benefiting from advances in robotics and mobile software. Advanced prosthetic devices can now be controlled through mobile apps. For instance, the i-Limb Ultra Revolution, made by Touch Bionics, allows people to select grip patterns and download new functions for their prosthetic hands using an iPhone. The longer you use it, the smarter your hand becomes.
Hearing aids are the natural place to begin our bionic quest. About 36 million American adults report some degree of hearing loss, according to the National Institute on Deafness and Other Communication Disorders, but only about a fifth of the people who would benefit from a hearing aid use one.
That’s because hearing aids, as a bit of technology, have long seemed stuck in the past. “Most people picture large, clunky bananas that fit behind your ears and show everyone you’re getting old,” said Ken Smith, an audiologist in Castro Valley, Calif., who has fitted more than two dozen patients with the Linx.
Until recently, many hearing aids were also difficult to use. For lots of potential users, especially people with only mild or moderate hearing loss, they didn’t do enough to improve sound in noisy environments.
Talking on the phone with a hearing aid was especially problematic. While some hearing aids offered streaming capabilities to cellphones, they were all clunky. To connect to phones, they required an extra streaming “wand,” a battery pack and wireless transmitter that the user wore around his neck — a device that nobody looked good lugging around.
In 2012, Apple announced the Made for iPhone Hearing Aid program, which would let the company’s mobile operating system connect directly to hearing aids using a low-power version of Bluetooth wireless technology. Representatives of both Starkey and GN ReSound say they saw the iPhone as a way to correct many of the tech problems that had hampered hearing aids. The phone could act as a remote control, a brain and an auxiliary microphone for hearing aids, and it would finally let people make phone calls and listen to music without carrying a wireless dongle.
But more than that, the companies say, the iPhone could do something potentially revolutionary for hearing aids. “A lot of the people who could benefit from wearing a hearing aid now don’t have any excuse — they can’t say it’s too clunky or not cool,” said Morten Hansen, GN ReSound’s vice president for partnerships and connectivity.
Dr. Fabry, of Starkey, was blunter: “We thought we could make hearing aids cool.”
Aesthetically, both companies seemed to have pulled off something close. The GN ReSound and Starkey hearing aids are fantastically tiny and attractive; each is just a fraction of the size of a conventional Bluetooth headset, and when they’re set behind your ears, they’re virtually invisible. They are also quite comfortable. A few minutes after fitting each model into my ears, I had forgotten they were there.
On the other hand, neither is cheap. Starkey’s Halo starts around $2,000 a hearing aid, while GN ReSound’s Linx begins at more than $3,000 each. Few health insurance plans cover the cost of hearing aids; Medicare does not.
Some people who have used them, though, said the new hearing aids were well worth the price. “I fell in love with them in the first 30 seconds,” said Todd Chamberlain, who recently began using a pair of Halos.
Mr. Chamberlain, who is 39 and works as an industrial safety officer in Ephrata, Wash., has worn hearing aids since he was 3 years old. “I’m surprised they haven’t done this earlier — putting it all in an app, that seems so obvious these days,” he said.
Soon, we might be saying the same about all of our senses.
Why Some Inventions Don't Succeed
Timothy Prestero expected big things from his idea. He believed it would save the lives of millions of children worldwide – and he wasn’t alone. It came top of the list in Time magazine’s 50 best inventions of the year. So when it flopped spectacularly, it was tough to accept.
Prestero built a device called the NeoNurture, a baby incubator made from miscellaneous car parts and other nuts and bolts. Unlike expensive hi-tech incubators, the NeoNurture was powered by a motorcycle battery, used headlights for heat and had a door chime for an alarm. This made it ideal for hospitals in rural Africa and other parts of the developing world where repair parts are hard to come by. It won praise and plaudits worldwide. And then... nothing. Why did the NeoNurture never get beyond a prototype?
The answer was revealed at a recent exhibition called Fail Better, at Dublin’s Science Gallery in Ireland. The story of the NeoNurture joined contributions by inventors, athletes, explorers and even astrophysicists, who each submitted an object they thought characterised the theme of failure. It is a compendium of quashed dreams, acts of stupidity, serendipitous success, and crucially, instructive lessons about the true nature of failure.
Browse any patent library, and you’ll find countless gizmos that never made it off the drawing board. Marc Abrahams, who founded the Ig Nobel prize, suggested one of these wacky creations – the “Apparatus for facilitating childbirth by centrifugal force”, invented by George and Charlotte Blonsky in 1965. Abrahams describes how it works: “When a woman is ready to deliver her child, she lies back on a circular table. She is strapped down. The table is then rotated at high speed. The baby comes flying out.” Perhaps unsurprisingly, it didn’t catch on.
Then there are the technologies that failed due to catastrophic human error. Astrophysicist Jocelyn Bell Burnell nominated the Mars Climate Orbiter, which famously was lost in space due to a mix-up over imperial and metric units. One bit of software calculated the force the thrusters needed to exert in pound-force, while another assumed the figures were in newtons. As a result, the orbiter disintegrated in the Mars atmosphere, leaving everybody scratching their heads about how such a big error could be missed.
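The scale of that mismatch is easy to see. A minimal sketch in Python (the 100-unit impulse is an arbitrary illustrative figure, not a value from the mission):

```python
# One pound-force is defined as exactly 4.4482216152605 newtons.
LBF_TO_NEWTON = 4.4482216152605

def to_newton_seconds(impulse_lbf_s: float) -> float:
    """Convert a thruster impulse from pound-force seconds to newton seconds."""
    return impulse_lbf_s * LBF_TO_NEWTON

reported = 100.0                        # written out in lbf*s by one program
misread = reported                      # read as if it were N*s by the other
actual = to_newton_seconds(reported)    # what the number really meant

print(f"Misread impulse: {misread:.1f} N*s")
print(f"Actual impulse:  {actual:.1f} N*s")
print(f"Understated by a factor of {actual / misread:.2f}")
```

Every figure passed across that software interface was off by the same factor of roughly 4.45.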
In a similar vein of incredulity, journalist Oliver Wainwright nominated the “Walkie-Talkie” building in London, which has a concave shape that was found to concentrate the sun’s rays enough to melt the rubber on cars in the street below. What made it all the more curious, says Wainwright, was that the architect had encountered almost exactly the same problem with a hotel he designed in Las Vegas.
It would be easy to confine failure to the bad, the foolish, and the plain wrong. But in fact it comes in many forms, and “we wanted to celebrate the complexity of failure”, says curator Jane Ní Dhulchaointigh, who invented the (rather successful) material Sugru, a self-setting rubber for repairs in the home. In technology, there’s a “myth of the overnight success”, but the reality is quite different.
Take Prestero’s NeoNurture incubator. In 2010, Time magazine called it “genius” in its list of the year’s best inventions. In the wake of all the plaudits, Prestero tried to launch his innovative incubator to the developing world. It was only then that he encountered an all-too human reality.
“Every doctor and hospital administrator in the world who has seen [the TV show] ER knows what a medical device should look like,” explains Prestero, “They don’t want effective technology that looks like it’s made from car parts. It sounds crazy but some hospitals would rather have no equipment than something that looks cheap and crummy.”
The first lesson of failure for engineers and designers, then, is that the adoption of technology is governed by existing cultural norms. “There are no dumb users, only dumb products,” says Prestero.
Reassuringly, though, the history of flops in tech would suggest that even if Prestero faltered, others in his footsteps might not. And this gets to the interesting and vital role that failure plays in shaping the devices that rule our lives. No technology that changes the world comes from nowhere – almost all great inventions are built on a series of failed prototypes and previous iterations made by others that weren’t quite ready to take off. Before the iPod, there was the Listen Up mp3 player; before Facebook, there was Friendster, and before DVDs, there were Laserdiscs. Blame timing, bad luck or the human foibles of their inventors – the point is that these turkeys made it a little bit easier for those that followed.
In fact, this is the story of invention. While we hold up our visionaries and their lightbulb moments, the day-to-day reality of inventing is continual, depressing defeat. British inventor James Dyson, for instance, points out that it took him 5,127 prototypes to develop his first bagless vacuum cleaner. Inventors and scientists must “carry failure” with them all the time, says Ní Dhulchaointigh.
Fortunately, such perseverance occasionally brings unexpected success. The educator Ken Robinson describes in his submission how the synthetic dye mauve was discovered. In 1856, William Perkin was experimenting with coal tar, trying to develop a synthetic version of the medicinal substance quinine. Day after day, he kept failing. Then one night, the light from his lamp shone through the edge of his beaker, scattering into a brilliant purple hue. The new colour went down a storm, and Perkin went on to found the synthetic dye industry. Serendipitous success only came from living with persistent setbacks.
So should failure always be embraced? Not quite, says Ní Dhulchaointigh. There’s a mantra bandied around in Silicon Valley, for example: “Fail fast, fail early, fail often”. “It’s almost a badge of honour in the start-up world,” she says. “In a way it’s problematic. It could result in mediocre work.”
Still, when we talk about technology and the way it shapes our lives, it is worth remembering that every invention that changed the world was built on the work of a thousand failed inventors, and a thousand failed ideas.
$1 Fire Extinguisher
Leave it to a child's science experiment to inspire a lifesaving remedy for a chronic risk facing slum dwellers in the Philippines: fire.
Each year, the Philippines suffers from up to 8,000 house fires, mostly in slums where a high density of wooden and plastic dwellings combine to create a dangerous tinderbox.
To help curb the risk, the city of Las Piñas worked with the design agency DM9 JaymeSyfu to create a cheap, easy-to-use fire extinguisher that fits comfortably in a pocket.
"The population density of Las Piñas is very high, so fire is a serious risk to the 600,000 living there," says Mark Villar, congressman of Las Piñas. "This can save a lot of lives."
In the Philippines, small conventional fire extinguishers cost about $45, out of the reach of most poor families. By contrast, the new device sells for only $1.
The extinguisher, about the size of an iPhone, was inspired by a child's science experiment and earned the design company a Bronze award at the Cannes Lions International Festival of Creativity for the communications industry.
It consists of a plastic pouch of vinegar containing a sealed capsule of baking soda. In the event of a fire, the user simply breaks the capsule inside the pouch, allowing it to mix with the vinegar, which produces carbon dioxide. The user then only needs to tear off a perforated corner to release the flame-smothering mix.
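The underlying chemistry is the classic classroom reaction between sodium bicarbonate (baking soda) and acetic acid (vinegar), which gives off carbon dioxide gas:

```latex
\mathrm{NaHCO_3 + CH_3COOH \;\longrightarrow\; CH_3COONa + H_2O + CO_2\uparrow}
```

The carbon dioxide released inside the pouch is denser than air and does not support combustion, which is what smothers the flames.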
The reaction lasts for a few seconds and is enough to put out the sort of small fire that might be generated by a candle or a cooking stove. It won't stop a roaring fire, but it could prevent small flames from spreading.
Indeed, the fire that ravaged a slum in Manila last April, destroying more than 1,000 homes overnight, was triggered by a single candle.
The creators of the ingenious device say it could be useful in shantytowns everywhere. "In these sorts of areas houses are crowded together, fire trucks and emergency services have a hard time accessing fires," explains Nelson Diamante, Las Piñas city's development planner.
So far, 4,000 pouches have been distributed across two of Las Piñas' major slums, but officials from other Filipino cities have already placed orders.
Gella Valle, from the DM9 JaymeSyfu agency, says this is just the beginning of what they hope will be a nationwide initiative. "Last year, 40 percent of fires occurred in Manila's high-density urban poor areas. With the mass distribution of the Pocket Fire Extinguisher, we aim to lessen this average by at least 15 percent," she says.
The company was surprised to receive a bulk order from Belgium. Of course, fire's a universal problem. The U.S. in 2012 experienced 365,000 house fires, claiming 2,380 lives and costing an estimated $5.7 billion in property damage.
Collaboration
Creativity is a collaborative process. As brilliant as the many inventors of the Internet and computer were, they achieved most of their advances through teamwork. Like Robert Noyce, a cofounder of Intel, some of the best tended to resemble Congregational ministers rather than lonely prophets, madrigal singers rather than soloists.
Twitter, for example, was invented by a team of people who were collaborative but also quite contentious. When one of the cofounders, Jack Dorsey, started taking a lot of the credit in media interviews, another cofounder, Evan Williams, a serial entrepreneur who had previously created Blogger, told him to chill out, according to Nick Bilton of the New York Times. “But I invented Twitter,” Dorsey said.
“No, you didn't invent Twitter,” Williams replied. “I didn’t invent Twitter either. Neither did Biz [Stone, another cofounder]. People don't invent things on the Internet. They simply expand on an idea that already exists.”
Therein lies another lesson: the digital age may seem revolutionary, but it was based on expanding the ideas handed down from previous generations. In 1937, Howard Aiken decided to build a digital computer at Harvard. In the attic of the university’s science center, he found a fragment and some wheels from a device that had been built a century earlier, Charles Babbage’s Difference Engine. He also discovered the notes that Ada Lovelace, a mathematician who was Babbage’s friend and programmer, had written about that machine. Aiken made his team, including his lead programmer Grace Hopper, study what Babbage and Lovelace had produced and include it in the manual for the Harvard computer.
Even though the Internet provided a tool for virtual and distant collaborations, another lesson of digital-age innovation is that, now as in the past, physical proximity is beneficial. The most productive teams were those that brought together people with a wide array of specialties. Bell Labs was a classic example. In its long corridors in suburban New Jersey, there were theoretical physicists, experimentalists, material scientists, engineers, a few businessmen, and even some telephone-pole climbers with grease under their fingernails. Walter Brattain, an experimentalist, and John Bardeen, a theorist, shared a workspace, like a librettist and a composer sharing a piano bench, so they could perform a call-and-response all day about how to manipulate germanium to make what became the first transistor.
There is something special, as evidenced at Bell Labs, about meetings in the flesh, which cannot be replicated digitally. The founders of Intel created a sprawling, team-oriented open workspace where employees all rubbed against one another. It was a model that became common in Silicon Valley. Predictions that digital tools would allow workers to telecommute were never fully realized. One of Marissa Mayer’s first acts as CEO of Yahoo! was to discourage the practice of working from home, rightly pointing out that “people are more collaborative and innovative when they’re together.” When Steve Jobs designed a new headquarters for Pixar, he obsessed over ways to structure the atrium, and even where to locate the bathrooms, so that serendipitous personal encounters would occur. Among his last creations was the plan for Apple’s new signature headquarters, a circle with rings of open workspaces surrounding a central courtyard.
Another key to fielding a great team is pairing visionaries, who can generate ideas, with operating managers, who can execute them. Visions without execution are hallucinations. One of the great visionaries of the digital age was William von Meister, a flamboyant entrepreneur who launched a dozen companies and watched all but one flame out. The one that succeeded became AOL. It survived because von Meister’s investors insisted he bring in two people to execute on his vision: a former special forces commando named Jim Kimsey and a young marketing whiz, Steve Case.
There were three ways that teams were put together in the digital age. The first was through government funding and coordination. That’s how the groups that built the original computers (Colossus, ENIAC) and networks (ARPANET) were organized. This reflected the consensus, which was stronger back in the 1950s under President Eisenhower, that the government should undertake projects, such as the space program and interstate highway system, that benefitted the common good. It often did so in collaboration with universities and private contractors as part of a government-academic-industrial triangle that Vannevar Bush and others fostered. Talented federal bureaucrats (not always an oxymoron), such as Licklider, Taylor, and Roberts, oversaw the programs and allocated public funds.
Private enterprise was another way that collaborative teams were formed. This happened at the research centers of big companies, such as Bell Labs and Xerox PARC, and at entrepreneurial new companies, such as Texas Instruments and Intel, Atari and Google, Microsoft and Apple. A key driver was profits, both as a reward for the players and as a way to attract investors. That required a proprietary attitude to innovation that led to patents and intellectual property protections. Digital theorists and hackers often disparaged this approach, but a private enterprise system that financially rewarded invention was a component of a system that led to breathtaking innovation in transistors, chips, computers, phones, devices, and Web services.
Throughout history, there has been a third way, in addition to government and private enterprises, that collaborative creativity has been organized: through peers freely sharing ideas and making contributions as part of a voluntary common endeavor. Many of the advances that created the Internet and its services occurred in this fashion, which the Harvard scholar Yochai Benkler has labeled “commons-based peer production.” The Internet allowed this form of collaboration to be practiced on a much larger scale than before. The building of Wikipedia and the Web were good examples, along with the creation of free and open-source software such as Linux and GNU, OpenOffice and Firefox. This commons-based production by peer networks was driven not by financial incentives but by other forms of reward and satisfaction.
The values of commons-based sharing and of private enterprise often conflict, most notably over the extent to which innovations should be patent-protected. The commons crowd had its roots in the hacker ethic that emanated from the MIT Tech Model Railroad Club and the Homebrew Computer Club. Steve Wozniak was an exemplar. He went to Homebrew meetings to show off the computer circuit he built, and he handed out freely the schematics so that others could use and improve it. But his neighborhood pal Steve Jobs, who began accompanying him to the meetings, convinced him that they should quit sharing the invention and instead build and sell it. Thus Apple was born, and for the subsequent forty years it has been at the forefront of aggressively patenting and profiting from its innovations. The instincts of both Steves were useful in creating the digital age. Innovation is most vibrant in the realms where open-source systems compete with proprietary ones. Both models are good at fostering collaboration.
Vision Chip Replaces Glasses
Reading glasses could become obsolete thanks to a tiny optical implant that sharpens vision, scientists say.
The implant, a tiny ring placed beneath the eye’s surface in a procedure lasting only 15 minutes, allowed more than 80 per cent of those treated to read a newspaper without glasses, a trial found.
The device, called Kamra, is already available privately in Britain, costing about £5,000 for both eyes, but until now it has been seen as an experimental treatment.
The latest results provide more convincing evidence that it works in most people whose near vision has declined with age.
John Vukich, an ophthalmologist at the University of Wisconsin, in Madison, who led the study, said that the technology could remove the need for people to constantly change glasses as they switch between activities such as reading and driving.
“This is a solution that truly delivers near vision that transitions smoothly to far distance vision,” he said.
About 23 million people in Britain suffer from presbyopia, or age-related long-sightedness. It is caused by a hardening of the eye’s lens, which reduces the eye’s ability to thicken the lens to focus on close-up images.
The implant is a thin, flexible ring that measures 3.8mm across, with a 1.6mm hole in the middle. In the procedure a laser is used to make an incision in the cornea — the transparent front of the eye — and the inlay is inserted so that it sits around the pupil.
The device works like a pinhole camera, reducing the amount of light entering through the edges of the pupil. By cutting out the peripheral beams, which are the most difficult for the lens to focus onto the retina, a sharp image can be restored.
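A rough way to see why a smaller opening sharpens near vision, using the standard thin-lens approximation (the 17mm focal length and the 2.5-dioptre reading defocus below are illustrative assumptions, not figures from the trial): the blur-spot diameter b on the retina scales with the aperture diameter A, the eye's focal length f and the uncorrected defocus ΔD in dioptres,

```latex
b \;\approx\; A \, f \, \Delta D,
\qquad
\underbrace{4\,\mathrm{mm} \times 0.017\,\mathrm{m} \times 2.5\,\mathrm{D} \approx 0.17\,\mathrm{mm}}_{\text{typical indoor pupil}}
\quad\text{vs.}\quad
\underbrace{1.6\,\mathrm{mm} \times 0.017\,\mathrm{m} \times 2.5\,\mathrm{D} \approx 0.07\,\mathrm{mm}}_{\text{Kamra opening}}
```

Shrinking the effective aperture shrinks the blur for close objects in proportion, at the cost of admitting less light, the same trade-off a photographer makes by stopping down a lens.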
The procedure takes less than 15 minutes and can be performed in an eye surgeon’s office. Sutures are not required and topical anaesthesia, in the form of eye drops, is used.
The findings, which were presented at the 118th annual meeting of the American Academy of Ophthalmology at the weekend, were based on a trial of 507 patients between the ages of 45 and 60 years. The researchers implanted the ring in the patients and followed them over the course of three years. In 83 per cent of cases, the patients could see with 20/40 vision or better — well enough to read a newspaper. On average, patients gained three lines on a reading chart.
The End of Failure
In ancient times, purple chairs were virtually priceless. Back then, all cloth dyes were made from natural products, like flower petals or crushed rocks; they either bled or faded and needed constant repair. One particular purple dye, which was culled from the glandular mucus of shellfish, was among the rarest and most prized colors. It was generally reserved for royalty. Nobody had surplus purple chairs piled up for $20 a pop.
But that all changed in 1856, with a discovery by an 18-year-old English chemist named William Henry Perkin. Tinkering in his home laboratory, Perkin was trying to synthesize an artificial form of quinine, an antimalarial agent. Although he botched his experiments, he happened to notice that one substance maintained a bright and unexpected purple color that didn’t run or fade. Perkin, it turned out, had discovered a way of making arguably the world’s most coveted color from incredibly cheap coal tar. He patented his invention — the first synthetic dye — created a company and sold shares to raise capital for a factory. Eventually his dye, and generations of dye that followed, so thoroughly democratized the color purple that it became the emblematic color of cheesy English rock bands, Prince albums and office chairs for those willing to dare a hue slightly more bold than black.
Perkin’s fortuitous failure, it’s safe to say, would have never occurred even a hundred years earlier. In pre-modern times, when starvation was common and there was little social insurance outside your clan, every individual bore the risk of any new idea. As a result, risks simply weren’t worth taking. If a clever idea for a crop rotation failed or an enhanced plow was ineffective, a farmer’s family might not get enough to eat. Children might die. Even if the innovation worked, any peasant who found himself with an abundance of crops would most likely soon find a representative of the local lord coming along to claim it. A similar process, one in which success was stolen and failure could be lethal, also ensured that carpenters, cobblers, bakers and the other skilled artisans would only innovate slowly, if at all. So most people adjusted accordingly by living near arable land, having as many children as possible (a good insurance policy) and playing it safe.
Our relationship with innovation finally began to change, however, during the Industrial Revolution. While individual inventors like James Watt and Eli Whitney tend to receive most of the credit, perhaps the most significant changes were not technological but rather legal and financial. The rise of stocks and bonds, patents and agricultural futures allowed a large number of people to broadly share the risks of possible failure and the rewards of potential success. If it weren’t for these tools, a tinkerer like Perkin would never have been messing around with an attempt at artificial quinine in the first place. And he wouldn’t have had any way to capitalize on his idea. Anyway, he probably would have been too consumed by tilling land and raising children.
Perkin’s invention may have brought cheap purple (and, later, green and red) dyes to the masses, but it helped upend whatever was left of the existing global supply chain, with its small cottage-size dye houses and its artisanal craftspeople who were working with lichen and bugs. For millenniums, the economy had been built around subsistence farming, small-batch artisanal work and highly localized markets. Inventions like Perkin’s — and the steam engine, the spinning jenny, the telegraph, the Bessemer steel-production process — destroyed the last vestiges of this way of life.
The original age of innovation may have ushered in an era of unforeseen productivity, but it was, for millions of people, absolutely terrifying. Over a generation or two, however, our society responded by developing a new set of institutions to lessen the pain of this new volatility, including unions, Social Security and the single greatest risk-mitigating institution ever: the corporation. During the late 19th century, a series of experiments in organizational structure culminated, in the 1920s, with the birth of General Motors, the first modern corporation. Its basic characteristics soon became ubiquitous. Ownership, which was once a job passed from father to son, was now divided among countless shareholders. Management, too, was divided, among a large group of professionals who directed units, or “subdivisions,” within it. The corporation, in essence, acted as a giant risk-sharing machine, amassing millions of investors’ capital and spreading it among a large number of projects, then sharing the returns broadly too. The corporation managed the risk so well, in fact, that it created an innovation known as the steady job. For the first time in history, the risks of innovation were not borne by the poorest. This resulted in what economists call the Great Compression, when the gap between the income of the rich and poor rapidly fell to its lowest margin.
The secret of the corporation’s success, however, was that it generally did not focus on truly transformative innovations. Most firms found that the surest way to grow was to perfect the manufacturing of the same products, year after year. G.M., U.S. Steel, Procter & Gamble, Kellogg’s, Coca-Cola and other iconic companies achieved their breakthrough insights in the pre-corporate era and spent the next several decades refining them, perhaps introducing a new product every decade or so. During the period between 1870 and 1920, cars, planes, electricity, telephones and radios were introduced. But over the next 50 years, as cars and planes got bigger and electricity and phones became more ubiquitous, the core technologies stayed fundamentally the same. (Though some notable exceptions include the television, nuclear power and disposable diapers.)
Celebrated corporate-research departments at Bell Labs, DuPont and Xerox may have employed scores of white-coated scientists, but their impact was blunted by the thick shell of bureaucracy around them. Bell Labs conceived some radical inventions, like the transistor, the laser and many of the programming languages in use today, but its parent company, AT&T, ignored many of them to focus on its basic telephone monopoly. Xerox scientists came up with the mouse, the visual operating system, laser printers and Ethernet, but they couldn’t interest their bosses back East, who were focused on protecting the copier business.
Corporate leaders weren’t stupid. They were simply making so much money that they didn’t see any reason to risk it all on lots of new ideas. This conservatism extended through the ranks. Economic stability allowed millions more people to forgo many of the risk-mitigation strategies that had been in place for millenniums. Family size plummeted. Many people moved away from arable land (Arizona!). Many young people, most notably young women, saw new forms of economic freedom when they were no longer tied to the routine of frequent childbirth. Failure was no longer the expectation; most people could predict, with reasonable assurance, what their lives and careers would look like decades into the future. Our institutions — unions, schools, corporate career tracks, pensions and retirement accounts — were all predicated on a stable and rosy future.
We now know, of course, that this golden moment was really a benevolent blip. In reality, the failure loop was closing far faster than we ever could have realized. The American corporate era quietly began to unravel in the 1960s. David Hounshell, a scholar of the history of American innovation, told me about a key moment in 1968, when DuPont introduced Qiana, a kind of nylon with a silklike feel, whose name was selected through a computer-generated list of meaningless five-letter words. DuPont had helped to create the modern method of product development, in which managers would identify a market need and simply inform the research department that it had to produce a solution by a specific date. Over the course of decades, this process was responsible for successful materials like Freon, Lucite, Orlon, Dacron and Mylar. In Qiana, DuPont hoped that it had the next Lycra.
But not long after the company introduced Qiana to the market, it was met by a flood of cheap Japanese products made from polyester. Qiana, which only came close to breaking even during one year of sales, eventually sustained operating losses of more than $200 million. Similar shudders were felt in corporate suites across America, as new global competitors — first from Europe, then from Asia — shook up the stable order of the automotive and steel industries. Global trade narrowed the failure loop from generations to a decade or less, far shorter than most people’s careers.
For American workers, the greatest challenge would come from computers. By the 1970s, the impact of computers was greatest in lower-skilled, lower-paid jobs. Factory workers competed with computer-run machines; secretaries and bookkeepers saw their jobs eliminated by desktop software. Over the last two decades, the destabilizing forces of computers and the Internet have spread to even the highest-paid professions. Corporations “were created to coordinate and organize communication among lots of different people,” says Chris Dixon, a partner at the venture-capital firm Andreessen Horowitz. “A lot of those organizations are being replaced by computer networks.” Dixon says that start-ups like Uber and Kickstarter are harbingers of a much larger shift, in which loose groupings of individuals will perform functions that were once the domain of larger corporations. “If you had to know one thing that will explain the next 20 years, that’s the key idea: We are moving toward a period of decentralization,” Dixon says.
Were we simply enduring a one-time shift into an age of computers, the adjustment might just require us to retrain and move onward. Instead, in a time of constant change, it’s hard for us to predict the skills that we will need in the future. Whereas the corporate era created a virtuous cycle of growing companies, better-paid workers and richer consumers, we’re now suffering through a cycle of destabilization, whereby each new technology makes it ever easier and faster to create the next one, which, of course, leads to more and more failure. It’s enough to make us feel like mollusk-gland hunters.
Much as William Henry Perkin’s generation ripped apart an old way of life, the innovation era is sundering the stability of the corporate age. Industries that once seemed resistant to change are only now entering the early stages of major disruption. A large percentage of the health-care industry, for example, includes the rote work of recording, storing and accessing medical records. But many companies are currently devising ways to digitize our medical documents more efficiently. Many economists believe that peer-to-peer lending, Bitcoin and other financial innovations will soon strike at the core of banking by making it easier to receive loans or seed money outside a traditional institution. Education faces the threat posed by computer-based learning from Khan Academy, Coursera and other upstart companies. Government is changing, too. India recently introduced a site that allows anybody to see which government workers are showing up for their jobs on time (or at all) and which are shirking. Similarly, Houston recently developed a complex database that helps managers put an end to runaway overtime costs. These changes are still new, in part because so many large businesses benefit from the old system and use their capital to impede innovation. But the changes will inevitably become greater, and the results will be drastic. Those four industries — health care, finance, education and government — represent well more than half of the U.S. economy. The lives of tens of millions of people will change.
Some professions, however, are already demonstrating ways to embrace failure. For example, there’s an uncharacteristic explosion of creativity among accountants. Yes, accountants: Groups like the Thriveal C.P.A. Network and the VeraSage Institute are leading that profession from its roots in near-total risk aversion to something approaching the opposite. Computing may have commoditized much of the industry’s everyday work, but some enterprising accountants are learning how to use some of their biggest assets — the trust of their clients and access to financial data — to provide deep insights into a company’s business. They’re identifying which activities are most profitable, which ones are wasteful and when the former become the latter. Accounting once was entirely backward-looking and, because no one would pay for an audit for fun, dependent on government regulation. It was a cost. Now real-time networked software can make it forward-looking and a source of profit. It’s worth remembering, though, that this process never ends: As soon as accountants discover a new sort of service to provide their customers, some software innovator will be seeking ways to automate it, which means those accountants will work to constantly come up with even newer ideas. The failure loop will continue to close.
Lawyers, too, are trying to transform computers from a threat into a value-adding tool. For centuries the legal profession has made a great deal of money from drawing up contracts or patent applications that inevitably sit in drawers, unexamined. Software can insert boilerplate language more cheaply now. But some computer-minded lawyers have found real value in those cabinets filled with old contracts and patent filings. They use data-sniffing programs and their own legal expertise to cull through millions of patent applications or contracts to build never-before-seen complex models of the business landscape and sell them to their clients.
The manufacturing industry is going through the early stages of its own change. Until quite recently, it cost tens of millions of dollars to build a manufacturing plant. Today, 3-D printing and cloud manufacturing, a process in which entrepreneurs pay relatively little to access other companies’ machines during downtime, have drastically lowered the barrier to entry for new companies. Many imagine this will revitalize the business of making things in America. Successful factories, like accounting firms, need to focus on special new products that no one in Asia has yet figured out how to mass produce. Something similar is happening in agriculture, where commodity grains are tended by computer-run tractors as farming entrepreneurs seek more value in heritage, organic, local and other specialty crops. This has been manifested in the stunning proliferation of apple varieties in our stores over the past couple of years.
Every other major shift in economic order has made an enormous impact on the nature of personal and family life, and this one probably will, too. Rather than undertake one career for our entire working lives, with minimal failure allowed, many of us will be forced to experiment with several careers, frequently changing course as the market demands — and not always succeeding in our new efforts. In the corporate era, most people borrowed their reputations from the large institutions they affiliated themselves with: their employers, perhaps, or their universities. Our own personal reputations will now matter more, and they will be far more self-made. As career trajectories and earnings become increasingly volatile, gender roles will fragment further, and many families will spend some time in which the mother is a primary breadwinner and the father is underemployed and at home with the children. It will be harder to explain what you do for a living to acquaintances. The advice of mentors, whose wisdom is ascribed to a passing age, will mean less and less.
To succeed in the innovation era, says Daron Acemoglu, a prominent M.I.T. economist, we will need, above all, to build a new set of institutions, something like the societal equivalent of those office parks in Sunnyvale, that help us stay flexible in the midst of turbulent lives. We’ll need modern insurance and financial products that encourage us to pursue entrepreneurial ideas or the education needed for a career change. And we’ll need incentives that encourage us to take these risks; we won’t take them if we fear paying the full cost of failure. Acemoglu says we will need a far stronger safety net, because a society that encourages risk will intrinsically be wealthier over all.
History is filled with examples of societal innovation, like the United States Constitution and the eight-hour workday, that have made many people better off. These beneficial changes tend to come, Acemoglu told me, when large swaths of the population rally together to demand them. He says it’s too early to fully understand exactly what sorts of governing innovations we need today, because the new economic system is still emerging and questions about it remain: How many people will be displaced by robots and mobile apps? How many new jobs will be created? We can’t build the right social institutions until we know the precise problem we’re solving. “I don’t think we are quite there yet,” he told me.
Generally, those with power and wealth resist any significant shift in the existing institutions. Robber barons fought many of the changes of the Progressive Era, and Wall Street fought the reforms of the 1930s. Today, the political system seems incapable of wholesale reinvention. But Acemoglu said that could change in an instant if enough people demand it. In 1900, after all, it was impossible to predict the rise of the modern corporation, labor unions, Social Security and other transformative institutions that shifted gains from the wealthy to workers.
We are a strange species, at once risk-averse and thrill-seeking, terrified of failure but eager for new adventure. If we discover ways to share those risks and those rewards, then we could conceivably arrive somewhere better. The pre-modern era was all risk and no reward. The corporate era had modest rewards and minimal risks. If we conquer our fear of failure, we can, just maybe, have both.
Dot-Com Bust’s Worst Flops Were Actually Fantastic Ideas
If you had to pick one really annoying sock puppet to represent the imploded excesses of the dot-com boom, it would be the microphone-wielding mascot of online pet food retailer Pets.com.
For a few months back in the late 1990s, he was everywhere — the Super Bowl, Live with Regis and Kathie Lee — and then he was gone, sucked into a black hole of dot-com debt.
But the bust was so big and so widespread that there are many deliciously ideal symbols for this dark time in the history of the internet, a period when irrational exuberance trumped sound business decisions. Fifteen years on, people — particularly people in Silicon Valley — still talk about these epic failures. In addition to Pets.com, there were WebVan, Kozmo.com and Flooz.
The irony is that nowadays, they’re all very good ideas.
Now that the internet has become a much bigger part of our lives, now that we have mobile phones that make using the net so much easier, now that the Googles and the Amazons have built the digital infrastructure needed to support online services on a massive scale, now that a new breed of coding tools has made it easier for people to turn their business plans into reality, now that Amazon and others have streamlined the shipping infrastructure needed to inexpensively get stuff to your door, now that we’ve shed at least some of that irrational exuberance, the world is ready to cash in on the worst ideas of the ’90s.
WebVan burned through $800 million trying to deliver fresh groceries to your door, and today, we have Amazon Fresh and Instacart, which are doing exactly the same thing—and doing it well. People laughed when Kozmo flamed out in 2001, but today, Amazon and Google are duking it out to provide same-day shopping delivery. A year ago, Kozmo.com even told WIRED it was making a comeback “in the near future.”
We’re still waiting for Kozmo 2.0. But there’s also good reason to applaud the folks behind Flooz.com. They wanted to create their own internet-based currency, and though Flooz was a flop, bitcoin has now shown that digital currency can play a huge role in the modern world.
Even the Pets.com idea is looking mighty good. The basic notion that people wanted to buy pet food online and have it delivered to their homes turns out to be a sound one. Market research firm IBISWorld pegs it as a $3 billion market, and a new generation of companies—Chewy.com, Petflow.com and Wag.com, to name a few—are all making a go of it.
Still not convinced? It’s not just the failed dot-coms that now look good. Take VA Linux, which spiraled to its death after a 1999 IPO provided the biggest first-day boost in NASDAQ history. As it turns out, VA had the right idea. Cheap hardware running the open source Linux operating system eventually changed the computer world. That’s what Google and Amazon and Facebook run on today.
It’s just that the beneficiaries of these changes weren’t American startups like VA. They were no-name hardware manufacturers in Asia.
The lesson here is that innovation is built on the shoulders of failure, and sometimes the line between the world’s biggest success and the world’s biggest flop is a matter of timing or logistics or tools or infrastructure or luck. And here’s the lesson that today’s high-flying startups should take to heart: scope of ambition.
Maybe if Pets.com had kept its head down and worked harder on getting the dog food to our doors than on assaulting U.S. airwaves with its ads, it would have made it.
Internet Drones and Balloons
Facebook has completed the first test flights of its solar-powered internet drones as it looks to bring the web to billions of people who cannot connect.
The drones have been successfully tested in British airspace, Mark Zuckerberg, the boss of the social network, said yesterday in a post on his profile page.
The British-built drones use lasers to beam internet signals to the ground from an altitude of 60,000 to 90,000 feet, where they are able to circle for months. They have a wingspan wider than a Boeing 737 passenger jet but weigh less than a car.
The drones are part of Facebook’s internet.org project, which intends to connect the entire world to the internet. The company is also experimenting with satellite-based internet technology to achieve its aims.
Mr Zuckerberg said that the drones would help to provide internet connections to remote areas across the world. “As part of our Internet.org effort to connect the world, we’ve designed unmanned aircraft that can beam internet access down to people from the sky,” he said. “They can affordably serve the 10 per cent of the world’s population that live in remote communities without existing internet infrastructure.” Facebook has said that the drones will be “relatively cheap” to build.
The drone has been codenamed Aquila, after the mythological eagle that carried Jupiter’s thunderbolts. It has been developed by Ascenta, a Somerset-based designer of solar-powered drones, which Facebook bought in March last year.
The internet-beaming technologies are being worked on in Facebook’s connectivity laboratory, into which the social network has hired experts from Nasa and the aerospace industry.
Jay Parikh, vice-president of engineering at Facebook, told The Wall Street Journal this week that the solar and battery technology needed to power the drones had only recently been invented.
Around two thirds of the global population cannot yet access the internet. Technology companies are trying to build networks that do not depend on ground-based infrastructure that developing countries cannot afford.
Google is experimenting with high altitude weather balloons in a programme named Project Loon. The search engine provider wants to create moving streams of hundreds of balloons that link together and beam internet signals to the ground.
Impact of Innovation
Fifty years ago yesterday, a young computer expert called Gordon Moore pointed out that the number of transistors on a silicon chip seemed to be doubling every year or two and that if this went on it would “lead to such wonders as home computers . . . and personal portable communications equipment”.
Today, for the cost of an hour of work on the average wage, you can buy about a trillion times as much computing power as you could when Moore wrote his article. The result has had a huge impact on our standard of living; indeed, it is one of the biggest factors behind world economic growth in the past half century.
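That trillion-fold figure is easier to believe once you see how quickly a doubling rule compounds. Here is a minimal back-of-envelope sketch in Python; the 15-month doubling period is an assumption chosen for illustration, not a figure from Moore’s article or from this column.

    # Compounding a Moore's-law-style doubling rule, purely illustrative.
    # Assumption: computing power per unit cost doubles every 15 months.
    months_per_doubling = 15
    years = 50
    doublings = years * 12 / months_per_doubling          # 40 doublings
    improvement = 2 ** doublings
    print(f"{doublings:.0f} doublings over {years} years -> ~{improvement:.2e}x")
    # 40 doublings over 50 years -> ~1.10e+12x, roughly a trillion-fold

A slightly shorter doubling period, or a slightly longer span, pushes the multiple even higher, which is why small differences in the assumed doubling time matter so much.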
Back in the 1950s the American economist Robert Solow calculated that 87 per cent of economic growth came not from applying more capital or more labour, but from innovation making people more productive. It’s probably even higher today. New materials, new machines and new ideas to cut costs enable people to spend less time fulfilling more of their needs: that’s what growth means.
Technological change is the chief reason that economic growth for the world as a whole shows no sign of reaching a plateau but keeps marching up at 3-5 per cent a year. Innovation is the main reason the percentage of the world population living in absolute poverty has more than halved in 35 years. And hostility to innovation is one of the reasons for Europe’s current stagnation.
Yet innovation has featured in this general election barely at all. It seems to be of little interest to the party leaders or their audiences. This is most peculiar, when you think about it, because it is what will make the British people better off in 2020 than they are today: really better off, rather than having simply run up more debt, that is. If innovation grinds to a halt then so will growth and deficit reduction and the rise of the NHS budget and all the other things the leaders talk about.
On innovation policy the Conservatives (and David Willetts in particular) have reason to be proud of their record. Despite tough budget constraints, their science spending, and their encouragement for translating ideas into business ventures, have been impressive: Innovate UK; the Longitude prize; the talk of “eight great technologies”; the “patent box”; tech clusters and the surge in business start-ups. More telling still is that Gordon Brown, for all his faults, got the importance of innovation, and so did Tony Blair, whereas Ed Miliband’s silence on science, technology and innovation is striking. Why is he not saying: vote for me and I will forge a white-hot technological revolution that will bring down energy prices far more effectively than any price regulation?
For those on the right, innovation holds by far the best chance to keep pushing down the cost and pushing up the quality of public services, so lifting the burden of taxes and liberating people from dependence on government. Imagine if bureaucrats could be replaced by robots that worked 24 hours a day, did not need pensions and did not vote Labour. . .
Such a public-sector automation and productivity revolution might seem to be a pipe-dream, but it is beginning to happen already in the government’s digital initiatives, still in their early stages. One of the most startling discoveries of the past five years is that you can reduce the head-count in local government, or the central administration of education and social security, and see the quality of service, and public satisfaction, go up, not down. That’s because of technology.
For those on the left, innovation is a great demolisher of inequality. A century ago, you had to be very rich to own a car or your own home, to have more than three pairs of shoes, to have a spare bedroom, to buy on credit, to have indoor plumbing, to eat chicken regularly, to have a library of books, to be able to watch great acting or great music regularly, to travel abroad. Today all those things are routine for people on modest incomes thanks to the invention of container shipping, fertiliser, better financial services, cheap materials, machine tools, automation, the internet, television, budget airlines and so on.
It’s true that the very rich can now afford a few more things that are beyond the reach of those on modest incomes, but they are mostly luxuries: private planes, grouse moors, tables in the very best restaurants. We would like those on low incomes to have access to better medicines, better schooling, cheaper homes and lower energy bills, and in each case the technology exists to provide these: it’s mainly government policies that get in the way.
Technology is the great equaliser: today some of the poorest African peasants have mobile phones that work as well as Warren Buffett’s — at least for voice calls. In the 1940s, Joseph Schumpeter said that the point of commerce consists “not in providing more silk stockings for queens, but in bringing them within reach of factory girls”.
It was not planning, trade unions, public spending, welfare or tax that made the poor much richer. It was innovation.
Patent Trolls
Patent trolls have long been the scourge of American industry, sucking the life out of invention at big corporations, start-ups and even individuals tinkering at home in their garages.
By buying patents they have no intention of using themselves, often in technical areas such as cloud computing or telecoms, the trolls, or “patent-assertion entities”, can make vast sums of money by filing lawsuits against companies they claim are using “their” inventions. Not wanting to enter costly legal battles, defendants almost always settle.
Put simply, it’s a business model based on legal extortion.
Samsung, the South Korean electronics group, had to defend itself from so many patent lawsuits in the small town of Marshall, Texas — a popular venue for plaintiffs thanks to its historically troll-friendly juries — that it built an ice rink outside the courthouse, in the hope, presumably, of currying favour with its residents.
Patent trolls are a particular problem for the tech industry, for obvious reasons. The total number of patents registered annually by Silicon Valley inventors nearly doubled to 18,000 between 2003 and 2013, according to the Silicon Valley Leadership Group, an advocacy organisation.
Kevin Kramer, Yahoo’s deputy general counsel, told a hearing in Congress this month that his company had spent about $100 million fighting bogus patent lawsuits since 2007 — money that it could have used on R&D and jobs. “We were recently accused of infringing patent claims requiring digital camera apparatus, which we do not sell,” he said.
United for Patent Reform, representing sectors as diverse as property and clothing, complains that its member companies find it costly and time-consuming to defend themselves against trolls because the law does not require a patent-holder to explain how a patent has been infringed or even to identify the product involved. This places enormous discovery costs on the defendants.
Although Congress has been slow to address these issues, there are signs of improvement. A new Patent Trial and Appeal Board, created under the 2011 America Invents Act, has been credited with slowing the flow of new patent suits filed in district courts. The board acts as a kind of mediator, but it can be effective only if district court judges are willing to put litigation on hold while the board determines whether the patents at issue are valid. There is hope, too, that a proposed new Innovation Act will go further by toughening requirements for filing patent challenges in court and raising the possibility that plaintiffs may have to pay the legal bills of the defendants if they lose.
Douglas Luftman, the chief intellectual property lawyer of NetApp, a manufacturer of data storage devices, believes that forcing the loser to pay legal fees, known as fee-shifting, is essential in defeating patent trolls. Last year his company was awarded full costs after defeating a patent action brought by Summit Data — “a real victory”, Mr Luftman said. The icing on the cake came when the judge blasted Summit for bringing “reckless and wasteful litigation”.
So everybody is thinking along the same lines? Not really. Many of America’s universities, among the greatest innovators of all, are unhappy about the proposed reforms, particularly the fee-shifting proposals. “It will keep universities from going to court to enforce their patents,” Robert Brown, president of Boston University, said, arguing that the colleges cannot afford to take on the risk of paying costs should they lose. Even if universities were excluded from the new act, he added, that would not help other small fry, including individual inventors, small companies and start-ups. A better solution, Dr Brown believes, would be to frame legislation that more clearly defines what a patent troll is and to explicitly target those who are clearly abusing the law.
Mark Griffin, general counsel of Overstock.com, an online retailer, noted that while large Silicon Valley companies may feel the need to wage patent wars against each other to protect intellectual property rights, what the patent trolls were doing was quite different. Overstock.com is leading the charge of companies resisting the trolls with a policy that it describes as “spend and defend”. Since 2004 it has spent about $11 million defending 32 patent infringement cases, instead of settling with trolls. In the past three years, a dozen trolls have dismissed their cases, walking away empty-handed.
It doesn’t always work — Overstock lost a case in January — but “the trolls have got the message that we are an unappetising target,” Mr Griffin said. “You can’t make a mob go away by continuing to pay them. You have to stand up and fight.”
Rise of the Robots
From the self-checkout aisle of the grocery store to the sports section of the newspaper, robots and computer software are increasingly taking the place of humans in the workforce. Silicon Valley executive Martin Ford says that robots, once thought of as a threat to only manufacturing jobs, are poised to replace humans as teachers, journalists, lawyers and others in the service sector.
"There's already a hardware store [in California] that has a customer service robot that, for example, is capable of leading customers to the proper place on the shelves in order to find an item," Ford tells Fresh Air's Dave Davies.
In his new book, Rise of the Robots, Ford considers the social and economic disruption that is likely to result when educated workers can no longer find employment.
On robots in manufacturing
Any jobs that are truly repetitive or rote — doing the same thing again and again — in advanced economies like the United States or Germany, those jobs are long gone. They've already been replaced by robots years and years ago.
So what we've seen in manufacturing is that the jobs that are actually left for people to do tend to be the ones that require more flexibility or require visual perception and dexterity. Very often these jobs kind of fill in the gaps between machines. For example, feeding parts into the next part of the production process or very often they're at the end of the process — perhaps loading and unloading trucks and moving raw materials and finished products around, those types of things.
But what we're seeing now in robotics is that finally the machines are coming for those jobs as well, and this is being driven by advances in areas like visual perception. You now have got robots that can see in three dimensions and that's getting much better and also becoming much less expensive. So you're beginning to see machines that are starting to have the kind of perception and dexterity that begins to approach what human beings can do. A lot more jobs are becoming susceptible to this and that's something that's going to continue to accelerate, and more and more of those jobs are going to disappear and factories are just going to relentlessly approach full-automation where there really aren't going to be many people at all.
There's a company here in Silicon Valley called Industrial Perception which is focused specifically on loading and unloading boxes and moving boxes around. This is a job that up until recently would've been beyond the robots because it relies on visual perception often in varied environments where the lighting may not be perfect and so forth, and where the boxes may be stacked haphazardly instead of precisely and it has been very, very difficult for a robot to take that on. But they've actually built a robot that's very sophisticated and may eventually be able to move boxes about one per second and that would compare with about one per every six seconds for a particularly efficient person. So it's dramatically faster and, of course, a robot that moves boxes is never going to get tired. It's never going to get injured. It's never going to file a workers' compensation claim.
On a robot that's being built for use in the fast food industry
Essentially, it's a machine that produces very, very high quality hamburgers. It can produce about 350 to 400 per hour; they come out fully configured on a conveyor belt ready to serve to the customer. ... It's all fresh vegetables and freshly ground meat and so forth; it's not frozen patties like you might find at a fast food joint. These are actually much higher quality hamburgers than you'd find at a typical fast food restaurant. ... They're building a machine that's actually quite compact that could potentially be used not just in fast food restaurants but in convenience stores and also maybe in vending machines.
On automated farming
In Japan they've got a robot that they use now to pick strawberries and it can do that one strawberry every few seconds and it actually operates at night so that they can operate around the clock picking strawberries. What we see in agriculture is that's the sector that has already been the most dramatically impacted by technology and, of course, mechanical technologies — it was tractors and harvesters and so forth. There are some areas of agriculture now that are almost essentially, you could say, fully automated.
On computer-written news stories
Essentially it looks at the raw data that's provided from some source, in this case from the baseball game, and it translates that into a real narrative. It's quite sophisticated. It doesn't simply take numbers and fill in the blanks in a formulaic report. It has the ability to actually analyze the data and figure out what things are important, what things are most interesting, and then it can actually weave that into a very compelling narrative. ... They're generating thousands and thousands of stories. In fact, the number I heard was about one story every 30 seconds is being generated automatically and that they appear on a number of websites and in the news media. Forbes is one that we know about. Many of the others that use this particular service aren't eager to disclose that. ... Right now it tends to be focused on those areas that you might consider to be a bit more formulaic, for example sports reporting and also financial reporting — things like earnings reports for companies and so forth.
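The general recipe (take structured data, rank what is notable, then render it as prose) can be illustrated with a toy sketch. The Python below is only an illustration of that idea, using invented data and a crude hand-written importance ranking; it is nothing like the commercial systems described above, and every name and number in it is made up.

    # Toy data-to-narrative sketch: pick the most notable fact from an
    # invented box score and lead the story with it. Purely illustrative.
    game = {
        "home": "Riverton Owls", "away": "Bayside Herons",
        "home_runs": 7, "away_runs": 2,
        "highlights": [
            {"player": "J. Ortega", "stat": "grand slam", "inning": 6},
            {"player": "M. Chen", "stat": "11 strikeouts", "inning": 9},
        ],
    }

    def most_notable(game):
        # Crude importance ranking: a grand slam outranks a strikeout tally.
        weight = {"grand slam": 2, "11 strikeouts": 1}
        return max(game["highlights"], key=lambda h: weight.get(h["stat"], 0))

    def write_story(game):
        winner = game["home"] if game["home_runs"] > game["away_runs"] else game["away"]
        margin = abs(game["home_runs"] - game["away_runs"])
        lead = most_notable(game)
        return (f"{winner} won {game['home_runs']}-{game['away_runs']}, "
                f"a {margin}-run margin, after {lead['player']}'s "
                f"{lead['stat']} in inning {lead['inning']}.")

    print(write_story(game))
    # -> Riverton Owls won 7-2, a 5-run margin, after J. Ortega's grand slam in inning 6.

Real systems presumably replace the hand-written ranking and sentence template with far more sophisticated analysis of what readers find interesting, which is what lets them turn out a story every 30 seconds.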
On computers starting to do creative work
Right now it's the more routine formulaic jobs — jobs that are predictable, the kinds of jobs where you tend to do the same kinds of things again and again — those jobs are really being heavily impacted. But it's important to realize that that could change in the future. We already see a number of areas, like [a] program that was able to produce [a] symphony, where computers are beginning to exhibit creativity — they can actually create new things from scratch. ... [There is] a painting program which actually can generate original art; not to take a photograph and Photoshop it or something, but to actually generate original art.
A Gravity Light
With a device that promises to turn gravity into light, you’d be forgiven for thinking that Deciwatt is a cutting-edge technology venture. You’d be wrong. In fact, the London-based start-up offers a reassuring reminder that there’s still room for low-tech inventions.
Deciwatt’s soon-to-be-launched GravityLight ingeniously uses simple engineering principles to turn the energy produced by a falling 12kg bag into electricity for a light.
As well as providing a cheaper, safer alternative to paraffin lamps for the 1.3 billion people worldwide living without access to grid electricity, the company believes it can attract customers among aid agencies providing disaster relief, and even among consumers in developed markets.
“There are a lot of people who want something under the stairs in case of a power cut,” Jim Reeves, the company’s co-founder and technical director, said. “My kids have got one attached to their bunk bed. It works as a great 20-minute night light.”
The business recently won £150,000 from Shell Springboard, a national competition for entrepreneurs with “low-carbon” ideas, and a prototype has been tested successfully. The first fully commercial version should produce up to 25 minutes of light in return for the few seconds’ worth of effort involved in hoisting a weighted bag on a short pulley system. As the bag slowly descends through a system of gears, it powers a generator.
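The physics behind those numbers can be sanity-checked with a rough calculation. The drop height and efficiency in the sketch below are assumptions for illustration rather than Deciwatt’s specifications: a 12kg bag falling through a couple of metres stores a couple of hundred joules, which, spread over 25 minutes, works out to something on the order of a tenth of a watt: tiny by mains standards, but enough for a small, efficient LED.

    # Rough energy budget for a gravity-powered light. The drop height and
    # efficiency below are assumed for illustration, not the company's specs.
    mass_kg = 12.0          # weight of the bag (from the article)
    drop_m = 1.8            # assumed drop height
    g = 9.81                # gravitational acceleration, m/s^2
    runtime_s = 25 * 60     # 25 minutes of light (from the article)
    efficiency = 0.5        # assumed losses in gears, generator and LED driver

    energy_j = mass_kg * g * drop_m                 # ~212 J of potential energy
    avg_power_w = efficiency * energy_j / runtime_s
    print(f"Stored energy: {energy_j:.0f} J")
    print(f"Average electrical power: {avg_power_w * 1000:.0f} mW")
    # Stored energy: 212 J; average electrical power: ~71 mW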
It provides an “affordable, sustainable and reliable light, any time”, Deciwatt says, with the advantage that it doesn’t need grid electricity, batteries or even sunlight to work. Mr Reeves hopes that selling GravityLight will provide a living for local distributors in emerging markets, while assembling the devices locally could create further jobs.
The idea emerged when Solar Aid, a charity that distributes sun-powered lanterns in Africa, approached Therefore, a design consultancy of which Mr Reeves is a director. It wanted the business to investigate lower-cost alternatives to its lanterns.
“We saw that batteries are a substantial factor in cost and that if we could find an alternative way of storing power, we could create something very cost-effective,” Mr Reeves said. Doing away with batteries proved “too radical” an idea for Solar Aid, but Therefore was left with an “interesting thought. Instead of storing power, it seemed a good idea to generate as you need it if that’s more efficient, lower-cost and the device has a longer life.” Gravity provided a power solution because it is “ubiquitously free”.
Despite raising $400,000 (£258,000) for Deciwatt, a crowdfunding campaign in 2013 to fund field trials attracted plenty of naysayers. “A vocal minority thought we were trying to rip people off,” Mr Reeves said. “People involved in engineering and technology greeted it with great scepticism, the main concern being that it was physically impossible. They said there wasn’t enough energy budget in a lifted weight to provide a usable amount of light. Luckily, we had a fully working prototype, so we weren’t concerned.”
Deciwatt was spun out of Therefore last year. Prices for GravityLight have yet to be decided, but the start-up is due to launch a second crowdfunding campaign to raise money for a production run this year. Bill Gates, the co-founder of Microsoft, has tweeted his approval.
According to Caroline Angus, the company’s commercial director, there is demand from consumers and aid agencies alike. “Our starting point was households that are reliant on [paraffin], but crowdfunding has opened our eyes to the breadth of reasons that someone might want it, including those who already have electricity — people have wanted it for their porch, cabin or to use in power outages. We’re aiming to produce 100,000 units this year, but we think we could get into the millions quite quickly.”
Personal Robot
“Meet Sally — she is your new best friend,” the ad for Persona Synthetics begins. “The help you’ve always wanted. She is faster, stronger, more capable than ever before. She can be just about anyone: a teacher, a helper, a carer, a friend.”
The words soothe, over images of a prim electronic Mary Poppins cooking, cleaning, watering the garden and saving children. It managed to fool Twitter for a while: “Anyone seen the creepy Persona Synthetics ad on C4? Scared the hell out of me!” chirped Umar Siddiqui.
In fact, it was a marketing stunt for Channel 4’s Humans, a sci-fi eight-parter set in a parallel world where androids are ubiquitous household gadgets.
Loosely based on the Scandi drama Real Humans, it unpicks the complex emotional events set in motion when Gemma Chan’s beautiful robot, or synth, starts working for a family with a near-absent career mum (an excellent Katherine Parkinson), struggling dad (Tom Goodman-Hill), young child and two hormonal teenagers.
“We were keen to avoid the typical sci-fi dystopia where you’d have the synths lay waste to humanity,” says Jon Brackley, who adapted the Swedish show with his Spooks writing partner, Sam Vincent. “This is a world we think could happen, and we’ve tried to portray it as realistically as possible. It’s basically what would happen if our iPhones came in human form.”
Domestic drudgery being the stuff of reality television, Humans has a thriller plot involving William Hurt, the inventor of the synths, who secretly gives a select few real human emotions. The story has echoes of Blade Runner, including an icy blonde synth who spends time as a sex worker, but, as Vincent points out: “There is, in essence, only one robot story, and that’s, ‘What are they going to do to us?’”
Humans seems distinct from our fascination with post-apocalyptic mechanical slugfests. Instead, like Alex Garland’s recent Ex Machina, it recalls the fertile era of Isaac Asimov stories, using future tech to satirise everything from the Cold War to sex as it riffs on what occurs when cheap or illegal labour is replaced on farms and in brothels. For Parkinson, it was this emotional element that appealed. She sees the show as being about the trade-off we’re making when we entrust our lives to others, from Apple to au pairs.
“I was breastfeeding throughout filming, so I was interested in the idea that when we delegate what we call menial jobs, we can be giving away more than we want,” she says. “If you get someone in to change your baby’s nappy because you’re going back to work, that’s great. But every nappy change is also an intimate moment with your child.”
Parkinson’s Laura is deeply suspicious of Chan’s Anita, guiltily aware that her career and what seems to be an affair are keeping her away from home and letting Anita in. She tries to take over bedtime stories and nighttime checks on her daughter, but falters in the face of Anita’s robotic responses. Parkinson found playing opposite Chan’s unruffled features unsettling: “I’d be trying to make her laugh, or waving my hands around like mad to overcompensate for her stillness.”
She found herself thinking of Chan/Anita as not quite human, and it had an eerie effect. “Although Anita is synthetic, she looks so beautiful, and it made me think of the silicone breasts in porn. You could say those breasts are synthetic, so surely they don’t arouse a man, right? Well, it doesn’t work like that. The thing about people is, we’ll see something beautiful and fall in love with it. That seems to be the risk we’re facing: loving our machines and hating each other.”
Making The World A Better Place, Rapidly
Why are people so down on technological progress? Pope Francis complains in his new encyclical about “a blind confidence in technical solutions”, of “irrational confidence in progress” and the drawbacks of the “technocratic paradigm”. He is reflecting a popular view, held across the political spectrum, from the Unabomber to Russell Brand, that technology, consumerism and progress have been bad for people, by making them more selfish and unhappy.
But however thoroughly you search the papal encyclical (a document that does at least pay heed to science, and to evolutionary biology in particular), you will find no data to support the claim that as people have got richer they have got nastier and more miserable. That is because the data points the other way. The past five decades have seen people becoming on average wealthier, healthier, happier, better fed, cleverer, kinder, more peaceful and more equal.
Compared with 50 years ago, people now live 30 per cent longer; have 30 per cent more food to eat; spend longer in school; have better housing; bury 70 per cent fewer of their children; travel more; give more to charity as a proportion of income; are less likely to be murdered, raped or robbed; are much less likely to die in war; are less likely to die in a drought, flood or storm.
The data show a correlation between wealth and happiness both within and between countries and within lifetimes. Global inequality has been plummeting for years as people in poor countries get rich faster than people in rich countries. The vast preponderance of these improvements has come about as a result of innovation in technology and society.
So what precisely is the problem with technology that the Pope is complaining about? He cannot really think that life’s got worse for most people. He cannot surely believe that the dreadful suffering that still exists is caused by too much technology rather than too little, because surely he can see most of the suffering is in the countries with least technology, least energy, least economic growth, and most focus on ideology and superstition. Do Syria, North Korea, Congo and Venezuela have too much consumerism? “Obsession with a consumerist lifestyle . . . can only lead to violence and mutual destruction,” says the encyclical. Really? Only? If you hear of an atrocity in a shopping mall, do you immediately think of consumerism or religious fanaticism as the more likely cause? There is no mention in the encyclical of the suffering caused by fanaticism, totalitarianism or lack of technological progress — of the four million who die of indoor smoke from cooking over wood fires, for example.
Yet the Pope is exercised about the dangers of genetically modified food, for although he admits, “no conclusive proof exists that GM cereals may be harmful to human beings”, he thinks “difficulties should not be underestimated”. This in a world where golden rice, a genetically modified cereal fortified with vitamin A, could be preventing millions of deaths and disabilities every year, but has been prevented from doing so entirely by fierce opposition from the environmentalists the Pope has now allied himself so closely with.
The Pope has latched on to the wrong end of the environmental movement, the reactionary and outdated faction that still thinks like the Club of Rome, a group of grandees who started meeting in the 1960s to express their woes about the future in apocalyptic terms, blaming technology rather than the lack of it.
Having been comprehensively discredited by history (their prediction was that by now we would be mired in ecological horror), they are still dispensing misanthropic gloom.
Hans Joachim Schellnhuber was the only scientist at the launch of the papal encyclical. He is a member of the Club of Rome.
Technological progress is what enables us to prevent child mortality; to use less land to feed the world, and so begin reforesting large parts of the rich world; to substitute oil for whale blubber and so let whales increase again; to get fossil-fuelled electricity to people so they don’t die of pollution after cooking over fires of wood taken from the rainforest.
“Nobody is suggesting a return to the Stone Age, but we do need to slow down and look at reality in a different way,” says the Pope. Personally, I would rather speed up the stunning and unprecedented decline in poverty of recent decades.
Humans
AMC's British import Humans, itself an adaptation of the Swedish series Real Humans, feels a little like an episode of Black Mirror. Or Caprica. Or even The Returned, sort of, with that woozy sense of unease. It's effective as a domestic drama and as a can-the-robots-love story. But most great robot stories aren't about robots-qua-robots, they're about moments in culture — about what we're afraid of, whom we're afraid of, where the weak points are in the fragile construction of society. Humans works pretty well there, too.
Welcome to the present day, where humanity is now aided by "synths," humanoid helper robots. They have a slightly awkward gait and posture, and glowing, glassy eyes, but otherwise look completely human, including finger- and toenails, and for the female synths, breasts that require (well, indicate) the use of a brassiere. The synths are developing consciousness, because of course they are, and various humans have developed modifications for their synths, though that voids their warranties. No user-serviceable parts inside, etc. They appear mostly to do domestic and menial jobs, including child care, customer-service calls, in-home health care, and sanitation work. Do people fuck their robots? Yes, sometimes people fuck their robots or other robots, thanks to synth sex work.
There are plenty of stories about robots that look like robots. The robots-that-look-like-us stories, though, make for an easy exploration of the standards of humanity. Every story goes through similar motions: "What makes a person a person?" That's what some character has to ask. Then another character has to derisively look at a humanoid robot and say, "That hunk of junk? That's not a person; that's a [toaster] [tin can] [machine]." Then the show probably explores identity-building, information dissemination, concepts of what culture entails, and maybe love. The robots are there as a stand-in for the denigrated, asking audiences: Is this really how you treat the least among you?
On the modern Battlestar Galactica, the conversation around Cylons' personhood was a direct response to the "War on Terror": Some people thought torturing Cylons was appropriate and permissible, that Cylons weren't like them, that they'd know if there were a Cylon in their social group, and, plus, Cylon religion is stupid. Hmmm.
This pops up here and there on Doctor Who, a show that generally strives to affirm the dignity of all peace-loving entities. The Twilight Zone uses human-seeming-robot stories to depict the struggles of oppressed women: Just when Jana announces that she wants to move out of her parents' dictatorial home, she comes to realize that she's a robot — in time for her "father" to erase her memory and reprogram her to be a maid. In a lighter capacity, we have Data on Star Trek: The Next Generation, representing the neurologically atypical — sometimes welcome in society, though less welcome when robot behavior becomes its most pronounced.
Humans' synths, then, are pretty clearly stand-ins for the working poor, maybe marginalized immigrants in particular, especially considering the show's genesis as a Swedish series. One character on Humans expresses exasperation when the customer-service rep on the phone turns out to be a synth, and a petulant human teen whines that she shouldn't have to bother pursuing anything meaningful since some synth will probably get there first. "You're not supposed to talk to me like that," and "that synth couldn't possibly understand that" will sound familiar to anyone who's either been around or insulted by the upper crust. How does someone present in a home become invisible to the people who live there? Can you see humanity in the people serving you, cleaning up after you, transactionally having sex with you?
There's plenty of other drama — including a heartbreaking plot about a widower who's hanging onto his outdated synth because it remembers his wife, a love plot between a human and a robot, tensions around maternal devotion, etc. It's not just allegory, which would get tedious, and it's also not a straight-up technological thriller, either. It is, however, growing into an impressively fleshed-out show, joining the ranks of other robo-oriented substantive dramas.
The Future Is Chinese
IT IS a courageous foreigner who drives on China’s roads. A combination of tens of millions of inexperienced drivers and a general disregard for traffic rules makes them among the world’s deadliest. Braver still would be the car manufacturer that dares to put a car loaded with automated-driving features on such roads. Western notions of what is a safe distance between cars mean little in China. How could an autonomous vehicle conceived for orderly Germanic roads cope with such anarchy?
Nevertheless Audi was this week giving journalists demonstrations of hands-off motoring through the frantic Shanghai streets. Its test cars were in town for a giant consumer-electronics fair, where it announced deals with Baidu, China’s biggest search-engine and mapping firm, and Huawei, a telecoms-equipment manufacturer, to kit out its connected cars of the future.
The German firm’s faith in China’s digital boom may be well placed, if this week’s convention is a guide. This is the first year that a version of the Consumer Electronics Show (CES), which is held every year with much fanfare in Las Vegas, has been held outside America. Hitherto, trends in consumer gadgetry have typically taken off in America first, followed by the rest of the rich world, and then in emerging markets like China. That may be changing.
Size is one reason. America’s Consumer Electronics Association, which stages the CES, forecasts that the Chinese market for electronic goods will grow by 5% to $281 billion this year, and at current growth rates will overtake America’s next year. A big market gives firms added incentive to try out new devices there early.
But there are other reasons besides size to expect the Chinese consumer increasingly to be the trend-setter, rather than the trend-follower, in electronics. First, take autonomous and “connected” cars. The average Audi buyer in America or Europe is in his 50s, but in China he is a digitally-addicted 36-year-old. So models with such advanced options are likely to become widespread in China first. Fully driverless cars, in particular, may take off quicker than in litigious America or risk-averse Europe.
Second, the Chinese have taken to mobile commerce with gusto. Apple is reported to be in talks with Alibaba, a local e-commerce firm, to bring its mobile payment system to the Middle Kingdom. Consumers who have quickly got used to shopping on mobile devices also seem likely to be enthusiastic adopters of smart watches and other wearable devices.
The Chinese people are turning out to have a greater affinity for gadgetry than even the Japanese; and Chinese companies are innovating furiously, producing all manner of devices, one of which may, perhaps, turn out to be the next Sony Walkman. JD, a successful online retailer akin to Amazon, this week showed off a voice-controlled gadget, dubbed the DingDong Smart Speaker, essentially a radio that plays whatever you tell it to—music, news, weather, or whatever. At a gathering of its own in Beijing this week, Lenovo, now the world’s biggest computer-maker, unveiled its plans for the first smart watch with two screens—an ordinary one, and one that uses optical reflection to create a much bigger virtual display. It also unveiled plans for a smartphone with a built-in laser projector and infrared motion detector that is capable of projecting what is, in effect, a giant touch screen.
Chinese electronics firms still have a reputation for simply copying Western designs. And there was still evidence of this at the CES gathering in Shanghai: plenty of lookalikes of Google Glasses, Apple Watches and iPads were to be seen at vendors’ booths. However, there were also a surprising number of original inventions on display, with a chance of making it in foreign markets. This opportunity is beginning to change Chinese firms’ attitudes towards intellectual-property protection. Chinese inventors backed by venture capital, who are hoping to launch their products in America, are beginning to patent them, realising that this boosts the valuation that investors put on their firms.
Consider The One, a brilliantly conceived electronic piano that integrates with music libraries accessed via smartphones and tablets. Many people give up trying to learn the piano because of the tedium of learning to read sheet music. The keys on this nifty piano light up in progression, to help pupils figure out which keys to press, and it uses tricks adapted from video games to help pupils eventually wean themselves off the lights and become able to read music. Ask Ben Ye, the firm’s founder, whether his hot-selling invention is safe from local copycats, and he says no. So why is his firm paying hefty licensing fees to foreign music publishers, to use their songs? We pay for the intellectual property we use because we want to go global, he says.
Low Battery Anxiety
As postmillennial nightmares go, discovering your mobile phone has low battery is just below dropping it down the toilet and just above a naked selfie going viral. Imagine waking up late because your charger wasn’t switched on at the wall and the alarm failed to play your customised David Guetta ringtone. Your day begins in the certainty that you will not be able to do everything essential in life: ie, playing Candy Crush on the commute or watching YouTube videos of cats.
Ask any twentysomething if they worry about their mobile running out of juice, and you’re likely to elicit the same wan expression. On Twitter, millennials don’t just stress out about their phones suddenly dying (“Like please let me go home. No joke”), they get vexed by pictures of someone else’s phone if its battery is in the red zone (“Everytime I see a screenshot with low battery I get anxiety”). “I used to be anxious if I didn’t have my phone,” one university student said yesterday. “Now I feel anxious if I don’t have the means to charge my phone.”
This postmodern trauma naturally has its own neologism, powernoia, and this may have been what prompted Robin Lee, a 45-year-old artist, to plug his phone into a handy socket while using the overground service from Hackney Wick to Camden Road this week. Unfortunately his act of desperation led to his arrest for “abstracting electricity” under the 1968 Theft Act. This was not the sort of charge he was expecting.
The power point Lee chose, it transpired, was not expressly reserved for phone and laptop users. His fate is enough to send a frisson of fear through those of us umbilically linked to our phones. It could have been any of us. The law clearly lacks empathy.
Lee’s decision to plunder electricity from public transport falls well short of the most frantic behaviour by some caught out by a low battery. Indeed it would not have been a surprise had he tried to attach his phone to the live rail given the meltdown some experience. Take, for instance, Nick Silvestri, a Long Island teenager who, faced with a low battery crisis while watching the Broadway production Hand to God last week, rushed the stage and plugged his phone into a socket there. Sadly for him, it was part of the set and not connected to the mains (Silvestri explained during the inevitable social media backlash that “girls were calling all day”. What’s a guy gonna do?).
A depleted phone battery is a crisis, especially in a crisis. If you want to see how mobiles have changed social behaviour witness the 2012 aftermath of Hurricane Sandy in New York. The storm caused widespread outages across the city, prompting frantic New Yorkers to take to the streets. People queued for antiquated public telephones. Crowds gathered round sockets in public spots as if they were campfires. People took to trying to power their phones from lamp posts, some receiving shocks.
There were numerous incidents of people hailing cabs just to use a charging socket. And, in a heartwarming twist on human generosity, some city dwellers lucky enough to have power became digital Samaritans, running extension leads out of their homes and even providing cables. They gave away their electricity free, but when people are desperate enough, it is a powerful currency. In a recent blog about working in war zones, the journalist Quinn Norton observed that the best way to gain the confidence of someone was no longer to proffer a cigarette, but to offer to charge their phone.
How did we become so obsessed by battery life? “I remember going abroad when I was a kid and it taking half an hour to book a call through the operator but I can’t recall experiencing any anxiety about it,” laughs Dr George Fieldman, a London-based chartered psychologist. “Today that’s all changed because we are so dependent on these devices.
“We are social animals and there’s this desire to stay in contact so there’s a real sense of being cut off from others when that’s somehow taken away. Also, our innate sense of competition leaves us feeling diminished compared with others because we’re empty and they’re full. Suddenly we’re exposed.”
This goes some way to explaining why our perception of a failing phone is so out of proportion. We don’t just say “my phone’s low”, we howl “my battery’s dying!”, as if the poor thing vibrating weakly in our pocket is a small pet about to expire.
“There are some people, say those in the emergency services or perhaps the financial sector, even, who genuinely need that level of instant communication. Most of us don’t,” says Fieldman. “The essential treatment for all anxiety disorders is exposure. So if I’m anxious about not having my phone available I need to part myself from it for increasing periods. I might feel anxious to begin with, but those feelings should drop away increasingly. And, let’s face it, it would probably be a good thing.”
If you think our reaction to impending connectivity loss is disproportionate, think again. David McClelland is a technology expert who foresees the daily commute taking a dystopian twist. “I have the new Apple Watch with Apple Pay,” he says. “Their phones are also entering into the cashless economy so I was having a conversation with someone today about using it to touch in and out at stations. What if it runs out of juice between tapping in at the start of the journey and tapping out? If you’re not allowed to use those phone points on the train, you’re in trouble. It’s going to happen.”
At least one company has recognised the commercial benefits of providing a free phone-charging service in a cash-free society. Last year Starbucks began introducing complimentary phone-charging mats to its US stores, a shrewd move as more than 10 per cent of transactions in its stores are via mobile devices.
Even without having to order a Frappuccino there are ways to ensure that you never have to resort to abstracting electricity. There are plenty of portable rechargers on sale, many with interesting USPs, such as the Mighty Purse, a women’s wallet that holds enough charge to discreetly spark a drained handset.
Alternatively, for middle-aged men in Lycra, dynamo units such as Siva’s Cycle Atom and the Pedal Juice generate power from the wheel and feed it, via a holder on the handlebars, to a phone that’s eating power as you navigate. There are also solar energy units or hand-crank chargers that generate enough power to get you out of a hole, even in the wilderness. For foodies, one Japanese company has patented the Pan Charger, a USB saucepan that converts heat into electrical energy (it might stave off powernoia but possibly risks a more alarming sort of meltdown).
Ever ready to invest in a public solution, the French recently introduced charge points at bus stops in Paris. For the first time commuters actually hope the bus is late.
McClelland has advice for those living beyond the périphérique. “One thing that’s true of all smartphones is the energy all those features use. If you’re low, start turning things off. The smarter they are, the more energy they’re using. Turn off the GPS, the wi-fi, the bluetooth. It’s easy to leave them on but you wouldn’t leave your car headlights on when you’re parked.” Apps eat power. Handily there are apps to tell you which apps consume the most.
Still, the march of progress will soon make this problem something we look back on with amusement. Last year the Massachusetts Institute of Technology announced that it had developed a tiny self-charging battery that can extract energy from low-heat sources, meaning that one day your phone will charge itself in your pocket.
Then what? Will we miss our flat batteries? One side-effect of this advance will be to end the classic excuse for cutting short a conversation: “I’ll have to go, my battery’s low”. The other, says Fieldman, is that we may come to regret 24/7 connectivity.
“How often do you see people frowning at their phones?” he asks. “You have to wonder how much joy they are getting from them. Also, people who spend all their time with their head buried in a phone are less likely to be communicating with those around them. You’re in a bubble but if that bubble is burst, who knows what might happen? Think of it as an opportunity.”
Patent System Needs Reform
IN 1970 the United States recognised the potential of crop science by broadening the scope of patents in agriculture. Patents are supposed to reward inventiveness, so that should have galvanised progress. Yet, despite providing extra protection, that change and a further broadening of the regime in the 1980s led neither to more private research into wheat nor to an increase in yields. Overall, the productivity of American agriculture continued its gentle upward climb, much as it had before.
Patents are supposed to spread knowledge, by obliging holders to lay out their innovation for all to see; they often fail, because patent lawyers are masters of obfuscation. Instead, the system has created a parasitic ecology of trolls and defensive patent holders, who aim to block innovation, or at least to stand in its way unless they can grab a share of the spoils. An early study found that newcomers to the semiconductor business had to buy licences from incumbents for as much as $200m. Patents should spur bursts of innovation; instead, they are used to lock in incumbents’ advantages.
The patent system is expensive. A decade-old study reckons that in 2005, without the temporary monopoly patents bestow, America might have saved three-quarters of its $210 billion bill for prescription drugs. The expense would be worth it if patents brought innovation and prosperity. They don’t.
Innovation fuels the abundance of modern life. From Google’s algorithms to a new treatment for cystic fibrosis, it underpins the knowledge in the “knowledge economy”. The cost of the innovation that never takes place because of the flawed patent system is incalculable. Patent protection is spreading, through deals such as the planned Trans-Pacific Partnership, which promises to cover one-third of world trade. The aim should be to fix the system, not make it more pervasive.
The English patent
One radical answer would be to abolish patents altogether - indeed, in 19th-century Britain, that was this newspaper’s preference. But abolition flies in the face of the intuition that if you create a drug or invent a machine, you have a claim on your work just as you would if you had built a house. Should someone move into your living room uninvited, you would feel justifiably aggrieved. So do those who have their ideas stolen.
Yet no property rights are absolute. When the benefits are large enough, governments routinely override them—by seizing money through taxation, demolishing houses to make way for roads and controlling what you can do with your land. Striking the balance between the claim of the individual and the interests of society is hard. But with ideas, the argument that the government should force the owners of intellectual property to share is especially strong.
One reason is that sharing ideas will not cause as much harm to the property owner as sharing physical property does. Two farmers cannot harvest the same crops, but an imitator can reproduce an idea without depriving its owner of the original. The other reason is that sharing brings huge benefits to society. These spring partly from the wider use of the idea itself. If only a few can afford a treatment, the diseased will suffer, despite the trivially small cost of actually manufacturing the pills to cure them. Sharing also leads to extra innovation. Ideas overlap. Inventions depend on earlier creative advances. There would be no jazz without blues; no iPhone without touchscreens. The signs are that innovation today is less about entirely novel breakthroughs, and more about the clever combination and extension of existing ideas.
Governments have long recognised that these arguments justify limits on patents. Still, despite repeated attempts to reform it, the system fails. Can it be made to work better?
Light-bulb moment
Reformers should be guided by an awareness of their own limitations. Because ideas are intangible and innovation is complex, Solomon himself would find it hard to adjudicate between competing claims. Under-resourced patent officers will always struggle against well-heeled patent lawyers. Over the years, the regime is likely to fall victim to lobbying and special pleading. Hence a clear, rough-and-ready patent system is better than an elegant but complex one. In government as in invention, simplicity is a strength.
One aim should be to rout the trolls and the blockers. Studies have found that 40-90% of patents are never exploited or licensed out by their owners. Patents should come with a blunt “use it or lose it” rule, so that they expire if the invention is not brought to market. Patents should also be easier to challenge without the expense of a full-blown court case. The burden of proof for overturning a patent in court should be lowered.
Patents should reward those who work hard on big, fresh ideas, rather than those who file the paperwork on a tiddler. The requirement for ideas to be “non-obvious” must be strengthened. Apple should not be granted patents on rectangular tablets with rounded corners; Twitter does not deserve a patent on its pull-to-refresh feed.
Patents also last too long. Protection for 20 years might make sense in the pharmaceutical industry, because to test a drug and bring it to market can take more than a decade. But in industries like information technology, the time from brain wave to production line, or line of code, is much shorter. When patents lag behind the pace of innovation, firms end up with monopolies on the building-blocks of an industry. Google, for instance, has a patent from 1998 on ranking websites in search results by the number of other sites linking to them. Here some additional complexity is inevitable: in fast-moving industries, governments should gradually reduce the length of patents. Even pharmaceutical firms could live with shorter patents if the regulatory regime allowed them to bring treatments to market sooner and for less upfront cost.
Today’s patent regime operates in the name of progress. Instead, it sets innovation back. Time to fix it.
Permeable Concrete
A casual observer might think that the Next superstore car park in High Wycombe is not an obviously exciting place to spend a wet afternoon.
That person would be wrong though. If you care to visit this apparently ordinary Buckinghamshire car park in the rain, you will find it a lot drier than it should be, and this is very exciting indeed — to engineers at least.
This car park is one of the first in the world to be laid with a type of “thirsty” concrete that can swallow thousands of litres of water a minute, in an attempt to combat flooding. The top layer of the concrete is made of bonded pebbles with large gaps between them through which water can flow — yet it is strong enough to take the weight of cars and not be broken up by tyres.
Road surfaces and car parks are a major cause of flash flooding. Earth absorbs water and releases it slowly over the course of several days; tarmac and concrete do not, so water runs straight off them into drains. This can lead to rivers becoming overwhelmed in the hours after a big storm and even to houses being flooded by surface run-off.
In 2007 a particularly wet summer resulted in 19,000 homes being flooded when rivers burst their banks, and twice as many houses were flooded by rainwater flowing from impermeable surfaces. So if surfaces could be designed to act more like earth, letting the water soak into the ground, they would not only stay free of puddles but also improve water management.
Lafarge Tarmac, the engineering company, believes that it has achieved this, with a surface that looks like normal tarmac but lets water seep through faster than has been achieved before — at a rate of 1,000 litres per sq m each minute.
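To put that drainage figure in context, here is a rough back-of-envelope comparison. The rainfall intensity is an assumption for illustration (about 50 mm per hour, a very heavy downpour), not a figure from Lafarge Tarmac:

```python
# Rough comparison: quoted drainage rate of the permeable surface vs. an intense downpour.
# Assumption (not from the article): ~50 mm of rain per hour is a very heavy storm.
drainage_rate_l_per_m2_min = 1000                  # figure quoted for the surface
rainfall_mm_per_hour = 50                          # assumed heavy-storm intensity
rainfall_l_per_m2_min = rainfall_mm_per_hour / 60  # 1 mm of rain = 1 litre per square metre

print(f"Rain arriving:     {rainfall_l_per_m2_min:.2f} litres per m2 per minute")
print(f"Surface can drain: {drainage_rate_l_per_m2_min} litres per m2 per minute")
print(f"Headroom:          roughly {drainage_rate_l_per_m2_min / rainfall_l_per_m2_min:.0f}x")
```

On those assumed numbers, the surface can swallow water far faster than even an extreme storm can deliver it, which is why puddles never get a chance to form.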
It is one of many engineering companies investigating permeable surfaces, as a result of building regulations that require new constructions to ensure sustainable drainage.
“The water has to be treated in some way,” Jeremy Greenwood, the managing director of Lafarge Tarmac Readymix, said. “This allows the rainwater to pass through the concrete, straight through the porous layer.
“Rather than surface water run-off and the issue of flooding, this is as if it is falling on to grass — it goes straight through and into the ground.” It is also possible to capture the water underneath the surface and reuse it, for example in toilets, or run it in pipes to be trapped elsewhere. The advantages of this are not only in stopping flooding but also in keeping concrete cooler in summer and less icy in winter. Ice formed by melting snow or surface water freezing would no longer persist, because it would not form pools on the surface.
CRISPR Implications
The biologists have done it again. Not so long ago it was cloning and embryonic stem cells that challenged moral imagination. These days all eyes are on a powerful new technique for engineering or “editing” DNA. Relatively easy to learn and to use, CRISPR has forced scientists, ethicists and policymakers to reconsider one of the few seeming red lines in experimental biology: the difference between genetically modifying an individual’s somatic cells and engineering the germline that will be transmitted to future generations. Instead of genetic engineering for one person why not eliminate that disease trait from all of her or his descendants?
This week, the U.S. National Academy of Sciences, the Chinese Academy of Sciences, and the U.K. Royal Society are trying to find ways to redraw that red line. And redraw it in a way that allows the technology to help and not to hurt humanity. Perhaps the hardest but most critical part of the ethical challenge: doing that in a way that doesn’t go down a dark path of “improvements” to the human race.
The technique known as CRISPR (clustered regularly interspaced short palindromic repeats) is faster, more reliable and cheaper than previous methods for modifying the base pairs of genes. CRISPR pairs molecular scissors, an enzyme that cuts DNA strands, with an RNA guide that knows where to make the cut, so the traits expressed by the gene are changed. Already, labs are applying gene editing in pluripotent stem cells. Older methods are being used to help the human immune system’s T cells resist HIV, which might be done better with CRISPR. Gene editing trials are also in the offing for diseases like leukemia. It looks very much like these genies are out of the bottle.
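To make the “scissors plus guide” idea concrete, here is a deliberately simplified sketch in Python. The DNA and guide sequences are invented, and real Cas9 targeting is far more involved (mismatch tolerance, both DNA strands, chromatin context); this only illustrates the matching-and-cutting logic:

```python
# Toy illustration of CRISPR-Cas9 targeting: the guide sequence pairs with a matching
# stretch of DNA, and the enzyme cuts roughly 3 bases upstream of an "NGG" PAM motif.
# All sequences below are invented for illustration only.

def find_cut_sites(dna: str, guide: str) -> list[int]:
    """Return cut positions where `guide` matches exactly and is followed by an NGG PAM."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        target = dna[i:i + len(guide)]
        pam = dna[i + len(guide):i + len(guide) + 3]
        if target == guide and pam[1:] == "GG":    # PAM = any base followed by GG
            sites.append(i + len(guide) - 3)       # cut ~3 bases before the PAM
    return sites

dna   = "TTACGGATCCGTTAGCGTACGTTAGCAGGCCTTAA"       # invented stretch of DNA
guide = "ATCCGTTAGCGTACGTTAGC"                      # invented 20-letter guide
print(find_cut_sites(dna, guide))                   # position(s) where the cut would fall
```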
Since the 1960s, technical limitations, the prospect of unintended consequences, and the “eugenic” implications of deliberate alterations of future generations have weighed heavily against germline engineering. Many countries, including many in Europe, have laws that forbid human germline modification. The National Institutes of Health won’t pay for such research but there’s no law against using private funds. China also doesn’t prohibit it.
But over the past 20 years, advances in laboratory techniques, genetic screening for disease traits, and the prospective fruits of the Human Genome Project have smudged the red line. Gradually, the public health benefits of changing the human germline have gained as much emphasis as the risks. Some observers have noted that, aside from the efficiencies that could be realized with germline engineering, the ethical distinction between germline and somatic cell modification may already be moot, since even somatic cell modifications can “leak” into effects on gametes. The emergence of CRISPR has made it impossible to delay more definitive guidance.
Even apart from risks and benefits, are we prepared to modify our genetic heritage with all the implications for humanity’s relationship to the rest of the natural world? Following a wave of publicity about CRISPR, last spring a number of scientists and researchers called for a voluntary moratorium on its use. Their proposal was reminiscent of the Asilomar moratorium on recombinant DNA research in 1975. Asilomar is commonly (but not universally) thought to have been an effective response on the part of the scientific community to public fears about biohazards.
But the world of life sciences research is far different now than it was 40 years ago, when the community was much smaller and more intimate. Sophisticated experimental biology is now a globalized affair. Funding pressures, the virtually instantaneous availability of experimental procedures and results, and the fact that researchers may have limited face-to-face contact make self-policing far more challenging than it once was. Indeed, within weeks of the calls for a moratorium, a Chinese team performed a modification of non-viable embryos, a proof-of-concept experiment that fell smack into the ethical grey zone and further shook confidence in the prospects for an effective moratorium.
With events moving so quickly, the summit organized by the U.S. National Academies, along with its British and Chinese counterparts, will need to face a few key ethical issues. How can technical risks, like “off target” effects that change an important gene instead of the one intended, be avoided? Are there any diseases that could justify attempts to diffuse genetic changes in a human population? And who is to make such monumental decisions on behalf of unborn generations? Recommendations from the Academies aren’t law but they can establish guiding principles for legitimate scientific practices.
Ten years ago a National Academy of Sciences committee that I co-chaired set rules for doing human embryonic stem cell research that were voluntarily adopted in many parts of the world. When it comes to the ethics of science, the scientific community needs to lead but also needs to listen to non-scientists. Especially in the case of the human germline, one principle worth defending is the distinction between therapy and enhancement. Even if population-wide disease prevention is sometimes acceptable, attempts to otherwise “improve” the human race should be banned.
Other principles will apply mainly to agricultural research. Genetically modified plants and animals are the focus of a parallel National Academies study on the ecological risks of gene drive experiments that might someday lead to deliberate changes of non-human populations in the wild. New techniques like CRISPR will make the recently approved fast growing salmon look old fashioned.
The experiments that are both the most promising and the most risky are those that involve rapidly propagating species such as insects, for example eliminating the ability of mosquitoes to carry the malaria parasite. And accidents are always possible, so best biosafety practices will have to be reviewed and strengthened, including perhaps built-in biological barriers like the “suicide genes” that cause modified organisms to die if they escape the lab.
One thing is clear: CRISPR and its descendants will have lives beyond the laboratory.
Plastic Bags
IT DOESN’T TAKE a close reading of Katy Perry lyrics to know that plastic bags get little love. You’ve heard: The plastic detritus drifts through the wind, gets stuck in trees, kills whales, and lasts pretty much forever in landfills. So since San Francisco banned plastic bags in 2007, over 100 US cities have also enacted bans or fees. Later this year, France plans to implement a ban across the whole country.
But I’ve come to praise plastic bags, not to bury them (or recycle them, the more environmentally responsible thing, FYI). If the bans teach people one thing, let it be this: Plastic shopping bags are remarkable feats of engineering, and consumers have taken them for granted this whole time.
Just look at the numbers: Plastic grocery bags cost pennies to make and hold more than a thousand times their weight. They’re light. They’re waterproof. “They kind of seemed miraculous,” says Susan Freinkel, author of Plastic: A Toxic Love Story, who is clearly no fan of plastic. So miraculous, in fact, that shoppers in the ‘80s didn’t quite believe the feather-light bags could hold up to their heavy cans and boxes.
But in that decade, plastic bags swept through the nation. A breakthrough design made them so cheap and so easy to produce that grocery stores couldn’t say no. The invention actually came from Sweden, where Gustaf Thulin Sten figured out a way to stamp bags out of tubes of polyethylene, a big improvement on the way manufacturers had tried to make rectangular plastic bags resembling paper ones. The process is still roughly the same today: Extrude hot polyethylene, blow it up like bubble gum, and pull it out until you’ve got a long tube of polyethylene film. Flatten the tube, and you’ve got two sheets of plastic on top of one another. Then stamp out the individual bags, seal the top and bottom, and cut out a rectangle for the handles. The cut, folded bag resembles a folded shirt, giving it the name “t-shirt bag.”
The plastic itself is a mixture of high-density and low-density polyethylene. This is important: The high-density stuff is strong but brittle, and the low-density stuff is stretchy—extremely stretchy. If a shopping bag were made of low-density polyethylene, “it would stretch to the ground before you got to the car,” says Philip Rozenski, director of sustainability and marketing for Novolex, a maker of packaging products.
After a plastic bag is cut, machines blast it with plasma, or superheated and charged air. This process, called a corona treatment, is why pulling one plastic bag off the grocery store rack slightly opens the next one—a tiny touch that adds a lot of convenience. (Those racks are specially designed to fit t-shirt bags, too.) The corona treatment also changes the surface of the plastic bag, so it holds ink, like for a store’s logo.
At the end of the production line are the tests — a battery of checks to make sure plastic bags are up to snuff. A key test is the “jog test.” Engineers put a weight resembling a giant six-pack into the bag, and a machine shakes it up and down 175 times to simulate the walk—or jog?—from the store to the car.
It’s telling, perhaps, that plastic bag manufacturers would focus so much on the walk to the car: this is all most people use plastic bags for. And so the plastic bag’s greatest strength also becomes its downfall. “It’s easy to make the case: why do you have this thing that you use for 10 minutes and it lasts forever,” says Freinkel. San Francisco’s ban came about because the bags were quite literally clogging up the city’s recycling equipment, so that workers had to climb in with box cutters once or twice a day.
What’s so galling about just throwing out plastic bags is that the light, strong, waterproof bags are so useful. If you’ve ever needed to carry something in the rain, or schlep a wet swimsuit from the pool, or pick up dog poop—all pain points I’ve encountered living in the island of plastic bag scarcity that is California’s Bay Area—plastic bags are the way to go. Those bags I used to get for free at the grocery store? I would happily buy some.
In the grand scheme of environmental problems, ubiquitous plastic bags do not rank that high, says Freinkel, but the bans are still important. “This kind of very short-term mindset and this culture of convenience contributed to huge amounts of environmental degradation,” she says. “The bags have become a potent symbol.” Plastic bags may be symbolic of everything bad in our consumerist culture, but that’s because people treat these durable, cheap bags as throwaways. Reuse plastic bags, recycle them, respect them.
Choose your stories carefully
New technology demands something important to move from early-adopter novelty to widely embraced tool:
Examples.
Examples and stories and use cases that describe benefits we can't live without.
The beauty of examples is that they can travel further and faster than the item itself. The story of an example is enough to open the door of imagination, to get 1,000 or 1 million copycat stories to enter the world soon after.
Email had plenty of examples, early and often. Stories about email helped us see that it would save time and money, help us reach through the bureaucracy and cycle faster. It took just a few weeks for stories of email to spread through business school when I was there, more than thirty years ago.
On the other hand, it took a long time for the story of the mobile phone to be deeply understood. For years, it was seen as a phone without wires, not a supercomputer that would change the way a billion people interact.
Most of the stories of Bitcoin haven't been about the blockchain. They've been about speculators, winning and losing fortunes. And most of the stories of 3-D printers have been about printing small, useless toys, including little pink cacti. And most of the stories about home drones have been about peeping toms and cool videos you can watch after other people make them.
Choose your stories carefully.
Cleaning Up the Ocean Gyre Plastic
Lourens Boot is a man of the sea. He windsurfs. His houseboat consistently ranks as one of the best Airbnb rentals in the world. He spent years working in offshore exploration for Shell. But in spring of 2014, the Dutchman wanted something new. Why toil in the maintenance of the old order? So he quit Shell, attended Burning Man, and dropped by an offshore energy summit in Amsterdam.
Something called the Ocean Cleanup caught his attention. Boot had first learned about the project — which aims to cleanse the ocean of trillions of pieces of plastic — in a viral TED talk. The video had featured a mop-haired Dutch kid who looked like a boy band understudy. And he had a big idea.
Until that point, in 2012, the leading proposal to clean up the ocean’s trash was dispatching big ships to trawl for bits of plastic — and it would take thousands of years. So the teen, Boyan Slat, said he’d come up with a low-cost solution that could do it in a matter of years. He proposed erecting a large, angled barrier and mooring it to the ocean floor in the areas of densest garbage accumulation. Then the ocean’s currents would take it from there, passively pushing the plastic into a collection zone, cleansing the zone in five years.
“The oceanic currents moving around is not an obstacle,” Slat said. “It’s a solution. Why move through the oceans if the oceans can move through you? … Let the rotating currents do their work.”
The idea was so simple, so clear, so seemingly important that the video took off, snaring 2.4 million views. “WHY ARE WE NOT FUNDING THIS?” one viewer commented. But Boot remembers feeling dubious. Funding wasn’t Slat’s only problem. Boot had spent years in the ocean. Ideas that sing on paper might not last the first storm or tidal wave. Mooring something so large and so delicate at ocean depths of up to 4,000 meters wouldn’t be easy, Boot had thought.
This June, Ocean Cleanup’s prototype will settle into waters much closer to land, where two outside experts said it would have a much higher chance of success, off the coast of the Netherlands. The trial run, which oceanographers are closely monitoring, is led by a 21-year-old who has amassed millions in funding, collected thousands of supporters and employs dozens of staffers. Advocates call Slat a visionary. Critics describe him as naive — perhaps dangerously so. What’s not in dispute: The project’s ambition and scope. “It may be the first ocean cleanup in history,” said Nicholas Mallos, an Ocean Conservancy official.
That challenge is what attracted Boot that day in Amsterdam. “This is a really big idea,” he thought, standing before the Ocean Cleanup display. Boot said he wanted in and, days later, arrived at the project’s offices. Boot, now the Ocean Cleanup’s head of engineering, noticed a youthful man. Hair long and disheveled, he was disdainful of small talk and “a bit distant.” This was Boyan Slat. And he said he wanted to change the world.
More plastic than fish
It began years ago, in the summer of 2011, off the coast of Greece. Slat, who was 16, was on a family vacation, scuba diving. The teen’s mind had always worked like a series of gears snapping into place. He first built treehouses, then zip lines, then rockets. By the time he dove into the Grecian waters, he had broken the world record for most highly pressurized rockets launched simultaneously. Slat shot 213.
As the teen swam, he noticed plastic. The bags and floating bits seemed to even outnumber the fish. They floated up, down, at all depths. “This problem struck me as one that should be solved,” he said. “… I thought, ‘Why don’t we just clean this up?'” So the high school student hopped on his computer and started researching the issue. He discovered the severity of the problem.
We currently inhabit what some scientists call the Age of Plastic. Every characteristic that makes plastic a boon to mankind — it’s malleable, durable and cheap — makes it a bane to the ocean. Every year, humans discharge roughly 8 million metric tons of plastic into the oceans, where fish and mammals and birds mistake it for food. By the year 2050, Slat’s anecdotal observation that there were more plastic bags than fish in the ocean will actually be true.
Thanks to the ocean’s currents, propelled by wind patterns and the rotating earth, a significant portion of the ocean’s trash ends up in huge systems of rotating currents called “gyres.” There are five major ones — in the Indian Ocean, the North Atlantic, the North Pacific, the South Atlantic and the South Pacific. But even in the parts of the ocean where the trash is at its most dense, Slat said he realized cleaning up the trash with the “vessel-and-boat thing wouldn’t be very practical. The plastic moves around…. But I thought, ‘Is that really a problem? Or a solution?'”
What would become arguably history’s most ambitious ocean cleanup initiative began as a high school science project. Slat spent hundreds of hours researching it, and thought he could resell the collected plastic, making the enterprise sustainable. He was, however, still just a teenager. He couldn’t do it alone. “Finding people to work on this was really difficult,” he said. “I contacted 300 companies for support, but no one replied. It was quite depressing.”
But then organizers of a local TED talk approached him. They had heard about his project. Was he interested in doing a TED talk? He said he was.
Most scientists, who by and large labor in obscurity, drop everything to talk to the press. They get their research partners to talk. They immediately furnish whatever tidbit of information a journalist may request.
Not so for Boyan Slat. People on his team aren’t immediately available for interviews. He declined to ask his parents to talk. And while Slat now has a PR team, a slick website and a media campaign that brings in tens of thousands of clicks and likes, he doesn’t enjoy yapping with scribes. He appears bored when, on the phone with a reporter, he retells the Ocean Cleanup’s origins story. It’s a tale he’s regurgitated ever since his YouTube video went viral, netting him a degree of celebrity that doesn’t seem to interest him. “If I had a choice,” Slat said, “I would be busy engineering.”
For Slat, whose youthful appearance has been both beneficial and harmful, such dedication has been crucial. The media, long a sucker for the boy-genius-saves-planet narrative, has fawned over his work. But in the early days of the project, critics also mentioned his age, implying a degree of naivete. They said the project both underestimated the power of the ocean and its own potential to harm the environment. Scientists said not only would it be difficult to anchor the barriers to the ocean’s depths, but that those barriers could inadvertently catch plankton. One activist called it a “fool’s errand.” Oceanographer Kim Martini described it as the “Wet Dream.”
And just like that, in 2013, Slat disappeared. He forwent college and, he said, ignored social obligations. (“He had a couple of days of holiday during [last] summer … and it was hell for him,” said Michael Hartnack, the project’s chief financial officer.) Slat said he declined more than 400 interview requests. Instead, he launched a crowdfunding campaign, securing $90,000 that he said he would use to answer his detractors and prove, once and for all, whether his idea could be done.
Around the time Slat published the feasibility study, which weighed in at 530 pages and was authored by 70 engineers and scientists, he took a trip to Washington. He walked through the Smithsonian National Air and Space Museum, stopping before the 1903 Wright Flyer, which the Wright brothers used to soar into the clouds and herald the aerial age. Standing there, Slat said he was struck by a realization.
“We are testing not to prove ourselves right, but to learn what doesn’t work,” Slat said. “The reason why the Wright brothers were successful wasn’t because they had the most resources, but because they understood how invention works. You have to iterate quickly, and you should be prepared to fail. Because things often don’t go as planned.”
Following years of study, and seven expeditions into the gyres, the project has started to solidify. Drawing on technology found in offshore rigs that have moored to depths of 2,500 meters, the team concluded that “the tools and methods that are available to the offshore engineering world can readily be applied for the realization of this project.” It also said that most of the plankton would pass underneath the barrier unharmed. Even in the worst case scenario that the plankton would be harmed, the feasibility study found that it would take “less than 7 seconds to reproduce” whatever had been lost.
Expectations have also lowered slightly. The study — which answered many of the project’s critics, but stirred fresh ones — found that a barrier that’s 100 kilometers (60 miles) long would clean up 42 percent of all of the plastic in the North Pacific gyre in 10 years.
“One of the things I’m happy to see about the work is that he is continually refining the concept,” said Nancy Wallace, director of the marine debris program at the National Oceanic and Atmospheric Administration. “It’s a proven concept on smaller levels,” she added. “The concept of trying to bin trash in waterways before it gets into the ocean is a proven concept.”
The project’s prototype, funded with millions raised through crowdfunding, will launch in June when the team unfurls a 100-meter (328-foot) barrier off the coast of the Netherlands. The first large trial is set for early next year, when the team will establish a two-kilometer (1.2 mile) barrier off the coast of Japan’s Tsushima Island. “Focusing on near-shore environments and focusing on trying to stop the plastic from entering the ocean in the first place” is a good place to begin, said Mallos, the Ocean Conservancy official.
By 2020, Slat says he hopes they will have collected enough information — and yes, failures — to move much deeper into the ocean, beginning the cleanup in earnest with a 100-kilometer barrier between Hawaii and California, in the heart of the North Pacific gyre.
So forgive Slat, whose project now bills itself as “largest cleanup in history,” if he doesn’t have time to talk or discuss his past in depth.
“I really hate looking back,” he said. “I think it’s useless. The only way is forward. When I look back one year ago, we were a handful of people and volunteers on a university campus. And now I’m walking into a meeting room, and am looking through the glass at 35 people we have on staff. I always hoped it would be successful, but never realized it would have become this professional or this big.”
And now, finally, he said, it’s time to see if it works.
GM and Eugenics Fears
This summer brings the 50th anniversary of the full deciphering of the genetic code — the four-billion-year-old cipher by which DNA’s information is translated and expressed — and the centenary of the birth of Francis Crick, who both co-discovered the existence of that code and dominated the subsequent 13-year quest to understand it. Europe’s largest biomedical laboratory, named after him, opens this summer opposite St Pancras station.
At a seminar today to mark Crick’s centenary at Cold Spring Harbor Laboratory in New York, hosted by his famous collaborator Jim Watson, I shall argue that the genetic code was the greatest of all the 20th-century’s scientific discoveries. It came out of the blue and has done great good. It solved the secret of life, till then an enigma: living things are defined by the eternal replication of linear digital messages. It revealed that all life shares the same universal but arbitrary genetic code, and therefore shares common ancestry, vindicating Charles Darwin.
From the very moment that Crick first showed a chart of the genetic code, on May 5, 1966, at the Royal Society in London, speculation began about the dangers of using this knowledge for the eugenic enhancement of human beings or for making biological weapons. The discovery only three years ago of a precise gene-editing tool (known as CRISPR-Cas9) has revived that debate yet again, not least with the first application, by Kathy Niakan of the Crick Institute, to use CRISPR experimentally (not therapeutically) on very early human embryos.
Yet in truth the threat of eugenics is fainter than ever. This is for three reasons. First, the essence of eugenics was compulsion: it was the state deciding who should be allowed to breed, or to survive, for the supposed good of the race. As long as we prevent coercion, we will not have eugenics. Our politics would have to change far more drastically than our science.
Remember that many of the most enthusiastic proponents of eugenics were socialists. People such as Sidney and Beatrice Webb, George Bernard Shaw, H. G. Wells, Karl Pearson and Harold Laski saw in eugenic policies the start of the necessary nationalisation of marriage and reproduction — handing the commanding heights of the bedroom to the state. In The Revolutionist’s Handbook and Pocket Companion, an appendix to Shaw’s play Man and Superman, one of the characters writes: “The only fundamental and possible socialism is the socialisation of the selective breeding of Man.” Virginia Woolf thought imbeciles “should certainly be killed”.
Surprisingly, it was California that pioneered the eugenic sterilisation of disabled and “imbecile” people in the 1920s; and it was from California that Ernst Rüdin of the German Society of Racial Hygiene took his model when he was appointed Reichskommissar for eugenics by the incoming National Socialist government in 1933. The California conservationist Charles Goethe returned from a visit to Germany overjoyed that the Californian experiment had “jolted into action a great government of 60 million people”.
The second reason we need not fear a return of eugenics is that we now know from 40 years of experience that without coercion there is little or no demand for genetic enhancement. People generally don’t want paragon babies; they want healthy ones that are like them. At the time test-tube babies were first conceived in the 1970s, many people feared in-vitro fertilisation would lead to people buying sperm and eggs off celebrities, geniuses, models and athletes. In fact, the demand for such things is negligible; people wanted to use the new technology to cure infertility — to have their own babies, not other people’s. It is a persistent misconception shared among clever people to assume that everybody wants clever children.
Third, eugenics, far from being inspired by genetic knowledge, has been confounded by it. Every advance in genetics over the past 116 years has shown that it is less easy to enhance human beings than expected, but easier to cure diseases. The discovery of genes — effectively in 1900, when Gregor Mendel’s work was disinterred — made the selective breeding of people much harder than Francis Galton, the founder of eugenics, had expected. This was because it meant that “undesirable” traits could be hidden in healthy people (“recessive” genes) for generations. It would therefore take centuries to “breed out” any trait thought undesirable by the state.
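A back-of-envelope calculation shows just how slow that “breeding out” would be. It uses the standard population-genetics result that, under complete selection against affected individuals, a recessive gene variant’s frequency q falls to q/(1+q) each generation; the starting frequency and the years-per-generation figure below are assumed examples, not figures from the article:

```python
# How slowly a hidden ("recessive") trait disappears even if the state stops every
# affected person from reproducing. Standard result: q -> q / (1 + q) per generation.
# The starting frequency (1%) and generation length (25 years) are assumptions.

q = 0.01             # assumed: 1 in 100 copies of the gene carry the recessive variant
generations = 0
while q > 0.005:     # how long merely to halve its frequency?
    q = q / (1 + q)  # complete selection against affected (homozygous) individuals
    generations += 1

print(generations, "generations, i.e. roughly", generations * 25, "years")
```

The answer is about 100 generations, or roughly 2,500 years, just to halve the variant’s frequency: centuries indeed.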
The more recent discovery that traits such as intelligence are caused by the complicated interaction of multiple genes of small effect means that it is anyway going to be virtually impossible to decide what genetic recipe to recommend to somebody who wants a clever child, or a good-looking one, or an athletic one. By contrast, the genetic changes that cause terrible afflictions such as Huntington’s disease or cystic fibrosis are singular and obvious. Selecting embryos that lack such traits, or editing the genes of people so that they are born without carrying such traits, will always be much easier than selecting genetic combinations that might, in the right circumstances and with the right upbringing, lead to slightly higher IQ. Cure will always be easier than enhancement.
Fifty years on, the discovery of the genetic code has produced a cornucopia of good and very little harm. It has convicted the guilty and exonerated the innocent in court on a huge scale through DNA fingerprinting. It has enabled people to avoid passing on terrible diseases. It has led to the development of new drugs, new therapies and new diagnoses. It has given partial sight back to a blind man through gene therapy. It has increased the yield of crops while reducing the use of chemical pesticides. It has discovered new species. It has illuminated ancient history and explained the parentage of an archbishop.
Against this, what? One dreadful mistake in the early history of gene therapy, which led to a single death. Some narrowly averted discrimination in health and life insurance. Other than that, I cannot think of any bad results from DNA. Yet still we are bombarded with scares about Frankenstein foods, biological warfare, designer babies, genetic discrimination and the return of eugenics. We have a virtual ban on GM crops and put huge obstacles in the way of GM vaccines. For Crick’s sake, let us agree that genetics has been a huge force for good.
Trial and Error
Iterative innovation. It is the idea that technological change typically happens not through detonations of creative insight, but through a long process of trial and error.
Take James Dyson’s vacuum cleaner. The product is marvellously engineered, so it’s tempting to assume that it must have popped fully formed into his head. In fact, Dyson worked through 5,126 failed prototypes before coming up with the design.
Or take Pixar’s movies, such as Toy Story and Finding Nemo, which, again, are marvellously constructed. They prompt the thought that they must have emerged from a presiding, imaginative genius. In fact, Pixar is a master of iteration, testing and learning as it hones every plotline via 125,000 storyboards for a 90-minute feature. My argument, really, was that we need to think about creativity in a new way. When we think of it as a journey of trial and error, we are far more likely to be resilient to the failures that are an inevitable aspect of innovation, and to be emboldened to take risks of experimentation.
Engineered Bacteria
MANY of us will have seen a lake covered in an algal bloom, the crystal clear surface transformed into a carpet of green, with a dark and suffocated body of water beneath. What’s fascinating about such an event is that it happens quickly – the lake reaches a tipping point and within a month the algae have taken over.
This state isn’t permanent, however: lakes can be tipped back. Two decades ago, the ecologist Marten Scheffer showed that simply removing large predatory fish helps a lake regain its crystal clear state. It works because it allows the population of zooplankton to increase, and they eat the algae and stop the bloom. The method has been used around the world with great success.
In the coming decades, climate change will mean many more ecosystems reaching catastrophic tipping points. Several grandiose suggestions have been made as to how we might engineer our way back from the brink, such as giant mirrors in space that would reduce the amount of solar energy reaching Earth. But such schemes are expensive and risky. My colleagues and I think we have a better idea, one that takes its cue from the solution to the algal blooms: engineering ecosystems using synthetic life.
This approach could help prevent the spread of deserts, restore polluted lakes and rivers, attack the islands of plastic rubbish accumulating in the oceans, and deal with sewage or vast landfills resulting from industrial and farming activities. If done correctly, it would allow us to more easily predict and manage the consequences compared with proposals that would affect the entire globe. And best of all, the materials needed are essentially free.
Synthetic biology is a well-established field. The basic premise is to treat an existing cell as a chassis and plug in chunks of genetic material that code for specific jobs, getting the cell to do new things without otherwise affecting it.
To see how we might apply these principles to re-engineering a degraded environment, let’s take the example of desertification. Around 40 per cent of the world’s population lives in arid or semi-arid areas, and small changes in incoming sunlight, water or grazing can trigger a rapid shift into a desert.
One way to avoid this is to encourage the growth of bacteria that naturally enhance moisture retention in the soil. Even a small improvement aids plant growth, which in turn provides nutrients for more bacteria, creating a virtuous circle that allows the system to escape the tipping point.
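That feedback logic can be sketched with a toy model. All numbers here are invented for illustration; the point is only that a small, fixed boost in moisture retention can flip the long-run outcome from bare ground to full plant cover:

```python
# Toy positive-feedback model of desertification (illustrative only; every parameter
# is invented). Vegetation improves moisture retention, moisture feeds vegetation,
# and a small extra boost to retention can tip the system past the threshold.

def equilibrium_cover(retention_boost: float, steps: int = 2000) -> float:
    v = 0.05                                        # start from nearly bare ground
    for _ in range(steps):
        moisture = 0.3 + 0.7 * v + retention_boost  # retention improves with plant cover
        growth = moisture - 0.36                    # net growth only above a moisture threshold
        v = min(1.0, max(0.0, v + 0.05 * growth))   # cover stays between 0 and 1
    return v

for boost in (0.0, 0.05):
    print(f"retention boost {boost:.2f} -> long-run vegetation cover {equilibrium_cover(boost):.2f}")
```

With no boost the toy system slides to bare ground; with a modest boost the virtuous circle takes over and cover climbs to its maximum. Real soil ecosystems are vastly more complicated, but this is the shape of the argument.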
But transplanting these bacteria from their natural habitat into another environment is quite a challenge. Just how difficult that is can be seen in the fate of the journal Microbial Releases. It launched in 1992 to publish accounts of experiments that put microbes into the ecosystem, but folded only two years later because so many of the tests failed. The problems boiled down to microbes being poorly adapted to survive in their new, unfamiliar environment.
Release the genes
We are in a better position today, thanks to synthetic biology. I think bacterial transplants should work much better if we take species that already live in arid soil, cyanobacteria for instance, and engineer them so that they produce a polymer that helps the soil retain water. This is still no mean feat, as it involves growing the bacteria in the lab and then moving them into the wild, two potentially very different environments. But it should be possible if we improve our understanding of the inner workings of cyanobacteria – and this is something my team in Barcelona is already looking into.
Another approach, suggested by my colleague Victor de Lorenzo, is to release a chunk of DNA into the environment we wish to manage, so that species of bacteria living there can naturally incorporate it into their genome. These “genetic modules” are designed to spread through the population, and would provide instructions to enable any cell containing them to sense a physical or chemical property of their surroundings, and then produce chemicals to help steer those conditions towards a beneficial, stable state. For instance, and staying with the desertification example, they might equip whatever bacterium they enter to sense the level of moisture near it and produce a water-trapping polymer.
But I can already hear the alarm bells ringing in your head. These ideas are bound to raise fears about the “Jurassic Park effect”: aren’t we attempting to manage systems that are too complex to control?
There is good reason to be cautious about unintended consequences, but remember how high the stakes are. We are already experiencing a massive global extinction event and things will get worse not steadily, but suddenly, when we reach tipping points like those algae-infested lakes.
What’s more, recent research has unearthed a few ways to control how widely our synthetic creations can propagate. For instance, it’s possible to give bacteria a suicide switch that automatically flicks once they stray outside boundaries that we define. This is already happening in non-natural environments. Hendrik Jonkers at Delft University of Technology in the Netherlands has developed a type of concrete impregnated with bacteria. When small cracks develop, the bacteria spring to life and produce calcium carbonate that repairs the damage. But the bacteria are designed so that they can’t survive outside the concrete – there is an ecological firewall.
Something similar should be possible in natural environments, too. Take watercourses polluted with sewage. Bacteria designed to capture carbon dioxide or break down toxic chemicals could be added during waste treatment, and be programmed to switch off when they are washed into open water.
As part of a research project called Synterra, my collaborators and I are planning to think these ideas through more thoroughly and begin to test them. We are using computer simulations to better understand how synthetic bacteria would function in ecosystems, and we plan to use controlled outdoor plots to test the cyanobacteria I mentioned earlier.
That may sound unpalatable, but let’s face facts. As recent debates over a new human-dominated geological epoch, the Anthropocene, show, we have been moulding the environment to our needs for centuries. We will inevitably keep doing that – so let’s do it right.
Robot Surrogates
Transplanting ourselves into a distant robot isn’t as hard as you’d think.
IN THE 2009 Bruce Willis movie Surrogates, people live their lives by embodying themselves as robots. They meet people, go to work, even fall in love, all without leaving the comfort of their own home. Now, for the first time, three people with severe spinal injuries have taken the first steps towards that vision by controlling a robot thousands of kilometres away, using thought alone.
The idea is that people with spinal injuries will be able to use robot bodies to interact with the world. It is part of the European Union-backed VERE project, which aims to dissolve the boundary between the human body and a surrogate, giving people the illusion that their surrogate is in fact their own body.
In 2012, an international team went some way to achieving this by taking fMRI scans of the brains of volunteers while they thought about moving their hands or legs. The scanner measured changes in blood flow to the brain area responsible for such thoughts. An algorithm then passed these on as instructions to a robot.
The volunteers could see what the robot was looking at via a head-mounted display. When they thought about moving their left or right hand, the robot moved 30 degrees to the left or right. Imagining moving their legs made the robot walk forward.
Now, a second team has tested a similar set-up in people who are paralysed from the neck or trunk down. To make the technology cheaper, more comfortable and more portable, the team swapped the fMRI scanner for an electroencephalogram (EEG) cap, which records electrical activity in the brain using electrodes attached to the scalp.
Each of the three volunteers in Italy donned the cap plus a head-mounted display that showed what a robot – in a lab in Tsukuba, Japan – was looking at. To move the robot, they had to concentrate on arrows superimposed across the display, each flashing at a different frequency. A computer could detect which arrow a participant was staring at using the EEG readings that each frequency provoked. It then sent the corresponding movement to the robot.
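The decoding step rests on a well-known effect: staring at a light flickering at, say, 10 Hz produces a matching 10 Hz rhythm in the EEG recorded over the visual cortex. Here is a minimal sketch of that frequency-picking idea using a synthetic signal; the flicker frequencies, noise level and command mapping are invented, and the actual system described above is considerably more sophisticated:

```python
# Minimal sketch of frequency-based EEG decoding with a synthetic signal.
# Flicker frequencies, noise level and the command mapping are invented for illustration.
import numpy as np

fs = 256                                     # sampling rate in Hz
t = np.arange(0, 4, 1 / fs)                  # 4 seconds of simulated "EEG"
commands = {8.0: "turn left", 10.0: "go forward", 12.0: "turn right"}

# Simulate a subject staring at the arrow flickering at 10 Hz, buried in noise.
eeg = 0.5 * np.sin(2 * np.pi * 10.0 * t) + np.random.randn(t.size)

# Measure spectral power at each candidate flicker frequency and pick the strongest.
spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
power_at = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in commands}
chosen = max(power_at, key=power_at.get)

print(f"Strongest response at {chosen} Hz -> send command: {commands[chosen]}")
```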
The set-up allowed the volunteers to control the robot in near real time. They were able to pick up a drink, move across the room and put the drink on a table. “It took just 6 minutes of training to start using the technology,” says Emmanuele Tidoni at the University of Rome. “The feeling of actually embodying the robot was good, although needless to say, the sensation varied over time,” said Alessandro, one of the volunteers living with spinal cord injury. “When the robot was stationary the feeling of embodiment was low, but the moment I gave the first command or changed direction, there was this feeling of control and increased embodiment.”
The team also tried to boost this feeling using auditory feedback. While controlling the robot, both able-bodied volunteers and people with spinal cord injury managed to place a bottle closer to a target location when they heard footsteps as the robot walked, rather than a beep or no noise at all. The improved control suggests they felt more in tune with the robot itself, says Tidoni.
The project also studied how this technology could be used in social interactions, by getting people in Italy to play a chess-like game via the robot against an opponent located in Munich, Germany. The results will be published later this year.
Alessandro is excited about the potential. “It’s a sensitive and important issue, but will certainly have a major impact on the way we all can communicate to each other,” he says.
But he hopes that the implications for mental health will also be looked at. “What will happen to a person who cannot move in real life after they use this technology intensively? Will they still feel isolated or lonely?” he asks. “Any developments also need to study the impact that these technologies may have on the psychological well-being of people with various degrees of disability.”
Although we’re not yet at Surrogates-level immersion, the technology could one day dramatically improve the lives of people with paralysis, says Noel Sharkey at the University of Sheffield, UK. “This is a very long way off, but getting towards that for people who otherwise can’t move would be astounding.”
Empathy For Robots
Humans are strange creatures. We sleepwalk, pick our noses, and name our cars. Some of us like black licorice and some of us are afraid of cotton balls. Of all our idiosyncratic tendencies, our attachment to things - blankets, stuffed animals, toys, vehicles, smartphones - is particularly common and sheds light on how humans feel about robots, and why.
In the movie Her, Theodore Twombly falls in love with Samantha, an artificially intelligent operating system. In Ex Machina, Caleb falls in love with the sentient robot Ava. Both Samantha and Ava are conscious—like humans, they experience the world objectively and subjectively, and both express emotions, genuine or not, toward the human characters. It’s easy to understand why these lonely men fall for them, especially given Ava’s sexualized appearance and Samantha’s husky ScarJo voice.
But these are just movies. In the real world, robots aren’t conscious (and the jury’s out on whether they ever will be). They can’t feel anything. No matter how advanced or humanlike, robots are composed of circuits, cameras, and algorithms. They’re machines, or as Isaac Asimov often argued, they’re tools, not beings. But if that’s the case, what explains our feelings for them?
The more humanlike a robot seems in both appearance and ability, the easier it is for us to project human thoughts and feelings onto them (this effect is even more pronounced in Japan, where followers of Shinto or animism believe that objects can have souls). Anthropomorphizing is a natural human tendency, as our understanding of the world and everything in it is based on our own experiences. We personify all kinds of objects—we refer to a trusty vehicle as “old girl,” feel nagged when our alarm clocks scream at us to wake up, and experience irritation or sympathy as our dated computer limps along, struggling to obey our commands.
We do this with robots, too. MIT researcher Kate Darling conducts experiments in which people play with Pleos, small mechanized dinosaurs, and are then asked to “torture” them. Participants often can’t bear to do it and can’t watch when others do, even though they know Pleos can’t feel anything. The exercise really isn’t about the Pleo at all—it’s about the human participants and their feelings. It doesn’t matter that their attachment only goes in one direction. The more affection someone feels for an object or a robot, the stronger the tendency to anthropomorphize becomes. Think back to your favorite childhood toy—perhaps a stuffed animal or a blanket. How would you feel if someone ripped it apart? You’d experience some degree of anguish even though you know your stuffed animal can’t feel pain and doesn’t know what’s happening.
This is exactly what happens when humans interact with “social” or interactive robots that narrow the gap between machines and people by making sounds (Pleos’ whimpers contribute to people’s horror at their mistreatment), mimicking facial expressions, or reacting physically to their surroundings. And if you think only the bleeding hearts among us are susceptible to anthropomorphizing, think again.
Robots such as the TALON 3B find and defuse land mines in war zones. Often, this results in the robot blowing itself up, losing limbs and other parts. This is the purpose of such robots—better a machine lose an arm or a life than a human. But the officers and cadets working with the robots don’t necessarily feel this way. An Army colonel put an end to a military exercise in which a persistent TALON robot lost all but one leg because it was “inhumane.” Soldiers award robots with Purple Hearts and sometimes refuse to leave them behind. Military personnel develop close bonds when they depend on one another for survival; the same goes for the robots that help them.
When it comes to our emotional responses to robots, something in the human brain overrides reason. It doesn’t matter that the robot can’t feel or think. A group of German researchers conducted a study in which they showed subjects two sets of videos—one of an anonymous person interacting affectionately with a Pleo and another of that person interacting violently with it. The 40 subjects had an observable negative response to the negative videos, measured primarily via increased perspiration. The researchers repeated the experiment with three sets of videos: one of human-Pleo interaction, another of positive and negative interactions between two humans, and the last of a human interacting with a cardboard box. This time, they measured 14 subjects’ responses with an fMRI scanner. The results revealed the subjects’ positive feelings upon watching the human’s friendly interaction with the Pleo. While the subjects responded most negatively to the video of human on human violence, they also responded negatively to the Pleo-directed violence. Most interestingly, their frontal lobe and limbic systems responded similarly when they watched the negative treatment of Pleo and a human. In other words, humans respond with more empathy to other humans, but they also respond with observable empathy toward robots.
Another recent study, conducted by Japanese researchers, confirms these findings by measuring humans' responses to photos of intact human and robot hands and to photos in which those hands are being cut with scissors or a knife. The researchers measured participants' responses with EEG scans, which indicated that the subjects experienced similar visceral responses to images of human and robot hands in painful situations.
Such studies also help explain the existence of real-life Theodore Twomblys. People have already begun to develop deep feelings for robots, AI, love dolls, and video game or anime characters. This also means that robots and/or their human creators can leverage human empathy through emotional mimicry and bonding. Researchers from Munich conducted a study demonstrating that when robots mirror human emotions by smiling back at them or matching their level of enthusiasm, humans are more disposed to help them complete a task.
Human emotion is a tricky beast. Some see it as a weakness, given that emotions can cause humans to act impulsively and irrationally. Others see it as a strength, since emotions such as fear have long played a crucial role in human survival. The ability to feel emotion is currently a crucial difference between humans and robots, yet that gap is narrowing, if not closing, now that robots have become objects of our affection.
what3words.com
In common with perhaps 15 million South Africans, Eunice Sewaphe does not have a street address. Her two-room house is in a village called Relela, in a verdant, hilly region of the Limpopo province, five hours' drive north-east of Johannesburg. If you visited Relela, you might be struck by several things the village lacks – modern sanitation, decent roads, reliable electricity – before you were struck by a lack of street names or house numbers. But living essentially off-map has considerable consequences for people like Eunice. It makes it tough to get a bank account, hard to register to vote, difficult to apply for a job or even receive a letter. For the moment, though, those ongoing concerns are eclipsed by another, larger anxiety. Eunice Sewaphe is nine months pregnant – her first child is due in two days' time – and she is not quite sure, without an address, how she will get to hospital.
Sitting in the sun outside her house with her neighbours, in a yard in which chickens peck in the red dirt, Eunice explained to me, somewhat hesitantly, her current plan for the imminent arrival. The nearest hospital, Van Velden, in the town of Tzaneen, is 40 minutes away by car. When Eunice goes into labour, she will have to somehow get to the main road a couple of miles away in order to find a taxi, for which she and her husband have been saving up a few rand a week. If there are complications, or if the baby arrives at night, she may need an ambulance. But since no ambulance could find her house without an address, this will again necessitate her getting out to the main road. In the past, women from Relela, in prolonged labour, have had to be taken in wheelbarrows to wait for emergency transport that may or may not come.
Mortality rates for mothers and babies in South Africa remain stubbornly high. Of 1.1 million births a year, 34,000 babies die. More than 1,500 women lose their lives each year in childbirth. Those statistics are a fact of life in Relela. Josephina Mohatli, one of Eunice's neighbours, explains quietly how she went into labour prematurely with her first child. When she finally managed to get a taxi, she was taken to two local clinics and then a private doctor, none of which was able to help her. When she at last reached the hospital, after several desperate hours, her baby had died.
I have come up to Relela with Dr Coenie Louw, the regional head of the charity Gateway Health, which works to improve those mortality statistics. Dr Louw, 51, speaks with a gruff Afrikaans accent that belies his evangelist's optimism about making a difference for these women. “Though frankly,” he says, “if I don't know where you are, I can't help you.”
Google Maps will only bring help to the edge of the village. “We tried to do something by triangulating between three cell phone towers,” he says, an approach that proved predictably unreliable. Searching for other solutions, Louw came across what3words, the innovative British technology that, among many other things, neatly solves the question of how an ambulance might find Eunice Sewaphe.
Five years ago, the founders of what3words divided the entire surface of the planet into a grid of squares, each one measuring 3 metres by 3 metres. There are 57tn of these squares, and each one of them has been assigned a unique three-word address. My own front door in London has the three-word address “span.brave.tree”.
The front door of Eunice’s house in Relela might be “irrigates.joyful.zipper” (or, in Zulu, “phephani.khuluma.bubhaka”). To test the system, I have driven up here with one of Gateway Health’s drivers, Mandla Maluleke. Maluleke has keyed the three-word code into his phone app, which has dropped a pin on a conventional mapping system. Once we leave the main highway, the GPS immediately signals “unknown road”, but even so, after many twists and turns it takes us precisely to “irrigates.joyful.zipper”, and Eunice’s front door.
The what3words technology was the idea of Chris Sheldrick, a native of rural Hertfordshire (and so someone who knows what it is like to stand out in a country lane flagging down delivery drivers armed only with a postcode). Like all the best ideas, this one grew out of a specific problem that had maddened him. Sheldrick, 35, had started life as a musician; then, after a sleepwalking accident that damaged his wrist, he set up a business organising musicians and production for festivals and parties around the world. Despite the advent of Google Maps, the problem that dogged his business was bands turning up at the wrong site entrance. Sheldrick employed a person whose sole duty was to man a phone line, trying to get a band to the right field. Having given up on conventional satnav, they tried using GPS co-ordinates, but get one figure wrong and the party never got started.
Sheldrick thought there had to be a better way. Looking back now, he says that “the key thing we were trying to solve with what3words was how do we get 15 digits of latitude and longitude into a more communicable human form”. Advances in satellite mapping and navigation meant that if you were a Deliveroo rider or an Amazon courier or a last-minute saxophonist, you were never really lost, but you were also often not exactly in the right place. Companies like Google and TomTom recognised this problem, but the solution they developed was an alphanumeric code of nine characters. For Sheldrick that was clearly a nonstarter: “When someone asked where you lived, it would be like trying to remember your wifi router password.” That was when the idea of three words came up. A bit of maths proved it was possible. “With 40,000 recognisable dictionary words, you have 64tn combinations, and there are 57tn squares.”
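The arithmetic is easy to check: 40,000 words cubed gives 64tn three-word combinations, comfortably more than the 57tn squares. The short Python sketch below illustrates the counting argument, plus one simple way a fixed word list could label every square. It is only an illustration, not what3words' actual algorithm, which also shuffles its assignments and keeps similar-sounding addresses a long way apart.

# Back-of-the-envelope check of the what3words arithmetic. The numbers come
# from the article; the encoding scheme is a toy illustration, not the
# company's real (proprietary) mapping.
WORDS = 40_000                        # words in the hypothetical word list
SQUARES = 57_000_000_000_000          # roughly 57tn 3m x 3m squares

assert WORDS ** 3 >= SQUARES          # 64tn combinations cover 57tn squares

def index_to_words(square_index: int) -> tuple[int, int, int]:
    """Map a square's index to a unique triple of word-list positions."""
    first, rest = divmod(square_index, WORDS * WORDS)
    second, third = divmod(rest, WORDS)
    return first, second, third

def words_to_index(first: int, second: int, third: int) -> int:
    """Invert the mapping, recovering the square from its three words."""
    return (first * WORDS + second) * WORDS + third

assert words_to_index(*index_to_words(123_456_789_012)) == 123_456_789_012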
The algorithm behind what3words took six months to write. Sheldrick worked on it with two friends he had grown up with: Mohan Ganesalingam, a maths fellow at Trinity College, Cambridge, and Jack Waley-Cohen, a full-time quiz obsessive and question-setter for Only Connect. After the initial mapping was complete, they incorporated an error-correction algorithm, which places similar-sounding combinations a very long way apart. And then there was the question of language: with the help of a team of linguists, what3words is now available in a couple of dozen languages, from Arabic to Zulu.
It has also grown from a company of three to now around 70 full-time employees after two multimillion-dollar rounds of venture capital.
The challenge now is educating the world in the system. “We obviously aim to be a global standard,” Sheldrick says. To that end they have recently signed licensing agreements with companies including Mercedes, which will build the system, including voice activation, into its A-class cars, and TomTom, which will incorporate three-word commands into its navigation platforms. The technology also offers an off-the-shelf solution for the many countries that lack any kind of universal address system. Ten governments and their postal services – including Mongolia, Nigeria, Ivory Coast and Tuvalu – have signed up to the idea.
“If you think about addresses,” Sheldrick says, “some are from centuries ago, some are from last week, but there is also so much of the world that has not engaged with that. And that can include where I live in rural Hertfordshire, to our office in London where the postcode does not point to the right place, to entire countries, like Mongolia or Saudi Arabia, which have just never known anything other than using directions.” Given that smartphones now have such wide penetration, he believes there is the opportunity to leapfrog conventional address systems.
Sheldrick is watching the slow advance of his idea in travel guides and email signatures and Airbnb booking forms. The technology has already proved invaluable in disaster zones and refugee camps as well as at rock festivals. It could easily enable drone delivery. Earlier this month the lead GP on the Isle of Mull, Dr Brian Prendergast, sick of being unable to find his patients, requested they all sign up to it.
Lyndsey Duff, the head of what3words' two-woman South African office, got quite emotional, she says, when she first heard about the possibilities of the technology two years ago. She was working at the time for the South African High Commission in London. There are many ways in which South Africa was historically divided, but one of them, she suggests, “was always by maps”. While the whites-only areas of apartheid cities had street names, the black townships and rural villages like Relela were just inked in as grey spaces. “For me, what3words is the perfect balance of good business and doing good,” Duff says. She points to the fact that iStore – the South African equivalent of the Apple Store – is now using what3words for its deliveries, as well as to the potential of projects like Gateway Health (which use the service for a nominal fee). Not only ecommerce but also social progress is being held up, Duff suggests, by the absence of the “last mile” technology that will take you exactly where you need to go.
On the ground, that last mile can still be quite a hard sell. In Limpopo, as elsewhere, Gateway Health is faced with the question of how to make the technology go viral. Dr Louw believes it takes a village. His strategy is to use Relela as a pilot scheme for what3words, to show its positive impact on maternal health, and to use that case study to try to persuade regional and national government to adopt the address system as standard.
In Relela, I joined him as he tried to sell the benefits to perhaps the toughest crowd of all, the village’s hereditary chief and councilmen, who have had plenty of white men from Johannesburg come here and tell them how they can improve the way they have always done things. The chief’s big brick house is at the top of a hill looking down the valley toward the Cloud Mountains. In late afternoon, we are invited to sit in his front yard, where benches have been set up for a council meeting under a large, spreading jacaranda tree. When Louw makes his pitch for every home in Relela to utilise a what3words address, he is met with silence. This, it is eventually explained, is because the council meeting does not yet have a quorum. We sit on the benches for a long, chill hour while the sun goes down, waiting for the other two councillors to come up the hill. Eventually when it is getting dark and six men are present, Louw repeats his pitch. Again, silence.
I have a go. “If someone was to come to the village from outside,” I say, “how might they currently find the person they wanted to meet?”
That’s simple, Hendrik Mowane, the council secretary, suggests. “The village is divided into zones, there are seven zones [each with about 4,000 people] and in each zone there is one man who knows the families there. The visitor would come to this man’s house and he would take them to the place.”
“But,” I say, “what if a child was sick, and needed an ambulance. Or a woman was in distress, in labour? What if the zone leader was out?”
The councilmen discuss these questions gravely. “Then that would be a problem,” Mowane concludes.
Louw shows a video on his phone about how what3words might solve that problem. At this point, slowly, almost imperceptibly, a smile spreads across one or two of the councilmen’s faces. Another meeting is arranged, this one involving a big screen for the villagers, and lunch.
On the way back down from the chief’s house, Louw suggests this is progress. He believes in a year or two the argument will be won, and there will be a team of community drivers on hand to get women like Eunice Sewaphe to hospital. “It is exciting times,” he insists. There are 139 villages like Relela in this region. But once Relela has addressed itself, then – neighbours being neighbours – he hopes the other 138 will follow. Beyond this valley, there are 4 billion people without an address. Perhaps technology can now put them on a map, three words at a time.
Tambora Explosion in 1815 led to the first bicycle
Two hundred years ago, Mount Tambora exploded and changed the world. The cloud of ash and sulfur dioxide it threw up caused the Year Without a Summer in 1816, a year so cold that crops failed around the world, causing massive famine. Horses were slaughtered because there was not enough food for people, let alone animals.
Baron Karl von Drais needed a way to inspect his tree stands that did not rely on horses; horses and draft animals were also victims of the Year Without a Summer, since they could no longer be fed in the great numbers that had once been kept. Drais discovered that by placing two wheels in a line on a frame, a rider could balance through dynamic steering. Thus a narrow vehicle capable of maneuvering on his lands, the Laufmaschine, became the immediate precursor of the bicycle.
Baron von Drais, later just Karl Drais, was a fervent democrat and revolutionary who ended up on the wrong side of the mid-century revolutions sweeping Europe, so he did not get much credit for his invention. However, a new study by historian Hans-Erhard Lessing is quoted in New Scientist:
The resulting velocipede, or draisine, was the first vehicle to use the key principle of modern bicycle design: balance. "To modern eyes balancing on two wheels seems easy and obvious," says Lessing. "But it wasn't at the time, in a society that normally only took its feet off the ground when riding horses or sitting in a carriage."
The Laufmaschine was nicknamed the dandy-horse and hobby-horse, and a French version was called the velocipede.
A big problem for would-be velocipedists was the state of the roads: they were so rutted that it was impossible to balance for long. The only alternative was to take to the sidewalks, endangering the life and limb of pedestrians. Milan banned the machines in 1818. London, New York and Philadelphia banned them from sidewalks in 1819. Calcutta followed suit in 1820. This clampdown, combined with a series of good harvests after 1817, ended the vogue for velocipedes.
Drais also invented the first typewriter with a keyboard and a better wood stove. However, after the revolution the Royalists tried to have him declared mad and locked up. They stripped him of his pension (awarded for his inventions) and he died penniless in 1851. But he is now credited again with inventing the precursor to the bicycle, a direct response to the Year Without a Summer and the eruption of Mount Tambora.
The Adjacent Possible
How the wisdom of Homer Simpson can teach us important lessons about progress.
There’s a scene in The Simpsons in which Homer’s half-brother Herb unveils his new invention – a machine for translating baby talk – and Homer tells him: “People are afraid of new things. You should have taken an existing product and put a clock in it.”
Thus, in one offhand remark, did America’s favourite animated buffoon distil the message of an expanding body of scientific research: human beings do not like too much novelty. The innovations that take off combine a lot of familiarity with a little bit of new.
The Simpsons itself illustrates this. When asked why the TV series was so popular, industry insiders said that it’s because it took an existing format – the sitcom – and injected animation and irreverent humour. But the rule holds across many other areas.
In 2016, Kevin Boudreau, then of Harvard Business School, published an analysis of the way medical research proposals are evaluated for funding. Proposals that were highly innovative tended to garner lower marks than less innovative ones. Highly conventional projects were marked down too, but there was a sweet spot in the middle: the highest marks went to projects that combined received wisdom with some fresh thinking.
A year earlier, Hyejin Youn, who studies complex systems at Northwestern University and the Santa Fe Institute, scrutinised all the patents filed in the US between 1790 and 2010 – using them as a proxy for innovation. She found that many of the patents until about 1870 represented new technologies, or genuine discoveries. From then on, however, innovation became more about combining existing technologies in new ways. It became modular, like Ikea furniture.
Scientists have long attempted to understand the secret of successful innovation, with a view to guiding it and predicting the next big thing. Predicting innovation has a whiff of the oxymoron about it: if you can predict it, is it really new? Nevertheless, it has been done – sort of. An example is Moore’s law.
In 1965, American engineer Gordon Moore predicted that the number of transistors that could be packed into a silicon chip would double about every two years. Though it’s a matter of debate whether his law still holds, most experts agree that it did so well into the present century. So the general trajectory of technological progress, if not individual innovations, does seem to lend itself to forecasting.
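As a rough sketch of what that compounding implies, in Python (the starting figure is the roughly 2,300 transistors of the 1971 Intel 4004, and the two-year doubling period is the popular rule of thumb rather than Moore's original formulation in terms of minimum-cost component counts):

# Transistor count under a naive "double every two years" rule of thumb.
def transistors(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    return start_count * 2 ** (years / doubling_period)

print(f"{transistors(2_300, 40):,.0f}")   # about 2.4 billion after 40 years,
                                          # the right order of magnitude for a 2011-era chip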
Is it possible to be more precise? Two years ago, Andrea Tacchella and colleagues at the Institute for Complex Systems in Rome suggested that it might be. They noticed that the language used to describe innovation is, like innovation itself, combinatorial. In their 1906 patent, the Wright brothers called their invention a “flying-machine”, for example, having no other word to describe it, and it was only a decade later that this hastily glued-together name was replaced by a shiny new one: “aircraft”.
The codes used to classify patents are also modular. Under the International Patent Classification (IPC) system agreed in 1971, every patent filed is assigned a combination of letters and numbers depending on which of eight sections it falls under – examples include “Electricity” and “Fixed Constructions” – with additional letters and numbers adding detail. When technologies combine in new inventions, so do these codes. Tacchella’s group used this fact to try to predict future combinations of codes – and hence, future innovation.
The first step was to feed about 7,000 patent codes into a neural network and let the network arrange them in space according to how often they appeared together in a global patent database. The space in question was not physical space, obviously, but something more abstract: the space available for innovation. Once they had done this, they could identify zones in that space that had yet to be invaded by existing technologies. Such areas, which the biologist Stuart Kauffman referred to in the context of evolution as the “adjacent possible”, are ripe for innovation.
As the patent database evolved over time, the researchers could see pairs of codes moving towards each other as they cropped up in ever closer technological “neighbourhoods” – the same IPC sections, then subsections, and so on. This happened, for example, when codes for avoidance features in road vehicles and for obstacle detection converged in patents for self-driving cars. Using this approach, they could predict an innovation up to five years before it happened.
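To make the mechanics a little more concrete, here is a minimal Python sketch of the general idea: represent each patent code by which other codes it appears alongside in a given period, then watch a pair of codes drift closer together over the years. It uses simple co-occurrence counts and invented toy filings rather than the neural network and full patent database the Rome group used, and the IPC-like codes are placeholders, not a claim about the real classification.

# A toy version of "watch patent codes converge": each code is represented by
# its co-occurrence counts with other codes in a year's filings, and we track
# the cosine similarity of one pair of codes across years. All data is invented.
from collections import Counter
from itertools import combinations
import math

def cooccurrence_vectors(patents, vocab):
    """One Counter per code, recording how often it appears alongside each other code."""
    vecs = {code: Counter() for code in vocab}
    for codes in patents:
        for a, b in combinations(set(codes), 2):
            vecs[a][b] += 1
            vecs[b][a] += 1
    return vecs

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Invented filings: each patent is a list of IPC-like codes (placeholders).
filings = {
    2010: [["B60W", "F16H"], ["G01S", "H04N"], ["B60W", "F16H", "H04N"]],
    2012: [["B60W", "G05D"], ["G01S", "G05D"], ["G01S", "H04N"]],
    2014: [["B60W", "G05D", "G01S"], ["B60W", "G05D"], ["G01S", "G05D"]],
}
vocab = {code for year in filings.values() for patent in year for code in patent}

# A steadily rising similarity between two codes flags the pair as a candidate
# "adjacent possible" combination before it becomes commonplace.
for year, patents in sorted(filings.items()):
    vectors = cooccurrence_vectors(patents, vocab)
    print(year, round(cosine(vectors["B60W"], vectors["G01S"]), 3))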
Tacchella, who is now employed by the European Commission, is adapting this method to try to guide innovation in the environmental sector. The idea is to analyse the language of regulations to pinpoint unmet needs – in reducing pollution, say – and then to direct people working on new technologies towards them.
Meanwhile, two researchers from the Massachusetts Institute of Technology, James Weis and Joseph Jacobson, have used a machine-learning algorithm to identify past innovations in the field of biotechnology. Last year, they were able to retrospectively predict 19 out of 20 of the most significant developments made between 1980 and 2014. The next step will be to predict the future.
There’s an enduring paradox for those who think about innovation: if technology is self-organising and progress predictable, what is the role of the inventor? Youn thinks of them as the people who, by some mixture of experience, curiosity and luck, find themselves at the edge of the adjacent possible. So it’s not surprising that, throughout history, several minds have converged on the same novel idea at roughly the same time. Witness Bell, Gray and Meucci, who came up with the telephone; Newton and Leibniz’s near-simultaneous development of calculus; and evolutionary theory as described by Darwin and Wallace. All of them transformed the human experience in incalculable but profound ways, which is why we remember them.
Homer was right that people shy away from novelty, but they often appreciate it in retrospect. Herb’s talent was to spot the unmet need, and then meet it. Happily, his baby-talk translator made him rich, and he shared the proceeds with his half-brother.