Evolution is a game of chance. New Scientist uncovers six of the winning mutations that helped humans hit the jackpot.
EARTH, several million years ago. A cosmic ray blasts into the atmosphere at close to the speed of light. It collides with an oxygen atom, generating a shower of energetic particles, one of which knocks into a DNA molecule within a living creature.
That DNA molecule happens to reside in a developing egg cell within an ape-like animal living in Africa. The DNA is altered by the collision - mutated - and the resulting offspring is slightly different from its mother.
The mutation gives the offspring an advantage over its peers in the competition for food and mates, and so, as the generations pass, it is carried by more and more of the population. Eventually it is present in nearly everyone, and so the altered version of the DNA should really no longer be called a mutation - it's just one of the regular 23,000 or so genes that make up the human genome.
While cosmic rays are thought to be one source of mutations, DNA-copying errors during egg and sperm production may be a more common cause. Whatever their origins, these evolutionary accidents took us on a 6-million-year journey from something similar to a great ape to us, Homo sapiens.
It was a remarkable transformation, and yet we have only recently started to gain insight into the mutations that might have been involved. We are a million miles from a complete list, but even the first few to emerge as likely candidates are shedding light on the ascent of man. "It gives us a perspective on what it takes to become human," says John Hawks, a palaeoanthropologist at the University of Wisconsin-Madison.
For a long time, most of our knowledge of human evolution had to be gleaned from fragments of bone found in the earth - a bit like trying to work out the picture on a jigsaw when most of the pieces are missing. The fraction of animal remains that happen to be buried under the right conditions to fossilise can only be guessed at, but it is likely to be vanishingly small.
That is why the field of palaeoanthropology has been given such a boost by the explosion in genetic-sequencing technologies. In 2003, a complete read-out of the human genome was published, a project that took 13 years. Since then, thanks to the technology getting faster and cheaper, barely a year goes by without another genome rolling off the production line. We have now sequenced creatures including chimpanzees, gorillas and orang-utans, as well as Neanderthals and Denisovans, our distant cousins who left Africa before Homo sapiens did.
Comparing these genomes reveals a wealth of information. If a gene that is active in the brain is different in humans and chimps, for instance, that could point to a mutation that helped to make us smarter. In fact, comparing the human and chimp genomes reveals about 15 million substitutions in the "letters" that make up the genetic code. There are also wholesale deletions and duplications of DNA. Based on what we already know about DNA, the vast majority of these changes would not have affected our physical traits. That's either because the change to the DNA is so minor that it would not influence a gene's function, or because the mutation is in a region of so-called junk DNA. It is estimated that out of the 15 million differences, perhaps 10,000 were changes to genes that altered our bodies and were therefore subject to natural selection.
It's still a formidable target, and that's not counting mutations to the regulatory regions of our DNA, which act as on/off switches for genes. It is not yet possible to calculate a figure for this type of mutation in the human line, although they are thought to have played a crucial role in evolution.
So far several hundred mutations have been identified that affected us. More discoveries will follow, but documenting the DNA changes is not half as challenging as working out what they did. "Determining their effect requires immense experimentation and sometimes the creation of transgenic animals," says Hawks. "This is difficult science to undertake. We are at the very early stages."
Even so, we have already had a glimpse of many of the pivotal points in human evolution, including the rapid expansion of our brains, the emergence of speech and the possible origin of our opposable thumbs. Read on to discover the evolutionary accidents that made you the person you are today.
Weaker Jaw Muscles
A chimpanzee's jaws are so powerful it can bite off a person's finger in one chomp. That is not a theoretical calculation; more than one primate researcher has lost a digit that way.
Humans have wimpy jaw muscles by comparison. This could be down to a single mutation in a gene called MYH16, which encodes a muscle protein. The mutation inactivates the gene, causing our jaw muscles to be made from a different version of the protein. They are consequently much smaller.
This finding, which came in 2004, caused a stir when the researchers argued that smaller jaw muscles could have allowed the growth of a bigger skull (Nature, vol 428, p 415). Primates with big jaw muscles have thickened supporting bone at the back of their skull, which arguably constrains skull expansion, and therefore that of the brain too. "We are suggesting this mutation is the cause of the decrease in muscle mass and hence the decrease in bone," says Hansell Stedman, a muscle researcher at the University of Pennsylvania in Philadelphia, who led the work. "Only then do you lift the evolutionary constraint that precludes other mutations that allow your brain to continue growing."
The team dated the mutation to 2.4 million years ago - just before our brain expansion took off. But another study, which sequenced a longer section of the muscle gene, came up with an earlier estimate for when the mutation occurred - 5.3 million years ago (Molecular Biology and Evolution, vol 22, p 379).
Whichever date is right, the mutation still happened after we split from our last common ancestor with chimps. Why would our ancestors switch to a weaker bite? Stedman speculates that rather than changes in diet being the catalyst, it could be that our ancestors no longer used biting as a form of attack. "At some point, perhaps through social organisation, this form of weaponry became more optional for our ancestors," he says.
Brain Gain
Our braininess is one of our species' defining features. With a volume of 1200 to 1500 cubic centimetres, our brains are three times the size of those of our nearest relative, the chimpanzee. This expansion may have involved a kind of snowball effect, in which initial mutations caused changes that were not only beneficial in themselves but also allowed subsequent mutations that enhanced the brain still further. "You have some changes and that opens opportunities for new changes that can help," says John Hawks at the University of Wisconsin-Madison.
In comparison to that of a chimp, the human brain has a hugely expanded cortex, the folded outermost layer that is home to our most sophisticated mental processes, such as planning, reasoning and language abilities. One approach to finding the genes involved in brain expansion has been to investigate the causes of primary microcephaly, a condition in which babies are born with a brain one-third of the normal size, with the cortex particularly undersized. People with microcephaly are usually cognitively impaired to varying degrees.
Genetic studies of families affected by primary microcephaly have so far turned up seven genes that can cause the condition when mutated. Intriguingly, all seven play a role in cell division, the process by which immature neurons multiply in the fetal brain, before migrating to their final location. In theory, if a single mutation popped up that caused immature neurons to undergo just one extra cycle of cell division, that could double the final size of the cortex.
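The arithmetic behind that claim is simple doubling: each round of symmetric division doubles the pool of progenitor cells, so adding one round doubles the eventual output. The short Python sketch below illustrates the idea with made-up numbers; the founder count and cycle numbers are illustrative assumptions, not measurements of real brain development.

```python
# Toy calculation: how one extra round of symmetric progenitor division
# changes the size of the pool that later produces cortical neurons.
# The founder count and cycle numbers are illustrative assumptions.

def progenitor_pool(founders: int, symmetric_divisions: int) -> int:
    """Each symmetric division doubles the progenitor pool."""
    return founders * 2 ** symmetric_divisions

founders = 1_000  # hypothetical starting number of progenitor cells
baseline = progenitor_pool(founders, symmetric_divisions=10)
one_extra = progenitor_pool(founders, symmetric_divisions=11)

print(f"baseline pool:   {baseline:,}")
print(f"one extra cycle: {one_extra:,} ({one_extra / baseline:.0f}x larger)")
```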
Take the gene ASPM, short for "abnormal spindle-like microcephaly-associated". It encodes a protein found in immature neurons that is part of the spindle - a molecular scaffold that shares out the chromosomes during cell division. We know this gene was undergoing major changes just as our ancestors' brains were rapidly expanding. When the human ASPM sequence was compared with that of seven primates and six other mammals, it showed several hallmarks of rapid evolution since our ancestors split from chimpanzees (Human Molecular Genetics, vol 13, p 489).
Other insights come from comparing the human and chimp genomes to pin down which regions have been evolving the fastest. This process has highlighted a region called HAR1, short for human accelerated region-1, which is 118 DNA base pairs long (Nature, vol 443, p 167). We do not yet know what HAR1 does, but we do know that it is switched on in the fetal brain between 7 and 19 weeks of gestation, in the cells that go on to form the cortex. "It's all very tantalising," says Katherine Pollard, a biostatistician at The Gladstone Institutes in San Francisco, who led the work.
Equally promising is the discovery of two duplications of a gene called SRGAP2, which affect the brain's development in the womb in two ways: the migration of neurons from their site of production to their final location is accelerated, and the neurons sprout more spines, which allow neural connections to form (Cell, vol 149, p 192). According to Evan Eichler, a geneticist at the University of Washington in Seattle who was involved in the discovery, those changes "could have allowed for radical changes in brain function".
Energy Upgrade
While it is tough to work out just how our brains got so big, one thing is certain: all that thinking requires extra energy. The brain uses about 20 per cent of our energy at rest, compared with about 8 per cent for other primates. "It's a very metabolically demanding tissue," says Greg Wray, an evolutionary biologist at Duke University in Durham, North Carolina.
In the past year, three mutations have been discovered that may have helped meet that demand. One emerged with the publication of the gorilla genome in March 2012 (Nature, vol 483, p 169). This revealed a DNA region that underwent accelerated evolution in an ancient primate ancestor, common to humans, chimps and gorillas, some time between 15 and 10 million years ago.
The region was within a gene called RNF213, the site of a mutation that causes Moyamoya disease - a condition that involves narrowing of the arteries to the brain. That suggests the gene may have played a role in boosting the brain's blood supply during our evolution. "We know that damaging the gene can affect blood flow, so we can speculate that other changes might influence that in a beneficial way," says Chris Tyler-Smith, an evolutionary geneticist at the Sanger Institute in Cambridge, UK, who was part of the group that sequenced the gorilla genome.
There are more ways to boost the brain's energy supply than just replumbing its blood vessels, though. The organ's main food source is glucose, which is drawn into the brain by glucose transporter molecules in the blood vessel walls.
Compared with chimpanzees, orang-utans and macaques, humans have slightly different "on switches" for the two genes that encode the glucose transporters for brain and muscle, respectively (Brain, Behaviour and Evolution, vol 78, p 315). The mutations mean more glucose transporters in our brain capillaries and fewer in our muscle capillaries.
"It's throwing a switch so you divert a greater fraction [of the available glucose] into the brain," says Wray. In short, it looks like athleticism has been sacrificed for intelligence.
Speech
Bring up a chimpanzee from birth as if it were a human and it will learn many unsimian behaviours, like wearing clothes and even eating with a knife and fork. But one thing it will not do is talk.
In fact, it would be physically impossible for a chimp to talk just like us, thanks to differences in our voice boxes and nasal cavities. There are neurological differences too, some of which are the result of changes to what has been dubbed the "language gene".
This story began with a British family that had 16 members over three generations with severe speech difficulties. Usually speech problems are part of a broad spectrum of learning difficulties, but the "KE" family, as they came to be known, seemed to have deficits that were more specific. Their speech was unintelligible and they had a hard time understanding others' speech, particularly when it involved applying rules of grammar. They also had problems making complex movements of the mouth and tongue.
In 2001, the problem was pinned on a mutation in a gene called FOXP2. We can tell from its structure that the gene helps regulate the activity of other genes. Unfortunately, we do not yet know which ones are controlled by FOXP2. What we do know is that in mice (and so, presumably, in humans) FOXP2 is active in the brain during embryonic development.
Contrary to initial speculation, the KE family had not reverted to a "chimp-like" version of the gene - they had a new mutation that set back their language skills. In any case, chimps, mice and most other species have a version of FOXP2 that is remarkably similar to that of humans. But since we split from chimpanzees there have been two other mutations to the human version, each of which alters just one of the many amino acids that make up the FOXP2 protein (Nature, vol 418, p 869).
It would be fascinating to put the human version of FOXP2 into chimps to see if it improves their powers of speech, but we cannot do that for both technical and ethical reasons. The human version has been put into mice, though. Intriguingly, the researchers observed that the genetically modified mouse pups squeaked slightly differently - there was a small drop in the pitch of their ultrasound squeals.
But this may be less relevant than the changes seen in the mice's brains. Last year, changes were found in the structure and behaviour of neurons in the cortico-basal ganglia circuits (Neuroscience, vol 175, p 75). Also called the brain's reward circuits, these are known to be involved in learning new mental tasks. "If you do something and all of a sudden you get a reward, you learn that you should repeat that," says Wolfgang Enard, an evolutionary geneticist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, who led the work.
Based on what we already know about these circuits, Enard thinks that in humans FOXP2 plays a role in learning the rules of speech - that specific vocal movements generate certain sounds, perhaps, or even the rules of grammar. "You could view it as learning the muscle sequences of speech, but also learning the sequence of 'The cat the dog chased yesterday was black'," he suggests.
Enard reckons this is the best example yet found of a mutation that fuelled the evolution of the human brain. "There's no other mutation where we have such a good idea what happened," he says.
Our Hands
From the first simple stone tools, through to the control of fire and the development of writing, our progress has been dependent on our dexterity. It's not for nothing that in the science-fiction classic 2001: A Space Odyssey, Arthur C. Clarke portrayed the day an ape-man started clubbing things with an animal bone as a pivotal moment in our evolution.
Assuming alien meddling was not responsible, can our DNA shed light on our unrivalled abilities with tools? Clues come from a DNA region called HACNS1, short for human-accelerated conserved non-coding sequence 1, which has undergone 16 mutations since we split from chimps. The region is an on/off switch that seems to kick a gene into action in several places in the embryo, including developing limbs. Cutting and pasting the human version of HACNS1 into mouse embryos reveals that the mutated version is activated more strongly in the forepaw, right in the areas that correspond to the human wrist and thumb (Science, vol 321, p 1346).
Some speculate that these mutations contributed to the evolution of our opposable thumbs, which are crucial for the deft movements required for tool use. In fact, chimps also have opposable thumbs, just not to the same extent as us. "We have more fine muscle control," says Katherine Pollard, who studies this DNA region at The Gladstone Institutes in San Francisco. "We can hold a pencil, but we can't hang from the limb of a tree comfortably like a chimp."
Starch Diet
Chimps and other large primates subsist mainly on fruits and leaves. These are such low-calorie foods that the animals have to forage for most of their waking hours. Modern humans get most of their energy from starchy grains or plant roots. Over the past 6 million years our diet must have undergone several shifts, when we started using stone tools, learned to cook with fire, and settled down as farmers.
Some of these changes are hard to date. There is an ongoing debate over what constitutes the first evidence for cooking hearths. And digging sticks, used to unearth tubers and bulbs, do not fossilise. An alternative way of tracking dietary changes is to look at the genes involved in digestion.
A digestive enzyme called salivary amylase plays a key role in breaking down starch into simple sugars so it can be absorbed in the gut. Humans have much higher levels of amylase in their saliva than chimpanzees, and recently it was discovered how this came about.
While chimps have only two copies of the salivary amylase gene (one on each of the relevant chromosome pair), humans have an average of six, with some people having as many as 15 (Nature Genetics, vol 39, p 1256). DNA copying errors during the production of sperm and eggs must have led to the gene being repeatedly duplicated.
To find out when the duplications happened, the gene was sequenced in people from several countries, as well as in chimps and bonobos. "We were hoping to find a signature of selection about 2 million years ago," says Nathaniel Dominy, a biological anthropologist now at Dartmouth College in Hanover, New Hampshire, who led the work. That is around the time our brains underwent significant growth, and one theory is that it was fuelled by a switch to a starchier diet.
But the team found the gene duplications had happened more recently - some time between 100,000 years ago and the present day. The biggest change in that period was the dawn of agriculture, so Dominy thinks the duplications happened when we started farming cereals. "Agriculture was a signal event in human evolution," he says. "We think amylase contributed to it."
It was the advent of agriculture that allowed us to live in larger settlements, which led to innovation, the cultural explosion and, ultimately, modern life. If we consider all the mutations that led to these pivotal points in our evolution, human origins begin to look like a trail of unfeasible coincidences. But that is only because we do not see the harmful mutations that were weeded out, points out John Hawks at the University of Wisconsin-Madison. "What we're left with is the ones that were advantageous." It is only from today's viewpoint that the mutations that give us our current physical form appear to be the "right" ones to have. "It's hindsight," says Hawks. "When we look back at the whole process, it looks like a stunning series of accidents."
Evo and Health
Evolution has revolutionized our modern scientific understanding of natural history and how our bodies came to be. Yet evolutionary insights regarding health and disease are not typically emphasized with patients.
Medical education focuses on proximate causes of disease — infection, trauma, cellular regulation, etc. — as opposed to evolutionary understandings of how our traits and responses came to be in the first place. What evolutionary insights are there for clinical medicine?
Medical conditions can occur when there is a mismatch between our evolved bodies and our particular society and environment. This idea challenges some of our notions of disease.
Hardly a day goes by when I don’t see patients with lactose intolerance, allergies, obesity, anxiety, near-sightedness, ADHD, and flu symptoms. The gene variant that keeps lactase active into adulthood spread rapidly in populations that historically practiced dairy husbandry. But about 70% of the world’s population is lactose intolerant, and all of them are “normal” in the context of environments that were, until recently, dairy-free.
Allergies and autoimmune conditions are more common in developed societies where infections occur less frequently. This suggests a mismatch between our evolved immune system and our current environment. Recent evidence suggests that the incidence of autoimmune Crohn’s disease has risen in places where the incidence of gastrointestinal worm infection has fallen.
Obesity likely represents a mismatch between our food preferences, which evolved in environments of relative food scarcity, and modern environments with increased food availability and decreased activity levels.
Anxiety may have been an evolutionarily useful response — think of the advantage of being jumpy when you hear rustling in the tall grass in an African savannah — but now may be an inappropriate expression for our current environment.
According to a recent study, children who are genetically susceptible to near-sightedness are less likely to become nearsighted if they spend more time outdoors. This suggests near-sightedness may in part result from a mismatch between the outdoor environment in which we evolved and modern indoor activities such as reading and playing video games.
With an evolutionary perspective, conditions such as attention deficit hyperactivity disorder may be conveyed not as a disease, but rather a mismatch between a patient’s evolved nature and our particular society’s educational expectations. In all these conditions, an evolutionary approach helps clinicians and patients see medical conditions as contextual, rather than as an inherent defect. Evolution — natural history — becomes relevant.
Fever, cough, vomiting, diarrhea, etc. are evolutionary host defenses to expel infections, not, as patients often believe, infections themselves. Nonetheless the suffering can be marked. If treatment is provided to alleviate these symptoms, will our body’s defense against infections be weakened? This area is ripe for additional scientific research.
Like the early days of pharmacology and microbiology, it is too soon to predict the extent of clinical relevance that an explicit evolutionary understanding can yield. Evolutionary thinking has already directly impacted clinical medicine in areas such as genetics and vaccine design. Evolutionary principles also inform public health measures, such as the campaign to avoid inappropriate antibiotic use in humans and livestock to help prevent the evolution of resistant pathogen strains.
Just because a trait evolved does not make it good or bad. Evolution itself is impersonal and morally neutral. It is up to us to bring deliberate values to the blind shuffle of evolutionary selection. An evolutionary scientific understanding provides greater insight into health and illness. Even in this world of technological marvels, the “history and physical” (H&P) is often emphasized as a physician’s most valued diagnostic strategy. In essence, evolution is history. With an evolutionary perspective, the “H” in “H&P” can be understood and appreciated at a deeper level.
C3 and C4 Photosynthesis
With projections of 9.5 billion people by 2050, humanity faces the challenge of feeding modern diets to additional mouths while using the same amounts of water, fertilizer and arable land as today.
Cornell University researchers have taken a leap toward meeting those needs by discovering a gene that could lead to new varieties of staple crops with 50 percent higher yields.
The gene, called Scarecrow, is the first discovered to control a special leaf structure, known as Kranz anatomy, which leads to more efficient photosynthesis. Plants photosynthesize using one of two methods: C3, a less efficient, ancient method found in most plants, including wheat and rice; and C4, a more efficient adaptation employed by grasses such as maize, sorghum and sugarcane that is better suited to drought, intense sunlight, heat and low nitrogen.
"Researchers have been trying to find the underlying genetics of Kranz anatomy so we can engineer it into C3 crops," said Thomas Slewinski, lead author of a paper that appeared online in the journal Plant and Cell Physiology. Slewinski is a postdoctoral researcher in the lab of senior author Robert Turgeon, professor of plant biology.
The finding "provides a clue as to how this whole anatomical key is regulated," said Turgeon. "There's still a lot to be learned, but now the barn door is open and you are going to see people working on this Scarecrow pathway."
The promise of transferring C4 mechanisms into C3 plants has been fervently pursued and funded on a global scale for decades, he added.
If C4 photosynthesis is successfully transferred to C3 plants through genetic engineering, farmers could grow wheat and rice in hotter, dryer environments with less fertilizer, while possibly increasing yields by half, the researchers said.
C3 photosynthesis originated at a time in Earth's history when the atmosphere had a high proportion of carbon dioxide. C4 plants have independently evolved from C3 plants some 60 times at different times and places. The C4 adaptation involves Kranz anatomy in the leaves, which includes a layer of special bundle sheath cells surrounding the veins and an outer layer of cells called mesophyll. Bundle sheath cells and mesophyll cells cooperate in a two-step version of photosynthesis, using different kinds of chloroplasts.
By looking closely at plant evolution and anatomy, Slewinski recognized that the bundle sheath cells in leaves of C4 plants were similar to endodermal cells that surrounded vascular tissue in roots and stems.
Slewinski suspected that if C4 leaves shared endodermal genes with roots and stems, the genetics that controlled those cell types may also be shared. Slewinski looked for experimental maize lines with mutant Scarecrow genes, which he knew governed endodermal cells in roots.
When the researchers grew those plants, they first identified problems in the roots, then checked for abnormalities in the bundle sheath. They found that the leaves of Scarecrow mutants had abnormal and proliferated bundle sheath cells and irregular veins.
In all plants, an enzyme called RuBisCo facilitates a reaction that captures carbon dioxide from the air, the first step in producing sucrose, the energy-rich product of photosynthesis that powers the plant. But in C3 plants RuBisCo also facilitates a competing reaction with oxygen, creating a byproduct that has to be degraded, at a cost of about 30-40 percent of overall efficiency. In C4 plants, carbon dioxide fixation takes place in two stages. The first step occurs in the mesophyll, and the product of this reaction is shuttled to the bundle sheath for the RuBisCo step. The RuBisCo step is very efficient because in the bundle sheath cells, the oxygen concentration is low and the carbon dioxide concentration is high. This eliminates the problem of the competing oxygen reaction, making the plant far more efficient.
Tinkering
Evolution is like a search engine, though not a very good one. We’re not talking Google. We might be talking Google drunk, blindfolded, on crutches, and with a frontal lobotomy.
This is why the Nobel laureate François Jacob described evolution as a tinkerer, not an engineer. Engineers know where they’re going—they have an aim, a plan. Tinkerers are just fastening parts together, sticking this bit onto that in an ongoing exploration of functional possibilities, with no goal in mind.
The insight that the evolutionary search engine proceeds blindly - and therefore gradually - came from Charles Darwin. He realized that because resources are often scarce, organisms are always in competition with one another. In the endless battle, those individuals who happen to possess some slight innate advantage will flourish and pass along that advantage to their descendants. By this method, new species could be created, one imperfect change at a time, but this process certainly was not going to happen quickly.
Historically, only a tremendous geological shift, like a meteorite impact or an ice age, has broadly sped up the process. What these shifts provide is a wedge that opens up novel ecological niches, new possibilities for the search engine of evolution to explore. This fits-and-starts hypothesis—what evolutionary theorists Stephen Jay Gould and Niles Eldredge dubbed “punctuated equilibrium” in 1972—helps explain the seemingly sudden appearance of new species in the fossil record.
But really, there is nothing all that sudden about it. According to researchers, those periods of punctuation span roughly 50,000 to 100,000 years. Fossils just don’t keep very good records.
Mostly, natural selection is a plodder’s game. Sure, one individual might be significantly taller or smarter or more long-lived than his peers, but no matter how beneficial the variation, extremely long stretches of time are required for it to spread across an entire population.
That was supposed to be the rule, at least.
Lately the process has been a little more frenetic. Over the past few centuries, and accelerating ever more quickly in the past 50 years, a steady stream of human innovations has begun to drastically speed up processes that were, until very recently, the sole province of nature. In short, it appears that our technology has created ways of accelerating change (genetic engineering, for instance) and new habitats (like the modern city), essentially fracturing our biology and transforming our future as a species.
The first inkling that something might be wonky with gradualism - as Darwin’s slow process of evolutionary change is known - did not emerge from biology. It showed up in economics, specifically in an economic analysis of slavery in America.
In 1958, Harvard economists Alfred Conrad and John Meyer published a book arguing that slavery may have been immoral but still made economic sense - which was too much for a University of Chicago economist named Robert Fogel to abide. Fogel was white, but his wife was African American. Very African American. “When I was teaching at Harvard,” Fogel recounts, “she hung a sign outside the door to our house. It read: ‘Don’t be upset because you’re not black like me - we’re not all born lucky.’”
Fogel decided to prove Conrad and Meyer wrong. He spent almost a decade on the problem. In his earlier work, Fogel had helped pioneer the application of rigorous statistical analysis and other economics-based mathematical methods to the study of history (research that earned him a Nobel Prize in 1993).
Now, working alongside University of Rochester economist Stanley Engerman, Fogel began applying these methods to the study of slavery. This enterprise led him deep into the relationship between economics, physiology, and longevity, where he analyzed variables such as the amount of food consumed by the average slave (or freeman) measured against the amount of work he produced.
To make such comparisons, Fogel needed data and metrics. For data he used a National Institutes of Health–funded database of American Civil War veteran records: an informational treasure trove containing details like height and weight at time of conscription, daily roll calls of the sick and injured, periodic postwar checkups, census data, and, often, death certificates.
For metrics he chose height and body mass, because of a steadily growing consensus among scientists that these factors were accurate predictors of mortality and morbidity. “Height,” says UCLA economist Dora Costa, who cowrote papers on these ideas with Fogel, “turns out to be a fantastic health indicator. It’s net for nutrition, infectious disease, sanitation, and demands placed on the body.” (The United Nations now uses height as a way to monitor quality of nutrition in developing countries.)
What all this information provided was a population-eye view of life in the 19th century, which is exactly what Fogel needed in order to understand broad socioeconomic trends and reach startling conclusions. The first of those conclusions, which he and Engerman detailed in 1974 in their now-famous Time on the Cross: An Economic Analysis of American Negro Slavery, was that Conrad and Meyer were correct after all: Slavery, while repugnant, was neither as inefficient nor as unprofitable as most historians assumed.
“As it turns out,” Fogel recounts, “most slaves, especially those on smaller plantations, were fed better and lived in better conditions than freemen in the North. This meant they lived longer, healthier lives and thus produced more work. Certainly, it’s an odious conclusion, but it’s right there in the data.”
Then around 1988 Fogel began to notice a startling trend in the data: Over the past few centuries, but predominantly in the 20th century, Americans have been growing taller. They have also been getting thicker, living longer, and getting richer.
In 1850, for example, the average American male stood 5 feet 7 inches and weighed 146 pounds. By 1980 those numbers had jumped to 5 feet 10 and 174 pounds. And it was not just Americans. A team of economists expanded this inquiry internationally, and discovered that the trends were global.
“Over the past 300 years,” Fogel says, “humans have increased their average body size by over 50 percent, average longevity by more than 100 percent, and greatly improved the robustness and capacity of vital organ systems.”
From an evolutionary perspective, 300 years is an eyeblink. A sneeze. Not nearly enough time for these sorts of radical improvements. So where did they come from? Fogel spent the next two decades answering this question.
He came to believe that a steady stream of technological improvements—advances in food production and distribution, sanitation, public health, and medicine—facilitated an era of rapid evolutionary advances. “In the past hundred years,” Fogel says, “humans have gained an unprecedented degree of control over their environment, a degree of control so great that it sets them apart not only from all other species, but from all previous generations of Homo sapiens.”
Fogel’s core idea, which he calls techno-physio evolution and explains in depth in his 2011 book, The Changing Body (cowritten with Roderick Floud, Bernard Harris, and Sok Chul Hong), is fairly straightforward: “The health and nutrition of one generation contributes, through mothers and through infant and childhood experience, to the strength, health, and longevity of the next generation; at the same time, increased health and longevity enable the members of that next generation to work harder and longer and to create resources which can then be used, in turn, to assist the next, and succeeding, generations to prosper.” In short, technology is impacting genetics.
These notions are not entirely new. Economists have known for almost 100 years of a correlation among height, income, and longevity. What had not been properly explained was the mechanism. That explanation came later, with the burgeoning new field of epigenetics - the study of how the external environment can alter our genes throughout life, and even be passed on to future generations. Today researchers in this well-established field have shown that natural selection is not the only force producing heritable change.
Fogel, though, goes further by going faster. “It’s a whole-is-much-greater-than-the-sum-of-its-parts argument,” he explains. “We’re talking about an incredible synergy between technology and biology, about very simple improvements—pasteurization, a general reduction of pollutants, cleaning up our water supply—producing heritable effects across populations faster than ever before.
“Think about this: Humans are a 200,000-year-old species. When we first emerged, our life span was 20 years. By the turn of the 20th century it had become 44 years. We advanced by 24 years over the course of 200,000 years. But today, it’s 80 years. These simple improvements doubled our longevity in a century.”
“Evolution designed us to be quite plastic,” notes economist John Komlos, a visiting scholar at Duke University. “Our size expands in good times and contracts in bad.” The gain in body mass observed by Fogel began in the 1920s, when people started driving cars and working more sedentary jobs.
But today, with an obesity epidemic in high gear, plasticity in weight has become a burden because the forces of evolution did not shape us to control our food intake. “We didn’t know that extrinsic factors could make this kind of difference,” Komlos says. “Techno-physio evolution shows that economics has an impact at the cellular level - that it goes bone deep.”
Since Fogel began his work, these ideas have spread far beyond economics. Scholars from cultural anthropologists to population geneticists have begun investigating the phenomenon of techno-physio evolution. In an article published in 2000 in Behavioral and Brain Sciences, University of St. Andrews evolutionary biologist Kevin Laland calls the process “niche construction” and explains it thus:
“All organisms constantly interact with their local environments, and they constantly change them by doing so. If, in each generation, populations of organisms modify their local environment only idiosyncratically or inconsistently, then there will be no modification of natural selection pressures and, hence, no significant evolutionary consequence.
If, however, in each generation, each organism repeatedly changes its own environment in the same way…then the result may be a modification of natural selection. The environmental consequences of such niche construction may be transitory, and may still be restricted to single generations, but if the same environmental change is reimposed for sufficient generations, it can serve as a significant source of selection.”
Whether you call it niche construction or techno-physio evolution, it is ultimately punctuated equilibrium by a different name, with culture rather than catastrophe providing the new niches. The main difference is in the pace of change. Naturally occurring geologic events are rare occurrences. Niche-altering technological progress, meanwhile, is rapid and accelerating.
This is no small distinction. In recent years, researchers have found that the same exponential growth rates underpinning computing (Moore’s law, for example, which says the number of transistors on a computer chip doubles every 12 to 24 months) show up in all information-based technologies. The fields with a huge potential to drive techno-physio evolution - artificial intelligence, nanotechnology, biology, robotics, networks, sensors, and so on - are likewise advancing along exponential price/performance curves.
Consider genomic sequencing, which has been touted as the essential tool needed to move medicine from standardized and reactive to personalized and preventive. In 1990, when the Human Genome Project was announced, the cost of deciphering a person’s entire genome was budgeted at $3 billion - about as far from personalized medicine as one can get.
But by 2001 the cost was down to $300 million. By 2010 it was below $5,000, and in 2012 the $1,000 barrier finally fell. Within 10 years, at the current rate of decline, a fully sequenced human genome will price out at less than $10. If standardized and reactive medicine managed to double the human life span in a century, just imagine how far personalized and preventive medicine might extend it.
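That projection is, at heart, compound-decay arithmetic. As a rough check, the Python sketch below takes two of the price points quoted above ($300 million in 2001, about $1,000 in 2012), assumes the average annual rate of decline simply continues, and asks how long until a genome costs less than $10. The constant-rate assumption is ours, not a property of the technology.

```python
import math

# Back-of-envelope check of the "under $10 within 10 years" projection,
# assuming the average annual decline between the quoted 2001 and 2012
# prices simply continues. The data points come from the text; the
# constant decline rate is an assumption.

cost_2001 = 300_000_000   # dollars, quoted for 2001
cost_2012 = 1_000         # dollars, quoted for 2012

annual_factor = (cost_2012 / cost_2001) ** (1 / (2012 - 2001))
years_to_10_dollars = math.log(10 / cost_2012) / math.log(annual_factor)

print(f"average annual decline factor:    {annual_factor:.2f}")        # ~0.32
print(f"years until cost falls below $10: {years_to_10_dollars:.1f}")  # ~4
```

At that historical rate the $10 mark would arrive in roughly four years, comfortably inside the article's 10-year window.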
Techno-physio evolution shows how increased control over our external environment impacts our biology. But many of the technologies that are now advancing most rapidly are ones that cut out the middleman - that Darwinian mediator, natural selection - allowing us to take direct control of our internal environment and push it forward, even when the niche is unchanged.
“Exponentially growing technology changes the evolutionary discussion,” says Andrew Hessel, co-chair of bioinformatics and biotechnology at Singularity University. “If you follow the patterns out, you very quickly see that this is the century we take control over our genome.
Just look at the technologies surrounding reproduction: fetal testing, genetic screening, pregnancy monitoring, genetic counseling. When I was a child, Down syndrome was a real problem. Today roughly 90 percent of all fetuses with Down syndrome are terminated. Play these patterns forward, and we aren’t long from the day when we’re engineering our children: choosing skin color, eye color, personality traits. How long after that until parents are saying, ‘I bought you the best brain money can buy - now why don’t you use it?’”
Such possibilities raise other, even bigger, questions. How much change does it take to create a whole new species? Dartmouth neuroscientist Richard Granger, who studies brain evolution, thinks it may not take much. “Think about dogs,” he says. “Used to be they all looked like wolves. Now they don’t. In just a few thousand years of messing around with their genes, humans have created canine breeds that are completely physically incompatible - a Great Dane and a Chihuahua could not produce offspring without help.”
What is true in dogs is true in humans, as well. Right now, humans are the only hominid species on Earth, but it seems unlikely to remain the case, notes Juan Enriquez, CEO of Biotechnomy, a life-sciences investment firm, and a founding director of the Life Sciences Project at Harvard Business School. “We’re now no more than a generation or two away from the emergence of an entirely new kind of hominid,” he says. “Homo evolutus: a hominid that takes direct and deliberate control over its own evolution and the evolution of other species.”
The standard science fiction version of what happens after we take control of our evolution usually runs along eugenic lines, leading toward efforts to build a master race. But the real situation is nowhere near so straightforward. Unintended consequences are everywhere. Seemingly unambiguous genetic goals—like trying to make people more intelligent—not only involve millions of genes, raising the specter of easy error, but might involve conditional relationships: For instance, our intelligence might be tied to memory in ways we can’t yet decode, so trying to improve one ability might inadvertently impede the other.
Moreover, without some form of top-down control, there is little to suggest that human desires will be uniform enough even to agree on what a master race should be like. “Sure,” Hessel says, “we may begin optimizing ourselves and engineering our children, but it’s unlikely this will occur in a uniform way. We’re still human. So we’re going to engineer our children based on our egos, our creativity, our whims. This pretty much guarantees all sorts of wild varieties.
“It’s highly improbable that all of these varieties will be able to interbreed successfully, not without the use of technology. That’s when we really splinter the species; that’s why Homo evolutus could easily produce a Cambrian explosion of subspecies.”
Many More Mutations
Population growth has flooded our DNA with mutations, yielding both diversity and complex illnesses.
Ten thousand years ago, there were just 5 million people on Earth, fewer than live in Singapore today. The population has since soared to 7 billion. This rapid growth has left a mark on the human genome, researchers are finding, drastically increasing the number of very rare mutations in our DNA. That realization casts doubt on the long-standing view that just a few genetic mutations underlie many hereditary diseases. In reality, those diseases are probably caused by a wide variety of extremely rare mutations that vary from one person to the next, complicating efforts to understand and treat them at the genetic level.
Scientists have known for years that each of our roughly 20,000 protein-coding genes comes in multiple forms, some with dramatic health effects. More recently, improved gene-sequencing technology and larger population studies have made it possible to detect gene variants that appear in only 1 percent of the human population. But geneticists have suspected there were even rarer variants out there—and in 2012 they discovered they were right.
At the University of Washington, scientists looked at portions of 15,585 genes in 2,440 people. They found that 86 percent of the variants were of that rarer variety—found in less than 0.05 percent of all people. In another report, published in May, researchers at UCLA and GlaxoSmithKline tallied up the variants of 202 genes in 14,002 people. They found that 95 percent of the variants were each present in less than 0.5 percent of the subjects, and 74 percent were found in only one or two people out of the sample. A third study, from the 1000 Genomes Project Consortium, showed that in the average person, up to 400 of these ultrarare mutations are likely to disrupt gene activity.
The prevalence of rare variants is a direct consequence of increasing population. Each person acquires a few dozen new mutations at birth. “The more individuals there are, the more mutations have arisen recently,” says Cornell University population geneticist Alon Keinan. Over time, harmful mutations are weeded out by natural selection. But rapid population growth has introduced so many new variants that many of the disease-related ones are probably still present.
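A back-of-envelope calculation shows the scale involved. The sketch below multiplies a per-person figure of a few dozen new mutations by a population of roughly 7 billion; both inputs are round-number assumptions consistent with the figures quoted here, and the result is only meant to convey an order of magnitude.

```python
# Back-of-envelope: how many brand-new mutations enter the gene pool in a
# single generation of a large population. Both inputs are round-number
# assumptions in line with the figures quoted in the text.

new_mutations_per_person = 50      # "a few dozen" new mutations at birth
population = 7_000_000_000         # roughly today's population

new_variants_this_generation = new_mutations_per_person * population
print(f"new mutations across the population: {new_variants_this_generation:.1e}")
# Hundreds of billions of new variants, so most are carried by only one
# person or a handful of people and are therefore extremely rare.
```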
The existence of so many rare variants complicates efforts to study the relationship between genes and health. To establish how different variants affect the risk of lung cancer, for instance, scientists now realize they cannot look at the DNA and medical records of just 100 people. Many rare variants will be missing from such a small sample. Even more daunting, each disease is probably caused by a broader mix of genetic errors than previously thought, and that mix may vary considerably. A schizophrenic from Brazil might have a combination of rare-variant genes contributing to the disease that is different from the mix in a schizophrenic from Norway.
“It’s bad news for mathematically associating variants with particular complex diseases,” says Keinan. It’s also bad news for using such genetic associations to develop better drugs. Since a single disorder might result from hundreds of different gene combinations, it’s unlikely that researchers will find one drug that tackles all of them.
On the bright side, John Novembre of UCLA notes, rare variants will help us reconstruct a missing chunk of history—the undocumented migrations and mixings of populations. Most of the known genetic markers for ancestry are shared by tens of millions of people, and so can provide only a general sense of geographic origin. But many rare variants arose in just the past few centuries and might be limited to people from just one region of one country. “We can learn much more about migrations of the recent past,” Novembre says.
Human Variations
Scientists have discovered that about one in thirteen people have flexible ape-like feet.
A team studied the feet of 398 visitors to the Boston Museum of Science.
The results show differences in foot bone structure similar to those seen in fossils of a member of the human lineage from two million years ago.
It is hoped the research, published in the American Journal of Physical Anthropology, will establish how that creature moved.
Apes like the chimpanzee spend a lot of their time in trees, so their flexible feet are essential to grip branches and allow them to move around quickly - but how most of us ended up with more rigid feet remains unclear.
Jeremy DeSilva from Boston University and a colleague asked the museum visitors to walk barefoot and observed how they walked by using a mechanised carpet that was able to analyse several components of the foot.
Most of us have very rigid feet, helpful for stability, with stiff ligaments holding the bones in the foot together.
When primates lift their heels off the ground, however, they have a floppy foot with nothing holding their bones together.
This is known as a midtarsal break and is similar to what the Boston team identified in some of their participants.
This makes the middle part of the foot bend more easily as the subject pushes off to propel themselves on to their next step.
Dr DeSilva told BBC News how we might be able to observe whether we have this flexibility: "The best way to see this is if you're walking on the beach and leaving footprints, the middle portion of your footprint would have a big ridge that might show your foot is actually folding in that area."
Another way, he added, was to set up a video camera and record yourself walking, to observe the bones responsible for this folding motion.
Most with this flexibility did not realise they had it and there was no observable difference in the speed of their stride.
In addition, Dr DeSilva found that people with a flexible fold in their feet also roll to the inside of their foot as they walk.
The bone structure of a two-million-year old fossil human relative, Australopithecus sediba, suggests it also had this mobility.
"We are using variation in humans today as a model for understanding what this human creature two million years ago was doing," added Prof De Silva.
Tracy Kivell, a palaeoanthropologist from the Max Planck Institute for Evolutionary Anthropology, said: "The research has implications for how we interpret the fossil record and the evolution of these features.
"It's good to understand the normal variation among humans before we go figure out what it means in the fossil record," Dr Kivell said.
Evolution Is Speeding Up
We really are a mutant race. Our genomes are strewn with millions of rare gene variations, the result of the very fast, very recent population growth of the human species. From an estimated 5 million individuals just 10,000 years ago, we ballooned to more than 7 billion. On average, every duplication of the human genome includes 100 new errors, so all that reproducing gave our DNA many opportunities to accumulate mutations. But evolution hasn’t had enough time to weed out the dangerous ones: gene variants that might make us prone to illness, or simply less likely to survive.
Joshua Akey of the University of Washington recently explored the average age of our species’s gene variants, finding that most are very young. About three-quarters of single nucleotide variants — a mutation that substitutes just one nucleotide (an A, C, T or G) in the long string of DNA — occurred within the past 5,000 years, surprising considering that our species may be 200,000 years old. Using several techniques to gauge the effects of these mutations, which are the most common type of variant in the human genome, Akey estimated that more than 80 percent are probably harmful to us.
All of these mutations — roughly 100 billion for each generation in the entire population — potentially accelerate the pace of evolution by giving it more raw materials with which to work. A small percentage may be beneficial; abilities such as digesting milk in adulthood and living at high altitude are recent acquisitions of the human genome. Given how many mutations are now circulating among living humans, we may be evolving new capabilities already.
******************************************
Homo sapiens sapiens has spread across the globe and increased vastly in numbers over the past 50,000 years or so—from an estimated five million in 9000 B.C. to roughly 6.5 billion today. More people means more opportunity for mutations to creep into the basic human genome and new research confirms that in the past 10,000 years a host of changes to everything from digestion to bones has been taking place.
"We found very many human genes undergoing selection," says anthropologist Gregory Cochran of the University of Utah, a member of the team that analyzed the 3.9 million DNA sequences* showing the most variation. "Most are very recent, so much so that the rate of human evolution over the past few thousand years is far greater than it has been over the past few million years."
"We believe that this can be explained by an increase in the strength of selection as people became agriculturalists—a major ecological change - and a vast increase in the number of favorable mutations as agriculture led to increased population size," he adds.
Roughly 10,000 years ago, humanity made the transition from living off the land to actively raising crops and domesticated animals. Because this concentrated populations, diseases such as malaria, smallpox and tuberculosis, among others, became more virulent. At the same time, the new agriculturally based diet offered its own challenges—including iron deficiency from lack of meat, cavities and, ultimately, shorter stature due to poor nutrition, says anthropologist John Hawks of the University of Wisconsin–Madison, another team member.
"Their bodies and teeth shrank. Their brains shrank, too," he adds. "But they started to get new alleles [alternative gene forms] that helped them digest the food more efficiently. New protective alleles allowed a fraction of people to survive the dread illnesses better."
By looking for wide swaths of genetic material that vary little from individual to individual within these sections of great variation, the researchers identified regions that both originated recently and conferred some kind of advantage (because they became common rapidly). For example, a variant of the gene known as LCT gave adults the ability to digest milk, and a variant of G6PD offered some protection against malaria caused by the Plasmodium falciparum parasite.
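The logic of that search can be illustrated with a toy example: a variant that spread quickly drags a long stretch of neighbouring DNA along with it, so the surrounding region shows unusually little person-to-person variation. The Python sketch below scans made-up sequences for such low-diversity windows; it is an illustration of the principle, not the team's published method, and the sequences and threshold are invented.

```python
from itertools import combinations

def window_diversity(seqs, start, size):
    """Average pairwise differences per site within a window."""
    pairs = list(combinations(seqs, 2))
    diffs = sum(
        sum(a[i] != b[i] for i in range(start, start + size))
        for a, b in pairs
    )
    return diffs / (len(pairs) * size)

# Toy "chromosomes" from four individuals: variable flanks on either side
# of a nearly identical middle stretch (the candidate swept region).
conserved = "A" * 10
seqs = [
    "ACGTACGT" + conserved + "TTGCACGT",
    "ACCTACGA" + conserved + "TTGCACCT",
    "AGGTACGT" + conserved + "TTGCTCGT",
    "ACGTTCGT" + conserved + "TTGCACGA",
]

window = 8
for start in range(0, len(seqs[0]) - window + 1, window):
    pi = window_diversity(seqs, start, window)
    flag = "  <- unusually uniform" if pi < 0.02 else ""
    print(f"window {start:2d}-{start + window - 1}: diversity = {pi:.3f}{flag}")
```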
"Ten thousand years ago, no one on planet Earth had blue eyes," Hawks notes, because that gene—OCA2—had not yet developed. "We are different from people who lived only 400 generations ago in ways that are very obvious; that you can see with your eyes."
Comparing the amount of genetic differentiation between humans and our closest relatives, chimpanzees, suggests that the pace of change has accelerated to 10 to 100 times the average long-term rate, the researchers write in Proceedings of the National Academy of Sciences USA.
Not all populations show the same evolutionary speed. For example, Africans show slightly fewer signs of recent selection. "Africans haven't had to adapt to a fundamentally new climate," Cochran says, because modern humanity evolved where they live. "Europeans and East Asians, living in environments very different from those of their African ancestors and early adopters of agriculture, were more maladapted, less fitted to their environments."
And this speedy pace of evolution will not slow until every possible beneficial mutation starts to happen—the maximum rate of adaptation. This has already begun to occur in such areas as skin color in which different sets of genes are responsible for the paler shades of Europeans and East Asians, according to the researchers.
The finding raises many questions. Among them: "the medical applications of this kind of knowledge [as well as] exactly what most of the selected changes do and what drove their selection," Cochran says.
But the history of humanity is beginning to be read out from our genes, thanks to a detailed knowledge of the thousands of them that have evolved recently. "We're going to be classifying these by functional categories and looking for matches between genetic changes and historic and archaeological changes in diet, skeletal form, disease and many other things," Hawks says. "We think we will be able to find some of the genetic changes that drove human population growth and migrations - the broad causes of human history."
Trees
“Tree trunks are standing monuments to futile competition.” In his book The Greatest Show on Earth, Richard Dawkins makes the necessary distinction between a “designed economy” and an “evolutionary economy,” using the fable of the “Forest of Friendship.” “In a…mature forest,” he writes, “the canopy can be thought of as an aerial meadow…raised on stilts…gathering solar energy…but a substantial portion of the energy is ‘wasted’ by being fed straight into the stilts, which do nothing more useful than loft the ‘meadow’ high in the air, where it picks up the same harvest…as it would—at far lower cost—if it were laid flat on the ground.” No tree can afford to opt out of the height competition. However, if somehow the trees could arrange a pact of friendship to limit their heights, each tree, and the forest as a whole, could save energy. This is obviously not possible for trees, but if it were, Dawkins concludes, the “Forest of Friendship [would be] more efficient as a forest.”
Neutral Mutations
When Jesse Bloom heard in 2009 that Tamiflu, once the world’s best treatment for flu, had inexplicably lost its punch, he thought he knew why. Sitting in his lab at the California Institute of Technology, the biologist listened to a spokesperson from the World Health Organization recount the tale of the drug’s fall from grace. Introduced in 1999, the compound was the first line of defense against the various strains of flu virus that circulate around the world every year. It did not just treat symptoms; it slowed the replication of the virus in the body, and it did its job well for a time. But in 2007 strains worldwide started shrugging off the drug. Within a year Tamiflu was almost completely useless against seasonal influenza.
The WHO spokesperson explained that the sweeping resistance came about through the tiniest of changes in the flu’s genetic material. All flu viruses have a protein on their surface called neuraminidase—the “N” in such designations as H1N1—which helps the viruses to break out of one cell and infect another. Tamiflu is meant to stick to this protein and gum it up, trapping the viruses and curtailing their spread. But flu viruses can escape the drug’s attention through a single change in the gene encoding the neuraminidase protein. A mutation called H274Y subtly alters neuraminidase’s shape and prevents Tamiflu from sticking to it.
Most public health experts had assumed that flu viruses would eventually evolve resistance to Tamiflu. But no one anticipated it would happen via H274Y, a mutation first identified in 1999 and originally thought to be of little concern. Although it allows flu viruses to evade Tamiflu, it also hampers their ability to infect other cells. Based on studies in mice and ferrets, scientists concluded that the mutation was “unlikely to be of clinical consequence.” They were very wrong. The global spread of viruses bearing H274Y proved as much.
That spread “sounded alarm bells to me,” Bloom says. Something else had changed to let the virus use the mutant neuraminidase without losing the ability to spread efficiently. He soon found that certain strains of H1N1 had two other mutations that compensated for H274Y’s debilitating effects on the virus’s ability to spread from cell to cell. Neither of the pair had any effect on their own. In the lingo of biologists they were “neutral.” But viruses that carried both of them could pick up H274Y, gaining resistance to Tamiflu without losing their infectivity. Both mutations looked innocuous individually, but together they made the virus more adaptable in the face of a challenge. To put it another way, they made it better at evolving.
Such neutral mutations are also known collectively as hidden or cryptic variation. They were long ignored by most researchers, but thanks to technological advances, scientists are starting to see that they are a major driving force in evolution—including the evolution of microorganisms that make us sick. By studying cryptic variation, scientists are finding new ways of safeguarding our health and discovering fuller answers to one of evolution's most fundamental questions: Where do new adaptive traits come from? As Joshua Plotkin from the University of Pennsylvania puts it: “This is the forefront of modern evolutionary biology.”
How it works
As the flu example shows, one way that cryptic mutations can enhance adaptation is by collaborating with other mutations to produce a whole that is greater than the sum of its parts. Imagine that someone gave you a triangular metal frame or a pair of wheels. Both parts would be useless on their own, but put them together and you get a working bicycle. If you have either component, you have nothing immediately useful but you are primed to reap the benefits of the second one. In the same way cryptic variation can lay the groundwork for future adaptations.
Some cryptic mutations can also prove useful on their own, essentially keeping quiet until circumstances arise where they come in handy. And, by building up a lot of cryptic variation, organisms can increase their ability to adapt. Imagine that something goes badly wrong in your house. If you have a bunch of tools you never needed before stashed away in a cupboard, one of them might end up being good for the job or could be modified. In the same way, a storehouse of cryptic variation increases the chances that living things will be preadapted to cope with new challenges.
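The "bicycle parts" logic can be captured in a toy fitness model (my own illustration, with invented numbers, not taken from the article): two mutations, A and B, change nothing on their own, but together they let a third, resistance-style mutation R pay off, much as the compensatory mutations did for H274Y.

```python
# Toy fitness model (invented numbers): A and B are neutral on their own, but
# together they compensate for the cost of a resistance mutation R. Without the
# drug, R alone hampers spread; with the drug, only the A+B+R combination thrives.

def fitness(a, b, r, drug_present):
    w = 1.0
    if drug_present and not r:
        w *= 0.2          # susceptible virus is suppressed by the drug
    if r and not (a and b):
        w *= 0.5          # uncompensated resistance cripples infectivity
    return w              # note: A and B by themselves change nothing

print(" A B R | no drug | drug")
for a in (0, 1):
    for b in (0, 1):
        for r in (0, 1):
            print(f" {a} {b} {r} |  {fitness(a, b, r, False):.2f}   | {fitness(a, b, r, True):.2f}")
```

Reading the table, genotypes carrying A and B look identical to the ancestor in ordinary conditions, yet they are the only ones that can pick up R without penalty once the drug arrives.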
These ideas fit well with Darwin’s theory of natural selection, in which beneficial traits that boost an organism’s reproductive success are passed down to future generations, or “selected” to continue on. Biologists, however, are increasingly realizing that some mutations are important not because they provide immediate benefits but because they enable adaptations to occur in the future. These mutations can build up because natural selection does not remove genetic alterations that have no obvious effects on our proteins, cells or bodies.
The notion that cryptic mutations can be useful has a long history. In the 1930s Sewall Wright, one of the founding fathers of evolutionary theory, recognized that initially unimportant genetic changes could later give rise to valuable ones. Theodosius Dobzhansky, another central figure, said that species need to have “a store of concealed, potential, variability.” Even so, until very recently scientists managed to document only a few arcane examples of hidden mutations affecting the wings or hairs of flies without any proof these changes benefited the animals. “We didn’t have the tools to take it further, and the topic languished,” says Joanna Masel from the University of Arizona in Tucson. With powerful sequencing technology and mathematical models on hand, scientists have now been able to show that cryptic variation is a powerful and widespread force in evolution. In everything from flu viruses to flowers to fungi, they have found tangible case studies where useful adaptations arose from seemingly neutral mutations.
Evidence
One of the clearest examples comes from Andreas Wagner at the University of Zurich and involves molecules called ribozymes, which consist of RNA (genetic material related to DNA) and function in the body as catalysts. They speed up chemical reactions involving other RNA molecules but are picky about the ones they interact with. To react with a new target, they need to alter their shapes. And to do that, they need the sequence of their building blocks to change. In test-tube studies Wagner found that ribozymes could adapt to deal with a new target six times faster if they had previously built up lots of cryptic variation. Just as in the Tamiflu-resistant flu viruses, these mutations made no difference on their own; they merely brought some of the ribozymes a step closer to achieving the changes they needed. “They had a leg up in the evolutionary process,” Wagner says.
Another example comes from studies of heat-shock proteins, which help nascent proteins to fold properly into their functional forms and also protect them from losing their function in response to various stresses, such as excess heat. In 1998 Suzanne Rutherford and Susan Lindquist from the Massachusetts Institute of Technology showed that a heat-shock protein called Hsp90 can both hide cryptic variation and unleash it, depending on circumstances.
By helping proteins to fold correctly, Hsp90 allows them to tolerate genetic mutations that might otherwise catastrophically distort their shapes. It can thus allow proteins to build up such mutations, along with neutral ones. If conditions become more challenging—such as a significant rise in temperature—the Hsp90 molecules may be in such demand that they cannot aid all the proteins that need them. Suddenly, proteins have to fold without Hsp90’s help, and all their cryptic mutations become exposed to natural selection. Some of these mutations would have beneficial effects in the challenging conditions and would thus pass to the next generation.
Rutherford and Lindquist first demonstrated what Hsp90 does in fruit flies. When they depleted the protein by exposing flies to heat or chemicals, the insects grew up with all sorts of weird features, from subtle, unimportant things like extra hairs to severe deformities like misshapen eyes. None of these changes were caused by fresh mutations but rather by existing dormant ones that had been hidden by Hsp90 and unmasked by its absence. For good reason, Lindquist has described Hsp90 as an evolutionary “capacitor,” after the devices that store electrical charge and release it when needed. It stores cryptic variation, unleashing it in demanding environments, just when it is most needed.
Hsp90 is ancient and found in plants and fungi as well as animals—signs that it is one of life’s critical molecules. One of Lindquist’s lab members, Daniel Jarosz, discovered that a fifth of all the variation in the yeast genome is concealed by Hsp90—a huge reservoir just waiting to be released. By exposing so much variation in one fell swoop, Hsp90’s behavior provides a possible answer to one of evolution’s most puzzling questions—the origin of complex combinations of traits.
“Sometimes it’s hard to envision how new forms or functions could emerge if they require multiple mutations, none of which are individually beneficial. The frequency of it should be exceedingly rare,” Jarosz says. It is a dilemma that opponents of evolutionary theory often seize on. But heat-shock proteins, and cryptic variation more broadly, provide a possible solution. When environments change, they allow organisms to make use of mutations that were sitting quietly in the wings but that in combination suddenly offer a solution to some challenge to survival. They act as evolutionary rocket fuel. “Hsp90 can help us to understand how complex traits could ever be achieved in very rapid fashion,” Jarosz says. For those in the field, it is an exciting time. “We’re really at the cusp of making big discoveries in the most fundamental question in evolutionary biology: ‘How does life bring about new things?’” Wagner says.
The disease connection
Beyond offering new insight into the underpinnings of evolution, research into cryptic mutations is suggesting new ways to look at and combat disease. It has been very hard to decipher the genetic underpinnings of many human traits or diseases, from height to schizophrenia. Even though they run strongly in families, scientists have found only a small number of genes associated with them. Plotkin wonders if cryptic variation might help to solve the puzzle of this “missing heritability.” Perhaps we should be looking for mutations that have no effect on their own but rather influence the risk of diseases in combination. “This is just wild speculation on my part, but it sounds reasonable to me,” he says.
The same thinking is being applied to other disorders. We continually provide bacteria, fungi and viruses with new challenges by attacking them with our immune system or hitting them with waves of toxic drugs. One of their chief defences is the ability to evolve resistance, and cryptic variation helps them to do this faster. Lindquist, for example, has shown that Candida albicans, the fungus responsible for thrush, needs lots of Hsp90 to evolve resistance to antifungal drugs. When she blocked Hsp90, the fungi stayed vulnerable. Cancer cells also benefit from Hsp90, because they need help in folding their wide array of unstable mutant proteins. Many scientists are now testing chemicals that block Hsp90 as potential treatments for cancer or ways of preventing fungi and bacteria from developing drug-resistance.
Others are trying to predict how cryptic variation fuels the evolution of viruses. Plotkin and Bloom are focusing on influenza. “The flu virus is evolving all the time to escape all the antibodies that it has stimulated in the human population,” Plotkin says. “This is why we have to update the vaccine every year.” Last year he analyzed the genomes of flu viruses collected over four decades. He found hundreds of pairs of mutations, where one swiftly appeared after the other. In many cases the first of the pair was neutral—it did nothing except to pave the way for the second mutation. By identifying these hidden mutations, which predate more serious ones, we could find strains that are primed for resistance and cut them off with the right vaccines. “We could, to some extent, predict the evolution of flu,” Plotkin says.
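A rough sketch of the kind of screen described, using invented data: for each site in the viral protein, record the first year a new variant shows up among dated sequences, then report pairs of sites where one change consistently appears shortly before another. Plotkin's actual analysis was far more sophisticated; the sites, years and three-year cutoff below are hypothetical.

```python
# Rough sketch (invented data): for each protein site, record the first year a
# new variant appears in dated flu sequences, then pair up sites where one
# change shows up shortly before another - the pattern expected when a neutral
# mutation paves the way for a later, functional one.

# Hypothetical observations: (year, site, new amino acid) from dated strains.
observations = [
    (1999, 222, "V"), (2001, 344, "K"), (2003, 222, "V"),
    (2004, 354, "N"), (2006, 274, "Y"), (2007, 274, "Y"),
]

first_seen = {}
for year, site, allele in sorted(observations):
    first_seen.setdefault((site, allele), year)

# Candidate pairs: the earlier mutation is a possible "permissive" change if the
# second one follows within a few years.
entries = sorted(first_seen.items(), key=lambda kv: kv[1])
for i, (first, year_a) in enumerate(entries):
    for second, year_b in entries[i + 1:]:
        if first[0] != second[0] and 0 < year_b - year_a <= 3:
            print(f"site {first[0]}{first[1]} preceded site {second[0]}{second[1]} "
                  f"by {year_b - year_a} year(s)")
```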
Plotkin also envisages focusing on cryptic mutations for a different end: making new molecules useful for the biotechnology industry. Many scientists are trying to artificially evolve designer proteins that will do specific tasks. Typically they look for mutations that overtly alter proteins in ways that enhance their ability to do the chosen task. But it may be useful to look for the hidden neutral mutations that could make proteins more likely to acquire useful mutations. “Understanding the role of cryptic mutations in an evolving protein could help to improve some already very useful techniques for engineering enzymes,” Plotkin says.
Applications like these are just the beginning. In many ways the study of cryptic variation has been a metaphor for itself. Knowledge and interest in the field has been building up under the surface for a long time, largely hidden from view, only to be released by the influx of new technology. “We’re really at the tip of the iceberg,” Plotkin says.
Wild Animals Adapt To City Life
Cities are often viewed as environmental wastelands, where only the hardiest of species can eke out an existence. But as scientists in the fledgling field of urban ecology have found, more and more native animals are now adjusting to life on the streets.
Take America's biggest metropolis. As recently as a few decades ago, New York City lacked white-tailed deer, coyotes and wild turkeys, all of which have now established footholds. Harbor seals, herons, peregrine falcons and ospreys have likewise returned in force, and red-tailed hawks have become much more common. Meanwhile the first beaver in more than two centuries turned up in 2007; river otters last year ended a similar exile.
What's happening in New York is by no means an anomaly. Experts say that the adaptation of wildlife to urban areas is ramping up worldwide, in part because cities are turning greener, thanks to pollution controls and an increased emphasis on open space.
In North America, the phenomenon is perhaps best exemplified by the coyote, which colonized cities roughly 15 to 20 years ago. A recent study of the Chicago area found that urban pups had survival rates five times higher than their rural counterparts. “Coyotes can absolutely exist in even the most heavily urbanized part of the city, without a problem,” says Stan Gehrt, a wildlife ecologist at Ohio State University. “They learn the traffic patterns, and they learn how stoplights work.”
Other studies have found animals from hawks to opossums reaping benefits from urban life. “We need to be careful about thinking of cities as places that don't really have interesting biodiversity,” says Seth Magle, director of the Urban Wildlife Institute at the Lincoln Park Zoo in Chicago. “Our urban areas are ecosystems, with just as many complex interactions as the Serengeti or the outback of Australia.”
H.L. Mencken
The way to deal with superstition is not to be polite to it, but to tackle it with all arms, and so rout it, cripple it, and make it forever infamous and ridiculous. Is it, perchance, cherished by persons who should know better? Then their folly should be brought out into the light of day, and exhibited there in all its hideousness until they flee from it, hiding their heads in shame.
True enough, even a superstitious man has certain inalienable rights. He has a right to harbor and indulge his imbecilities as long as he pleases, provided only he does not try to inflict them upon other men by force. He has a right to argue for them as eloquently as he can, in season and out of season. He has a right to teach them to his children. But certainly he has no right to be protected against the free criticism of those who do not hold them. He has no right to demand that they be treated as sacred. He has no right to preach them without challenge. Did Darrow, in the course of his dreadful bombardment of Bryan, drop a few shells, incidentally, into measurably cleaner camps? Then let the garrisons of those camps look to their defenses. They are free to shoot back. But they can't disarm their enemy. - H.L. Mencken
Rapid Speciation
Only a few genetic changes are needed to spur the evolution of new species - even if the original populations are still in contact and exchanging genes. Once started, however, evolutionary divergence proceeds rapidly, ultimately leading to fully genetically isolated species, report scientists from the University of Chicago in the October 31, 2013, issue of Cell Reports.
"Speciation is one of the most fundamental evolutionary processes, but there are still aspects that we do not fully understand, such as how the genome changes as one species splits into two," said Marcus Kronforst, Ph.D., Neubauer Family assistant professor of ecology and evolution, and lead author of the study.
To reveal genetic differences critical for speciation, Kronforst and his team analyzed the genomes of two closely related butterfly species, Heliconius cydno and H. pachinus, which only recently diverged. Occupying similar ecological habitats and able to interbreed, these butterfly species still undergo a small amount of genetic exchange.
The researchers found that this regular gene flow mutes genetic variants unimportant to speciation - allowing them to identify the key genetic regions affected by natural selection. The butterfly species, they discovered, differed in only 12 small regions of their genomes, while remaining mostly identical throughout the rest. Eight of these regions coded for wing color patterning - a trait important for mating and for avoiding predation, and one under intense selection pressure - while the other four remain undescribed.
"These 12 spots appear to only function well in the environment their species occupies, and so are prevented from moving between gene pools, even though other parts of the genomes are swapped back and forth," Kronforst said.
The team also compared the genomes of these two groups to a third species, still closely related but further removed on an evolutionary time scale. Here, they found hundreds of genomic changes, indicating that the rate of genetic divergence accelerated rapidly after the initial changes took hold.
"Our work suggests that a few advantageous mutations are enough to cause a 'tug-of-war' between natural selection and gene flow, which can lead to rapidly diverging genomes," Kronforst said.
Kronforst and his team plan to characterize the remaining four divergent genome areas to look for functions important to speciation. They also are studying why species more commonly arise in tropical areas.
"It is possible that this type of speciation, in which natural selection pushes populations apart, has been important in the evolution of other organisms. It remains to be seen whether it is a common process though," Kronforst said.
Note: This story has been adapted from a news release issued by the University of Chicago Medical Center
Bend or Break (Economist 23 Nov 2013)
THAT old-time religion is strong in America. To take just one measure, for decades more than 40% of all Americans have consistently told Gallup pollsters that God created humans in pretty much their current form, less than 10,000 years ago. They are embracing an account of man’s origins promoted by Young Earth Creationists who lean on a painstakingly literal reading of the Scriptures, swatting aside the counter-claims of science (fossils are a relic of Noah’s flood, they argue, and evolution is a myth peddled by atheists). In a recent poll 58% of Republicans and 41% of Democrats backed creationism. The glue that underpins such faith is the principle of Biblical inerrancy - a certainty that the Scriptures are infallibly and unchangingly true.
A quest for certainty is an American tradition. Old World believers often inherit religion passively, like a cultural artefact. Americans, an individualistic bunch, are more likely to switch churches or preachers until they find a creed that makes sense to them. They admire fundamental texts (the constitution, for example) that plain citizens may parse for immutable truths.
At the same time, the literalist faith is in crisis. Young Americans are walking away from the stern denominations that have held such sway over post-war American life, from Billy Graham’s crusades to the rise of the religious right. After they hit 18, half of evangelical youngsters lose their faith; entering a public university is especially perilous. As a generation, millennials (those born between the early 1980s and 2000s) are unimpressed by organised anything, let alone organised religion. Many young adults told the Barna Group, an evangelical research outfit, that they felt stifled by elders who demonised secular America. Young Christians are more accepting of gay rights than their elders. In a challenge to creationists, a quarter of young adults told Barna’s study that their churches were “anti-science”.
The seeming paradox of a strong faith in crisis is explained by rigidity: that which cannot bend may break instead. The danger is keenly felt in conservative Christian circles, where a debate has broken out over the long-term outlook for the movement. That debate took Lexington this week to unfamiliar territory: the annual meeting, in Baltimore, of America’s largest society for evangelical theologians, where Biblical inerrancy topped the agenda. Some discussions were a trifle arcane, it is true, with sharp exchanges about ancient Hebrew cosmology and the degree to which the Book of Genesis draws on Mesopotamian creation and flood motifs. But a bang-up-to-date, and distinctly political, dispute hummed along underneath the scholarly sparring: what to do about core principles threatened by new facts. Evangelical Christianity is being shaken not only by the irreverence of the young but also by new discoveries flowing from genetic science.
Some discoveries mostly serve to inject fresh evidence into long-running disputes. It is nearly 90 years since the “Monkey Trial” of John Scopes, a young schoolmaster accused of teaching evolution to Tennessee children. Recent research (notably cross-species comparisons of gene sequences rendered non-functional by mutations) has greatly strengthened the case that humans and chimpanzees share a common ancestor. A creationist speaker in Baltimore shrugged such discoveries off, declaring that “science changes, but the word of God never changes.”
A trickier controversy has been triggered by findings from the genome that modern humans, in their genetic diversity, cannot be descended from a single pair of individuals. Rather, there were at least several thousand “first humans”. That challenges the historical existence of Adam and Eve, and has sparked a crisis of conscience among evangelical Christians persuaded by genetic science. This is not an esoteric point, says Michael Cromartie, an evangelical expert at the Ethics and Public Policy Centre, a Washington think-tank: many conservative theologians hold that without a historical Adam, whose sin descended directly to all humanity, there would be no reason for Jesus to come to Earth to redeem man’s Fall.
Academics have lost jobs over the Adam controversy. Many Christian universities, among them Wheaton (a sort of evangelical Harvard and Yale, rolled into one), oblige faculty members to sign faith statements declaring that God directly created Adam and Eve, the “historical parents of the entire human race”. John Walton, an Old Testament scholar at Wheaton, suggested that Adam and Eve are presented in Genesis as archetypes, though he called them historical individuals too.
In a breach with orthodoxy that would have been unthinkable a few years ago, the Baltimore meeting was also addressed by a Canadian, Denis Lamoureux, who sees no evidence for a historical Adam. The Bible, he argues, is “ancient science” filled with archaisms and metaphors. Mr Lamoureux is a prominent member of the “evolutionary creation” movement, which credits God with creating Darwinian evolution and overseeing its workings (a view shared by, among others, the pope). A prime mover, Francis Collins, is an atheist-turned-Christian who directs the National Institutes of Health (NIH), the American government’s biomedical research agency. Biologos, an evolutionary creation group that Mr Collins set up in 2007, calls this a moment to match Galileo’s trial for insisting that the Earth circles the sun.
Academic papers on Adam are flying. Perhaps a dozen Adam books are out or due out soon. Baltimore’s packed Adam session turned professors away at the door. This is a dispute between conservative Christians, not an outbreak of soggy, believe-what-you-like European deism. Much is at stake. Denying science is a bad habit among conservatives of all stripes: Paul Broun, a Georgia Republican who sits on the House science committee (and who wants to run for the Senate), says evolution is a lie “straight from the pit of hell”. That’s pandering, not piety.
Where are the millions of missing link fossils that should be overflowing our museums?
Where are they? They are in our museums - overflowing them, in fact. Most fossils are not on display simply because there are far more of them than there is display space.
The fact is that there are not really any "missing links" in the sense the question implies. That is a misconception with a very long history, rooted in early deist ideas, and it actually predates the theory of evolution.
The term "missing link" refers back to the originally static, pre-evolutionary concept of the great chain of being - a deist idea that all existence is linked, from the lowest dirt, through the living kingdoms, to angels and finally to God. The idea of all living things being linked through some sort of transmutation process predates Darwin's theory of evolution. Jean-Baptiste Lamarck envisioned that life is constantly generated in the form of the simplest creatures, which then strive towards complexity and perfection (i.e. humans) through a series of lower forms. In his view, lower animals were simply newcomers on the evolutionary scene.
"Missing link" is still a popular term, well recognized by the public and often used in the popular media. It is, however, avoided in the scientific press, as it relates to the concept of the great chain of being and to the notion of simple organisms being primitive versions of complex ones, both of which have been discarded in biology. In any case, the term itself is misleading, as any known transitional fossil, like Java Man, is no longer missing. While each find will give rise to new gaps in the evolutionary story on each side, the discovery of more and more transitional fossils continues to add to our knowledge of evolutionary transitions.
This is where they are: in the back rooms and underground storage areas of those museums, sometimes in multiple rooms full of the very fossils the question asks about.
The AMNH Paleontology collections contain an estimated 4.75 million specimens. They are divided into four collection units: Fossil Amphibians, Reptiles, and Birds (FARB); Fossil Fish; Fossil Invertebrates; and Fossil Mammals.
That is but one museum, holding 4.75 million fossils. Many other museums hold similarly vast collections, and the storage rooms alone dwarf anything on public display.
There are likely hundreds of millions of fossils in the world's collective museums.
Just as an example of the fossil record of what you call missing links, we have an extensive list of human fossils going back millions of years, showing incremental changes from the common ancestor of humans and apes through to modern humans (see, for example, the list of human evolution fossils). Keep in mind that we are not descended from modern apes; we and they both descend from an earlier, shared ancestor.
We have fossil records like this for most common species IN OUR MUSEUMS.
Evo Can Be Disproven
When you say, "The theory of evolution is something that can never be proven or disproven definitively," that's actually incorrect. The theory of evolution CAN be disproven. And that's the beauty of it.
There are lots of ways to disprove evolution. The classic example is the Precambrian rabbit; that is, a fossil whose geologic history is incompatible with biological history. Such a thing would, definitively, disprove evolution. It's just one of many ways to disprove evolution.
It's precisely because evolution can be disproven that makes it, or any other scientific theory, valuable. Theories that can't be disproven are of zero utility. They serve only for warm fuzzies, and while those warm fuzzies appear to be of intense interest to the vast majority of people wringing their hands about evolution, they're of no interest whatsoever to people who actually want to achieve things in the world: developing medicines, decoding the genome, even finding oil and predicting earthquakes.
The fundamental value of science is that you take a bunch of known facts, run them through your theory to derive something that had been unknown, and then rely on it as if you had known it to be true all along. If the theory were false, that derived thing would turn out not to be true, and you'd fail in your endeavor. But when the theory is true, the new fact also turns out to be true. A theory that can't be disproven can't distinguish between correct and incorrect answers, and so it makes no difference. It leads nowhere.
Not every science leads directly to applications, but all scientific processes fundamentally come down to that process. The various scientific disciplines, including the purely theoretical ones, all bolster one another to build up an edifice that allows us to extrapolate from things we know to things we don't yet know but can rely on.
The important thing about evolution is that, unlike competing notions of life on Earth, it is the only one which COULD be proven false, and it is therefore the only one worth anything at all. Even if evolution were proven false, the other ideas (which it is invalid to call theories, for exactly this reason) would remain utterly worthless.
People remain attached to them because they are not scientists and never build things of value from the tools that science gives us. They enjoy only the special feeling they get from being distinct from non-human animals. If you want a special feeling, get a dog. If you want to accomplish things, study science. And that study will provide so many examples of places where the theory of evolution could have been wrong, but wasn't, that you'll conclude that continuing to ask questions like this is a vast waste of everybody's time.
New Scientist Last Word: Is Evo Ending?
This question of whether humans are still evolving has been the subject of much interesting debate of late. Medical advances mean that our physical and reproductive health are no longer the major determinants of our ability to pass on genes. This means natural environmental factors will have less influence on which people pass on genes than in our distant past. However, the process of evolution will still be at work.
It has been suggested that within many cultures, women now have greater freedom and choice in partners, so arguably traits that women in our culture find attractive are more likely to be passed to the next generation. One might think that in our modern technological world, genes for high intelligence would be favoured. However, success in modern societies seems to cause those successful people to have fewer children, not more. So perversely, genes for intelligence may be being selectively bred out of the population.
Alternatively, the human species could conceivably evolve into two separate genetic pools: an intelligent and affluent pool with low reproductive rates and longer lifespans because they can afford the latest medical treatment; and a larger pool of the lower intelligence, poorer, exploited class that has a high reproductive rate and lower lifespan. However, there is probably too much crossover of members between the two groups for this to happen, and it is also questionable whether such a society would be stable long enough to form two separate species. Long live the revolution!
Simon Iveson, The University of Newcastle Callaghan, New South Wales, Australia
The advance of technology does not mean the end of human evolution. Our species is evolving faster than ever. Evolution is caused by natural selection acting on inherited variation. With our population at 7 billion, we are more genetically diverse than ever; and selection pressure has shifted rather than ended.
Civilization exerts powerful selective pressure against lactose intolerance, coeliac disease, dyslexia, innumeracy, immune deficiency and – in this materialistic age – a lack of sales resistance.
Our far-future descendants will, from a very young age, be highly literate, numerate, have strong immune systems, and be resistant to stress; and they will also possess a superhuman sense of humour. They will need those traits to survive long enough to reproduce.
Nathaniel Hellerstein, San Francisco, California, US
Human beings will not stop evolving because of technology developed to enable us to survive changes in our environment. Consider that technology has dramatically reduced infant and maternal mortality in the developed world. Presumably, babies with larger heads are more likely to survive, along with their mothers, due to procedures such as caesarean sections. This should lead to an increase in average head size.
Similarly, technology enables people with poor eyesight, and other conditions that once would have been fatal, to survive and reproduce. Couples can overcome infertility through assisted reproductive technology, thus spreading their genes for low fertility more than would have been possible in ancestral environments.
These are just some examples of how technology alters the direction of evolution, although perhaps not in the way the writer would consider desirable.
Ellen Spertus, San Francisco, California, US
Duplicate Genes
From time to time, living cells will accidently make an extra copy of a gene during the normal replication process. Throughout the history of life, evolution has molded some of these seemingly superfluous genes into a source of genetic novelty, adaptation and diversity. A new study shows one way that some duplicate genes could have long-ago escaped elimination from the genome, leading to the genetic innovation seen in modern life.
Researchers have shown that a process called DNA methylation can shield duplicate genes from being removed from the genome during natural selection. The redundant genes survive and are shaped by evolution over time, giving birth to new cellular functions.
"This is the first study to show explicitly how the processes of DNA methylation and duplicate gene evolution are related," said Soojin Yi, an associate professor in the School of Biology and the Parker H. Petit Institute for Bioengineering and Bioscience at the Georgia Institute of Technology.
The study was sponsored by the National Science Foundation (NSF) and was scheduled to be published the week of April 7 in the Online Early Edition of the journal Proceedings of the National Academy of Sciences (PNAS).
At least half of the genes in the human genome are duplicates. Duplicate genes are not only redundant, but they can be bad for cells. Most duplicate genes accumulate mutations at high rates, which increases the chance that the extra gene copies will become inactive and lost over time due to natural selection.
The new study found that soon after some duplicate genes form, small chemical tags called methyl groups attach to a duplicate gene's regulatory region and block the gene from turning on.
When a gene is methylated, it is shielded from natural selection, which allows the gene to hang around in the genome long enough for evolution to find a new use for it. Some young duplicate genes are silenced by methylation almost immediately after being formed, the study found.
"What we have done is the first step in the process to show that young gene duplicates seems to be heavily methylated," Yi said.
The study showed that the average level of DNA methylation on the duplicate gene regulatory region is significantly negatively correlated with evolutionary time. So, younger duplicate genes have high levels of DNA methylation.
For about three-quarters of the duplicate gene pairs studied, the gene in a pair that was more methylated was always more methylated across all 10 human tissues studied, said Thomas Keller, a post-doctoral fellow at Georgia Tech and the study's first author.
"For the tissues that we examined, there was remarkable consistency in methylation when we looked at duplicate gene pairs," Keller said.
The computational study constructed a dataset of all human gene duplicates by comparing each sequence against every other sequence in the human genome. DNA methylation data was then obtained for the 10 different human tissues. The researchers used computer models to analyze the links between DNA methylation and gene duplication.
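A rough sketch of the kind of relationship the researchers tested, using invented numbers rather than the study's data: treat sequence divergence between the two copies as a proxy for the age of each duplicate pair, and ask whether promoter methylation falls off with age. The values, the Ks-as-age assumption and the use of a Spearman correlation are all illustrative choices, not the study's actual pipeline.

```python
# Rough sketch (invented numbers, not the study's pipeline): test whether
# promoter methylation of duplicate genes declines with evolutionary age.
from scipy.stats import spearmanr

# Each tuple: (synonymous divergence Ks, used here as a proxy for the age of the
# duplication event, and mean promoter methylation level across tissues).
duplicates = [
    (0.02, 0.81), (0.05, 0.74), (0.10, 0.62), (0.25, 0.55),
    (0.40, 0.41), (0.60, 0.38), (0.90, 0.22), (1.20, 0.18),
]

ages, methylation = zip(*duplicates)
rho, p_value = spearmanr(ages, methylation)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
# A strongly negative rho is the pattern the study reports: the younger the
# duplicate, the more heavily methylated its regulatory region.
```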
The human brain is one example of a tissue for which gene duplication has been particularly important for its evolution. In future studies, the researchers will examine the link between epigenetic evolution and human brain evolution.
Dayton and the Scopes Trial
Eighty-nine years after a Dayton jury found John Scopes guilty of teaching evolution, the people of this town are still acting out the creationist debate, all in order to put on a show.
In the classic style of southern Appalachia, Dayton, Tennessee, has abandoned coal mines, epidemic obesity, and a median income that’s half the national average.
Dayton is also famous, loosely speaking. In July 1925, the town hosted the Scopes Monkey Trial, a landmark case in the history of creationism. Eighty-nine years ago Monday, a Dayton jury found John Scopes guilty of teaching evolution in his classroom, in open disregard of Tennessee law, and fined him $100.
One feels, walking around Dayton, as if it has been swallowed whole by its stories—stories from the Bible, for sure, but also the grand fiction of the trial that has come to define the town. Torn by schism, 89 years ago—and torn again, today, by an evolutionary dispute at its little evangelical college—Dayton exemplifies two useful truths about the whole creationists-and-evolutionists hoopla. One: People enjoy a good fight. And, two: A nice, predictable performance has a way of subsuming the facts.
I spent part of this past weekend in Dayton for the town’s annual Scopes Trial Play and Festival. I toured the town in a tiny yellow school bus, met some relatives of the legendary boy named Sue, and attended a two-hour, music-infused reenactment of the trial, during which the audience sang hymns and the actor playing Clarence Darrow gave praise, in the final musical number, to religious freedom and the wisdom of the Bible.
On the county courthouse lawn, under the dripping leaves of century-old maples—each bearing a placard that reads “this tree witnessed the Scopes Trial”—I bought a snack from the Monkey Town Donut Company food truck, whose proprietor, fresh from Seattle, was wearing a fleece emblazoned with the logo of a Baptist convention.
Dayton’s been selling the Monkey Trial for years, and for much more than doughnuts. Sales were, really, the original intent. The Scopes Trial was a staged event, from start to finish. Pretty much every single participant was complicit in the effort to put on a show.
The lesson of the Scopes Trial is not that some people are stupid science-deniers. It’s that scientific fact can all too easily be spun up into performances and archetypal dramas.
John Scopes, the schoolteacher, was not a prisoner of his conscience. Dayton was not a particularly religious town. The defense lawyers did not necessarily want their client to win.
Here’s what actually happened: The ACLU was looking for a test case to challenge Tennessee’s anti-evolution law. Dayton’s civic leaders, suffering from the loss of the coal and steel industries in the 1910s, were desperate for some kind of stunt that would bring attention, and perhaps investment, to the town. The anti-evolution law seemed like an opportunity. A few of those community leaders invited Scopes to the drugstore, bought him a fountain drink, and convinced him to stand trial.
The resulting affair mixed the doctrinal with the sensational: It was part Jonathan Edwards, part Nancy Grace. Three-time presidential candidate William Jennings Bryan, a staunch political progressive, showed up to assist the prosecution (which included a man named Sue Hicks, the inspiration for the Johnny Cash classic and a local attorney named for his mother, who died in childbirth).
Darrow, the country’s most famous lawyer, came out to help defend Scopes, whom he very much hoped would lose, so that he could take the case to a higher court. He knew the case was a farce. Darrow and his wife stayed at the home of the prosecution’s chief witness and supposedly coached him on how best to ensure the conviction of the client whom Darrow had been hired to defend.
The trial - a misdemeanor case - quickly turned into a grand debate, culminating in a cross-examination of Bryan on the Dayton courthouse lawn. The examination morphed into a far-reaching conversation between Bryan and Darrow on scriptural interpretation. Bryan, by many accounts, didn’t fare so well. He died of a stroke a few days later.
With that, writes historian Edward Larson, “The fundamentalist movement acquired a martyr.” Dayton, meanwhile, had acquired some worried citizens. As one older Tennessean told festival-goers this year, a local woman fed Bryan a platter of sliced tomatoes and salt the day before his death. The woman was terrified that this odd meal had contributed to the corpulent orator’s passing.
The Scopes Trial was a formative moment for modern creationism. It was also the first trial ever broadcast by radio. Reels of film from the courtroom were rushed around the country and screened in movie halls. So many journalists showed up that some had to be housed on the second floor of the local hardware store.
That a media boom and a creationist surge should be connected is no coincidence. The Scopes Trial offered a drama that, like a Hallmark original movie, spun stereotypes into archetypes and then pitted them against each other to create a kind of morality play—one designed to appeal to people on both sides of the debate.
Here it was: Northerners versus Southerners; urbanites versus ruralites; a big-shot Chicago defense lawyer versus a mostly local prosecutorial team; Bryan, the champion of the common man, against the expert witnesses of the defense.
Genesis, at this point, almost seems like an afterthought. Journalists—in particular, the brilliant but fact-averse H.L. Mencken—portrayed Dayton as a crazed backwoods hollow, overrun with Pentecostals hell-bent on driving out the devil Scopes. A few decades later, Jerome Lawrence and Robert Edwin Lee formalized this whole crazed-Southerners-versus-intellectuals story in the hit play and movie Inherit the Wind.
The weird thing, really, is that Dayton began to grow into the role which history had assigned it. In 1930, Bryan College, founded in William Jennings Bryan’s honor, opened in the town and quickly grew into a bastion of conservative evangelicalism in eastern Tennessee. In 2004, Dayton’s county found its way into national news again, this time by trying to ban homosexuality. Today, people meet weekly in the Dayton McDonald’s and sing gospel music.
And, since the 1980s, Dayton has held dramatic reenactments of the trial. Attracting people to downtown and re-creating an event that was itself a performance, the whole event manages to keep the Scopes Trial alive. Watching the reenactment is a surreal spectacle: The people of Dayton are still acting out the creationist debate, all in order to put on a show.
The trial never really ended. Those archetypes still bounce around today. Watching the Scopes Trial reenactment, or reading the commentary on the Ken Ham/Bill Nye tussle last February, one gets the sense that the whole damn debate over creation and evolution is not exactly a battle over origins but a veiled way for people to vent their rage against their fellow citizens, stereotypes at the ready.
That applies within creationist subcultures, as well, as Bryan College has recently made clear. Last February, the school’s president unexpectedly changed the school’s Statement of Belief, which all faculty must sign. Instead of affirming that they believe that God created the world, using relatively general terms, faculty must now assert that Adam and Eve were “historical persons.”
Faculty, citing academic freedom, have protested. They issued a vote of no-confidence against Bryan’s president. Some are suing the school. This past weekend, as Daytonians geared up for the festival, four members of the school’s Board of Trustees resigned over the administration’s handling of the situation.
Again, more is at stake here than evolution. Enrollment rates, the Chattanooga Times-Free Press suggests, are in trouble at Bryan. There’s been concern about doctrinal backsliding. In Genesis, one finds the place to draw the lines, and launch a fight.
Species may evolve, but historical forces, it sometimes seems, do not. The challenge for science communicators around the country is to figure out a way to disentangle facts from the subcultures that come to claim, or deny, them, and to make scientific ideas accessible to people regardless of where they fall on the tribal spectrum of our often fragmented society. The lesson of the Scopes Trial, after all, is not that some people are stupid science-deniers. It’s that scientific fact can all too easily be spun up into performances and archetypal dramas, until what’s at stake is something that goes far deeper than the details of evolution.
Random Genetic Drift
(New Scientist)
In a cave, a bear gives birth to two cubs one long dark night. In the morning, the weak winter light reveals something strange: the cubs' fur is white, in stark contrast to the dark fur of their mother. They are freaks... or are they?
What is evolution? Easy, you might think: it's the way living organisms change over time, driven by natural selection. Well, that's not wrong, but it's not really how evolutionary biologists think of it.
Picture those bear cubs. Here we see a dramatic physical change, but it isn't evolution. Among black and brown bears, white bear cubs are not that uncommon. But white bears don't have more cubs than other bears, so the gene variants for white fur remain rare.
Among one group of brown bears living in the Arctic, though, white fur was an advantage, helping them sneak up on prey. There white bears thrived and had more offspring – their "fitness" increased – so the proportion of white bears rose until the entire population was white. This is definitely evolution. It happened as polar bears evolved from brown bears a few million years ago.
So although we tend to think about evolution in terms of the end results – physical changes in existing species or the emergence of new ones – the key concept is the spread of genetic variants within a population.
The results of this process can appear purposeful. Indeed, it is convenient to talk as if they are: "polar bears evolved white fur for camouflage". But it all comes down to cold numbers: a random mutation that boosts fitness spreading in a population.
What's more surprising is that even mutations that don't increase fitness can spread through a population as a result of random genetic drift. And most mutations have little, if any, effect on fitness. They may not affect an animal's body or behaviour at all, or do so in an insignificant way such as slightly altering the shape of the face. In fact, the vast majority of genetic changes in populations – and perhaps many of the physical ones, too – may be due to drift rather than natural selection. "Do not assume that something is an adaptation until you have evidence," says biologist Larry Moran at the University of Toronto, Canada.
So it is wrong to think of evolution only in terms of natural selection; change due to genetic drift counts too. Moran's minimal definition does not specify any particular cause: "Evolution is a process that results in heritable changes in a population spread over many generations."
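Genetic drift is easy to see in a simulation. The sketch below uses standard Wright-Fisher-style sampling (a textbook model, not taken from any of the studies mentioned) to follow a selectively neutral allele, say white fur, in a small population; with no fitness difference at all, the allele is sometimes lost and sometimes takes over entirely. The population size, starting frequency and number of trials are arbitrary.

```python
# Minimal Wright-Fisher-style sketch: a selectively neutral allele drifting in a
# small population. No individual is fitter than any other, yet the allele still
# sometimes disappears and sometimes spreads to the whole population by chance.
import random

def drift(pop_size=50, start_freq=0.1, generations=500):
    count = int(pop_size * start_freq)
    for gen in range(generations):
        # Each offspring draws its allele at random from the current generation.
        freq = count / pop_size
        count = sum(random.random() < freq for _ in range(pop_size))
        if count == 0 or count == pop_size:
            return gen, count / pop_size
    return generations, count / pop_size

random.seed(42)
for trial in range(5):
    gens, final = drift()
    outcome = "fixed" if final == 1.0 else "lost" if final == 0.0 else "still drifting"
    print(f"trial {trial}: allele {outcome} after {gens} generations")
```

Run it a few times with different seeds and the point of the passage above comes through: heritable change in a population needs no selective advantage at all.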
It does not even have to involve many generations, says Michael Kinnison of the University of Maine in Orono, who studies how living species are evolving. Evolution occurs almost continuously, he says. It usually takes time for populations to change significantly, but sometimes it happens very fast, for instance when only individuals of a particular genetic type survive some catastrophe, or when only tumour cells with a particular mutation are not killed by a cancer drug.
In these cases, there is no need to wait for the survivors to reproduce to determine that the population has changed. "I would say evolution occurs whenever some process changes the distribution of heritable traits in a population, regardless of the time scale," Kinnison says. "While evolutionary biologists like to treat evolution as a generation-to-generation process, that is often more a matter of convenience than reality."
Suppose those white bear cubs somehow reached an island and founded a new bear population. The interbreeding of white bears always produces white offspring, and thus being white would be normal there. So we can boil down the concept of evolution to just six words: Evolution is what makes freaks normal.
Autism and Mutations
For years scientists searched fruitlessly for the causes of autism by looking for genes shared by families prone to the disorder. Now researchers taking a new approach have begun to unlock its secrets.
His name was David. He was 10 years old and, to put it bluntly, compellingly weird - especially in the buttoned-down, groomed normality of suburban Long Island in the early 1960s. At the time, Michael Wigler was a ninth-grade student in Garden City, and he liked to hang out at the home of his girlfriend. That’s where he encountered David, her younger brother. Half a century later, he still can’t get the boy out of his mind.
“He was just like from another planet - it was like meeting an alien,” says Wigler, who ended up a little further east on Long Island as a geneticist at Cold Spring Harbor Laboratory. “He was so different from anybody I had ever met before. First of all, he threw his arms about a lot. And then he moved his head around a lot and would never look at you when he talked to you. And he had an uncanny knowledge of baseball statistics. And I just thought, you know, ‘Boy, this guy is really different. I mean, he’s not just a little different. He’s very different.’”
In the 1950s and 1960s, children like David were pretty much anomalies without a name. Long after becoming a prominent cancer researcher, Wigler would mention him to colleagues, students, postdocs, writers, almost anyone. As one of those postdocs later recalled, “At the time, autism existed; they just didn’t call it autism, so Mike didn’t know this kid had that particular disorder.” Nonetheless, Wigler had become fascinated by the biological mystery that might explain such aberrant behavior. “I think it’s probably what got me interested in genetics,” he says.
Wigler, now 67, indeed devoted his career to genetics, establishing a reputation as one of the most original and productive thinkers in cancer research. So it was a bit of a surprise when, about 10 years ago, he jumped into autism research. Even more surprising has been what he and a few other maverick geneticists began to find.
One of the things Wigler had seen in cancer is that the disease usually arises because of spontaneous mutations. Rather than lurking in the population for generations and passing from ancestors to descendants, as in classic Mendelian illnesses like Huntington’s disease, these noninherited mutations popped up in one generation. They were fresh new changes in the DNA—de novo mutations, in the jargon of geneticists. As a cancer researcher, Wigler developed new techniques for identifying them, and that led to another surprise. Some of these new mutations were often stunningly complex—not just little typos in the DNA, but enormous chunks of duplicated or missing text, which often created unstable, mistake-prone regions in chromosomes.
All that—the memory of David, his successes in understanding cancer genetics, and the resulting realization that a focus on inheritance might miss some of the most significant disease-causing genes—served as background when, in the spring of 2003, Wigler received a phone call from James Simons, a wealthy hedge fund manager and cofounder (with his wife) of the Simons Foundation, whose daughter had been diagnosed with an autism spectrum disorder. The foundation had received a grant proposal for a research project, and Simons asked Wigler if he would be willing to evaluate it.
The researchers had proposed hunting for autism genes using conventional methods to look for inherited mutations passed down through families. Wigler didn’t mince his words. “I thought they were looking the wrong way,” he says now. “And I didn’t want to see all this wasted effort.”
Wigler, still fascinated by the boy he’d met some 40 years earlier, threw his own hat in the ring. “Autism?” he recalls telling Simons. “Autism? I want to work on autism.”
Beginning with a paper in Science in 2007 and culminating with a report published in Nature last October, Wigler’s group and its collaborators have written a dramatically different story about the genetic origins of autism spectrum disorders—a story so unexpected and “out of left field,” as Wigler puts it, that many other genetic researchers refused to believe it at first. Wigler and his colleagues have shown that many cases of autism seem to arise from rare de novo mutations—new wrinkles in the fabric of DNA that are not inherited in the traditional way but arise as last-minute glitches during the process in which a parent’s sperm or egg cells form.
Importantly, these rare mutations exert big effects on neurological development and function. Wigler’s methods have allowed researchers to zero in on numerous genes that are damaged in people with autism and begin to classify subtypes according to the genes involved. And they have begun to take the next step: using the specific genes as clues, they are working to identify critical pathways that may shed light on how the disorder works and suggest possible therapies.
Publishing Errors
It shouldn’t be surprising that the genetics of autism make for an extremely difficult puzzle. After all, autism disorders cover a spectrum characterized by everything from atypical yet highly functional behavior to severe intellectual disability—a jumble of excitation and withdrawal, stunning intellectual capacity and severe mental disability, kinetic explosions of movement and repetitive actions, and other symptoms seen to varying degrees in different people. And yet much current research is predicated on the belief that the tiniest aberration at the level of genes, in the wrong place at the wrong time in development, can produce the kinds of aberrant behavior that are the hallmark of autism: social awkwardness and repetitive thinking and actions.
Since the disorder was first described in 1943, by Leo Kanner of Johns Hopkins, people have been vexed by its complex and paradoxical nature. Researchers have put forward a series of hypotheses that have not survived scientific scrutiny, attributing it to everything from emotionally remote mothers to ingredients in childhood vaccines. Genetics had always been an obvious route to explore, because it was known that autism often runs in families. So researchers have spent years gathering data on affected families and looking for suspicious mutations passed down from parent to child.
Geneticists pored over genomes in search of small shared errors in the DNA that were seen frequently enough to explain the disorder. But overall, these attempts were consistently uninformative; to use Wigler’s characterization, they were “worthless.” Though the search turned up a few common genetic variants found in people with autism, each of these variants has only an insignificant effect. The effort to find the genetic causes of autism by this strategy was “a total failure,” says Gerald Fischbach, scientific director of the Simons Foundation.
That was precisely the point that Wigler made to James Simons when the foundation sought his advice. Wigler wanted to take the opposite approach: look for new mutations that were not shared by parents and children. Although extremely rare, these mutations were often very disruptive, creating devastating effects in a single generation; identifying them would be a much more effective way to discern which genes are especially important in autism. So Wigler urged the Simons Foundation to find families in which only one child had autism, while the parents and siblings did not. Thanks to their cancer research, he and his colleagues had already developed the technology to spot newly arising mutations, and it looked like a more powerful way to identify key autism-related genes, too.
Wigler’s move into autism came at an important juncture in the biology of developmental disorders. It was one thing to implicate new mutations in cancer, a disease that often results from genetic insults to a person’s DNA over a lifetime. It was quite another to suggest that de novo mutations played a major role in diseases that develop early in life. But scientists led by Wigler and a few others, including Evan Eichler at the University of Washington, had begun to find that the genome itself was not what previous researchers had envisioned. While the Human Genome Project had presented genomic DNA as a single thread of letters (the “sequence”), and researchers had then catalogued variations consisting primarily of thousands of small differences of a letter or two, “new school” geneticists were finding oddities: huge duplications, gaping holes, and vast tracts of repetitive segments, known collectively as copy number variants. “Let’s suppose you buy a book,” Wigler says. “We’re used to getting books where the cover’s on right, the pages are in order, and they tell a continuous story. But imagine a publisher that duplicated his pages, dropped some pages, changed the order of the pages. That’s what happens in the human genome. That’s copy number variation.”
This form of mutation turns out to appear with surprising frequency in the human genetic text. Wigler’s group first glimpsed the phenomenon in cancer cells, but his hunch was that similar “publishing” errors might also play a role in diseases like autism. Sure enough, when the researchers examined the genomes of people with autism, they often found weird, large-scale duplications or deletions of DNA—mutations not present in the mother or father. The fact that they were not inherited strongly suggested that they were recent corruptions of the genetic text, almost certainly arising in the sperm or egg cells of the parents.
As more families participated in the research, and as technologies for identifying mutations improved, this body of work painted a new picture of the genetics of autism (indeed, the genetics of neurocognitive disorders more generally), confirming that de novo mutations and copy number variations account for many cases of the disorder. And these mutations seem to be especially prevalent in genes that affect neurological development and cognition.
In October, Wigler’s group—with collaborators including Eichler at the University of Washington and Matthew State at the University of California, San Francisco—identified up to 300 genes potentially related to autism. Twenty-seven of them confer a significantly heightened risk when disrupted by these rare new mutations. Each specific de novo mutation is rare enough to be found in less than 1 percent of the autism population, but collectively they may account for 50 percent of all cases of autism, says the Simons Foundation’s Fischbach.
Some of these genes are active in the earliest weeks of prenatal brain development; others kick into gear after birth. Some affect the function of synapses, the junctions between nerve cells; others affect the way DNA is packaged (and activated) within cells. One gene, CHD8, previously linked by Eichler’s group to children with a severe form of autism, has also been linked to schizophrenia and intellectual disability. Subtypes of autism seem to be associated with mutations in certain genes, which may begin to explain such long-standing mysteries as why some cases of autism produce severe symptoms while others cause more modest behavioral tics.
The findings also provide insight into just why autism is so common. “Let me highlight a critical point, and one of the biggest insights to come from the genetics of autism,” says Jonathan Sebat, a professor at the University of California, San Diego, who previously worked in Wigler’s lab and helped to reveal this new genetic landscape. “We did not fully appreciate how plastic the genome is, in the sense of how much new mutation there is. The genome is mutating, evolving constantly, and there’s a steady influx of new mutations in the population. Every child born has roughly 60 new changes in their DNA sequence, and [one in] every 50 children born have at least one large rearrangement. This is a really significant contributor to developmental disorders.”
Another surprising discovery is that certain regions of the human genome seem especially prone to disruption. Not only do some of these genetic “hot spots” seem to be linked to many forms of autism, but some of them have a deep and significant evolutionary history. If you trace them back in time, as Evan Eichler’s laboratory has begun to do, you can begin to glimpse the emergence of precisely the traits that distinguish humans from all other animals. “It’s kind of a crazy idea,” Eichler says, “but it’s like autism is the price we pay for having an evolved human species.”
Copy number variations in one specific hot spot on the short arm of chromosome 16, for example, have been associated with autism. By comparing the DNA of chimpanzees, orangutans, a Neanderthal, and a Denisovan (another archaic human) with the genomes of more than 2,500 contemporary humans, including many with autism, Xander Nuttle, a member of Eichler’s group, has been able to watch this area on the chromosome undergo dramatic changes through evolutionary history.
At a meeting of the American Society of Human Genetics last fall, Nuttle reported that this mutation-prone region, which contains more than two dozen genes related to neurocognitive function, lies adjacent to an intriguing gene known as BOLA2 that seems to promote instability. Nonhuman primates have at most two copies of the gene; Neanderthals have two; contemporary humans have anywhere from three to 14, and the multiple copies of the gene appear in virtually every sample the researchers have looked at. This suggests that the extra copies of the BOLA2 gene, which predispose people to neurodevelopmental disorders like autism, must also confer some genetic benefit to the human species. Otherwise, evolutionary pressure would have scrubbed the duplications out of the genome. In other words, the same duplications that can lead to autism may also create what Eichler calls genetic “nurseries” in which new gene variants arise that enhance cognition or some other human trait.
“The evolutionary twist on this whole story,” says Eichler, “is that our genome is really set up to fail, in the sense that we’re prone to delete and duplicate. The flip side of it is that that selective disadvantage is offset by the emergence of novel genes that have conferred an advantage to us cognitively.”
Diagnosing Hope
The recent advances in autism genetics have yet to make much difference in treatment. Thomas Insel, director of the National Institute of Mental Health, put the new findings in perspective in an interview with a reporter from the Simons Foundation at the Society for Neuroscience meeting last November. “This has been an incredible period of discovery,” said Insel, “but families are looking for interventions, not papers.”
As genetic researchers identify more genes involved in autism, they are beginning to classify autism cases according to their association with particular mutations. Eichler’s team, for example, recently gathered a group of patients with a mutation in the CHD8 gene. And “lo and behold,” Eichler says, the individuals shared many symptoms: 73 percent, for example, had severe gastrointestinal problems (CHD8, the researchers subsequently discovered, is also active in the gut). Such findings may in turn point to gene-specific interventions someday. The long-range hope is that as more rare mutations associated with autism are uncovered, the affected genes will tend to converge in ways that suggest molecular pathways critical to neurological development and function. Researchers are quick to point out that de novo mutations are only part of the autism story.
Scientists continue to hunt for inherited mutations and common variations that may also play important roles. But by using de novo mutations to spotlight some of the genes involved, Wigler and others have provided renewed hope for the field. Indeed, though Wigler concedes that there is “a long way to go” before genetic findings translate into useful medicines, he sees therapeutic possibilities in the very nature of those mutations. “Because the kids that have autism have one bad gene and one good gene, I think there should be ways of getting that good gene to be more active, and probably reversing things,” he says.
The genetic findings also suggest that even more dramatic (and ethically provocative) forms of therapy may be possible in the more distant future. “For many of the genes that we now think are important for autism, the genes are essentially [active] at eight to 16 weeks of development,” says Eichler. “So you have to not only make a diagnosis early, but some people argue that you have to intervene early in order to make a big difference.” And because many of the genes in question are also related to intelligence, Wigler says, it will be tempting to harness emerging technologies like prenatal genome analysis and precise new gene-editing tools as part of broader interventions in cognitive development. “It’s a little dangerous to tap into it,” he adds, “because we’re getting to designer babies and the Gattaca world. The autism world does bring us face to face with some science fiction stuff.”
As urgently as Wigler wants to understand the puzzle of autism, even he abides by certain limitations on his curiosity. Asked if he had ever been tempted to reconnect with David, the autistic boy who inspired his original interest in the disease, he practically recoiled. “No,” he said quickly. “That would be intruding.” But he still can’t stop talking about his old girlfriend’s brother with something like awe. “It wasn’t like he was trying to be different, you know? He wasn’t,” he said. “If anything, he was probably doing the opposite. But he was just really different. And it was an amazing thing.”
Confront the Weasels
Darwinism is true in the complex sense that scientific theories always are—not fixed in its particulars, immutable and imposing, but rich, changing, and ever more explanatory.
Darwin Day, February 12th, passed last week without much fuss, even from those of us who have written at length about the man it honors. Celebrating Charles Darwin’s birthday has some of the vibe of Linus waiting for the Great Pumpkin—there’s a hope, and a ritual, but it can be pretty lonely. There was, however, one striking sort of counter-ceremony: the Wisconsin governor and would-be Republican Presidential candidate Scott Walker, asked, in London, if he “believed in evolution,” took a pass. “I’m going to punt on that one as well,” he said. “That’s a question a politician shouldn’t be involved in one way or the other.”
It does seem slightly odd to ask a man running for President—or, for that matter, for dogcatcher—to recite a catechism on modern science. It somehow puts one in mind of the stern and classic catechism of the Catholic Church, and the questions posed, in memorably ironic form, in “The Godfather,” when Michael Corleone attends his godson’s christening even as his boys are killing the heads of rival families. The priest asks, “Do you renounce Satan … and all his works?” Michael responds, “I do renounce them,” even as he doesn’t. One hears a British voice similarly demanding such things of American politicians: “Do you believe in an expanding universe with a strong inflationary instance in the first micro-seconds?” “I do so believe.”
But the notion that the evolution question was unfair, or irrelevant, or simply a “sorting” device designed to expose a politician as belonging to one cultural club or another, is finally ridiculous. For the real point is that evolution is not, like the Great Pumpkin, something one can or cannot “believe” in. It just is—a fact certain, the strongest and most resilient explanation of the development of life on Earth that there has ever been. And yet, as the Times noted, after Walker’s London catechism, “none of the likely Republican candidates for 2016 seem to be convinced. Former Gov. Jeb Bush of Florida said it should not be taught in schools. Former Gov. Mike Huckabee of Arkansas is an outright skeptic. Senator Ted Cruz of Texas will not talk about it. When asked, in 2001, what he thought of the theory, Gov. Chris Christie of New Jersey said, ‘None of your business.’ ”
What the question means, and why it matters, is plain: Do you have the courage to embrace an inarguable and obvious truth when it might cost you something to do so? A politician who fails this test is not high-minded or neutral; he or she is just craven, and shouldn’t be trusted with power. This catechism’s purpose—perhaps unfair in its form, but essential in its signal—is to ask, Do you stand with reason and evidence sufficiently to anger people among your allies who don’t?
Darwinism, or evolutionary biology, is true in the complex sense that scientific theories always are—not fixed in its particulars, immutable and imposing, but rich, changing, and ever more explanatory. (There are evolutionary biologists who protest against the simple “Darwinism” label, against “branding” it like a single-barrel Bourbon, but movement names tend to be taken, not chosen.) Evolution may be hard to accept, but it’s easy to understand. All the available evidence collected within the past hundred and fifty years is strongly in its favor, and no evidence argues that it is in any significant way false. Life on Earth proceeds through the gradual process of variation and selection, with the struggle for existence shaping its forms. Nobody got here all in one piece; we arrived in bits and were made up willy-nilly, not by the divine designer but by the tinkering of time.
There were not enough fossils in Darwin’s own lifetime to do more than offer a hunch about what they’d show, but the fossils unearthed since show that Darwin’s hunches were right—particularly about the evolution of man from early primates, which turns out to be confirmed by a particularly dense and eloquent sequence of skulls and skeletons. There was no genetic evidence when Darwin wrote, but all the genetic evidence that came after not only fits the evolutionary scheme but helps to explain its mechanisms. The DNA evidence, indeed, slips into the fossil evidence seamlessly. Darwinism is easily falsifiable, and it has survived every possible test. That’s a good theory—it rises above the pumpkin patch and beams right down.
While there is no debate about Darwinian theory, there are endless debates within Darwinian theory. The controversies are loud and real: Are the mutations offered up to selection always truly random, or could they be in some ways pre-winnowed? How gradual does “gradual” have to be? Is everything we find in an animal an adaptation, or does simple genetic drift and accident account for some part of biological change? There is always a controversy, in that sense, because science is an organized controversy, a self-correcting debate. Controversy is what Darwin wanted to start, and did.
But evolutionary biology is not an ideology, which one believes in or doesn’t. What it demands is not belief but what science always demands, and that is the ability to evaluate the evidence and hear out the theory, and to poke holes in it if you can. So far, the fabric remains defiantly unpoked, the holes either unmade or else readily mended, with the stitching improving the tensile strength of the whole.
Here, though, the Republican candidates might have taken a lesson, or even comfort. Evolutionary biology certainly renders a certain sort of Biblical literalism untenable. But it is compatible with any number of readings of the Bible, and with very different political belief systems—there are, and have been from the start, Marxist Darwinians and liberal ones, Catholic evolutionary biologists and Jewish ones, transgender Darwinians and gay ones, conservative Darwinians and radical ones, and, somewhere out there, doubtless, a Wiccan or two is doing important work on the flat worm.
But if Darwinian biology is open to every view of life, the opposite is not true. That is where the catch comes in, and why the question matters. Opposition to evolutionary biology is overwhelmingly tied to an investment in some kind of defiantly anti-rational ideology: in our time, to fundamentalist Christian reaction; in dark days past in the Soviet Union, to the Lysenkoist belief in culture-made traits. To oppose Darwinian biology is not to announce yourself neutral or disinterested or even uninterested. It is to announce yourself against the discoveries of science, or so frightened of those who are that you can be swayed from answering honestly.
But couldn’t someone who thinks the Earth is flat still be a perfectly fine dogcatcher? Well, yes—until he stops chasing the dogs racing ahead of him because he thinks they’re about to run off the edge of the Earth. Evolutionary science is not abstract—evaluating reports of a “superbug” in Los Angeles, made immune to antibiotics by natural selection, means applying Darwinian principles as they go about their often scary work. The institutions of Big Science certainly have interests like any other, and the bureaucracies of science have orthodoxies of their own. But scientific reasoning is the basic way human beings achieve knowledge about their world.
At the end of the week, Governor Walker responded on Twitter with a tweet of a kind that some bright fourteen-year-old has doubtless already dubbed a “tweasel”: a seemingly explanatory or apologetic tweet couched in obvious weasel wording: “Both science & my faith dictate my belief that we are created by God. I believe faith & science are compatible, & go hand in hand.”
Darwin himself, of course, avoided arguments with politicians and other public types as best he could, writing once that “direct arguments against Christianity and theism produce hardly any effect on the public; and freedom of thought is best promoted by the gradual illumination of men’s minds which follows from the advance of science.” Darwin suffered from the optimism of the Victorian age. Defiantly unapologetic irrationalism is, sad to say, still a winning strategy for power, all over the world. But we pay a huge price for its successes. Darwin’s coalition of light has a better record.
Teach It
Last month, Scott Walker, the governor of Wisconsin and a presumed Presidential candidate, delivered an address at Chatham House, an international-affairs think tank in London. For Walker, the point of the address was to bolster his foreign-policy credentials. That’s probably why the last question — “Are you comfortable with the idea of evolution?” — took him by surprise. “I’m going to punt on that one,” he said.
It’s obvious why politicians avoid the evolution question. A large fraction of the population — including more than fifty per cent of Republican voters — doesn’t believe in it. But politicians aren’t the only ones who punt. When it comes to questions that confront religious beliefs, many scientists and teachers do it, too. Recent studies — including a comprehensive national survey by researchers at Penn State University, in 2007 — show that up to sixty per cent of high-school biology teachers shy away from adequately teaching evolution as a unifying principle of biology. They don’t want to risk controversy by offending religious sensibilities. Instead, many resort to the idea, advocated by the late Stephen Jay Gould, that science and religion are “non-overlapping magisteria” — separate traditions of thinking that need not contradict one another.
“Non-overlapping magisteria” has a nice ring to it. The problem is that there are many religious claims that not only “overlap” with empirical data but are incompatible with it. As a scientist who also spends a fair amount of time in the public arena, if I am asked if our understanding of the Big Bang conflicts with the idea of a six-thousand-year-old universe, I face a choice: I can betray my scientific values, or encourage that person to doubt his or her own beliefs. More often than you might think, teaching science is inseparable from teaching doubt.
Doubt about one’s most cherished beliefs is, of course, central to science: the physicist Richard Feynman stressed that the easiest person to fool is oneself. But doubt is also important to non-scientists. It’s good to be skeptical, especially about ideas you learn from perceived authority figures. Recent studies even suggest that being taught to doubt at a young age could make people better lifelong learners. That, in turn, means that doubters — people who base their views on evidence, rather than faith — are likely to be better citizens.
Last year, writing in the Times, the political scientist Brendan Nyhan explained how “identity often trumps the facts.” We would rather reject evidence than change our sense of who we are. Knowledge is comparatively helpless against identity: as you grow better-informed about the issues, you just get better at selectively using evidence to reinforce your preëxisting commitments. A 2014 Yale Law School study, for example, demonstrated that the divergence between religious and non-religious peoples’ views on evolution actually grows wider among those who are familiar with math and science. Describing Nyhan’s work for this Web site, Maria Konnikova summarized his findings by writing that “it’s only after ideology is put to the side” that the facts become “decoupled from notions of self-perception.” One conclusion we might draw is that we ought to resist ideology in the first place. If we want to raise citizens who are better at making evidence-based judgments, we need to start early, making skepticism and doubt part of the experience that shapes their identities from a young age.
Meanwhile, earlier this year, an AP-GfK poll revealed that less than a third of Americans are willing to express confidence in the reality of human-induced climate change, evolution, the age of the Earth, and the existence of the Big Bang. Among those surveyed, there was a direct correlation between religious conviction and an unwillingness to accept the results of empirical scientific investigation. Religious beliefs vary widely, of course — not all faiths, or all faithful people, are the same. But it seems fair to say that, on average, religious faith appears to be an obstacle to understanding the world.
Science class isn’t the only place where students can learn to be skeptical. A provocative novel that presents a completely foreign world view, or a history lesson exploring the vastly different mores of the past, can push you to skeptically reassess your inherited view of the universe. But science is a place where such confrontation is explicit and accessible. It didn’t take more than a simple experiment for Galileo to overturn the wisdom of Aristotle. Informed doubt is the very essence of science.
Some teachers shy away from confronting religious beliefs because they worry that planting the seeds of doubt will cause some students to question or abandon their own faith or the faith of their parents. But is that really such a bad thing? It offers some young people the chance to escape the guilt imposed upon them simply for questioning what they’re told. Last year, I received an e-mail from a twenty-seven-year-old man who is now studying in the United States after growing up in Saudi Arabia. His father was executed by family members after converting to Christianity. He says that it’s learning about science that has finally liberated him from the spectre of religious fundamentalism. The same week, I received an e-mail from a young man who lives in Indiana; he feels isolated and damaged because of the reaction of his friends and family to his rejection of religion and his love of science. I get e-mails like this regularly. We owe it to these young people to help them feel, as another young letter-writer put it, that “I’m not the only one who has these thoughts.”
Religious fundamentalism exists closer to home than you might imagine. Consider Roy Moore, the chief justice of the Alabama Supreme Court, famous for refusing to remove the Ten Commandments from his courtroom wall: in a recent speech, he declared that the First Amendment only applies to Christians. Or consider the new freshman class in the House of Representatives: it includes Jody Hice, a man who claims that “blood moons” are fulfilling Biblical prophecies. In a recent decision, Pope Francis officially recognized, under canon law, the International Association of Exorcists. He called exorcism “a form of charity.” (When I tweeted about the decision, another user pointed out that the policy must be working—after all, no one has seen any demons recently.)
A new generation is always more comfortable dispensing with old ideas than are its predecessors; in this sense, we are never more than a generation away from altering long-held beliefs. The battle for gay marriage, for instance, has already been won because it is simply a non-issue for young people. Is it naïve to imagine that we can overcome centuries of religious intransigence in a single generation through education?
One thing is certain: if our educational system does not honestly and explicitly promote the central tenet of science—that nothing is sacred — then we encourage myth and prejudice to endure. We need to equip our children with tools to avoid the mistakes of the past while constructing a better, and more sustainable, world for themselves and future generations. We won’t do that by dodging inevitable and important questions about facts and faith. Instead of punting on those questions, we owe it to the next generation to plant the seeds of doubt.
British DNA
Nordic incursions into our DNA are few and far between, genetic scientists discover.
There was certainly pillaging, probably a fair bit of razing — and doubtless more than one monk found himself on the wrong end of a Viking’s battleaxe. Yet on one point we should reappraise our view of Norse depravity: their military conquest, it seems, was not accompanied by a sexual one.
The most comprehensive study of Britain’s genetic make-up has found that successive invasions over the millennia — whether by the Romans, the Normans or the Vikings — have had little impact on who we are. It has also found that what did have an effect, to a degree that surprised the scientists, is where our grandparents came from.
Such has been the historical lack of intermarriage between different regions that the present Devon-Cornwall border almost perfectly matches a genetic one. If the different parts of the West Country were unwilling to have sex with each other, then the rest of Britain could only concur — the border of both counties with the rest of England marks yet another genetic barrier.
Further north, the fine distinctions continue. People from the northwest can trace a DNA inheritance distinct from that on the east coast, and in parts of Wales you can take a short walk between two villages and find yourself moving between people who, for centuries, refused to marry. Little wonder then that the Vikings made so little headway.
The research showed that, despite the fact that a region of England stretching from London to Durham was once ruled by Vikings and known as the Danelaw, there is almost no evidence from that time of a similar Scandinavian incursion into our DNA.
The scientists said, however, that this is possibly less because of the enlightened sexual etiquette of the Vikings than that there just were not enough of them to make a difference.
“What is amazing about the military powers of the Norse and Danes is that a relatively small Viking army was capable of wreaking tremendous havoc,” said Professor Peter Donnelly, from the Wellcome Trust Centre for Human Genetics. A disorganised Britain barely out of the Dark Ages was no match for the Norse men.
Afterwards, though, this small force settled into the population and — genetically speaking — was, it seems, itself conquered. Even in Orkney, which was ruled by Norway for six centuries, just a quarter of the locals’ DNA was found to have Viking origin.
The study, published in Nature, involved genetic testing of more than 2,000 people across Britain, and comparing that with 6,000 such tests across Europe. The Britons chosen were those who lived in the same place as their grandparents did. In this way the authors hoped to minimise the effects of 20th-century migrations, and show the geographical link to genetic heritage.
The authors said the work had added a new layer to our understanding of early Britain. “What we know about history comes from the elites,” said Professor Donnelly. “Genetics, though, tells us about the masses.” And what it tells us is that they were a conservative bunch.
“It was extraordinary when we looked at the maps.” He said he found the difference between Devon and Cornwall most interesting. The geographical boundary of the Tamar river in the south and Bodmin Moor in the north seemed to have kept the counties from intermarrying for millennia.
Cultural barriers proved just as strong. “In southwest Wales there are two genetic groups. At the very tip the language is English. Move in, though, and it becomes Welsh.” The genetic groupings reflected this.
The arrival of Norse raiders was a cataclysmic event. Alcuin of York described it in a letter to Ethelred, king of Northumbria. “Never before has such terror appeared in Britain as we have now suffered from a pagan race,” he wrote. “Behold the church of St Cuthbert spattered with the blood of the priests of God.”
Now we know that the Vikings’ ability to turn a small number of raiders into the most efficient monastery-bothering force Britain would see until Henry VIII did not extend to their loins.
“However effective they could be, there were not a lot of them,” said Sir Walter Bodmer, from the University of Oxford and an author on the paper. “They couldn’t actually penetrate a large number of local women.”
Anglo-Saxons
When the Romans left, Britain became a failed state. Infrastructure fell apart, warlords fought among themselves and, amid this chaos, the Anglo-Saxons arrived, their culture rapidly predominating.
The problem of how they achieved such cultural supremacy so quickly has divided historians. Was there a genocide? A mass displacement of peoples, pushing the ancient Britons to the Celtic fringe? It turns out that the only imperialism was of the cultural kind, or so the genetics imply: Britons just preferred the Germanic culture to their own.
“The Anglo-Saxons settled parts of southern England and brought a very different culture with them,” said Mark Robinson, an archaeologist at the University of Oxford. “It was much more primitive in terms of material goods and organisation than the Roman one. The language shifted to Anglo-Saxon and the culture became that of northwest Germany — regressing by perhaps 1,000 years. Perhaps Romano-British culture was just associated with failure.”
These days most people in south and central England have about 20 per cent Anglo-Saxon DNA. “What this is implying is once the Saxons arrived they mixed and intermarried,” he said.
Celts
Britain’s Celtic fringe may feel that they share a common culture and many common aspirations, but what they don’t share, it seems, is genes. Of all the groupings found within Britain, those on its extremities were found to be the most different of all.
There are more similarities between people from Kent and Glasgow than there are between those from north and south Wales.
“We saw no evidence of a general ‘Celtic’ population in non-Saxon parts of the UK,” the authors of the study write. “Instead there were many distinct genetic clusters in these regions, some among the most different in our study.”
Welsh
The Welsh can claim to be the last true Britons. The study found that, long before the Romans arrived, there were waves of migration across the Channel. These colonists gradually mixed with the post-Ice Age settlers, forming the mongrel race that greeted — or attacked — the Romans. Those early settlers persisted in the Welsh valleys, where descendants can be found of the first people to make a home in Britain after the glaciers receded.
(Letters to The London Times)
Sir, Your article on the shortage of Norse genes in the British DNA mix (Mar 19) does not tell the whole story.
It mentions the “cataclysmic event” of AD793, when Norse raiders arrived at Lindisfarne, as told by Alcuin of York in a letter to the Northumbrian king, but it does not spill the beans on the events of 1002 when Ethelred the Unready sent an edict by messenger to every corner of England, instructing the natives to ensure the killing of all Danes: men, women and children.
They turned out on St Brice’s night as instructed, and on November 13, 1002, the slaughter of everyone reckoned to be of Danish origin was accomplished — the eradication of Danish/Norse DNA within England.
Place names have survived, as have other bits of their language — but no Viking settler or suspected raider was left alive.
Sir, As Tom Whipple’s report points out, the Britons studied lived in the same place as their grandparents. It could be that this desire to stay put is genetic and that people of this disposition would marry almost exclusively within their genetic matches. This may also help to explain why the migratory and military populations were missing: they were not in the study because their ancient and recent ancestors were more likely to be spread across our islands and the world.
Sir, Although the Vikings made little impact on Britain’s DNA, their effect on genes travelling in the opposite direction is indisputable. Iceland has the reputation of being a pure-bred Viking land, settled almost entirely by exiles from Norway; yet DNA research there has revealed that during the Viking age 50 per cent of female Icelanders and 20 per cent of males had Celtic (Scottish or Irish) blood.
How these “Celtic Vikings” came to be is exemplified by the case of a woman called Melkorka. The daughter of an Irish king, she was abducted from her homeland in a Viking raid, transported to Norway, sold on to a Russian slave trader and eventually purchased by an Icelandic chieftain who took her home as his concubine — to the dismay of his wife. Her story is told in the medieval Laxdaela Saga, believed to be based on the lives of real people who lived in the 9th to 11th centuries.
Sir, The Vikings may not have had a big genetic influence but were they not responsible for the most sensible modification of the English language, ie moving the verb from the end of the sentence (as seen in Germanic Anglo-Saxon) to between the subject and the object? “Romeo loved Juliet” sounds a lot pleasanter than “Romeo Juliet loved”.
White Skinned People
Most of us think of Europe as the ancestral home of white people. But a new study shows that pale skin, as well as other traits such as tallness and the ability to digest milk as adults, arrived in most of the continent relatively recently. The work, presented here last week at the 84th annual meeting of the American Association of Physical Anthropologists, offers dramatic evidence of recent evolution in Europe and shows that most modern Europeans don’t look much like those of 8000 years ago.
The origins of Europeans have come into sharp focus in the past year as researchers have sequenced the genomes of ancient populations, rather than only a few individuals. By comparing key parts of the DNA across the genomes of 83 ancient individuals from archaeological sites throughout Europe, the international team of researchers reported earlier this year that Europeans today are a blend of at least three ancient populations of hunter-gatherers and farmers who moved into Europe in separate migrations over the past 8000 years. The study revealed that a massive migration of Yamnaya herders from the steppes north of the Black Sea may have brought Indo-European languages to Europe about 4500 years ago.
Now, a new study from the same team drills down further into that remarkable data to search for genes that were under strong natural selection—including traits so favorable that they spread rapidly throughout Europe in the past 8000 years. By comparing the ancient European genomes with present-day ones from the 1000 Genomes Project, population geneticist Iain Mathieson, a postdoc in the Harvard University lab of population geneticist David Reich, found five genes associated with changes in diet and skin pigmentation that underwent strong natural selection.
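Mathieson’s actual scan involves careful statistical modelling, but its core intuition can be sketched briefly: compare how common a variant was in the ancient genomes with how common it is in present-day Europeans, and treat the largest shifts as candidates for selection. The Python sketch below illustrates only that intuition. The gene names come from this article, but every frequency is invented for the example; this is not the team’s method or its data.

```python
# Toy illustration of the selection-scan intuition described above.
# Gene names are from the article; all frequencies are invented.

ancient_freq = {"LCT": 0.05, "SLC24A5": 0.40, "SLC45A2": 0.30, "NEUTRAL_SNP": 0.52}
modern_freq = {"LCT": 0.70, "SLC24A5": 0.99, "SLC45A2": 0.95, "NEUTRAL_SNP": 0.50}

def frequency_shift(gene):
    """Absolute change in allele frequency between ancient and modern samples."""
    return abs(modern_freq[gene] - ancient_freq[gene])

# Rank variants by how much their frequency changed: shifts this large over a
# few thousand years are hard to explain by drift alone, so they are flagged
# as candidates for natural selection.
for gene in sorted(ancient_freq, key=frequency_shift, reverse=True):
    print(f"{gene}: shift = {frequency_shift(gene):.2f}")
```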
First, the scientists confirmed an earlier report that the hunter-gatherers in Europe could not digest the sugars in milk 8000 years ago, according to a poster. They also noted an interesting twist: The first farmers also couldn’t digest milk. The farmers who came from the Near East about 7800 years ago and the Yamnaya pastoralists who came from the steppes 4800 years ago lacked the version of the LCT gene that allows adults to digest sugars in milk. It wasn’t until about 4300 years ago that lactose tolerance swept through Europe.
When it comes to skin color, the team found a patchwork of evolution in different places, and three separate genes that produce light skin, telling a complex story of how Europeans’ skin evolved to be much lighter during the past 8000 years. The modern humans who came out of Africa to originally settle Europe about 40,000 years ago are presumed to have had dark skin, which is advantageous in sunny latitudes. And the new data confirm that about 8500 years ago, early hunter-gatherers in Spain, Luxembourg, and Hungary also had darker skin: They lacked versions of two genes—SLC24A5 and SLC45A2—that lead to depigmentation and, therefore, pale skin in Europeans today.
But in the far north—where low light levels would favor pale skin—the team found a different picture in hunter-gatherers: Seven people from the 7700-year-old Motala archaeological site in southern Sweden had both light skin gene variants, SLC24A5 and SLC45A2. They also had a third gene, HERC2/OCA2, which causes blue eyes and may also contribute to light skin and blond hair. Thus ancient hunter-gatherers of the far north were already pale and blue-eyed, but those of central and southern Europe had darker skin.
Then, the first farmers from the Near East arrived in Europe; they carried both genes for light skin. As they interbred with the indigenous hunter-gatherers, one of their light-skin genes swept through Europe, so that central and southern Europeans also began to have lighter skin. The other gene variant, SLC45A2, was at low levels until about 5800 years ago when it swept up to high frequency.
The team also tracked complex traits, such as height, which are the result of the interaction of many genes. They found that selection strongly favored several gene variants for tallness in northern and central Europeans, starting 8000 years ago, with a boost coming from the Yamnaya migration, starting 4800 years ago. The Yamnaya have the greatest genetic potential for being tall of any of the populations, which is consistent with measurements of their ancient skeletons. In contrast, selection favored shorter people in Italy and Spain starting 8000 years ago, according to the paper now posted on the bioRxiv preprint server. Spaniards, in particular, shrank in stature 6000 years ago, perhaps as a result of adapting to colder temperatures and a poor diet.
Surprisingly, the team found no immune genes under intense selection, which is counter to hypotheses that diseases would have increased after the development of agriculture.
The paper doesn’t specify why these genes might have been under such strong selection. But the likely explanation for the pigmentation genes is to maximize vitamin D synthesis, said paleoanthropologist Nina Jablonski of Pennsylvania State University (Penn State), University Park, as she looked at the poster’s results at the meeting. People living in northern latitudes often don’t get enough UV to synthesize vitamin D in their skin so natural selection has favored two genetic solutions to that problem—evolving pale skin that absorbs UV more efficiently or favoring lactose tolerance to be able to digest the sugars and vitamin D naturally found in milk. “What we thought was a fairly simple picture of the emergence of depigmented skin in Europe is an exciting patchwork of selection as populations disperse into northern latitudes,” Jablonski says. “This data is fun because it shows how much recent evolution has taken place.”
Lactose Tolerance and Prosperity
HUMANS can digest lactose, the main carbohydrate in milk, only with the help of an enzyme called lactase. But two-thirds of people stop producing it after they have been weaned. The lucky third—those with “lactase persistence”—continue to produce it into adulthood. A recent paper* argues that this genetic quirk helps explain why some countries are rich and others poor.
Justin Cook of the University of California, Merced, uses data on historical migration flows to estimate the ethnic composition of 108 countries in Africa, Asia and Europe in 1500. He then estimates what proportion of the population would have been able to digest milk, using data on the lactose tolerance of different ethnic groups (which he assumes has not changed much over the centuries). Pre-colonial countries in western Europe tended to have the highest rates of lactase persistence, Mr Cook estimates. Some 96% of Swedes had it, for instance. The lowest levels were in Sub-Saharan Africa and South-East Asia.
A one-standard-deviation increase in the incidence of lactase persistence, in turn, was associated with a 40% rise in population density. People who could digest milk, the theory goes, used resources more efficiently than those who couldn’t. They could extract liquid energy from livestock, in addition to the wool, fertiliser, ploughing power and meat for which others raised them. The white stuff may have helped in other ways too: its fats, proteins, vitamins and minerals added balance to the pre-colonial diet, reducing the incidence of disease. If used as a substitute for breast-feeding, animal milk could have reduced weaning time and, thus, the time between mothers’ pregnancies. All this suggests that milk-guzzling societies could support higher population densities (although it remains puzzling that lactase persistence evolved in parts of Africa, but did not spread).
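Cook’s argument rests on two quantitative steps that are easy to gloss over: build a country-level lactase-persistence figure by weighting each ethnic group’s tolerance rate by its estimated share of the population in 1500, then relate that figure to pre-colonial population density. The short Python sketch below restates that bookkeeping with invented numbers; it is an illustration of the logic only, not a reproduction of the paper’s data or its regression controls.

```python
# Sketch of the two-step logic described above; all numbers are invented.

# Step 1: country-level lactase persistence in 1500, as a population-weighted
# average of the tolerance rates of the ethnic groups present.
ethnic_shares = {"group_a": 0.7, "group_b": 0.3}        # hypothetical composition
persistence_rates = {"group_a": 0.96, "group_b": 0.20}  # hypothetical tolerance rates

country_persistence = sum(
    share * persistence_rates[group] for group, share in ethnic_shares.items()
)
print(f"Estimated lactase persistence in 1500: {country_persistence:.2f}")

# Step 2: the headline association restated as arithmetic. A country one
# standard deviation higher in persistence is associated with roughly 40%
# higher pre-colonial population density.
baseline_density = 10.0  # hypothetical people per square kilometre
print(f"Implied density one standard deviation up: {baseline_density * 1.40:.1f}")
```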
When people are tightly bunched together, the theory goes, growth takes off. Rulers find it easier to build infrastructure and administer the law, including property rights. Cities can develop, which allows workers to specialise. Technological innovation explodes; bigger armies can defend what is produced. Small wonder, then, that places with high population density in pre-colonial times tend to be relatively rich today. No single factor can explain long-run economic outcomes, of course, but Mr Cook’s idea may be worth milking.
Beetlemania
“AN INORDINATE fondness for beetles.” That was the reply of J.B.S. Haldane, a British scientific polymath of the early 20th century, when he was asked if there were anything that could be concluded about God from the study of natural history. There are 380,000 catalogued species of beetle, making them the most species-rich group of insects—and insects are the most species-rich group of animals. But why there are so many has been a mystery.
As they report in the Proceedings of the Royal Society, Dena Smith of the University of Colorado and Jonathan Marcot of the University of Illinois think biologists have been barking up the wrong tree on the matter of beetle diversity. Previous attempts to explain it considered reasons why new beetle species evolve: their catholic tastes in food, for example, would open lots of ecological niches. Dr Smith and Dr Marcot have looked from the other end of the microscope and asked if the explanation might rather be that, once a beetle species has appeared, it is less likely to become extinct than other animal species are.
To assess this idea they examined beetles’ fossil record. In general, insects are not well preserved down the ages, so their fossil record is patchy. But beetles, which have strong exoskeletons, preserve better than most, and the two researchers were able, by searching the world’s palaeontological archives, to assemble a list of 5,503 fossil species collected from 221 sites.
They divided the past 300m years, the period during which beetles have existed, into a dozen 25m-year blocks and looked at the number of consecutive blocks in which each known beetle family was represented. (A family is the taxonomic classification level above a genus, and it is used by palaeontologists interested in extinction rates because the randomness of preservation makes it hard to know when, exactly, a species or a genus really has vanished from the face of the Earth.) About 90% of modern beetle species belong to a group (technically, a suborder, which is one level up the classification ladder from a family) called the Polyphaga. And that dominance, and thus, in essence, the dominance of the beetles, Dr Smith and Dr Marcot found, is indeed because polyphagans seem hard to exterminate.
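The bookkeeping behind that comparison is straightforward to sketch: assign every fossil occurrence to one of the twelve 25m-year blocks and, for each family, count the blocks, and the consecutive runs of blocks, in which it appears. The Python sketch below uses a handful of invented occurrence records to show that counting step only; it is not the researchers’ dataset or their full analysis.

```python
# Sketch of the block-counting step described above, with invented fossil
# occurrences given as (family, age of occurrence in millions of years).
occurrences = [("Family_A", age) for age in (220, 190, 160, 140, 110, 90, 60, 40, 10)]
occurrences += [("Family_B", 290), ("Family_B", 270)]

BLOCK = 25  # width of each time block, in millions of years

# Map each occurrence to a block index (0 = the most recent 25m years).
blocks_per_family = {}
for family, age_ma in occurrences:
    blocks_per_family.setdefault(family, set()).add(age_ma // BLOCK)

def longest_run(blocks):
    """Longest stretch of consecutive blocks in which a family is recorded."""
    ordered = sorted(blocks)
    best = run = 1
    for prev, curr in zip(ordered, ordered[1:]):
        run = run + 1 if curr == prev + 1 else 1
        best = max(best, run)
    return best

for family, blocks in sorted(blocks_per_family.items()):
    print(f"{family}: present in {len(blocks)} blocks, longest run {longest_run(blocks)}")
```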
Since the Polyphaga appeared about 220m years ago, no family belonging to the suborder has become extinct—even at the end of the Cretaceous, 66m years ago, when an asteroid strike did for the dinosaurs and many other types of animal. As new families have appeared, therefore, the diversity of the Polyphaga has inevitably increased. Indeed, as they have prospered, other groups of beetles have withered. When they first appeared, they were one of ten coleopteran suborders. Now, only three other suborders remain.
Why this group of beetles has such resilience to extinction remains unknown. But at least biologists musing on why there are so many beetles around can now look in the right direction for the answer. Were Haldane alive today, he might want to refine his answer to “an inordinate fondness for Polyphaga”.
Your Bad Back
1. Thanks to evolution, your back is a marvel of load-bearing support and flexibility — and kind of a mess. Our species is prone to back pain, for example, because our ancestors’ imperfect transition to upright walking essentially took a spine similar to that of our nearest living relatives, knuckle-walking chimpanzees, and forced it vertical with piecemeal adaptations.
2. A 2015 study found that some people are, well, chimpier than others. Humans prone to certain back problems have vertebrae closer in shape to those of a chimpanzee than those of pain-free humans.
3. Regardless of shape, you might have more (or fewer) of the bones than your neighbor. Not everyone has the standard 33 vertebrae: From top to tail, that’s seven cervical, 12 thoracic, five lumbar, five sacral and four coccygeal.
4. The number of vertebrae in individual Homo sapiens actually varies between 32 and 35, with the biggest range of difference in the pelvic area.
5. The four natural curves in our spines develop at different times. Both the thoracic (midback) and sacral (pelvic), which develop early in embryos, curve outward.
6. The other two curves, which bend inward, become more pronounced at key points in infant development: the cervical curve, when a baby can hold up its head; and the lumbar, when the li’l tyke begins to walk.
7. Every doctor who’s ever examined you puts that ice-cold stethoscope on your triangle of auscultation, a quiet zone in between three major muscles near the base of your shoulder blade, where it’s easier to hear your lungs.
8. Lower back pain, our most common backache, may not have been as big a deal for our Neanderthal cousins. A 2008 study in the European Spine Journal found that the lower spines of two adult Neanderthals showed little of the degeneration associated with a life of heavy physical activity, which we believe they experienced.
9. The secret to Neanderthals’ better back health is apparently a combination of heavier musculature supporting their spines and lumbar kyphosis, a reverse curvature of the lower spine that, in our species, is considered abnormal.
10. The oldest known tattoos, including two on his back, belong to the famous 5,300-year-old Ötzi the Iceman, found in the Italian Alps in 1991.
11. Some researchers theorize that Ötzi’s “ink” (actually soot) mapped out acupuncture treatments intended to ease a variety of ailments.
12. Most other ancient tattoos, many of which are on the back, appear to be symbolic and represent status or specific achievements, such as the elaborate animals of Siberia’s 2,500-year-old Pazyryk mummies.
13. Bum backs, and remedies for them, have been recorded in the earliest medical documents. An ancient Egyptian scroll known as the Edwin Smith Surgical Papyrus, named after the archaeologist who purchased it in 1862, explains how to diagnose a “pulled vertebra.” Sadly, only partial instructions for treatment are included.
14. The Egyptian scroll’s author may have been concerned with fixing a pulled vertebra, but by 2700 B.C. the Chinese were practicing intentional spinal manipulation.
15. Back-cracking was widespread throughout the ancient world — for better or worse. Hippocrates, for example, advocated strapping someone with an abnormally curved spine to a ladder, and then dropping the ladder (and patient) from a height. Don’t try this at home, kids.
16. The modern practice of chiropractic began when Daniel David Palmer, a self-styled “magnetic healer,” claimed to have restored the hearing of a deaf man by popping one of his vertebrae back into place in 1895.
17. Palmer believed that a back out of whack — “subluxation,” or vertebral misalignment — causes 95 percent of diseases.
18. A 2012 white paper by the Institute for Science in Medicine, however, declared, “There is no scientific evidence that chiropractic subluxations exist or that their purported ‘detection’ or ‘correction’ confers any health benefit.” Ouch.
19. It’s commonly believed the saying “watch your back” derives from military tactics, but the Oxford English Dictionary doesn’t, ahem, back up this claim: It notes “watch your back” appears first in the 1949 Western novel Milk River Range by Lee Floren.
20. Disagree? Hey, we’re just telling you what’s in the OED so, you know, get off our backs (a saying with roots in the 17th century).
The Cambrian Explosion
A series of dark, craggy pinnacles rises 80 meters above the grassy plains of Namibia. The peaks call to mind something ancient — the burial mounds of past civilizations or the tips of vast pyramids buried by the ages.
The stone formations are indeed monuments of a faded empire, but not from anything hewn by human hands. They are pinnacle reefs, built by cyanobacteria on the shallow sea floor 543 million years ago, during a time known as the Ediacaran period. The ancient world occupied by these reefs was truly alien. The oceans held so little oxygen that modern fish would quickly founder and die there. A gooey mat of microbes covered the sea floor at the time, and on that blanket lived a variety of enigmatic animals whose bodies resembled thin, quilted pillows. Most were stationary, but a few meandered blindly over the slime, grazing on the microbes. Animal life at this point was simple, and there were no predators. But an evolutionary storm would soon upend this quiet world.
Within several million years, this simple ecosystem would disappear, and give way to a world ruled by highly mobile animals that sported modern anatomical features. The Cambrian explosion, as it is called, produced arthropods with legs and compound eyes, worms with feathery gills and swift predators that could crush prey in tooth-rimmed jaws. Biologists have argued for decades over what ignited this evolutionary burst. Some think that a steep rise in oxygen sparked the change, whereas others say that it sprang from the development of some key evolutionary innovation, such as vision. The precise cause has remained elusive, in part because so little is known about the physical and chemical environment at that time.
But over the past several years, discoveries have begun to yield some tantalizing clues about the end of the Ediacaran. Evidence gathered from the Namibian reefs and other sites suggests that earlier theories were overly simplistic — that the Cambrian explosion actually emerged out of a complex interplay between small environmental changes that triggered major evolutionary developments.
Some scientists now think that a small, perhaps temporary, increase in oxygen suddenly crossed an ecological threshold, enabling the emergence of predators. The rise of carnivory would have set off an evolutionary arms race that led to the burst of complex body types and behaviours that fill the oceans today. “This is the most significant event in Earth evolution,” says Guy Narbonne, a palaeobiologist at Queen's University in Kingston, Canada. “The advent of pervasive carnivory, made possible by oxygenation, is likely to have been a major trigger.”
In the modern world, it's easy to forget that complex animals are relative newcomers to Earth. Since life first emerged more than 3 billion years ago, single-celled organisms have dominated the planet for most of its history. Thriving in environments that lacked oxygen, they relied on compounds such as carbon dioxide, sulfur-containing molecules or iron minerals that act as oxidizing agents to break down food. Much of Earth's microbial biosphere still survives on these anaerobic pathways.
Animals, however, depend on oxygen — a much richer way to make a living. The process of metabolizing food in the presence of oxygen releases much more energy than most anaerobic pathways. Animals rely on this potent, controlled combustion to drive such energy-hungry innovations as muscles, nervous systems and the tools of defence and carnivory — mineralized shells, exoskeletons and teeth.
Given the importance of oxygen for animals, researchers suspected that a sudden increase in the gas to near-modern levels in the ocean could have spurred the Cambrian explosion. To test that idea, they have studied ancient ocean sediments laid down during the Ediacaran and Cambrian periods, which together ran from about 635 million to 485 million years ago.
In Namibia, China and other spots around the world, researchers have collected rocks that were once ancient seabeds, and analysed the amounts of iron, molybdenum and other metals in them. The metals' solubility depends strongly on the amount of oxygen present, so the amount and type of those metals in ancient sedimentary rocks reflect how much oxygen was in the water long ago, when the sediments formed.
These proxies seemed to indicate that oxygen concentrations in the oceans rose in several steps, approaching today's sea-surface concentrations at the start of the Cambrian, around 541 million years ago — just before more-modern animals suddenly appeared and diversified. This supported the idea of oxygen as a key trigger for the evolutionary explosion.
But last year, a major study of ancient sea-floor sediments challenged that view. Erik Sperling, a palaeontologist at Stanford University in California, compiled a database of 4,700 iron measurements taken from rocks around the world, spanning the Ediacaran and Cambrian periods. He and his colleagues did not find a statistically significant increase in the proportion of oxic to anoxic water at the boundary between the Ediacaran and the Cambrian.
“Any oxygenation event must have been far, far smaller than what people normally considered,” concludes Sperling. Most people assume “that the oxygenation event essentially raised oxygen to essentially modern-day levels. And that probably wasn't the case”, he says.
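The comparison behind that conclusion can be sketched in a few lines of Python: tally how many samples read as oxic versus anoxic on each side of the Ediacaran/Cambrian boundary, then test whether the oxic fraction changes significantly. This is only an illustration of the statistical idea, not Sperling's published analysis, and the counts below are invented.

from math import sqrt

def two_proportion_z(oxic_before, n_before, oxic_after, n_after):
    """Two-proportion z-test for a change in the fraction of oxic samples."""
    p1, p2 = oxic_before / n_before, oxic_after / n_after
    pooled = (oxic_before + oxic_after) / (n_before + n_after)
    se = sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
    return (p2 - p1) / se  # |z| > 1.96 would be significant at the 5% level

# Hypothetical tallies of iron-proxy readings before and after the boundary.
print(round(two_proportion_z(oxic_before=300, n_before=1100,
                             oxic_after=340, n_after=1200), 2))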
The latest results come at a time when scientists are already reconsidering what was happening to ocean oxygen levels during this crucial period. Donald Canfield, a geobiologist at the University of Southern Denmark in Odense, doubts that oxygen was a limiting factor for early animals. In a study published last month, he and his colleagues suggest that oxygen levels were already high enough to support simple animals, such as sponges, hundreds of millions of years before they actually appeared. Cambrian animals would have needed more oxygen than early sponges, concedes Canfield. “But you don't need an increase in oxygen across the Ediacaran/Cambrian boundary,” he says; oxygen could already have been abundant enough “for a long, long time before”.
“The role of oxygen in the origins of animals has been heavily debated,” says Timothy Lyons, a geobiologist at the University of California, Riverside. “In fact, it's never been more debated than it is now.” Lyons sees a role for oxygen in evolutionary changes, but his own work with molybdenum and other trace metals suggests that the increases in oxygen just before the Cambrian were mostly temporary peaks that lasted a few million years and gradually stepped upward (see 'When life sped up').
Sperling has looked for insights into Ediacaran oceans by studying oxygen-depleted regions in modern seas around the globe. He suggests that biologists have conventionally taken the wrong approach to thinking about how oxygen shaped animal evolution. By pooling and analysing previously published data with some of his own, he found that tiny worms survive in areas of the sea floor where oxygen levels are incredibly low — less than 0.5% of average global sea-surface concentrations. Food webs in these oxygen-poor environments are simple, and the animals feed directly on microbes. In places where sea-floor oxygen levels are a bit higher — about 0.5–3% of concentrations at the sea surface — animals are more abundant but their food webs remain limited: the animals still feed on microbes rather than on each other. But somewhere between 3% and 10% of sea-surface oxygen concentrations, predators emerge and start to consume other animals.
The implications of this finding for evolution are profound, Sperling says. The modest oxygen rise that he thinks may have occurred just before the Cambrian would have been enough to trigger a big change. “If oxygen levels were 3% and they rose past that 10% threshold, that would have had a huge influence on early animal evolution,” he says. “There's just so much in animal ecology, lifestyle and body size that seems to change so dramatically through those levels.”
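Those thresholds can be summarised as a toy classifier; the cut-offs and labels below are paraphrased from Sperling's description above, not taken from a published scheme.

def seafloor_ecology(o2_percent_of_surface):
    """Map sea-floor oxygen (as % of average sea-surface levels) to an ecological regime."""
    if o2_percent_of_surface < 0.5:
        return "tiny worms only; simple food webs grazing on microbes"
    if o2_percent_of_surface < 3:
        return "more abundant animals, but still feeding on microbes"
    if o2_percent_of_surface < 10:
        return "transition zone in which carnivory becomes possible"
    return "predators established; complex food webs"

for level in (0.3, 2, 5, 15):
    print(level, "->", seafloor_ecology(level))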
The gradual emergence of predators, driven by a small rise in oxygen, would have meant trouble for Ediacaran animals that lacked obvious defences. “You're looking at soft-bodied, mostly immobile forms that probably lived their lives by absorbing nutrients through their skin,” says Narbonne.
Studies of those ancient Namibian reefs suggest that animals were indeed starting to fall prey to predators by the end of the Ediacaran. When palaeobiologist Rachel Wood from the University of Edinburgh, UK, examined the rock formations, she found spots where a primitive animal called Cloudina had taken over parts of the microbial reef. Rather than spreading out over the ocean floor, these cone-shaped creatures lived in crowded colonies, which hid their vulnerable body parts from predators — an ecological dynamic that occurs in modern reefs.
Cloudina were among the earliest animals known to have grown hard, mineralized exoskeletons. But they were not alone. Two other types of animal in those reefs also had mineralized parts, which suggests that multiple, unrelated groups evolved skeletal shells around the same time. “Skeletons are quite costly to produce,” says Wood. “It's very difficult to come up with a reason other than defence for why an animal should bother to create a skeleton for itself.” Wood thinks that the skeletons provided protection against newly evolved predators. Some Cloudina fossils from that period even have holes in their sides, which scientists interpret as the marks of attackers that bore into the creatures' shells.
Palaeontologists have found other hints that animals had begun to eat each other by the late Ediacaran. In Namibia, Australia and Newfoundland in Canada, some sea-floor sediments have preserved an unusual type of tunnel made by an unknown, wormlike creature. Called Treptichnus burrows, these warrens branch again and again, as if a predator just below the microbial mat had systematically probed for prey animals on top. The Treptichnus burrows resemble those of modern priapulid, or 'penis', worms — voracious predators that hunt in a remarkably similar way on modern sea floors.
The rise of predation at this time put large, sedentary Ediacaran animals at a big disadvantage. “Sitting around doing nothing becomes a liability,” says Narbonne.
The moment of transition from the Ediacaran to the Cambrian world is recorded in a series of stone outcrops rounded by ancient glaciers on the south edge of Newfoundland. Below that boundary are impressions left by quilted Ediacaran animals, the last such fossils recorded on Earth. And just 1.2 meters above them, the grey siltstone holds trails of scratch marks, thought to have been made by animals with exoskeletons, walking on jointed legs — the earliest evidence of arthropods in Earth's history.
No one knows how much time passed in that intervening rock — maybe as little as a few centuries or millennia, says Narbonne. But during that short span, the soft-bodied, stationary Ediacaran fauna suddenly disappeared, driven to extinction by predators, he suggests.
Narbonne has closely studied the few fauna that survived this transition, and his findings suggest that some of them had acquired new, more complex types of behaviour. The best clues come from traces left by peaceful, wormlike animals that grazed on the microbial mat. Early trails from about 555 million years ago meander and criss-cross haphazardly, indicating a poorly developed nervous system that was unable to sense or react to other grazers nearby — let alone predators. But at the end of the Ediacaran and into the early Cambrian, the trails become more sophisticated: creatures carved tighter turns and ploughed closely spaced, parallel lines through the sediments. In some cases, a curvy feeding trail abruptly transitions into a straight line, which Narbonne interprets as potential evidence of the grazer evading a predator.
This change in grazing style may have contributed to the fragmentation of the microbial mat, which began early in the Cambrian. And the transformation of the sea floor, says Narbonne, “may have been the most profound change in the history of life on Earth”. The mat had previously covered the seabed like a coating of plastic wrap, leaving the underlying sediments largely anoxic and off limits to animals. Because animals could not burrow deeply in the Ediacaran, he says, “the mat meant that life was two-dimensional”. When grazing capabilities improved, animals penetrated the mat and made the sediments habitable for the first time, which opened up a 3D world.
Tracks from the early Cambrian show that animals started to burrow several centimeters into the sediments beneath the mat, which provided access to previously untapped nutrients — as well as a refuge from predators. It's also possible that animals went in the opposite direction. Sperling says that the need to avoid predators (and pursue prey) may have driven animals into the water column above the seabed, where enhanced oxygen levels enabled them to expend energy through swimming.
The emerging evidence about oxygen thresholds and ecology could also shed light on another major evolutionary question: when did animals originate? The first undisputed fossils of animals appear only 580 million years ago, but genetic evidence indicates that basic animal groups originated as far back as 700 million to 800 million years ago. According to Lyons, the solution may be that oxygen levels rose to perhaps 2% or 3% of modern levels around 800 million years ago. These concentrations could have sustained small, simple animals, just as they do today in the ocean's oxygen-poor zones. But animals with large bodies could not have evolved until oxygen levels climbed higher in the Ediacaran.
Understanding how oxygen influenced the appearance of complex animals will require scientists to tease more-subtle clues out of the rocks. “We've been challenging people working on fossils to tie their fossils more closely to our oxygen proxies,” says Lyons. It will mean deciphering what oxygen levels were in different ancient environments, and connecting those values with the kinds of traits exhibited by the animal fossils found in the same locations.
This past autumn, Wood visited Siberia with that goal in mind. She collected fossils of Cloudina and another skeletonized animal, Suvorovella, from the waning days of the Ediacaran. Those sites gave her the chance to gather fossils from many different depths in the ancient ocean, from the more oxygen-rich surface waters to deeper zones. Wood plans to look for patterns in where animals were growing tougher skeletons, whether they were under attack by predators and whether any of this had a clear link with oxygen levels, she says. “Only then can you pick out the story.”
How Evolution Explains Optical Illusions
As we go about our daily lives, we tend to assume that our perceptions — sights, sounds, textures, tastes — are an accurate portrayal of the real world. Sure, when we stop and think about it — or when we find ourselves fooled by a perceptual illusion — we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it wasn’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.
Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.
Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”
On the other side are quantum physicists, marveling at the strange fact that quantum systems don’t seem to be definite objects localized in space until we come along to observe them. Experiment after experiment has shown — defying common sense — that if we assume that the particles that make up ordinary objects have an objective, observer-independent existence, we get the wrong answers. The central lesson of quantum physics is clear: There are no public objects sitting out there in some preexisting space. As the physicist John Wheeler put it, “Useful as it is under ordinary circumstances to say that the world exists ‘out there’ independent of us, that view can no longer be upheld.”
So while neuroscientists struggle to understand how there can be such a thing as a first-person reality, quantum physicists have to grapple with the mystery of how there can be anything but a first-person reality. In short, all roads lead back to the observer. And that’s where you can find Hoffman — straddling the boundaries, attempting a mathematical model of the observer, trying to get at the reality behind the illusion. Quanta Magazine caught up with him to find out more.
QUANTA MAGAZINE: People often use Darwinian evolution as an argument that our perceptions accurately reflect reality. They say, “Obviously we must be latching onto reality in some way because otherwise we would have been wiped out a long time ago. If I think I’m seeing a palm tree but it’s really a tiger, I’m in trouble.”
DONALD HOFFMAN: Right. The classic argument is that those of our ancestors who saw more accurately had a competitive advantage over those who saw less accurately and thus were more likely to pass on their genes that coded for those more accurate perceptions, so after thousands of generations we can be quite confident that we’re the offspring of those who saw accurately, and so we see accurately. That sounds very plausible. But I think it is utterly false. It misunderstands the fundamental fact about evolution, which is that it’s about fitness functions — mathematical functions that describe how well a given strategy achieves the goals of survival and reproduction. The mathematical physicist Chetan Prakash proved a theorem that I devised that says: According to evolution by natural selection, an organism that sees reality as it is will never be more fit than an organism of equal complexity that sees none of reality but is just tuned to fitness. Never.
You’ve done computer simulations to show this. Can you give an example?
Suppose in reality there’s a resource, like water, and you can quantify how much of it there is in an objective order — very little water, medium amount of water, a lot of water. Now suppose your fitness function is linear, so a little water gives you a little fitness, medium water gives you medium fitness, and lots of water gives you lots of fitness — in that case, the organism that sees the truth about the water in the world can win, but only because the fitness function happens to align with the true structure in reality. Generically, in the real world, that will never be the case. Something much more natural is a bell curve — say, too little water you die of thirst, but too much water you drown, and only somewhere in between is good for survival. Now the fitness function doesn’t match the structure in the real world. And that’s enough to send truth to extinction. For example, an organism tuned to fitness might see small and large quantities of some resource as, say, red, to indicate low fitness, whereas they might see intermediate quantities as green, to indicate high fitness. Its perceptions will be tuned to fitness, but not to truth. It won’t see any distinction between small and large — it only sees red — even though such a distinction exists in reality.
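Hoffman's water example is easy to simulate. The sketch below is my own toy version, not the model he and Prakash analysed: one forager perceives the true water level and always picks the wetter of two patches, while the other perceives only payoff, picking whichever patch lies closer to the bell curve's peak. The parameters are arbitrary.

import math, random

def fitness(water, optimum=0.5, width=0.15):
    """Bell-shaped payoff: too little or too much water is bad, the middle is best."""
    return math.exp(-((water - optimum) ** 2) / (2 * width ** 2))

def compare_strategies(trials=100_000, seed=1):
    random.seed(seed)
    truth = tuned = 0.0
    for _ in range(trials):
        a, b = random.random(), random.random()   # water levels of two patches
        truth += fitness(max(a, b))               # "sees" quantity: more water looks better
        tuned += fitness(a if fitness(a) >= fitness(b) else b)  # "sees" only payoff (green vs red)
    return truth / trials, tuned / trials

print(compare_strategies())  # the fitness-tuned forager never does worse, and here does better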
But how can seeing a false reality be beneficial to an organism’s survival?
There’s a metaphor that’s only been available to us in the past 30 or 40 years, and that’s the desktop interface. Suppose there’s a blue rectangular icon on the lower right corner of your computer’s desktop — does that mean that the file itself is blue and rectangular and lives in the lower right corner of your computer? Of course not. But those are the only things that can be asserted about anything on the desktop — it has color, position and shape. Those are the only categories available to you, and yet none of them are true about the file itself or anything in the computer. They couldn’t possibly be true. That’s an interesting thing. You could not form a true description of the innards of the computer if your entire view of reality was confined to the desktop. And yet the desktop is useful. That blue rectangular icon guides my behavior, and it hides a complex reality that I don’t need to know. That’s the key idea. Evolution has shaped us with perceptions that allow us to survive. They guide adaptive behaviors. But part of that involves hiding from us the stuff we don’t need to know. And that’s pretty much all of reality, whatever reality might be. If you had to spend all that time figuring it out, the tiger would eat you.
So everything we see is one big illusion?
We’ve been shaped to have perceptions that keep us alive, so we have to take them seriously. If I see something that I think of as a snake, I don’t pick it up. If I see a train, I don’t step in front of it. I’ve evolved these symbols to keep me alive, so I have to take them seriously. But it’s a logical flaw to think that if we have to take it seriously, we also have to take it literally.
If snakes aren’t snakes and trains aren’t trains, what are they?
Snakes and trains, like the particles of physics, have no objective, observer-independent features. The snake I see is a description created by my sensory system to inform me of the fitness consequences of my actions. Evolution shapes acceptable solutions, not optimal ones. A snake is an acceptable solution to the problem of telling me how to act in a situation. My snakes and trains are my mental representations; your snakes and trains are your mental representations.
How did you first become interested in these ideas?
As a teenager, I was very interested in the question “Are we machines?” My reading of the science suggested that we are. But my dad was a minister, and at church they were saying we’re not. So I decided I needed to figure it out for myself. It’s sort of an important personal question — if I’m a machine, I would like to find that out! And if I’m not, I’d like to know, what is that special magic beyond the machine? So eventually in the 1980s I went to the artificial intelligence lab at MIT and worked on machine perception. The field of vision research was enjoying a newfound success in developing mathematical models for specific visual abilities. I noticed that they seemed to share a common mathematical structure, so I thought it might be possible to write down a formal structure for observation that encompassed all of them, perhaps all possible modes of observation. I was inspired in part by Alan Turing. When he invented the Turing machine, he was trying to come up with a notion of computation, and instead of putting bells and whistles on it, he said, Let’s get the simplest, most pared down mathematical description that could possibly work. And that simple formalism is the foundation for the science of computation. So I wondered, could I provide a similarly simple formal foundation for the science of observation?
A mathematical model of consciousness.
That’s right. My intuition was, there are conscious experiences. I have pains, tastes, smells, all my sensory experiences, moods, emotions and so forth. So I’m just going to say: One part of this consciousness structure is a set of all possible experiences. When I’m having an experience, based on that experience I may want to change what I’m doing. So I need to have a collection of possible actions I can take and a decision strategy that, given my experiences, allows me to change how I’m acting. That’s the basic idea of the whole thing. I have a space X of experiences, a space G of actions, and an algorithm D that lets me choose a new action given my experiences. Then I posited a W for a world, which is also a probability space. Somehow the world affects my perceptions, so there’s a perception map P from the world to my experiences, and when I act, I change the world, so there’s a map A from the space of actions to the world. That’s the entire structure. Six elements. The claim is: This is the structure of consciousness. I put that out there so people have something to shoot at.
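Read as a data structure, that six-part description might look something like the following sketch. This is my schematic rendering, not Hoffman's formal definition, and the probabilistic kernels he mentions are reduced to plain functions for brevity.

from dataclasses import dataclass
from typing import Callable, FrozenSet

@dataclass(frozen=True)
class ConsciousAgent:
    X: FrozenSet[str]          # possible experiences
    G: FrozenSet[str]          # possible actions
    W: FrozenSet[str]          # states of a "world" (which could itself be another agent)
    P: Callable[[str], str]    # perception map: world state -> experience
    D: Callable[[str], str]    # decision rule: experience -> action
    A: Callable[[str], str]    # action map: action -> new world state

    def step(self, world_state: str) -> str:
        """One perceive-decide-act cycle: W -> X -> G -> W."""
        return self.A(self.D(self.P(world_state)))

Wiring two such agents together, so that each plays the role of the other's W, gives a picture of the "circuit of conscious agents" he describes next.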
But if there’s a W, are you saying there is an external world?
Here’s the striking thing about that. I can pull the W out of the model and stick a conscious agent in its place and get a circuit of conscious agents. In fact, you can have whole networks of arbitrary complexity. And that’s the world.
The world is just other conscious agents?
I call it conscious realism: Objective reality is just conscious agents, just points of view. Interestingly, I can take two conscious agents and have them interact, and the mathematical structure of that interaction also satisfies the definition of a conscious agent. This mathematics is telling me something. I can take two minds, and they can generate a new, unified single mind. Here’s a concrete example. We have two hemispheres in our brain. But when you do a split-brain operation, a complete transection of the corpus callosum, you get clear evidence of two separate consciousnesses. Before that slicing happened, it seemed there was a single unified consciousness. So it’s not implausible that there is a single conscious agent. And yet it’s also the case that there are two conscious agents there, and you can see that when they’re split. I didn’t expect that, the mathematics forced me to recognize this. It suggests that I can take separate observers, put them together and create new observers, and keep doing this ad infinitum. It’s conscious agents all the way down.
If it’s conscious agents all the way down, all first-person points of view, what happens to science? Science has always been a third-person description of the world.
The idea that what we’re doing is measuring publicly accessible objects, the idea that objectivity results from the fact that you and I can measure the same object in the exact same situation and get the same results — it’s very clear from quantum mechanics that that idea has to go. Physics tells us that there are no public physical objects. So what’s going on? Here’s how I think about it. I can talk to you about my headache and believe that I am communicating effectively with you, because you’ve had your own headaches. The same thing is true of apples and the moon and the sun and the universe. Just like you have your own headache, you have your own moon. But I assume it’s relevantly similar to mine. That’s an assumption that could be false, but that’s the source of my communication, and that’s the best we can do in terms of public physical objects and objective science.
It doesn’t seem like many people in neuroscience or philosophy of mind are thinking about fundamental physics. Do you think that’s been a stumbling block for those trying to understand consciousness?
I think it has been. Not only are they ignoring the progress in fundamental physics, they are often explicit about it. They’ll say openly that quantum physics is not relevant to the aspects of brain function that are causally involved in consciousness. They are certain that it’s got to be classical properties of neural activity, which exist independent of any observers — spiking rates, connection strengths at synapses, perhaps dynamical properties as well. These are all very classical notions under Newtonian physics, where time is absolute and objects exist absolutely. And then [neuroscientists] are mystified as to why they don’t make progress. They don’t avail themselves of the incredible insights and breakthroughs that physics has made. Those insights are out there for us to use, and yet my field says, “We’ll stick with Newton, thank you. We’ll stay 300 years behind in our physics.”
I suspect they’re reacting to things like Roger Penrose and Stuart Hameroff’s model, where you still have a physical brain, it’s still sitting in space, but supposedly it’s performing some quantum feat. In contrast, you’re saying, “Look, quantum mechanics is telling us that we have to question the very notions of ‘physical things’ sitting in ‘space.’”
I think that’s absolutely true. The neuroscientists are saying, “We don’t need to invoke those kind of quantum processes, we don’t need quantum wave functions collapsing inside neurons, we can just use classical physics to describe processes in the brain.” I’m emphasizing the larger lesson of quantum mechanics: Neurons, brains, space … these are just symbols we use, they’re not real. It’s not that there’s a classical brain that does some quantum magic. It’s that there’s no brain! Quantum mechanics says that classical objects — including brains — don’t exist. So this is a far more radical claim about the nature of reality and does not involve the brain pulling off some tricky quantum computation. So even Penrose hasn’t taken it far enough. But most of us, you know, we’re born realists. We’re born physicalists. This is a really, really hard one to let go of.
To return to the question you started with as a teenager, are we machines?
The formal theory of conscious agents I’ve been developing is computationally universal — in that sense, it’s a machine theory. And it’s because the theory is computationally universal that I can get all of cognitive science and neural networks back out of it. Nevertheless, for now I don’t think we are machines — in part because I distinguish between the mathematical representation and the thing being represented. As a conscious realist, I am postulating conscious experiences as ontological primitives, the most basic ingredients of the world. I’m claiming that experiences are the real coin of the realm. The experiences of everyday life — my real feeling of a headache, my real taste of chocolate — that really is the ultimate nature of reality.
Natural Selection Before Darwin
In 2012, network scientist and data theorist Samuel Arbesman published a disturbing thesis: What we think of as established knowledge decays over time. According to his book “The Half-Life of Facts,” certain kinds of propositions that may seem bulletproof today will be forgotten by next Tuesday; one’s reality can end up out of date. Take, for example, the story of Popeye and his spinach.
Popeye loved his leafy greens and used them to obtain his super strength, Arbesman’s book explained, because the cartoon’s creators knew that spinach has a lot of iron. Indeed, the character would be a major evangelist for spinach in the 1930s, and it’s said he helped increase the green’s consumption in the U.S. by one-third. But this “fact” about the iron content of spinach was already on the verge of being obsolete, Arbesman said: In 1937, scientists realized that the original measurement of the iron in 100 grams of spinach — 35 milligrams — was off by a factor of 10. That’s because a German chemist named Erich von Wolff had misplaced a decimal point in his notebook back in 1870, and the goof persisted in the literature for more than half a century.
By the time nutritionists caught up with this mistake, the damage had been done. The spinach-iron myth stuck around in spite of new and better knowledge, wrote Arbesman, because “it’s a lot easier to spread the first thing you find, or the fact that sounds correct, than to delve deeply into the literature in search of the correct fact.”
Arbesman was not the first to tell the cautionary tale of the missing decimal point. The same parable of sloppy science, and its dire implications, appeared in a book called “Follies and Fallacies in Medicine,” a classic work of evidence-based skepticism first published in 1989. It also appeared in a volume of “Magnificent Mistakes in Mathematics,” a guide to “The Practice of Statistics in the Life Sciences” and an article in an academic journal called “The Consequence of Errors.” And that’s just to name a few.
All these tellings and retellings miss one important fact: The story of the spinach myth is itself apocryphal. It’s true that spinach isn’t really all that useful as a source of iron, and it’s true that people used to think it was. But all the rest is false: No one moved a decimal point in 1870; no mistake in data entry spurred Popeye to devote himself to spinach; no misguided rules of eating were implanted by the sailor strip. The story of the decimal point manages to recapitulate the very error that it means to highlight: a fake fact, but repeated so often (and with such sanctimony) that it takes on the sheen of truth.
In that sense, the story of the lost decimal point represents a special type of viral anecdote or urban legend, one that finds its willing hosts among the doubters, not the credulous. It’s a rumor passed around by skeptics — a myth about myth-busting. Like other Russian dolls of distorted facts, it shows us that, sometimes, the harder that we try to be clear-headed, the deeper we are drawn into the fog.
No one knows this lesson better than Mike Sutton. He must be the world’s leading meta-skeptic: a 56-year-old master sleuth who first identified the myth about the spinach myth in 2010 and has since been working to debunk what he sees as other false debunkings. Sutton, a criminology professor at Nottingham Trent University, started his career of doubting very young: He remembers being told when he was still a boy that all his favorite rock stars on BBC’s “Top of the Pops” were lip-synching and that some weren’t even playing their guitars. Soon he began to wonder at the depths of this deception. Could the members of Led Zeppelin be in on this conspiracy? Was Jimmy Page a lie? Since then, Sutton told me via email, “I have always been concerned with establishing the veracity of what is presented as true, and what is something else.”
As a law student, Sutton was drawn to stories like that of Popeye and the inflated iron count in spinach, which to him demonstrated both the perils of “accepted knowledge” and the importance of maintaining data quality. He was so enamored of the story, in fact, that he meant to put it in an academic paper. But in digging for the story’s source, he began to wonder if it was true. “It drew me in like a problem-solving ferret to a rabbit hole,” he said.
Soon he’d gone through every single Popeye strip ever drawn by its creator, E.C. Segar, and found that certain aspects of the classic story were clearly false. Popeye first ate spinach for his super power in 1931, Sutton found, and in the summer of 1932 the strip offered this iron-free explanation: “Spinach is full of vitamin ‘A,’” Popeye said, “an’ tha’s what makes hoomans strong an’ helty.” Sutton also gathered data on spinach production from the U.S. Department of Agriculture and learned that it was on the rise before Segar’s sailor-man ever started eating it.
What about the fabled decimal point? According to Sutton’s research, a German chemist did overestimate the quantity of iron in spinach, but the mistake arose from faulty methods, not from poor transcription of the data. By the 1890s, a different German researcher had concluded that the earlier estimate was many times too high. Subsequent analyses arrived at something closer to the correct, still substantial value — now estimated to be 2.71 milligrams of iron per 100 grams of raw spinach, according to the USDA. By chance, the new figure was indeed about one-tenth of the original, but the difference stemmed not from misplaced punctuation but from the switch to better methodology. In any case, it wasn’t long before Columbia University analytical chemist Henry Clapp Sherman laid out the problems with the original result. By the 1930s, Sutton argues, researchers knew the true amount of iron in spinach, but they also understood that not all of it could be absorbed by the human body.
The decimal-point story only came about much later. According to Sutton’s research, it seems to have been invented by the nutritionist and self-styled myth-buster Arnold Bender, who floated the idea with some uncertainty in a 1972 lecture. Then in 1981, a doctor named Terence Hamblin wrote up a version of the story without citation for a whimsical, holiday-time column in the British Medical Journal. The Hamblin article, unscholarly and unsourced, would become the ultimate authority for all the citations that followed. (Hamblin graciously acknowledged his mistake after Sutton published his research, as did Arbesman.)
In 2014, a Norwegian anthropologist named Ole Bjorn Rekdal published an examination of how the decimal-point myth had propagated through the academic literature. He found that bad citations were the vector. Instead of looking for its source, those who told the story merely plagiarized a solid-sounding reference: “(Hamblin, BMJ, 1981).” Or they cited someone in between — someone who, in turn, had cited Hamblin. This loose behavior, Rekdal wrote, made the transposed decimal point into something like an “academic urban legend,” its nested sourcing more or less equivalent to the familiar “friend of a friend” of schoolyard mythology.
Emerging from the rabbit hole, Sutton began to puzzle over what he’d found. This wasn’t just any sort of myth, he decided, but something he would term a “supermyth”: A story concocted by respected scholars and then credulously disseminated in order to promote skeptical thinking and “to help us overcome our tendency towards credulous bias.” The convolution of this scenario inspired him to look for more examples. “I’m rather a sucker for such complexity,” he told me.
Complicated and ironic tales of poor citation “help draw attention to a deadly serious, but somewhat boring topic,” Rekdal told me. They’re grabby, and they’re entertaining. But I suspect they’re more than merely that: Perhaps the ironies themselves can help explain the propagation of the errors.
It seems plausible to me, at least, that the tellers of these tales are getting blinkered by their own feelings of superiority — that the mere act of busting myths makes them more susceptible to spreading them. It lowers their defenses, in the same way that the act of remembering sometimes seems to make us more likely to forget. Could it be that the more credulous we become, the more convinced we are of our own debunker bona fides? Does skepticism self-destruct?
Sutton told me over email that he, too, worries that contrarianism can run amok, citing conspiracy theorists and anti-vaxxers as examples of those who “refuse to accept the weight of argument” and suffer the result. He also noted the “paradox” by which a skeptic’s obsessive devotion to his research — and to proving others wrong — can “take a great personal toll.” A person can get lost, he suggested, in the subterranean “Wonderland of myths and fallacies.”
In the last few years, Sutton has himself embarked on another journey to the depths, this one far more treacherous than the ones he’s made before. The stakes were low when he was hunting something trivial, the supermyth of Popeye’s spinach; now Sutton has been digging in more sacred ground: the legacy of the great scientific hero and champion of the skeptics, Charles Darwin. In 2014, after spending a year working 18-hour days, seven days a week, Sutton published his most extensive work to date, a 600-page broadside on a cherished story of discovery. He called it “Nullius in Verba: Darwin’s Greatest Secret.”
Sutton’s allegations are explosive. He claims to have found irrefutable proof that neither Darwin nor Alfred Russel Wallace deserves the credit for the theory of natural selection, but rather that they stole the idea — consciously or not — from a wealthy Scotsman and forest-management expert named Patrick Matthew. “I think both Darwin and Wallace were at the very least sloppy,” he told me. Elsewhere he’s been somewhat less diplomatic: “In my opinion Charles Darwin committed the greatest known science fraud in history by plagiarizing Matthew’s” hypothesis, he told the Telegraph. “Let’s face the painful facts,” Sutton also wrote. “Darwin was a liar. Plain and simple.”
Some context: The Patrick Matthew story isn’t new. Matthew produced a volume in the early 1830s, “On Naval Timber and Arboriculture,” that indeed contained an outline of the famous theory in a slim appendix. In a contemporary review, the noted naturalist John Loudon seemed ill-prepared to accept the forward-thinking theory. He called it a “puzzling” account of the “origin of species and varieties” that may or may not be original. In 1860, several months after publication of “On the Origin of Species,” Matthew would surface to complain that Darwin — now quite famous for what was described as a discovery born of “20 years’ investigation and reflection” — had stolen his ideas.
Darwin, in reply, conceded that “Mr. Matthew has anticipated by many years the explanation which I have offered of the origin of species, under the name of natural selection.” But then he added, “I think that no one will feel surprised that neither I, nor apparently any other naturalist, had heard of Mr. Matthew’s views.”
That statement, suggesting that Matthew’s theory was ignored — and hinting that its importance may not even have been quite understood by Matthew himself — has gone unchallenged, Sutton says. It has, in fact, become a supermyth, cited to explain that even big ideas amount to nothing when they aren’t framed by proper genius.
Sutton thinks that story has it wrong, that natural selection wasn’t an idea in need of a “great man” to propagate it. After all his months of research, Sutton says he found clear evidence that Matthew’s work did not go unread. No fewer than seven naturalists cited the book, including three in what Sutton calls Darwin’s “inner circle.” He also claims to have discovered particular turns of phrase — “Matthewisms” — that recur suspiciously in Darwin’s writing.
In light of these discoveries, Sutton considers the case all but closed. He’s challenged Darwin scholars to debates, picked fights with famous skeptics such as Michael Shermer and Richard Dawkins, and even written letters to the Royal Society, demanding that Matthew be given priority over Darwin.
But if his paper on the spinach myth convinced everyone who read it — even winning an apology from Terence Hamblin, one of the myth’s major sources — the work on Darwin barely registered. Many scholars ignored it altogether. A few, such as Michael Weale of King’s College, simply found it unconvincing. Weale, who has written his own book on Patrick Matthew, argued that Sutton’s evidence was somewhat weak and circumstantial. “There is no ‘smoking gun’ here,” he wrote, pointing out that at one point even Matthew admitted that he’d done little to spread his theory of natural selection. “For more than thirty years,” Matthew wrote in 1862, he “never, either by the press or in private conversation, alluded to the original ideas … knowing that the age was not suited for such.”
When Sutton is faced with the implication that he’s taken his debunking too far — that he’s tipped from skepticism to crankery — he lashes out. “The findings are so enormous that people refuse to take them in,” he told me via email. “The enormity of what has, in actual fact, been newly discovered is too great for people to comprehend. Too big to face. Too great to care to come to terms with — so surely it can’t be true. Only, it’s not a dream. It is true.” In effect, he suggested, he’s been confronted with a classic version of the “Semmelweis reflex,” whereby dangerous, new ideas are rejected out of hand.
Could Sutton be a modern-day version of Ignaz Semmelweis, the Hungarian physician who noticed in the 1840s that doctors were themselves the source of childbed fever in his hospital’s obstetric ward? Semmelweis had reduced disease mortality by a factor of 10 — a fully displaced decimal point — simply by having doctors wash their hands in a solution of chlorinated lime. But according to the famous tale, his innovations were too radical for the time. Ignored and ridiculed for his outlandish thinking, Semmelweis eventually went insane and died in an asylum. Arbesman, author of “The Half-Life of Facts,” has written about the moral of this story too. “Even if we are confronted with facts that should cause us to update our understanding of the way the world works,” he wrote, “we often neglect to do so.”
Of course, there’s always one more twist: Sutton doesn’t believe this story about Semmelweis. That’s another myth, he says — another tall tale, favored by academics, that ironically demonstrates the very point that it pretends to make. Citing the work of Sherwin Nuland, Sutton argues that Semmelweis didn’t go mad from being ostracized, and further that other physicians had already recommended hand-washing in chlorinated lime. The myth of Semmelweis, says Sutton, may have originated in the late 19th century, when a “massive nationally funded Hungarian public relations machine” placed biased articles into the scientific literature. Semmelweis scholar Kay Codell Carter concurs, at least insofar as Semmelweis was not, in fact, ignored by the medical establishment: From 1863 through 1883, he was cited dozens of times, Carter writes, “more frequently than almost anyone else.”
Yet despite all this complicating evidence, scholars still tell the simple version of the Semmelweis story and use it as an example of how other people — never them, of course — tend to reject information that conflicts with their beliefs. That is to say, the scholars reject conflicting information about Semmelweis, evincing the Semmelweis reflex, even as they tell the story of that reflex. It’s a classic supermyth!
And so it goes, a whirligig of irony spinning around and around, down into the depths. Is there any way to escape this endless, maddening recursion? How might a skeptic keep his sanity? I had to know what Sutton thought. “I think the solution is to stay out of rabbit holes,” he told me. Then he added, “Which is not particularly helpful advice.”
CRISPR and Development Genes
To help his readers fathom evolution, Charles Darwin asked them to consider their own hands.
“What can be more curious,” he asked, “than that the hand of a man, formed for grasping, that of a mole for digging, the leg of the horse, the paddle of the porpoise, and the wing of the bat, should all be constructed on the same pattern, and should include similar bones, in the same relative positions?”
Darwin had a straightforward explanation: People, moles, horses, porpoises and bats all shared a common ancestor that grew limbs with digits. Its descendants evolved different kinds of limbs adapted for different tasks. But they never lost the anatomical similarities that revealed their kinship.
As a Victorian naturalist, Darwin was limited in the similarities he could find. The most sophisticated equipment he could use for the task was a crude microscope. Today, scientists are carrying on his work with new biological tools. They are uncovering deep similarities that have been overlooked until now.
On Wednesday, a team of researchers at the University of Chicago reported that our hands share a deep evolutionary connection not only to bat wings or horse hooves, but to fish fins.
The unexpected discovery will help researchers understand how our own ancestors left the water, transforming fins into limbs that they could use to move around on land.
To the naked eye, there’s not much similarity between a human hand and the fin of, say, a goldfish. A human hand is at the end of an arm. It has bones that develop from cartilage and contain blood vessels. This type of tissue is called endochondral bone. A goldfish grows just a tiny cluster of endochondral bones at the base of its fin. The rest of the fin is taken up by thin rays, which are made of an entirely different tissue called dermal bone. Dermal bone doesn’t start out as cartilage and doesn’t contain blood vessels.
These differences have long puzzled scientists. The fossil record shows that we share a common aquatic ancestor with ray-finned fish that lived some 430 million years ago. Four-limbed creatures with spines — known as tetrapods — had evolved by 360 million years ago and went on to colonize dry land.
For over two decades, Neil H. Shubin, an evolutionary biologist, has investigated this transition in two radically different ways.
On the one hand, he has dug up fossils that date back to the transition from sea to land. His discoveries include a 370-million-year-old fish called Tiktaalik, which had limb-like fins. It developed endochondral bones corresponding to those in our arms, beginning at the shoulder with the humerus, then the radius, ulna and wrist bones. But it lacked fingers, and still had a short fringe of fin rays.
When he isn’t digging for fossils, Dr. Shubin runs a lab at the University of Chicago, where he and his colleagues compare how tetrapods — mice, for example — and fish develop as embryos. Their embryos start out looking very similar, consisting of heads and tails and not much in between. Two pairs of buds then develop on their flanks. In fish, the buds grow into fins. In tetrapods, they become limbs.
In recent decades, researchers have uncovered some of the genes that govern this development. In 1996, a team of French researchers studying mice discovered genes that are essential for the development of their legs.
When the scientists shut down two genes, called Hoxa-13 and Hoxd-13, the mice developed normal long bones in their legs. But their wrist and ankle bones failed to appear, and they didn’t grow any digits. This discovery suggested that Hoxa-13 and Hoxd-13 genes tell certain cells in the tetrapod limb bud that they will develop into hands and feet.
Dr. Shubin knew that fish have genes related to Hoxa-13 and Hoxd-13. He wondered what those genes were doing, if anything, in developing fins. An experiment on fish might give him and his colleagues a clue. “But we didn’t have the means to do it until technology caught up with our aspirations,” Dr. Shubin said.
In the 1990s, no one yet knew how to shut down genes in fish embryos. But that changed in recent years, thanks to a new gene-editing technology called Crispr. Scientists can use it to readily alter genes in virtually any species.
In 2013, a postdoctoral researcher in Dr. Shubin’s lab, Tetsuya Nakamura, started using Crispr to manipulate fish embryos. He chose zebrafish to study, because their transparent embryos make it easy to track their development.
Dr. Nakamura inserted bits of DNA into the fish versions of the Hoxa-13 and Hoxd-13 genes. The inserted DNA garbled the sequence of the genes, so that the fish couldn’t make proteins from them.
Zebrafish with defective copies of both Hox genes grew deformed fins, the scientists found. To their surprise, it was the fin rays that the fish failed to make. In the fish, their experiment showed, the Hox genes were controlling cells that became dermal bone rather than the endochondral bone found in our own limbs.
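The reason an inserted stretch of DNA "garbles" a gene is that any insertion whose length is not a multiple of three shifts the reading frame, changing every codon downstream. The toy sequence below is made up for illustration; it is not the zebrafish Hox genes.

def codons(seq):
    """Split a DNA string into the three-letter codons a ribosome would read."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

wild_type = "ATGGCCACCTTTGGAAAGTGA"              # hypothetical coding sequence
edited = wild_type[:6] + "GG" + wild_type[6:]    # a 2-base insertion after the second codon

print(codons(wild_type))  # ['ATG', 'GCC', 'ACC', 'TTT', 'GGA', 'AAG', 'TGA']
print(codons(edited))     # frame shifts after 'GCC': every downstream codon is different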
Dr. Shubin got a similar surprise when he saw the results of a parallel experiment run by his graduate student, Andrew R. Gehrke. Mr. Gehrke engineered zebrafish so that he could follow individual cells during the development of embryos.
In Mr. Gehrke’s altered fish, cells that switched on the Hox genes started to glow. They kept glowing throughout development, until they reached their final location in the fish’s body.
Mr. Gehrke observed that a cluster of cells started making the Hox proteins early in the development of fish fins. When the fins were fully developed, Mr. Gehrke found that the fin rays were glowing. In a similar experiment on mice, the digits and wrist bones lit up.
“Here we’re finding that the digits and the fin rays have some sort of equivalence at the level of the cells that make them,” Dr. Shubin said. “Honestly, you could have knocked me over with a feather — it ran counter to everything that I was expecting after working on this problem for decades.”
The new study was important because it revealed that the development of fins and limbs follows some of the same rules, said Matthew P. Harris, a geneticist at Harvard Medical School. In both cases, the Hox genes tell a clump of embryonic cells that they need to end up at the far end of an appendage. “The molecular address is the same,” said Dr. Harris, who was not involved in the study.
In zebrafish, the cells that get that molecular address end up making dermal bone for fin rays. In tetrapods like us, the research indicates, the same cells produce endochondral bone in our hands and feet.
The new discovery could help make sense of the intermediate fish with limb-like fins that Dr. Shubin and his colleagues have unearthed. These animals still used the molecular addresses their ancestors used. But when their cells reached their addresses, some of them became endochondral bone instead of fin rays. It may have been a simple matter to shift from one kind of tissue to another.
“This is a dial that can be tuned,” Dr. Shubin said.
You Are A Mutant
Genes from other species, and cells from your relatives, live inside your body – and they hint at how we can improve ourselves.
Let’s begin with the obvious. You are the product of billions of years of evolution, the accumulation of trillions of gene-copying errors. That’s what led single cells to evolve into jellyfish, ferns, warthogs and humans. Without mutations, life would never have evolved into Darwin’s “endless forms most beautiful”, and you would never have seen the light of day.
Today, while most of our genes are undeniably Homo sapiens, many of us also carry DNA from other species. We have known for a decade that people of non-African descent inherit between 2 and 4 per cent of their DNA from Neanderthals. And we now know that DNA from several other extinct human species is also still in circulation, on every continent including Africa.
Not only do you carry DNA from other species, you probably also play host to other people’s cells. Before you were born, your mother’s cells crossed the placenta into your bloodstream. Decades later, some of these migrants are still there in your blood, heart, skin and other tissues. This “microchimeric” exchange was mutual: if you are a mother, your children may still be inside you in the form of their embryonic stem cells.
You may even be carrying cells from your grandmother and any older siblings. Because microchimeric cells persist for a long time, there is a chance that during pregnancy, your mother was still carrying cells from any previous children she had, as well as cells from her own mother – and she may have shared some with you.
Maternal microchimerism is extensive, says Lee Nelson at the University of Washington in Seattle, and probably useful too. “There are so many examples in biology where organisms thrive as a result of exchange – why wouldn’t it also be useful for humans to exchange cellular material?” Fetal cells may help to repair a mother’s damaged heart tissue and lower her risk of cancer. Other research shows that mothers can end up with their child’s DNA in their brains, something that may even be linked to a reduced risk of the mother developing Alzheimer’s.
In future, we could become mutants by design. Gene-editing tools like CRISPR should allow genetic diseases to be treated by injecting genes into the body. For example, a small number of people with a mutation in the CCR5 gene, which supplies a protein to the surface of white blood cells, are resistant to HIV. CRISPR opens the possibility of inserting that mutation into the DNA of others, giving them a genetic vaccine against the virus.
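The logic of the CCR5 example can be sketched simply. The naturally occurring variant is a 32-base-pair deletion that disables the receptor, and people carrying two disabled copies are largely resistant to the HIV strains that enter cells via CCR5. The encoding and function name below are illustrative only.

def ccr5_hiv_resistance(copy1_works, copy2_works):
    """Rough genotype-to-phenotype mapping for the CCR5 delta-32 example."""
    working = int(copy1_works) + int(copy2_works)
    if working == 0:
        return "no working CCR5 receptor: largely resistant to CCR5-using HIV strains"
    if working == 1:
        return "one working copy: partial protection, slower disease progression"
    return "two working copies: no CCR5-based protection"

print(ccr5_hiv_resistance(False, False))  # the genotype a CRISPR edit would aim to mimic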
From there, it’s only a baby-step to genetic superpowers. Ethical questions notwithstanding, future generations could be enhanced with genes for extra-strong bones, lean muscles and a lower risk of cardiovascular disease and cancer. A mutation in the ABCC11 gene currently found in about 1 in 50 Europeans even renders underarms odourless. Think of the savings on deodorant. Be warned, however: this mutation also makes your ear wax dry up. Swings and roundabouts.
Steps to Life
DO WE live on a rare earth? One so exceptional that it is pretty much alone in hosting a rich diversity of life, with almost all other planets being home to simple microbes at best? Or are we in a universe teeming with living things as complex as those here, meaning that we exist as part of a vast, cosmic zoo?
The debate rages on, but we say it is time to accept that the latter is very likely.
To date we know of at least 3700 exoplanets and there are likely to be trillions of other potentially habitable exoplanets and exomoons in our galaxy and beyond. We do not know how commonly life arises on them, but many scientists think that it may well emerge from the chemical and physical properties of any suitable planet.
With that in mind, we head to our central question. If a planet does host life, what is that life like? Our hypothesis is that if a life-bearing planet remains habitable for long enough, then complex living things will arise. It might take a long time – for example, oxygen was required for the development of animals on Earth, and it took a billion years of oxygen accumulating in our atmosphere before animals appeared in the fossil record. But the jump from simple to complex life will take place, eventually and inevitably.
How can we claim this? Surely the path that Earth took was unique, full of extraordinarily unlikely events, such as the vast impact early in its history with another protoplanet? It left Earth with an over-sized moon, a big molten iron core, a mantle that can support plate tectonics, lots of water (but not too much), and many other specifics upon which life depends.
Rare earth contenders are right that our planet is unique, our solar system unlike any other we have found, just as there was only one Bach and one Schubert. But that does not mean that composers are incredibly rare. Other composers from other histories, other traditions, have created music – from Beethoven to boy bands.
We are not concerned with the specific examples of life on Earth, but with what life does: growing large structures, walking and thinking. For example, the mammalian placenta only evolved once, but similar tissues designed to feed growing embryos have evolved in scorpions, cockroaches, lizards, sharks and snakes. The mammalian eye is unique, but eyes have evolved probably a dozen times. Each specific path is a one-off, but there are many paths to each of these complex functions.
So what does it take to make a complex organism, such as a Beethoven or a birch tree? There are thousands of specifics, but evolutionary biologists have identified a few key innovations that take life a major step along the path from its simplest form to the diversity we see today.
Some of those steps are evident in the long hard road to the evolution of a brain: before you can evolve a smart brain, you have to evolve nerve cells, and that means evolving a way to run a complex genetic program in an organism, one that directs cells to form different tissues in different places and at different times. Many of the key steps are actually relatively basic ones even if the end result is far from basic.
The key innovations on the way to complexity turn out to be light capture (to provide energy), oxygen manufacture (to create a widely available and potent energy source for life to spread), complex cell architecture of the type we see in eukaryotes, multicellularity, genetic structures more sophisticated than a bacterium’s simple DNA, a way to run a complex genetic program, and intelligence. Oh, and sex.
Looking at these major innovations from the simplest life to the most complex, we find nearly all of them evolved independently several times on Earth. So while these are big steps towards complex life, they are not highly improbable.
As such, if and when humans visit one of the other inhabited planets, we expect life there could have taken many courses. The biochemistry may be different and life there will certainly have a different anatomy. We may not even recognise the more complex forms as animals or plants, but their functions and what they do are likely to be comparable to the more complex species on Earth.
Does this mean there would be technological civilisations comparable to ours? Intelligence is quite common on Earth. Tool use, playing, problem solving and the ability to learn new tricks and pass them on have all arisen independently in many animals such as octopuses, parrots, dolphins, apes (including ourselves) and elephants – lineages that have been around for a very long time. So why has just one species followed the evolutionary path towards technological intelligence?
If complex life is common, this final transition may be what economist Robin Hanson at George Mason University in Virginia calls the “Great Filter”, the stumbling block that makes truly advanced life very rare. If so, it would explain our failure to find any evidence of extraterrestrial technology. Of course, that search is just starting. Thirty years ago we didn’t know of any exoplanets, so who knows what the next generation of New Scientist readers will discover.
Within decades, the first interstellar probes – planned by NASA and the independent Breakthrough Starshot initiative – may be on their way to the nearest potentially habitable planets. This would be the best way to look for the presence of complex life and test the cosmic zoo hypothesis. Ultimately, though, we may have to accept the idea that the galaxy is a zoo with few visitors, and to find other talking, travelling technologists like ourselves we will have to search in a galaxy far, far away.
New Species Evolving Now
Recent news has brought some surprises on the evolutionary front, just in time for Darwin Day. Indeed, one finchy report involving a familiar island would’ve particularly put a smile on ol’ Charlie’s face. Evolution isn’t progressive in a directional sense, but our knowledge about it is.
Evolution is a slow process, unfolding over millions of years. But scientists are catching it in the act of speciation over the course of mere decades.
In honor of Charles Darwin’s most emblematic discovery, I will start with the announcement of a brand-spanking new Galapagos island finch species.
Yes, creationists, evolution CAN be observed. Just in time for Darwin Day, new research has revealed evolution -- and even speciation -- in real time.
The famous illustration of Galapagos finch beaks from Darwin’s account of the Beagle voyage, an early hint of the evidence for evolution by natural selection. Public domain.
Thirty-six years ago, a large cactus finch flew from far-off Española island and made himself at home on Daphne Major. The medium ground finches soon got busy with the new hot stud.
But their offspring had a problem. Their song just didn’t turn on the island’s other finch species. And the distance to Española island was too long a commute for mating. What to do?
Hello, cousin!
And – voilà! – a new species is born. The principle of reproductive isolation once again shows its evolutionary power.
But what about natural selection? Charles Darwin was far from the first to theorize about evolution. Darwin’s true insight was the mechanism shaping the course of evolution: natural selection.
This new finch species — which its discoverers have dubbed Big Bird (and, no, their feathers aren’t yellow) — has occupied its own ecological niche. Their larger and more powerful beaks allow Big Birds to peacefully coexist with the other three species of finches on this small island.
After only two generations, Big Birds are feathering their own newly speciated nests.
If it hadn’t been for Darwin’s pioneering study of the finches of the Galapagos Islands, the arrival of the first intrepid Big Bird would’ve gone unnoticed. Other recent findings have instead involved the inadvertent role humans are playing in avian evolution.
Darwin used the popular British pastime of dog breeding as a graphic illustration of the morphological power of artificial selection. Yet bird feeders are an even more pervasive passion in the UK.
More than half of British gardens are believed to contain bird feeders. And Brits spend twice as much on bird seed as their erstwhile EU brethren. If I were a bird, I would Brexit right over to the UK.
And they are.
Blackcaps traditionally overwinter in Spain, but one population is instead heading over to the British Isles during the scarcity of winter.
Global Warming is real. But blackcaps don’t care about melting ice caps. Warmer winters in the UK are a net plus for birds who wish to sup on the beckoning buffet of British bird feeders.
While it’s too soon to talk about the evolution of a new species, the shorter journey to their breeding grounds has led to rounder wings in this subpopulation. This shorter commute gives them a wing up on their Continental counterparts in the spring.
Their beaks are also longer and thinner and thus better suited to their man-made winter food source. And since they most often breed with their fellow British “snowbirds,” they’ve taken the first step to reproductive isolation.
Yet blackcaps are far from the only birds to have discovered the British bounty.
Great tits are common in both the UK and the Netherlands. Wipe that smirk off your face!
The bills of British great tits, however, are on average 0.2 mm longer than those of their counterparts in the Netherlands.
Researchers sequenced the genomes of more than 3000 great tits and identified a common sequence among the British birds that’s known to be associated with face shape. Indeed, it’s the very sequence that has been shown to control beak shape in — you guessed it — Galapagos finches.
They’ve been able to determine that longer-beaked great tits successfully raise about one more chick every five years than those with shorter bills.
It’s a small difference in bill size, which takes a long time to pay off in terms of differential survival. But over time it would add up to a significant selective advantage.
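As a rough illustration of how that compounding works (a minimal sketch with assumed numbers, not figures from the study): if a great tit typically fledges about five chicks a year, then one extra chick every five years amounts to roughly a 4 percent reproductive edge, and even that small edge multiplies up quickly over the generations.

# Toy compounding model in Python; the baseline clutch size is an assumption.
baseline_chicks_per_year = 5.0        # assumed typical output for a great tit
extra_chicks_per_year = 1.0 / 5.0     # "one more chick every five years"
s = extra_chicks_per_year / baseline_chicks_per_year  # relative advantage, about 0.04

for generations in (10, 50, 100):
    ratio = (1 + s) ** generations
    print(f"After {generations} generations, the long-billed lineage "
          f"out-reproduces the short-billed one by a factor of {ratio:.1f}")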
Snail kite with apple snail via Andy Morffew, CC By-SA 2.0.
Fast-evolving bird beaks are even helping to control an invasive snail overrunning the Everglades, while at the same time helping to save an endangered raptor.
When island apple snails began out-competing the smaller native snails in a section of the Everglades, ornithologists fretted about the effects on the endangered snail kite.
Not to worry, said natural selection. Before long, snail kites with larger beaks began scarfing down the hefty escargot. Beak and body size among the snail kites have increased by eight to twelve percent since the apple snail invasion.
An apple snail a day keeps extinction away. (Sorry, I couldn’t resist.)
Yet my favorite recent example of a species caught in the act of evolving is the strange tale of the marbled crayfish. This bizarre new species is only twenty-five years old.
It popped into existence spontaneously, perhaps in an aquarium.
It’s unknown when the marbled crayfish first appeared, but the species only came to the attention of scientists after a German aquarium hobbyist purchased a bag of crayfish he was told were “Texas crayfish.” (Marbled crayfish are mutant slough crayfish, which are native to Florida and Georgia.)
The misidentified crawdads were supposed to be all female, which is why he was surprised to find progeny overflowing his tank like Tribbles spilling out of the cargo hold of the Enterprise.
Every single marbled crayfish is female. And a clone of every other one.
Marbled crayfish aren’t exactly snuggly, and they don’t coo endearingly. But they do breed if you feed them, just like Tribbles. (Otherwise they would die.)
Soon marbled crayfish began popping up in German pet stores. And thus was born the marmorkrebs (marbled crayfish in German), of which the pet trade would never run dry.
If you have one marmorkrebs, you will quickly have thousands. Indeed, pet owners soon began dumping their excess marmorkrebs by the handful into local rivers and streams. They’ve been found as far away as Madagascar, where they are rapidly replacing native crayfish.
The sale of marbled crayfish is now banned in the EU and in a couple of US states, like, oh, you know already.
And much like their cuddlier Star Trek counterparts, marbled crayfish are “born pregnant.” Well, not really, but close enough.
It’s impossible to know whether the mutation happened in a sperm or an egg cell, but an abnormal sex cell carrying two sets of chromosomes instead of the normal one fused with an ordinary sex cell, producing a triple-threat offspring with three sets.
In humans, extra copies of chromosomes cause disorders like Down’s syndrome. But this triploid crayfish instead founded a new species.
When the very first female marbled crayfish began laying her batches of hundreds of eggs, they developed without fertilization. No males need apply.
And when mature, every single clone began begetting new clones.
This may seem like a good deal to many women. Yet parthenogenetic species are rare for a reason. According to one study, one asexual lineage of water fleas evolved only some 1500 years ago.
The first time a pathogen decides to target a clonal species, the clones’ lack of genetic diversity bites them in their identical asses.
The marbled crayfish seems to laugh at evolution, appearing in an instant like the earth before the heavens in Genesis. Yet one day they too will likely fall prey to the winnowing ways of natural selection.
Call it Darwinian Dharma (without the mystical hooey).
Survival of the Tamest
Our ancestors domesticated dozens of animals – but only after doing the same to themselves, says Colin Barras
FIRST came the dog, followed by sheep and goats. Then the floodgates opened: pigs, cows, cats, horses and a menagerie of birds and other beasts made the leap. Over the past 30,000 years or so, humans have domesticated all manner of species for food, hunting, transport, materials, to control pests and to keep as pets. But some say that before we domesticated any of them, we first had to domesticate ourselves.
Mooted by Darwin and even Aristotle, the idea of human domestication has since been just that: an idea. Now, for the first time, genetic comparisons between us and Neanderthals suggest that we really may be the puppy dogs to their feral wolves. Not only could this explain some long-standing mysteries – including why our brains are weirdly smaller than those of our Stone Age ancestors – some say it is the only way to make sense of certain quirks of human evolution.
One major insight into what happens when wild beasts are domesticated comes from a remarkable experiment that began in 1959, in Soviet Siberia. There, Dmitry Belyaev took relatively wild foxes from an Estonian fur farm and bred them. In each new litter, he chose the most cooperative animals and encouraged them to mate. Gradually, the foxes began to behave more and more like pets. But it wasn’t just their behaviour that changed. The tamer foxes also looked different. Within 10 generations, white patches started to appear on their fur. A few generations later, their ears became floppier. Eventually the males’ skulls shrank and began to look more like those of the females.
These were precisely the traits that Belyaev was looking for. He had noticed that many domesticated mammals – most of which weren’t selectively bred, but gradually adapted to live alongside humans – have similarities. Rabbits, dogs and pigs often have patches of white hair and floppy ears, for instance, and their brains are generally smaller than those of their wild relatives. Over the years, the collection of physical traits associated with tameness has been extended to smaller teeth and shorter muzzles. Together, they are known as the domestication syndrome.
Many creatures carry aspects of the domestication syndrome, including one notable species: our own. We too have relatively short faces, small teeth and no prominent brow ridges. Our relatively large brains are smaller than those of our Neanderthal cousins – something that has puzzled many an evolutionary biologist. And like many domesticated species, young humans are also receptive to learning from their peers for an unusually long time. Some of these similarities between humans and domesticated animals were noted early in the 20th century, but there was no follow-up. It was only after Belyaev publicised his experiments that a few evolutionary biologists once more began to consider the possibility that modern humans might be a domestic version of our extinct relatives and ancestors.
“ Humans really may be the puppy dogs to Neanderthals’ feral wolves”
On its own, Belyaev’s work didn’t provide the hard evidence needed to convince the wider community of human evolutionary biologists. “You can imagine people not liking the idea,” says Cedric Boeckx at the Catalan Institute for Research and Advanced Studies in Barcelona. At best, many see it as an analogy, he says. In part, that’s because until recently there was no good explanation for why tameness was linked with a suite of physical traits. In the early 2000s, Susan Crockford, now at the University of Victoria in British Columbia, Canada, suggested the thyroid gland might be involved, but the idea didn’t go very far.
That changed in 2014 when Richard Wrangham of Harvard University, Adam Wilkins, now at the Humboldt University of Berlin, and Tecumseh Fitch at the University of Vienna, made a connection. They pointed out one thing that unites the various parts of the body that are influenced by domestication: all derive from a tiny collection of stem cells in the developing embryo. The cluster of cells is called the neural crest. As the embryo develops in the uterus, and eventually forms a fetus, the cells of the neural crest are sent around the body to form different tissues, including ear cartilage, the dentin that makes teeth, and melanocyte cells that produce skin pigments.
Significantly, the neural crest also gives rise to the adrenal glands, which play a key role in fear and stress. Wrangham and his colleagues outlined a simple idea. During the initial stages of domestication of any animal – pigs, for instance – our ancestors began by selecting individuals that were less fearful of them, and less aggressive towards them. That made them easier to breed in captivity.
Unwittingly, the tamers were selecting animals that had smaller, less active adrenal glands, a feature in turn linked to less active neural crest cells. Changes in the cartilage and other tissues derived from these cells were just inadvertent side effects. Crucially, the team predicted that dozens of genes with links to the neural crest should all change as a result of domestication. Domestic species should have distinct versions of these genes, not seen in their wild relatives.
The idea, now known as the neural crest cell hypothesis, quickly gained fans, including Boeckx. “Before they formulated [it], the idea of self-domestication was hard to test,” he says. But with a genetic definition in place, it became possible to hunt for signs of it in species not normally considered domesticated – species like our own.
He and his colleagues looked at the genetic differences between modern humans and Neanderthals – the variations that, through the process of natural selection, caused our species to diverge. Remarkably, they discovered that many of the differences were linked to the neural crest. What’s more, the neural crest genes in several known domestic species were found to be distinct from those in their wild counterparts. In other words, some of the genetic differences that distinguish us from Neanderthals are the same as those that distinguish dogs from wolves and European cattle from European bison. This suggests there was an episode early in our evolution when our species underwent the same sort of domestication as these animals did. “The Boeckx result is totally cool,” says Wrangham.
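To give a flavour of how such an overlap can be judged against chance, here is a generic sketch of a hypergeometric test – a standard way of asking whether two gene lists share more members than luck alone would predict. This is not the analysis Boeckx’s team performed, and every number below is invented purely for illustration.

# Generic gene-set overlap test in Python; every count here is an assumption.
from scipy.stats import hypergeom

genome_size = 20000            # assumed number of annotated genes
neural_crest_set = 800         # assumed size of a neural-crest-related gene list
human_specific_changes = 600   # assumed genes carrying human-specific changes
overlap = 60                   # assumed genes appearing in both lists

# Probability of an overlap at least this large arising by chance alone
p_value = hypergeom.sf(overlap - 1, genome_size, neural_crest_set,
                       human_specific_changes)
print(f"Chance of {overlap} or more shared genes: {p_value:.2g}")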
There is a crucial difference, of course, between humans on one hand, and dogs and cattle, say, on the other. Most domestic animals were tamed by another species – us. So what tamed humans?
Evolution itself, says Boeckx. He and others distinguish between animals that are bred to be less aggressive, like horses, pigs and the Russian foxes, and ones that naturally evolve that way. Dogs, for instance, are thought by some to be partially self-domesticated. The idea is that some wolves were naturally bolder and less aggressive. They had an advantage because they could approach human settlements and dine on their leftovers. Only later did we selectively breed them and complete their domestication.
It is possible that being less aggressive and more cooperative was also an advantage for early humans, giving those with these traits a better chance of surviving and reproducing. Alternatively, researchers have argued that humans became less aggressive and more cooperative simply as a consequence of their large bodies and brains. Animals with these features typically show more self-control, so it is conceivable that our ancestors became less impulsive or quick to anger simply by virtue of their size. Sexual selection could also have played a role, with females finding less aggressive males more attractive, perhaps because they provided better care for their young. Wrangham and Brian Hare at Duke University in North Carolina have suggested that a similar process could explain why bonobos have evolved to be so much less violent than chimpanzees.
More work is needed to really pin down what ultimately drove self-domestication in humans, says Boeckx. He says the next step is to take lab animals and change some of the genes his team has identified, inserting the domestic versions in individuals that have the wild variants. If this produces offspring that look and act like a domestic species, but are otherwise unchanged, then we can be more confident that the genetic differences between Neanderthals and us really are down to self-domestication.
That said, several researchers are already convinced that this process can explain several important events in our evolutionary history, such as the evolution of language (see “Civil tongues”, left), and the explosion of culture during the Stone Age. The objects archaeologists have found suggest that it was only within the past 100,000 years that jewellery, musical instruments and other cultural artefacts became a common feature of human life, 200,000 years after Homo sapiens first appeared. “That’s always been a puzzle,” says Steven Churchill at Duke University.
“ Most domestic species were tamed by humans. So what tamed us?”
In 2014, he and his colleagues speculated that this delayed cultural revolution might have been linked to an intense pulse of human self-domestication 100,000 years ago. They argued that our species had the capacity to innovate from the start, but that our ancestors lacked the social networks for ideas to spread from group to group. Instead, knowledge and good ideas lived and died in the family group. Genetic and archaeological evidence suggests population densities began to rise around 100,000 years ago. Until that time, it may well have been beneficial for humans to be hostile towards strangers, perhaps to prevent others encroaching on their territories. But as people began to live more closely together, it would have been better to welcome them, say the researchers. Humans would have experienced an evolutionary selective pressure to be friendly and cooperative, potentially an episode of self-domestication.
The idea predicts that H. sapiens should have begun to show some physical features of domestication around the same time. The team looked at dozens of ancient human skulls and found that it was indeed around then that brow ridges and long, powerfully built faces faded away to leave our species looking more feminine, just like Belyaev’s foxes. “To operate in [a wide social network], I think you need overt signals that you’re not going to behave aggressively,” says Churchill’s collaborator, Robert Franciscus at the University of Iowa. Smaller brow ridges and faces were probably just that, he says. It is a nice idea, but one that will need further work to explain away some contradictions. For instance, fossils show that several undomesticated mammals – bears, boars, even sea cows – also seem to have become more feminine over the past 100,000 years.
And so, many researchers still need to be convinced that self-domestication – perhaps even successive pulses of self-domestication at different times – can explain profound mysteries of our evolutionary history. But advocates are undeterred. Wrangham is publishing a book on the subject later this year. Two millennia after Aristotle became the first person to compare people to domestic animals, the idea might be about to go mainstream.