If you want to make sense of the impressive immensity of the Buffalo Bill Historical Center here, or even take the full measure of its reconceived $3 million Buffalo Bill Museum, which opened in June, it might help first to turn away from this $75 million, seven-acre building with its 50,000 artifacts, five distinct “museums” and research library.
Just down the road, you will find the decidedly more humble Cody Dug Up Gun Museum, where more than 800 rusted, jammed and half-ruined pistols, revolvers and other weapons are displayed in the kind of dirt in which they were originally discovered. It’s an eccentric archaeological collection that includes a still-loaded Colt found in a Nevada ghost town — a gun, we learn, that was probably used as part of a 19th-century jailbreak.
Or, if your taste runs to gunslinger kitsch, sample the nightly shootout in front of the Irma Hotel (built by Buffalo Bill himself). The street-theater plot is a hokey variation on “True Grit,” with plenty of winks and elbowing jests. The sounds of the explosive blanks ricochet through this one-horse town (population under 10,000), luring standing-room-only crowds.
Or for more authentic fare, take a 10-minute drive out of town for the “Cody Nite Rodeo,” and watch cowboys rope cattle or ride bareback on bucking bulls.
It is difficult, at times, to determine which events are staged for tourists and which are reflections of a deeply ingrained local culture. Eating at the Irma Hotel buffet, you wonder if you are underdressed without a Stetson. It is not for mere effect that the entrance to the Buffalo Bill Historical Center specifies “Firearms Prohibited.”
Cody, you see, is cowboy country. For real. And cowboy country, too, for show. The town is surrounded by landscapes that could have been used as movie backdrops by John Ford. A stunning drive of just over 50 miles takes you to the eastern entrance of Yellowstone National Park.
The town was named after — and founded by — William F. Cody, who imagined that tourists would flock here because of its proximity to the park entrance. It was one of his few schemes that eventually succeeded. But even at its incorporation in 1902, Cody (the town) had a fighting chance only because Cody (the man) was Buffalo Bill — perhaps the most renowned celebrity of his time. His cowboy-and-Indian variety show, “Buffalo Bill’s Wild West,” had been seen across the country and in Europe. From 1883 to 1913, it toured (according to the history center) a quarter of a million miles, touching down in Paris, London and Brooklyn.
It is as if the show had given birth to a town. In that sense Cody is a bit like Disneyland or Las Vegas, originating from an entertainment concept. Except, of course, Cody is not just a theme park but also real: there are ranches and cattle. Moreover, because the town is not extraordinarily wealthy or (like nearby Jackson Hole) a draw for well-to-do skiers and financiers, it glitters with true grittiness. (Or is that the glitter of showmanship?)
And that brings us to this rather remarkable historical center. It traces its origins to a Buffalo Bill Memorial Association established in 1917, the year of that showman’s death. In 1959 the Whitney Gallery of Western Art opened and was later united with the Buffalo Bill Museum. The center grew.
In 1976 one of the world’s largest collections of American and European firearms made its debut in the Cody Firearms Museum. In 1979 a museum devoted to the American Plains Indians opened. And in 2002 the Draper Museum of Natural History, with its survey of the region’s natural habitats, joined the center. Now the Buffalo Bill Museum has been redesigned. The museums are connected like spokes on a wheel, and now draw 200,000 visitors a year.
The effect is extraordinary. The institution incorporates the paintings of Thomas Moran and Albert Bierstadt; the evolution of American weaponry; confrontations with, and betrayals of, the Plains Indians; Wild West entertainments; and a devotion to the beauty of the landscape. In a way it is an elaborate identity museum, exploring different facets of the West. But it is more ambitious than that, as if affirming that with its particular focus, it would also end up disclosing something universal.
There are problems along the way: a good part of the western art here is uninspiring; the Museum of the Plains Indians, like most American Indian museums, is suffused with romantic genuflection; and the Cody Firearms Museum is so overwhelmingly attentive to displays of its collections that the narrative can get lost. And there is the familiar tension between the ideals of the West and the messy details of what it took to “win” it. But for all that, the center is among the nation’s most remarkable museums, repaying close attention.
The center also seems a response to the question that Cody, the town, poses to a visitor about the intertwining of reality and mythology. This becomes explicit in the reworking of the Buffalo Bill Museum, whose curator is John C. Rumm.
The catalog suggests that the previous treatment of William F. Cody was a sober documentary tribute. Now the result is more ambiguous. It acknowledges that in Cody’s invention of Buffalo Bill and in his creation of “Buffalo Bill’s Wild West,” there was a fair amount of distortion. But there was also a fair amount of truth. The difficulty is separating them.
So in an exhibition that also combines artifacts (ranging from Cody’s mother’s clock to film clips of the “Wild West” show) with showmanship (a life-size diorama of Cody as buffalo hunter and scout, an 1890s Buffalo Bill board game with figures blown up to life size), this becomes one of the recurring themes.
Here was a man who popularly defined the Wild West using his own experience (as Army scout, buffalo hunter, Pony Express rider), but he was also an inveterate inventor of tall tales (a genre that thrived in the West).
Did Cody, for example, even ride for the Pony Express? “Evidence is ambiguous, and scholars disagree,” the exhibition tells us.
And was he really such a heroic figure? In displays about his domestic life we read that after Cody and his wife were wed, “their marriage soon began to fall apart.”
Is Cody, then, like the figure Paul Newman portrayed in Robert Altman’s sardonic 1976 film, “Buffalo Bill and the Indians”: a drunk, a liar, a brute, dwarfed by the nobility of the Indians performing in his show?
No, that was a cartoon, a portrayal of the man as the era wished to see him. Now we are asked to acknowledge complexities. Cody often boasted of killing Indians and once cherished a Cheyenne scalp he cut off in combat. But Sitting Bull and others praised him in later years for his humanity, and Cody made critical comments about the country’s mistreatment of the Indians. He was a man of his time, but he also changed over time. His best biographers (see Louis S. Warren’s “Buffalo Bill’s America”) reveal intricacies. Cody’s show ultimately embraced — and appealed to — multitudes.
“Have you ever watched a man make a mosaic floor?” Cody asked in a 1901 newspaper interview cited in the exhibition. The show, he explained, displayed “a human mosaic of horsemen: Indians, Cossacks, cowboys, Bedouins, Mexicans, cuirassiers, Boers, Britons, 300 strong.”
The exhibition affirms what the center as a whole demonstrates: that behind the mythologizing is something worth cherishing, even if it is flawed, complex and still evolving. The old impulse to demolish the myth has been put aside. During my visit here, for example, I meet Gregory Hinton, who is exploring homosexuality in the American West, once a potentially explosive project, and now supported by the center, where Mr. Hinton was a resident fellow.
And what is at the core of the idea of the West? I am told that one of the best-selling books in town is “Cowboy Ethics: What Wall Street Can Learn From the Code of the West,” by James P. Owen. Here are some of his homilies: Live each day with courage; take pride in your work; always finish what you start; when you make a promise, keep it.
Even the Cody rodeo begins with a prayer for the preservation of the “cowboy way of life.” And in 2010 the historical center’s board also came up with a credo: “We believe in a spirit, definable and intellectually real, called ‘the Spirit of the American West.’” It continues, “We believe the Spirit of the American West is central to American Democracy and an iconic image of freedom worldwide,” and adds that this Spirit was “originally interpreted and mythologized by people such as William F. ‘Buffalo Bill’ Cody.”
It is a spirit of “optimism,” of “hardy individualism,” of resistance to “indifference.”
Listen hard and you can hear someone calling out for Shane and see vivid evidence of the human need to idealize. But spend some time here, and you are almost prepared to adopt the faith.
Sinking Ships
During coverage in January of the Costa Concordia cruise liner disaster off the coast of Italy, I heard some survivors voice concerns about being "sucked under" if the boat sank. In what conditions would this be likely? How long does the downward force last, and would wearing a life jacket help?
• There has been much amateurish debunking and misunderstanding of this phenomenon. First-hand evidence from people who have been sucked down is hard to come by, because few survive, but any survivors' accounts make sense in light of the discussion below.
To understand the process, put small, slightly buoyant objects on large weights, let them sink through fluids and observe their behaviour. Start with a pillow or a slab of wood held in the air (air is, of course, a fluid). Scatter slips of paper on top. Most swirl away as the pillow or wood falls, but one or two in the middle will fall with the weighty object.
You can see similar effects with a brick covered in twigs as it sinks in clear water, or try it in slow motion by dropping large ball bearings in a jar of clear detergent containing a scattering of small bubbles. Objects slightly out of line simply swirl, but those caught directly in the wake follow the falling weights like a cyclist slipstreaming a truck.
As a ship goes down, the passengers most at risk are those on the top. Water in a hurry does not query the size of your life jacket; it grips you and down you go. Your best bet, whether in suction or in a rip current, is to swim to one side. You won't have far to go, and then your life jacket can get to work.
******
• It is quite possible for passengers of a foundering ship to experience the sensation of being sucked under. However, unless the ship is big and sinking quickly - creating a lot of turbulence and releasing a lot of trapped air on its descent - the forces involved are most likely to be small and transient, allowing passengers to swim to safety.
Air escaping from submerged compartments could bubble up through the column of water above the sinking ship. Aeration of water decreases its density and, according to Archimedes's principle, passengers would sink if their weight exceeded the reduced weight of water they displaced. This is why swimmers are less buoyant in the "white water" of the surf than in the "blue water" outside the breakers, and why small boats should avoid passing through the white water wake of big ships.
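To get a feel for the numbers, here is a minimal back-of-envelope sketch (not part of the original answer) applying Archimedes’ principle to aerated water. The densities are assumed round figures: seawater at about 1025 kg/m³ and a person with inflated lungs at roughly 1000 kg/m³, i.e. only slightly buoyant.

```python
# Back-of-envelope sketch (assumed figures, not from the original answer):
# how much entrained air it takes, by Archimedes' principle, for a slightly
# buoyant person to stop floating in aerated seawater.

RHO_SEAWATER = 1025.0   # kg/m^3, assumed typical seawater density
RHO_PERSON = 1000.0     # kg/m^3, assumed; a person with inflated lungs just floats

def aerated_density(rho_water: float, void_fraction: float) -> float:
    """Effective density of water containing a volume fraction of bubbles
    (the mass of the air itself is negligible)."""
    return rho_water * (1.0 - void_fraction)

def floats(rho_body: float, rho_fluid: float) -> bool:
    """A body floats when it is less dense than the fluid it displaces."""
    return rho_body < rho_fluid

for percent in range(0, 11):
    rho = aerated_density(RHO_SEAWATER, percent / 100.0)
    state = "floats" if floats(RHO_PERSON, rho) else "sinks"
    print(f"{percent:2d}% bubbles -> effective density {rho:6.1f} kg/m^3 -> {state}")

# With these assumed figures, a few per cent of bubbles is already enough to
# make the swimmer sink, which is why "white water" offers so little support.
```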
It is thought that bubbles caused by the release of methane gas from methane hydrate deposits beneath the sea floor can sink ships. In 2003, Joseph Monaghan of Monash University in Australia argued that a trawler discovered in a large methane pockmark known as Witch's Hole, about 150 kilometres off the east coast of Scotland, was sunk by a bubble at least as big as the vessel. Bruce Denardo of the Naval Postgraduate School in Monterey, California, tried to disprove the theory by floating a set of small spheres on the surface of a tank of water while feeding bubbling air into the bottom of the tank. The spheres sank. If bubbles can sink ships, the same could happen to people, although people might be able to swim clear of trouble.
While a sinking ship is still just below the surface, passengers could be dragged along by the water currents flooding in to displace the escaping air, which helps explain why passengers on different parts of the same sinking ship can have very different experiences.
Charles Lightoller, second officer of the Titanic, was twice "sucked under", carried by water flooding down through ventilators and air shafts. In contrast, chief baker Charles Joughin claimed that he did not even get his hair wet as he stepped off the stern of the Titanic while it sank beneath him.
There are other good reasons to stay clear of a sinking ship. For example, when the hospital ship HMHS Britannic sank off the coast of Greece in the first world war, a lifeboat full of passengers was caught in the turning propeller as it rose out of the water.
And for passengers left in the water, there is the danger of being struck from below by buoyant objects that break loose from the submerged ship.
Spontaneous human combustion
Spontaneous human combustion is a macabre phenomenon people either dismiss as a myth or blame on alcoholism.
PEOPLE explode. One minute they may be relaxing in a chair, the next they erupt into a fireball. Jets of blue fire shoot from their bodies like flames from a blowtorch, and within half an hour they are reduced to a pile of ash. Typically, the legs remain unscathed, sticking out grotesquely from the smoking cinders. Nearby objects (a pile of newspapers on the armrest, for example) are untouched. Greasy fat lies on the floor. For centuries, this gruesome way of death has been debated, with many people discounting it as a myth. But spontaneous human combustion is real and we think we can show how it happens.
The first accounts date from 1641, when Danish doctor and mathematician Thomas Bartholin described the death of Polonus Vorstius - who drank wine at home in Milan, Italy, one evening in 1470 before bursting into flames. In 1663, Bartholin wrote of a Parisian woman who burned, leaving the mattress on which she lay unscathed. And in the Philosophical Transactions of 1745, Paul Rolli told how 62-year-old Countess Cornelia Bandi of Cesena, Italy, said she felt "dull and heavy" after dining and went to bed. Next morning, her maid found a pile of ash with her legs protruding from the smouldering remains.
The first monograph on the subject was Essai sur les Combustions Humaines, produites par un long abus des liqueurs spiritueuses by French writer Pierre Aimé Lair in 1800. This set a moralistic tone that others followed, and alcoholism became accepted as the cause of combustion. In 1853, the Victorian magazine Notes and Queries included a summary of 19 cases between 1692 and 1829 by a Dr Lindsley, who wrote that those who had died of the condition were "habitually drunken" or "frequently indulged" in alcohol.
The first scientist to investigate spontaneous human combustion was the German chemist Justus von Liebig, who examined the records of some 50 cases. He pointed out that even though anatomical specimens are stored in 70 per cent alcohol, they will not burn, and he injected rats with ethanol but still could not make them catch light. This essentially disproved the causal link between alcoholism and combustion, but the belief persisted.
Recent cases are well documented. On 1 July 1951, Mary Reeser was visited at her home in St Petersburg, Florida, by her son. When a telegram arrived the next day, the doorknob of Reeser's apartment was found to be hot. When police broke in, all they found was a mound of smoking ash with a leg protruding and a charred liver attached to the spine.
The remains of John Irving Bentley of Coudersport, Pennsylvania, were found by a meter reader who let himself in on 5 December 1966; all that remained was a pile of ash and half a leg. The most recent case was that of 76-year-old Michael Faherty, who died on 22 December 2010. West Galway coroner Ciaran McLoughlin recorded the cause of death as spontaneous human combustion.
In 1961, London coroner Gavin Thurston published a paper, "Preternatural combustibility of the human body", in the Medico-Legal Journal. He described a potential mechanism for such combustion: the "wick effect". Human fat burns at about 250 °C, but once melted it will burn on a wick at room temperature. He experimented with a roll of fat wrapped in gauze, and showed that the heat of the flame could melt body fat and produce continuous combustion like a candle.
In January 1986, a BBC Newsnight programme demonstrated the wick effect. The following year, Nigel Cruttenden of the Kent Police Force used the wick effect to explain the death on 28 December 1987 of Barry Soudain, a handyman from Folkestone, UK. Soudain's charred remains were found in his largely undamaged flat. Cruttenden surmised that fat in the body had liquefied in the conflagration and fuelled the fire. The wick effect was becoming the accepted explanation. Yet in wick-effect burning the clothing soon burns away, leaving no wick; combustion takes 12 hours or more; and the corpse is not destroyed. Spontaneous human combustion is a very different matter.
A decade on, in August 1998, the BBC set out to show in an episode of its QED series entitled "The Burning Question" that spontaneous human combustion was explained by conventional means. Among the experts was Dougal Drysdale, from the University of Edinburgh, who heated a piece of pig bone to 500 °C in a muffle furnace. After 6 hours, he assured his audience, the bone would be reduced to ash. Drysdale inspected the bone after 8 hours, but it was still intact.
Stan Ames of the Fire Research Station, a British research unit, also appeared, to demonstrate how easy it was for a wooden-framed armchair to be destroyed by fire. The commentary said that the burning could reduce a body to ash, or, in this case, an armchair to its springs. The chair was left to burn for 6 hours in an experimental chamber, at the end of which it was largely intact. Part of the back and the armrest were charred. Setting fire to a combustible armchair is hardly comparable to burning a moist human body, and we failed to see the relevance; still, the BBC proclaimed that its programme meant that the "mystery" of spontaneous combustion was "finally solved".
I felt it was time to test the realities, so we marinated pork abdominal tissue in ethanol for a week. Even when cloaked in gauze moistened with alcohol, it would not burn. Alcohol is not normally present in our tissues, but there is one flammable constituent of the body that can greatly increase in concentration. Triacylglycerol lipids cleave to form fatty acid chains and glycerol. The fatty acids can be used as an alternative source of energy through beta-oxidation, giving rise to the key metabolic molecule acetyl-CoA. This helps drive the energy-producing Krebs cycle within the mitochondria of cells.
If the body's cells are starved (which can occur during chronic illness and even during a workout at the gym), acetyl-CoA in the liver is converted into acetoacetate, which can decarboxylate into acetone. And acetone is highly flammable. A range of conditions can produce ketosis, in which acetone is formed, including alcoholism, high-fat low-carbohydrate dieting, diabetes and even teething. So we marinaded pork tissue in acetone, rather than ethanol.
This was used to make scale models of humans, which we clothed and set alight. They burned to ash within half an hour. The remains - a pile of smoking cinders with protruding limbs - were exactly like the photographs of human victims. The legs remain, we think, because there is too little fat for much acetone to accumulate. For the first time a feasible cause of human combustion has been experimentally demonstrated.
So does this mean that victims of ketosis are likely to spontaneously combust? Hardly - there are only about 120 cases on record throughout history, so it is vanishingly rare. On the other hand, there would be an argument against people with ketosis wearing synthetic fibres on dry days, and a new argument to give up smoking.
Meanwhile, I sympathise with teenagers whose parents have repeatedly been told that the odour of solvent on a child's breath is a sure sign of glue-sniffing. It isn't of course: they were probably dieting or just unwell.
Mythbusters
When the first episode of Mythbusters aired in 2003, I couldn’t drive a car. I couldn’t see an R-rated movie. I was 14 years old and I couldn’t do much of anything. But Mythbusters taught me that I could do science.
Raised on Bill Nye videos, LEGOs, and CD-ROMs of dinosaurs, I was a lump of nerdy clay waiting to be molded. Mythbusters came to me at a critical time, and it transformed me into who I am today. Maybe it’s naïve to think that one television show shaped my entire professional trajectory, but if any TV show did, it was Mythbusters. I jumped into high school chemistry and biology without a second thought, despite my dismissive classmates. In my physics class I would interject with tidbits I learned from the show. When learning about circuits I asked, “Is this sort of like a Leyden jar?” My professor responded, “Yes it is…where did you learn that?” My answer was consistent.
When I got to college, my burgeoning passion for science steered me into engineering. I still watched Mythbusters every week. Once, in my thermodynamics class, my professor explained why what Adam, Jamie, and the gang did wasn’t really science. I defended them.
With enough episodes in the bag to fill 10 straight days with explosions, Mythbusters enters its tenth season this summer, accepting a torch passed on to them by the likes of Sagan and Nye. In popular science communication, they stand alone amidst a cable TV landscape filled with mermaids, “ancient aliens,” and Bigfoot. The show really is a phenomenon like COSMOS or The Big Bang Theory. I’d argue that it has done more for the public understanding of science than almost any medium before it.
I Reject Your Reality And Substitute Science
As successful as the show has been at communicating science, the Mythbusters hardly ever do science. They do not always get their terms right. They sometimes misunderstand physics. They make it look as if an experiment with a handful of data points or fewer amounts to a confirmation. Mythbusters gives kids all over the world the impression that an explosion is science (which it almost never is). And no scientist would call them scientists.
Maybe it’s the shackles of TV that keep the Mythbusters from consistent scientific rigor. Maybe producers shoehorn in C4 where it has no business being (though it is fun to watch). Undoubtedly the show has to find some kind of balance between entertainment and enlightenment that still delights audiences. No, Mythbusters is rarely science. But they know this.
In public appearances and on the show, the hosts Adam Savage and Jamie Hyneman admit that what they are doing is not always science. I have even asked them directly. At a “behind the scenes” tour appearance in my city, I took the microphone, hands and breath shaking from years of fandom, and asked about Adam and Jamie’s personal feelings on the show’s process. Adam responded that there are many things that both he and Jamie would like to do better, but simply don’t have the time to do or can’t get the higher-ups to agree to. Explosions work, rigorous experiments that take weeks or months to complete and film don’t.
Adam continued to address the crowd after I asked my question, though I was sure he was speaking directly to me. Instead of trying to do peer-reviewed science, the Mythbusters try their hardest to promote scientific thinking and skepticism first and foremost. It’s a “teach a man to fish” model. Both Adam and Jamie are active skeptics and rationalists, noting that some of their least favorite myths were of the “woo-woo” variety—like “pyramid power” and perpetual motion devices, they went on to tell me. Teaching a whole generation of kids, like me, to hold our beliefs up to the light of experiment and empiricism instead of faith and fixed ideas will arguably go much further than spending the extra time to smash together 50 cars instead of five.
The scientific skepticism that the Mythbusters have slipped into the show under all the exhaust and explosions is more important now than ever. Climate change denial, anti-vaccination proponents, creationist teachers, faith healers, fake bomb detector peddlers, psychic frauds, alternative medicine pushers…we need scientific thinking. We need a generation of kids who think an experiment is more important than a preconceived notion or an argument from authority. All of these rifts between science and pseudoscience are controversial, but Mythbusters sidesteps all the potential aggravation to get in on the ground floor. A wave of science-based decisions follows science-based reasoning. If we want to keep real science in our public schools, if we want public health measures to stand firm against bad ideas, Mythbusters gives viewers the basic tools to do so. As the “Bad Astronomer” Phil Plait says, “Teach a man to reason, and he can think for a lifetime.”
And by playing scientists on TV, the Mythbusters have done more for dispelling the white lab coat look and dorky disposition of scientists than any communication effort in my memory. They often fail to achieve real science, but failure is always an option.
When science writers and scientists gripe about the lack of actual science on Mythbusters, most of these people are indeed fans of the show. I can point out a flaw here or there, but I will keep coming back next week. (To their credit, Mythbusters is probably the only show on TV that will go back and redo a test if they think they got it wrong.) A critique of the science is not quite the same as a bad TV review. Scientists and science enthusiasts who fault the show want to see it get better. We know how powerful it has been in shaping the public’s view of science and the scientific method. Sure, an experiment with a larger sample size than two and fewer explosions would be good, but not one detractor I know of would see the show cancelled. Mythbusters is the way station between a childhood fascination with how things work and a full-blown interest in STEM fields. How much work do you think it would take to interest an avid Mythbusters fan in trying real science? Can you think of another show that could produce the same answer?
I have seen every episode of Mythbusters. I have been to the museum exhibit. I have tried my hand at fact-checking certain episodes. I have even met Jamie. Ten years ago I saw the gang try to get a car to fly after strapping rockets to it. Now I write about science for a living. I can’t be sure I’d be doing that if it weren’t for the Mythbusters.
After a decade, Adam, Jamie, Kari, Grant, and Tori are still showing a legion of fans that it’s okay to be a geek. It’s okay to take that reservoir of passion that you have and let it flow into whatever you love. Experiment, question, replicate, be critical, be nerdy, be yourself. Mythbusters taught me that.
Wild Weather
On 3 June 1983 the coast of Dorset and Hampshire was hit by a terrifying thunderstorm, with torrential rain and fierce winds that tore down trees and capsized dozens of small boats. A barrage of giant hailstones the size of golf balls smashed thousands of windows, part of the roof of a leisure centre in Portsmouth collapsed, and a helicopter nearly crashed after one of its two engines disintegrated from sucking in the ice. A bolt of lightning also blasted a car across a road and left it upside down.
Possibly the most bizarre incident of the storm was a shower of black ice, which turned out to be lumps of coal encased in ice. The incident was described by Mr P. A. Rogers in The Bournemouth Meteorological Registrar: “My telephone never stopped ringing with reports of coke having fallen all over the Bournemouth, Poole and Christchurch area. I investigated several reports and found the pieces to be the same, all having been discovered over lawns, paths, etc, and all found after the storms of the 5th. At one lady’s house I picked up 92 pieces of coke and there were still many pieces left. The largest piece of coke measured 6.0cm by 4.6cm.” And further along the coast at Brighton, a large crab 10in across encased in ice plunged from the storm cloud in front of a man, followed by a shower of hailstones.
These surreal events were probably all caused by tornadoes or waterspouts. In fact, witnesses had seen a waterspout at Brighton, which may have sucked up the crab, frozen it in the thundercloud and eventually dumped it on land. Another tornado may have swept through a coal merchant’s yard and sucked up a load of coal before showering it far and wide, all encased in ice.
Strange showers have happened many times before. If a tornado whips across a lake or pond it can hoover up small animals and objects into the thundercloud above — eventually the cloud grows too heavy with its unusual payload of wildlife or inanimate objects, and the whole lot crashes to earth in a bizarre shower.
Lee Harvey Oswald
“Marina and Lee: The Tormented Love and Fatal Obsession Behind Lee Harvey Oswald’s Assassination of John F. Kennedy” by Priscilla Johnson McMillan.
One book explains Lee Harvey Oswald’s motivations in assassinating John F. Kennedy better than any other—and it’s been out of print for decades. Joseph Finder salutes the careful reporting of Priscilla Johnson McMillan’s ‘Marina and Lee’ and explains why there isn’t a conspiracy in sight.
Shortly after President John F. Kennedy was assassinated, in November 1963, a Gallup poll found that 52 percent of the American public believed that the assassin, Lee Harvey Oswald, was part of a conspiracy. In the 50 years since, that figure has climbed closer to 80 percent.
You can understand why. It’s painful to accept that an American president was cut down by one small, half-crazy guy with a mail-order rifle who could easily have been stopped in any of a dozen different small ways - but wasn’t. No wonder Norman Mailer called the assassination “the largest mountain of mystery in the twentieth century ... a black hole in space absorbing great funds of energy and never providing a satisfactory answer.”
The key word here is “satisfactory.” The simple explanation—that Oswald acted alone—was unpalatable. The enormity of the crime didn’t fit the insignificance of the criminal. Far easier to imagine Oswald as a “cat’s paw” of a much larger scheme, engineered by invisible but all-powerful forces.
There’s something deeply consoling about conspiracy. As a writer of suspense fiction for whom conspiracy is a stock in trade, I know the gratifications of a world in which everything means something, everything adds up, everything is under the control of some grand human intention. We like to think that things happen for a reason, and that large things happen for large reasons.
The Warren Commission, established by President Lyndon Johnson a week after the assassination, was meant to set the record straight. Its task was to reassure a grieving nation that everything was under control, that there hadn’t been a coup d’état, that the U.S. wasn’t, in Johnson’s phrase, a “banana republic.” Its published report gave us such turgid bureaucratese as “The Commission does not believe that the relations between Oswald and his wife caused him to assassinate the President” and “Many factors were undoubtedly involved in Oswald's motivation for the assassination, and the Commission does not believe that it can ascribe to him any one motive or group of motives. It is apparent, however, that Oswald was moved by an overriding hostility to his environment.”
All this bureaucratic caution had a paradoxical effect, however. The Oswald who emerged from the Warren Commission report’s 26 volumes was a blank slate. No wonder it was so densely inscribed with our worst suspicions. It didn’t help that Oswald was himself shot dead two days after the assassination, by a nightclub operator named Jack Ruby in the basement of Dallas police headquarters. The shooting of the shooter made him loom all the larger in our imagination. As Thomas Powers pointed out, “Lee Harvey Oswald in prison for decade after decade—surfacing in the news whenever parole boards met, but otherwise forgotten, like Sirhan Sirhan, James Earl Ray, Arthur Bremer, John Hinckley—would have faded back down to size. It is Oswald dead and unexplained that excites suspicion. We needed a good long look in order to forget him.”
That good long look didn’t come until 1977, with the publication of Marina and Lee by Priscilla Johnson McMillan. The timing could not have been worse. It was two years after the ignominious end of the Vietnam War and three years after Watergate. The country had been through two more traumatic assassinations (Robert Kennedy and Martin Luther King). We were by then steeped in conspiracy thinking. Our distrust of politicians and government organizations was at fever pitch, shaped in part by the paranoid conspiracy thriller that had come into vogue in Hollywood: The Parallax View and The Conversation and Chinatown in 1974, Three Days of the Condor in 1975, All the President’s Men in 1976.
Marina and Lee offered a deep, nuanced, and spellbinding portrait of Oswald, as seen through the prism of the person who knew him best, his Russian wife, Marina. But it gave us no sensational revelations, no grassy-knoll conspiracy talk. What it offered instead was something far more unsettling: a portal to the life and times of a twisted, small man. The book was widely reviewed but its sales were modest. It wasn’t what the conspiracy-minded American public was in the mood to buy. McMillan’s book forces readers to confront something more vexing than a conspiracy: an absence of conspiracy.
It’s no less suspenseful for all that, in part because of the breathtaking intimacy of its character studies. The author’s gifts of observation are considerable. Yet she was also extraordinarily fortunate in the access that she enjoyed. A few months after the assassination, Oswald’s Russian widow, Marina Prusakova Oswald, was offered a choice of collaborators to write a book about her life with Lee. One was a Russian-born journalist named Isaac Don Levine, who’d written biographies of Lenin and Stalin. But he was mostly interested in talking about politics, and Marina had no patience for that. She wanted to talk about her tempestuous marriage.
The one writer Marina was drawn to was a 36-year-old woman named Priscilla Johnson (later, Priscilla Johnson McMillan), who had a gentle, warm nature and an intriguing background. McMillan had been a friend of John F. Kennedy’s—she had been an aide to him when he was in the Senate, and, pretty and socially connected, was a target of his attentions, though it never led to an affair. She also spoke fluent Russian, which was crucial, since Marina’s facility with English was poor. She understood the idiosyncrasies of Soviet life, having spent several years in Russia as a young reporter.
By a startling coincidence, she had also known Marina’s husband. In November 1959, as a reporter in Moscow, she had interviewed a 20-year-old ex-Marine named Lee Harvey Oswald at the Metropole Hotel; he had announced that he wanted to defect to the Soviet Union.
Marina Oswald and Priscilla Johnson McMillan hit it off immediately. McMillan then signed a contract with Harper & Row for a book about Lee Oswald for which she received an advance of $60,000. Two thirds of that went to Marina. Marina signed a release giving McMillan a free hand to write whatever she wanted.
From July 1964 until the end of the year, McMillan all but moved in with Oswald’s young widow and her two small children in her ranch house outside Dallas. They cooked meals and traveled together. McMillan babysat Marina and Lee’s kids. They traded confidences. The terrible event was less than a year old, and its details were still fresh. This was about as close as we could get to asking questions of Oswald himself.
McMillan had a difficult task. Marina had been over-interviewed. Fearing deportation to the Soviet Union, she had given different versions of her life to the FBI, the Secret Service, and the Warren Commission. She was also wary, ashamed, and overwhelmed with guilt. Was she in some way to blame for her husband’s actions? She vacillated between wanting to condemn him and wanting to defend him.
The result of McMillan’s immersive reporting is a full, rounded sense of Oswald’s character. His sense of self swings wildly. At times he regards himself as a world-historical figure destined to change the course of human events; at other times, he’s a cruelly neglected victim. It was a highly volatile combination. He fancied himself a Marxist, lived in rooming houses under aliases and was a furtive, nasty man. He wrote in what he called his “Historic Diary” while singing the theme song to the Gary Cooper Western High Noon (“Although you’re grievin’, I can’t be leavin’/Until I shoot Frank Miller dead”). He was far too angry, unbalanced, and delusional to consent to be the cat’s paw of some gleaming cadre of conspirators. (Only if you haven’t read Marina and Lee can you take Oswald’s famous jailhouse remark—“I’m just a patsy!”—at face value.) He’s a liar, a manipulator, a wife-beater, an odious human being, and finally a pathetic one. We like to think that great men make history. McMillan reminds us that small men do, too.
It’s a matter of being in the wrong place at the wrong time. The idea of assassination, McMillan believes, is highly contagious, like an influenza virus, and Oswald was infected not once but on multiple occasions. McMillan was the first to report that, in January 1962, when Oswald was living in Minsk, there was an assassination attempt on Soviet leader Nikita Khrushchev, probably by one of his own bodyguards, at a nearby hunting lodge. Oswald heard about it from a relative of his new wife, Marina Prusakova. The attempt was hushed up; no one outside Russia knew the details until McMillan’s book was published. “If this had happened in America,” Oswald told Marina and her family, “it would have been in all the newspapers, and everyone would be talking about it.”
Seven months before that afternoon at Dealey Plaza, Oswald had tried to assassinate another political figure: the segregationist and right-wing hero Gen. Edwin Walker. Oswald had missed by one inch, and he was emboldened by how easy it had been - and how no one had ever found out. Neither the FBI nor the Dallas police had an inkling he’d tried.
McMillan’s book undermines all the conspiracy theories so successfully because it doesn’t set out to do so. Marina and Lee doesn’t polemicize; it portrays. It’s alive to the small crevices of character—and to the vast and irreducible role of chance.
Even today, half a century after the assassination, the cascade of contingencies McMillan documents is painful to absorb. Oswald had only learned of the route of the president’s motorcade a few days before, she establishes, when it was published in the Dallas newspapers. The shooting was practically a spur-of-the-moment decision. Once he heard that the president’s limousine would be passing right by the building where he worked, he felt that fate had put him there. The president’s limousine looped right under his window. (McMillan’s reconstruction of the day of the assassination, documentary-like yet novelistic, is as pulse-pounding as the finest thriller.)
Would Oswald have shot any politician who passed under his window? Would he have traveled across town to shoot Kennedy if Kennedy hadn’t presented himself, in a slow-moving open-topped limousine, some 88 yards from the Texas School Book Depository? McMillan can’t say for sure, of course, but she doubts it.
And the cascade continues. What if the FBI hadn’t closed its investigation of Oswald—who changed his mind about defecting to the Soviet Union and returned to the U.S. in 1962—once they’d realized he wasn’t a Moscow-directed threat to national security? What if they hadn’t investigated Oswald at all? (McMillan speculates that the FBI’s repeated questioning of Oswald and his wife and their friends may paradoxically have inflated his delusional sense of his own importance and may have even emboldened him to go after the president.) What if Marina had agreed to his repeated pleas that she and their children move back in with him? What if it hadn’t been so easy to buy guns? What if the Secret Service had argued against JFK’s request to take down the protective bubble top of his limo on that nice sunny day?
“The tragedy of the president’s assassination was in its terrible randomness,” McMillan writes. The task of coming to terms with this reality is the challenge that Marina and Lee bodies forth in meticulous, mesmerizing detail. For most Americans, that challenge remains unmet. The reissue of McMillan’s classic book is the perfect occasion to surrender the salve of conspiracy, and take that good, long look. The truth is out there. Just turn the page and start reading.
I have a second, unpaid job. It’s seasonal. Each year during the last two weeks of October, I field a lot of calls from reporters. You see, I’m the world’s leading authority on poisoned Halloween candy. I acquired this distinction by default; I’m the only one who’s given the phenomenon much attention.
I became skeptical about Halloween sadism while I was a grad student. I couldn’t understand what would lead someone to hand out poisoned Halloween candy. When I’d say this to my friends, they’d be outraged: “Of course people do that! That’s just what people like that do!”
Eventually, I decided to test this. I figured that a child killed by a poisoned treat would be a big news story, so I looked at 25 years of Halloween coverage in The New York Times, the Los Angeles Times and the Chicago Tribune—the most prominent papers in the nation’s three biggest urban areas. I could not find a single report of a child who had been killed or seriously injured by a contaminated treat picked up in the course of trick-or-treating.
To be sure, one boy had died after his father gave him poisoned candy. Presumably, his father figured that so many kids were poisoned by Halloween maniacs that the death wouldn’t arouse suspicion. He was wrong on both counts and was arrested, convicted and eventually executed. The story of that poisoning - in Texas, a long way from New York, L.A. or Chicago - made all three papers, and reports of a couple of other deaths were followed by retractions: one little boy had gotten into his uncle’s heroin stash, and so on. That the papers covered even these cases, yet had no genuine reports of random Halloween sadism, seemed telling.
I published my results in 1985 in a sociology journal and wrote a shorter piece for Psychology Today. That was a fairly popular magazine in those days, and they arranged for me to appear on NBC’s “Today Show.” I also gave dozens of newspaper and radio interviews that year. That was the beginning.
Every October since 1985, I’ve continued to get calls from reporters. Usually, it’s a young person working for a newspaper who’s been assigned to write a piece about Halloween safety. There’s a pretty good chance that a reporter who goes online to review last year’s stories about the topic will see me quoted, so I get called and re-interviewed.
Each year, I update my research and post the new version on the UD library’s UDSpace [UD’s institutional repository], but my conclusions haven’t changed. Over the years, my findings have been reported in fairly visible media, including USA Today, Reader’s Digest, Bill O’Reilly’s show on Fox News, and NPR’s “All Things Considered”—as well as hundreds of newspapers with circulations large and small. My findings also have been posted on all manner of websites, although those folks rarely bother interviewing me.
Of course, it is fun to have people interested in my research. Yet the fact that I have been giving essentially the same interview for more than 25 years makes me wonder about the value of news coverage for social scientific research. My data now cover more than 50 years, and I still haven’t found a documented case of a child who was seriously harmed by a contaminated treat. I can’t say it has never happened; after all, logicians tell us that it is impossible to prove a negative. But I can say with great confidence that it isn’t common. Nonetheless, people still worry: a 2011 poll of parents with young children found that 24 percent had concerns about poisoned treats.
So, each year I expect to get more calls. Nor would it surprise me to learn that people have devised new programs to thwart those Halloween sadists. Since I started this research, many hospitals have started to X-ray children’s treats, and shopping malls have begun encouraging parents to bring their kids to the mall to trick-or-treat from store to store (presumably that additional foot traffic leads to more sales). Some church congregations have trunk-or-treat programs; members drive to the church parking lot, open trunks decorated for Halloween, and pass out treats to the youngsters who walk from car to car. Parents like these programs; after all, none of the children who visit the hospital, the mall or the church parking lot is poisoned, and parents can feel reassured that they’ve protected their kids.
The Halloween sadist resembles that hook-handed homicidal maniac who terrorizes Lover’s Lane; they are both central figures in urban legends. Halloween used to be about ghosts and goblins, but most of us no longer believe in supernatural terrors. What we believe in—what scares us—is criminals, and our scary stories have evolved to keep up with our fears.
So, what has this taught me? Humility. I’ve come to realize that, regardless of how much attention my research receives, some people won’t be convinced. An urban legend is harder to kill than a vampire.
(From Fark)
Back when I was a kid, it was razor blades in apples. Boy, those were all over the place.
We all knew it was bullshiat even when I was a kid. Even though we all trick or treated in our own neighborhoods anyway, to people we knew. Our parents didn't drive us out 10 miles to Elm Street to trick or treat strangers. But everyone (well apparently not everyone) knew it was just an urban legend. As for the "razor blade in the apple" thing, first of all who gives an apple as a treat and what kid would eat said apple?
That would make a funny short film: a creep gives a kid an apple with a razor blade in it, the kid burns down the creep's house, they tell the kid he deserved it for putting a razor blade in the apple, and the kid says, "there was a razor blade in the apple? That's even more farked up".
The Beast
I'M AT a set of traffic lights with 1,000bhp under my right foot and a straight stretch of empty road ahead. Even at tickover, the noise inside the car’s cabin is deafening. Whatever is under the long bonnet in front of me is desperate to be let loose. I slip the transmission into Drive, press the accelerator pedal and say a silent prayer.
Forty years ago this car was the talk of ashen-faced drivers everywhere. Motorists would return home trembling with tales of a car longer than a lorry, louder than a plane and as fast as a rocket. Today we marvel at the Bugatti Veyron’s 253mph top speed; back in the 1970s, the Beast was rumoured to be capable of that speed too and for a time it was officially the world’s fastest road car.
Tales about the car abounded, and not all of them were urban myths. A German count reported being overtaken on the autobahn by the Beast being driven so quickly that his Porsche was left floundering in its wake. He mistook it for a Rolls-Royce and phoned the car maker demanding it build him one.
In shaking tones, my father told me about the Beast and now I’m shaking too as I prepare to experience it for myself.
As I stamp on the accelerator, a noise assaults my ears that is like 1,000 washing machines at maximum spin, all with their drums broken. The Beast clangs, clatters, bangs and shrieks. It’s a symphony orchestra falling into a scrapyard. It’s a corrugated iron shantytown rolling down a hill. I’m aware my arms are being pulled from their sockets, my stomach is churning and my life is passing before my eyes.
The Beast
Engine: 27,000cc V12
Power: 1,000bhp
Torque: 1,450 lb ft
Transmission: 3-speed automatic
Acceleration: 0-60mph in 3.0sec (estimate)
Top speed: 260mph (claimed)
Fuel: 2mpg (range 92 miles)
Road tax: free (historic vehicle)
The Beast can cover a standing quarter mile in 10 seconds and has been timed at 183mph. Right now, with its speedo needle winding towards 70mph and 1,450 lb ft of torque surging through the driveshaft, I can believe it. But I can also believe that unusual things start to happen at such velocities and that this suburban road in Torremolinos, a Spanish town more used to bulls than beasts, is not the place to experience them.
So I lift off the throttle, the Beast settles to a tolerably deafening clatter and I take stock of this awesome machine. The source of the Beast’s gnashing and wailing is a 27-litre Rolls-Royce Merlin engine. The Merlin is best known for powering Spitfires and Lancaster bombers in the Second World War but the one under the Beast’s extended bonnet comes from a Boulton Paul Balliol, an aircraft used by the RAF for flight crew training in the late 1940s. About 40 years ago it was fitted to a ladder-frame chassis and so was born one man’s satanic vision of a high-performance car.
It was a vision that had first formed in the mind of a British engineer called Paul Jameson in the late 1960s. He would never be able to explain exactly why but one day he was seized by the idea of fitting an 850hp Meteor V12 engine from a Centurion tank to a 19ft-long chassis of his own design.
He fitted a gearbox he thought capable of handling the Meteor, fired up the engine and, still without a body on the car, hared off down the runway at Biggin Hill airfield, Kent, to test his creation. At 120mph the clutch disintegrated and the car shuddered to a halt at the feet, incredibly, of an automatic gearbox specialist called John Dodd.
Dodd had just flown into Biggin Hill with a plane-load of gearboxes and was soon telling Jameson how he’d build him a “step-up” gearbox that, once fitted, would enable the car to use one of his automatic boxes. It sounded plausible but some weeks later Jameson declared himself bored with the car and sold it to Dodd for £400.
Today, Dodd is a trim, fiercely energetic 81-year-old living in Malaga on Spain’s Costa del Sol, from where he runs an automatic gearbox repair business. “I still don’t know why Jameson built the car or why he suddenly got bored with it,” he says. “But I snapped it up. I saw it could have great PR value.”
The car ate two of Dodd’s gearboxes before declaring itself pleased with the third. He had a glass-fibre body made for it in the style of a stretched Ford Capri and, in a confident flourish that would later come to haunt him, attached a Rolls-Royce grille to the front. The car was ready to meet the world and Dodd wasted no time introducing it at motor shows and events across Europe. “I wanted to show people it was engineered to perfection,” he says.
He also wanted to show them it was fast. In 1973 the RAC timed the car at 183mph at Biggin Hill (it was also timed at 75mph in reverse) and it entered the Guinness Book of Records as the world’s fastest road car. In 1975 disaster struck when it was destroyed by fire. But Dodd had the bug: a new one would rise from the ashes, even more powerful and outrageous than the first.
To the old car’s restored chassis, Dodd attached a Merlin aircraft engine, a three-speed General Motors automatic gearbox and a lightly modified replica of the original body. He fitted Jaguar disc brakes but got the suspension from an Austin A110 Westminster, a family saloon with an engine considerably less powerful than a Merlin V12. The steering of the car remained unassisted. To complete the effect, Dodd once more fitted a Rolls-Royce grille to the car. He then had it registered by London county council which, fooled by the car’s Spirit of Ecstasy mascot, taxed it as a Rolls-Royce.
Dodd and his car were soon back breaking speed records. On one occasion, claims Dodd, he outran a police helicopter on a drive from Ayr in southwest Scotland to Carlisle. A police Range Rover tried and failed to stop him. Dodd appeared in court but the case was dismissed when the police driver admitted he could not identify the speeding driver.
One morning in 1981 Dodd received a High Court writ accusing him of trademark infringement. Rolls-Royce had wearied of Dodd and his use of its signature trappings for his car and decided to take action. The case attracted huge publicity. Dodd drove to the hearing each day in his car. On the fourth day his counsel quit, accusing him of making a mockery of the court. Later the same day, the judge criticised Dodd for his “cavalier” attitude to the case. The next day, Dodd arrived on horseback, proclaiming he had swapped a 1,000-horsepower form of transport for a single-horsepower one.
It came as no surprise when Dodd was eventually found guilty of contempt for ignoring court orders about the car. He was fined £5,000 and ordered to pay the same in costs but the judge also enshrined the car’s name in legal history when he too referred to it as “the Beast”. Dodd lost an appeal the following year. Still declining to pay, he was sentenced to six months in jail and a warrant was issued for his arrest. He jumped in his Rolls-Royce (a real one) and fled to Spain. He had the Beast shipped out later and says all the legal wrangling has now been settled.
The car’s name is as apt today as it was more than 30 years ago. It is 19ft and 2 tons of hideously proportioned custard-coloured awfulness. Today, in place of its Rolls-Royce grille and mascot, are the letters JD. The extended cabin looks as if it would accommodate four comfortably but it’s an illusion; there is space only for two. Which is just as well because it’s not easy to persuade anyone to come along for the ride.
As befits a car with a Second World War aero engine, its starting procedure is akin to firing up a Spitfire. You turn an ignition key, flick two magneto switches, flick a third switch to prime the cylinder valves with neat fuel and press the starter button.
The engine explodes into life, immediately filling the cabin with fumes from the exhausts down by the doors. Whereas a Spitfire’s supercharged Merlin engine roars and throbs, the Beast’s unblown V12 bangs and clatters furiously at its idling speed of 120rpm. “All that banging you can hear is torsional vibration,” yells Dodd from the passenger seat. “The propeller would have absorbed it. It disappears over 40mph,” he says, not entirely convincingly.
The T-shaped shift lever on the elderly General Motors box is loose and flops around the gate. Like a co-pilot, Dodd is on hand to operate it while I concentrate on the car’s remaining controls.
Controls? There’s a steering wheel but as we hurtle away from the traffic lights, I’m not sure it’s attached to the wheels. There are brakes, too, but they need a hefty shove to restrain the Beast. The trick to driving the Beast is to nail the throttle, before quickly lifting off and letting the momentum carry the car forward. Later, I learn why it’s not a good idea to mash the throttle in a roundabout, when the rear end steps out alarmingly. Amazingly, it tucks back into line with a few turns of the steering wheel.
“I love this car,” shouts an unfazed Dodd. “It’s one of the most powerful cars on the road yet once you learn how to handle it, easy to control, too.”
Later, I have the pleasure of parking in a narrow Torremolinos side street. Retired English people pour out of the cafes in astonishment, eager to shake Dodd’s hand and admire the Beast’s engine. “It’s a Merlin,” announces a beaming Dodd to the assembled throng.
“Does it fly?” asks one elderly gent.
I should say so.
TWA Flight 800
TWA Flight 800, an older Boeing 747 jumbo, took off from New York’s JFK airport on the evening of 17 July 1996 and headed out over the Atlantic Ocean. About twelve minutes after its departure, at about 13,700 feet, an explosion broke the aircraft in half just forward of the wing. All 230 people on board were killed.
The NTSB (National Transportation Safety Board) managed to recover all 230 bodies and over 95% of the wreckage from the ocean floor, which is pretty incredible. They reconstructed the aircraft to understand what went wrong. What was found, and what nobody disputes, is that the principal destruction was caused by an explosion of the fuel in the center wing fuel tank. What has never been determined is what triggered that explosion. Conspiracy theorists immediately jumped on this, and concluded that the aircraft must have been shot down by the US government, either deliberately or as accidental friendly fire.
Fuel was thrown on this fire from two principal sources. The first was a number of eyewitness reports from people who saw a second brightly lit object going up into the sky and contacting the aircraft, a description that certainly sounds like a missile attacking the plane. The second was the government itself. The FBI, which was investigating the crash to determine whether it was a criminal act, asked the CIA to produce a computer animation showing what the aircraft did after it exploded, in order to answer the questions of the victims' families. According to the NTSB, when the nose broke off the aircraft, the rest became tail heavy and veered sharply upward for several thousand feet, burning all the way, and thus looking like a missile. The FBI had no computer animation resources of its own, which is why it turned to the CIA. And once the CIA became involved, that screamed out to every conspiracy theorist in the world that the whole operation was a clandestine government coverup.
The aircraft was two and a half miles up and about nine miles offshore when it exploded. That puts even the coastline the better part of a minute away at the speed of sound, and the vast majority of the eyewitnesses were between one and two minutes away, as sound travels. The majority of the 38 eyewitnesses who reported a skybound streak that's been described as a missile trail only turned to look after they heard the explosion. This means that for at least two minutes after the plane exploded, something was happening in the sky that looked to many eyewitnesses like a missile going up. Remember, the majority of people who reported that it looked like a missile struck the aircraft did not start watching until at least one minute after the explosion happened. Therefore, in most cases of people who said it was absolutely a missile, the laws of physics make it impossible that they could have seen such a missile. We know for a fact that what the aircraft did in the minute or more after it exploded looked enough like a missile to convince many eyewitnesses that it couldn't possibly have been anything else. In all of these cases, whatever they saw happened after any theorized missile would have detonated.
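To make the timing argument concrete, here is a rough back-of-the-envelope sketch of the sound delay involved. It is not part of the official investigation; the aircraft's altitude comes from the account above, but the witness distances in the example are illustrative assumptions.

```python
import math

SPEED_OF_SOUND_M_PER_S = 340.0   # approximate speed of sound near sea level
MILES_TO_METERS = 1609.34

def sound_delay_seconds(ground_distance_miles, altitude_miles=2.5):
    """Seconds for the sound of a blast at altitude to reach a ground observer.

    ground_distance_miles: horizontal distance from the point below the blast
    altitude_miles: height of the blast (about 2.5 miles / 13,700 ft for TWA 800)
    """
    slant_range_m = math.hypot(ground_distance_miles, altitude_miles) * MILES_TO_METERS
    return slant_range_m / SPEED_OF_SOUND_M_PER_S

# Hypothetical witness positions, chosen only to illustrate the delay:
for miles in (12, 15, 25):
    print(f"witness about {miles} miles away hears the blast "
          f"after roughly {sound_delay_seconds(miles):.0f} seconds")
```

Under these assumptions, a witness a dozen miles away hears the explosion roughly a minute after it happens, and one twenty-five miles away hears it about two minutes later, which is the delay the whole argument turns on.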
One of the conspiracy web sites, Flight800.org, has a page giving testimony from witnesses who believe that they distinctly saw two separate objects, a missile and a plane, converge. As you listen, pay attention to when the witnesses heard the sound relative to what they saw:
Witness 73: ...While keeping her eyes on the aircraft, she observed a 'red streak' moving up from the ground toward the aircraft at an approximately a 45 degree angle. The 'red streak' was leaving a light gray colored smoke trail... At the instant the smoke trail ended at the aircraft's right wing, she heard a loud sharp noise which sounded like a firecracker had just exploded at her feet. She then observed a fire at the aircraft followed by one or two secondary explosions which had a deeper sound. She then observed the front of the aircraft separate from the back.
Witness 88: ...All of a sudden he heard an explosion. He glanced over to the southeast and observed what he thought was a firework ascending into the sky. All of a sudden, it apparently reached the top of its flight... At this point he observed an airplane come into the field of view. He stated that the bright red object ran into the airplane and upon doing so both the plane and the object turned a real bright red then exploded into a huge plume of flame.
Witness 675: ...Noticed an orange flare ascending from the south... trailing white or light gray smoke. He then observed the flare strike what looked like an eastbound Cessna airplane on the port side... Within five (5) seconds... he heard what sounded like thunder and felt the ground shake.
Witness 145: ...She saw a plane and noticed an object spiraling towards the plane. The object which she saw for about one second, had a glow at the end of it and a gray/white smoke trail... She heard a loud noise and saw an explosion just as the object hit the plane. The plane dropped towards the water and appeared to split in two pieces. A few seconds later, she heard another explosion.
Whether you're a conspiracy theorist or not, the minute-or-more delay imposed by the speed of sound clearly makes it impossible to reconcile what these people heard with what they think they saw. This illustrates why the witness testimony, while still valuable, cannot be relied on as the definitive explanation for what happened. Anecdotal testimony has value for suggesting directions to research, but it does not by itself constitute evidence, and cannot reasonably be treated as such.
If you do a Google search for "TWA Flight 800", most of the results are from conspiracy web sites that uncritically start with the assumption that the US government shot down the aircraft. These web sites then present opinion, conjecture, and hypothetical extrapolation that support that assumption. Sometimes you'll hear conspiracy theorists charge that the NTSB ignores eyewitness reports, or suppresses anything that doesn't agree with their official story of an accident. Anyone who's a pilot or an aviation nut knows that this couldn't be further from the truth. Go to NTSB.gov and click on Aviation Accident Database. Search for some recent accidents, as these will show you what an investigation looks like in progress. What you'll see are the facts that are known, and you'll see any eyewitness reports there might be. What you won't see is anything like an explanation or a theory, and certainly nothing like an "official story" that anyone is sticking to.
Of course, none of this changes the mind of a die-hard conspiracy theorist, because any government-produced report is simply part of the conspiracy. In fact, they consider a report's very existence to be further evidence of the conspiracy. When you hear a conspiracy theory that provides no testable evidence of its own, but relies only on anecdotal testimony, extrapolations of possible motivations, and unevidenced claims of implausible coverups, you have every good reason to be skeptical.
Skeptoid on Conspiracy Theories
Ever since the earliest days of Skeptoid, listeners have been asking me for two things: Do an episode on paranormal claims that turned out to be true, and do an episode on conspiracy theories that turned out to be true. For both types of requests, I've always answered "Great, just find some for me." Nothing. Ever. Crickets chirping. So when I went on the Joe Rogan podcast, which has an enormous conspiracy theory following, I asked straight out: Please send me examples of conspiracy theories that turned out to be true. I was buried in email... to the degree that such a thing is possible.
Judging conspiracy theories can be a tricky business. For one thing, they're often uselessly vague. I can say "The government does things we don't know about," and then virtually anything can come out in the news and I can claim to have been right. For another thing, the world is full of real criminal conspiracies, and I can always point to any one of them and claim "Hey, this is a conspiracy theory that was proven true." So I have a simple pair of requirements that a conspiracy theory must adhere to in order to be considered the type of conspiracy theory that we're actually talking about when we use the term.
First, it must be specific enough to be falsifiable. This is the fundamental requirement that every scientific theory must comply with to be considered valid. By way of example, compare a vague version of the chemtrails conspiracy theory to a specific disprovable claim. You can't just say "Some airplanes spray some unknown chemical." That's so vague that you could claim you were proven correct the next time a crop duster sprays a field. But if you say "United Airlines tail number NC13327 is equipped to spray VX nerve gas, and that one right there is spraying it right now," then that's a claim that can be disproven with a single inspection. You make a claim that specific, you're proven right, I'll stand behind you 100%.
Second, it must be known to the conspiracy theorist before it's discovered by the media or law enforcement. Simply repeating what someone else's proper investigation has led them to does not constitute developing a theory. Woodward and Bernstein did an intense investigation and put together evidence bit by bit until they had the whole story of the Watergate scandal; at no point did they sit back in their chairs, propose an elaborate conspiracy, then watch as every detail unfolded exactly as they predicted. If you want to impress me with your conspiracy theory, you have to discover it (in detail) before other investigators piece together the proof and make it public for you. Otherwise you're just claiming credit for reading the newspaper.
So now let's look at the most common "conspiracy theories proven true" that I was sent:
1. The Gulf of Tonkin
This was overwhelmingly the most common story sent to me from listeners of the Rogan podcast. It was the American excuse to enter the Vietnam War. A small naval battle took place between US forces and North Vietnamese torpedo boats, after which Congress gave President Lyndon B. Johnson the authority to order military action in support of certain Southeast Asian countries who were threatened by Communist forces. Basically, a thinly-veiled authorization for Johnson to go to war with North Vietnam.
The conspiracy part comes from the claim that the naval battle never actually took place, or that it was a fake "false flag" attack by American conspirators trying to give Congress the excuse they wanted. There's probably a grain of truth to this. There was indeed one real engagement on August 2, 1964, in which planes and ships were damaged on both sides and the North Vietnamese suffered a number of casualties. There's no doubt there. But it was the second attack two days later on August 4 that was fishy. American forces fired heavily on radar targets only, and nobody ever reported any visual sightings of North Vietnamese forces.
Throughout the day on August 4, as the action was unfolding, Captain Herrick of the destroyer USS Maddox cabled Washington a number of times, and reported in no uncertain terms that he believed there were no enemy forces. This information was public from the beginning. Even as Johnson was drafting his resolution, Senator Wayne Morse was holding public press conferences to reveal that the second attack was without evidence.
Provoking attacks may seem pretty unethical to most of us, but the fact is it's been a common military tactic since the Romans and the Carthaginians. At no point were the details of the Gulf of Tonkin incident unknown, so it never existed as a conspiracy theory.
2. COINTELPRO
The FBI's domestic Counter Intelligence Program was a terrible thing from the beginning. It operated formally from 1956, and less formally for nearly 50 years before that. Its purpose was to discredit and harm American groups mainly associated with civil rights, characterizing them as hate groups that threatened national security. The program was blown in 1971 when a group of eight activists, calling themselves the Citizens' Commission to Investigate the FBI, broke into a small FBI office in a perfectly planned and executed raid. They seized some 1,000 documents detailing COINTELPRO operations and mailed them to newspapers. The FBI was unable to identify any of the burglars before the statute of limitations ran out, so they got away with it clean. As a result, the FBI was forced to terminate this often-illegal program.
COINTELPRO fails both qualifications to be a conspiracy theory that was proven true. There are no records of anyone having made a specific and accurate claim prior to 1971 about what COINTELPRO was doing. Lots of people and groups had always believed the government was subverting them in some way, but there were no specific accusations. I can state right now that the government continues to subvert some groups in some way, and I'm right, but I also don't have any claims that are specific enough to be falsifiable. Nobody had any falsifiable claims about COINTELPRO until investigators did the legwork to reveal it.
3. Government Assassination of Martin Luther King, Jr.
COINTELPRO was in its heyday when one of its targets, the Reverend Martin Luther King, Jr., was assassinated by a sniper in 1968. King's family always believed that the government was behind it, and that James Earl Ray, who was convicted of the killing, was an innocent scapegoat. Conspiracy theorists claim the King family was ultimately proven right in 1999 when they won a civil wrongful death lawsuit filed against a man named Loyd Jowers and unnamed co-conspirators. If a court finds that a conspiracy was responsible for the death of someone, that means it must be true, right?
Well, often it does, but certainly not in this case. Nobody had ever even heard of Loyd Jowers until 1993, twenty four years after the investigation convicted James Earl Ray. Jowers went on the television show Prime Time Live and told a fairly wacky tale, about how he and the Mafia and the US Government conspired to murder King. His story was full of contradictions and made little sense, and the general feeling was that he was just some random guy out to make a name for himself and hopefully get a book deal.
A few years later, King's widow Coretta Scott King filed a civil suit against Jowers and unnamed co-conspirators, asking for damages of $100. The government didn't even bother to show up (in part because no one in the government was named), and only one reporter actually stayed for the whole case. Mrs. King essentially won by default, which is what happens when no defendant shows up. The case was so trivial that it wasn't worth anyone's time to contest. Given that every single criminal investigation of the murder found James Earl Ray to be the killer, the conspiracy theorists can hardly claim credit for getting this one right.
Incidentally, Jowers died shortly after paying the $100, so his lucrative book deal never quite materialized.
4. The Tuskegee Syphilis Experiments
From 1932 to 1972, the US Public Health Service operated a clinic in Tuskegee, Alabama, providing free health care to local black sharecroppers, many of whom were infected with syphilis. Contrary to some modern versions of the story, the study did not actually give syphilis to anyone who didn't have it. Instead, the researchers continued their studies of the progression of the disease, even after the mid-1940s, when syphilis became easily treatable with penicillin — a treatment they withheld from their patients.
600 patients were studied over the forty years, about two thirds of whom had syphilis. Some died, went blind, or suffered other effects of the disease, all untreated by a government clinic that easily could have cured them. The study began as a genuine community-based public healthcare study; but over the years, as scientists came to believe that blacks and whites responded differently to the disease, it gradually became an all-out exploitation of helpless victims. It was useful science, but with abhorrent ethical standards, or none at all.
No outsiders are known to have ever suspected the nature of the program, thus it never existed as a conspiracy theory. Peter Buxtun, a former Public Health Service investigator, blew the whistle in 1972 and provided full details to the newspapers.
5. CIA Drugs for Guns
For a long time, the US Central Intelligence Agency has worked to maintain fragile relationships with small foreign governments or factions within them. During the Korean War, the CIA employed Chinese warlords to get intelligence reports from peasants. During the Nicaraguan conflict, the CIA employed men like Panama's Manuel Noriega for similar reasons. In both cases, the people the CIA worked with were involved in the drug trade. To the CIA, condoning their drug shipments to the United States was a small price to pay for what the government saw as a far more important objective: intelligence.
In all likelihood, most details about such relationships have yet to surface, but investigative reporters have always found plenty. The most notorious indictment of CIA involvement with drug traffickers was written by Gary Webb for the San Jose Mercury News and later expanded into his 1998 book Dark Alliance. Amid overwhelming criticism from all sides, Webb later took his own life. But neither Webb's investigation, nor those that have followed it, ever turned up evidence for the charges made by some of today's conspiracy theorists: that the CIA deliberately got people addicted to cocaine, or sold the drug itself to fund weapons purchases.
In this case, the conspiracy theories have never been evidenced, and the facts that have come to light were discovered by patient and detailed investigators on the ground and immediately published. I can't find a single case of a conspiracy theorist having made a specific, falsifiable claim that was later proven true by investigators.
If you have a conspiracy theory, you cannot work backwards. You cannot start with your theory, and then set about looking for information that supports it. The investigators who have revealed the conspiracies we now know to have existed did so by investigating, piecing together what they learned, and then reporting their findings to the world. If you believe or support a conspiracy theory that has not yet come to light and been confirmed, history shows you're almost certainly wrong, and could probably stand to be a bit more skeptical.
The more years that I've put into Skeptoid, the more clearly I've recognized what truly excites my passion. Producing Skeptoid is not just fun, it's outrageously fun. It's a grind because I have to stay on schedule and there's a lot of busy work involved, but the research part is like Disneyland on steroids. Who wouldn't love a legitimate excuse to spend part of their day learning about new wild and crazy stuff? I certainly do, and what excites me the most is making a connection that no previous researcher has. It's the difference between opening a treasure chest that nobody's ever seen before, compared to one that a hundred people have already plundered. I've struck such gold a number of times, and the thirst for more has me in a kind of gold fever every time I sit down at my desk. This passion for discovery is what drives me and what makes me spring out of bed at 5:30 every morning with honest excitement. I truly do love it.
Now, I don't know for sure that no previous researcher has made these connections; probably somebody has in many of the cases. But I didn't find them, and I made the discoveries on my own. It's a kind of natural high that I hope everyone can experience, and that I hope will inspire others to want to learn more. Don't stop at the pop-culture supernatural explanations; those are boring. What's fresh and new is the often surprising fact of what's really going on.
The example that I've probably cited the most comes from Borley Rectory, said to be the most haunted house in all of history. Its most notable haunted feature was the automatic writing that appeared on the walls, often while people watched, scrawled words begging for help. This claim appears in print in every modern book written about Borley Rectory, in every documentary film, and it's all over the Internet. My connection came when I read the oldest accounts I could find from a pair of mediums who came to the house to perform seances using a planchette, which is a kind of Ouija board with a pencil. The mediums placed their hands on the planchette, and as it moved around on the table, the pencil would write. The old accounts said that the mediums used rolls of wallpaper laid out on the table, probably because they were the only large rolls of paper that were handy. These primary accounts never made any mention of writing appearing on the walls. It turns out that whole part of the story was nothing more than a modern misinterpretation from reading how writing appeared on the wallpaper while people watched. A completely understandable mistake, and it's become one of pop culture's most often cited examples of paranormal activity. For me, this was a Eureka moment that solved a question that had bugged me since I first read about it as a tiny kid.
There have been at least two cases where something strange has entered the popular mythology, and I discovered that what happened was driven by obscure cultural influences unknown to the Western reporters. One of these was the Faces of Belmez, a case in Spain where a woman was found to have been painting faces on the cement floor of her house for decades, and all the locals believed it was a psychic manifestation. What puzzled me was that it's a profoundly Catholic population: Why would such staunch Catholics be so quick to believe in psychic powers? To me, that was the real mystery, and the answer lay in cultural anthropology. Many of the lower classes in the Andalusian region of Spain are called Romani gitanos, who are ethnically descended from Eastern European gypsies. Even today, gypsies are hugely into psychic powers, faith healing, and communication with the dead. Blend that with the ubiquitous Catholicism throughout Spain, and suddenly a psychic manifestation becomes a practically inevitable confirmation of Santa Maria. The cultural influence spoke louder than the conventional clues.
I made a similar discovery investigating the case of Hambo Lama Itigelov, a Buddhist monk from eastern Russia near the Mongolian border, who died in 1927 and whose body, exhumed and found to be miraculously incorrupt, is now on display at his monastery. They do exaggerate how fresh he looks, but he is remarkably well preserved. To nearly all modern reporters, the discovery of his incorrupt state was a surprise and an inexplicable miracle. But upon reading that Itigelov had given instructions that he be exhumed, I wondered if it might not have been such a surprise after all, so I looked into the traditions of Buddhist monks. And I found one, one that would have especially appealed to Itigelov, who was a pharmacologist and had written a Buddhist encyclopedia on the subject. As the Hambo Lama he would have been thoroughly learned in all the ancient traditions, including one handed down from the Buddhist monks of Japan called sokushinbutsu, or self-mummification. I put the pieces together: he had given instructions that he be buried packed in salt, he had spent his final months in secrecy, and his body was found to be saturated with preserving bromine salts. There was no mystery and no surprise; there was merely the practice of an obscure ancient tradition. You get no help from previous writers with this stuff; I had to find all of this on my own, and when everything fell into place, it was so rewarding.
Many times, previous writers aren't merely unhelpful, they actually steer you wrong. One thing I've discovered time and time again is that flagrant plagiarism is endemic among those who write about the paranormal on the Internet. Articles are often unapologetically wholesale copied and pasted from one site to another; that's how much thought and research these authors put into their work. One such case was my episode on the Brown Mountain Lights in North Carolina. Every article I found about them, Wikipedia included, contained a snippet of text purported to be from a researcher named de Brahm back in 1771, who wrote that the lights were caused by gases escaping from the mountains and igniting. The problem was that this author's books are all online and searchable, and I couldn't find that sentence in anything he'd ever written. I put the question out to the Skeptoid Research email list to see if anyone could track it down, and sure enough, several people did. It turns out I couldn't find the text because he'd been subtly misquoted, and furthermore, when read in context, he was expressing his prescientific thoughts on why lightning occurs. It didn't have anything whatsoever to do with the Brown Mountain Lights. Just about every book and article ever written about the Lights cites this sentence as a primary source; it is the only evidence that the Lights existed prior to the arrival of the train and the automobile, and they're all wrong.
This happened again when I researched the Bell Witch, widely considered the most authentic and dramatic case of witchcraft in U.S. history. This required deep historical research — not into the events, but into the printed sources. When I followed the primary sources cited by all the contemporary accounts, they all led back to one original book, written and published 75 years after the hauntings. Its author, a writer and publisher named Ingram, claimed that his primary source was a diary from one of the Bell children that had dubiously fallen into his possession. He never produced the diary, never gave any evidence of its existence, and never gave a believable account of how he obtained it. He also falsified at least one other primary source, gave quotes and stories only from people who were already deceased, and made up historical events (like a visit from Andrew Jackson) that contradict known history. I concluded that the Bell Witch incident was almost entirely, if not entirely, a hoax from an imaginative author, just like the Amityville Horror.
People sometimes ask me if I'm not disappointed when I find out that these urban legends aren't true. Hardly. For me it's never been about disproving things or showing that the world is less interesting than we hope. For me it's about new discoveries, and every one of them gives me a rush. Many of these stories are so deeply ingrained into our culture that everyone who's heard about them believes them, and every book you can pick up reinforces that. Making a new discovery, and overturning the current state of our knowledge, is what every researcher and scientist longs for. It's where the fun is. It's the gold at the end of the rainbow.
Paranormal stories are square pegs in round holes. They simply don't fit our understanding of how the universe works. When a lazy researcher concludes that a paranormal explanation must be the true one, they know a red buzzer is going off, telling them they probably haven't found the right answer yet; they just choose to ignore it for whatever reason. Maybe to them, being different is more important than actually solving the puzzle. I like to have everything fit. I like seeing the pegs drop into the holes. I'm excited to see what the solved puzzle looks like. I know that's not everyone's cup of tea, but for me, picking up a Rubik's Cube and solving it is where the real magic is.
Every story out there that depends on some supernatural element is an unsolved puzzle. I look at all of these puzzles and I feel like a kid in a toy store. And that, I think, is the best way to explain why I love making Skeptoid.
Skeptoid - The Oak Island Money Pit
It began as every boy's dream adventure, like a chapter from Tom Sawyer. It was the year 1795 when young Daniel McGinnis, a lad of 16, rowed to Oak Island in Nova Scotia on a journey of exploration. On the eastern end of the wooded island he found something out of place: an old wooden tackle block suspended from a heavy branch, and on the ground below, a sunken depression.
It didn't take young Daniel long to bring in heavy equipment, in the persons of two friends armed with shovels and the knowledge of old stories that the pirate Captain Kidd may have buried treasure on this part of the coast. Two feet down they struck a layer of flagstones, and all the way down they found pick marks on the walls of the shaft. The three boys dug for days, and just when they were about to give up, they came to a solid platform of logs. There was nothing under the logs, but it fired the boys up: There was no longer any question of whether something had been buried here. Over the coming weeks they finally reached 30 feet — that's incredible for three teenagers — and along the way found two more log platforms. By then the difficulty and frustration won, and they gave up.
But the local newspapers had made the pit something of a phenomenon. A few random people came and tried to dig further, and around 1803 a mining group called the Onslow Company took over the island and made the first really serious effort. They found more regularly spaced log platforms, a few of which appeared to have been sealed with coconut fiber and putty. Their most important find was a stone tablet at 90 feet, inscribed with weird symbols, translated to mean "40 feet below, two million pounds lie buried." Encouraged, they continued their dig, but got no further: Just below the tablet they struck a side tunnel, open to the sea, which immediately flooded the pit. It was a booby trap.
To make a very long story short, many companies and investor groups have taken over the island and launched major digging efforts, costing millions of dollars and claiming the lives of six men in various accidents. An auger sent down the hole in 1849, past the flood tunnel, went through what was said to be a sheet of iron, more oak, and "broken bits of metal", and brought up three links of gold chain and a 1-centimeter scrap of parchment reading "vi" or "ri". Flooding and collapses marred many mining efforts over the decades. In 1965 a causeway was built to the island to deliver a great 70-ton digging crane, which excavated the Money Pit to a depth of 134 feet and a width of 100 feet. In 1971, workers sank a steel caisson all the way down, finally striking bedrock at 235 feet. They lowered a video camera into a cavern at the bottom of the shaft, but whether it shows nothing interesting or a spectacular pirate's cave seems to depend on whose account you read. After more than 200 years of excavation, that entire part of Oak Island is now a wasteland of tailings and abandoned gear, and the location of the original Money Pit is no longer known.
What might have been buried at the Money Pit? Theories abound beyond the popular local notion of Captain Kidd. Other pirates said to have buried loot in the region include Henry Morgan and Edward Teach, as well as Sir Francis Drake and Sir Francis Bacon. Some suggest the Vikings. An interesting nomination is that of the British Army protecting payroll during the American Revolution, whose Royal Corps of Engineers were about the only folks at that time with the ability to construct such a system, with subterranean flood tunnels leading to the sea.
I've long been curious about a couple of elements of the Oak Island story. First, the coconut fiber. Coconuts are not found in Nova Scotia, nor anywhere in the vicinity. The closest place coconuts were found in the 18th century was Bermuda, about 835 miles due south of Oak Island. You might think this supports the pirate theory, since pirates certainly frequented Bermuda and the Caribbean. Some accounts of Oak Island have said that coconut fibers were found in large quantities buried beneath the beach, though this has never been evidenced. A 1970 analysis by the National Research Council of Canada did identify three of four samples submitted as coconut fiber. Radiocarbon dating found that the coconut came from approximately the year 1200 — three centuries before the first European explorers visited the region, and two centuries after the only known Viking settlement more than 600 miles away. How would coconut fiber, already centuries old by the time the first Europeans reached the region, get buried 50 feet underground in a booby-trapped shaft that nobody then had the technology to dig, when there was nobody around to do it?
It's really those flood tunnels that put the Money Pit over the top of anything you could expect pirates or anyone else to be responsible for, especially during a century when no Europeans were within thousands of miles. Divers would have needed weeks or months to cut subterranean tunnels from the bay nearby to the 90-foot level of the Money Pit. Or would they?
The geology of Oak Island and its surrounding area gives us some more clues. The region is primarily limestone and anhydrite, the conditions in which natural caves usually form. In 1878, a farmer was plowing on Oak Island just 120 yards from the Money Pit when her oxen suddenly broke through the ground into a 12-foot-deep sinkhole above a small natural limestone cavern. Seventy-five years later, just across the bay, workers digging a well encountered a layer of flagstone at two feet, and as they dug to a depth of 85 feet, they encountered occasional layers of spruce and oak logs. Excitement raged that a second Money Pit had been found, but experts concluded that it was merely a natural sinkhole. Over the centuries sinkholes occasionally open up, trees fall in, and storms fill them with debris like logs or coconuts traveling the ocean currents. These events, coupled with the underground cavern at the bottom of the Money Pit discovered in 1971 and the discovery of numerous additional sinkholes in the surrounding area, tell us that Oak Island is naturally honeycombed with subterranean limestone caverns and tunnels. The geological fact is that no Royal Corps of Engineers is needed to explain how a tunnel open to the sea would flood a 90-foot-deep shaft on Oak Island, booby-trap style.
Obviously the story has plenty of elements not thoroughly explained by the theory that the Money Pit was simply a natural sinkhole consistent with others in the area. One such element is the stone tablet. There's a link to a drawing of it made by investigator Joe Nickell in the online transcript of this episode. No photographs exist of the stone, nor any documentation of where it might ever have physically been. The transcribed markings are in a simple substitution cipher using symbols borrowed from common Freemasonry, and they do indeed decode as "Forty feet below, two million pounds lie buried", in plain English. The stone tablet made its appearance in the Onslow Company's records, coincidentally, about the same time they were running out of money and their pit flooded. Most researchers have concluded that the stone tablet was probably a hoax by the Onslow Company to attract additional investment to continue their operation. The same can be said of the other two significant artifacts, the links of gold chain and the parchment. Accompanying his map, Joe Nickell said "The artifacts are not properly documented archaeologically, and most would appear to derive from historical activity on the island or from subsequent excavation or hoaxing by workmen."
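As a purely illustrative aside on how such a cipher works: once each symbol has been matched to a letter, decoding is nothing more than a table lookup. The sketch below is hypothetical; the symbols and the key are invented placeholders, not the actual Masonic-style marks transcribed from the stone.

```python
# A minimal sketch of a simple substitution cipher. The symbols and key are
# invented placeholders, not the real markings from the Oak Island stone.
cipher_key = {
    "#": "f", "%": "o", "&": "r", "*": "t", "+": "y",
}

ciphertext = "#%&*+"          # stands in for one carved word
plaintext = "".join(cipher_key.get(symbol, "?") for symbol in ciphertext)
print(plaintext)              # -> "forty"
```

The point is only that such a cipher offers no real security; anyone holding the key, or willing to guess letter frequencies, can read the message directly.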
And as with so many other subjects, the older the account you read, the less specific and impressive the details. The contemporary newspaper accounts of Daniel McGinnis and his two friends make no mention of a tackle block or of regularly spaced log platforms, only that logs were found in the pit, and that the tree branch showed evidence of a block and tackle having been used. Armed with proper skepticism and the willingness to look deeper than the modern sensationalized retellings, the Money Pit's intrigue and enchantment begin to fade.
I was probably no more than eight or nine when I first read about young Daniel McGinnis and his treasure tree, and at that very moment, Oak Island became a permanent part of me, as it has so many others. Oak Island's history is a patchwork of individual romances and adventures, a tapestry made from the reveries of skeptics and believers alike. Whether building causeways and sinking caissons, analyzing old newspapers, swinging a pick in the glare of a lantern, or even listening to a podcast about the mystery, all of us share the same ambition. No matter if we seek treasure or truth, we all long for the chance to turn just a few thrusts of the shovel, and we care not what we find.
Skeptoid on Conspiracy Theories 2
Today we're going to open the Skeptoid mailbag (it's not really a bag) and answer some emails about conspiracies. Don't laugh off conspiracy theories; they're still one of the most prevalent examples of failed thought processes. Conspiracy theories cut across all demographics. There's no age group, political affiliation, geography, or economic class that is free of conspiratorial thinking. Even I continue to be surprised at how widespread it is. I can be out with a group of friends, and if the subject comes up, it's a virtual certainty that someone I'd never expect will launch into a conspiracy tirade.
So with this in mind, I thought it would be a perfect time to spend an entire feedback episode answering the following email, sent in by Bruce from Sydney, Australia. Bruce writes:
Last week I was out at dinner with a group of people when my two cousins launched into a conspiracy rant that covered everything from Osama's death being staged to robots flying the planes that hit the Twin Towers. Incredibly, the audience at dinner was lapping up this nonsense. Because these campfire tales make such interesting table conversation, they end up being believed by the group. When I chime in mentioning such boring words as evidence and science I literally get shut down in favour of far more exciting yarns of conspiracy. What is your advice on how to provide a balanced argument so the conspiracy crowd doesn't continue to grow in numbers by the time the dessert cart arrives...
Now previously, in my earlier episode called How to Be a Skeptic and Still Have Friends, my basic advice was to say nothing. In cases when nothing is really at stake, which is most of the time, there's no need to start a fight with your friends. There's rarely an upside to that. But in this case, Bruce's friends are hearing about conspiracy theories for the first time. Most likely they walked away and never gave it another thought; but the message they came away with was "There are some really scary conspiracies out there."
Well, I'd rather they come away with a different message. I'd rather they come away thinking "There are some really goofy conspiracy theories out there."
Here's one way to do that.
The idea is to give the conspiracy theorist enough rope to hang himself. There's rarely a need to challenge him on any specific point. The reason this is almost guaranteed to work is that most conspiracy theories are absurdly complicated, self-contradictory, and united only by the conviction that some authority or official story is wrong. Bruce's cousins claimed the airliners were piloted by robots. I've also heard that the airliners were holograms, that they were flying missiles disguised as airliners, and that they were real airliners with big mysterious boxes attached to their undersides. All four of these are mutually contradictory; but in the mind of the conspiracy theorist, they're all more likely than "the official story". Each of those alternate suggestions would have required a completely different kind of conspiracy, with radically divergent details of who was involved and how it was accomplished.
There are two questions you can raise, next time you're in Bruce's shoes. Your goal is to show the other listeners present that they're being fed crackpot nonsense, so your basic strategy is to force the lack of coherence to the surface. Start by asking about one of the competing theories:
I've heard that the airliners were missiles and not real airliners. What do you think about that?
I promise you that the conspiracy theorists will be far more receptive to any alternate conspiracy, than they will to the official story. Suggest an alternate conspiracy, and they'll probably say "Sure, that's another possibility," or something along those lines. If it's Osama bin Laden's death, two competing theories are that he's still alive living in some CIA luxury suite, and that he died naturally some years ago.
Your work is now half done. Flesh out the competing theories. If they were fake airliners, then the original ones must still exist. If they were real airline flights, then the pilots must have been replaced with robots at the airport. Get as many details as you can to supplement both theories with as much divergent information as you can. And then, ask your second question:
Given the three possibilities — robot pilots of real airliners, fake airliners, and the official story — what convinced you that robot pilots was the true version?
Now, my final guarantee to you is that you will then get to watch your conspiracy friend flounder; and whatever he says, the other people present are only going to hear that he has no idea what he's talking about. Chances are he'll happily grant that either is possible, and both fit the facts, despite all the ridiculously different real-world requirements that would have been needed to make either one happen. Watching him explain how two mutually exclusive conspiracies both fit the same set of facts is always entertaining.
So let's take the remaining time to answer a couple of other emails, this time from listeners who do assert conspiracies. Alex from Miami wrote in about the episode on the Bohemian Club, a private club for rich old men trying to rediscover their frat house days. A number of conspiracy theorists assert that it's secretly for the planning of global domination:
...Your initial question was false: it's not if these guys actually *plan* nefarious things there, it's if there is any evidence that the Powers That Be actually go there and network...which they do. We can infer what that means: ...the Club members are more likely to team up on projects in the future. These projects just happen to result in the increased power of the government and military industrial complex.
And maybe, through other collaborations forged in the Bohemian Grove, the Ronald McDonald House charity will undertake some new project with the Corporation for Public Broadcasting. Notice that conspiracy theorists tend to only see malevolence and evil. Powerful people getting together; therefore something evil must be afoot. In the episode about the workings of the conspiratorial mind, we discussed some of the evolutionary psychology theory behind why the brain is always on the lookout for threats. I want to carefully point out the way Alex concluded his note:
You are realllly good on exposing bad products, but reeeally weak on politics.
I've combed carefully through the Bohemian Grove episode and can't find anything that could reasonably be construed as political content. My guess is that Alex, upon finding himself in disagreement with me over whether the Bohemian Grove is a breeding ground for evil, allowed his brain to push him a step further and decide that I'm politically opposed to him as well — which is my reading between the lines of his charge that I'm weak on politics.
The conspiratorial mindset is very much an us-versus-them mentality. Everything is black or white, good or evil. We see this as well with Bruce's cousins and the idea that they'll be receptive to various different conspiracy theories: only the government's official story is black, while all of those who dispute it in any way are white. This is why it's possible to be equally supportive of the idea that Osama bin Laden is still alive and the idea that he died naturally years ago, even though they're fundamentally at odds. They're both good-guy theories united by their dismissal of the bad-guy official story.
This next email illustrates the way that conspiracy theories often evolve as the evidence mounts against them. This one is from Anthony in Newcastle, written before the predicted 2012 apocalypse; as the date drew closer and closer and no impending threat appeared, the predictions started to blur. Anthony wrote:
...with all the recent actions of 2011 i am beginning to consider the possibility of a change in the Worlds Government setup, as major leaders die and recessions hit many a countries. But if it is true, it will certainly not happen in 2012, i'd say between 2016 and 2036. meaning that all this NWO speak will be dubbed as untrue after nothing happens in 2012, meaning that no one will see it coming or will have a way to fight back.
The prediction softened from one of global destruction to vague "changes in world government". Major leaders die and recessions hit countries all the time: always have, always will. No conspiracy or apocalypse is needed to explain such events. Additionally, note the pushing back of the date from 2012 to "between 2016 and 2036". It's a pretty safe bet that something bad will happen in that 20-year time frame, and it's an equally safe bet that some who predicted global apocalypse in 2012 will consequently claim their prediction came true.
All that's necessary to breed these conspiracy theory effects is for some authority figure, such as NASA, to state that nothing extraordinary is expected to happen in December 2012. That becomes the evil black-hat theory; and anything — anything — no matter how divergent, becomes united under the umbrella of good white-hat theories.
This raises an obvious question: Isn't it good to simply doubt all authority? I submit that the answer is no, which will obviously delight my detractors who consider me an Illuminati shill. Rather, I direct your attention back to the 1970s and the popular slogan "Question authority". Keep your mind open to the possibility that everything is subject to correction. But the default position of "Authority is always corrupt and untruthful" is just as fallacious as "Authority is always incorrupt and pure." No random authority, or any other party, is any more or less likely to be right or wrong than are you or I.
The Mother of All ULs
Focus on the Family, a Fundamentalist Christian agency, posted a humorous essay about urban legends on their CitizenLink web site. They described the following as "a montage of several of the urban myths currently floating around cyberspace. This anonymous email is being passed around under the heading, 'It Must be True, I Saw it on the Internet.' "
I was on my way to the post office to pick up my case of free M&M's (sent to me because I forwarded an e-mail to five other people, celebrating the fact that the year 2000 is "MM" in Roman numerals), when I ran into a friend whose neighbor, a young man, was home recovering from having been served a rat in his bucket of Kentucky Fried Chicken (which is predictable, since as everyone knows, there's no actual chicken in Kentucky Fried Chicken, which is why the government made them change their name to KFC).
Anyway, one day this guy went to sleep and when he awoke he was in his bathtub and it was full of ice and he was sore all over and when he got out of the tub he realized that HIS KIDNEY HAD BEEN STOLEN. He saw a note on his mirror that said "Call 911!" but he was afraid to use his phone because it was connected to his computer, and there was a virus on his computer that would destroy his hard drive if he opened an e-mail entitled "Join the crew!"
He knew it wasn't a hoax because he himself was a computer programmer who was working on software to prevent a global disaster in which all the computers get together and distribute the $250.00 Neiman-Marcus cookie recipe under the leadership of Bill Gates. (It's true - I read it all last week in a mass e-mail from BILL GATES HIMSELF, who was also promising me a free Disney World vacation and $5,000 if I would forward the e-mail to everyone I know.)
The poor man then tried to call 911 from a pay phone to report his missing kidneys, but a voice on the line first asked him to press #90, which unwittingly gave the bandit full access to the phone line at the guy's expense. Then reaching into the coin-return slot he got jabbed with an HIV-infected needle around which was wrapped a note that said, "Welcome to the world of AIDS."
Luckily he was only a few blocks from the hospital - the one where that little boy who is dying of cancer is, the one whose last wish is for everyone in the world to send him an e-mail and the American Cancer Society has agreed to pay him a nickel for every e-mail he receives. I sent him two e-mails and one of them was a bunch of x's and o's in the shape of an angel (if you get it and forward it to more than 10 people, you will have good luck but for only 10 people you will only have OK luck and if you send it to fewer than 10 people you will have BAD LUCK FOR SEVEN YEARS).
So anyway the poor guy tried to drive himself to the hospital, but on the way he noticed another car driving without its lights on. To be helpful, he flashed his lights at him and was promptly shot as part of a gang initiation.
Send THIS to all the friends who send you their mail and you will receive 4 green M&Ms -- if you don't, the owner of Proctor and Gamble will report you to his Satanist friends and you will have more bad luck: you will get sick from the Sodium Laureth Sulfate in your shampoo, your spouse will develop a skin rash from using the antiperspirant which clogs the pores under your arms, and the U.S. government will put a tax on your e-mails forever.
I know this is all true 'cause I read it on the Internet.
Monsanto is Modern Satan
In one of the Grimm brothers’ less popular tales, Satan visits Martin Luther’s study at Wartburg Castle to stop him from translating the New Testament. Luther, long accustomed to encounters with the devil and his minions, chucks an inkwell at Satan’s head, leaving a dark stain in the study. As the Grimms tell it, tour guides show the stain to visitors until, after hundreds of years, it finally fades away.
Belief in Satan, like Luther’s inkstain, has faded over the centuries. According to Barna data, the majority of modern Americans—and modern Christians—do not believe that Satan walks among us, preferring instead to identify the great deceiver as a symbol of evil. People today would likely blame the inkwell incident on madness. And we now see madness itself, once blamed on the Evil One and his servants, as emerging from the chaotic causal nexus of biology and circumstance.
But Satan has not disappeared. We need him too much. In the ongoing struggle with inexplicable suffering, there is no greater comfort than finding a target for simple, righteous blame. And so the list of Infernal Names, now secularized, grows ever longer: Big Government, Big Business, Big Pharma, Big Food. These are complex systems, of course—too complex to serve as satisfying scapegoats. But through the alchemy of capital letters we transform them into fairy-tale caricatures of corruption and deceit, villains that help to make sense of it all.
My own Satan has always been Big Business. For many years, I nurtured a hatred of profit-driven corporations and banks, engines of greed that deploy deceitful agents to visit iniquity on an unsuspecting public. I delighted in exposés of corporate malfeasance, and I found it difficult to conceive of investment bankers or advertisers as anything more than shells of humans, animated by an unholy desire to accumulate wealth and serve their masters. I saw my own occupation—professor of religious studies—as ideally situated for objective critique. I had traded the hollow promise of great riches for the intrinsic good of seeking truth.
So it came as quite a shock when people began calling me one of Satan’s minions.
It all started with MSG. While I was living abroad in China, I found that many expatriates insisted they were highly sensitive to MSG, yet multiple double-blind, placebo-controlled trials had led allergists to conclude MSG sensitivity is largely psychosomatic. In itself, this wasn’t surprising: the mind is well-known for having powerful positive (placebo) and negative (nocebo) effects on health. What surprised me was the dogmatic fervor with which my companions denied these findings. I noticed that similar dogmatism attends most debates about diet and health, and my fascination with the quasi-religious foundations of culinary culture led me to write various articles and a book.
The accusations began almost immediately. Again and again, online commenters accused me of being paid by Big Food to spread propaganda. And while Big Food consists of quite a few multinational corporations, commenters most often blamed my corruption on Monsanto, the agricultural biotechnology giant. Monsanto’s legendary depravity goes back for decades—they made Agent Orange for the government—earning it the nickname Monsatan (see #Monsatan on Twitter), a dedicated resistance campaign, Millions Against Monsanto, and yearly protest marches in over 40 countries.
Like most people, I knew how bad Monsanto really was, despite not having thought too hard about it. (It is the 3rd most hated company in America.) I knew Monsanto sues farmers into oblivion, caused a rash of suicides in India, suppresses negative media coverage, and pays politicians and scientists to lie on its behalf.
But there was one story I didn’t believe, because I knew it wasn’t true: Monsanto hadn’t paid me. So I did what any academic or journalist would do, and started learning more about the company that supposedly had me on its payroll. In the process, I discovered that very little coverage of Monsanto included extensive discussions with representatives of the company. When it did, or when the coverage wasn’t completely negative, comment threads exploded with accusations of bribery. In one high-profile example, anti-GMO activist Vandana Shiva suggested that journalist Michael Specter and The New Yorker were Monsanto shills after Specter published a less-than-flattering profile of her activism.
David Remnick, the editor of The New Yorker, responded to Shiva’s criticism with a letter that the magazine subsequently released to the public. Remnick’s letter opens by acknowledging the futility of arguing with someone who believes in a great deceiver possessed of near-infinite power.
“I should say,” writes Remnick, “that since you have said that the entire scientific establishment has been bought and paid for by Monsanto, I fear it will be difficult to converse meaningfully about your accusation that the story contained ‘fraudulent assertions and deliberate attempts to skew reality.’”
This is why believing in Satan is so dangerous—and so tempting: If he really exists, we can protect our most deeply held beliefs by blaming any opposition on the work of a great deceiver. There is no need for dialogue. In fact, dialogue is inadvisable, because the deceiver is so powerful that any contact risks corruption. Best to avoid it entirely, lest you end up like Bill Nye, the Science Guy, who changed his mind on GMOs after visiting Monsanto.
Under most circumstances, the reasonable explanation would be that Nye was persuaded by argument and evidence. But for those who believe in Monsatan, the better—the only—explanation is that Nye was coerced, just as the best explanation for my skepticism about the dangers of grains or MSG is that the industry has paid me.
Shiva’s logic, the logic of believing in Satan, is really just the logic of conspiracy: simple, irrefutable, and empowering. Scholar of apocalypticism Michael Barkun puts it well:
[The conspiracy theorist’s view] is frightening because it magnifies the power of evil, leading in some cases to an outright dualism in which light and darkness struggle for cosmic supremacy. At the same time, however, it is reassuring, for it promises a world that is meaningful rather than arbitrary. Not only are events nonrandom, but the clear identification of evil gives the conspiracist a definable enemy against which to struggle, endowing life with purpose.
Unfortunately, reassuring narratives of good and evil are incapable of communicating complicated realities. Annie’s Naturals is owned by General Mills; Burt’s Bees is owned by Clorox. Merck has defrauded the government; it has also developed a remarkable cure for hepatitis C. Independent university researchers—and activists!—can falsify data; corporate researchers can do excellent, unbiased work.
Monsanto may well be as bad as its detractors assert (tobacco companies certainly proved worse than anyone imagined). But the current climate makes it impossible to find out. Widespread belief in Monsanto’s irredeemably evil nature discourages unbiased reporting. I know this because I experienced it myself.
For a time, I wanted to write about the company being blamed for my work. I interviewed scientists who had worked there. A complicated picture emerged, of a large (but not too large—about the size of Whole Foods) multinational that employed a wide variety of people, some of whom cared mainly about making money, and others who cared mainly about doing good science. I saw a company that litigated fiercely, but no more fiercely than Sony, Disney, or Apple, and I wondered why people—myself included—felt that seeds should be governed by different intellectual property laws than, say, tractors.
But then I realized I would never write that story. It wasn’t worth it. Why risk associating myself, even in passing, with Satan? Other journalists have told me they feel the same way. “I’m not proud of the chilling effect it has on me,” says Nathanael Johnson, who writes about food and the environment for Grist.
“There’s a real problem. If you don’t want to be a biased reporter, you have to talk to Monsanto, but just talking to them will be perceived as selling out. You can’t do the same piece that a tech reporter might do about Apple—even though Apple is the biggest corporation in the world and much more litigious.”
This is tremendously problematic, not least because it means the public conversation about important issues will be dominated by zealots. Take GMOs. On one side there are the activists, wearing gas masks and waving anti-Monsanto signs emblazoned with skulls. On the other are those who come to believe that any opposition to GMOs springs from deep-seated idiocy, the work of anti-science demons.
It would be nice if none of this rancor affected reporting, but that’s wishful thinking. Journalists, editors, and publishers care about accuracy, but they also worry about their audience. When that audience insists on believing in Satan, stories will be far more likely to feature him—even if he doesn’t exist.
Like David Remnick, I fear it will be difficult to converse meaningfully with people for whom belief in a great deceiver endows their life with purpose. But as an academic and a journalist, the only thing I fear more than damaging my reputation is an environment in which meaningful conversation is rendered impossible by fervent belief in comfortable falsehoods. I enjoy believing that investment bankers are soulless agents of Big Banking, but not as much as I enjoy believing the truth.
The solution, as I see it, is for journalists to chuck their inkwells at Satan’s head. When confronted with the Evil One, we should remember that purity of vision usually reflects ignorance, not reality. We should challenge our own presuppositions, the better to challenge those of our audience.
And we should never make the conspiracist’s mistake, and fear that contact with the enemy can only end in corruption. Occasionally these efforts will confirm the existence of pure, naked evil. But in my own experience, most often the story ends like Luther’s, and Satan simply vanishes—a happy ending for those who value knowledge, in all its chaos and messiness, over fairy tales.
Indian Woman Gives Birth to 11 Boys At Once!
An Indian Woman gave birth to eleven 11 kids baby boys few days ago. Resources have been told that few of them were test tubes babies but it really seems strange at once. It was also rumored that 6 were twins. Doctors were really surprised, shocked and glad to have successful delivery. Well it's a blessing of God who give 11 baby boys to one woman.
Wikipedia's list of multiple births records only two instances of nonuplets (nine children born at once to the same mother), none of whom survived more than a few days, so the news of a woman's giving birth to eleven healthy children at once (all of them boys) would be remarkable indeed.
The article referenced above, which claims an Indian woman delivered eleven living children at the same time, is not such news, however. It's a fabrication that was reported nowhere other than on a Zambian "news and entertainment" web site which used an out-of-context photograph from a hospital in Surat, India, taken while the staff there was celebrating the birth of eleven babies (to eleven different sets of parents) at that facility on the date of 11 November 2011 (i.e., 11/11/11).
As described by the Times of India, the couples involved had all conceived via in vitro fertilization with an eye towards giving birth on that particular date:
A city-based In Vitro Fertilization (IVF) centre will undertake operations on 11 would-be mothers to schedule the births of their babies on Friday, 11-11-11.
About 30 women had conceived through IVF nine months ago at the 21st Century Hospital in the city. Of them, 11 couples wanted the delivery of their babies on the special date.
A later version of this item included a purported photograph of the Indian woman who birthed the eleven baby boys:
This picture was taken from an unrelated news report about a Mexican woman who underwent surgery to remove a 132-lb. tumor from her abdomen.
Spinach
In 2012, network scientist and data theorist Samuel Arbesman published a disturbing thesis: What we think of as established knowledge decays over time. According to his book “The Half-Life of Facts,” certain kinds of propositions that may seem bulletproof today will be forgotten by next Tuesday; one’s reality can end up out of date. Take, for example, the story of Popeye and his spinach.
Popeye loved his leafy greens and used them to obtain his super strength, Arbesman’s book explained, because the cartoon’s creators knew that spinach has a lot of iron. Indeed, the character would be a major evangelist for spinach in the 1930s, and it’s said he helped increase the green’s consumption in the U.S. by one-third. But this “fact” about the iron content of spinach was already on the verge of being obsolete, Arbesman said: In 1937, scientists realized that the original measurement of the iron in 100 grams of spinach — 35 milligrams — was off by a factor of 10. That’s because a German chemist named Erich von Wolff had misplaced a decimal point in his notebook back in 1870, and the goof persisted in the literature for more than half a century.
By the time nutritionists caught up with this mistake, the damage had been done. The spinach-iron myth stuck around in spite of new and better knowledge, wrote Arbesman, because “it’s a lot easier to spread the first thing you find, or the fact that sounds correct, than to delve deeply into the literature in search of the correct fact.”
Arbesman was not the first to tell the cautionary tale of the missing decimal point. The same parable of sloppy science, and its dire implications, appeared in a book called “Follies and Fallacies in Medicine,” a classic work of evidence-based skepticism first published in 1989. It also appeared in a volume of “Magnificent Mistakes in Mathematics,” a guide to “The Practice of Statistics in the Life Sciences” and an article in an academic journal called “The Consequence of Errors.” And that’s just to name a few.
All these tellings and retellings miss one important fact: The story of the spinach myth is itself apocryphal. It’s true that spinach isn’t really all that useful as a source of iron, and it’s true that people used to think it was. But all the rest is false: No one moved a decimal point in 1870; no mistake in data entry spurred Popeye to devote himself to spinach; no misguided rules of eating were implanted by the sailor strip. The story of the decimal point manages to recapitulate the very error that it means to highlight: a fake fact, but repeated so often (and with such sanctimony) that it takes on the sheen of truth.
In that sense, the story of the lost decimal point represents a special type of viral anecdote or urban legend, one that finds its willing hosts among the doubters, not the credulous. It’s a rumor passed around by skeptics — a myth about myth-busting. Like other Russian dolls of distorted facts, it shows us that, sometimes, the harder that we try to be clear-headed, the deeper we are drawn into the fog.
No one knows this lesson better than Mike Sutton. He must be the world’s leading meta-skeptic: a 56-year-old master sleuth who first identified the myth about the spinach myth in 2010 and has since been working to debunk what he sees as other false debunkings. Sutton, a criminology professor at Nottingham Trent University, started his career of doubting very young: He remembers being told when he was still a boy that all his favorite rock stars on BBC’s “Top of the Pops” were lip-synching and that some weren’t even playing their guitars. Soon he began to wonder at the depths of this deception. Could the members of Led Zeppelin be in on this conspiracy? Was Jimmy Page a lie? Since then, Sutton told me via email, “I have always been concerned with establishing the veracity of what is presented as true, and what is something else.”
As a law student, Sutton was drawn to stories like that of Popeye and the inflated iron count in spinach, which to him demonstrated both the perils of “accepted knowledge” and the importance of maintaining data quality. He was so enamored of the story, in fact, that he meant to put it in an academic paper. But in digging for the story’s source, he began to wonder if it was true. “It drew me in like a problem-solving ferret to a rabbit hole,” he said.
Soon he’d gone through every single Popeye strip ever drawn by its creator, E.C. Segar, and found that certain aspects of the classic story were clearly false. Popeye first ate spinach for his super power in 1931, Sutton found, and in the summer of 1932 the strip offered this iron-free explanation: “Spinach is full of vitamin ‘A,’” Popeye said, “an’ tha’s what makes hoomans strong an’ helty.” Sutton also gathered data on spinach production from the U.S. Department of Agriculture and learned that it was on the rise before Segar’s sailor-man ever started eating it.
What about the fabled decimal point? According to Sutton’s research, a German chemist did overestimate the quantity of iron in spinach, but the mistake arose from faulty methods, not from poor transcription of the data. By the 1890s, a different German researcher had concluded that the earlier estimate was many times too high. Subsequent analyses arrived at something closer to the correct, still substantial value — now estimated to be 2.71 milligrams of iron per 100 grams of raw spinach, according to the USDA. By chance, the new figure was indeed about one-tenth of the original, but the difference stemmed not from misplaced punctuation but from the switch to better methodology. In any case, it wasn’t long before Columbia University analytical chemist Henry Clapp Sherman laid out the problems with the original result. By the 1930s, Sutton argues, researchers knew the true amount of iron in spinach, but they also understood that not all of it could be absorbed by the human body.
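The coincidence is easy to check with nothing more than the figures already quoted above: the original 35-milligram claim and the modern USDA value of 2.71 milligrams per 100 grams of raw spinach.

\[
\frac{35\ \text{mg}}{2.71\ \text{mg}} \approx 13
\]

That is roughly a factor of ten, which is exactly the gap a misread decimal point would produce, even though, as Sutton found, no decimal point was ever misplaced.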
The decimal-point story only came about much later. According to Sutton’s research, it seems to have been invented by the nutritionist and self-styled myth-buster Arnold Bender, who floated the idea with some uncertainty in a 1972 lecture. Then in 1981, a doctor named Terence Hamblin wrote up a version of the story without citation for a whimsical, holiday-time column in the British Medical Journal. The Hamblin article, unscholarly and unsourced, would become the ultimate authority for all the citations that followed. (Hamblin graciously acknowledged his mistake after Sutton published his research, as did Arbesman.)
In 2014, a Norwegian anthropologist named Ole Bjorn Rekdal published an examination of how the decimal-point myth had propagated through the academic literature. He found that bad citations were the vector. Instead of looking for its source, those who told the story merely plagiarized a solid-sounding reference: “(Hamblin, BMJ, 1981).” Or they cited someone in between — someone who, in turn, had cited Hamblin. This loose behavior, Rekdal wrote, made the transposed decimal point into something like an “academic urban legend,” its nested sourcing more or less equivalent to the familiar “friend of a friend” of schoolyard mythology.
Emerging from the rabbit hole, Sutton began to puzzle over what he’d found. This wasn’t just any sort of myth, he decided, but something he would term a “supermyth”: A story concocted by respected scholars and then credulously disseminated in order to promote skeptical thinking and “to help us overcome our tendency towards credulous bias.” The convolution of this scenario inspired him to look for more examples. “I’m rather a sucker for such complexity,” he told me.
Complicated and ironic tales of poor citation “help draw attention to a deadly serious, but somewhat boring topic,” Rekdal told me. They’re grabby, and they’re entertaining. But I suspect they’re more than merely that: Perhaps the ironies themselves can help explain the propagation of the errors.
It seems plausible to me, at least, that the tellers of these tales are getting blinkered by their own feelings of superiority — that the mere act of busting myths makes them more susceptible to spreading them. It lowers their defenses, in the same way that the act of remembering sometimes seems to make us more likely to forget. Could it be that the more credulous we become, the more convinced we are of our own debunker bona fides? Does skepticism self-destruct?
Sutton told me over email that he, too, worries that contrarianism can run amok, citing conspiracy theorists and anti-vaxxers as examples of those who “refuse to accept the weight of argument” and suffer the result. He also noted the “paradox” by which a skeptic’s obsessive devotion to his research — and to proving others wrong — can “take a great personal toll.” A person can get lost, he suggested, in the subterranean “Wonderland of myths and fallacies.”
In the last few years, Sutton has himself embarked on another journey to the depths, this one far more treacherous than the ones he’s made before. The stakes were low when he was hunting something trivial, the supermyth of Popeye’s spinach; now Sutton has been digging in more sacred ground: the legacy of the great scientific hero and champion of the skeptics, Charles Darwin. In 2014, after spending a year working 18-hour days, seven days a week, Sutton published his most extensive work to date, a 600-page broadside on a cherished story of discovery. He called it “Nullius in Verba: Darwin’s Greatest Secret.”
Sutton’s allegations are explosive. He claims to have found irrefutable proof that neither Darwin nor Alfred Russel Wallace deserves the credit for the theory of natural selection, but rather that they stole the idea — consciously or not — from a wealthy Scotsman and forest-management expert named Patrick Matthew. “I think both Darwin and Wallace were at the very least sloppy,” he told me. Elsewhere he’s been somewhat less diplomatic: “In my opinion Charles Darwin committed the greatest known science fraud in history by plagiarizing Matthew’s” hypothesis, he told the Telegraph. “Let’s face the painful facts,” Sutton also wrote. “Darwin was a liar. Plain and simple.”
Some context: The Patrick Matthew story isn’t new. Matthew produced a volume in the early 1830s, “On Naval Timber and Arboriculture,” that indeed contained an outline of the famous theory in a slim appendix. In a contemporary review, the noted naturalist John Loudon seemed ill-prepared to accept the forward-thinking theory. He called it a “puzzling” account of the “origin of species and varieties” that may or may not be original. In 1860, several months after publication of “On the Origin of Species,” Matthew would surface to complain that Darwin — now quite famous for what was described as a discovery born of “20 years’ investigation and reflection” — had stolen his ideas.
Darwin, in reply, conceded that “Mr. Matthew has anticipated by many years the explanation which I have offered of the origin of species, under the name of natural selection.” But then he added, “I think that no one will feel surprised that neither I, nor apparently any other naturalist, had heard of Mr. Matthew’s views.”
That statement, suggesting that Matthew’s theory was ignored — and hinting that its importance may not even have been quite understood by Matthew himself — has gone unchallenged, Sutton says. It has, in fact, become a supermyth, cited to explain that even big ideas amount to nothing when they aren’t framed by proper genius.
Sutton thinks that story has it wrong, that natural selection wasn’t an idea in need of a “great man” to propagate it. After all his months of research, Sutton says he found clear evidence that Matthew’s work did not go unread. No fewer than seven naturalists cited the book, including three in what Sutton calls Darwin’s “inner circle.” He also claims to have discovered particular turns of phrase — “Matthewisms” — that recur suspiciously in Darwin’s writing.
In light of these discoveries, Sutton considers the case all but closed. He’s challenged Darwin scholars to debates, picked fights with famous skeptics such as Michael Shermer and Richard Dawkins, and even written letters to the Royal Society, demanding that Matthew be given priority over Darwin.
But if his paper on the spinach myth convinced everyone who read it — even winning an apology from Terence Hamblin, one of the myth’s major sources — the work on Darwin barely registered. Many scholars ignored it altogether. A few, such as Michael Weale of King’s College, simply found it unconvincing. Weale, who has written his own book on Patrick Matthew, argued that Sutton’s evidence was somewhat weak and circumstantial. “There is no ‘smoking gun’ here,” he wrote, pointing out that at one point even Matthew admitted that he’d done little to spread his theory of natural selection. “For more than thirty years,” Matthew wrote in 1862, he “never, either by the press or in private conversation, alluded to the original ideas … knowing that the age was not suited for such.”
When Sutton is faced with the implication that he’s taken his debunking too far — that he’s tipped from skepticism to crankery — he lashes out. “The findings are so enormous that people refuse to take them in,” he told me via email. “The enormity of what has, in actual fact, been newly discovered is too great for people to comprehend. Too big to face. Too great to care to come to terms with — so surely it can’t be true. Only, it’s not a dream. It is true.” In effect, he suggested, he’s been confronted with a classic version of the “Semmelweis reflex,” whereby dangerous, new ideas are rejected out of hand.
Could Sutton be a modern-day version of Ignaz Semmelweis, the Hungarian physician who noticed in the 1840s that doctors were themselves the source of childbed fever in his hospital’s obstetric ward? Semmelweis had reduced disease mortality by a factor of 10 — a fully displaced decimal point — simply by having doctors wash their hands in a solution of chlorinated lime. But according to the famous tale, his innovations were too radical for the time. Ignored and ridiculed for his outlandish thinking, Semmelweis eventually went insane and died in an asylum. Arbesman, author of “The Half-Life of Facts,” has written about the moral of this story too. “Even if we are confronted with facts that should cause us to update our understanding of the way the world works,” he wrote, “we often neglect to do so.”
Of course, there’s always one more twist: Sutton doesn’t believe this story about Semmelweis. That’s another myth, he says — another tall tale, favored by academics, that ironically demonstrates the very point that it pretends to make. Citing the work of Sherwin Nuland, Sutton argues that Semmelweis didn’t go mad from being ostracized, and further that other physicians had already recommended hand-washing in chlorinated lime. The myth of Semmelweis, says Sutton, may have originated in the late 19th century, when a “massive nationally funded Hungarian public relations machine” placed biased articles into the scientific literature. Semmelweis scholar Kay Codell Carter concurs, at least insofar as Semmelweis was not, in fact, ignored by the medical establishment: From 1863 through 1883, he was cited dozens of times, Carter writes, “more frequently than almost anyone else.”
Yet despite all this complicating evidence, scholars still tell the simple version of the Semmelweis story and use it as an example of how other people — never them, of course — tend to reject information that conflicts with their beliefs. That is to say, the scholars reject conflicting information about Semmelweis, evincing the Semmelweis reflex, even as they tell the story of that reflex. It’s a classic supermyth!
And so it goes, a whirligig of irony spinning around and around, down into the depths. Is there any way to escape this endless, maddening recursion? How might a skeptic keep his sanity? I had to know what Sutton thought. “I think the solution is to stay out of rabbit holes,” he told me. Then he added, “Which is not particularly helpful advice.”
Snopes
The most scenic way to find truth on the internet is to drive north of Los Angeles on the Pacific Coast Highway, blue ocean foaming to the left, sunlit hills cresting to the right, until Malibu Canyon Road, where you take a sharp right and wind for a few miles through the oak-lined knolls and dips of Calabasas, past gated estates that are home to the likes of Justin Bieber, Kim Kardashian and Mel Gibson, and keep going until you reach an odd-looking wood-and-brick house with a US flag on the porch: the home of David Mikkelson.
It feels like a good jumping off point for a hike, or a pony trek. But really it is the ideal place to explore fibs like whether Hillary Clinton stole $200,000 in White House furnishings, or whether Donald Trump called Republicans the “dumbest group of voters”, or whether Black Lives Matter protesters chanted for dead cops, or whether Nicolas Cage died in a motorcycle accident, or whether chewing gum takes seven years to pass through the digestive system, or whether hair grows back thicker after being shaved, or, if you really, really must know, whether Richard Gere had an emergency “gerbilectomy” at Cedars-Sinai hospital.
Mikkelson owns and runs Snopes.com, a hugely popular fact-checking site which debunks urban legends, old wives’ tales, fake news, shoddy journalism and political spin. It started as a hobby in the internet’s Pleistocene epoch two decades ago and evolved into a professional site that millions now rely on as a lie-detector. Every day its team of writers and editors interrogate claims ricocheting around the internet to determine if they are false, true or somewhere in the middle – a cleaning of the Augean stables for the digital era.
“There are more and more people piling on to the internet and the number of entities pumping out material keeps growing,” says Mikkelson, who turns out to be a wry, soft-spoken sleuth. “I’m not sure I’d call it a post-truth age but … there’s been an opening of the sluice-gate and everything is pouring through. The bilge keeps coming faster than you can pump.”
In the midst of terror attacks, policing protests, Brexit, and Trump’s run for president, the need for accurate information has seldom felt more urgent – or forlorn. Here we are with the freest access to knowledge in history, troves of data and facts at our fingertips and HG Wells’s dream of a world brain a reality, yet a tide of truthiness, propaganda and nonsense surges ever higher. Bogus claims about Barack Obama’s citizenship, say, or Britain’s payments to the European Union, are exposed, yet the claim-pedlars breeze on, unimpeded – they win.
“We need such sites more than ever,” says Jack Pitney, a politics professor at Claremont McKenna College, California, who uses Snopes in his blog. “In Trump, we have a major presidential candidate who doesn’t just parse words, conceal facts, or shade the truth, but constantly tells big blatant lies.”
In person Mikkelson, 56, is boyish, with a toothy smile and shy demeanour. On the day we meet his site sieves a typical stew of online stories: Trump sent $10,000 to a bus driver who saved a woman from jumping off a bridge? True. Chinese restaurants in Pretoria, South Africa have been authorised to sell dog meat? False. Evangelist Franklin Graham said Christians faced death camps if they didn’t support Trump? False. Transgender students in Wisconsin must wear identifying wristbands? Undetermined.
Several times during the interview at his home, and later over lunch, Mikkelson consults his tablet because news, or what masquerades as news, is relentless. Such is the public hunger for reliable information that he is treated, on the rare occasions he is recognised in public, as a celebrity – the patron saint of fact-checking. It leaves him chuffed and a little perplexed, because his inbox seethes with angry emails accusing him of bias. “I get lots of negative emails but the people I meet are always friendly.”
The existence of Snopes and similar sites like FactCheck.Org, TruthOrFiction.com and PolitiFact.com raises several questions: who produces the bilge? Why do people share it? And how much should we trust those who blow the whistle?
Mikkelson’s home, tucked in the San Fernando valley hills, is an incongruous base to referee the world’s brawling, squalling system of interconnected computer networks. The phone signal is feeble and the internet connection sometimes drops (Mikkelson had to use nearby hotels’ Wi-Fi during the 2008 presidential election). A previous owner tacked on additional rooms seemingly at random, giving the impression of a mad, elongated cottage with an internal maze.
Mikkelson works in a small kitchenette-cum-office that he shares with his cat. His half a dozen writers and editors are scattered across the US, with another dozen IT workers in San Diego, all funded through advertising revenue. Bookcases line the property: there are tomes on Hitler, Disney, Titanic, J Edgar Hoover, proverbs, quotations, fables, grammar, the Beach Boys, top 40 pop hits, baseball, Charlie Chaplin – any and every topic. In the living room, stacked floor-to-ceiling, are boardgames, hundreds of them: Africana, Parfum, Pirate’s Cove, Whitechapel, Tzolk’in, Goa, Hacienda.
Mikkelson’s restless mind stems from a challenging childhood. His mother was a hoarder and his father moved out, leaving young David to seek solace in reading and obsessively following the LA Dodgers. “I was trying to find ways to impose order in response to home difficulties. I was always trying to organise and categorise.” A computer science degree led to a job with Digital Equipment Corporation and embrace of the nascent internet, which he and his wife Barbara – who is no longer involved with the site – used to research a passion for folklore.
He adopted Snopes – the name of a venal family in a trilogy of William Faulkner novels – as a nom-de-net. The couple transferred the name to the site they started in 1994 to explore myths and urban legends – UFOs, fake moon landings – for a small, devoted following. When Mikkelson was laid off from his job he used the redundancy cash, and extra time, to upgrade the site.
Then al-Qaida destroyed the World Trade Center. “There was a huge bulge after 9/11.” The ensuing convulsions tilted Snopes into politics, fact-checking a nation’s increasingly shrill, bitter partisanship, and in the process winning legions of fans known as Snopesters.
Mikkelson says he is not political, just sceptical. “I don’t think I put a lot of trust in politicians. The candidate I would vote for is the one that would stand above the fray.” He’s still waiting.
Mainstream media, for all its faults, tended to not “crank out any old nonsense”, he says, but the social media explosion diffused gatekeepers by allowing anyone to self-publish and upload content. There is, for example, a cottage industry of Facebook posts detailing supposed attempts by human traffickers to snatch victims from shopping malls, cars, job interviews, firework displays and ice-cream trucks. Snopes tends to classify them as false or unproven.
Some social media reports are faster and slicker than traditional news outlets, which often react to rather than report news, amplifying misinformation. “If somebody posts something controversial on Facebook and 20 minutes later it’s a headline in the Daily Mail or New York Post it only makes things worse. So we need to create new types of gatekeepers.”
The convergence of a wild US election, terror attacks and police shootings has unleashed a flood of misinformation but also sharpened appetite for verification, according to Eugene Kiely, director of FactCheck.Org. “There is evidence that the demand to hold politicians accountable for making false statements has never been higher. Our website set a record last year for web traffic, and this year our page views are up 146%. This has been true for the other fact-checking operations, as well.” Sharing investigations with outlets such as USA Today and MSN.com further boosts the audience, says Kiely. Trump and Hillary Clinton’s dismal honesty ratings, he says, show scrutiny is working. “I think that’s evidence that the public is holding candidates accountable.”
Mikkelson sighs at perennial rumours such as the US government planning internment camps or gun confiscations, or signing away national parks to the United Nations. “It gets tiresome having to do the same thing over and over. Most of the stuff we debunk is so distorted from its source it’s hard to think it’s done accidentally.” Even so, he is philosophical. The bilge is contaminating, not destroying, public discourse. He shudders at Trump but shrugs off fears of fascism. “The president is not really that powerful.” Toiling in the bowels of online muck has no discernible effect on Mikkelson’s equanimity. He does not touch alcohol or coffee. The closest he comes to profanity is “holy heck”.
Snopes regularly interrogates Trump’s lies and boasts but also the smears against him. The “leaked” photo of a bald, pasty-faced Donald? False. The audio clip calling Abraham Lincoln a dishonest traitor? A fabricated compilation of sound snippets from a campaign speech. The 1998 People interview quote calling Republicans the “dumbest group of voters”? False.
Mikkelson lists four principal misinformation sources:
1 Legitimate satire sites such as the Onion, which dupe the truly credulous, requiring occasional intervention. “No, SeaWorld isn’t drowning live elephants as part of a new attraction.” “Are the parents of teen Caitlin Teagart going to euthanise her because she is only capable of texting and rolling her eyes? False.”
2 Legitimate news organisations that regurgitate stories without checking, such as the $200 Bill Clinton haircut on Air Force One which supposedly snarled air traffic at LAX in 1993.
3 Political sites that distort, such as Breitbart.com twisting an Obama quote about the “contributions of Muslim Americans to building the very fabric of our nation” into the headline “Obama: Muslims Built ‘The Very Fabric of Our Nation’.”
4 Fake news sites fabricating click-bait stories. Such as: “Ted Cruz sent shockwaves through the Republican Party today when he announced he would endorse Donald Trump for President, but only if the GOP nominee would publicly support a ban on masturbation, (saying) without ‘swift action … the country was doomed to slide down a slippery slope of debauchery and self-satisfaction’.” Snopes sourced this to a site that mimicked ABC News to lure clicks to an underlying malware site, generating advertising revenue. It named and shamed the worst offenders earlier this year.
Such sites have targeted Mikkelson himself. Google his name and you swiftly learn the FBI busted him for involvement in a pitbull fighting ring. News 4 KTLA reported this with photos of a mauled, bloodied dog, and of Mikkelson being arrested. All fake. News 4 KTLA does not exist. The arrest photo was a manipulation, with Mikkelson’s head pasted onto the body of Viktor Bout, the arms dealer. Fake sites resent Snopes, he says, partly because Facebook used it as a metric to limit the reach of fake news.
Misinformation pedlars appear to be shy woodland animals. Of half a dozen individuals and sites contacted for this article, none replied.
Jenna LeFever, an Arizona-based activist and writer, is not accused of wrongdoing but did reply. Snopes criticised an article, not written by her, on a site she works for, Winning Democrats. “Regardless if Snopes or any other fact-checking site wants to ding my site for not telling the absolute truth, they’re still providing this country with a great service, no matter how obsolete the truth seems to have become.” But she thinks Snopes can go overboard disproving satirical claims. “I just think it’s sad that we live in an age where people can’t for the life of them tell if something is satire or not.”
The internet is the great enabler but what drives misinformation is human nature, and that is hardly new. “A lie gets halfway around the world before the truth has a chance to get its pants on,” said Winston Churchill. Except he didn’t. It was allegedly Mark Twain. And the evidence for that is patchy. On this, as so much else, truth is still reaching for its pants.
Rail Gauge
I'd like to open this episode with a warning. The Internet is full of little meme graphics festooned with text that often makes some historical claim, and my caution to you is that — hold onto your hats — they may not all be 100% factually accurate. One of these is the perennial claim, first put forward in a Popular Mechanics article in 1905, that the standard gauge of the world's railroads goes all the way back to the ruts in Roman roads made by war chariots. It seems logical but also raises a few red flags. Honestly, when I came into this, I had no idea whether I was going to find that it was completely true or completely false. But this is Skeptoid, so we are legally obligated to get to the truth.
Here is the basic story. Widely available all over the Internet is some variation on the following narrative:
Did you know that the US Standard railroad gauge (distance between the rails) is 4 feet, 8 1/2 inches?
Why? Because that's the way they built them in England, and the US railroads were built by English expatriates.
Why? Because the first railway lines were built by the same people who built the pre-railroad tramways, and that's the gauge they used.
Why? Because the people who built the tramways used the same jigs and tools that they used for building wagons, which used that wheel spacing.
Why? Because, if they tried to use any other spacing the wagon wheels would break on some of the old, long distance roads. Because that's the spacing of the old wheel ruts.
The first long distance roads in Europe were built by Imperial Rome for the benefit of their legions. The Roman roads have been used ever since.
The original ruts, which everyone else had to match for fear of destroying their wagons, were first made by the wheels of Roman war chariots. Since the chariots were made for or by Imperial Rome they were all alike in the matter of wheel spacing.
Thus, we have the answer to the original question. The United States standard railroad gauge of 4 feet, 8 1/2 inches derives from the original specification for an Imperial Roman army war chariot.
So that's a lot of factual claims. Sounds pretty airtight, doesn't it? Well, let's go through it point by point, in reverse order.
The first (and particularly painful) point of failure for the legend is that there was really no such thing as a Roman war chariot. Chariots in Rome were used for racing and in parades, not in warfare or for daily transportation. Wars were not fought on roads or flat racetrack surfaces, but on natural terrain, where chariots were not suitable. Thus, there was no "official specification" for "Imperial Roman war chariots." At least one book claims Julius Caesar himself issued an edict to standardize this, but no evidence exists to support that. In addition, it's improbable, as long before the time of Caesar, cavalry horses had been bred large enough that the military would have had no need for chariots. Neither specialized racing chariots nor fancy parade chariots would have been in daily use on the rutted Roman roads, so then what else could have caused those ruts?
There are plenty of academic papers written on the use of streets in Roman cities, and none of them mention chariots. Vehicle traffic consisted of hand-drawn two-wheeled carts; two and four-wheeled wagons pulled by one or two horses, donkeys, or oxen; and even sledges on runners. One paper which looked at vehicles throughout the Western Mediterranean region found that hand and donkey cart track widths ranged from 115 to 120cm, single-horse carts from 135 to 140cm, and pair-drawn carts from 145 to 170cm. Four feet, 8.5 inches comes out to 143.5cm, so most Roman vehicles had smaller axle widths than a standard gauge railway, but some were bigger, a few quite a bit bigger.
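That conversion is easy to verify, assuming nothing beyond the standard 2.54 centimeters to the inch:

\[
4\ \text{ft}\ 8.5\ \text{in} = 56.5\ \text{in}, \qquad 56.5 \times 2.54\ \text{cm} \approx 143.5\ \text{cm}
\]

which places standard gauge above the hand-cart and single-horse ranges, and just below the narrowest of the pair-drawn carts.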
The rut systems found in Roman roads tell a similar story. It is simply a false urban legend that track widths of Roman ruts are all 4 feet, 8.5 inches (go there and measure them yourself). Many papers have been published describing the ruts in Roman roads. In some regions, ruts made by vehicles would be repaired by repaving the roads; but in many other places, ruts were deliberately cut and still bear visible chisel marks. In most cases, the width of artificial ruts was designed for a particular vehicle that was frequently used there. In cities this would often be hand carts. On longer roads the tracks would be wider to accommodate larger freight vehicles. I wish I could give common or usual measurements, but in the many articles I read, the track width of artificial ruts was all over the place. Most were smaller than standard gauge — in the 120-135cm range for hand and donkey carts — and some were bigger. What definitely cannot be claimed is that there was a standard track size. It also can't be claimed that the ruts were made by or for chariots, and certainly not war chariots.
In short, everything about 4 feet, 8.5 inches coming from Roman war chariots is a complete, utter, and total non-starter.
So to continue evaluating this urban legend, we can skip on down a bit. The next thing it claims is that later European wagons used this same width because if they were going to drive in the Roman ruts with a non-conforming axle width, it would have "broken their wheels". It's true that if you needed a wagon to travel a particular route, and that route is on an old Roman road with artificial ruts, you'd do better to match that width to get a smoother, more efficient ride. But what we've already learned is that those ruts were not all the same size, and also that rutted sections of road were fragmented and inconsistent. Wagons built to any given specification would have fit some of those ruts and not others.
Accordingly, we have empirical evidence that wagons were not standardized. One book on the history of transport included a study of English wagons from the era when tramways were first starting to be built. It found substantial regional variation, between extremes of about 130 and 190cm. No standard width.
The next step in the urban legend — that "the people who built the tramways used the same jigs and tools that they used for building wagons" — doesn't make any sense at all. Casting iron axles and wheels would not use a single tool or jig involved with the manufacture of wooden wagon wheels. Literally, not even the slightest bit of crossover. This is a horrible logic fail by whoever originally wrote this urban legend.
However, what we should expect to see is that, since the first tramways were drawn by two to six horses side-by-side, such vehicles would have been roughly the same width as Roman two-horse carts. Draft horses in one era were about the same size as draft horses in the other, and that's not because of any mythical Imperial edicts or official specifications. It's just about how big horses are. So the track widths are in the same neighborhood — but again, 4 feet, 8.5 inches was only one example among many different gauges.
When the English first started building railed tramways, one of the more common of the many competing gauges was five feet, measured from the outside edge of the track. Five feet is a good round number, easily remembered, and easily shared with contractors and manufacturers. Track was generally two inches wide, which gave an inside gauge of 4' 8". Track gauge is measured from the inside edge because that's where the vertical flange on the wheel is, which is what holds the axles atop the two parallel tracks.
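The arithmetic behind that inside measurement is simple, taking the episode's figure of rails about two inches wide:

\[
5\ \text{ft} = 60\ \text{in}, \qquad 60\ \text{in} - (2 \times 2\ \text{in}) = 56\ \text{in} = 4\ \text{ft}\ 8\ \text{in}
\]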
It is fair to point out that Romans also used a unit of length called the foot, and their foot was only a tiny fraction shorter than the modern Imperial foot. It's not inconceivable that at some point, some Roman influencer could have said "Hey, let's standardize on five feet, because that's a good round number everyone can remember." It's a nice idea; however, the evidence shows that while that gauge can indeed be found in some Roman ruts, it's in the minority. Recall that Roman vehicles and Roman ruts were all over the place. They demonstrably did not standardize.
But standardizing is something that did have to happen — and this need arose earliest and most urgently during the American Civil War. It came at a time of turmoil in the American rail system. During the 19th century, tramways gave way fast to steam locomotives. Gauges were all over the place, based on many factors. Mines needed tiny locomotives to fit in cramped tunnels. The great English engineer Isambard Kingdom Brunel used a massive 7-foot gauge for maximum stability for the high-speed rail service he anticipated. Narrow gauges saved a lot of weight and made the tracks easier to build and maintain. Wider gauges were better able to plow through snow. Narrower gauges could handle turns of a smaller radius. Regional railroads, such as those on different sides of rivers or mountain ranges, saw little benefit in consolidating with one another.
Once the war broke out, efficient nationwide transportation of troops and materials was literally a national emergency. The greatest problem was that of breakbulk cargo, which had to be manually unloaded from a train of one gauge and manually loaded onto a train of another. This transfer of cargo was, for a time, among the most serious emergencies the United States faced. Abraham Lincoln proposed a national standard of 5 feet, but the priority had to be to choose a standard that could be implemented rapidly. It turned out that there was one gauge, most used in the northeastern lines, that was just common enough that it emerged the winner. It was called Stephenson gauge, after English railroad engineer George Stephenson. He had been perhaps the principal inventor of the steam locomotive. No record survives that tells us exactly how Stephenson came up with his gauge, but there are a couple of perfectly reasonable theories. One is that he was simply most familiar with that gauge as he'd worked with it before on tramways; another hearkens back to "five feet is a good round number" minus two inches of width for each track, and plus a half inch to allow for a quarter inch of play on each wheel. So when the Pacific Railroad Act of 1863 was passed, it dictated that the transcontinental railroad should be built to a gauge of 4 feet, 8.5 inches, as would all other lines needed for the war effort.
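Under the second of those theories, the extra half inch falls out of the same arithmetic, with a quarter inch of play allowed at each wheel:

\[
56\ \text{in} + \left(2 \times \tfrac{1}{4}\ \text{in}\right) = 56.5\ \text{in} = 4\ \text{ft}\ 8\tfrac{1}{2}\ \text{in}
\]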
As far as the urban legend's final point goes — that US railroads were built by English expatriates — that's demonstrably false as well. The first US locomotives were indeed purchased from England, and some of them used the Stephenson gauge. A lot of other railroad equipment was purchased from England, because they'd started building them first. But by 20 years before the Civil War, American industry was producing far more locomotives than all of Europe and was exporting them. Certainly there were some English expatriates employed by the American lines, but it's hardly true to say that US railroads were either built or designed by them.
Now, Skeptoid is hardly the first show to tackle the historicity of this particular urban legend. Snopes, the Straight Dope, and others have done it before. Generally they all conclude with calling the legend partially true, because the gauges are indeed very similar, and the average width of two horses played a role at several stages in the evolution of tracks for vehicles. I regret that I cannot concur. The influence of horse width at different times in history is very different from the definitely-false claim of an official Roman specification for war chariots directly informing the gauge for 55% of the world's railroads.
People love oversimplified explanations that make it possible to comprehend a complicated subject — and it is a complicated subject; the book American Narrow Gauge Railroads, one of my primary sources for this, is almost 600 large-leaf pages of dense history. It's one reason conspiracy theories are popular: they replace complexity with simplicity. Whenever you hear such an explanation for anything, you have very good reason to be skeptical.
Fish Falling From Sky
Today we're going to run in panic from a meteorological downpour only Bartholomew and his Oobleck could appreciate: Storms of frogs and fish falling from the sky! For at least 200 years, newspapers and books have published accounts of people being pelted by huge numbers of frogs and fish coming down during rainstorms, or even sometimes out of a clear blue sky.
In 1901, a rainstorm in Minneapolis, MN produced frogs to a depth of several inches, so that travel was said to be impossible. Fish famously fell from the sky in Singapore in 1861, and again over a century later in Ipswich, Australia in 1989. Residents in southern Greece awoke one morning in 1981 to find that a shower of frogs had blanketed their village. Golfers in Bournemouth, England found herring all over their course after a light shower in 1948. In 1901, a huge rainstorm doused Tiller's Ferry, SC, and covered it with catfish as well as water, to the point that fish were found swimming between the rows of a cotton field. In 1953, Leicester, MA was hit with a downpour of frogs and toads of all sorts, even choking the rain gutters on the roofs of houses. The stories go on and on: More frogs in Missouri in 1873 and Sheffield, England in 1995, and more fish in Alabama in 1956.
How could such things happen? Obviously frogs and fish are heavier than air and can't evaporate up into clouds, nor can they suspend themselves up there to breed. Almost every printed version of these tales offers a single explanation: That a waterspout somewhere sucks the animals out of some water and lifts them up into the clouds, from where they later fall back to land. This explanation is so ubiquitous that even the Encyclopedia Britannica presents it as the only hypothesis on offer.
I've always had problems with the waterspout story, and the more you look into it, the poorer an explanation it turns out to be. Waterspouts come in two varieties, just like tornadoes on land. The first and most common is a non-tornadic waterspout, which is a local fair-weather phenomenon, akin to a dust devil you might see over farmland. They have little or no effect on the surface of the water. The second, much rarer type is the full-blown supercell tornadic waterspout. The decreased air pressure inside a tornadic waterspout can actually raise the water level by as much as half a meter, but water itself is not sucked up inside. The visible column of a waterspout is made up of condensation, not water drawn up from the surface. The high winds will kick up a lot of spray from wavelets on the surface, but if you look at pictures of waterspouts, you'll see that this spray is thrown outward, not sucked up inward. Just below the surface of the water, things are undisturbed. Waterspouts simply do not have any mechanism by which they might reach down into the water, collect objects, and then transport them upward into the sky.
If you've watched video of destructive tornadoes on land, you've seen this same effect. When a tornado rips through a building or a town, you'll see debris kicked up into the air, often quite high, from where it takes a ballistic trajectory outward. Never do objects ascend the inner column, because there is simply no mechanism inside for doing that. It's not an elevator; it's a destructive force scraping stuff off the surface and throwing it upward and outward. Stuff might take a lap or two around the column while it's being snatched up and tossed. Debris goes everywhere; groups of related objects are never picked up from one place, kept together, and neatly deposited somewhere else. Certainly there is no mechanism that might carry a group of objects way up into the clouds, transport them laterally great distances through fair weather while somehow counteracting gravity, and then suddenly release them in a single tight group to drop to the ground.
In not one of the several dozen cases I read was there ever a report of a tornado or waterspout in the vicinity, or indeed anywhere at all, no matter how far away. I conclude that waterspouts have no connection, either hypothetical or evidentiary, to the phenomenon of frogs, fish, or any other animals falling out of the sky. There's a much better explanation that's well known to zoologists but, for reasons I can't fathom, is almost never put forward to explain these stories.
The thing is, we've got these stories repeated over and over again, and all of them, or almost all of them, are completely credulous. Nearly every author uncritically repeats the story, often giving the waterspout theory as a possible explanation. Almost never will you hear someone ask the question "Wait a minute; did this actually happen the way witnesses thought?" These authors don't know one of the fundamentals of critical examination: Before you try to explain a strange phenomenon, first see if the strange event ever really happened; or at least whether it happened the way it's been reported.
Drop a frog off a building, and unless it's extremely small, it goes splat; so undoubtedly, what people think they're seeing in these stories can't be what's actually going on. If the frog didn't come from the sky, could it have come from somewhere else?
Frogs do swarm naturally on occasion. It happens frequently enough that people start to correlate these events with other things that happened: Storms, earthquakes, celebrity deaths, what have you. It's been reported that frog swarms were correlated with both the 1989 Loma Prieta earthquake in California, and the 2008 Sichuan earthquake in China. Shortly thereafter, when frogs swarmed in Bakersfield, California, some called for earthquake preparedness. Needless to say, there was no earthquake; just a random population explosion of frogs. Sometimes these explosions can be dramatic. In 2004, four hurricanes hit Florida, making that state about the wettest it's ever been. The local species of frogs and toads all had a banner year, described by the Florida Museum of Natural History as a carpet.
Every spring and fall, frogs migrate between shallow breeding ponds and deeper lakes. Because they're amphibians and need to keep their skin moist, they migrate most often during rainstorms. In many cases, the day with the right conditions will come, and the whole frog population will move cross-country en masse, across roads, across properties, wherever it needs to go. When you look outside during a heavy rainstorm and you see thousands of frogs jumping everywhere all over the ground, the illusion that they're falling from the sky and bouncing can be quite convincing. A swarm of frogs looks like ping pong balls bouncing in a lottery machine. The fact that frogs aren't usually there in such numbers adds credibility to the illusion. Throw in a healthy dose of confirmation bias and some exaggerated second-, third-, and fourth-hand reports, and you end up with every imaginable embellishment, like frogs choking the rain gutters on the roofs of houses.
Although this explanation might satisfy the stories of frogs falling from the sky, what about fish? You don't find mass migrations of fish crossing overland, do you? Well, maybe not mass migrations, but believe it or not, there are fish species that occasionally take to the ground in search of better waters. There are many species of "walking fish" in the world. Mudskippers are probably the best known variety. In Florida in 2008, a school of about 30 walking catfish emerged from the sewer during a heavy storm and went slithering around on the street. The northern snakehead is another fish that can wriggle its way around on dry land. Throughout Africa and Asia are 36 species of climbing perches. They have a special organ that allows them to breathe air, and are able to walk using their gill plates, fins and tail, despite looking completely fishlike with no obvious ambulatory limbs. None of these fish move gracefully or even look like they have the ability. To the average witness, it's a live fish flopping around on the ground where no fish has any business being, and having fallen from the sky seems as good an explanation as any.
But they can't have fallen from the sky. Drop a fish off a building, and that's a dead fish. These fish are not reported as being burst open with their guts splattered out, but as flapping and squirming about, very much alive. An alternate explanation, that these fish are simply using their rare but well established ability to move overland, doesn't require us to accept some unexplained, implausible hypothesis even in the face of a lack of splattered-fish evidence.
We have no reason to think anyone actually observed the fish falling from the sky. All we know is that some people have reported finding live fish on dry ground. No doubt many of them couldn't think of any explanation other than the fish fell there, so they probably told the reporter "A bunch of fish fell out of the sky." At that point, the reporter had the story he was looking for, and needed to inquire no further.