Area 25 began as the perfect place for America to launch a nuclear-powered spaceship

Friday, February 28th, 2025

Area 25, Annie Jacobsen explains (in Area 51), began as the perfect place for America to launch a nuclear-powered spaceship that would get man to Mars and back in the astonishingly short time of 124 days:

The spaceship was going to be enormous, sixteen stories tall and piloted by one hundred and fifty men. Project Orion seemed like a space vehicle from a science fiction novel, except it was real. It was the brainchild of a former Los Alamos weapons designer named Theodore Taylor, a man who saw space as the last “new frontier.”

For years, beginning in the early 1950s, Taylor designed nuclear bombs for the Pentagon until he began to doubt the motives of the Defense Department. He left government service, at least officially, and joined General Atomics in San Diego, the nuclear division of defense contractor General Dynamics. There, he began designing nuclear-powered spaceships. But to build a spaceship that could get to Mars required federal funding, and in 1958 General Atomics presented the idea to President Eisenhower’s new science and technology research group, the Advanced Research Projects Agency, or ARPA. The agency had been created as a result of the Sputnik crisis, its purpose being to never let the Russians one-up American scientists again. Today, the agency is known as DARPA. The D stands for defense.

At the time, developing cutting-edge space-flight technology meant hiring scientists like Wernher von Braun to design chemical-based rockets that could conceivably get man to the moon in a capsule the size of a car. Along came Ted Taylor with a proposal to build a Mars-bound spaceship the size of an office building, thanks to nuclear energy. For ARPA chief Roy Johnson, Ted Taylor’s conception was love at first sight. “Everyone seems to be making plans to pile fuel on fuel on fuel to put a pea into orbit, but you seem to mean business,” the ARPA chief told Taylor in 1958.

General Atomics was given a one-million-dollar advance, a classified project with a code name of Orion, and a maximum-security test facility in Area 25 of the Nevada Test Site at Jackass Flats. The reason Taylor’s spaceship needed an ultrasecret hiding place and could not be launched from Cape Canaveral, as other rockets and spaceships in the works could be, was that the Orion spacecraft would be powered by two thousand “small-sized” nuclear bombs. Taylor’s original idea was to dispense these bombs from the rear of the spaceship, the same as a Coke machine dispenses sodas. The bombs would fall out behind the spaceship, literally exploding and pushing the spaceship along. The Coca-Cola Company was even hired to do a classified early design.

At Area 25, far away from public view, Taylor’s giant spaceship would launch from eight 250-foot-tall towers. Blastoff would mean Orion would rise out of a column of nuclear energy released by exploding atomic bombs. “It would have been the most sensational thing anyone ever saw,” Taylor told his biographer John McPhee. But when the Air Force took over the project, they had an entirely different vision in mind. ARPA and the Air Force reconfigured Orion into a space-based battleship. From high above Earth, a USS Orion could be used to launch attacks against enemy targets using nuclear missiles. Thanks to Orion’s nuclear-propulsion technology, the spaceship could make extremely fast defensive maneuvers, avoiding any Russian nuclear missiles that might come its way. It would be able to withstand the blast from a one-megaton bomb from only five hundred feet away.

For a period of time in the early 1960s the Air Force believed Orion was going to be invincible. “Whoever builds Orion will control the Earth!” declared General Thomas S. Power of the Strategic Air Command. But no one built Orion. After atmospheric nuclear tests were banned in 1963, the project was indefinitely suspended. Still wanting to get men to Mars, NASA and the Air Force turned their attention to nuclear-powered rockets. From now on, there would be no nuclear explosions in the atmosphere at Jackass Flats—at least not officially. Instead, the nuclear energy required for the Mars spaceship would be contained in a flying reactor, with fuel rods producing nuclear energy behind barriers that were lightweight enough for space travel but not so thin as to cook the astronauts inside. The project was now called NERVA, which stood for Nuclear Engine for Rocket Vehicle Application. The facility had a public name, even though no one from the public could go there. It was called the Nuclear Rocket Test Facility at Jackass Flats. A joint NASA/Atomic Energy Commission office was created to manage the program, called the Space Nuclear Propulsion Office, or SNPO.

[…]

All NERVA employees entered work through a small portal in the side of the mountain, “shaped like the entrance to an old mining shaft, but spiffed up a bit,” Barnes recalls, remembering “large steel doors and huge air pipes curving down from the mesas and entering the tunnel.” Inside, the concrete tunnel was long and straight and ran into the earth “as far as the eye could see.” Atomic Energy Commission records indicate the underground tunnel was 1,150 feet long. Barnes remembered it being brightly lit and sparkling clean. “There were exposed air duct pipes running the length of the tunnel as well as several layers of metal cable trays, which were used to transport heavy items into and out of the tunnel,” he says. “The ceiling was about eight feet tall, and men walked through it no more than two abreast.”

[…]

For each engine test, a remote-controlled locomotive would bring the nuclear reactor over to the test stand from where it was housed three miles away in its own cement-block-and-lead-lined bunker, called E-MAD. “We used to joke that the locomotive at Jackass Flats was the slowest in the world,” Barnes explains. “The only thing keeping the reactor from melting down as it traveled down the railroad back and forth between E-MAD and the test stand was the liquid hydrogen [LH2] bath it sat in.” The train never moved at speeds more than five miles per hour. “One spark and the whole thing could blow,” Barnes explains. At −320 degrees Fahrenheit, liquid hydrogen is one of the most combustible and dangerous explosives in the world.

[…]

“The railroad car carried the nuclear reactor up to the test stand and lifted it into place using remotely controlled hydraulic hands,” Barnes explains. “Meanwhile, we were all underground looking at the reactor through special leaded-glass windows, taking measurements and recording data as the engine ran.” The reason the facility was buried inside the mountain was not only to hide it from the Soviet satellites spying on the U.S. nuclear rocket program from overhead, but to shield Barnes and his fellow workers from radiation poisoning from the NERVA reactor. “Six feet of earth shields a man from radiation poisoning pretty good,” says Barnes.

When running at full power, the nuclear engine operated at a temperature of 2,300 Kelvin, or 3,680.6 degrees Fahrenheit, which meant it also had to be kept cooled down by the liquid hydrogen on a permanent basis. “While the engine was running the canyon was like an inferno as the hot hydrogen simultaneously ignited upon contact with the air,” says Barnes. These nuclear rocket engine tests remained secret until the early 1990s, when a reporter named Lee Davidson, the Washington bureau chief for Utah’s Deseret News, provided the public with the first descriptive details. “The Pentagon released information after I filed a Freedom of Information Act,” Davidson says. In turn, Davidson provided the public with previously unknown facts: “bolted down, the engine roared… sending skyward a plume of invisible hydrogen exhaust that had just been thrust through a superheated uranium fission reactor,” Davidson revealed. Researching the story, he also learned that back in the 1960s, after locals in Caliente, Nevada, complained that iodine 131—a major radioactive hazard found in nuclear fission products—had been discovered in their town’s water supply, Atomic Energy officials denied any nuclear testing had been going on at the time. Instead, officials blamed the Chinese, stating, “Fresh fission products probably came from an open-air nuclear bomb test in China.” In fact, a NERVA engine test had gone on at Area 25 just three days before the town conducted its water supply test.
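As a quick sanity check on the quoted temperature, here is a minimal conversion sketch in Python; the small difference from the quoted 3,680.6 °F comes from rounding the kelvin offset to 273 rather than 273.15.

```python
# Convert the quoted reactor temperature from kelvin to degrees Fahrenheit.
kelvin = 2300
celsius = kelvin - 273.15          # kelvin to Celsius
fahrenheit = celsius * 9 / 5 + 32  # Celsius to Fahrenheit
print(f"{kelvin} K = {fahrenheit:.1f} °F")  # ~3680.3 °F, close to the quoted ~3,680.6 °F
```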

Had the public known about the NERVA tests when they were going on, the tests would have been perceived as a nuclear catastrophe in the making. Which is exactly what did happen. “Los Alamos wanted a run-away reactor,” wrote Dewar, who in addition to being an author is a longtime Atomic Energy Commission employee, “a power surge until [the reactor] exploded.” Dewar explained why. “If Los Alamos had data on the most devastating accident possible, it could calculate other accident scenarios with confidence and take preventative measures accordingly.” And so, on January 12, 1965, the nuclear rocket engine code-named Kiwi was allowed to overheat. High-speed cameras recorded the event. The temperature rose to “over 4000°C until it burst, sending fuel hurtling skyward and glowing every color of the rainbow,” Dewar wrote. Deadly radioactive fuel chunks as large as 148 pounds shot up into the sky. One ninety-eight-pound piece of radioactive fuel landed more than a quarter of a mile away.

Once the explosion subsided, a radioactive cloud rose up from the desert floor and “stabilized at 2,600 feet” where it was met by an EG&G aircraft “equipped with samplers mounted on its wings.” The cloud hung in the sky and began to drift east then west. “It blew over Los Angeles and out to sea,” Dewar explained. The full data on the EG&G radiation measurements remains classified.

The test, made public as a “safety test,” caused an international incident. The Soviet Union said it violated the Limited Test Ban Treaty of 1963, which of course it did. But the Atomic Energy Commission had what it wanted, “accurate data from which to base calculations,” Dewar explained, adding that “the test ended many concerns about a catastrophic incident.” In particular, the Atomic Energy Commission and NASA both now knew that “in the event of such a launch pad accident [the explosion] proved death would come quickly to anyone standing 100 feet from ground zero, serious sickness and possible death at 400 feet, and an unhealthy dose at 1000 feet.”

Because it is difficult to believe that the agencies involved did not already know this, the question remains: What data was the Atomic Energy Commission really after? The man in charge of the project during this time, Space Nuclear Propulsion Office director Harold B. Finger, was reached for comment in 2010. “I don’t recall that exact test,” Finger says. “It was a long time ago.”

Five months later, in June of 1965, disaster struck, this time officially unplanned. That is when another incarnation of the nuclear rocket engine, code-named Phoebus, had been running at full power for ten minutes when “suddenly it ran out of LH2 [liquid hydrogen] and overheated in the blink of an eye,” wrote Dewar. As with the planned “explosion” five months earlier, the nuclear rocket reactor first ejected large chunks of its radioactive fuel out into the open air. Then “the remainder fused together, as if hit by a giant welder,” Dewar explained. Laymen would call this a meltdown. The cause of the accident was a faulty gauge on one of the liquid hydrogen tanks. One gauge read a quarter full when in reality there was nothing left inside the tank.

So radiated was the land at Jackass Flats after the Phoebus accident, even HAZMAT cleanup crews in full protective gear could not enter the area for six weeks. No information is available on how the underground employees got out. Originally, Los Alamos tried to send robots into Jackass Flats to conduct the decontamination, but according to Dewar the robots were “slow and inefficient.” Eventually humans were sent in, driving truck-mounted vacuum cleaners to suck up deadly contaminants. Declassified Atomic Energy Commission photographs show workers in protective gear and gas masks picking up radioactive chunks with long metal tongs.

[…]

“We did develop the rocket,” Barnes says. “We do have the technology to send man to Mars this way. But environmentally, we could never use a nuclear-powered rocket on Earth in case it blew up on takeoff. So the NERVA was put to bed.”

As far as fire risk is concerned, these areas combine the worst aspects of wildland and urban environments

Thursday, February 27th, 2025

Over a century ago, people started to live together more densely than ever before, and this transformed fires from unfortunate incidents to conflagrations that destroyed entire cities:

By the 1870s, “great” fires were happening several times a decade and viewed as a normal part of life in cities.

And then, by the 1920s, it stopped. Why? After the Great Chicago Fire of 1871, we finally got serious as a civilization about stopping urban fires. We rewrote building codes to require fire-resistant materials and metal escape ladders; we built professional firefighting forces instead of relying on local fire brigades with, literally, buckets; and we invented new technologies, like automatic sprinkler systems in 1872, motorized fire trucks with powered pumps and engines in 1910, and CTC fire extinguishers in 1912. Chicago never burned down again.

Today, urban fires are treated as a largely solved problem. Modern urban firefighting forces and infrastructure are designed for putting out fires in homes. In fact, firefighting is so solved that only 4% of firefighting calls are fire-related — the vast majority are medical. Yet, this model of firefighting is not adapted to the challenges we face today.

[…]

Over the past fifty years, people all throughout developed nations have moved into suburban areas, which fire experts refer to as the “wildland-urban interface” (WUI). As far as fire risk is concerned, these areas combine the worst aspects of wildland and urban environments. Because humans live there in density, you have frequent ignition events. But because these areas are near nature, you are surrounded by kindling. The environment is furthermore relatively sparse, so you can’t have the same firefighting density as in a city. Taken together, this means fires can reach wildfire scale with urban frequency.

Simply put, urban firefighting forces are using an old playbook on a new, unsolved problem. 29% of the United States lives in the WUI now, and California has the highest such percentage of any state.

[…]

To allow for more preventative maintenance, there must be reform to both the California Environmental Quality Act (CEQA) and the National Environmental Policy Act (NEPA). The environmental impact statement for a controlled burn takes 7.2 years to complete, which is longer than most fire cycles. Amazingly, under Sierra Club v. Bosworth, a 2007 Ninth Circuit case, there is no categorical exclusion for controlled burns.

[…]

Major urban centers should not just have one reservoir, but at least two for redundancy. All fire infrastructure must go through an audit and, if need be, be rebuilt. California throws 21 million acre-feet of water away each year, which is more than enough to fill firefighting reservoirs and use for irrigation to wet our forests. The money is already there—California voters already approved $2.7 billion for reservoirs with Prop. 1 back in 2014, none of which have been built a decade later.

[…]

The homeless cause 54% of all fires in Los Angeles. That number jumps to 80% for downtown fires. In San Francisco, such fires have doubled in the past five years. Throughout California, the homeless plague highway underpasses with fires, some from cooking and some from derangement. Many of the arsonists arrested during these fires were homeless. Unfortunately, the eternal summer of endless fire season means the time for tolerance is over.

The ideal candidate can go from street to fully certified in about 4 years

Wednesday, February 26th, 2025

Tracing Woodgrains recounts the full story of the FAA’s hiring scandal:

Then, on New Year’s Eve, 2013, while students and professors alike were out for winter break, the FAA abruptly sent an announcement to the presidents of the CTI [collegiate training initiative] schools. The announcement came, without warning, as an email from one Mr. Joseph Teixeira, the organization’s vice president for safety and technical training. “The FAA completed a barrier analysis of the ATC occupation pursuant to the Equal Employment Opportunity Commission’s (EEOC) Management Directive 715,” the email read, then went on to spell out some changes:

First, every past aptitude test applicants had taken was voided. Andrew Brigida’s perfect score was meaningless.

Second, every applicant would be required to take and pass an unspecified “biographical questionnaire” to have a shot at entering the profession.

Third, existing CTI students were left with no advantage in the hiring process, which would be equally open to all off-the-street applicants—their degrees rendered useless for the one specialized job they had trained for.

[…]

As the hiring wave approached, some of Reilly’s friends in the program encouraged her to join the National Black Coalition of Federal Aviation Employees (NBCFAE), telling her it would help improve her chances of being hired. She signed up as the February wave started. Soon, though, she became uneasy with what the organization was doing, particularly after she and the rest of the group got a voice message from FAA employee Shelton Snow:

“I know each of you are eager very eager to apply for this job vacancy announcement and trust after tonight you will be able to do so….there is some valuable pieces of information that I have taken a screen shot of and I am going to send that to you via email. Trust and believe it will be something you will appreciate to the utmost. Keep in mind we are trying to maximize your opportunities…I am going to send it out to each of you and as you progress through the stages refer to those images so you will know which icons you should select…I am about 99 point 99 percent sure that it is exactly how you need to answer each question in order to get through the first phase.”

The biographical questionnaire Snow referred to as the “first phase” was an unsupervised questionnaire candidates were expected to take at home. You can take a replica copy here. Questions were chosen and weighted bizarrely, with candidates able to answer “A” to all but one question to get through. Some of the most heavily weighted questions were “The high school subject in which I received my lowest grades was:” (correct answer: science, worth 15 points) and “The college subject in which I received my lowest grades was:” (correct answer: history, for another 15 points).

Reilly, Brigida, and thousands of others found themselves faced with the questionnaire, clicking through a bizarre sequence of questions that would determine whether they could enter the profession they’d been working towards. Faced with the opportunity to cheat, Reilly did not. It cost her a shot at becoming an air traffic controller. Like 85% of their fellow CTI students, Brigida and Reilly found themselves faced with a red exclamation point and a dismissal notice: “Based upon your responses to the Biographical Assessment, we have determined that you are NOT eligible for this position.”

[…]

Throughout the ’90s and ’00s, the FAA faced pressure, notably from the NBCFAE, to diversify its field of air traffic controllers, historically a profession made up primarily of white men. In the early 2000s, this pressure focused on the newly developed air traffic control qualification test, the AT-SAT, which the NBCFAE hired Dr. Outtz to critique from an adverse impact standpoint. As originally scored, the test was intended to pass 60% of applicants, but predictions suggested only 3% of black applicants would pass. In response, the FAA reweighted the scoring to make the test easier to pass, reducing its correlation with job performance as they did so. In its final form, some 95% of applicants passed the test.

This was a bit of a shell game. In practice, they divided it into a “well qualified” band (with scores between 85 and 100 on the test, met by around 60% of applicants) and a “qualified” band (with scores between 70 and 84), and drew some 87% of selections from that “well qualified” band. Large racial disparities remained in the “well qualified” band. As a result, facing continued pressure, the FAA began to investigate ways to deprioritize the test.

Why not ditch it altogether? Simple: the test worked. It had “strong predictive validity,” outperforming “most other strategies in predicting mean performance,” and it was low cost and low time commitment. On average, people who performed better on the test actually did perform better as air traffic controllers, and this was never really in dispute.

[…]

The NBCFAE continued to pressure the FAA to diversify, with its members meeting with the DOT, FAA, Congressional Black Caucus, and others to push for increased diversity among ATCs. After years of fiddling with the research and years of pressure from the NBCFAE, the FAA landed on a strategy: by using a multistage process starting with non-cognitive factors, they could strike “an acceptable balance between minority hiring and expected performance”—a process they said would carry a “relatively small” performance loss. They openly discussed this tension in meetings, pointing to “a trade-off between diversity (adverse impact) and predicted job performance/outcomes,” asking, “How much of a change in job performance is acceptable to achieve what diversity goals?”

[…]

An active air traffic controller reached out to me at the end of January last year, talking about his frustration that so many qualified individuals were eliminated from entering training, contributing to a dire staffing shortage. His facility was operating at less than 75% staffing, with controllers fatigued from working 6-day weeks. As he explained it, training can take 1.5 to 2.5 years at larger facilities, with washout rates from 30-60%, and when we spoke, upper-level facilities were only accepting transfers from lower-level facilities, adding years to preparation time. Every issue compounds and adds to the problem over time, and the FAA’s 2013 changes set staffing back years. The FAA, he said, simply is not bringing in enough people to match the number leaving, with some air traffic controllers believing the agency is failing on purpose to find an excuse to privatize them.

A controller who worked for 25 years and retired in 2015 spoke with me last January as well. He had been offered a position at Oklahoma City to instruct new students. Per him, the ideal candidate can go from street to fully certified in about 4 years, while some trainees during his last few years had been training for 6-8 years. He alleged a pattern where some students were trained for years longer than others, rarely washed out, and were quietly checked out and promoted away from direct air traffic control positions into management.

Another retired controller and supervisor who formerly worked at the Chicago ARTCC echoed his story, claiming the FAA would regularly change the “best qualified list,” with those responsible for promotions changing requirements depending on who they wanted to promote. He was never told to certify an inadequate trainee, he said, but “the pressure was mounting.”

The US VC industry is causally responsible for the rise of one-fifth of the current largest 300 US public companies

Tuesday, February 25th, 2025

Will Gornall and Ilya A. Strebulaev examine The Economic Impact of Venture Capital:

Venture capital-backed companies account for 41% of total US market capitalization and 62% of US public companies’ R&D spending. Among public companies founded within the last fifty years, VC-backed companies account for half in number, three quarters by value, and more than 92% of R&D spending and patent value. The US did not spawn top public companies at a higher rate than other large, developed countries prior to 1970s ERISA reforms, but produced twice as many after it. Using those reforms as a natural experiment suggests that the US VC industry is causally responsible for the rise of one-fifth of the current largest 300 US public companies and that three-quarters of the largest US VC-backed companies would not have existed or achieved their current scale without an active VC industry.

Despite severe economic stagnation, Japan is still a desirable place to live and work

Monday, February 24th, 2025

For more than three decades, Maxwell Tabarrok notes, Japan has endured near complete economic stagnation:

But despite severe economic stagnation, Japan is still a desirable place to live and work. The major costs of living, like housing, energy, and transportation are not particularly expensive compared to other highly-developed countries. Infrastructure in Japan is clean, functional, and regularly expanded. There is very little crime or disorder, and almost zero open drug use or homelessness. Compared to a peer country like Britain, whose economic stagnation over the past 30 years has been less severe, Japan seems to enjoy a higher quality of life.

[…]

Japan’s zoning code is set at the national level and therefore tends to be much less restrictive than the local zoning codes found in the West. Its national system lays out just 12 inclusive zones, which means the permitted building types carry over as you move up the categories, allowing mixed-use development by default. This compares favorably to zoning codes in the US which often have multiple dozens of exclusive land use categories. Even the most restrictive category in Japan’s system, shown in the top left (below), allows people to run small shops and offices out of their homes. There are floor-area-ratio limits and setbacks, but they are modest, and there is no distinction between single and multi-family housing units within these limits.

For environmental permitting, Japan mostly relies on explicit standards for environmental impact, rather than a lengthy permitting process where applicants must write detailed reports about possible alternatives and mitigation measures under threat of lawsuit, as in the US. Japan does have a copy-cat National Environmental Policy Act (NEPA) procedural environmental law that was enacted in 1997, but it has two important differences that prevent its evolution into the procedural morass seen in other countries.

First, there are explicit numerical standards for which projects must go through the impact statement process, rather than the hand-waving ambiguities of NEPA. These standards generally only include large infrastructure projects like a port extension exceeding 300 hectares. Some residential projects are covered, but only those which exceed 75 hectares in area. Only 854 environmental impact assessments have been started in Japan since the act passed, and there have been zero for residential construction projects.

Second, the completed environmental assessments are harder to sue over than in Western countries. Plaintiffs need to have a personal and legally protected injury to have standing, rather than a generalized concern for the environment as in the US. Plus, the greater specificity of when the law applies and a court that has a much more deferential attitude towards agency determinations mean the lawsuits are harder to win.

Permissive national zoning and an absence of environmental proceduralism lead to Japan having the highest rate of housing construction and the lowest home price to income ratio in the OECD.

[…]

Japan’s social order is incredibly valuable. The annual cost of crime in the United States is around $5 trillion, which is 18% of GDP. Higher crime rates would threaten the high-density urbanism which makes Japanese cities so affordable and desirable.

The medieval house might have been built to specifications approved by a rodent council

Sunday, February 23rd, 2025

Dozens of rodents carry plague, Ed West notes, but it would only become deadly to humans when Yersinia pestis infected the flea of the black rat (Rattus rattus):

Black rats are sedentary homebodies and don’t like to move more than 200 metres from their nests; they especially like living near to humans, which is what makes them so much more dangerous than more adventurous rodents. Black rats have been our not-entirely-welcome companion for thousands of years, and were living near human settlements in the Near East from as far back as 3000 BC; the Romans and their roads helped them spread across the empire and brought them to Britain, the oldest rat remains here being found from the fourth century, underneath Fenchurch Street in London.

Black rats were especially comfortable in the typical medieval house, and while stone buildings became a feature of life in the 12th century, most were still made of wood and straw. In the words of historian Philip Ziegler, ‘The medieval house might have been built to specifications approved by a rodent council as eminently suitable for the rat’s enjoyment of a healthy and care-free life.’ This type of rat is also a very good climber, so could easily live in the thatched roofs which were common then.

Because of its preferred home, the black rat is also called the house rat or ship rat, while the brown rat prefers sewers. On top of this, the animals are fecund to a horrifying degree; one black rat couple can theoretically produce 329 million descendants in three years. So the typical medieval city had lots of rats, and with them came lots of fleas.

Fleas are nature’s great survivors. They can endure in all sorts of conditions, and some have developed the ability to live off bits of bread and only require blood for laying eggs. The black rat flea, called Xenopsylla cheopis, is also exceptionally hardy, able to survive between 6-12 months without a host, living in an abandoned nest or dung, although it is only active when the temperature is between 15-20 centigrade. As John Kelly wrote in The Great Mortality, the Oriental rat flea is ‘an extremely aggressive insect. It has been known to stick its mouth parts into the skin of a living caterpillar and suck out the caterpillar’s bodily fluids and innards’. What a world.

There are two types of flea: fur fleas and nest fleas, and only the former travels with its host rather than remaining in the nest. The rat flea is a fur flea, and while it prefers to stay on its animal of choice, they will jump on to other creatures if they’re nearby – unfortunately, in the 14th century that happened to be us. (In fact, they will attach themselves to most farmyard animals, and only the horse was left alone, because its odour repulses them, for some reason.)

As part of the great and disgusting chain of being, the rats inadvertently brought the plague to humans, but it wasn’t fun for the rats either, or the fleas for that matter. When the hungry flea bites the rat, the pestis triggers a mutation in the flea guts causing it to regurgitate the bacteria into the wound, so infecting the rat. (Yes, it is all a bit disgusting). Y. pestis can be transmitted by 31 different flea species, but only in a rodent does the quantity of bacillus become large enough to block the flea’s stomach.

The flea therefore feeds more aggressively as it dies of starvation, and its frantic feeding makes the host mammal more overrun with the bacterium. The fleas also multiply as the plague-carrying rat gets sick, so that while a black rat will carry about seven fleas on average, a dying rat will have between 100 and 150. Rats were infected with the disease far more intensely than humans, so that ‘the blood of plague-infected rats contains 500-1,000 times more bacteria per unit of measurement than the blood of plague-infected humans.’

When the disease is endemic to rodents it’s called ‘sylvatic plague’, and when it jumps to humans it’s called ‘bubonic’ plague. For Y pestis to spread, there will ideally be two populations of rodents living side by side: one must be resistant to the disease so that it can play host, and the other non-resistant so the bacteria can feed on it. There needs to be a rat epidemic to cause a human epidemic because it provides a ‘reservoir’ for the disease to survive. Robert Gottfried wrote: ‘Y pestis is able to live in the dark, moist environment of rodent burrows even after the rodents have been killed by the epizootic, or epidemic. Thus as a new rodent community replaces the old one, the plague chain can be revived’. The rat colony will all be dead within two weeks of infection and then the fleas start attacking humans.

The first human cases would typically appear 16-23 days after the plague had arrived in a rat colony, with the first deaths taking place after about 20-28 days. It takes 3-5 days after infection for signs of the disease to appear in humans, and a similar time frame before the victim died. Somewhere between 20-40 per cent of infected people survived, and would thereafter mostly be immune.

What happened next would have been terrifying. ‘From the bite site, the contagion drains to a lymph node that consequently swells to form a painful bubo,’ or swelling lump, ‘most often in the groin, on the thigh, in an armpit or on the neck. Hence the name bubonic plague.’

Politicians are in a sense less important than intellectuals and activists

Saturday, February 22nd, 2025

Richard Hanania noted almost immediately that the second Trump administration was more serious about policy across the board:

In 2016, Trump took over the GOP practically out of nowhere, nobody thought he would win the general election, and conservatives weren’t really prepared to do much of anything other than give him judges to confirm. The right has since then spent the last eight years thinking about how to make full use of the executive branch for when a Republican returns back to office.

But one thing this whole experience has taught me is that knowledge is fragmented and so much of politics, like life more generally, is about drawing attention. The Origins of Woke relies on the work of several scholars who are lesser known and have been hammering on some of the points I made in the book for decades, including Gail Heriot and Eugene Volokh, and many attorneys like Dan Morenoff and Alison Somin have done important work far from the public spotlight. And I think I probably originally learned about disparate impact from Steve Sailer. So there’s a kind of pipeline here, which in this case went Heriot et al-Hanania-Vivek-Trumpverse, from the most scholarly towards the most famous and attention grabbing. It’s been instructive to play a part in this process. One maybe can place Rufo in between Vivek and Trumpverse, or as part of an independent branch between Hanania and Trumpverse.

It’s possible no single person actually made the marginal difference here. If Trump hadn’t won the election, DeSantis certainly would’ve gone just as far. Vivek would have too, and even Nikki Haley opened her campaign with a video talking about wokeness as a threat to America, although in her case we can have doubts as to whether she would have taken decisive action on the issue. And maybe if Rufo and I didn’t exist, someone else would have filled our niches.

[…]

All of this makes me think that politicians are in a sense less important than intellectuals and activists. It’s actually difficult for me to imagine all this happening without me or Rufo, but easy to imagine it happening without Trump.

As I discuss in the introduction to The Origins of Woke, I started thinking about the relationship between wokeness and civil rights law around 2011 while I was in law school. I then spent about a decade trying to convince people how important this topic was. Finally, I just wrote about it myself, and things started to change.

[…]

Another lesson people can potentially draw from this experience is that it is possible to influence policy even if you’re starting out without much in the way of fame, connections, or money. Furthermore, my messaging hasn’t exactly been optimized to win over Republicans. Yet by making a compelling case, emphasizing the issue, and bringing it to public attention, I was able to contribute towards changing the conversation on civil rights law. For anyone else who wants to influence policy, here’s a demonstration that it can be done.

[…]

Most political struggles end in failure or some kind of ambiguous outcome. But sometimes you advocate for an idea, and it just wins. I wanted conservatives to go to war against wokeness as a matter of policy, and the outcome has surpassed my most optimistic hopes. It’s a very satisfying feeling.

The president did not have a need-to-know about them

Friday, February 21st, 2025

The Nevada Test Site, Annie Jacobsen explains (in Area 51), led to one of the most important and most secret businesses of the twenty-first century:

Called remote sensing, it is the ability to recognize levels of radioactivity from a distance using ultraviolet radiation, infrared, and other means of detection.

Within a decade of the disastrous nuclear accidents at Palomares and Thule, EG&G would so dominate the radiation-detection market that the laboratory built at the Nevada Test Site for this purpose was initially called the EG&G Remote Sensing Laboratory. After 9/11, the sister laboratory, at Nellis Air Force Base in Las Vegas, was called the Remote Sensing Laboratory and included sensing-detection mechanisms for all types of WMD. This facility would become absolutely critical to national security, so much so that by 2011, T. D. Barnes says that “only two people at Nellis are cleared with a need-to-know regarding classified briefings about the Remote Sensing Lab.”

[…]

EG&G had been taking radiation measurements and tracking radioactive clouds for the Atomic Energy Commission since 1946. For decades, EG&G Energy Measurements has maintained control of the vast majority of radiation measurements records going back to the first postwar test at Bikini Atoll in 1946. Because much of this information was originally created under the strict Atomic Energy classification Secret/Restricted Data — i.e., it was “born classified” — it has largely remained classified ever since. It cannot be transferred to another steward. For decades, this meant there was no one to compete with EG&G for the remote sensing job.

[…]

So secret are the record groups in EG&G’s archives, even the president of the United States can be denied access to them, as President Clinton was in 1994. One year earlier, a reporter named Eileen Welsome had written a forty-five-page newspaper story for the Albuquerque Tribune revealing that the Atomic Energy Commission had secretly injected human test subjects with plutonium starting in the 1940s without those individuals’ knowledge or consent. When President Clinton learned about this, he created an advisory committee on human radiation experiments to look into secrets kept by the Atomic Energy Commission and to make them public. In several areas, the president’s committee succeeded in revealing disturbing truths, but in other areas it failed. In at least one case, regarding a secret project at Area 51, the committee was denied access to records kept by EG&G and the Atomic Energy Commission on the grounds that the president did not have a need-to-know about them. In another case, regarding the nuclear rocket program at Area 25 in Jackass Flats, the president’s committee also failed to inform the public of the truth. Whether this is because the record group in EG&G’s archive was kept from the committee or because the committee had access to it but chose not to report the facts in earnest remains unknown.

Who should skip college?

Thursday, February 20th, 2025

The central thesis of The Case Against Education, Bryan Caplan explains, is that education has a low (indeed, negative) social return, because signaling, not building human capital, is its main function — but the selfish return to education is negative, too, for many students, depending on ability:

First and foremost: know thyself.

  • Don’t base your life choices on what your immediate social circle finds “demeaning.” As Dirty Jobs repeatedly proves, people routinely get used to jobs that initially disgust them.
  • Don’t base your life choices on whether parents and teachers constantly tell you that you’re “smart.” They’re not trustworthy assessors of your intelligence.
  • Don’t rule out options because they require “declining status.” If your family’s initial status is above average, declining status is the mathematical norm. That’s what “regression to the mean” means.

What should you do instead? First and foremost: Get objective evidence on your own intelligence.

  • If your SAT score is 1200 or greater, your odds of successfully finishing a “real” major are quite good.
  • If your SAT is in the 1100-1200 range, it’s a toss-up.
  • If you’re in the 1000-1100 range, only try college if your peers consider you an annoyingly hard worker.
  • Below 1000? Don’t go.

[…]

What will go wrong if you ignore my advice? The most likely scenario is that you spend years’ worth of time and tuition, then fail to finish your degree. Maybe you’ll keep failing crucial classes. Maybe you’ll keep switching majors. Maybe you’ll die of boredom. The precise mechanism makes little difference: Since about 70% of the college payoff comes from completion, non-completion implies a terrible return on investment.
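A rough way to see why completion dominates the calculation is a back-of-the-envelope expected-value sketch in Python; every dollar figure and probability below is an illustrative assumption, not a number from Caplan's book.

```python
# Hedged sketch of the "sheepskin effect" point: most of the payoff requires finishing.
# All inputs are illustrative assumptions, not figures from The Case Against Education.

def expected_net_payoff(p_complete, degree_premium, years_attempted, annual_cost):
    """Expected lifetime payoff of enrolling if ~70% of the premium is tied to completion."""
    completion_share = 0.70                                      # payoff share requiring the degree
    payoff_if_finish = degree_premium
    payoff_if_dropout = degree_premium * (1 - completion_share)  # partial credit for coursework
    expected_benefit = p_complete * payoff_if_finish + (1 - p_complete) * payoff_if_dropout
    cost = years_attempted * annual_cost                         # tuition plus forgone earnings (assumed)
    return expected_benefit - cost

# Assumed inputs: a strong student (high completion odds) vs. a marginal one.
print(expected_net_payoff(0.85, degree_premium=400_000, years_attempted=4, annual_cost=60_000))  # positive
print(expected_net_payoff(0.30, degree_premium=400_000, years_attempted=4, annual_cost=60_000))  # negative
```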

Drones are not a new category but dramatically reduce the cost of some existing functions

Wednesday, February 19th, 2025

The side in control of the air tends to win, Austin Vernon notes:

At a minimum, dominant air power is a massive force multiplier that allows the side wielding it to take significantly fewer casualties than its opponent. Aircraft can uniquely disrupt supply lines, command and control, and troop concentrations. The forces on the losing side must drastically alter their tactics to survive, limiting their ability to attack or defend.

Another feature is that air-to-air battles tend to be lopsided. It is more common to see 20:1 or 10:1 kill/loss ratios than even matches. For example, the F-15 has 104 kills and zero losses since entering service in 1976. The defining factors have been pilot quality, aircraft performance, weapon performance, and sensor capability (radar, airborne early warning aircraft, etc.).

US airpower was so dominant in the 20th century that most opponents focused on building ground-based anti-aircraft defenses. An arms race developed between these anti-aircraft missile batteries and ever more sophisticated aircraft, weapons, and tactics on the US side. Stealth to avoid detection, cruise missiles to avoid risking aircraft, and highly specialized tactics and weapons to defeat anti-aircraft batteries are an outgrowth of this competition.

Drones are not a new category but dramatically reduce the cost of some existing functions:

FPV Drones → Attack Helicopters

Advocates of rotor aircraft thought they would dominate the battlefield in the 60s, 70s, and 80s, to the detriment of traditional armor. It didn’t happen because helicopters are vulnerable to air defenses, including shoulder-fired missiles and anti-aircraft guns.

First Person View (FPV) kamikaze drones that cost <$1000 or slightly larger reusable drones are bringing this prediction back from the dead. They are still vulnerable to air defense, but it is irrelevant given their cost. Ground forces will need to make many adjustments, similar to when anti-tank guided missiles made WWII-style tanks obsolete in the 1960s and 1970s.

Bomber Drones → Attack Helicopters (pt. 2)

Some missions call for slightly larger munitions than disposable FPVs can justify, and “bomber” drones that weigh around 25-50 kg and cost $10,000 fill the void. They mostly fly at night to increase survival rates and often use satellite communications, like StarLink, to avoid jamming. Missions include attacking parked vehicles, mining roads, and dropping grenades on infantry. These drones are much more powerful than FPVs and are worth the price if they can survive a few missions.

Recon Drones → Scout Helicopters and Forward Air Control Aircraft

Scouting for artillery, ground attack aircraft, and attack helicopters has long been a scarce resource, even for the US military. Infantry and armor units still had to self-scout with limited visibility.

Small recon drones, often off-the-shelf commercial models, bring top-tier scouting down to the squad level. Their cost makes using them sustainable, while many large drones, like the US Predator, are obsolete in high-intensity battles because of their price and vulnerability to air defenses.

One-Way Attack Drones → Cruise Missiles

Cruise missiles have a unique ability to attack heavily defended targets in the opponent’s rear, but their price limits their number.

Propeller-powered one-way attack drones can cost as little as $50,000 instead of $1+ million, increasing volume. The overall impact has been much more muted than FPV and recon drones because these drones are so easy to shoot down and have small payloads that limit what targets they can be effective against. They travel slowly, roughly the same as a car on the interstate, to meet cost goals and extend range. Their utility plummets once the opponent adapts to shoot them down with cheap weapons, like guns on trucks and helicopters, cheap interceptor drones, or electronic warfare. The drones can still provide net benefits if they temporarily overwhelm air defenses, force the enemy to expend significant organizational resources to counter them, or the targets are valuable enough.

Interceptor Drones → Man Portable Anti-Aircraft Missiles

Militaries developed man-portable anti-aircraft missiles to counter helicopters and low-flying aircraft, but they are much too expensive and complex to counter drones.

Instead, small racing-style drones that cost no more than a few thousand dollars ram or explode near targets. Their prey is primarily more expensive attack drones and higher-tier recon drones that cost $30,000-$200,000.

There are some experiments with drones carrying shotguns and other air-to-air weaponry to deal with the smallest FPV and recon drones. Time will tell if these are viable.

He projects some trends:

Barbell Procurement Strategy

The battlefield is so hostile that drones must be cheap enough to be expendable or capable enough to avoid all air defenses. The somewhat fancy $100,000 recon drone is probably in no-man’s-land. Large drones without sophisticated countermeasures, like the US Global Hawk or Predator/Reaper family, are obsolete outside the most permissive airspace. Even drones that were considered cheap before the war in Ukraine, like the Turkish TB-2, have been sent to the scrap heap.

One of the only viable(?) large drones currently in use is the pricey US RQ-180 because of its size and modern stealth features. Traditional cruise missiles also continue to be viable for deep strikes.

Small Eats Large

Drones aren’t automatically cheaper than legacy systems like helicopters or strategic reconnaissance platforms. Radical reduction in size and complexity is the best way to achieve this.

Better electronics and cameras have allowed recon drones with mass measured in grams. Or a shaped charge driven by an FPV drone into the weakest part of a vehicle’s armor can be much smaller than traditional anti-tank missile warheads.

Battery-electric powertrains can shrink much more than engines can, and these drones have disrupted short-range, low-speed categories much more than long-range or high-powered missions.

The success rate of these drones is often low, between 10%-50%, and many targets need multiple hits. However, the low cost of small drones means the math is favorable and similar to artillery shells.
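To see why 10-50% success rates can still pencil out, here is a hedged expected-cost sketch in Python; the unit costs, success rates, and hits-needed figures are illustrative assumptions, not data from the article.

```python
# Expected-cost sketch behind "the math is favorable and similar to artillery shells."
# Unit costs, success rates, and hits needed are illustrative assumptions only.

def cost_per_target(unit_cost, success_rate, hits_needed=1):
    """Expected munition spend to put the required number of hits on one target."""
    expected_attempts = hits_needed / success_rate
    return unit_cost * expected_attempts

fpv_drone = cost_per_target(unit_cost=800, success_rate=0.20, hits_needed=2)            # ~$8,000
artillery = cost_per_target(unit_cost=3_000, success_rate=0.05, hits_needed=1)          # ~$60,000
guided_missile = cost_per_target(unit_cost=150_000, success_rate=0.90, hits_needed=1)   # ~$167,000

print(f"FPV: ${fpv_drone:,.0f}  artillery: ${artillery:,.0f}  missile: ${guided_missile:,.0f}")
```

Under these assumed numbers, even a low per-sortie success rate leaves the cheap drone an order of magnitude less expensive per destroyed target than a precision missile.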

Single Function Dominates

Drones can only be small and cheap if they are highly specialized for one task. Examples include anti-vehicle, anti-personnel, high-value targets ~30 km behind enemy lines, hitting enemy drones, dropping mines or supplies, etc. Many of these categories even have further specialization within them.

Paths with Faster Iteration Win

Things change fast since small drones are a relatively new technology. Pathways that allow quick adjustments can outcompete slow paths. Small and single-function platforms can increase iteration speed.

Slow, electric drones have massive advantages in operating footprint and costs, he notes:

Fuel

Fuel is one of the biggest concerns for modern militaries. The US military often assumes a fuel cost of hundreds or thousands of dollars per gallon to deliver to war zones for planning. The volume of demand in a high-intensity conflict could reach the level of economies like Japan. Aircraft are often the largest fuel consumers.

These drones require a fraction of the energy of high-performance aircraft and can often use electricity instead of fuel. The AEW scout battery pack would only be a few pounds and could charge with a tiny solar panel. Models like the tactical bomber, air-defense fighter, or air superiority fighter have batteries small enough to swap by hand. A few standard-size solar panels could provide enough juice for one sortie per day and don’t require vulnerable centralized infrastructure.
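A rough energy budget makes the solar claim plausible. The panel wattage, sun hours, pack size, and efficiency below are assumptions chosen for illustration, not figures from the article.

```python
# Back-of-the-envelope check on recharging a small electric drone from solar panels.
# All quantities are assumed for illustration.

panel_watts = 400        # one residential-class panel (assumed)
num_panels = 3
peak_sun_hours = 4.5     # full-power-equivalent sun hours per day (assumed)
daily_energy_wh = panel_watts * num_panels * peak_sun_hours   # 5,400 Wh/day

battery_pack_wh = 2_000  # hand-swappable pack for a small electric drone (assumed)
charge_efficiency = 0.85
pack_charges_per_day = daily_energy_wh * charge_efficiency / battery_pack_wh

print(f"~{daily_energy_wh:,.0f} Wh/day of solar -> ~{pack_charges_per_day:.1f} pack charges per day")
```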

Infrastructure

An aircraft’s weight and stall speed play a large part in determining runway length and quality. Small, slow drones need minimal airstrips, if they need them at all. There is no need for traditional air bases.

Parts/Maintenance

The US military prefers “module-based” maintenance. Techs change an entire radar module instead of diagnosing and fixing a certain subcomponent to reduce labor hours and the number of parts in stock.

Many drones would cost as much as a typical module, and there would be no reason to bother with parts or repairs. The need for techs and parts management would be minimal.

Battery-electric powertrains are reliable compared to jet engines and should be able to fly hundreds or thousands of hours before replacement without maintenance.

Training

Fighter pilots are the most valuable rank-adjusted human capital in any military. One great pilot can make a meaningful difference in an entire war by helping to clear the skies. Selection is intense, and training is very slow. Simulators help, but learning to fly a $100+ million fighter jet doesn’t happen overnight.

AI pilots take more effort to train initially but can replicate as needed. The burden for any human pilot/manager will be lower given the narrower mission of drones than multi-role fighters. The low cost of the platforms means both AI and humans can train constantly on real aircraft instead of using simulators. Battles can be live instead of simulated without endangering human pilots, improving the quality of training.

Sortie Rate

Most fighters need full crews to turn the aircraft around and keep it flying. Each airframe only has so many hours without full refits. Many aircraft struggle to fly one sortie per day. A low-maintenance, battery-electric drone with swappable batteries could fly 20-22 hours each day.

This sortie-rate advantage would be especially beneficial for countries like Taiwan. China constantly flies fighters at the edge of Taiwan’s air defense identification zone, which forces Taiwan to send fighters to intercept them, wearing down airframes and pilots. A constant picket of drones would negate this strategy.

Shipping

Munitions, especially bombs, are the biggest logistical challenge after fuel. Manned aircraft tend to drop large bombs that are overkill because of their limited sortie rates and the risk each mission entails. Drones with high sortie rates can use small bombs that make the drone more practical and reduce total tonnage dropped as each target gets the appropriate amount instead of a truck getting vaporized by a 2000 lb bomb.

Which branch should take on the drones?

The Air Force loathes low-performance aircraft and is skeptical of deleting human pilots. Its budget mostly goes towards capabilities that the drone air force isn’t replacing, like deep strike, high-end fighters, or the nuclear umbrella. Deleting these platforms makes little sense when they still provide key capabilities (hedging!) and are in the phase where unit cost is falling. For those reasons, the Air Force is a poor choice to raise the drone force, and its job is to ensure its aircraft are protected from small drones when parked.

Thankfully, the US has four air forces to choose from, three of which already operate high-end aviation (Air Force, Navy, Marines).

The bacteria continue to grow exponentially only within tumors

Tuesday, February 18th, 2025

A University of Massachusetts Amherst-Ernest Pharmaceuticals team has developed a non-toxic bacterial therapy to deliver cancer-fighting drugs directly into tumors:

The team has been fine-tuning the development of non-toxic, genetically engineered strains of Salmonella to target tumors and then control the release of cancer-fighting drugs inside cancer cells. In addition to sparing healthy tissue from damage, this cancer treatment platform is able to deliver orders of magnitude more therapy than the administered dose because the simple-to-manufacture bacteria grow exponentially in tumors.

[…]

Early on in the research, the scientists discovered that it was the bacterial flagella – part of the cell that aids in movement – that enables the bacteria to invade cancer cells. So they engineered a genetic circuit in the bacteria that turns on the production of flagella with a simple, over-the-counter dose of aspirin. Without the turn-on switch provided by salicylic acid, the active metabolic product in the blood after a person takes an aspirin, the bacteria remain dormant in the tumor.

“One core part of this technology is the controlled activation of flagella,” Raman explains. “And the other core part is once the bacteria go inside cancer cells, we engineered them with a suicide circuit. So they rupture on their own and deliver the therapy inside the cancer cell.”

In pre-clinical research with mouse models, the bacteria are injected intravenously. “It goes everywhere, but then the immune system rapidly clears the attenuated bacteria from healthy organ tissue within two days. The bacteria continue to grow exponentially only within tumors during that time. On the third day, we give an over-the-counter dose of aspirin to trigger the bacteria to invade the cancer cells and then deliver the therapy,” Raman says.

Russian Army and FSB Veteran on the Ukraine War

Monday, February 17th, 2025

Valery Shiryaev, an ex-Russian Army & FSB Officer, reveals when Ukraine’s war will end, exposes North Korean troops’ failures in Kursk, and breaks down the flaws of Western tanks in Ukraine, in an interview dubbed using AI voice cloning:

Whether that’s good or bad, progressive or reactionary isn’t the point; it just is necessary

Sunday, February 16th, 2025

Freddie DeBoer notes that liberals cannot admit that past examples of successful immigration have involved migrants conforming to their new culture:

Last fall there was something of a local controversy in my town regarding drive-thru workers with limited English skills. Apparently, a number of my neighbors had taken to Facebook and NextDoor to complain that local fast food restaurants employed recent immigrants whose English skills were so bad, they were incapable of doing their jobs. A particular McDonald’s location drew special ire. As a good progressive defender of immigration, I dismissed this talk as simple xenophobia, and indeed there was no doubt a lot of that. What I read about on a local blog certainly involved language that was, at least, unkind. But in the months since then, I’ve gone through that McDonald’s drive-thru window probably a half-dozen times, and I have to tell you…. I often genuinely can’t understand what the women (all women) working the drive-thru are saying. I honestly can’t, and I spent years teaching English language skills at the college and graduate school level. Most visits are an exercise in frustration. And I have to admit that a job where your primary responsibility is to talk to customers is a job that requires you to speak the native language of the country you’re in with at least a certain minimum of fluency. Whether that’s good or bad, progressive or reactionary isn’t the point; it just is necessary.

Why are female intellectuals crazy?

Saturday, February 15th, 2025

Emil Kirkegaard presents his speculative model of sex differences among people with extreme beliefs, which explains why female intellectuals are crazy:

Women are more centrist in their personality and thus their beliefs than men. They hold views that are more common, or statistically speaking, their standard deviation is smaller for beliefs and their strength of belief. This is just a special case of the nearly universal greater male variance finding.

To move a person to adopt views that are very unlike those held by the rest of society, some kind of psychological push-factor is needed. The main push-factors are intelligence, open-mindedness, or craziness (psychopathology, P factor).

Thus, statistically speaking, women with extreme views need a stronger push factor than men do to attain those views.

Thus, statistically speaking, women with extreme views will average higher open-mindedness, intelligence, and craziness.
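
The selection argument can be made concrete with a small simulation: give both sexes the same distribution of push factors, dampen women’s belief extremity toward the societal center, and then look only at people above an extremity cutoff. The Python sketch below is a minimal illustration under arbitrary assumptions (dampening factor, noise, threshold); the numbers are not Kirkegaard’s:

# Minimal simulation of the selection argument above. The dampening
# factor, noise level, and threshold are arbitrary assumptions chosen
# only to illustrate the logic.
import random

random.seed(0)

def simulate(n, centrism_dampening):
    # Each person has a combined "push factor" (intelligence,
    # open-mindedness, craziness); belief extremity is that push,
    # dampened by regression toward the societal center, plus noise.
    people = []
    for _ in range(n):
        push = random.gauss(0, 1)
        extremity = centrism_dampening * push + random.gauss(0, 0.5)
        people.append((push, extremity))
    return people

men = simulate(200_000, centrism_dampening=1.0)    # assumed: less centrist
women = simulate(200_000, centrism_dampening=0.7)  # assumed: more centrist

THRESHOLD = 2.0  # arbitrary cutoff for holding "extreme" views

def mean_push_among_extreme(people):
    pushes = [push for push, extremity in people if extremity > THRESHOLD]
    return sum(pushes) / len(pushes)

print(round(mean_push_among_extreme(men), 2))    # roughly 2.0 SD above average
print(round(mean_push_among_extreme(women), 2))  # roughly 2.2 SD: a stronger push

Conditioning on the same extremity cutoff, the more centrist group clears it only with a stronger push, which is the whole of the claim.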

Dangerous top secret tests can be conducted there without much scrutiny or oversight

Friday, February 14th, 2025

The idea behind a facility like Area 51, Annie Jacobsen reminds us (in Area 51), is that dangerous top secret tests can be conducted there without much scrutiny or oversight:

To this end, there is no shortage of death woven into the uncensored history of Area 51. One of the most dangerous tests ever performed there was Project 57, the dirty bomb test that took place five miles northwest of Groom Lake, in a subparcel called Area 13. And yet what might have been the one defensible, positive outcome in this otherwise shockingly outrageous test — namely, lessons gleaned from its cleanup — was ignored until it was too late.

Unlike the spy plane projects at Groom Lake, where operations tend to have clear-cut beginnings and ceremonious endings, Project 57 was abandoned midstream. If the point of setting off a dirty bomb in secret was to see what would happen if an airplane carrying a nuclear bomb crashed into the earth near where people lived, it follows that serious efforts would then be undertaken by the Atomic Energy Commission to learn how to clean up such a nightmare scenario after the catastrophe occurs. No such efforts were initially made.

Instead, about a year after setting off the dirty bomb, the Atomic Energy Commission put a barbed-wire fence around the Area 51 subparcel, marked it with HAZARD/ DO NOT ENTER/ NUCLEAR MATERIAL signs, and moved on to the next weapons test. The bustling CIA facility five miles downwind would be relatively safe, the nuclear scientists and the weapons planners surmised. Alpha particles are heavy and would rest on the topsoil after the original dust cloud settled down. Furthermore, almost no one knew about the supersecret project, certainly not the public, so who would protest? The closest inhabitants were the rank and file at the CIA’s Groom Lake facility next door, and they also knew nothing of Project 57. The men there followed strict need-to-know protocols, and as far as the commission was concerned, all anyone at Area 51 needed to know was to not venture near the barbed-wire fence marking off Area 13.

And yet the information gleaned from a cleanup effort would have been terribly useful, as was revealed eight years and eight months after Project 57 unfurled. On the morning of January 17, 1966, a real-life dirty bomb crisis occurred over Palomares, Spain. A Strategic Air Command bomber flying with four armed hydrogen bombs — with yields between 70 kilotons and 1.45 megatons — collided midair with a refueling tanker over the Spanish countryside.

On the morning of the accident, an Air Force pilot and his six-man crew were participating in an exercise that was part of Operation Chrome Dome, something that had begun in the late 1950s as part of Strategic Air Command.

[…]

That morning, the bomber lined up with the tanker and had just begun refueling when, in the words of pilot Larry Messinger, “all of a sudden, all hell seemed to break loose” and the two aircraft collided. There was a massive explosion and the men in the fuel tanker were instantly incinerated. Somehow Messinger, his copilot, the instructor pilot, and the navigator managed to eject from the airplane carrying the bombs. Their parachutes deployed, and the men floated down, landing in the sea. The four nuclear bombs — individually powerful enough to destroy Manhattan — also had parachutes, two of which did not deploy. One parachuted bomb landed gently in a dry riverbed and was later recovered relatively intact. But when the two bombs without parachutes hit the earth, their explosive charges detonated, breaking open the nuclear cores. Nuclear material was released at Palomares in the form of aerosolized plutonium, which then spread out across 650 acres of Spanish farmland — consistent with dispersal patterns from the Project 57 dirty bomb test. The fourth bomb landed in the sea and became lost. Palomares was then a small fishing village and farming community located on the Mediterranean Sea. As fortune would have it, January 17 was the Festival of Saint Anthony, the patron saint of Palomares, which meant most people in the village were at church that day and not out working in the fields.

[…]

The daily brief said nothing about widespread plutonium dispersal or about the lost thermonuclear bomb. Only that the “16th Nuclear Disaster Team had been dispatched to the area.” The “16th Nuclear Disaster Team” sounded official enough, but if fifteen nuclear disaster teams had preceded this one or existed concurrently, no record of any of them exists in the searchable Department of Energy archives. In reality, the group was ad hoc, meaning it was put together for the specific purpose of dealing with the Palomares incident. An official nuclear disaster response team did not exist in 1966 and would not be created for another nine years, until 1975, when retired Brigadier General Mahlon E. Gates, then the manager of the Nevada Test Site, put together the Nuclear Emergency Search Team, or NEST.

In 1966, the conditions in Palomares, Spain, were strikingly similar to the conditions at the Nevada Test Site in terms of geology. Both were dry, hilly landscapes with soil, sand, and wind shear as significant factors to deal with. But considering, with inconceivable lack of foresight, the Atomic Energy Commission had never attempted to clean up the dirty bomb that it had set off at Area 13 nine years before, the 16th Nuclear Disaster Team was, essentially, working in the dark.

Eight hundred individuals with no hands-on expertise were sent to Palomares to assist in the cleanup efforts there. The teams improvised. One group secured the contaminated area and prepared the land to remove contaminated soil. A second group worked to locate the lost thermonuclear bomb, called a broken arrow in Defense Department terms. The group cleaning up the dispersed plutonium included “specialists and scientists” from the Los Alamos Laboratory, the Lawrence Radiation Laboratory, Sandia Laboratories, Raytheon, and EG&G. It was terribly ironic. The very same companies who had engineered the nuclear weapons and whose employees had wired, armed, and fired them were now the companies being paid to clean up the deadly mess. This was the military-industrial complex in full swing.

For the next three months, workers labored around the clock to decontaminate the site of deadly plutonium. By the time the cleanup was over, more than fourteen hundred tons of radioactive soil and plant life were excavated and shipped to the Savannah River plant in South Carolina for disposal. The majority of the plutonium dispersed on the ground was accounted for, but the Defense Nuclear Agency eventually conceded that the extent of the plutonium particles scattered by wind, carried as dust, and ingested by earthworms and excreted somewhere else “will never be known.”

As for the missing hydrogen bomb, for forty-four days the Pentagon refused to admit it was lost despite the fact that it was widely reported as being missing. “I don’t know of any missing bomb,” one Pentagon official told the Associated Press. Only after the bomb was recovered from the ocean floor did the Pentagon admit that it had in fact been lost.

The nuclear accidents did not stop there. Two years and four days later there was another airplane crash involving a Strategic Air Command bomber and four nuclear bombs. On January 21, 1968, an uncontrollable fire started on board a B-52G bomber during a secret mission over Greenland. Six of the seven crew members bailed out of the burning airplane, which crested over the rooftops of the American air base at Thule and slammed into the frozen surface of North Star Bay. The impact detonated the high explosives in at least three of the four thermonuclear bombs — similar to exploding multiple dirty bombs — spreading radioactive plutonium, uranium, and tritium over a large swath of ice. A second fire started at the crash site, consuming bomb debris, wreckage from the airplane, and fuel. After the inferno burned for twenty minutes the ice began to melt. One of the bombs fell into the bay and disappeared beneath the frozen sea. In November of 2008, a BBC News investigation found that the Pentagon ultimately abandoned that fourth nuclear weapon after it became lost.

Once again, an ad hoc emergency group was put together; there was still no permanent disaster cleanup group. This time five hundred people were involved. The conditions were almost as dangerous as the nuclear material. Temperatures fell to –70 degrees Fahrenheit, and winds blew at ninety miles per hour. Equipment froze. In a secret SAC document, made public by a Freedom of Information Act request in 1989, the Air Force declared their efforts would be nominal, “a cleanup undertaken as good housekeeping measures,” with officials anticipating the removal of radioactive debris “to equal not less than 50%” of the total of what was there. For eight months, a crew calling themselves the Dr. Freezelove Team worked around the clock. When they were done, 10,500 tons of radioactive ice, snow, and crash debris was airlifted out of Greenland and flown to South Carolina for disposal.

[…]

After the Nuclear Test Ban Treaty of 1963, testing had moved underground, but often these underground tests “vented,” releasing huge plumes of radiation from fissures in the earth.