Castle design assumes the enemy will reach the walls

December 1st, 2022

The battlements along the top of a castle wall were designed to allow a small number of defenders to exchange fire effectively with a large number of attackers, and in so doing to keep those attackers from being able to “set up shop” beneath the walls:

The goal is to prevent the enemy from operating safely at the wall’s base, not to prohibit approaches to the wall. These defenses simply aren’t designed to support that much fire, which makes sense: castle garrisons were generally quite small, often dozens or a few hundred men. While Hollywood loves sieges where all of the walls of the castle are lined with soldiers multiple ranks deep, more often the problem for the defender was having enough soldiers just to watch the whole perimeter around the clock (recall the example at Antioch: Bohemond needed only one traitor to access Antioch because one of its defensive towers was regularly defended by only one man at night). It is actually not hard to see this merely by looking at the battlements: notice in the images here so far how spaced out the merlons of the crenellations often are. The idea here isn’t maximizing fire for a given length of wall but protecting a relatively small number of combatants on the wall. As we’ll see, that is a significant design choice: castle design assumes the enemy will reach the walls and aims to prevent escalade once they are there; later in this series we’ll see defenses designed to prohibit effective approach itself.

The self-described dark elf who yearns for a king

November 29th, 2022

Andrew Prokop of Vox recently spoke with Curtis Yarvin, the monarchist, anti-democracy blogger that many of us still remember as Mencius Moldbug:

When I first asked to speak with Yarvin, he requested that I prove my “professional seriousness as a current historian” by “reading or at least skimming” three books, and I complied. One of them, Public Opinion by Walter Lippmann — a classic of the journalism school canon — describes how people can respond when their previous beliefs about how the world works are called into question.

“Sometimes, if the incident is striking enough, and if he has felt a general discomfort with his established scheme, he may be shaken to such an extent as to distrust all accepted ways of looking at life, and to expect that normally a thing will not be what it is generally supposed to be,” Lippmann wrote. “In the extreme case, especially if he is literary, he may develop a passion for inverting the moral canon by making Judas, Benedict Arnold, or Caesar Borgia the hero of his tale.”

There, I thought of Yarvin — the self-described dark elf who yearns for a king.

Among the subjects was 17-year-old Ted Kaczynski

November 28th, 2022

I remember first finding out about the Unabomber in 1995 and being shocked that I hadn’t heard about a real-life mad-scientist supervillain mysteriously blowing up professors and industrialists.

I recently watched Unabomber: In His Own Words — in which Ted Kaczynski sounds like a bitter nerd, not Doctor Doom — and learned that his origin story involves another character who could have come out of a pulp novel, one Henry Murray:

During World War II, he left Harvard and worked as a lieutenant colonel for the Office of Strategic Services (OSS). James Miller, who was in charge of the selection of secret agents at the OSS during World War II, said the situation test was used by the British War Office Selection Board and the OSS to assess potential agents.

In 1943 Murray helped complete Analysis of the Personality of Adolph Hitler, commissioned by OSS boss Gen. William “Wild Bill” Donovan. The report was done in collaboration with psychoanalyst Walter C. Langer, Ernst Kris of the New School for Social Research, and Bertram D. Lewin of the New York Psychoanalytic Institute. The report drew on many sources to profile Hitler, including informants such as Ernst Hanfstaengl, Hermann Rauschning, Princess Stephanie von Hohenlohe, Gregor Strasser, Friedelind Wagner, and Kurt Ludecke. The groundbreaking study pioneered offender profiling and political psychology. In addition to predicting that Hitler would choose suicide if defeat for Germany were near, Murray’s collaborative report stated that Hitler was impotent as far as heterosexual relations were concerned and that there was a possibility that Hitler had participated in a homosexual relationship. The report stated: “The belief that Hitler is homosexual has probably developed (a) from the fact that he does show so many feminine characteristics, and (b) from the fact that there were so many homosexuals in the Party during the early days and many continue to occupy important positions. It is probably true that Hitler calls Albert Forster ‘Bubi’, which is a common nickname employed by homosexuals in addressing their partners.”

In 1947, he returned to Harvard as a chief researcher, where he lectured and, with others, established the Psychological Clinic Annex.

From late 1959 to early 1962, Murray was responsible for unethical experiments in which he used twenty-two Harvard undergraduates as research subjects. Among other goals, the experiments sought to measure individuals’ responses to extreme stress. The unwitting undergraduates were subjected to what Murray called “vehement, sweeping and personally abusive” attacks. Assaults specifically tailored to their egos, cherished ideas, and beliefs were used to cause high levels of stress and distress. The subjects were then repeatedly shown recorded footage of their reactions to this verbal abuse.

Among the subjects was 17-year-old Ted Kaczynski, a mathematician who went on to be known as the ‘Unabomber’, a domestic terrorist who targeted academics and technologists for 17 years. Alston Chase’s book Harvard and the Unabomber: The Education of an American Terrorist connects Kaczynski’s abusive experiences under Murray to his later criminal career.

In 1960, Timothy Leary started research in psychedelic drugs at Harvard, which Murray is said to have supervised.

Some sources have suggested that Murray’s experiments were part of, or indemnified by, the US Government’s research into mind control known as the MKUltra project.

How the Billboard Hot 100 Lost Interest in the Key Change

November 27th, 2022

Chris Dalla Riva, a musician from New Jersey who works on analytics and personalization at Audiomack, explains how the Billboard Hot 100 lost interest in the key change:

When looking at every Billboard Hot 100 number one hit between 1958 and 1990, we see that the key of G major was a very popular key. This was because the key of G major is easy to work with on the guitar and piano, the two most popular compositional instruments during these years. In fact, across the decades, we see that keys that are convenient to use on these instruments (i.e. C major, G major, D major) are more popular than others that are less convenient to use, like B major and Gb major.

But songs don’t have to be in a single key. In fact, 23 percent of number one hits between 1958 and 1990 were in multiple keys, like “Man in the Mirror.”


The act of shifting a song’s key up either a half step or a whole step (i.e. one or two notes on the keyboard) near the end of the song was the most popular key change for decades. In fact, 52 percent of key changes found in number one hits between 1958 and 1990 employ this change.


What’s odd is that after 1990, key changes are employed much less frequently, if at all, in number one hits.

What’s doubly odd is that around the same time, the keys that number one hits are in change dramatically too. In fact, songwriters begin using all keys at comparable rates.


So what is going on? Both of the shifts can be tied back to two things: the rise of hip-hop and the growing popularity of digital music production, or recording on computers.


Hip-hop stands in stark contrast to nearly all genres that came before because it puts more emphasis on rhythm and lyricism than on melody and harmony.


As hip-hop grew in popularity, the use of computers in recording also exploded. Whereas the guitar and piano lend themselves to certain keys, the computer is key-agnostic. If I record a song in the key of C major into digital recording software, like Logic or Pro Tools, and then decide I don’t like that key, I don’t have to play it again in that new key. I can just use my software to shift it into that different key. I’m no longer constrained by my instrument.
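That key-agnostic shift really is just arithmetic; a minimal sketch in Python over MIDI note numbers (the riff and helper names are illustrative, not taken from any real production tool):

```python
# Shifting a recording's key digitally is a uniform shift of every pitch:
# +1 semitone = up a half step, +2 = up a whole step.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(notes, semitones):
    """Shift every MIDI note number by the same interval."""
    return [n + semitones for n in notes]

def name(note):
    """Pitch class of a MIDI note number (60 = middle C)."""
    return NOTE_NAMES[note % 12]

# A short melody recorded in C major...
c_major_riff = [60, 62, 64, 65, 67]
# ...moved up a whole step into D major without re-recording anything.
d_major_riff = transpose(c_major_riff, 2)

print([name(n) for n in c_major_riff])  # ['C', 'D', 'E', 'F', 'G']
print([name(n) for n in d_major_riff])  # ['D', 'E', 'F#', 'G', 'A']
```

The classic end-of-song modulation is this same operation applied from one section onward rather than to the whole recording.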

Furthermore, digital recording software lends itself to a new style of songwriting that isn’t as inviting to key changes within a recording.


Because songwriters in the pre-digital age were writing linearly, shifting the key in a new section was a natural compositional technique.

There are no known commodity resources in space that could be sold profitably on Earth

November 26th, 2022

There are no known commodity resources in space that could be sold profitably on Earth, Casey Handmer explains:

On Earth, bulk cargo costs are something like $0.10/kg to move raw materials or shipping containers almost anywhere with infrastructure. Launch costs are more like $2000/kg to LEO, and $10,000/kg from LEO back to Earth.


Let’s consider a representative list of the most expensive materials in the world. In descending order, they are:

  • Antimatter, currently $62.5 trillion/g.
  • Californium, $25m/g.
  • Diamond, $55k/g.
  • Tritium, $30k/g.
  • Taaffeite, $20k/g.
  • Helium 3, $15k/g.
  • Painite, $6k/g.
  • Plutonium, $4k/g.
  • LSD, $3k/g.
  • Cocaine, $236/g.
  • Heroin, $130/g.
  • Rhino horn, $110/g.
  • Crystal meth, $100/g.
  • Platinum, $60/g.
  • Rhodium, $58/g.
  • Gold, $56/g.
  • Saffron, $11/g.

The previous ballpark estimate for transport costs was $100,000/kg, or $100/g. Since I want to be inclusive, I’ll include everything down to saffron in the list above, whose cost is roughly equal to the current LEO-surface transport cost.
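Taking the quoted prices at face value, a quick filter shows which materials are worth more per gram than that transport cost alone (the dictionary and threshold below are just a transcription of the list for illustration):

```python
# Prices per gram in USD, transcribed from the list above.
prices = {
    "antimatter": 62.5e12, "californium": 25e6, "diamond": 55_000,
    "tritium": 30_000, "taaffeite": 20_000, "helium-3": 15_000,
    "painite": 6_000, "plutonium": 4_000, "LSD": 3_000,
    "cocaine": 236, "heroin": 130, "rhino horn": 110,
    "crystal meth": 100, "platinum": 60, "rhodium": 58,
    "gold": 56, "saffron": 11,
}

TRANSPORT_COST_PER_G = 100  # the $100,000/kg ballpark, in $/g

# Materials whose market price alone beats the transport cost, before
# accounting for any mining, refining, or capital expense in space.
viable = {m: p for m, p in prices.items() if p > TRANSPORT_COST_PER_G}
print(sorted(viable, key=viable.get, reverse=True))
```

Even this generous screen leaves only a dozen candidates: everything from crystal meth down, including platinum and gold, fails before a single gram is mined.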


None of the products represent large markets, due to their prohibitive price or relative scarcity. As a result, they are subject to substantial price elasticity depending on supply. For example, the global annual market for Helium-3 is about $10m. Double the supply, halve the price, and the net revenue is still about the same. No-one seriously thinks that Lunar mining infrastructure can be built for less than many billions of dollars, so even at a price of $100,000/kg, annual demand needs to exceed hundreds of tons to ensure adequate revenue and price stability.

Tritium, helium-3, platinum and antimatter represent speculative future markets, particularly where increased supply could help develop an industry based on, say, fusion, exotic batteries, or a bunch of gamma rays. If fusion-induced demand for helium-3 reaches a point where annual demand has climbed by three orders of magnitude, then I am willing to revisit this point. But current construction rates of cryogenically cooled bolometers are not adequate to fund Lunar mine development, and solar PV electricity production has every indication of destroying competing generation methods, including fusion.

Some relatively expensive minerals are only expensive because low levels of industrial demand have failed to develop efficient supply chains. If demand increases, new refining mechanisms are invariably developed which substantially lower the price. A salient example here is rare platinum group metals.

Space-based solar power is not a thing

November 25th, 2022

Space-based solar power is not a thing, Casey Handmer argues:

As Elon Musk has concisely pointed out, the fundamental problem with space-based solar power is that it’s obtaining a commodity, power, somewhere where it’s expensive and selling it somewhere where it’s cheap. This is not a good business. Indeed, it might make more sense to beam power from Earth to space stations, if they needed it.


What are the extra costs? Broadly, they fall into the following categories: Transmission losses, thermal losses, logistics costs, and space technology penalty. Individually, any one of these issues cancels out the benefits, and combined they leave space-based solar power at least three orders of magnitude more expensive than the terrestrial equivalents.


For a baseline comparison, consider a GW-scale power station. For terrestrial solar, this consists of standard panels on single axis mounts, covering about 10 square miles. For the space-based solar case, an identical area of land is covered instead with an antenna, a mesh of conductive wire held above the ground, to absorb the transmitted microwaves and convert them to electricity. An identical area implies similar overall energy fluxes, which is correct.


Transmission losses: The process of converting sunlight to electricity is about 20% efficient, depending on the type of panel – and this is a loss common to both systems. In addition, the space-based system has to convert the electrical power back into EM radiation, which is converted back into power on Earth. Proponents think that it should be possible to perform each conversion with 90% efficiency, but even beam-forming that well is not possible without a much larger antenna. My personal opinion is that the end-to-end microwave link efficiency would be lucky to exceed 40% efficiency, which erodes the competitive advantage substantially.
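The dispute over link efficiency is just multiplication of stages; a back-of-envelope sketch (the individual stage efficiencies below are assumptions, chosen to bracket the optimistic and pessimistic cases):

```python
from math import prod

def link_efficiency(stages):
    """End-to-end efficiency is the product of every conversion stage."""
    return prod(stages)

# Proponents' case: DC-to-microwave, beam capture, microwave-to-DC,
# each assumed ~90% efficient.
optimistic = link_efficiency([0.90, 0.90, 0.90])

# A more skeptical case: ~80% conversions and lossy beam-forming.
pessimistic = link_efficiency([0.80, 0.65, 0.80])

print(f"optimistic:  {optimistic:.0%}")   # 73%
print(f"pessimistic: {pessimistic:.0%}")  # 42%
```

Even the optimistic chain throws away a quarter of the power before it reaches the grid; the skeptical one lands right around the 40% figure above.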

Thermal losses: The conversion efficiency of the high-power microwave transmitter has a nasty side-effect, namely that what isn’t transmitted is wasted as heat, and that heat has to go somewhere. If the transmitter is 80% efficient (which is being very generous), then it will have to radiate 200MW of thermal power. This is a different problem to the thermal losses in the solar panels, which are more like 4GW but spread over a huge area that is in radiative thermal equilibrium with its environment. Instead, the microwave power electronics will need a huge cooling system. If the electronics can operate at 350K, then the radiator power will be 850W/m^2, so the radiator will need a total area of 23ha, comparable to the total size of the solar array and the microwave transmission antenna. In contrast to the usual claims of perfect scaling efficiency with solar arrays in microgravity, a large space-based solar power system will also need a huge antenna and cooling system, which don’t scale quite as nicely.
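The radiator figures can be checked against the Stefan–Boltzmann law; a quick verification, assuming an ideal single-sided black-body radiator:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(waste_heat_w, temp_k):
    """Area of an ideal black-body radiator needed to reject waste heat."""
    flux = SIGMA * temp_k**4  # W/m^2 radiated per unit area
    return waste_heat_w / flux

# A 1 GW station with an 80%-efficient transmitter turns 20% of the
# input power, 200 MW, into heat at the transmitter.
waste_heat_w = 0.20 * 1e9
print(f"flux at 350 K: {SIGMA * 350**4:.0f} W/m^2")                          # ~851 W/m^2
print(f"radiator area: {radiator_area_m2(waste_heat_w, 350) / 1e4:.1f} ha")  # ~23.5 ha
```

Both numbers match the text: roughly 850 W/m² at 350 K, and a radiator in the neighborhood of 23 hectares.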

Logistics costs: Consider transportation cost. Today, SpaceX has crushed the orbital transport market with a price of around $2000/kg. Compare this to the worldwide network of intermodal containers, which can transport anything in 20T units almost anywhere on Earth for about $0.05/kg. Even if all of Elon Musk’s wildest Starship dreams come true, transport costs will dominate the total capex of any space-based solar system, by many orders of magnitude. A factor of 10x improvement in resource does not make up for transport costs which are more than 10,000x higher. If logistics costs are more than 0.1% of current solar farm costs (they’re more like 20%), then increased transport costs completely negate the improved solar resource. It’s not even close.

One further aspect of logistics bears closer examination. In our baseline case, we considered an array of panels strung up on posts, compared to a mesh of wire strung up on posts. It turns out that (as of 2019) a substantial fraction of the overall cost of a solar PV station is the mounting hardware, which is also required by the microwave receiver. So if the mounting hardware costs 20% of the overall deployment cost for terrestrial solar, that places a strong upper bound on total system cost allowable for space-based. In other words, does anyone seriously believe that the microwave receiving antenna could cost 20% of the overall system capex, the other 80% to be used to launch thousands of tonnes of high performance gear into space? Put another way, the most cost-effective way to get a GW of power out of a microwave receiving antenna is obviously to tear down the wire mesh and sling up a bunch of solar panels, which can be ordered with a lead time of weeks from any of dozens of suppliers worldwide with widely available financing.

Finally, the space technology penalty. On Earth, we are living in an extremely exciting time for energy. Hundreds of major companies are competing on development cycles measuring only months to provide solar panels in an industry that’s growing at 20% a year. As a result, costs have fallen by 10% a year, and in the last few years, solar and batteries have neared, equaled, then utterly crushed all other forms of electricity generation. Initially, this process occurred on remote islands with high fuel import costs. Then the sunnier parts of the US. The rampage continues northwards at about 200 miles a year. The industry can sustain 30% deployment growth rate worldwide for another decade at least, before saturation occurs.

Today, I can pick up the phone and any of dozens of contractors in the LA market can fill hundreds of acres with panels, each built to survive 30 years under the harsh sun and sized perfectly for deployment using the latest tech, which is men in orange vests with forklifts.

In contrast, space technology has not benefited from such breakneck levels of growth, demand, and investment. Prohibitive maintenance costs demand perfect performance, and low rates of deployment ensure a slow innovation feedback loop. The result is that none of the current incredibly cheap solar panels could work in space, where thermal and vacuum, not to mention stresses of launch, would destroy their operation in days.

Instead, space operators rely on more traditional supply chains, with the result that building anything for space takes years and costs billions. Right now, a billion dollars invested will buy about 100MW of solar panels on the Earth, or 100kW of solar panels in space. This is a factor of 1000, and it also erases the advantages of more sunlight in space.

These four elements, transmission, thermal, logistics, and space technology, inflate the relative cost of space-based solar power to the point where it simply cannot compete with terrestrial solar. It’s not a matter of 5% here or there. It’s literally thousands of times more expensive. It’s not a thing.

The totokia was intended to peck holes in skulls

November 24th, 2022

The Tusken Raiders in the original Star Wars wield a peculiar weapon that Luke calls a gaffi stick. It turns out that the gaderffii is based on the Fijian totokia:

According to Fiji material culture scholar Fergus Clunie who describes it as a beaked battle hammer (in Fijian Weapons and Warfare, 1977: p. 55), “…the totokia was intended to ‘peck’ holes in skulls.” The weight of the head of the club was concentrated in the point of the beak of the weapon or kedi-toki (toki to peck; i toki: a bird’s beak). The totokia “…delivered a deadly blow in an abrupt but vicious stab, not requiring the wide swinging arc demanded by the others.” (Yalo i Viti. A Fiji Museum Catalogue, 1986: p. 185) It was a club that could be used in open warfare or to finish-off or execute warriors on the battlefield.

Totokia and Gaffi Stick

Mechanochemical breakthrough unlocks cheap, safe, powdered gases

November 23rd, 2022

Nanotechnology researchers based at Deakin University’s Institute for Frontier Materials claim to have found a super-efficient way to mechanochemically trap and hold gases in powders, which could radically reduce energy use in the petrochemical industry, while making hydrogen much easier and safer to store and transport:

Mechanochemistry is a relatively recently coined term, referring to chemical reactions that are triggered by mechanical forces as opposed to heat, light, or electric potential differences. In this case, the mechanical force is supplied by ball milling – a low-energy grinding process in which a cylinder containing steel balls is rotated such that the balls roll up the side, then drop back down again, crushing and rolling over the material inside.

The team has demonstrated that grinding certain amounts of certain powders with precise pressure levels of certain gases can trigger a mechanochemical reaction that absorbs the gas into the powder and stores it there, giving you what’s essentially a solid-state storage medium that can hold the gases safely at room temperature until they’re needed. The gases can be released as required, by heating the powder up to a certain point.


This process, for example, could separate hydrocarbon gases out from crude oil using less than 10% of the energy that’s needed today. “Currently, the petrol industry uses a cryogenic process,” says Chen. “Several gases come up together, so to purify and separate them, they cool everything down to a liquid state at very low temperature, and then heat it all together. Different gases evaporate at different temperatures, and that’s how they separate them out.”


“The energy consumed by a 20-hour milling process is US$0.32,” reads the paper. “The ball-milling gas adsorption process is estimated to consume 76.8 kJ/s to separate 1,000 liters (220 gal) of olefin/paraffin mixture, which is two orders of magnitude less than that of the cryogenic distillation process.”


Chen tells us the powder can store a hydrogen weight percentage of around 6.5%. “Every one gram of material will store about 0.065 grams of hydrogen,” he says. “That’s already above the 5% target set by the US Department of Energy. And in terms of volume, for every one gram of powder, we wish to store around 50 liters (13.2 gal) of hydrogen in there.”

Indeed, should the team prove these numbers, they’d represent an instant doubling of the best current solid-state hydrogen storage mass fractions, which, according to Air Liquide, can only manage 2-3%.

Domes are over-rated

November 22nd, 2022

Any article about Moon or Mars bases needs to have a conceptual drawing of habitation domes, but domes have significant drawbacks, Casey Handmer reminds us:

Domes feature compound curvature, which complicates manufacturing. If assembled from triangular panels, junctions contain multiple intersecting acute angled parts, which makes sealing a nightmare. In fact, even residential dome houses are notoriously difficult to insulate and seal! A rectangular room has 6 faces and 12 edges, which can be framed, sealed, and painted in a day or two. A dome room has a new wall every few feet, all with weird triangular faces and angles, and enormously increased labor overhead.

It turns out that the main advantage of domes – no internal supports – becomes a major liability on Mars. While rigid geodesic domes on Earth are compressive structures, on Mars, a pressurized dome actually supports its own weight and then some. As a result, the structure is under tension and the dome is attempting to tear itself out of the ground. Since lifting force scales with area, while anchoring force scales with circumference, domes on Mars can’t be much wider than about 150 feet, and even then would require extensive foundation engineering.
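The area-versus-circumference argument is easy to make concrete. A rough sketch, assuming an internal pressure of about 30 kPa (an assumed habitat pressure; Mars ambient pressure is negligible by comparison):

```python
from math import pi

PRESSURE_PA = 30_000  # assumed habitat pressure (~0.3 atm)

def uplift_force_n(radius_m):
    """Net lifting force on a pressurized dome scales with footprint area."""
    return PRESSURE_PA * pi * radius_m**2

def anchor_load_n_per_m(radius_m):
    """Hold-down force required per meter of foundation (grows linearly with r)."""
    return uplift_force_n(radius_m) / (2 * pi * radius_m)

for r in (5, 23, 75):
    print(f"r = {r:2d} m: uplift {uplift_force_n(r) / 1e6:6.1f} MN, "
          f"anchor {anchor_load_n_per_m(r) / 1e3:5.0f} kN/m")
```

At the ~150-foot diameter mentioned above (r ≈ 23 m), every meter of foundation must already resist about 350 kN of uplift, and tripling the radius triples that per-meter load.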

Once a dome is built and the interior occupied, it can’t be extended. Allocation of space within the dome is zero sum, and much of the volume is occupied by weird wedge-shaped segments that are hard to use. Instead, more domes will be required, but since they don’t tessellate, tunnels of some kind would be needed to connect to other structures. Each tunnel has to mate with curved walls: a rigid structure that must accept variable mechanical tolerances, be broad enough to enable large vehicles to pass, yet narrow enough to enable a bulkhead to be sealed in the event of an inevitable seal failure. Since it’s a rigid structure, it has to be structurally capable of enduring pressure cycling across areas with variable radii of curvature without fatigue, creep, or deflection mismatch.

Does this sound like an engineering nightmare? High tolerances, excessive weight, finicky foundations which are a single point of failure, major excavation, poor scaling, limited interior space, limited local production capability. At the end of the day, enormous effort will be expended to build a handful of rather limited structures with fundamental mechanical vulnerabilities, prohibitively high scaling costs, and no path to bigger future versions.

via Domes are over-rated – Casey Handmer’s blog.

The sort of Life Support System required to nourish a generation ship to fly through space for millennia is beyond our current capabilities

November 21st, 2022

No life support system miracles are required to keep humans alive on Mars in the near future, Casey Handmer argues:

A common criticism of ambitious space exploration plans, such as building cities on Mars, is that life support systems (LSS) are inadequate to keep humans alive, ergo the whole idea is pointless. As an example, the space shuttle LSS could operate for about two weeks. The ISS LSS operates indefinitely but requires regular replenishment of stores launched from Earth, and regular and intense maintenance. Finally, all closed loop LSS, both conceptual and built, are incredibly complex pieces of machinery, and complexity tends to be at odds with reliability. The general consensus is that the sort of LSS required to nourish a generation ship to fly through space for millennia is beyond our current capabilities.

No matter how big the rocket, supplies launched to Mars are finite and will eventually be exhausted. These supplies include both bulk materials like oxygen or nitrogen, and replacement parts for machinery. This doesn’t bode well. Indeed, much of the dramatic tension in The Martian is due precisely to the challenges of getting a NASA-quality LSS to keep someone alive for much longer than originally intended.


On Earth, we breathe a mixture of nitrogen and oxygen, with bits of argon, water vapor, CO2, and other stuff mixed in. The LSS has to scrub CO2, regenerate oxygen, condense water vapor evaporated by our moist lungs, and filter out toxic contaminants such as ozone and hydrazine.

With breathing gas sorted out, humans also drink water, consume food, and excrete waste. For extended habitation, these needs also need to be addressed by the LSS.

On Earth, these various elemental and chemical cycles are produced and buffered by the immensely large natural environment. I don’t think anyone believes that a compact biological regeneration system is adequate to meet the needs of a growing city on Mars. Biosphere 2 had a really good go at this and failed for a variety of reasons. A major one was complexity. If the LSS depends on the good will of tonnes of microbes, most of which are undescribed by science, it is very easy to have a bad day.

The alternative is a physical/chemical system. Much simpler, it employs a glorified air conditioning system to process the air and recycle/sanitize waste products. Something like this exists on every spacecraft, and submarine, ever built. The difficulty arises when a simple, robust machine that is 90% efficient is asked to perform at 99.999% efficiency.
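The gap between a 90% and a 99.999% closed loop is easy to quantify. A sketch of the annual make-up mass for one person’s water, with an assumed consumption figure:

```python
DAILY_WATER_KG = 3.5  # assumed per-person water consumption, kg/day

def annual_makeup_kg(loop_closure, daily_use_kg=DAILY_WATER_KG):
    """Mass lost per year that must come from stores or local extraction."""
    return daily_use_kg * (1 - loop_closure) * 365

for closure in (0.90, 0.99, 0.99999):
    print(f"{closure:.3%} closed: {annual_makeup_kg(closure):9.3f} kg/person/year")
```

The simple 90% machine loses on the order of 100 kg of water per person per year. On Mars that shortfall can be made up by mining the environment; on a generation ship it must all come from onboard stores, which is why near-perfect closure is demanded there.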


Once on the surface, there is an entire planet of atoms ready to harvest. Rocky planets such as the Earth or Mars are, to a physicist, a giant pile of iron atoms encapsulated by a giant pile of oxygen atoms, with other stuff in the gaps. Nearly all rocks, plus water, contain more oxygen than any other element. The Moon and Mars have a lot of water if one knows where to look. Nitrogen is another issue but does exist in the Martian atmosphere. The upshot is that the LSS on Mars doesn’t have to be closed loop. It can depend on constant air mining or environmental extraction to make up for losses, leaks, and inefficiencies. The machinery can be relatively simple, robust, and easy to maintain. The ISS LSS is, after all, 1980s technology at best.

Underground construction is basically unknown except for nuclear bunkers

November 20th, 2022

Tunnels are a staple of both science fiction and popular journalism regarding human habitations on the Moon, Mars, or other rocky places, Casey Handmer notes:

They’re fun to write about and interesting to put on screen. I’ve lost count of the times I’ve seen beautifully illustrated Mars city maps featuring a hexagonal grid of domes connected by tunnels. On a visual level, it certainly ticks all the right boxes.

And yet, while I’ve wasted years of my life on real estate websites I’ve never seen a subterranean house on the market. They do exist, if you want a converted ICBM bunker or limestone cave, but they’re a definite rarity.


The simplest explanation is that digging holes, particularly really deep ones, is very energetically intensive and expensive. The cost of building a road tunnel works out to about $100,000 per meter, or equivalent to a stack of Hamiltons of the same length! For comparison, $100,000 will buy the materials and labor for a respectable manufactured home, or substantial renovations.
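The stack-of-Hamiltons comparison checks out: a US banknote is roughly 0.11 mm thick (the only assumed figure here), so a meter of stacked tens is on the order of $100,000:

```python
BILL_THICKNESS_M = 0.00011  # a US banknote is ~0.11 mm thick (assumed figure)
BILL_VALUE_USD = 10         # a Hamilton

bills_per_meter = 1 / BILL_THICKNESS_M
value_per_meter = bills_per_meter * BILL_VALUE_USD
print(f"${value_per_meter:,.0f} per meter of stacked tens")  # ~ $90,909
```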

Indeed, on Earth, underground construction is basically unknown except for nuclear bunkers. These have two powerful reasons to accept the cost and inconvenience: unlimited sweet DoD money, and surviving really big explosions.

Why build underground in space? The usual explanation is to provide shielding against galactic cosmic rays, or micrometeorites.

It is true that tunnels deep underground are relatively safe from both, and also well thermally insulated. But as I discussed in the blog on space radiation, relatively little shielding is necessary even in areas where people spend a lot of time, such as sleeping areas. And even if that works out to a meter or two of rock, it’s orders of magnitude less effort to drop sandbags on the roof of a structure built on the surface than to dig a hole of the necessary size deep underground.

Micrometeorites are not a concern on Mars, which has a thin atmosphere, and can be well shielded on the Moon with a thin blanket of loose rubble.

If there’s a central point to my blogs on space architecture, it’s that our cities and houses on Mars will look and feel a lot more like regular houses on Earth, and for the same reasons. It may not be very exciting, but the most important consideration for design and construction, on Earth or in space, is expedience. Given the relative scarcity of human labor in space cities, structures will have to maximize usable area and minimize effort even more than on Earth. Instead of tunnels, think warehouses and aircraft hangars! At least they can have natural light.

The Moon is a Harsh Mistress is not an instruction manual

November 19th, 2022

In what ways, Casey Handmer asks, does The Moon is a Harsh Mistress (and other novels in the genre) fail as an instruction manual?

We know that a Moon city is not a good place to grow plants, that water is relatively abundant on the surface near the poles, and that underground construction is pointlessly difficult. So any future Moon city will have to be structured around some other premise, which is to say its foundational architecture on both a social and technical level will be completely different.

We know that AIs are pretty good at tweaking our amygdala, but strictly speaking we don’t need to build one on the Moon, and I would hope its existence is strictly orthogonal to the question of political control.

Lunar cities, and all other space habitats, are tremendously vulnerable to physical destruction. This means that, for all practical purposes, Earthling power centers hold absolute escalation dominance. No combination of sneaky AIs, secret mass drivers, or sabotage would be enough to attain political independence through force. If space habitats want some degree of political autonomy, they will have to obtain it through non-violent means. Contemporary science fiction author Kim Stanley Robinson makes this argument powerfully in this recent podcast, when discussing how he structured the revolutions in his Mars trilogy.

Lastly, the “Brass cannon” story is like “Starship Troopers” – a falsifiably satirical critique of popular conceptions of political control. For some reason, libertarians swarm Heinlein novels and space advocacy conferences like aphids in spring. I will resist the temptation to take easy shots, but point out merely that every real-world attempt at implementation of libertarianism as the dominant political culture has failed, quickly and predictably. This is because libertarianism, like many other schools of thought that fill out our diverse political scene, functions best as an alternative actually practiced by very few people. It turns out a similar thing occurs in salmon mating behavior.

Opioid prescriptions are not correlated with drug-related deaths

November 18th, 2022

Six years ago, the Centers for Disease Control and Prevention (CDC) issued guidelines that discouraged doctors from prescribing opioids for pain and encouraged legislators to restrict the medical use of such drugs, based on the assumption that overprescribing was responsible for rising drug-related deaths:

Using data for 2010 through 2019, Aubry and Carr looked at the relationship between prescription opioid sales, measured by morphine milligram equivalents (MME) per capita, and four outcomes: total drug-related deaths, total opioid-related deaths, deaths tied specifically to prescription opioids, and “opioid use disorder” treatment admissions. “The analyses revealed that the direct correlations (i.e., significant, positive slopes) reported by the CDC based on data from 1999 to 2010 no longer exist,” they write. “The relationships between [the outcome variables] and Annual Prescription Opioid Sales (i.e., MME per Capita) are either non-existent or significantly negative/inverse.”

Those findings held true in “a strong majority of states,” Aubry and Carr report. From 2010 through 2019, “there was a statistically significant negative correlation (95% confidence level) between [opioid deaths] and Annual Prescription Opioid Sales in 38 states, with significant positive correlations occurring in only 2 states. Ten states did not exhibit significant (95% confidence level) relationships between overdose deaths and prescription opioid sales during the 2010–2019 time period.”
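At its core, the state-level analysis comes down to the sign of a correlation between annual prescription opioid sales and overdose deaths. A minimal sketch of that calculation, using hypothetical illustrative series (not Aubry and Carr's actual data), in which MME per capita falls over the decade while deaths rise:

```python
import math

# Hypothetical illustrative series for 2010-2019 (NOT the study's data):
# prescription opioid sales decline while opioid deaths climb.
mme_per_capita = [750, 720, 690, 650, 610, 570, 520, 470, 420, 380]
opioid_deaths = [21, 23, 23, 25, 29, 33, 42, 47, 47, 50]  # thousands

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(mme_per_capita, opioid_deaths)
# A strongly negative r mirrors the "significantly negative/inverse"
# relationship the paper reports for most states after 2010.
print(f"r = {r:.2f}")
```

With sales falling while deaths rise, r comes out strongly negative; the paper's state-level tests add a 95% significance threshold on top of this basic calculation.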

During that period, MME per capita dropped precipitously, falling by nearly 50 percent between 2009 and 2019. By 2021, prescription opioid sales had fallen to the lowest level in two decades.

Policies and practices inspired by the CDC’s 2016 guidelines contributed to that downward trend. Aubry and Carr note that “forty-seven states and the District of Columbia” now “have laws that set time or dosage limits for controlled substances.” In a 2019 survey by the American Board of Pain Medicine, the American Medical Association reports, “72 percent of pain medicine specialists” said they had been “required to reduce the quantity or dose of medication” they prescribed as a result of the CDC guidelines.

The consequences for patients have not been pretty. They include undertreatment, reckless “tapering” of pain medication, and outright denial of care.

The Zeppelin engineers knew what they were doing

November 17th, 2022

Casey Handmer trusts that the Zeppelin engineers knew what they were doing:

But they were built of primitive 2000 series aluminium alloys, doped canvas, and cow gut. I think we can improve on the materials. In particular, carbon fiber pultrusions are about six times as strong and far simpler to assemble than the typical recursive riveted Zeppelin truss.

These beams could be integrated with injection molded nodes and tensioned with Kevlar cables. Gas bags would be aluminized mylar (space blanket) while the outer cover could be ripstop Nylon. (It is hard to overstate just how much better Nylon is than what came before. Try skydiving with a hemp parachute!)

Alternatively, one could optimize for cost instead of performance and cobble together a functional structure from foam core fiberglass produced onsite with simple tooling and assembled like LEGO by hand in the open air.

Alternatively, use welded aluminium truss segments like the ones used for events. There are about half a dozen manufacturers in Los Angeles alone, and while some tooling changes would be needed to support a thinner tube wall, a Hindenburg-scale airship needs about 20 km of truss.

The exciting thing about the low cost approach is that it closely mirrors the approach of the original Zeppelin designers, who were severely resource constrained. Indeed, with modern materials I think it could be possible to home-build a Zeppelin at a similar scale to the Bodensee for less than $100k and with less than ten person-years of labor. This brings it into the realm of home built yachts and kit aircraft.

Such a home-built Zeppelin would have to use innovative manufacturing to be assembled outside a large hangar, perhaps by extruding it tail first from the ground. It might also use a more conventional power system, with salvaged automotive engines turning propellers in pods. The lifting gas of choice would be hydrogen, to keep operating costs low. Provided the space between gas bags and cover is ventilated well enough that hydrogen can never build up at a concentration between 4% and 75%, ignition and/or deflagration is unlikely without a major structural failure or gas bag tear.
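The ventilation condition amounts to keeping any hydrogen–air mixture outside hydrogen's flammability limits in air, roughly 4% to 75% by volume. As a trivial sketch:

```python
# Hydrogen's approximate flammability limits in air, by volume fraction.
LOWER_FLAMMABILITY = 0.04   # below this the mix is too lean to ignite
UPPER_FLAMMABILITY = 0.75   # above this it is too rich

def hydrogen_mix_is_flammable(h2_fraction: float) -> bool:
    """True if a hydrogen/air mix at this volume fraction can ignite."""
    return LOWER_FLAMMABILITY <= h2_fraction <= UPPER_FLAMMABILITY

# Well-ventilated gap between gas bags and outer cover: trace hydrogen only.
print(hydrogen_mix_is_flammable(0.01))   # lean mix, not ignitable
# A major gas bag tear could push the local mix into the danger band.
print(hydrogen_mix_is_flammable(0.30))   # inside the flammable range
```

The design goal is simply that ventilation keeps the gap below the 4% lower limit at all times, so only a large tear or structural failure can create an ignitable mixture.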


The structure, at 118 T, is just over half the total lift of 216 T. Doubling structural margins with composites could still reduce overall structural mass by a factor of 3, to 39 T, while also greatly simplifying assembly. That’s less than the weight of a railway carriage! All else being equal, the payload increases from 9.5 T to 88 T, almost a 10x improvement. Payload fraction increases from 4.4% to 40%.
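Those numbers can be checked with simple mass bookkeeping, assuming fixed total lift and that every tonne shaved off the structure becomes payload:

```python
TOTAL_LIFT_T = 216.0   # Hindenburg gross lift, tonnes
STRUCTURE_T = 118.0    # original structural mass
PAYLOAD_T = 9.5        # original payload

# Composites: ~1/3 the structural mass even after doubling margins.
new_structure_t = STRUCTURE_T / 3          # about 39 T
saved_t = STRUCTURE_T - new_structure_t    # mass freed up
new_payload_t = PAYLOAD_T + saved_t        # about 88 T

print(f"new structure:    {new_structure_t:.0f} T")
print(f"new payload:      {new_payload_t:.0f} T")
print(f"payload fraction: {PAYLOAD_T / TOTAL_LIFT_T:.1%} -> "
      f"{new_payload_t / TOTAL_LIFT_T:.1%}")
```

The bookkeeping reproduces the quoted figures: structure drops to about 39 T, payload rises from 9.5 T to roughly 88 T, and payload fraction climbs from 4.4% to about 40%.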


The Hindenburg had 59 T of fuel and 4 T of oil. Operating with relatively primitive and heavy diesel engines, it could cruise at about 80 mph, crossing the Atlantic in 2.5 days. As it burned fuel it either had to vent hydrogen or capture rain to offset the reduced mass. The earlier Graf Zeppelin used neutrally buoyant blaugas, enabling longer flights over the equator to Brazil since burning didn’t change the weight of the airship.

But there’s no rule saying we have to copy the Zeppelin designers’ propulsion system. As with materials, we can assume that if they had had something better, they would have used it.

My suggestion is to affix a steerable electric fan to each structural node. These ~1700 small motors would be able to completely control the boundary layer flow over the airship, stabilizing it in gusty wind and enabling fine-grained control while maneuvering. No need for a big, heavy and structurally vulnerable tail. Many airships were damaged or lost due to gusts while attempting to dock or enter a hangar. No more!

Each motor would be powered during the day by thin film solar panels built into the airship’s skin. This should be able to drive it along at about 50 mph. This number is quite robust to scaling as both drag and power increase as linear dimension squared, while elongating the airship to reduce frontal area both increases structural difficulty and doesn’t actually improve drag.

For additional power or during the night, a neutrally buoyant mix of propane and ethane can be burned in a compact turbine generator. In such a case, range is limited only by what fraction of the envelope is devoted to fuel as opposed to lifting gas. Powering cruise at 50 mph for 7 days would require 33 T of gas, which would consume about 15% of the displacement volume. This increases as the cube of speed.
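The ~15% figure follows from neutral buoyancy: a neutrally buoyant fuel mix has the density of air, so its volume is just fuel mass divided by air density, compared against a Hindenburg-class envelope (assumed here at roughly 200,000 m³):

```python
FUEL_MASS_KG = 33_000   # 33 T of propane/ethane for 7 days at 50 mph
AIR_DENSITY = 1.225     # kg/m^3 at sea level
ENVELOPE_M3 = 200_000   # assumed Hindenburg-class displacement volume

# Neutrally buoyant means the fuel mix matches air density, so:
fuel_volume_m3 = FUEL_MASS_KG / AIR_DENSITY
fraction = fuel_volume_m3 / ENVELOPE_M3

print(f"fuel volume: {fuel_volume_m3:,.0f} m^3 "
      f"({fraction:.0%} of displacement)")

# Fuel burn over a fixed duration scales as the cube of speed,
# since drag power grows roughly as v^3.
```

This lands at roughly 27,000 m³, about 13-15% of the displacement volume, consistent with the quoted figure.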

The original Zeppelins never made money, he notes, and modern airships probably wouldn’t, either:

Despite hopes, they are not particularly useful for hauling cargo to remote areas. Airships depend on finessed trim and buoyancy — so dropping or picking up a huge cargo load somewhere is a big ask. They’re also not much use near the ground in wind, and no better than alternative logistics methods for delivering containers anywhere.

Synthesizing a barrel of oil requires about 5.7 MWh of electricity at 30% conversion efficiency

November 16th, 2022

The team at Terraform Industries is now 11 people, Casey Handmer says, working towards a near-term future where atmospheric CO2 becomes the preferred default source of industrial carbon:

Our process works by using solar power to split water into hydrogen and oxygen, concentrating CO2 from the atmosphere, then combining CO2 and hydrogen to form natural gas. Very similar processes can produce other hydrocarbon fractions, including liquid fuels. Synthetic hydrocarbons are drop in replacements for existing oil and gas wells and are distributed through existing pipeline infrastructure. As far as any of the market participants are concerned, fuel synthesis plants are less polluting, cheaper gas wells that convert capital investment into steady flows of fuel in a boringly predictable way.

Most recently, Terraform Industries succeeded in producing methane from hydrogen and CO2.
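The standard route for that methanation step is the Sabatier reaction, CO2 + 4 H2 → CH4 + 2 H2O. From molar masses alone you can see the input requirements per kilogram of methane; a back-of-envelope sketch, ignoring conversion losses:

```python
# Molar masses, g/mol
M_CO2, M_H2, M_CH4, M_H2O = 44.01, 2.016, 16.04, 18.015

# Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O
co2_per_kg_ch4 = M_CO2 / M_CH4        # ~2.74 kg CO2 captured from air
h2_per_kg_ch4 = 4 * M_H2 / M_CH4      # ~0.50 kg H2 from electrolysis
h2o_per_kg_ch4 = 2 * M_H2O / M_CH4    # ~2.25 kg H2O byproduct

print(f"per kg CH4: {co2_per_kg_ch4:.2f} kg CO2 + {h2_per_kg_ch4:.2f} kg H2 "
      f"-> 1 kg CH4 + {h2o_per_kg_ch4:.2f} kg H2O")

# Mass balance sanity check: inputs equal outputs.
assert abs((co2_per_kg_ch4 + h2_per_kg_ch4) - (1 + h2o_per_kg_ch4)) < 1e-3
```

Note the water byproduct can be recycled back into the electrolyzer, which is part of why the overall plant reduces to sunlight, air, and capital equipment.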

There is nothing particularly special about the technological approach we’re taking. Each of the various parts is built on at least 100 years of industrial development, but up until this point no-one has considered scaling these up as a fundamental source of hydrocarbons, because doing so would be cost prohibitive. Why? The machinery is not particularly complex, but the energy demands are astronomical.


The solar panel industry has been growing by about 25-35% per year for the last decade, making steady progress on cost and becoming a mainstream energy source, to the point where its continued displacement of other grid power sources is limited mainly by the battery manufacturing ramp rate, itself redlining at about 250%/year!

Wright’s Law describes the tendency of some products to get cheaper with a growing manufacturing rate. It is not guaranteed by the laws of physics, but rather describes the outcome of a positive feedback loop, where a lower cost increases demand, increases revenue, increases investment, increases cognitive effort, and further lowers cost. For solar technology, the same effect is known as Swanson’s Law, and works out at 20% cost reduction per doubling of cumulative installations since 1976.

This is not the full story, though. Solar has only been cost competitive with other forms of grid electricity generation since about 2011, at which point investment and engineering effort greatly increased. Since 2011 there has been an acceleration of production growth rate and an increase in the learning rate, such that the cost decline is now 30-40% per doubling. For more details, check out Ramez Naam’s excellent blog on the topic.
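These learning curves are easy to model: cost falls by a fixed fraction with each doubling of cumulative installations. A sketch assuming the post-2011 learning rate of about 30% per doubling:

```python
import math

def learning_curve_cost(c0, cumulative, cumulative0, learning_rate):
    """Cost after cumulative installs grow from cumulative0 to cumulative,
    falling by `learning_rate` per doubling (Wright's/Swanson's law)."""
    doublings = math.log2(cumulative / cumulative0)
    return c0 * (1 - learning_rate) ** doublings

# At ~30% per doubling, how many doublings does a further 60% cost
# decline take?  (1 - 0.3)^d = 0.4  =>  d = log(0.4)/log(0.7)
doublings_needed = math.log(0.4) / math.log(0.7)
print(f"doublings for a 60% decline at 30%/doubling: {doublings_needed:.1f}")

# Example: relative cost after cumulative capacity grows 8x (3 doublings).
print(f"{learning_curve_cost(1.0, 8, 1, 0.30):.3f} of original cost")
```

At that learning rate, a 60% cost decline takes about two and a half doublings of cumulative installed capacity, which the industry's current growth rate covers well before 2030.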


In particular, the US consumes about 37 Quads of energy for electricity generation, of which about a third goes into wires and the rest is lost as thermodynamic waste heat in generating stations and transmission. While solar PV and batteries are far more efficient, PV capacity factors are limited by daytime sunlight, seasonal daylight variation, poor weather, and mismatches between times of peak generation and consumption. The end state of the solar electricity build-out will likely see 3-6x overbuild in nameplate capacity, and large variations in electricity price by time of year, day, and location. These price differences, incidentally, already drive the engine of arbitrage which has turbocharged the battery industry.

Analysts recognize that coal and natural gas used for electricity production will eventually be displaced by renewable generation. But just as converting chemical fuel into electricity incurs 45-75% thermodynamic losses, converting electricity back into chemical fuels loses 60-70% of the energy in the process. Converting solar power into natural gas only to burn it in a gas turbine power plant could help with long-term seasonal energy storage, but it is so much less cost competitive than other ways to stabilize electricity supply that we should expect this usage modality in, at most, niche cases.
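Those losses compound on a round trip, which is why power-to-gas-to-power is a poor storage strategy. Multiplying the two efficiency ranges gives a quick bound:

```python
# Electricity -> fuel is 30-40% efficient (60-70% lost); fuel ->
# electricity is 25-55% efficient (45-75% lost).  Round trip:
etof = (0.30, 0.40)   # electricity-to-fuel efficiency range
ftoe = (0.25, 0.55)   # fuel-to-electricity efficiency range

worst = etof[0] * ftoe[0]
best = etof[1] * ftoe[1]
print(f"power-to-gas-to-power round trip: {worst:.1%} to {best:.1%}")
```

Keeping well under a quarter of the original electricity, even in the best case, is hard to justify against batteries, demand shifting, or overbuild for anything but deep seasonal storage.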

But what of other uses of carbon-based fuels? In the US, roughly twice as much energy is consumed by transportation, industry, and other uses as by electricity generation. Electrification of cars and trucks proceeds apace, but other, more fuel-hungry forms of transport, including aviation, are harder to convert. High-temperature industrial processes will continue to demand fuel rather than electricity. In short, industry will readily transition to purely electrical energy where electricity is cheaper, but not where it isn't.


13 Quads of electrical consumption in the US will require perhaps 50 Quads of solar generation, profitable deployment of batteries, and no further miracles as displacement occurs organically over the next 10-20 years. 70 Quads of fossil fuel consumption will be displaced by about 240 Quads of solar generation, and there will be a steep price incentive to enable this displacement.

In the US, we are anticipating a 6-10x demand increase once solar costs cross the critical threshold.


What is the solar cost threshold of interest? One barrel of oil contains about 1.7 MWh of chemical energy. Synthesizing a barrel of oil requires about 5.7 MWh of electricity at 30% conversion efficiency. Crude oil prices are between $60 and $100/barrel, indicating cost parity at between $10 and $17/MWh. There are already solar farms installed in some places that sell power at these prices, and between now and 2030 solar costs should come down at least another 60%.
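The parity arithmetic works out as follows, at the quoted 30% electricity-to-fuel conversion efficiency:

```python
BARREL_CHEM_MWH = 1.7   # chemical energy in a barrel of oil
EFFICIENCY = 0.30       # electricity-to-hydrocarbon conversion efficiency

# Electricity needed per synthesized barrel: ~5.7 MWh
electricity_per_barrel = BARREL_CHEM_MWH / EFFICIENCY

# At a given oil price, solar is at parity when its cost per MWh
# equals the barrel price spread over that electricity input.
parity_low = 60 / electricity_per_barrel    # $60/barrel oil
parity_high = 100 / electricity_per_barrel  # $100/barrel oil

print(f"electricity per barrel: {electricity_per_barrel:.1f} MWh")
print(f"parity at ${parity_low:.1f} to ${parity_high:.1f} per MWh")
```

So cheaper oil demands cheaper solar: at $60/barrel the threshold is around $10-11/MWh, while at $100/barrel anything under about $17-18/MWh already pencils out.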