There are no known commodity resources in space that could be sold profitably on Earth

Saturday, November 26th, 2022

There are no known commodity resources in space that could be sold profitably on Earth, Casey Handmer explains:

On Earth, bulk cargo costs are something like $0.10/kg to move raw materials or shipping containers almost anywhere with infrastructure. Launch costs are more like $2000/kg to LEO, and $10,000/kg from LEO back to Earth.


Let’s consider a representative list of the most expensive materials in the world. In descending order, they are:

  • Antimatter, currently $62.5 trillion/g.
  • Californium, $25m/g.
  • Diamond, $55k/g.
  • Tritium, $30k/g.
  • Taaffeite, $20k/g.
  • Helium 3, $15k/g.
  • Painite, $6k/g.
  • Plutonium, $4k/g.
  • LSD, $3k/g.
  • Cocaine, $236/g.
  • Heroin, $130/g.
  • Rhino horn, $110/g.
  • Crystal meth, $100/g.
  • Platinum, $60/g.
  • Rhodium, $58/g.
  • Gold, $56/g.
  • Saffron, $11/g.

The previous ballpark estimate for transport costs was $100,000/kg, or $100/g. Since I want to be inclusive, I’ll consider everything down to saffron in the list above, whose price is roughly equal to the current LEO-to-surface transport cost of about $10/g.
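As a quick sanity check on the cut-off, here is the list filtered against both transport-cost ballparks. This is just a sketch: prices are copied from the list above, and the "current" threshold takes the $10,000/kg LEO-to-Earth figure as $10/g.

```python
# Prices in $/g, copied from the list above.
PRICES = {
    "antimatter": 62.5e12, "californium": 25e6, "diamond": 55e3,
    "tritium": 30e3, "taaffeite": 20e3, "helium-3": 15e3,
    "painite": 6e3, "plutonium": 4e3, "LSD": 3e3, "cocaine": 236,
    "heroin": 130, "rhino horn": 110, "crystal meth": 100,
    "platinum": 60, "rhodium": 58, "gold": 56, "saffron": 11,
}

def clears_transport_cost(prices, cost_per_gram):
    """Commodities whose sale price alone exceeds the cost of
    hauling them from LEO back to the surface."""
    return [name for name, price in prices.items() if price > cost_per_gram]

old_ballpark = clears_transport_cost(PRICES, 100)  # the $100,000/kg estimate
current = clears_transport_cost(PRICES, 10)        # ~$10,000/kg, LEO to Earth

print(len(old_ballpark), "items clear the old $100/g ballpark")
print(len(current), "items clear the current $10/g cost")
```

Note that clearing the transport cost is only a necessary condition; the market-size argument below rules most of these out anyway.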


None of the products represent large markets, due to their prohibitive price or relative scarcity. As a result, their prices are highly sensitive to changes in supply. For example, the global annual market for Helium-3 is about $10m. Double the supply, halve the price, and the net revenue is still about the same. No-one seriously thinks that Lunar mining infrastructure can be built for less than many billions of dollars, so even at a price of $100,000/kg, annual demand needs to exceed hundreds of tons to ensure adequate revenue and price stability.

Tritium, helium-3, platinum and antimatter represent speculative future markets, particularly where increased supply could help develop an industry based on, say, fusion, exotic batteries, or a bunch of gamma rays. If fusion-induced demand for helium-3 reaches a point where annual demand has climbed by three orders of magnitude, then I am willing to revisit this point. But current construction rates of cryogenically cooled bolometers are not adequate to fund Lunar mine development, and solar PV electricity production has every indication of destroying competing generation methods, including fusion.

Some relatively expensive minerals are only expensive because low levels of industrial demand have failed to develop efficient supply chains. If demand increases, new refining mechanisms are invariably developed which substantially lower the price. A salient example here is rare platinum group metals.

Space-based solar power is not a thing

Friday, November 25th, 2022

Space-based solar power is not a thing, Casey Handmer argues:

As Elon Musk has concisely pointed out, the fundamental problem with space-based solar power is that it’s obtaining a commodity, power, somewhere where it’s expensive and selling it somewhere where it’s cheap. This is not a good business. Indeed, it might make more sense to beam power from Earth to space stations, if they needed it.


What are the extra costs? Broadly, they fall into the following categories: Transmission losses, thermal losses, logistics costs, and space technology penalty. Individually, any one of these issues cancels out the benefits, and combined they leave space-based solar power at least three orders of magnitude more expensive than the terrestrial equivalents.


For a baseline comparison, consider a GW-scale power station. For terrestrial solar, this consists of standard panels on single axis mounts, covering about 10 square miles. For the space-based solar case, an identical area of land is covered instead with an antenna, a mesh of conductive wire held above the ground, to absorb the transmitted microwaves and convert them to electricity. An identical area implies similar overall energy fluxes, which is correct.


Transmission losses: The process of converting sunlight to electricity is about 20% efficient, depending on the type of panel – and this is a loss common to both systems. In addition, the space-based system has to convert the electrical power back into EM radiation, which is converted back into power on Earth. Proponents think that it should be possible to perform each conversion with 90% efficiency, but even beam-forming that well is not possible without a much larger antenna. My personal opinion is that the end-to-end microwave link efficiency would be lucky to exceed 40% efficiency, which erodes the competitive advantage substantially.
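The chain multiplies out quickly. A sketch of the link budget: the 90% stage figures are the proponents' numbers, while the lower set is purely my illustration of how an end-to-end figure near 40% could arise.

```python
def link_efficiency(dc_to_rf, beam_capture, rf_to_dc):
    """End-to-end efficiency of the power link after the panels:
    electricity -> microwaves -> captured beam -> electricity."""
    return dc_to_rf * beam_capture * rf_to_dc

# Proponents' case: 90% per conversion, perfect beam capture assumed.
proponents = link_efficiency(0.90, 1.00, 0.90)
# Illustrative pessimistic case (assumed numbers, not measurements).
realistic = link_efficiency(0.80, 0.75, 0.70)

print(f"proponents' chain: {proponents:.0%}")   # 81%
print(f"less charitable:   {realistic:.0%}")    # ~42%
```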

Thermal losses: The conversion efficiency of the high-power microwave transmitter has a nasty side-effect, namely that what isn’t transmitted is wasted as heat, and that heat has to go somewhere. If the transmitter is 80% efficient (which is being very generous), then it will have to radiate 200MW of thermal power. This is a different problem to the thermal losses in the solar panels, which are more like 4GW but spread over a huge area that is in radiative thermal equilibrium with its environment. Instead, the microwave power electronics will need a huge cooling system. If the electronics can operate at 350K, then the radiator power will be 850W/m^2, so the radiator will need a total area of 23ha, comparable to the total size of the solar array and the microwave transmission antenna. In contrast to the usual claims of perfect scaling efficiency with solar arrays in microgravity, a large space-based solar power system will also need a huge antenna and cooling system, which don’t scale quite as nicely.
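The radiator sizing above is a straightforward Stefan-Boltzmann calculation, reproduced here as a sketch. Treating the radiator as an ideal black body facing cold space (one radiating side, no absorbed sunlight) is already generous to the space-based system.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_area_m2(waste_heat_w, temp_k, emissivity=1.0):
    """Radiator area needed to reject waste_heat_w at temp_k,
    assuming an ideal black-body radiator facing cold space."""
    return waste_heat_w / (emissivity * SIGMA * temp_k ** 4)

area = radiator_area_m2(200e6, 350)  # 200 MW of waste heat at 350 K
print(f"radiated flux: {SIGMA * 350**4:.0f} W/m^2")   # ~850 W/m^2
print(f"radiator area: {area / 1e4:.1f} ha")          # ~23.5 ha
```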

Logistics costs: Consider transportation cost. Today, SpaceX has crushed the orbital transport market with a price of around $2000/kg. Compare this to the worldwide network of intermodal containers, which can transport anything in 20T units almost anywhere on Earth for about $0.05/kg. Even if all of Elon Musk’s wildest Starship dreams come true, transport costs will dominate the total capex of any space-based solar system, by many orders of magnitude. A factor of 10x improvement in resource does not make up for transport costs which are more than 10,000x higher. If logistics costs are more than 0.1% of current solar farm costs (they’re more like 20%), then increased transport costs completely negate the improved solar resource. It’s not even close.
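The transport-cost gap in one short calculation, using the figures quoted above:

```python
launch_to_leo = 2000.0   # $/kg to LEO (SpaceX today, per the text)
intermodal = 0.05        # $/kg via containerized ground/sea freight

ratio = launch_to_leo / intermodal
print(f"launch is {ratio:,.0f}x the cost of terrestrial freight")

# Even a generous 10x better orbital solar resource barely dents it:
resource_gain = 10
print(f"net logistics penalty: {ratio / resource_gain:,.0f}x")
```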

One further aspect of logistics bears closer examination. In our baseline case, we considered an array of panels strung up on posts, compared to a mesh of wire strung up on posts. It turns out that (as of 2019) a substantial fraction of the overall cost of a solar PV station is the mounting hardware, which is also required by the microwave receiver. So if the mounting hardware costs 20% of the overall deployment cost for terrestrial solar, that places a strong upper bound on total system cost allowable for space-based. In other words, does anyone seriously believe that the microwave receiving antenna could cost 20% of the overall system capex, the other 80% to be used to launch thousands of tonnes of high performance gear into space? Put another way, the most cost-effective way to get a GW of power out of a microwave receiving antenna is obviously to tear down the wire mesh and sling up a bunch of solar panels, which can be ordered with a lead time of weeks from any of dozens of suppliers worldwide with widely available financing.

Finally, the space technology penalty. On Earth, we are living in an extremely exciting time for energy. Hundreds of major companies are competing on development cycles measuring only months to provide solar panels in an industry that’s growing at 20% a year. As a result, costs have fallen by 10% a year, and in the last few years, solar and batteries have neared, equaled, then utterly crushed all other forms of electricity generation. Initially, this process occurred on remote islands with high fuel import costs. Then the sunnier parts of the US. The rampage continues northwards at about 200 miles a year. The industry can sustain 30% deployment growth rate worldwide for another decade at least, before saturation occurs.

Today, I can pick up the phone and any of dozens of contractors in the LA market can fill hundreds of acres with panels, each built to survive 30 years under the harsh sun and sized perfectly for deployment using the latest tech, which is men in orange vests with forklifts.

In contrast, space technology has not benefited from such breakneck levels of growth, demand, and investment. Prohibitive maintenance costs demand perfect performance, and low rates of deployment ensure a slow innovation feedback loop. The result is that none of the current incredibly cheap solar panels could work in space, where thermal cycling and vacuum, not to mention the stresses of launch, would destroy them in days.

Instead, space operators rely on more traditional supply chains, with the result that building anything for space takes years and costs billions. Right now, a billion dollars invested will buy about 100MW of solar panels on the Earth, or 100kW of solar panels in space. This is a factor of 1000, and it also erases the advantages of more sunlight in space.

These four elements, transmission, thermal, logistics, and space technology, inflate the relative cost of space-based solar power to the point where it simply cannot compete with terrestrial solar. It’s not a matter of 5% here or there. It’s literally thousands of times more expensive. It’s not a thing.

The totokia was intended to peck holes in skulls

Thursday, November 24th, 2022

The Tusken Raiders in the original Star Wars wield a peculiar weapon that Luke calls a gaffi stick. It turns out that the gaderffii is based on the Fijian totokia:

According to Fiji material culture scholar Fergus Clunie who describes it as a beaked battle hammer (in Fijian Weapons and Warfare, 1977: p. 55), “…the totokia was intended to ‘peck’ holes in skulls.” The weight of the head of the club was concentrated in the point of the beak of the weapon or kedi-toki (toki to peck; i toki: a bird’s beak). The totokia “…delivered a deadly blow in an abrupt but vicious stab, not requiring the wide swinging arc demanded by the others.” (Yalo i Viti. A Fiji Museum Catalogue, 1986: p. 185) It was a club that could be used in open warfare or to finish-off or execute warriors on the battlefield.

Totokia and Gaffi Stick

Mechanochemical breakthrough unlocks cheap, safe, powdered gases

Wednesday, November 23rd, 2022

Nanotechnology researchers based at Deakin University’s Institute for Frontier Materials claim to have found a super-efficient way to mechanochemically trap and hold gases in powders, which could radically reduce energy use in the petrochemical industry, while making hydrogen much easier and safer to store and transport:

Mechanochemistry is a relatively recently coined term, referring to chemical reactions that are triggered by mechanical forces as opposed to heat, light, or electric potential differences. In this case, the mechanical force is supplied by ball milling – a low-energy grinding process in which a cylinder containing steel balls is rotated such that the balls roll up the side, then drop back down again, crushing and rolling over the material inside.

The team has demonstrated that grinding certain amounts of certain powders with precise pressure levels of certain gases can trigger a mechanochemical reaction that absorbs the gas into the powder and stores it there, giving you what’s essentially a solid-state storage medium that can hold the gases safely at room temperature until they’re needed. The gases can be released as required, by heating the powder up to a certain point.


This process, for example, could separate hydrocarbon gases out from crude oil using less than 10% of the energy that’s needed today. “Currently, the petrol industry uses a cryogenic process,” says Chen. “Several gases come up together, so to purify and separate them, they cool everything down to a liquid state at very low temperature, and then heat it all together. Different gases evaporate at different temperatures, and that’s how they separate them out.”


“The energy consumed by a 20-hour milling process is US$0.32,” reads the paper. “The ball-milling gas adsorption process is estimated to consume 76.8 kJ/s to separate 1,000 liters (220 gal) of olefin/paraffin mixture, which is two orders less than that of the cryogenic distillation process.”


Chen tells us the powder can store a hydrogen weight percentage of around 6.5%. “Every one gram of material will store about 0.065 grams of hydrogen,” he says. “That’s already above the 5% target set by the US Department of Energy. And in terms of volume, for every one gram of powder, we wish to store around 50 liters (13.2 gal) of hydrogen in there.”

Indeed, should the team prove these numbers, they’d represent an instant doubling of the best current solid-state hydrogen storage mass fractions, which, according to Air Liquide, can only manage 2-3%.

Domes are over-rated

Tuesday, November 22nd, 2022

Any article about Moon or Mars bases needs to have a conceptual drawing of habitation domes, but domes have significant drawbacks, Casey Handmer reminds us:

Domes feature compound curvature, which complicates manufacturing. If assembled from triangular panels, junctions contain multiple intersecting acute angled parts, which makes sealing a nightmare. In fact, even residential dome houses are notoriously difficult to insulate and seal! A rectangular room has 6 faces and 12 edges, which can be framed, sealed, and painted in a day or two. A dome room has a new wall every few feet, all with weird triangular faces and angles, and enormously increased labor overhead.

It turns out that the main advantage of domes – no internal supports – becomes a major liability on Mars. While rigid geodesic domes on Earth are compressive structures, on Mars, a pressurized dome actually supports its own weight and then some. As a result, the structure is under tension and the dome is attempting to tear itself out of the ground. Since lifting force scales with area, while anchoring force scales with circumference, domes on Mars can’t be much wider than about 150 feet, and even then would require extensive foundation engineering.
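The scaling mismatch is easy to make concrete: pressure lift grows with area (r²) while the restraining foundation grows with circumference (r), so the load per metre of foundation grows linearly with radius. In this sketch the ~30 kPa cabin pressure is my assumption (roughly a third of sea level), not a figure from the post.

```python
import math

def foundation_load_per_m(radius_m, cabin_pressure_pa=30e3):
    """Net upward pull per metre of dome perimeter. Works out to
    p * r / 2, so it grows linearly with radius. The dome's own
    weight (ignored here) only slightly offsets the pressure on Mars."""
    lift_n = cabin_pressure_pa * math.pi * radius_m ** 2   # scales with r^2
    perimeter_m = 2 * math.pi * radius_m                   # scales with r
    return lift_n / perimeter_m

for r in (5, 23, 100):   # 23 m radius ~ the 150-foot diameter limit above
    print(f"r = {r:>3} m: {foundation_load_per_m(r) / 1e3:,.0f} kN per metre of foundation")
```

At the ~150-foot limit every metre of foundation is already resisting several hundred kilonewtons of continuous pull, which is why the post calls for extensive foundation engineering even at that size.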

Once a dome is built and the interior occupied, it can’t be extended. Allocation of space within the dome is zero sum, and much of the volume is occupied by weird wedge-shaped segments that are hard to use. Instead, more domes will be required, but since they don’t tessellate, tunnels of some kind would be needed to connect to other structures. Each tunnel has to mate with curved walls, a rigid structure that must accept variable mechanical tolerances, be broad enough to enable large vehicles to pass, yet narrow enough to enable a bulkhead to be sealed in the event of an inevitable seal failure. Since it’s a rigid structure, it has to be structurally capable of enduring pressure cycling across areas with variable radii of curvature without fatigue, creep, or deflection mismatch.

Does this sound like an engineering nightmare? High tolerances, excessive weight, finicky foundations which are a single point of failure, major excavation, poor scaling, limited interior space, limited local production capability. At the end of the day, enormous effort will be expended to build a handful of rather limited structures with fundamental mechanical vulnerabilities, prohibitively high scaling costs, and no path to bigger future versions.

via Domes are over-rated – Casey Handmer’s blog.

The sort of Life Support System required to nourish a generation ship to fly through space for millennia is beyond our current capabilities

Monday, November 21st, 2022

No life support system miracles are required to keep humans alive on Mars in the near future, Casey Handmer argues:

A common criticism of ambitious space exploration plans, such as building cities on Mars, is that life support systems (LSS) are inadequate to keep humans alive, ergo the whole idea is pointless. As an example, the space shuttle LSS could operate for about two weeks. The ISS LSS operates indefinitely but requires regular replenishment of stores launched from Earth, and regular and intense maintenance. Finally, all closed loop LSS, both conceptual and built, are incredibly complex pieces of machinery, and complexity tends to be at odds with reliability. The general consensus is that the sort of LSS required to nourish a generation ship to fly through space for millennia is beyond our current capabilities.

No matter how big the rocket, supplies launched to Mars are finite and will eventually be exhausted. These supplies include both bulk materials like oxygen or nitrogen, and replacement parts for machinery. This doesn’t bode well. Indeed, much of the dramatic tension in The Martian is due precisely to the challenges of getting a NASA-quality LSS to keep someone alive for much longer than originally intended.


On Earth, we breathe a mixture of nitrogen and oxygen, with bits of argon, water vapor, CO2, and other stuff mixed in. The LSS has to scrub CO2, regenerate oxygen, condense water vapor evaporated by our moist lungs, and filter out toxic contaminants such as ozone and hydrazine.

With breathing gas sorted out, humans also drink water, consume food, and excrete waste. For extended habitation, these needs also need to be addressed by the LSS.

On Earth, these various elemental and chemical cycles are produced and buffered by the immensely large natural environment. I don’t think anyone thinks that a compact biological regeneration system is adequate to meet the needs of a growing city on Mars. Biosphere 2 had a really good go at this and failed for a variety of reasons. One major one was complexity. If the LSS depends on the good will of tonnes of microbes, most of which are undescribed by science, it is very easy to have a bad day.

The alternative is a physical/chemical system. Much simpler, it employs a glorified air conditioning system to process the air and recycle/sanitize waste products. Something like this exists on every spacecraft, and submarine, ever built. The difficulty arises when a simple, robust machine that is 90% efficient is asked to perform at 99.999% efficiency.
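To see why those extra nines matter, note that the make-up mass a partially closed loop must import (or extract locally) scales with (1 − efficiency). The ~30 kg/person/day of processed air and water in this sketch is my rough assumption, not a figure from the post.

```python
def makeup_kg_per_year(loop_efficiency, daily_throughput_kg=30.0):
    """Mass per person per year that must be imported or mined
    locally to cover what the recycling loop fails to recover."""
    return daily_throughput_kg * (1.0 - loop_efficiency) * 365

for eff in (0.90, 0.99, 0.99999):
    print(f"{eff:.3%} closed: {makeup_kg_per_year(eff):10.2f} kg/person/year")
```

A 90%-closed loop needs only a few kilograms per person per day of local extraction, which is exactly the regime the next paragraph argues Mars can support.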


Once on the surface, there is an entire planet of atoms ready to harvest. Rocky planets such as the Earth or Mars are, to a physicist, a giant pile of iron atoms encapsulated by a giant pile of oxygen atoms, with other stuff in the gaps. Nearly all rocks, plus water, contain more oxygen than any other element. The Moon and Mars have a lot of water if one knows where to look. Nitrogen is another issue but does exist in the Martian atmosphere. The upshot is that the LSS on Mars doesn’t have to be closed loop. It can depend on constant air mining or environmental extraction to make up for losses, leaks, and inefficiencies. The machinery can be relatively simple, robust, and easy to maintain. The ISS LSS is, after all, 1980s technology at best.

Underground construction is basically unknown except for nuclear bunkers

Sunday, November 20th, 2022

Tunnels are a staple of both science fiction and popular journalism regarding human habitations on the Moon, Mars, or other rocky places, Casey Handmer notes:

They’re fun to write about and interesting to put on screen. I’ve lost count of the times I’ve seen beautifully illustrated Mars city maps featuring a hexagonal grid of domes connected by tunnels. On a visual level, it certainly ticks all the right boxes.

And yet, while I’ve wasted years of my life on real estate websites, I’ve never seen a subterranean house on the market. They do exist, if you want a converted ICBM bunker or limestone cave, but they’re a definite rarity.


The simplest explanation is that digging holes, particularly really deep ones, is very energy-intensive and expensive. The cost of building a road tunnel works out to about $100,000 per meter, or equivalent to a stack of Hamiltons of the same length! For comparison, $100,000 will buy materials and labor for a respectable manufactured home, or substantial renovations.

Indeed, on Earth, underground construction is basically unknown except for nuclear bunkers. These have two powerful reasons to accept the cost and inconvenience: unlimited sweet DoD money, and surviving really big explosions.

Why build underground in space? The usual explanation is to provide shielding against galactic cosmic rays, or micrometeorites.

It is true that tunnels deep underground are relatively safe from both, and also well thermally insulated. But as I discussed in the blog on space radiation, relatively little shielding is necessary even in areas where people spend a lot of time, such as sleeping areas. And even if that works out to be a meter or two of rock, it’s orders of magnitude less effort to drop sandbags on the roof of a structure built on the surface than to dig a hole of the necessary size deep underground.

Micrometeorites are not a concern on Mars, which has a thin atmosphere, and can be well shielded on the Moon with a thin blanket of loose rubble.

If there’s a central point to my blogs on space architecture, it’s that our cities and houses on Mars will look and feel a lot more like regular houses on Earth, and for the same reasons. It may not be very exciting, but the most important consideration for design and construction, on Earth or in space, is expedience. Given the relative scarcity of human labor in space cities, structures will have to maximize usable area and minimize effort even more than on Earth. Instead of tunnels, think warehouses and aircraft hangars! At least they can have natural light.

The Moon is a Harsh Mistress is not an instruction manual

Saturday, November 19th, 2022

In what ways, Casey Handmer asks, does The Moon is a Harsh Mistress (and other novels in the genre) fail as an instruction manual?

We know that a Moon city is not a good place to grow plants, that water is relatively abundant on the surface near the poles, and that underground construction is pointlessly difficult. So any future Moon city will have to be structured around some other premise, which is to say its foundational architecture on both a social and technical level will be completely different.

We know that AIs are pretty good at tweaking our amygdala, but strictly speaking we don’t need to build one on the Moon, and I would hope its existence is strictly orthogonal to the question of political control.

Lunar cities, and all other space habitats, are tremendously vulnerable to physical destruction. This means that, for all practical purposes, Earthling power centers hold absolute escalation dominance. No combination of sneaky AIs, secret mass drivers, or sabotage would be enough to attain political independence through force. If space habitats want some degree of political autonomy, they will have to obtain it through non-violent means. Contemporary science fiction author Kim Stanley Robinson makes this argument powerfully in this recent podcast, when discussing how he structured the revolutions in his Mars trilogy.

Lastly, the “Brass cannon” story is like “Starship Troopers” – an unfalsifiably satirical critique of popular conceptions of political control. For some reason, libertarians swarm Heinlein novels and space advocacy conferences like aphids in spring. I will resist the temptation to take easy shots, but point out merely that every real-world attempt at implementation of libertarianism as the dominant political culture has failed, quickly and predictably. This is because libertarianism, like many other schools of thought that fill out our diverse political scene, functions best as an alternative actually practiced by very few people. It turns out a similar thing occurs in salmon mating behavior.

Opioid prescriptions are not correlated with drug-related deaths

Friday, November 18th, 2022

Six years ago, the Centers for Disease Control and Prevention (CDC) issued guidelines that discouraged doctors from prescribing opioids for pain and encouraged legislators to restrict the medical use of such drugs, based on the assumption that overprescribing was responsible for rising drug-related deaths:

Using data for 2010 through 2019, Aubry and Carr looked at the relationship between prescription opioid sales, measured by morphine milligram equivalents (MME) per capita, and four outcomes: total drug-related deaths, total opioid-related deaths, deaths tied specifically to prescription opioids, and “opioid use disorder” treatment admissions. “The analyses revealed that the direct correlations (i.e., significant, positive slopes) reported by the CDC based on data from 1999 to 2010 no longer exist,” they write. “The relationships between [the outcome variables] and Annual Prescription Opioid Sales (i.e., MME per Capita) are either non-existent or significantly negative/inverse.”

Those findings held true in “a strong majority of states,” Aubry and Carr report. From 2010 through 2019, “there was a statistically significant negative correlation (95% confidence level) between [opioid deaths] and Annual Prescription Opioid Sales in 38 states, with significant positive correlations occurring in only 2 states. Ten states did not exhibit significant (95% confidence level) relationships between overdose deaths and prescription opioid sales during the 2010–2019 time period.”

During that period, MME per capita dropped precipitously, falling by nearly 50 percent between 2009 and 2019. By 2021, prescription opioid sales had fallen to the lowest level in two decades.

Policies and practices inspired by the CDC’s 2016 guidelines contributed to that downward trend. Aubry and Carr note that “forty-seven states and the District of Columbia” now “have laws that set time or dosage limits for controlled substances.” In a 2019 survey by the American Board of Pain Medicine, the American Medical Association reports, “72 percent of pain medicine specialists” said they had been “required to reduce the quantity or dose of medication” they prescribed as a result of the CDC guidelines.

The consequences for patients have not been pretty. They include undertreatment, reckless “tapering” of pain medication, and outright denial of care.

The Zeppelin engineers knew what they were doing

Thursday, November 17th, 2022

Casey Handmer trusts that the Zeppelin engineers knew what they were doing:

But they were built of primitive 2000 series aluminium alloys, doped canvas, and cow gut. I think we can improve on the materials. In particular, carbon fiber pultrusions are about six times as strong and far simpler to assemble than the typical recursive riveted Zeppelin truss.

These beams could be integrated with injection molded nodes and tensioned with Kevlar cables. Gas bags would be aluminized mylar (space blanket) while the outer cover could be ripstop Nylon. (It is hard to overstate just how much better Nylon is than what came before. Try skydiving with a hemp parachute!)

Alternatively, one could optimize for cost instead of performance and cobble together a functional structure from foam core fiberglass produced onsite with simple tooling and assembled like LEGO by hand in the open air.

Alternatively, use a welded aluminium truss segment like the ones used for events. There are about half a dozen manufacturers just in Los Angeles, and while some tooling changes would be needed to support a thinner tube wall, the Hindenburg needs about 20 km of truss.

The exciting thing about the low cost approach is that it closely mirrors the approach of the original Zeppelin designers, who were severely resource constrained. Indeed, with modern materials I think it could be possible to home-build a Zeppelin at a similar scale to the Bodensee for less than $100k and with less than ten person-years of labor. This brings it into the realm of home built yachts and kit aircraft.

Such a homebuilt would have to use innovative manufacturing to be assembled outside a large hangar, perhaps by extruding it tail first from the ground. It may also use a more conventional power system with salvaged automotive engines turning propellers in pods. The lifting gas of choice would be hydrogen, in order to keep operating costs low. Provided the space between gas bags and cover is sufficiently well ventilated that hydrogen can never build up to a concentration between 4% and 75%, ignition and/or deflagration is unlikely without a major structural failure or gas bag tear.


The structure, at 118 T, is just over half the total lift of 216 T. Doubling structural margins with composites could still reduce overall structural mass by a factor of 3, to 39 T, while also greatly simplifying assembly. That’s less than the weight of a railway carriage! All else being equal, the payload increases from 9.5 T to 88 T, almost a 10x improvement. Payload fraction increases from 4.4% to 40%.
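The mass budget checks out; here it is as a sketch, tonnes throughout, with all figures taken from the passage:

```python
total_lift = 216.0        # t, Hindenburg gross lift
structure_orig = 118.0    # t, original structure
payload_orig = 9.5        # t, original payload

# Everything that isn't structure or payload (fuel, engines, crew...).
fixed = total_lift - structure_orig - payload_orig

structure_new = structure_orig / 3                 # composite rebuild, ~39 t
payload_new = total_lift - structure_new - fixed   # structure savings go to payload

print(f"new structure:    {structure_new:.1f} t")
print(f"new payload:      {payload_new:.1f} t")               # ~88 t
print(f"payload fraction: {payload_new / total_lift:.1%}")    # ~40%
```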


The Hindenburg had 59 T of fuel and 4 T of oil. Operating with relatively primitive and heavy diesel engines, it could cruise at about 80 mph, crossing the Atlantic in 2.5 days. As it burned fuel it either had to vent hydrogen or capture rain to offset the reduced mass. The earlier Graf Zeppelin used neutrally buoyant blaugas, enabling longer flights over the equator to Brazil since burning didn’t change the weight of the airship.

But there’s no rule saying we have to afford the Zeppelin designers the benefit of copying their propulsion system. Like materials, we can assume that if they had something better, they would have used it.

My suggestion is to affix a steerable electric fan to each structural node. These ~1700 small motors would be able to completely control the boundary layer flow over the airship, stabilizing it in gusty wind and enabling fine-grained control while maneuvering. No need for a big, heavy and structurally vulnerable tail. Many airships were damaged or lost due to gusts while attempting to dock or enter a hangar. No more!

Each motor would be powered during the day by thin film solar panels built into the airship’s skin. This should be able to drive it along at about 50 mph. This number is quite robust to scaling as both drag and power increase as linear dimension squared, while elongating the airship to reduce frontal area both increases structural difficulty and doesn’t actually improve drag.

For additional power or during the night, a neutrally buoyant mix of propane and ethane can be burned in a compact turbine generator. In such a case, range is limited only by what fraction of the envelope is devoted to fuel as opposed to lifting gas. Powering cruise at 50 mph for 7 days would require 33 T of gas, which would consume about 15% of the displacement volume. This increases as the cube of speed.
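That last sentence can be written down directly: at fixed endurance, drag power and therefore fuel burn scale as speed cubed. A sketch anchored to the 33 t / 7 days / 50 mph figure from the text:

```python
def cruise_fuel_t(speed_mph, days=7.0,
                  ref_fuel_t=33.0, ref_speed_mph=50.0, ref_days=7.0):
    """Fuel gas needed for a cruise, scaling the text's reference
    point as v^3 (drag power) and linearly with endurance."""
    return ref_fuel_t * (speed_mph / ref_speed_mph) ** 3 * (days / ref_days)

print(f"{cruise_fuel_t(50):.0f} t at 50 mph")   # the 33 t baseline
print(f"{cruise_fuel_t(80):.0f} t at 80 mph")   # Hindenburg-like speed, ~135 t
```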

The original Zeppelins never made money, he notes, and modern airships probably wouldn’t, either:

Despite hopes, they are not particularly useful for hauling cargo to remote areas. Airships depend on finessed trim and buoyancy — so dropping or picking up a huge cargo load somewhere is a big ask. They’re also not much use near the ground in wind, and no better than alternative logistics methods for delivering containers anywhere.

Synthesizing a barrel of oil requires about 5.7 MWh of electricity at 30% conversion efficiency

Wednesday, November 16th, 2022

The team at Terraform Industries is now 11 people, Casey Handmer says, working towards a near-term future where atmospheric CO2 becomes the preferred default source of industrial carbon:

Our process works by using solar power to split water into hydrogen and oxygen, concentrating CO2 from the atmosphere, then combining CO2 and hydrogen to form natural gas. Very similar processes can produce other hydrocarbon fractions, including liquid fuels. Synthetic hydrocarbons are drop in replacements for existing oil and gas wells and are distributed through existing pipeline infrastructure. As far as any of the market participants are concerned, fuel synthesis plants are less polluting, cheaper gas wells that convert capital investment into steady flows of fuel in a boringly predictable way.

Most recently, Terraform Industries succeeded in producing methane from hydrogen and CO2.
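The standard route for methanating captured CO2 is the Sabatier reaction; the post describes the overall process (electrolysis, CO2 capture, methanation) but does not name the specific chemistry, so treat the reaction choice in this mass-balance sketch as an assumption:

```python
# Mass balance for the Sabatier reaction, CO2 + 4 H2 -> CH4 + 2 H2O,
# the usual route for turning captured CO2 and electrolytic hydrogen
# into natural gas.
M_CO2, M_H2, M_CH4 = 44.01, 2.016, 16.04  # molar masses, g/mol

def feedstock_per_kg_methane():
    """Return (kg CO2, kg H2) consumed per kg of CH4 produced."""
    co2 = M_CO2 / M_CH4       # 1 mol CO2 per mol CH4
    h2 = 4 * M_H2 / M_CH4     # 4 mol H2 per mol CH4
    return co2, h2

co2, h2 = feedstock_per_kg_methane()
print(f"{co2:.2f} kg CO2 and {h2:.2f} kg H2 per kg CH4")
```

Roughly 2.7 kg of CO2 and 0.5 kg of hydrogen go into each kilogram of methane, which makes concrete why the energy demands of the process, dominated by electrolysis, are so large.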

There is nothing particularly special about the technological approach we’re taking. Each of the various parts is built on at least 100 years of industrial development, but up until this point no-one has considered scaling these up as a fundamental source of hydrocarbons, because doing so would be cost prohibitive. Why? The machinery is not particularly complex, but the energy demands are astronomical.


The solar panel industry has been growing by about 25-35% per year for the last decade, making steady progress on cost and becoming a mainstream energy source, to the point where its continued displacement of other grid power sources is limited partly by the battery manufacturing ramp rate, itself redlining at about 250%/year!

Wright’s Law describes the tendency of some products to get cheaper with a growing manufacturing rate. It is not guaranteed by the laws of physics, but rather describes the outcome of a positive feedback loop, where a lower cost increases demand, increases revenue, increases investment, increases cognitive effort, and further lowers cost. For solar technology, the same effect is known as Swanson’s Law, and works out at 20% cost reduction per doubling of cumulative installations since 1976.
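The learning-curve relation described above can be written as a one-line formula: cost falls by a fixed fraction with each doubling of cumulative production. The 20% figure is the post’s number for solar PV since 1976; the base values in this sketch are arbitrary illustrations.

```python
import math

# Wright's law (Swanson's law for solar): unit cost falls by a fixed
# fraction (the learning rate) per doubling of cumulative production.
def unit_cost(cumulative, base_cumulative, base_cost, learning_rate=0.20):
    doublings = math.log2(cumulative / base_cumulative)
    return base_cost * (1.0 - learning_rate) ** doublings

# Ten doublings (1024x the cumulative production) at 20%/doubling
# leaves cost at 0.8**10, roughly 11% of the starting value.
print(unit_cost(1024, 1, 100.0))
```

Note that the curve is a function of cumulative production, not time, which is why accelerating deployment also accelerates the cost decline.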

This is not the full story, though. Solar has only been cost competitive with other forms of grid electricity generation since about 2011, at which point investment and engineering effort greatly increased. Since 2011 there has been an acceleration of production growth rate and an increase in the learning rate, such that the cost decline is now 30-40% per doubling. For more details, check out Ramez Naam’s excellent blog on the topic.


In particular, the US consumes about 37 Quads of energy for electricity generation, of which about a third is delivered through the wires; the rest is lost as thermodynamic waste heat in generating stations and transmission. Solar PV and batteries are far more efficient, but PV capacity factors are limited by daytime-only sunlight, seasonal daylight variation, poor weather, and mismatches between times of peak generation and consumption. The end state of the solar electricity build out will likely see 3-6x overbuild in nameplate capacity, and large variations in electricity price by time of year, day, and location. These price differences, incidentally, already drive the engine of arbitrage that has turbocharged the battery industry.

Analysts recognize that coal and natural gas used for electricity production will eventually be displaced by renewable generation. Just as converting chemical energy in the form of fuel into electricity endures 45-75% thermodynamic losses, converting electricity back into chemical fuels loses 60-70% of the energy in the process. Converting solar power into natural gas only to burn it in a gas turbine power plant could help with long term seasonal energy storage, but it is so much less cost competitive than other ways to stabilize electricity supply that we should expect it only in niche cases.
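Multiplying the two loss ranges quoted above shows why power-to-gas-to-power is a poor storage strategy; this sketch just takes the endpoints of the post’s ranges:

```python
# Round-trip efficiency of electricity -> synthetic fuel -> electricity,
# using the loss ranges in the post: power-to-fuel loses 60-70% (so is
# 30-40% efficient) and fuel-to-power loses 45-75% (25-55% efficient).
def round_trip(power_to_fuel_eff, fuel_to_power_eff):
    return power_to_fuel_eff * fuel_to_power_eff

worst = round_trip(0.30, 0.25)  # pessimistic ends of both ranges
best = round_trip(0.40, 0.55)   # optimistic ends
print(f"round trip keeps {worst:.1%} to {best:.1%} of the energy")
```

Only about 8-22% of the original electricity survives the round trip, compared with roughly 90% for batteries, so synthetic fuel storage makes sense only where batteries cannot reach, such as seasonal timescales.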

But what of other uses of carbon-based fuels? In the US, roughly twice as much energy is consumed by transportation, industry, and other uses as by direct electrical generation. Electrification of cars and trucks proceeds apace, but other, more fuel-hungry forms of transport, including aviation, are harder to convert. Fuel uses for high temperature industry will continue to demand non-electrical processes. In short, industry will transition to purely electrical energy where electricity is the cheaper option, but not otherwise.


13 Quads of electrical consumption in the US will require perhaps 50 Quads of solar generation, profitable deployment of batteries, and no further miracles as displacement occurs organically over the next 10-20 years. 70 Quads of fossil fuel consumption will be displaced by about 240 Quads of solar generation, and there will be a steep price incentive to enable this displacement.

In the US, we are anticipating a 6-10x demand increase once solar costs cross the critical threshold.


What is the solar cost threshold of interest? One barrel of oil contains about 1.7 MWh of chemical energy. Synthesizing a barrel of oil requires about 5.7 MWh of electricity at 30% conversion efficiency. Crude oil prices are between $60 and $100/barrel, indicating cost parity at between $10 and $17/MWh. There are already solar farms installed in some places that sell power at these prices, and between now and 2030 solar costs should come down at least another 60%.
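The parity figures follow directly from the numbers in the excerpt; this sketch just makes the arithmetic explicit:

```python
# Cost-parity arithmetic from the post: a barrel of oil holds ~1.7 MWh
# of chemical energy, and at 30% conversion efficiency synthesizing it
# takes ~5.7 MWh of electricity.
BARREL_MWH = 1.7
CONVERSION_EFF = 0.30
MWH_PER_BARREL = BARREL_MWH / CONVERSION_EFF  # ~5.7 MWh of electricity

def parity_electricity_price(oil_price_per_barrel):
    """$/MWh at which synthetic crude matches the given oil price."""
    return oil_price_per_barrel / MWH_PER_BARREL

print(round(parity_electricity_price(60), 1))   # low end of the oil range
print(round(parity_electricity_price(100), 1))  # high end
```

At $60/barrel the break-even electricity price is about $10.6/MWh, and at $100/barrel about $17.6/MWh, matching the $10-17/MWh range quoted above.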

One can build a commuter rail network, an intercity network, or a point-to-point HSR line

Tuesday, November 15th, 2022

Rail systems are, by their nature, one dimensional, Casey Handmer explains:

Any disruption on a rail line shuts down the entire line, imposing high maintenance costs on an entire network to ensure reliable uptime. To add a destination to a network, an entire line must be graded and constructed from the existing network, and even then it will be direct to almost nowhere.

Contrast this with aircraft. There are 15,000 airports in the US. Any but the largest aircraft can fly to any of these airports. If I build another airport, I have added 15,000 potential connections to the network. If I build another rail terminal and branch line, at significantly greater cost than an airstrip, I have added only one additional connection to the network.
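The asymmetry in the airport example is just the arithmetic of fully connected networks, sketched here (function names are illustrative):

```python
# Network-effect arithmetic behind the airport example: in a network
# where any node can reach any other, adding one node to n existing
# nodes creates n new pairwise connections, while a new rail terminal
# on a branch line adds just one.
def connections_added(existing_nodes):
    return existing_nodes

def total_pairs(n):
    return n * (n - 1) // 2

print(connections_added(15_000))  # one new airport: 15,000 new links
print(total_pairs(15_000))        # all possible airport pairs
```

With 15,000 airports there are over 112 million possible city pairs, and each new airport adds 15,000 more, whereas each new rail branch adds exactly one connection at far greater cost.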

Roads and trucks are somewhere between rail and aircraft. The road network largely already exists everywhere, and there aren’t any strict gauge restrictions, mandatory union labor requirements, obscure signaling standards, or weird 19th century incompatible ownership structures. Damage or obstruction isn’t a showstopper, as trucks have two dimensions of freedom of movement, and can drive around an obstacle. In Los Angeles during the age of streetcars, a fire anywhere in the city would result in water hoses crossing the street from hydrant to firetruck, and then the network ground to a halt because steel wheels can’t cross a hose or surmount a temporary hump!

California’s High-Speed Rail (HSR) project had to make too many promises it couldn’t keep:

Routing HSR on the east side of the Central Valley via Bakersfield and Modesto means those cities can have a station, but frequent service means that most trains have to stop there, and each stop adds 20 minutes to the travel time just to slow down and speed back up. Alternatively, the stations and their railway corridors are extremely expensive city decorations that help no-one, because the trains, dedicated to a high-speed SF-LA shuttle, never stop. Because they are trains, we can’t have both. With aircraft, we could have smaller, more frequent commercial flights offering direct service to dozens of destinations from both cities. But rail has relatively narrow limits on train size and frequency, meaning that any route will be both congested at peak times and under-utilized for much of the rest of the day.

Serving peripheral population centers in California is a nice thing to do, but aircraft pollution from Modesto is not driving global warming. Car traffic from Modesto would hardly overwhelm the Interstate 5. HSR minimizes financial losses when it serves large population centers with high speed direct services. By failing to make the political case for serving the main mission, the CA HSR project adopted numerous unnecessary marginal requirements which added so much cost that the project is unlikely to succeed. Even if the money materializes and the project is completed, the train will be so slow that it will hardly impact aircraft demand, so expensive that it will be unable to operate without substantial subsidies, and so limited in throughput that it will hardly even alleviate traffic from LA’s outer dormitory suburbs.

In other words, one can build a commuter rail network, an intercity network, or a point-to-point HSR line, but forcing all three usage modes into the same system cannot succeed.

A power-to-weight ratio of 10 screams possibility

Monday, November 14th, 2022

Electric aircraft have certain advantages and disadvantages, Casey Handmer notes:

Advantages include mechanical simplicity and reliability, reduced noise, reduced cost, increased efficiency, and reduced engine weight. The major disadvantage is that battery energy density is still, at best, about 50 times lower than that of gasoline. Even factoring in other efficiency gains, electric aircraft have greatly reduced flying time and range.

The underlying reason that I believe electric aircraft can break the sound barrier is that electric motors can deliver far higher power-to-weight ratios than piston engines, jets, or turbines. The F-4 Phantom is a textbook example of high thrust, being able to (just) achieve a vertical climb. In contrast, for $100 I can buy a racing drone that can accelerate vertically at 10 gs. There are other factors at play but a power-to-weight ratio of 10 screams possibility.

In terms of fundamental physical limits, let’s consider the Concorde. While most fighter jets can fly supersonic for at most a few minutes, the Concorde couldn’t do in-flight refueling and had to cross the Atlantic in a single hop. It could cruise at Mach 2 for 201 minutes! Let’s say that when battery energy density and electric motor efficiency are factored in, electric systems with present technology would have 10x less range. Still, an electric Concorde could fly for 20 minutes, covering almost 450 miles. That’s more range than a Tesla!


Of course it should be possible to develop a better configuration than a Concorde clone, but it’s an interesting starting point. In particular, many supersonic aircraft use delta wings because of relatively consistent lift characteristics over a range of speeds. It’s not that Concorde needs that enormous wing to fly at Mach 2 at 60,000 feet. Concorde needs the huge, draggy wing to fly slowly enough to land on a runway. But electric aircraft can deliver the necessary power and control to take off and land vertically (VTOL) like a helicopter, obviating the need for much wing at all.

Before diving into the specifics of different subsystems, I will motivate an example point design by appealing to the obvious. A supersonic electric aircraft must have a lot of thrust and minimal drag. When we think about what it might look like, the F-104 Starfighter comes to mind. Long, pointy, and with the barest minimum of a wing.


Ordinarily, fast planes use jet engines for propulsion. Their compressor stages operate at subsonic speed, so all supersonic jets have complex intake systems designed to decelerate inrushing air with a series of shocks before it reaches the turbine. Building a turbine to ingest a supersonic stream seems like a recipe for disaster. Jets need subsonic flow because combustion typically occurs subsonically. Electric propellers have no such constraint, nor do they care that 80% of the atmosphere isn’t oxygen.

Most of the time, the road is far too big, and the rest of the time, it’s far too small

Sunday, November 13th, 2022

Casey Handmer did a bunch of transport economics when he worked at Hyperloop:

Let’s not bury the lede here. As pointed out in The Original Green blog, the entire city of Florence, in Italy, could fit inside one Atlanta freeway interchange. For centuries Florence was one of the largest, most powerful, and most culturally important cities in Europe, with a population exceeding 100,000 people. For readers who have not yet visited this incredible city: one can walk, at a fairly leisurely pace, from one side to the other in 45 minutes.


There are thousands of cities on Earth and not a single one where mass car ownership hasn’t led to soul-destroying traffic congestion.

Cars are both amazing and terrible:

Imagine there existed a way to move people, children, and almost unlimited quantities of cargo point to point, on demand, using an existing public network of graded and paved streets practically anywhere on Earth, in comfort, style, speed, and safety. Practically immune to weather. Operable by nearly any adult with only basic training, regardless of physical (dis)ability. Anyone who has made a habit of camping on backpacking trips knows well the undeniable luxury of sitting down in air-conditioned comfort and watching the scenery go by. At roughly $0.10/passenger mile, cars are also incredibly cheap to operate.


Some American cities have nearly 60% of their surface area devoted to cars, and yet they are the most congested of all. Would carving off another 10% of land, worth trillions in unimproved value alone, solve the problem? No. According to simulations I’ve run professionally, latent demand for surface transport in large cities exceeds supply by a factor of 30. Not 30%. 3000%. That is, Houston could build freeways to every corner of the city 20 layers deep and they would still suffer congestion during peak hours.

Why is that? Roads and freeways are huge, and expensive to build and maintain, but they actually don’t move very many people around. Typically peak capacity is about 1000 vehicles per lane per hour. In most cities, that means 1000 people/lane/hour. This is a laughably small number. All the freeways in LA over the four hour morning peak move perhaps 200,000 people, or ~1% of the overall population of the city. 30x capacity would enable 30% of the population to move around simultaneously.
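The throughput numbers above can be checked with a few lines of arithmetic. Note that the 50 lane-equivalents figure in this sketch is back-solved to reproduce the post’s ~200,000-person estimate; it is an illustration, not a count of LA’s actual freeway lanes, most of which run below theoretical capacity at peak.

```python
# Freeway throughput arithmetic from the post: peak capacity is about
# 1000 vehicles (roughly 1000 people, at typical occupancy) per lane
# per hour.
def people_moved(lane_equivalents, hours=4, per_lane_per_hour=1000):
    """People moved over a peak window at full per-lane capacity."""
    return lane_equivalents * hours * per_lane_per_hour

print(people_moved(50))       # ~200,000 people over a 4-hour peak
print(people_moved(50) * 30)  # the post's 30x latent-demand scenario
```

Even scaled up 30-fold, that is 6 million trips in a four-hour window, which gives a feel for why "just build more lanes" cannot close a 3000% demand gap.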


Spacing between the bicycles, while underway, is a few meters, compared to 100 m for cars with a 3.7 m lane width. Bicycles and pedestrians take up roughly the same amount of space.


Like a lot of public infrastructure, the cost comes down to patterns of utilization. For any given service, avoiding congestion means building enough capacity to meet peak demand. But revenue is a function of average demand, which may be 10x lower than the peak. This problem occurs in practically all areas of life that involve moving or transforming things. Roads. Water. Power. Internet. Docks. Railways. Computing. Organizational structures. Publishing. Tourism. Engineering.

This effect is intuitively obvious for roads. Most of the time, the roads in my sleepy suburb of LA are lifeless expanses of steadily crumbling asphalt baking in the sun. The adjacent houses command property prices as high as $750/sqft, and yet every house has half a basketball court’s worth of nothing just sitting there next to it. Come peak hour, the road is now choked with cars all trying to get home, because even half a basketball court per house isn’t enough to fit all the cars that want to move there at that moment. And of an evening, onstreet parking is typically overwhelmed because now every car, which spends >95% of its life empty and unused, now needs 200 sqft of kerb to hang out. Most of the time, the road is far too big, and the rest of the time, it’s far too small.

People often underestimate the cost of having resources around that they aren’t currently using. And since our culture expects roads and parking to be limitless, available, and free, we can’t rely on market mechanisms to correctly price and trade the cost. Seattle counted how many parking spaces were in the city and came up with 1.6 million. That’s more than five per household! Obviously most of them are vacant most of the time, just sitting there consuming space, and yet there will never be enough when they are needed!

TurboTax for customs paperwork

Saturday, November 12th, 2022

Ryan Petersen’s entire life seems to be a series of entrepreneurial experiments in ferrying items from Point A to Point B, culminating in Flexport:

The son of entrepreneurs, Petersen earned pocket money delivering sodas to his mother’s food safety business. After graduating from college in 2002, he worked alongside his older brother, David, re-selling Chinese scooters and motorcycle parts in the United States. As that business gathered steam, the younger Petersen moved to China in 2005 to monitor local operations. The disorganization the duo encountered inspired their next company.

In 2007, Ryan Petersen headed to Columbia for business school. The same year, he, David, and Michael Kanko – one of David’s former roommates – started a new endeavor: ImportGenius. The business collects data associated with global trade, organizing import and export records. This information is extremely useful for those searching for suppliers within a specific industry or looking for better visibility into a competitor’s supply chain. Over the following six years, the Brothers Petersen and Kanko developed ImportGenius into a profitable business, albeit one with a capped upside. Today, it is under Kanko’s stewardship and reportedly does millions in revenue.

Recognizing ImportGenius’s limitations and feeling ready for a new challenge, David Petersen applied to Y Combinator in 2013 with BuildZoom, a platform to initiate and manage the home remodeling process. Ryan reportedly “grabbed an air mattress and tagged along.”

Rather than becoming part of David’s startup, Ryan spent his stint in California developing an idea of his own. While he initially conceived of it being an extension of ImportGenius, he soon realized it was a much larger idea than simply searching trade documentation – trade itself was broken. After making his pitch for a “TurboTax for customs paperwork” in the spring of 2013, he was accepted into the following year’s Y Combinator batch. His acceptance afforded him the chance to work under the mentorship of the accelerator’s founder, Paul Graham.

When I asked Petersen what it had been like working with arguably one of the most influential thinkers of the last two decades, he noted that Graham remained an active counselor before highlighting his particular genius. “Paul is probably the best in the world at asking what’s possible rather than what’s likely.”

In Petersen, Graham saw someone willing to dream audaciously and endure discomfort to bring those dreams to reality. “Ryan is an armor-piercing shell,” Graham previously commented, “a founder who keeps going through obstacles that would make other people give up.”

With Graham’s support and Y Combinator’s signaling power, Petersen ended his time at the accelerator by closing a $4 million seed round with backing from Initialized Capital and Rugged Ventures.

In the years that followed, Petersen succeeded in expanding Flexport’s scope and growing revenue. What began as an ambition to improve global trade through a smoother customs process transformed into a fully-fledged freight forwarder with software at its heart.

It came with bumps in the road. Perhaps the largest arrived after raising $1 billion from Softbank in 2019. As well profiled by Forbes, Petersen revved up the company’s hiring – and burn rate – as if Masayoshi Son’s pockets would remain ever-full. After WeWork’s disastrous collapse, Softbank changed its approach, slowing investments and forcing Flexport to adjust. The company cut 3% of its team (fifty employees) and shifted from hypergrowth to chasing profitability.

It wouldn’t take long. The pandemic set shipping prices skyrocketing, pushing Flexport to a profit of $37 million. The company achieved new relevance during this period, with Petersen becoming an influential voice on various logistical crises. Though Flexport briefly looked to be on rocky footing a couple of years earlier, it entered 2022 with earned self-assurance and $3.2 billion in annual revenue.