A power-to-weight ratio of 10 screams possibility

Monday, November 14th, 2022

Electric aircraft have certain advantages and disadvantages, Casey Handmer notes:

Advantages include mechanical simplicity and reliability, reduced noise, reduced cost, increased efficiency, and reduced engine weight. The major disadvantage is that battery energy density is still, at best, about 50 times less than gasoline. Even factoring in other efficiency gains, electric aircraft have greatly reduced flying time and range.

The underlying reason that I believe electric aircraft can break the sound barrier is that electric motors can deliver far higher power-to-weight ratios than piston engines, jets, or turbines. The F-4 Phantom is a textbook example of high thrust, being able to (just) achieve a vertical climb. In contrast, for $100 I can buy a racing drone that can accelerate vertically at 10 gs. There are other factors at play but a power-to-weight ratio of 10 screams possibility.

In terms of fundamental physical limits, let’s consider the Concorde. While most fighter jets can fly supersonic for at most a few minutes, the Concorde couldn’t do in-flight refueling and had to cross the Atlantic in a single hop. It could cruise at Mach 2 for 201 minutes! Let’s say that when battery energy density and electric motor efficiency are factored in, electric systems with present technology would have 10x less range. Still, an electric Concorde could fly for 20 minutes, covering almost 450 miles. That’s more range than a Tesla!
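As a quick sanity check of the arithmetic in that passage (the speed figure is my assumption, not Handmer's: Mach 2 at 60,000 feet is roughly 1,320 mph, since the speed of sound up there is near 660 mph):

```python
# Back-of-the-envelope check on the "electric Concorde" range claim.
cruise_mph = 1320                         # ~Mach 2 at cruise altitude (assumed)
concorde_minutes = 201                    # supersonic cruise time from the quote
electric_minutes = concorde_minutes / 10  # assumed 10x range penalty

range_miles = cruise_mph * electric_minutes / 60
print(f"{electric_minutes:.0f} min at Mach 2 ≈ {range_miles:.0f} miles")
# ≈ 440 miles, consistent with "almost 450 miles"
```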


Of course it should be possible to develop a better configuration than a Concorde clone, but it’s an interesting starting point. In particular, many supersonic aircraft use delta wings because of relatively consistent lift characteristics over a range of speeds. It’s not that Concorde needs that enormous wing to fly at Mach 2 at 60,000 feet. Concorde needs the huge, draggy wing to fly slowly enough to land on a runway. But electric aircraft can deliver the necessary power and control to take off and land vertically (VTOL) like a helicopter, obviating the need for much wing at all.

Before diving into the specifics of different subsystems, I will motivate an example point design by appealing to the obvious. A supersonic electric aircraft must have a lot of thrust and minimal drag. When we think about what it might look like, the F-104 Starfighter comes to mind. Long, pointy, and with the barest minimum of a wing.


Ordinarily, fast planes use jet engines for propulsion. Their compressor stages operate at subsonic speed, so all supersonic jets have complex intake systems designed to decelerate inrushing air through a series of shocks before it reaches the compressor. Building a turbine engine to ingest a supersonic stream ordinarily seems like a recipe for disaster. Jets need subsonic flow because combustion typically occurs subsonically. Electric propellers have no such constraint, and nor do they care that 80% of the atmosphere isn’t oxygen.

Most of the time, the road is far too big, and the rest of the time, it’s far too small

Sunday, November 13th, 2022

Casey Handmer did a bunch of transport economics when he worked at Hyperloop:

Let’s not bury the lede here. As pointed out in The Original Green blog, the entire city of Florence, in Italy, could fit inside one Atlanta freeway interchange. For centuries it was one of the most powerful, culturally important, and largest cities in Europe, with a population exceeding 100,000 people. For readers who have not yet visited this incredible city, one can walk, at a fairly leisurely pace, from one side to the other in 45 minutes.


There are thousands of cities on Earth and not a single one where mass car ownership hasn’t led to soul-destroying traffic congestion.

Cars are both amazing and terrible:

Imagine there existed a way to move people, children, and almost unlimited quantities of cargo point to point, on demand, using an existing public network of graded and paved streets practically anywhere on Earth, in comfort, style, speed, and safety. Practically immune to weather. Operable by nearly any adult with only basic training, regardless of physical (dis)ability. Anyone who has made a habit of camping on backpacking trips knows well the undeniable luxury of sitting down in air-conditioned comfort and watching the scenery go by. At roughly $0.10/passenger mile, cars are also incredibly cheap to operate.


Some American cities have nearly 60% of their surface area devoted to cars, and yet they are the most congested of all. Would carving off another 10% of land, worth trillions in unimproved value alone, solve the problem? No. According to simulations I’ve run professionally, latent demand for surface transport in large cities exceeds supply by a factor of 30. Not 30%. 3000%. That is, Houston could build freeways to every corner of the city 20 layers deep and they would still suffer congestion during peak hours.

Why is that? Roads and freeways are huge, and expensive to build and maintain, but they actually don’t move very many people around. Typically peak capacity is about 1000 vehicles per lane per hour. In most cities, that means 1000 people/lane/hour. This is a laughably small number. All the freeways in LA over the four hour morning peak move perhaps 200,000 people, or ~1% of the overall population of the city. 30x capacity would enable 30% of the population to move around simultaneously.
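The numbers in that paragraph hang together; here is a quick check, where the metro-area population of roughly 19 million is my assumption (the quote just says "the city"):

```python
# Implied freeway lane count and population share from the quoted figures.
people_per_lane_hour = 1000     # typical peak lane capacity, per the quote
peak_hours = 4                  # length of the morning peak
people_moved = 200_000          # quoted estimate for all LA freeways

# Effective number of lanes needed to carry that flow:
effective_lanes = people_moved / (people_per_lane_hour * peak_hours)

# Share of the population moved, assuming greater LA ~ 19 million:
la_population = 19_000_000
fraction = people_moved / la_population

print(effective_lanes)          # 50.0 lane-equivalents
print(f"{fraction:.1%}")        # ~1.1%, i.e. "~1% of the overall population"
# 30x capacity would move about 30 * 1.1% ≈ 32% of the population.
```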


Spacing between the bicycles, while underway, is a few meters, compared to 100 m for cars with a 3.7 m lane width. Bicycles and pedestrians take up roughly the same amount of space.
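The spacing figures translate directly into flow rates, since flow per lane is just speed divided by headway. The travel speeds here are my assumptions for illustration; note that the car result reproduces the ~1,000 vehicles/lane/hour figure quoted earlier:

```python
# Flow per lane = speed / spacing, converted to vehicles (or bikes) per hour.
def flow_per_hour(speed_kmh: float, spacing_m: float) -> float:
    return speed_kmh * 1000 / spacing_m

cars  = flow_per_hour(100, 100)  # 100 km/h at 100 m spacing -> 1000/hour
bikes = flow_per_hour(15, 3)     # 15 km/h at ~3 m spacing   -> 5000/hour
print(cars, bikes)
```

And since a bicycle lane is far narrower than a 3.7 m car lane, the throughput advantage per meter of road width is larger still.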


Like a lot of public infrastructure, the cost comes down to patterns of utilization. For any given service, avoiding congestion means building enough capacity to meet peak demand. But revenue is a function of average demand, which may be 10x lower than the peak. This problem occurs in practically all areas of life that involve moving or transforming things. Roads. Water. Power. Internet. Docks. Railways. Computing. Organizational structures. Publishing. Tourism. Engineering.
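The peak-versus-average squeeze can be made concrete with a toy calculation (all numbers here are arbitrary assumptions, just to show the shape of the problem):

```python
# Capacity (and capital cost) scales with peak demand,
# but revenue scales with average demand.
peak_demand = 10_000           # units/hour the system must be built for
average_demand = 1_000         # units/hour it actually serves (10x lower)
cost_per_unit_capacity = 100   # capex per unit/hour of capacity

capex = peak_demand * cost_per_unit_capacity

# Effective capital cost per unit actually served:
cost_per_served = capex / average_demand
print(cost_per_served)  # 1000.0 — 10x the 100 a fully utilized system would pay
```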

This effect is intuitively obvious for roads. Most of the time, the roads in my sleepy suburb of LA are lifeless expanses of steadily crumbling asphalt baking in the sun. The adjacent houses command property prices as high as $750/sqft, and yet every house has half a basketball court’s worth of nothing just sitting there next to it. Come peak hour, the road is now choked with cars all trying to get home, because even half a basketball court per house isn’t enough to fit all the cars that want to move there at that moment. And of an evening, onstreet parking is typically overwhelmed because now every car, which spends >95% of its life empty and unused, now needs 200 sqft of kerb to hang out. Most of the time, the road is far too big, and the rest of the time, it’s far too small.

People often underestimate the cost of having resources around that they aren’t currently using. And since our culture expects roads and parking to be limitless, available, and free, we can’t rely on market mechanisms to correctly price and trade the cost. Seattle counted how many parking spaces were in the city and came up with 1.6 million. That’s more than five per household! Obviously most of them are vacant most of the time, just sitting there consuming space, and yet there will never be enough when they are needed!
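The Seattle figure checks out on the back of an envelope. The household count here is my assumption (roughly 300,000, in the right range for the city proper), and the 200 sqft per space is the figure used in the quote:

```python
# Sanity check on Seattle's parking inventory.
parking_spaces = 1_600_000
households = 300_000            # assumed; roughly right for Seattle proper

spaces_per_household = parking_spaces / households
print(f"{spaces_per_household:.1f}")        # ~5.3, i.e. "more than five"

# Land consumed at ~200 sqft per space:
sqft_per_space = 200
total_sqft = parking_spaces * sqft_per_space
SQFT_PER_SQ_MILE = 5280 ** 2
print(f"{total_sqft / SQFT_PER_SQ_MILE:.1f} sq mi")  # ~11.5 square miles
```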

TurboTax for customs paperwork

Saturday, November 12th, 2022

Ryan Petersen’s entire life seems to be a series of entrepreneurial experiments in ferrying items from Point A to Point B, culminating in Flexport:

The son of entrepreneurs, Petersen earned pocket money delivering sodas to his mother’s food safety business. After graduating from college in 2002, he worked alongside his older brother, David, re-selling Chinese scooters and motorcycle parts in the United States. As that business gathered steam, the younger Petersen moved to China in 2005 to monitor local operations. The disorganization the duo encountered inspired their next company.

In 2007, Ryan Petersen headed to Columbia for business school. The same year, he, David, and Michael Kanko – one of David’s former roommates – started a new endeavor: ImportGenius. The business collects data associated with global trade, organizing import and export records. This information is extremely useful for those searching for suppliers within a specific industry or looking for better visibility into a competitor’s supply chain. Over the following six years, the Brothers Petersen and Kanko developed ImportGenius into a profitable business, albeit one with a capped upside. Today, it is under Kanko’s stewardship and reportedly does millions in revenue.

Recognizing ImportGenius’s limitations and feeling ready for a new challenge, David Petersen applied to Y Combinator in 2013 with BuildZoom, a platform to initiate and manage the home remodeling process. Ryan reportedly “grabbed an air mattress and tagged along.”

Rather than becoming part of David’s startup, Ryan spent his stint in California developing an idea of his own. While he initially conceived of it being an extension of ImportGenius, he soon realized it was a much larger idea than simply searching trade documentation – trade itself was broken. After making his pitch for a “TurboTax for customs paperwork” in the spring of 2013, he was accepted into the following year’s Y Combinator batch. His acceptance afforded him the chance to work under the mentorship of the accelerator’s founder, Paul Graham.

When I asked Petersen what it had been like working with arguably one of the most influential thinkers of the last two decades, he noted that Graham remained an active counselor before highlighting his particular genius. “Paul is probably the best in the world at asking what’s possible rather than what’s likely.”

In Petersen, Graham saw someone willing to dream audaciously and endure discomfort to bring those dreams to reality. “Ryan is an armor-piercing shell,” Graham previously commented, “a founder who keeps going through obstacles that would make other people give up.”

With Graham’s support and Y Combinator’s signaling power, Petersen ended his time at the accelerator by closing a $4 million seed round with backing from Initialized Capital and Rugged Ventures.

In the years that followed, Petersen succeeded in expanding Flexport’s scope and growing revenue. What began as an ambition to improve global trade through a smoother customs process transformed into a fully-fledged freight forwarder with software at its heart.

It came with bumps in the road. Perhaps the largest arrived after raising $1 billion from Softbank in 2019. As well profiled by Forbes, Petersen revved up the company’s hiring – and burn rate – as if Masayoshi Son’s pockets would remain ever-full. After WeWork’s disastrous collapse, Softbank changed its approach, slowing investments and forcing Flexport to adjust. The company cut 3% of its team (fifty employees) and shifted from hypergrowth to chasing profitability.

It wouldn’t take long. The pandemic set shipping prices skyrocketing, pushing Flexport to a profit of $37 million. The company achieved new relevance during this period, with Petersen becoming an influential voice on various logistical crises. Though Flexport briefly looked to be on rocky footing a couple of years earlier, it entered 2022 with earned self-assurance and $3.2 billion in annual revenue.

The classic dungeon crawl promotes a play style that is very cautious, methodical, and calculated

Friday, November 11th, 2022

Yora of the Spriggan’s Den feels that classic dungeon-crawling is a fascinating and fun form of gameplay, but the archetypical dungeon crawl is not a good basis for a Sword & Sorcery campaign:

The classic dungeon crawl, with its complex underground labyrinths, countless traps, secret doors, and numerous small hidden stashes of treasure all over the place, naturally promotes a play style that is very cautious, methodical, and calculated. It encourages players to progress slowly and with care, to examine all the small and possibly insignificant details, and to take every precaution before following through with well-thought-out plans. In a well designed dungeon, this can be hugely exciting and thrilling. But it’s a kind of excitement and tension that is very different from the style of Sword & Sorcery. That style is all about fearless and even reckless initiative, where fortune favors the bold. Heroes certainly rely heavily on cunning and trickery to take down foes much stronger than themselves, but often these are things improvised in the heat of the action, more of a gamble than a plan. In a Sword & Sorcery themed campaign, players spending a lot of time poring over maps and rummaging through large boxes of tools to disable a dangerous mechanism with a minimum of risk is something you want to avoid, not to have as the default approach to playing the game.

This contrast between methodical attacks and dashing ones goes well beyond fantasy roleplaying games. It’s arguably the key distinction between stereotypical Great War tactics and the newer stormtroop tactics that commanders like Rommel used to overwhelm larger forces in strong positions.

(In 1989, then-Commandant of the Marine Corps Alfred M. Gray reenergized the post-Vietnam Marine Corps with the publication of Warfighting, which advocated that more dashing style.)

If an adventuring party takes its time, then the dungeon full of monsters should have time to organize and attack the adventuring party en masse, instead of getting defeated in detail.

(Hat tip to Castalia House.)

It’s a book whose metatextual enigmas attracted credulous postmodernists in hordes

Thursday, November 10th, 2022

The Narrative of Arthur Gordon Pym of Nantucket is the Voynich manuscript of American literature:

As the only novel written by Edgar Allan Poe, its historical importance is unquestionable; as a literary work, it is mystifying. Its catalog of atrocities and incidents, which includes cannibalism, drownings, ax murders, shootings (with muskets and pistols), a ghost ship crammed with rotting corpses, a shark feeding frenzy, a landslide, and a mass-casualty explosion, combined with its nightmare symbolism, has inspired both interpretation and incredulity.

Long forgotten after Poe had been buried both literally (in 1849) and critically, Pym moldered in ragged omnibus editions for nearly 100 years before W. H. Auden and the New Critics resurrected it for the Age of Academia. Since then, every subsequent critical school has interpreted Pym through its own narrow aesthetic, or increasingly political, perspective. It is a metaphor for the creative imagination; a meditation on God and Providence; a pre-Freudian return-to-the-womb allegory; a rite-of-passage myth; a parable about race. The most recent critical trends generally focus on Pym’s self-reflexive qualities. It’s a book whose metatextual enigmas attracted credulous postmodernists in hordes from Yale to the University of California, Irvine.

But the real mystery surrounding Pym, aside from its shocking and indeterminate ending, is whether it is a flawed work produced by an author under duress or a conscious literary hoax. This is, after all, a novel that begins with a preface from the narrator, “Arthur Gordon Pym,” that stresses the implausibility of the events he is about to recount and ends with a postscript from Edgar Allan Poe, the “editor,” who refuses to complete the story because of his “disbelief in the entire truth of the latter portions of the narrative.”

Not long after the boom in Pym studies began, a few sharp-eyed critics realized that Poe, with his long history of hoaxes and pranks (to go along with perpetual hardship) had produced something dubious.


Understanding Pym is impossible without delineating the personal challenges Poe faced when he composed it. Starving, with a teenage wife to support, and unemployed during one of the worst depressions America had yet seen, Poe needed to deliver a manuscript as soon as possible, in hopes of a quick payout. That meant repurposing material from the Southern Literary Messenger, plagiarizing from several nonfiction books, and possibly, fusing two different narratives and passing them off as one.

(Hat tip to Castalia House.)

The history of horror is a history of what we aren’t all that frightened of anymore

Wednesday, November 9th, 2022

The history of horror is a history of what we aren’t all that frightened of anymore:

Horror began, appropriately enough, during the Reign of Terror. Religion was officially outlawed in France. The Catholic Church had been forced out of the country, and if you were going to worship anything at all, it had to be the Goddess of Reason. Graveyards were filled with dead people who were, according to the First Republic, gone forever. They were in an eternal sleep from which there would be no waking.

Into this government-mandated spiritual vacuum stepped Étienne-Gaspard Robert, the creator of the very first horror show: The Phantasmagoria.

The Phantasmagoria was a “Magic Lantern” show that combined sound effects and an eerie music score provided by Ben Franklin’s glass harmonica.

Robert, unlike his various conmen spiritualist predecessors, had to keep an eye out for militantly atheist authorities, so he was very clear about the fact that what his audience was watching was fiction. Ghosts and ghouls weren’t real and it was purely for entertainment.

And by all accounts, audiences found the Phantasmagoria utterly terrifying. Admittedly, Robert was careful to serve them punch laced with laudanum before the show started but that only goes so far. The fear was quite real. But now you can look at what was the most frightening thing in the world in its day and you just sort of shrug.

(Hat tip to Castalia House.)

At long last, we have created the Torment Nexus

Tuesday, November 8th, 2022

One year ago, Alex Blechman made this now-classic tweet:

They would verify the treaty without on-site inspections, using their own assets

Monday, November 7th, 2022

In 1972, the United States and Soviet Union signed the Anti-Ballistic Missile Treaty and the Interim Agreement, collectively known as SALT I:

This was an agreement by the two parties that they would verify the treaty without on-site inspections, using their own assets. Both sides also agreed not to interfere with these “national technical means.”

“National technical means” served as a euphemism for each country’s technical intelligence systems. Although these assets included ground, airborne, and other intelligence collection systems, the primary intelligence collectors for treaty verification were satellites, which both countries had been operating for over a decade, but neither country publicly discussed, certainly not with each other.


Surprisingly, there appears to have been little initial skepticism on the American side about the ability to verify strategic arms control treaties using satellites. In fact, there are indications that by the early 1970s their capabilities were overestimated, even as the people who developed and operated them worried about their limitations and about the gap between what outsiders believed the satellites could do and what they actually could.

I’ve mentioned before that I always assumed that spy satellites used TV cameras, and it was a real shock to learn that they didn’t start out that way:

The first successful American photo-reconnaissance mission took place in August 1960 as part of the CORONA program. CORONA involved orbiting satellites equipped with cameras and film and recovering that film for processing. The early satellites orbited for approximately a day before their film was recovered, and it could take several days for that film to be transported and processed before it could be looked at by photo-interpreters in Washington, DC. Although the system was cumbersome, the intelligence data produced by each CORONA mission was substantial, revealing facilities and weapons systems throughout the vast landmass of the Soviet Union.

CORONA’s images were low resolution, capable of revealing large objects like buildings, submarines, aircraft, and tanks, but not providing technical details about many of them. In 1963, the National Reconnaissance Office launched the first GAMBIT satellite, which took photographs roughly equivalent to those taken by the U-2 spyplane that could not penetrate Soviet territory. Both CORONA and GAMBIT returned their film to Earth in reentry vehicles. By 1966, CORONA was equipped with two reentry vehicles, and GAMBIT was equipped with one, increased to two reentry vehicles by August 1969. The existence of multiple reentry vehicles on satellites and missiles was to become a source of concern for NRO officials as new arms control treaties were negotiated.

The two satellites complemented each other: CORONA covered large amounts of territory, locating the targets, and GAMBIT took detailed photographs of a small number of them, enabling analysts to make calculations about their capabilities such as the range of a missile or the carrying capability of a bomber. These photographic reconnaissance satellites provided a tremendous amount of data about the Soviet Union. That data was combined with other intelligence, such as interceptions of Soviet missile telemetry, to produce assessments of Soviet strategic capabilities. Signals and communications intelligence, collected by American ground stations around the world as well as satellites operated by the NRO, also contributed to the overall intelligence collection effort.

By the mid-to-late 1960s, these intelligence collection systems, particularly the photo-reconnaissance satellites, had dramatically improved American understanding of Soviet strategic forces and capabilities. A 1968 intelligence report definitively declared, “No new ICBM complexes have been established in the USSR during the past year.” As a CIA history noted, “This statement was made because of the confidence held by the analysts that if an ICBM was there, then CORONA photography would have disclosed them.” This kind of declared confidence in the ability of satellite reconnaissance to detect Soviet strategic weapons soon proved key to signing arms control treaties.

Schooling is actually a net negative

Sunday, November 6th, 2022

The Slime Mold Time Mold crew think that Erik Hoel is right that historical tutoring was better than education today:

It’s no secret that school sux. It’s not that tutoring is good, it’s that mechanized schooling is really bad. If we got rid of formal 20th century K-12 education, and did homeschooling / unschooling / let kids work at the costco, we would get most of the benefits of tutoring without all the overhead and inequality.

Our personal educational philosophy is that, for the most part, the most important thing you can do for your students is expose them to things they wouldn’t have encountered otherwise. Sort of in the spirit of, you can lead a horse to water, but you can’t make him drink. So K-12 education gums up the works by making bad recommendations, having students spend a lot of time on mediocre stuff, and keeping them so busy they can’t follow up on the better recommendations from friends and family.

From this perspective, mechanized schooling is actually a net negative — it is worse than nothing, and if we just let kids run around hitting each other with sticks or whatever, we would get more geniuses.

Why does everybody lie about social mobility?

Thursday, November 3rd, 2022

Why does everybody lie about social mobility?, Peter Saunders asks:

The answer [to a growing concern that the UK was squandering vast pools of potential working-class talent that it could ill afford to lose], addressed in the 1944 Education Act, was to make all state-aided secondary schools, including grammar schools, free for all pupils. A new national examination — the ’11-plus’ — was introduced, and those scoring high-enough marks were selected for grammar schools, regardless of their parents’ means. From now on, children from different social class backgrounds would be given an equal opportunity to get to grammar schools. The only selection criterion was intellectual ability.

It didn’t take long, however, for critics to notice that children from middle-class homes were still out-competing those from working-class backgrounds in the 11-plus competition for grammar school places. The possibility that this might be because middle-class kids are on average brighter than working-class kids was ruled out from the start.


In 1965, the (privately-educated) Labour Education Secretary, Anthony Crosland, issued an instruction to all local education authorities to close down their grammar schools and replace them with ‘comprehensives’ which would be forbidden to select pupils by ability. Within a few years, all but 163 of nearly 1,300 grammar schools in the UK disappeared.


But very rapidly, the familiar pattern reappeared. Middle-class children clustered in disproportionate numbers in the higher streams of the comprehensive schools, and they continued to out-perform working-class children in post-16 examinations and university entry.

One response to this was to weaken or abolish streaming.


The minimum leaving age was raised to 16 in 1972 to force under-performing working-class children to stay in school longer, and when that didn’t make much difference to the attainment gap, the Blair government legislated in 2008 to force everyone to stay in education or training up until the age of 18. Yet still the social class imbalance in educational achievement persisted.


With nearly half of all youngsters getting degrees, more demanding employers started recruiting only from the top universities. Politicians responded to this by putting pressure on the top universities to admit more lower-class applicants.


Looking back over this sorry half-century history of educational reform and upheaval, we see that we have increased coercion (restricting school choice by parents, forcing kids to stay in education even if they don’t want to, limiting the autonomy of universities to select their own students), diluted standards (dumbing down GCSEs, A-levels and degrees), and undermined meritocracy (forcing universities and employers to favour applicants from certain kinds of backgrounds at the expense of others who may be better qualified). What we have conspicuously failed to do, however, is flatten social class differences in educational achievement.

Firm facts about Dyalhis’s life are few

Tuesday, November 1st, 2022

Will Oliver’s list of Golden Age of Sword & Sorcery stories starts with Robert E. Howard’s “The Shadow Kingdom” — the origin of both the sword and sorcery genre and the reptilian conspiracy theory — and continues with stories exclusively from Robert E. Howard and Clark Ashton Smith for the first 25 entries, before getting to “The Sapphire Goddess” by Nictzin Dyalhis:

Nictzin Wilstone Dyalhis (June 4, 1873–May 8, 1942) was an American chemist and short story writer who specialized in the genres of science fiction and fantasy.


Firm facts about Dyalhis’s life are few, as he coupled his limited output of fiction with a penchant for personal privacy, an avoidance of publicity, and intentional deception. Even his name is uncertain. His World War I draft registration card establishes his full name as Nictzin Wilstone Dyalhis, but it marks the earliest known appearance of this name. His first wife’s death certificate gives his first name as “Fred,” and he has been thought to have possibly altered his surname to Dyalhis from a more prosaic “Dallas” — in his stories, Dyalhis played with common spellings, so that “Earth” becomes Aerth and “Venus,” Venhez. According to L. Sprague de Camp, however, Dyalhis was his actual surname, inherited from his Welsh father, and his given name Nictzin was also authentic, bestowed on him due to his father’s fascination with the Aztecs.

His World War I draft registration card and 1920 Census record establish his birthdate as June 4, 1873, and his state of birth as Massachusetts. According to the 1920 census, his father was also born in Massachusetts, and his mother in Guatemala. But in the 1930 census he was reported to have been born about 1880 in Arizona to parents also born in that state. In bibliographic sources, his year of birth was usually cited (with a question mark) as 1879; Dziemianowicz gives it as 1880; and he was speculated to have been born in England — or Pima, Arizona.

Among the imaginative readers of his stories, Dyalhis acquired a reputation for possessing unusual abilities and an exotic history as an adventurer and world traveler. The known facts of his life are more prosaic, mostly centering around Pennsylvania and Maryland. At some time during his youth he lost one eye, as noted on his draft card. He worked as a box nailer in 1918, a chemist in 1920, a machinist in 1930, and a writer for magazines in 1940.