UNSW engineers have modified a conventional diesel engine to use a mix of hydrogen and a small amount of diesel

January 27th, 2023

Engineers at the University of New South Wales (UNSW) say they have successfully modified a conventional diesel engine to use a mix of hydrogen and a small amount of diesel, claiming their patented technology has cut carbon dioxide (CO2) emissions by more than 85%:

About 90% of fuel in the UNSW hybrid diesel engine is hydrogen but it must be applied in a carefully calibrated way. If the hydrogen is not introduced into the fuel mix at the right moment “it will create something that is explosive that will burn out the whole system,” Prof Kook explains.

He says that studies have shown that controlling the mixture of hydrogen and air inside the cylinder of the engine can help negate harmful nitrogen oxide emissions, which have been an obstacle to the commercialisation of hydrogen motors.

The Sydney research team believes that any diesel trucks and power equipment in the mining, transportation and agriculture sectors could be retrofitted with the new hybrid system in just a couple of months.

Prof Kook doubts the hybrid would be of much interest in the car industry though, where electric and hybrid vehicles are already advanced and replacing diesel cars.

However, he says Australia’s multibillion-dollar mining industry needs a solution for all its diesel-powered equipment as soon as possible.
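
A quick back-of-the-envelope check on those headline numbers (my arithmetic, not UNSW's): if the 90% figure is an energy share, and hydrogen combustion itself emits no CO2, simple proportionality predicts a cut of about 90%, right in line with the measured >85%:

```python
# Back-of-the-envelope only; assumes the 90% figure is an energy share and
# that burning hydrogen releases no CO2, so tailpipe CO2 scales with the
# diesel share of delivered fuel energy. Not from the UNSW paper.

DIESEL_CO2_G_PER_MJ = 74.0  # typical CO2 intensity of diesel fuel energy

def co2_cut(hydrogen_energy_share: float) -> float:
    """Fractional CO2 reduction versus running on diesel alone."""
    baseline = DIESEL_CO2_G_PER_MJ                                # pure diesel
    hybrid = (1.0 - hydrogen_energy_share) * DIESEL_CO2_G_PER_MJ  # diesel pilot only
    return 1.0 - hybrid / baseline

print(f"{co2_cut(0.90):.0%}")  # -> 90%; the measured >85% sits just under
                               # this ideal ceiling, as you'd expect
```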

The comparatively short lifespan of modern concrete is overwhelmingly the result of corrosion-induced failure

January 26th, 2023

Roman concrete’s ability to last for millennia is often said to put modern concrete to shame, but that comparison ignores that the overwhelming majority of modern concrete is reinforced concrete, Brian Potter explains, with some type of steel embedded in it:

Usually this is in the form of bars (rebar), but it might also be mesh, or fibers, or steel cable. Steel is stronger than concrete, particularly in tension (reinforcing steel has perhaps 10-15x the compressive strength of concrete, but more than 100x the tensile strength of concrete), and a comparatively small amount of steel can greatly increase the strength of a concrete element. By adding steel, you can make shallow concrete elements (beams, slabs, etc.) that can still span long distances and that wouldn’t be possible if the concrete were unreinforced.
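
To put those strength ratios in perspective, here's a minimal sketch with typical textbook values (my numbers, not Potter's), showing why even 1% of a cross-section in steel dominates a member's tensile capacity:

```python
# Typical illustrative values: ~3 MPa tensile strength for plain concrete,
# ~420 MPa yield strength for rebar (roughly the >100x ratio quoted above).
CONCRETE_TENSILE_MPA = 3.0
STEEL_YIELD_MPA = 420.0

section_mm2 = 300 * 500   # a 300 x 500 mm beam cross-section
steel_ratio = 0.01        # 1% of the section is rebar

concrete_kn = CONCRETE_TENSILE_MPA * section_mm2 * (1 - steel_ratio) / 1000
steel_kn = STEEL_YIELD_MPA * section_mm2 * steel_ratio / 1000

print(f"concrete can carry ~{concrete_kn:.0f} kN in tension")  # ~446 kN
print(f"the 1% of steel carries ~{steel_kn:.0f} kN")           # ~630 kN
```

One percent of the section in steel out-pulls the other ninety-nine percent in concrete, which is why shallow reinforced spans work at all.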

Concrete is also brittle, whereas steel is ductile — if a plain concrete element fails, it’s likely to fail suddenly without warning, whereas a steel element will (generally) stretch and sag significantly before it fails, absorbing a lot of energy in the process. This makes reinforced concrete fundamentally safer than unreinforced concrete — if you have a lot of warning before a structure fails, you have time to safely get out of the building. For this reason, structural concrete is often required by code to have some minimum amount of steel reinforcing in it, and concrete that might experience large sudden loads in unpredictable ways (such as from an earthquake) is required to have a LOT of additional reinforcing. Most buildings built in zones of very high seismicity aren’t actually designed to come through the earthquake undamaged — they’re merely designed to not catastrophically collapse so people can safely get out.

(Earthquake design might seem like something that you only need to worry about in a few places, but most of the US can theoretically see a surprisingly strong earthquake and the buildings must be designed accordingly.)

But while reinforcement provides a lot of benefits, it has drawbacks. The primary one is that, over time, the steel in concrete corrodes. This is the result of two mechanisms – chloride ions making their way through the concrete, and concrete absorbing CO2 over time (though the second one happens much more slowly). As the steel corrodes, it expands, putting internal pressure on the concrete, eventually resulting in cracking and spalls (chunks of concrete that have fallen off).

How quickly this happens depends on a lot of factors. Concrete exposed to weather or water will corrode faster than concrete that isn’t. Concrete where the rebar is farther from the surface of the concrete will last longer than concrete where the steel is closer to the surface. Concrete exposed to harsh chemicals such as salts or sulfates will corrode faster than concrete that isn’t.

The comparatively short lifespan of modern concrete is overwhelmingly the result of corrosion-induced failure. Unchecked, reinforced concrete exposed to the elements will often start to decay in a few decades or even less. Precast concrete parking garages, for instance, are exposed to a lot of weather, since they’re open-air structures and vehicles bring moisture and road salts inside them. And a precast garage will often have many exposed steel elements, since steel plates stitch the pieces of concrete together. A precast garage might have a design life of 50 years, and will often need very substantial repairs much earlier. Roman concrete, however, is unreinforced, and doesn’t have this failure mechanism.

This type of failure is exacerbated by the fact that modern concrete is designed to come up to strength very quickly, which results in numerous small cracks caused by shrinkage strains in the hardened concrete. These cracks make it easier for water to reach the steel, accelerating the process of corrosion. They also make the concrete more susceptible to other types of decay like freeze-thaw damage. Roman concrete, on the other hand, cured much more slowly.

If we wanted to build more durable concrete structures, the most important thing would be to remove or minimize this failure mechanism, and structures designed for long lives often do. Buddhist or Hindu temples, for instance, will use unreinforced concrete, or concrete with stainless steel rebar, and often have 1000-year design lives (though whether they will actually survive 1000 years is another question). Stainless steel rebar advocates like to trot out a concrete pier in Mexico built in 1941 with stainless steel rebar, which has needed no major repair work despite being in a highly corrosive environment.

[…]

Using unreinforced concrete dramatically limits the sort of construction you can do — even if the code allows it, you’re basically limited to only using concrete in compression. Without reinforcing, modern concrete buildings and bridges would be largely impossible.

Other methods of reducing reinforcement corrosion also have drawbacks, especially cost. Stainless steel rebar is four to six times as expensive as normal rebar. Epoxy-coated rebar (commonly used in bridge construction in the US) is also more expensive, and though it can slow down corrosion, it won’t stop it. Basalt rebar won’t corrode (as far as I know) but can apparently decay in other ways.

For a developer, it’s often tough to make the numbers work on costs added merely to extend a building’s potential lifespan. Well-made reinforced concrete that’s protected from the weather can last over a century, so the net present value of any additional lifespan beyond that is pretty low. It’s much more likely that the building will be torn down for other reasons long before the concrete fails.
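
That net-present-value point is easy to check with a one-line discounting formula; the 5% real discount rate here is my assumption, purely for illustration:

```python
# Present value of $1 of building value received N years from now,
# discounted at an assumed 5% real rate.
def present_value(amount: float, years: int, rate: float = 0.05) -> float:
    return amount / (1 + rate) ** years

for years in (25, 50, 100, 150):
    print(f"$1 in {years:>3} years is worth ${present_value(1.0, years):.3f} today")
# $1 in 100 years -> ~$0.008, so lifespan beyond the first century is
# worth almost nothing to whoever is paying for the building today
```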

Accounting is a wonderful tool for converting tautologies into useful information

January 25th, 2023

Accounting is a wonderful tool for converting tautologies into useful information:

Here, for example, is a tautology: when a company spends money, somebody receives that money. And here is a useful mental model that helps investors think about booms and busts, time industry cycles, and spot second- and third-order outcomes of news: one company’s expenditures are, very often, another company’s revenue.

[…]

Higher returns to capital are a subsidy for reinvestment, and economies require a lot of reinvestment to keep going. Roads, railroads, ports, airports, power plants, factories, and homes are all long-lived assets with high upfront costs. For a country to have a lot of them, a smaller share of national income has to go to consumption so a larger share can go to investment instead. Importantly, shifting more returns to capital does not necessarily make all the capitalists rich (though it can have that effect!). It means there’s a race to identify good investments fast since there’s more money chasing them, and when this kind of policy continues for too long, the wave of capital looking for a return can end up subsidizing spending that simply doesn’t make economic sense. For example, in China circa 1980, pretty much any piece of physical infrastructure was probably worth either fixing up or tearing down and rebuilding entirely, so the country got good returns from holding wages down while reinvesting the proceeds of exports. Now that China is a richer country, with lots of infrastructure, it’s harder to find good homes for incremental money — but the money continues to flow.

[…]

High-income workers tend to save more money, and their savings rate goes up when they experience windfall gains. Lower-income workers are usually scrimping, deferring some purchases, and missing out on things they’d like to spend on, so higher wages for them tend to increase consumer spending.

[…]

When fab utilization is low, new demand just means that existing fabs need to run extra shifts. But when utilization gets high enough, it means the world needs more fabs, and needs more $200m-apiece EUV lithography machines to fill them.

This tends to be the big takeaway from looking at the world from a supply-chain perspective. When there’s slack in the system, or an ability to immediately respond to incremental spending, we see a pretty steady impact on every link in the supply chain: a surprise 1% increase in datacenter spending produces a 1% increase in spending on datacenter chips, which also leads the replacement of chipmaking equipment to tick up by about 1% — not because additional equipment was needed to increase supply, but because more is in use, which means more will need to be replaced.

But when there isn’t slack in the system, a small incremental increase in final demand can produce massive changes in total production capacity. The rough way to approximate this is to look at the useful life of the relevant investment, invert it into a depreciation rate, and then compare changes in demand to that depreciation rate. So if there’s some kind of asset that lasts for 10 years, another way to look at it is that in a given year, 10% of those assets are getting replaced as they wear out. A 2% increase in demand for whatever those assets produce, if they’re all being used at full capacity, means a 20% increase in demand for the assets.
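
That approximation is simple enough to put in runnable form. The 10-year lifespan and 2% demand bump come from the passage above; the rest is my framing:

```python
# Capital-goods "accelerator": at full utilization, demand for new assets is
# replacement (stock / lifespan) plus expansion (growth in final demand).

def asset_demand_growth(lifespan_years: float, demand_growth: float) -> float:
    """Percentage jump in orders for the assets themselves, assuming full
    utilization, with baseline orders equal to steady-state replacement."""
    replacement_rate = 1 / lifespan_years          # 10-year life -> 10%/yr replaced
    total_rate = replacement_rate + demand_growth  # replacement + new capacity
    return total_rate / replacement_rate - 1

print(f"{asset_demand_growth(10, 0.02):.0%}")
# -> 20%: a 2% bump in final demand becomes a 20% jump in asset orders
```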

Having to extend the lifespan of older planes consumes money that could be used to acquire new aircraft

January 24th, 2023

Years of delays, cost overruns, and technical glitches with the F-35 have put the Pentagon in a dilemma:

If F-35s aren’t fit to fly in sufficient numbers, then older aircraft such as the F-16 must be kept in service to fill the gap. In turn, having to extend the lifespan of older planes consumes money that could be used to acquire new aircraft and results in aging warplanes that may not be capable of fulfilling their missions on the current battlefield.

[…]

The aircraft has been plagued by a seemingly endless series of bugs, including problems with its stealth coating, sustained supersonic flight, helmet-mounted display, excessive vibration from its cannon, and even vulnerability to being hit by lightning.

The military and Lockheed Martin have resolved some of those problems, but the cumulative effect of the delays is that the Air Force has had to shelve plans for the F-35 to replace the F-16, which now will keep flying until the 2040s.

[…]

The remarkable longevity of some aircraft — such as the 71-year-old B-52 bomber or the 41-year-old A-10 — tends to obscure the difficulty of keeping old warplanes flying. Production lines are usually shut down, and the original manufacturers of components and spare parts have long ceased production. In some cases, they are no longer in business.

From their perch in the heavens they could witness solemn oaths between the men of the steppe

January 23rd, 2023

Razib Khan describes the whirlwind of wagons that swept through Eurasia 5,000 years ago:

This eruption of warrior ferocity five thousand years ago was triggered by an economic revolution that swept across Eurasia, the advent of an unbeatable new cultural toolkit that finally harnessed the full productive potential of the cattle, sheep and goats that had long been viewed as simply mobile meat lockers in agricultural societies. Though these animals had already been domesticated by 8500 BC, it took millennia to perfect milk, cheese and wool production, and the harnessing of oxen as beasts of burden. North of the Black Sea, this revolution arrived around 3500 BC, as small groups of farmers huddling on river banks shifted from a mixed agro-pastoralist production system eking out a living cultivating wheat in an unforgivingly short growing season, to one of pure pastoral nomadism that turned over the vast grasslands around them to massive herds of animals.

Within a few generations, these people, known as the Yamnaya to archaeologists, were both grazing their cattle in the heart of Europe and driving their sheep up to the higher pastures of the Mongolian Altai uplands. This 5,000-mile distance (8,000 km) was spanned in just a few generations by the former farmers. Mobility was the first result of the switch to nomadism, as fleets of wagons began to roll across the steppe, like swarms of lumbering migratory villages, eternally bound for greener pastures. But far beyond a simple shift in aggregate economic production, many later knock-on effects were to reshape the culture of Eurasian societies, some of which continue to impact us down to the present.

Foremost, the status and power of males rose within these cultures, in tandem with the shift to nomadism. Almost all contemporary nomadic pastoralists are patrilineal and patriarchal, so identity and wealth are passed from father to son, just as with the Plains Indians. Men occupy all of the de jure political leadership positions, if not all de facto ones. This is in contrast to rooted farming cultures, which exhibit more diversity in social arrangements, from the patriarchal Eurasian river-valley civilizations to the matrilineal horticultural societies of tropical Africa and Asia. Even within India, the cultures of the wheat-based northern plains were strongly patrilineal, with wives being totally unrelated to their husbands, and always moving to the households of the men they were to marry. In contrast, in tropical Kerala far to the south many groups cultivating rice, bananas and coconuts were matrilineal, with husbands moving to the villages of their wives, and the primary male figure in some boys’ lives even routinely being their maternal uncle.

For nomads though, the switch to livestock as the primary source of wealth and status increased male clout and importance to universally high levels. Whether they are Asian Mongols or African Maasai, herder societies are dominated by male kindreds that control the movable wealth in the form of livestock, and it is their role to on the one hand protect the herds and drive them to more fertile pastures, and on the other steal animals from neighbors. In nomadic societies, paternal kin groups provided exclusively for their women and children. It was senior men in these groups that accumulated wealth and status they could pass on to sons, resulting in a very strong concern over paternity, so as to avoid investing in the offspring of men outside of their lineage. After all, these men strived for wealth and status in the first place to produce sons who would continue their legacy. And just as they were fixated on their sons, nomadic societies were also punctilious in revering the memories of their forefathers. The Bible’s older books are littered with “begats” a dozen deep, Norse sagas begin with a recitation of half a dozen steps of descent from father to son, and the earliest Indian texts are fixated on royal genealogies.

These ancient nomadic obsessions continue down to the present. The kingdom of Jordan is still ruled today by a direct paternal descendant of Hashim ibn Abd Manaf, Muhammad’s great-grandfather and the progenitor of the Banu Hashim clan to which he belonged, 1,525 years after he died. The lineage of Bodonchar Munkhag, Genghis Khan’s ancestor who founded the world conqueror’s clan two centuries before his conquests, still ruled Mongolia as late as 1920, nearly 700 years after Munkhag’s time.

But steppe patriarchy was reflected in more than just age-old customs and long-standing genealogies. It was more than an empire of ideas. Steppe patriarchy expressed itself in a material fashion. The Yamnaya nomads constructed massive burial mounds, kurgans, wherever they went. Within these vast mounds, they inhumed individuals of high status and greater power. The remains found skew heavily male. It is no surprise that just as they preferentially buried their honored male rulers under enormous mounds, these people worshiped male sky-gods. These male deities were culturally important, as from their perch in the heavens they could witness solemn oaths between the men of the steppe.

Eventually, the socialist aspects of the community faded away, and the Herberts ran a general store there

January 22nd, 2023

Frank Patrick Herbert Jr. was not born into an aristocratic family and did not receive specialized training as a mentat or Bene Gesserit witch:

His paternal grandparents had come west in 1905 to join Burley Colony in Kitsap County, one of many utopian communes springing up in Washington State beginning in the 1890s. The Burley communards printed their own currency, paid everyone an identical salary, and championed gender equality. Eventually, the socialist aspects of the community faded away, and the Herberts ran a general store there.

Herbert’s father, Frank Patrick Herbert Sr., had a varied career including operating a bus line, selling electrical equipment, and serving as a state motorcycle patrolman. His mother, Eileen Marie McCarthy, was from a large Irish family in Wisconsin. According to unsubstantiated family lore, during Prohibition, Frank Senior, Eileen, and another couple built and ran the legendary Spanish Castle Ballroom, a speakeasy and dancehall off of Highway 99 between Seattle and Tacoma.

Herbert’s childhood included camping, hunting, fishing, and digging clams. At 8 he is said to have jumped on a table and shouted, “I wanna be an author.” His parents were binge alcoholics, and young Frank often had to care for his only sibling, Patty Lou, who was 13 years younger. He had a checkered career at Tacoma’s Lincoln High School, including dropped and failed classes. But his career as a writer had already been launched. He was an enthusiastic reporter on the student newspaper. A classmate remembered him rushing into the “news shack” behind the school, shouting: “Stop the presses! I’ve got a scoop!”

In May of his senior year, he dropped out. The following summer he worked on the Tacoma Ledger as a copy and office boy, doing some actual reporting as well. In the fall he went back to school, writing feature articles and a regular column for the school paper. At 17, he sold a Western story for $27.50. He was elated, but the next two dozen stories he wrote were all rejected. In 1938, worried about his parents’ drinking and his 5-year-old sister’s safety in the unstable home, he and Patty Lou took a bus to Salem, Oregon, where they sought refuge with an aunt and uncle.

After graduation from Salem High School in 1939, Herbert moved to California and got a job at the Glendale Star as a copy editor. Barely 18, he lied about his age and smoked a briar pipe to seem older. By 1940 the 19-year-old was back with his aunt and uncle in Salem, and talked his way into an “on-call” job with the Oregon Statesman as a photographer, copy editor, feature reporter, and in the advertising department.

In the spring of 1941, he and Flora Parkinson drove three hours north to Tacoma, where they got a night court judge to marry them. Back in California, he worked once more for the Glendale Star, and in February 1942, a few months after Pearl Harbor, he registered for the draft. The next day the couple’s daughter, Penelope Eileen, was born. By July he had enlisted in the Navy and was assigned to Portsmouth, Virginia as a photographer. There, tripping over a tent tie-down, he suffered a head injury that resulted in an honorable discharge. He went back to California, where he discovered his wife and daughter had vanished. His mother-in-law in Oregon wouldn’t tell him where they were. Flora and Frank Herbert were subsequently divorced, and she was given custody of baby Penny.

From 1943 to 1945, Herbert worked as a reporter for the Oregon Journal in Portland. He was writing fiction as well. In 1945 he had his second sale, a suspense story set in Alaska that appeared in Esquire magazine and earned him $200. By August of 1945, he had moved to Seattle and was working on the night desk at the Seattle Post-Intelligencer.

How do these families keep producing such talent, generation after generation?

January 21st, 2023

Reading about Galton’s disappearance from collective memory reminded me of Scott Alexander’s piece on the secrets of the great families, which included this brief description of Galton’s great family:

Charles Darwin discovered the theory of evolution. His grandfather Erasmus Darwin also groped towards some kind of proto-evolutionary theory, made contributions in botany and pathology, and founded the influential Lunar Society of scientists. His other grandfather Josiah Wedgwood was a pottery tycoon who “pioneered direct mail, money back guarantees, self-service, free delivery, buy one get one free, and illustrated catalogues” and became “one of the wealthiest entrepreneurs of the 18th century”. Charles’ cousin Francis Galton invented the modern fields of psychometrics, meteorology, eugenics, and statistics (including standard deviation, correlation, and regression). Charles’ son Sir George Darwin, an astronomer, became president of the Royal Astronomical Society and another Royal Society fellow. Charles’ other son Leonard Darwin, became a major in the army, a Member of Parliament, President of the Royal Geography Society, and a mentor and patron to Ronald Fisher, another pioneer of modern statistics. Charles’ grandson Charles Galton Darwin invented the Darwin-Fowler method in statistics, the Darwin Curve in diffraction physics, Darwin drift in fluid dynamics, and was the director of the UK’s National Physical Laboratory (and vaguely involved in the Manhattan Project).

How, he asks, do these families keep producing such talent, generation after generation?

One obvious answer would be “privilege”. It’s not completely wrong; once the first talented individual makes a family rich and famous, it has a big leg up. And certainly once the actual talent in these families burns out, the next generation becomes semi-famous fashion designers and TV personalities and journalists, which seem like typical jobs for people who are well-connected and good at performing class, but don’t need to be amazingly bright. Sometimes they become politicians, another job which benefits from lots of name recognition.

But I’ve tried to avoid mentioning these careers, and focus on actually impressive achievements that are hard to fake. And also, none of these families except the Tagores were fantastically rich; there are thousands or millions of families richer than they are who don’t have any of their accomplishments. For example, Cornelius Vanderbilt’s many descendants are famous only for being very rich and doing rich people things very well (one of them won a yachting prize; another was an art collector; a third was Anderson Cooper).

The other obvious answer is “genetics!” I think this one is right, but there are some mysteries here that make it less of a slam dunk.

First, don’t genetics dilute quickly? You only share 6.25% of your genes with your great-great-grandfather.

[…]

The answer to the first question is really impressive assortative mating and having vast litters of children.

Take Niels Bohr. He’s a genius, but if he marries a merely does-well-at-Harvard level woman, his son will be less of a genius. But in fact he married Margrethe Nørlund. It’s not really clear how smart she was — she was described as Bohr’s “sounding-board” and “editor”, and that can hide a wide variety of different levels of contribution. But her brother was Niels Nørlund, a famous mathematician who invented the Nørlund–Rice integral and apparently got a mountain range named after him. He may have been the most mathematically gifted person in Denmark who was not himself a member of the Bohr family — so marrying his sister is a pretty big score on the “keep the family genetically good at math” front.

The Darwins were even more selective: they mostly married incestuously among themselves. Charles Darwin married his cousin Emma Wedgwood; Charles’ sister Caroline Darwin married her cousin Josiah Wedgwood III; their second cousin, Josiah Wedgwood IV, married his cousin Ethel Bowen (and became a Baron!)

When the Darwins weren’t marrying each other, they were marrying others of their same intellectual caliber. There is at least one Darwin-Huxley marriage: that would be George Pember Darwin (a computer scientist, Charles’ great-grandson) and Angela Huxley (Thomas’ great-granddaughter) in 1964. But also, Margaret Darwin (Charles’ granddaughter) married Geoffrey Keynes (John Maynard Keynes’ brother, and himself no slacker — he pioneered blood transfusion in Britain). And John Maynard and Geoffrey’s sister, Margaret Keynes, married Archibald Hill, who won the Nobel Prize in Medicine. And let’s not forget Marie Curie’s daughter marrying a Nobel Peace Prize laureate.

If you find yourself marrying John Maynard Keynes’ brother, or Niels Nørlund’s sister, or future Nobel laureates, you’re going way above the bar of “just as selective as Harvard or Oxford”. In retrospect, maybe it was stupid of me to think these people would settle so low.

But also, all these people had massive broods, or litters, or however you want to describe it. Charles Darwin had ten children (insert “Darwinian imperative” joke here); Tagore family patriarch Debendranath Tagore had fourteen.

I said before that if an IQ 150 person marries an IQ 130 person, on average their kids will have IQ 124. But I think most of these people are doing better than IQ 130. I don’t know if Charles Darwin can find someone exactly as intelligent as he is, but let’s say IQ 145. And let’s say that instead of having one kid, they have 10. Now the average kid is 129, but the smartest of ten is 147 — ie you’ve only lost three IQ points per generation. And if you’re marrying other people from very smart families — not just other very smart people — then they might have already chopped off the non-genetic portion of their intelligence and won’t regress. This is starting to look more do-able.
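
Scott's arithmetic checks out in simulation. In this sketch the regression factor (~0.6) and sibling spread (~12 IQ points) are my assumptions, chosen to reproduce the numbers he quotes:

```python
import random

# Assumed parameters, tuned to match the post's worked numbers:
REGRESSION = 0.6   # child's expected IQ reverts this fraction of the
                   # midparent's distance from the population mean of 100
SIBLING_SD = 12.0  # spread of children around that expected value

def smartest_of(parent_a: float, parent_b: float, n_kids: int,
                trials: int = 100_000) -> tuple[float, float]:
    """Return (expected child IQ, average IQ of the smartest of n_kids)."""
    midparent = (parent_a + parent_b) / 2
    child_mean = 100 + REGRESSION * (midparent - 100)
    best_sum = 0.0
    for _ in range(trials):
        best_sum += max(random.gauss(child_mean, SIBLING_SD)
                        for _ in range(n_kids))
    return child_mean, best_sum / trials

mean, best = smartest_of(150, 145, 10)
print(f"average child ~{mean:.0f}, smartest of ten ~{best:.0f}")
# -> roughly 129 and 147: only ~3 points lost per generation, as in the post
```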

[…]

One last thing, which I have no evidence for. Eliezer Yudkowsky sometimes talks about the idea of a Hero License — ie, most people don’t accomplish great things, because they don’t try to accomplish great things, because they don’t think of themselves as the kind of person who could accomplish great things.

[…]

It seems weird to think of “genius” as a career you can aim for. But maybe if your dad is Charles Darwin, you don’t just go into science. You also start making lots of big theories, speculating about lots of stuff. The fact that something is an unsolved problem doesn’t scare you; trying to solve the biggest unsolved problems is just what normal people do. Maybe if your dad founded a religion, and everyone else you know is named Somethingdranath Tagore and has accomplished amazing things, you start trying to write poetry to set the collective soul of your nation on fire.

This final chapter in the history of the planet’s mounted nomads played out in the full light of American history

January 20th, 2023

America had its own steppe nomads, Razib Khan reminds us:

On June 25–26th of 1876, at Little Bighorn in Montana, a coalition of Sioux, Cheyenne and Arapaho led by Sitting Bull and Crazy Horse defeated General George Custer. The outcome shocked the world; the Plains tribes stared down the might of the modern world and then ably dispatched it. But theirs was a Pyrrhic victory. The US government just raised more troops, and all that elan and courage was eventually no match for raw numbers. Across the cold windswept plains of the Dakotas, the Sioux and their allies had denied the American armies outright victory from the 1850’s into the 1870’s. Meanwhile, to the south, in Texas, the Comanche “Empire of the Summer Moon” had been the bane of the Spaniards, and later the Mexicans, for over a century. They first battled the Spanish Empire to a draw in the 1700’s, and continued to periodically pillage Mexico after independence in the 1820’s. Only after the region’s annexation by the US in the 1840’s did the Comanche meet their match, as they were finally defeated in 1870 by American forces. If Americans today remember the Battle of Little Bighorn and the subjugation of the Comanche, it tends to be as the denouement of decades of warfare across the vast North American prairie. But if you zoom out a little, it also marks the end of a 5,000-year saga: the rise and fall of America’s steppe nomads, for that is what all those fearsome tribes of the Plains Indians had become.

Today Americans view these wars with ambivalence, as the expansionist US, seeking its “Manifest Destiny,” conquered the doomed underdog natives of the continent with wanton brutality. But the Plains Indians were themselves a people of conquest, hardened and cruel, and would have bridled at the mantle of the underdog. They espoused an ethos exemplified by their warrior braves who wasted no pity on their enemies and expected none in return. In S.C. Gwynne’s book, Empire of the Summer Moon, he notes that during Comanche raids all “the men were killed, and any men who were captured alive were tortured; the captive women were gang raped. Babies were invariably killed.” Comanche brutality was not total; young boys and girls were captured and enslaved during these raids, but could eventually be adopted into the tribe if they survived a trial by fire: showing courage and toughness even in the face of ill-treatment as slaves. Quanah Parker, the last chief of the Comanche, was the son of a white woman who had been kidnapped when she was nine.

These tribes were warlike because the mobilization of cadres of violent young men was instrumental to the organization of their societies. They were all patrilineal and patriarchal, for though women were not chattel, tribal identity passed from the father to the son. A Sioux or Comanche was by definition the offspring of a Sioux or Comanche father. The birth of a Comanche boy warranted special congratulations for the father, reflecting the importance of sons genealogically for the line to continue. It was the sons who would grow up to feed the tribe through mass-scale horseback buffalo hunts. It was the sons who undertook daring raids and came home draped in plunder. The religion of these warriors was victory, and they stoically accepted that defeat meant death.

These mounted warrior societies of the Plains Indians were a recent product of the Columbian Exchange, forged by the same forces of globalization that birthed the hostile colonial nations hungrily encroaching ever further into their domains from both south and east. The early 1700’s had seen the adoption of horses from the Spaniards, along with the flourishing of rich colonial societies all along the continent’s rim, always ripe for raiding. Together, these catalyzed the rebirth of native nations that lived by the deeds of their predatory cavalry. The warriors of America’s prairies became such adept horsemen in a matter of generations that Comanche boys were reputed to learn riding almost before they learned to walk, echoing Roman observations about the Huns 1,500 years earlier. The introduction of Eurasian horses to their cultures transmuted the farmers and foragers of the Great Plains within a generation into fearsome centaur-like hordes that terrorized half a continent for 150 years, recapitulating the transformation wrought by their distant relatives on the Eurasian Steppe 5,000 years ago.

That this final chapter in the history of the planet’s mounted nomads played out in the full light of American history allows us to vividly imagine the lives of their prehistoric cultural forebears. Just as the Sioux and the Comanche were ruled by the passions of their fearless braves, who were driven to seek glory and everlasting fame on the battlefield, so bands of youth out of the great grassland between Hungary and Mongolia had long ago wreaked havoc on Eurasia from the Atlantic to the Pacific, and the tundra to the Indian Ocean. These feral werewolves of the steppe resculpted the cultural topography of the known world three to five thousand years ago. Their ethos was an eagerly grasping pursuit not of what was theirs by right, but of anything they could grab by might. Where the Sioux and Comanche were crushed by the organized might of a future world power, their reign soon consigned to a historical footnote, the warriors of yore marched from victory to conquest. They remade the world in their brutal image, inadvertently laying the seedbeds for gentler ages to come, when roving bands of youth were recast as the barbarian enemy beyond the gates, when peace and tranquility, not a glorious death in battle, became the highest good.

S.C. Gwynne’s Empire of the Summer Moon is excellent, by the way.

An FGC-9 with a craft-produced, ECM-rifled barrel exhibited impressive accuracy

January 19th, 2023

The FGC-9 stands out from previous 3D-printed firearms designs, in part because it was specifically designed to circumvent European gun regulations:

Thus, unlike its predecessors, the FGC-9 does not require the use of any commercially produced firearm parts. Instead, it can be produced using only unregulated commercial off-the-shelf (COTS) components. For example, instead of an industrially produced firearms barrel, the FGC-9 uses a piece of pre-hardened 16 mm O.D. hydraulic tubing. The construction files for the FGC-9 also include instructions on how to rifle the hydraulic tubing using electrochemical machining (ECM). The FGC-9 uses a hammer-fired blowback self-loading action, firing from the closed-bolt position. The gun uses a commercially available AR-15 trigger group. In the United States, these components are unregulated. In the European Union and other countries—such as Australia—the FGC-9 can also be built with a slightly modified trigger group used by ‘airsoft’ toys of the same general design. This design choice provides a robust alternative to a regulated component, but also means that the FGC-9 design only offers semi-automatic fire, unless modified. The FGC-9 Mk II files also include a printable AR-15 fire-control group, which may be what was used in this case, as airsoft and ‘gel blaster’ toys are also regulated in Western Australia.


In tests performed by ARES, an FGC-9 with a craft-produced, ECM-rifled barrel exhibited impressive accuracy: the firearm shot groups of 60 mm at 23 meters, with no signs of tumbling or unstable flight. Further, in forensic tests with FGC-9 models seized in Europe, the guns generally exhibited good durability. One example, described as not being particularly well built, was able to fire more than 2,000 rounds without a catastrophic failure—albeit with deteriorating accuracy. The cost of producing an FGC-9 can be very low, and even with a rifled barrel and the purchase of commercial components, the total price for all parts, materials, and tools to produce such a firearm is typically less than $1,000 USD. As more firearms are made, the cost per firearm decreases significantly. In a 2021 case in Finland, investigators uncovered a production facility geared up to produce multiple FGC-9 carbines. In this case, the criminal group operating the facility had purchased numerous Creality Ender 3 printers—each sold online for around $200. In recent months, complete FGC-9 firearms have been offered for sale for between approximately 1,500 and 3,500 USD (equivalent), mostly via Telegram groups.
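
For context on that accuracy figure, here's the standard conversion from group size at a distance to minutes of angle (my arithmetic, not from the ARES report):

```python
import math

def group_to_moa(group_mm: float, distance_m: float) -> float:
    """Convert a shot-group size at a given distance into minutes of angle."""
    milliradians = group_mm / distance_m           # mm per meter is exactly mrad
    return math.degrees(milliradians / 1000) * 60  # degrees -> arcminutes

print(f"{group_to_moa(60, 23):.1f} MOA")  # -> ~9.0 MOA for the 60 mm / 23 m group
```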

The result was a precociously unified and homogenous polity

January 18th, 2023

Davis Kedrosky explains how institutional reforms built the British Empire:

In 1300, few English institutions actively promoted economic growth. The vast majority of the rural population was composed of unfree peasants bonded either to feudal lords or plots of land. Urban artisans were organized in guilds that regulated who could enter trades like glassblowing, leatherwork, and blacksmithing.

The English state was in turmoil following a century of conflict between Parliament and the Crown, and though nominally strong, it was deficient in fiscal capacity and infrastructural power. The regime lacked both the will and the means to pursue national development aims: integrating domestic markets, acquiring foreign export zones, securing private property, and encouraging innovation, entrepreneurship, and investment. England resembled what has been called a “natural state,” in which violence between factions determined the character of governance. Institutions pushed the meager spoils of an impoverished land into the pockets of rentiers.

By 1800, all this had changed. Britain’s rural life was characterized by agrarian capitalism, in which tenant farmers rented land from landowners and employed free wage labor, incentivizing investment and experimentation with new crops and methods. The preceding two centuries had seen the waning of the guilds, which now served more as organizations for social networking. Elites that had mostly earned their income by collecting taxes were now engaging in commercial enterprises themselves.

The state was now better-financed than any before in history, thanks to an effective tax administration and the ability to contract a mountain of public debt at modest interest rates. This allowed Britain to fund the world’s strongest navy to defend its interests from New York to Calcutta. The British government also intervened frequently in economic life, from enclosure acts to estate bills, and had limited its absolutist and rentier tendencies through the establishment of a strong parliament and professional bureaucracy.

Mark Koyama called the five centuries of institutional evolution the “long transition from a natural state to a liberal economic order.” The state capacity Britain built up during this early modern period went side by side with its emergence as a major commercial power and, within a few years, the first nation to endogenously achieve modern economic growth. Twenty-first-century economists increasingly deem institutions an “ultimate cause” of industrial development. The differences between North and South Korea, for example, are not the result of geographical disparities or long-standing cultural cleavages on either side of the 38th parallel. While it’s not exactly clear which kinds of institutions cause growth, it’s pretty obvious that some sorts inhibit it, if not stifle it altogether. The story of Britain’s rise to global power, then, is also the story of a 500-year-long transformation that saw institutional changes to law, property ownership, the organization of labor, and eventually the makeup of the British elite itself.

In his 1982 book The Rise and Decline of Nations, Mancur Olson argued that societies are engulfed in a perpetual struggle between producers and rent-seekers. The former invent and start businesses, increasing the national income; the latter try to profit off of the producers’ hard work by lobbying for special privileges like monopolies and tax farms. In contrast to Douglass North, who emphasized the importance of secure property rights for economic growth, Olson distinguished between good and bad forms. Bad property rights entitled a specific group to subsidies or protections that imposed costs on consumers and inhibited growth—like, say, a local monopoly on woolen cloth weaving allowing a guild to suppress machinery in favor of labor-intensive hand labor, lowering productivity and output.

Backed by its elite commercial and landed classes, the English and eventually British state came to favor the removal of the barriers to growth that had plagued most pre-modern economies. “Peace and easy taxes,” contra Smith, isn’t a sufficient condition for endogenous development, but its inverse—domestic chaos and rent-seeking—may be sufficient for its absence. But Britain’s real achievement was that its elite class, over time, began to align themselves with market liberalization. In France, by contrast, the nobility and king were constantly at odds, and the monarchy actually supported strong peasant tenures in opposition to large landowners. The pre-1914 Russian Empire would do the same thing.

Applying Olson’s framework to the seventeenth century, what we see is a decline of “rent-seeking distributional coalitions” like guilds, which helps to explain England’s “invention” of modern economic growth. “The success of the British experiment,” write the economists Joel Mokyr and John Nye,

was the result of the emergence of a progressive oligarchic regime that divided the surpluses generated by the new economy between the large landholders and the newly rising businessmen, and that tied both groups to a centralized government structure that promoted uniform rules and regulations at the expense of inefficient relics of an economic ancien régime.

Mokyr and Nye theorize that the state’s demand for revenues led it to strike a bargain with mercantile elites: if you pay taxes, you can use our ships and guns. This was the basis of a grand alliance between “Big Land” and “Big Commerce” who used the government as a broom to sweep away local interests. It manifested in projects like the Virginia Company, whose investors involved both the nobility and mercantile venture capitalists.

Parliament was the instrument for fulfilling the pact, issuing a raft of legislation altering local property rights to open up markets throughout the 1700s. Estate acts, for example, allowed landowners to improve, sell, and lease their plots. Statutory authorities permitted private organizations to set up turnpikes and canals, helping to unify the English market. This allowed firms to increase production, exploit economies of scale, and compete with local artisans. Enclosure acts, meanwhile, provided for the transformation of open-field farming communities, in which decisions were made at the village level, into fully private property.

The origins of this process, however, are deeper than Mokyr and Nye suggest. The development of a national state began soon after the Norman invasion of 1066. William the Conqueror replaced the Anglo-Saxon aristocracy with a Norman one, redistributing the country’s lands to his soldiers and generating a mostly uniform feudal society. The result was a precociously unified and homogenous polity—as opposed to France, which grew by absorbing linguistically distinct territories. English kings who were seeking to fund domestic or military projects called councils with individuals, usually the great barons of the nobility, whose cooperation and money they needed. With the waxing of the late medieval “commercial revolution,” they eventually included representatives of the ports, merchants, and Jewish financiers. Kings would make “contracts” with these factions—often customary restrictions on arbitrary taxation or the granting of other privileges—in exchange for resources. These councils later became Parliament.

The salaries of airmen in the US and UK depended on understanding that strategic bombing could work, would work, and would be a war winner

January 17th, 2023

Strategic airpower aims to win the war on its own, Bret Devereaux explains:

Aircraft cannot generally hold ground, administer territory, build trust, establish institutions, or consolidate gains, so using airpower rapidly becomes a question of ‘what to bomb’ because delivering firepower is what those aircraft can do.

[…]

Like many theorists at the time, Douhet was thinking about how to avoid a repeat of the trench stalemate, which as you may recall was particularly bad for Italy. For Douhet, there was a geometry to this problem; land warfare was two dimensional and thus it was possible to simply block armies. But aircraft – specifically bombers – could move in three dimensions; the sky was not merely larger than the land but massively so as a product of the square-cube law. To stop a bomber, the enemy must find the bomber and in such an enormous space finding the bomber would be next to impossible, especially as flight ceilings increased. In Britain, Stanley Baldwin summed up this vision by famously quipping, “no power on earth can protect the man in the street from being bombed. Whatever people may tell him, the bomber will always get through.” And technology seemed to be moving this way as the possibility of long-range aircraft carrying heavy loads at high altitudes became more and more a reality in the 1920s and early 1930s.

Consequently, Douhet assumed there could be no effective defense against fleets of bombers (and thus little point in investing in air defenses or fighters to stop them). Rather than wasting time on the heavily entrenched front lines, stuck in the stalemate, they could fly over the stalemate to attack the enemy directly. In this case, Douhet imagined these bombers would target (with a mix of explosive, incendiary and poison gas munitions) the “peacetime industrial and commercial establishment; important buildings, private and public; transportation arteries and centers; and certain designated areas of civilian population.” This onslaught would in turn be so severe that the populace would force its government to make peace to make the bombing stop. Douhet went so far as to predict (in 1928) that just 300 tons of bombs dropped on civilian centers could end a war in a month; in The War of 19– he offered a scenario in which, in a renewed war between Germany and France, the latter surrendered under bombing pressure before it could even mobilize. Douhet imagined this, somewhat counterintuitively, as a more humane form of war: while the entire effort would be aimed at butchering as many civilians as possible, he thought doing so would end wars quickly and thus result in less death.

Clever ideas to save lives by killing more people are surprisingly common and unsurprisingly rarely turn out to work.

Now before we move forward, I think we want to unpack that vision just a bit, because there are actually quite a few assumptions there. First, Douhet is assuming that there will be no way to locate or intercept the bombers in the vastness of the sky, that they will be able to accurately navigate to and strike their targets (which are, in the event, major cities) and be able to carry sufficient explosive payloads to destroy those targets. But the largest assumption of all is that the application of explosives to cities would lead to collapsing civilian morale and peace; it was a wholly untested assumption, which was about to become an extremely well-tested assumption. But for Douhet’s theory to work, all of those assumptions in the chain – lack of interception, effective delivery of munitions, sufficient munitions to deliver and bombing triggering morale collapse – needed to be true. In the event, none of them were.

What Douhet couldn’t have known was that one of those assumptions would already be in the process of collapsing before the next major war. The British Tizard Commission tested the first Radio Detection and Finding device successfully in 1935, what we tend to now call radar (for RAdio Detection And Ranging). Douhet had assumed the only way to actually find those bombers would be the venerable Mk. 1 Eyeball and indeed they made doing so a formidable task (the Mk. 1 Ear was actually a more useful device in many cases). But radar changed the game, allowing the detection of flying objects at much greater range and with a fair degree of precision. The British started planning and building a complete network of radar stations covering the coastline in 1936, what would become the ‘Chain Home’ system. The bomber was no longer untrackable.

That was in turn matched by changes in the design of the bomber’s great enemy, fighters. Douhet had assumed big, powerful bombers could not only be undetected, but would fly at altitudes and speeds which would render them difficult to intercept. Fighter designs, however, advanced just as fast. First flown in 1935, the Hawker Hurricane could fly at 340mph and up to 36,000 feet, plenty fast and high enough to catch the bombers of the day. The German Bf 109, deployed in 1937 (the same year the Hurricane saw widespread deployment) was actually a touch faster and could make it to 39,000 feet. If the bomber could be found, it could absolutely be engaged by such planes and those fighters, being faster and more maneuverable, could absolutely shoot the bomber down. Indeed, when it came to it over Britain and Germany, bombers proved to be horribly vulnerable to fighters if they weren’t well escorted by their own long-range fighters.

Cracks were thus already appearing in Douhet’s vision of wars won entirely through the air. But the question had already become tied up in institutional rivalries in quite a few countries, particularly Britain and the United States. After all, if future wars would be won by the air, that implied that military spending – a scarce and shrinking commodity in the interwar years – ought to be channeled away from ground or naval forces and towards fledgling air forces like the Royal Air Force (RAF) or the US Army Air Corps (soon to be the US Army Air Forces, then to be the US Air Force), either to fund massive fleets of bombers or fancy new fighters to intercept massive fleets of bombers or, ideally, both. Just as importantly, if airpower could achieve independent strategic effects, it made no sense to tie the air arm to the ground by making it a subordinate part of a country’s army; the generals would always prioritize the ground war. Consequently, strategic airpower, as distinct from any other kind of airpower, became the crucial argument for both the funding and independence of a country’s air arm. That matters of course because, while we are discussing strategic airpower here, it is not – as you will recall from above – the only kind. But it was the only kind which could justify a fully independent Air Force.

Upton Sinclair once quipped that, “It is difficult to get a man to understand something, when his salary depends on him not understanding it.” Increasingly the salaries of airmen in the United States and Britain depended on understanding that strategic bombing – again, distinct from other forms of airpower – could work, would work and would be a war winner.

I’ve mentioned this question of Why do we have an Air Force? before.

Public choice theory is even more useful in understanding foreign policy

January 16th, 2023

Public choice theory was developed to understand domestic politics, but Richard Hanania argues — in Public Choice Theory and the Illusion of Grand Strategy — that public choice is actually even more useful in understanding foreign policy:

First, national defence is “the quintessential public good” in that the taxpayers who pay for “national security” compose a diffuse interest group, while those who profit from it form concentrated interests. This calls into question the assumption that American national security is directly proportional to its military spending (America spends more on defence than most of the rest of the world combined).

Second, the public is ignorant of foreign affairs, so those who control the flow of information have excess influence. Even politicians and bureaucrats are ignorant: for example, most(!) counterterrorism officials, including the chief of the FBI’s national security branch and a seven-term congressman then serving as vice chairman of a House intelligence subcommittee, did not know the difference between Sunnis and Shiites. The same favoured interests exert influence at all levels of society, including at the top: for example, intelligence agencies are discounted if they contradict what leaders think they know through personal contacts and publicly available material, as was the case in the run-up to the Iraq War.

Third, unlike policy areas like education, it is legitimate for governments to declare certain foreign affairs information to be classified, i.e. the public has no right to know. Top officials leaking classified information to the press is normal practice, so they can be extremely selective in manipulating public knowledge.

Fourth, it’s difficult to know who possesses genuine expertise, so foreign policy discourse is prone to capture by special interests. History runs only once — cause and effect in foreign policy are hard to generalise into measurable forecasts; as demonstrated by Tetlock’s superforecasters, geopolitical experts are worse than informed laymen at predicting world events. Unlike those who have fought the tobacco companies that denied the harms of smoking, or the oil companies that denied global warming, the opponents of interventionists may never be able to muster evidence clear enough to win against those in power and the special interests backing them.

Hanania’s special interest groups are the usual suspects: government contractors (weapons manufacturers [1]), the national security establishment (the Pentagon [2]), and foreign governments [3] (not limited to electoral intervention).

What doesn’t have comparable influence, he argues, is business interests more broadly, whatever IR theorists claim. Unlike weapons manufacturers, other business interests have to overcome the collective action problem, especially when some businesses benefit from protectionism.

None of the precursors were in place

January 15th, 2023

Once you understand how the Industrial Revolution came about, it’s easy to see why there was no Roman Industrial Revolution — none of the precursors were in place:

The Romans made some use of mineral coal as a heating element or fuel, but it was decidedly secondary to their use of wood and, where necessary, charcoal. The Romans used rotational energy via watermills to mill grain, but not to spin thread. Even if they had the spinning wheel (and they didn’t; they’re still spinning with drop spindles), the standard loom of the Mediterranean period, the warp-weighted loom, was roughly an order of magnitude less efficient than the flying shuttle loom, so the Roman economy couldn’t have handled all of the thread the spinning wheel could produce.

And of course the Romans had put functionally no effort into figuring out how to make efficient pressure-cylinders, because they had absolutely no use for them. Remember that by the time Newcomen is designing his steam engine, the kings and parliaments of Europe have been effectively obsessed with who could build the best pressure-cylinder (and then plug it at one end, making a cannon) for three centuries because success in war depended in part on having the best cannon. If you had given the Romans the designs for a Newcomen steam engine, they couldn’t have built it without developing whole new technologies for the purpose (or casting every part in bronze, which introduces its own problems) and then wouldn’t have had any profitable use to put it to.

All of which is why simple graphs of things like ‘global historical GDP’ can be a bit deceptive: there’s a lot of particularity beneath the basic statistics of production because technologies are contingent and path dependent.

The Industrial Revolution happened largely in one place

January 14th, 2023

The Industrial Revolution was more than simply an increase in economic production, Bret Devereaux explains:

Modest increases in economic production are, after all, possible in agrarian economies. Instead, the industrial revolution was about accessing entirely new sources of energy for broad use in the economy, thus drastically increasing the amount of power available for human use. The industrial revolution thus represents not merely a change in quantity, but a change in kind from what we might call an ‘organic’ economy to a ‘mineral’ economy. Consequently, I’d argue, the industrial revolution represents probably just the second time in human history that as a species we’ve undergone a radical change in our production; the first being the development of agriculture in the Neolithic period.

However, unlike farming, which developed independently in many places at different times, the industrial revolution happened largely in one place, once, and then spread out from there, largely because the world of the 1700s AD was much more interconnected than the world of c. 12,000 BP (‘before present,’ a marker we sometimes use for the very deep past). Consequently, while we have many examples of the emergence of farming and from there the development of complex agrarian economies, we really only have one ‘pristine’ example of an industrial revolution. It’s possible that it could have occurred with different technologies and resources, though I have to admit I haven’t seen a plausible alternative development that doesn’t just take the same technologies and systems and put them somewhere else.

[…]

Fundamentally this is a story about coal, steam engines, textile manufacture and above all the harnessing of a new source of energy in the economy. That’s not the whole story, by any means, but it is one of the most important through-lines and will serve to demonstrate the point.

The specificity matters here because each innovation in the chain required not merely the discovery of the principle, but also the design and an economically viable use-case to all line up in order to have impact.

[…]

So what was needed was not merely the idea of using steam, but also a design which could actually function in a specific use case. In practice that meant both a design that was far more efficient (though still wildly inefficient) and a use case that could tolerate the inevitable inadequacies of the 1.0 version of the device. The first design to actually square this circle was Thomas Newcomen’s atmospheric steam engine (1712).

[…]

Now that design would be iterated on subsequently to produce smoother, more powerful and more efficient engines, but for that iteration to happen someone needs to be using it, meaning there needs to be a use-case for repetitive motion at modest-but-significant power in an environment where fuel is extremely cheap, so that the inefficiency of the engine doesn’t make it a worse option than simply having a whole bunch of burly fellows (or draft animals) do the job. As we’ll see, this was a use-case that didn’t really exist in the ancient world and indeed existed almost nowhere but Britain even in the period when it worked.

But fortunately for Newcomen the use case did exist at that moment: pumping water out of coal mines. Of course a mine that runs below the local water-table (as most do) is going to naturally fill with water which has to be pumped out to enable further mining. Traditionally this was done with muscle power, but as mines get deeper the power needed to pump out the water increases (because you need enough power to lift all of the water in the pump system in each movement); cheaper and more effective pumping mechanisms were thus very desirable for mining. But the incentive here can’t just be any sort of mining, it has to be coal mining because of the inefficiency problem: coal (a fuel you can run the engine on) is of course going to be very cheap and abundant directly above the mine where it is being produced and for the atmospheric engine to make sense as an investment the fuel must be very cheap indeed. It would not have made economic sense to use an atmospheric steam engine over simply adding more muscle if you were mining, say, iron or gold and had to ship the fuel in; transportation costs for bulk goods in the pre-railroad world were high. And of course trying to run your atmospheric engine off of local timber would only work for a very little while before the trees you needed were quite far away.

But that in turn requires you to have large coal mines, mining lots of coal deep underground. Which in turn demands that your society has some sort of bulk use for coal. But just as the Newcomen Engine needed to out-compete ‘more muscle’ to get a foothold, coal has its own competitors: wood and charcoal. There is scattered evidence for limited use of coal as a fuel from the ancient period in many places in the world, but there needs to be a lot of demand to push mines deep enough to create the need for pumping. In this regard, the situation on Great Britain (the island, specifically) was almost ideal: most of Great Britain’s forests seem to have been cleared for agriculture in antiquity; by 1000 AD only about 15% of England (as a geographic sub-unit of the island) was forested, a figure which continued to decline rapidly in the centuries that followed (down to a low of around 5%). Consequently wood as a heat fuel was scarce, and so beginning in the 16th century we see a marked shift over to coal as a heating fuel for things like cooking and home heating. Fortunately for the residents of Great Britain there were surface coal seams in abundance, making the transition relatively easy; once these were exhausted, deep mining followed, which at last by the late 1600s created the demand for coal-powered pumps that Newcomen finally answered effectively in 1712: a demand for engines to power pumps in an environment where fuel efficiency mattered little.

With a use-case in place, these early steam engines continue to be refined to make them more powerful, more fuel efficient and capable of producing smooth rotational motion out of their initially jerky reciprocal motions, culminating in James Watt’s steam engine in 1776. But so far all we’ve done is gotten very good at pumping out coal mines – that has in turn created steam engines that are now fuel efficient enough to be set up in places that are not coal mines, but we still need something for those engines to do to encourage further development. In particular we need a part of the economy where getting a lot of rotational motion is the major production bottleneck.
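
The excerpt skips the arithmetic on pumping, but it’s worth a quick back-of-the-envelope check of my own (the inflow figure below is an illustrative assumption, not Devereaux’s): the minimum power needed to lift water scales linearly with mine depth, so every extra fathom of digging directly raises the continuous power bill, and muscle gets priced out fast.

```python
# Back-of-the-envelope: minimum power to pump water out of a mine.
# P = rho * g * h * Q (hydrostatic lifting power; friction losses ignored).
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def pumping_power_watts(depth_m: float, flow_m3_per_s: float) -> float:
    """Power needed to lift a steady inflow of water from depth_m to the surface."""
    return RHO_WATER * G * depth_m * flow_m3_per_s

FLOW = 0.005  # assumed inflow of 5 litres/second -- an illustrative figure
for depth in (10, 30, 50, 100):
    watts = pumping_power_watts(depth, FLOW)
    print(f"{depth:>4} m deep: {watts:6.0f} W (~{watts / 745.7:.1f} hp, continuously)")
```

A draft horse can sustain roughly one horsepower, so even this modest assumed inflow at 100 m would keep a team of horses working in shifts around the clock, and real inflows were often larger. That is exactly the niche where a wildly inefficient engine sitting on top of nearly free fuel wins.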

What could be a more interesting question?

January 13th, 2023

There are people who are really trying to either kill or at least studiously ignore all of the progress in genomics, Stephen Hsu reports — from first-hand experience:

My research group solved height as a phenotype. Give us the DNA of an individual with no information other than that this person lived in a decent environment—wasn’t starved as a child or anything like that—and we can predict that person’s height with a standard error of a few centimeters. Just from the DNA. That’s a tour de force.

Then you might say, “Well, gee, I heard that in twin studies, the correlation between twins in IQ is almost as high as their correlation in height. I read it in some book in my psychology class 20 years ago before the textbooks were rewritten. Why can’t you guys predict someone’s IQ score based on their DNA alone?”

Well, according to all the mathematical modeling and simulations we’ve done, we need somewhat more training data to build the machine learning algorithms to do that. But it’s not impossible. In fact, we predicted that if you have about a million genomes and the cognitive scores of those million people, you could build a predictor with a standard error of plus or minus 10 IQ points. So you can ask, “Well, since you guys showed you could do it for height, and since there are 30 or 40 or 50 different disease conditions that we now have decent genetic predictors for, why isn’t there one for IQ?”

Well, the answer is there’s zero funding. There’s no NIH, NSF, or any agency that would take on a proposal saying, “Give me X million dollars to genotype these people, and also measure their cognitive ability or get them to report their SAT scores to me.” Zero funding for that. And some people get very, very aggressive upon learning that you’re interested in that kind of thing, and will start calling you a racist, or they’ll start attacking you. And I’m not making this up, because it actually happened to me.

What could be a more interesting question? Wow, the human brain—that’s what differentiates us from the rest of the animal species on this planet. Well, to what extent is brain development controlled by DNA? Wouldn’t it be amazing if you could actually predict individual variation in intelligence from DNA just as we can with height now? Shouldn’t that be a high priority for scientific discovery? Isn’t this important for aging, because so many people undergo cognitive decline as they age? There are many, many reasons why this subject should be studied. But there’s effectively zero funding for it.
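
A technical aside of my own, not from the interview: the height predictor Hsu describes was, as I understand his group’s papers, built with sparse (L1-penalized) regression over hundreds of thousands of genotyped individuals, in the spirit of compressed sensing. Here is a toy sketch of that kind of pipeline on simulated data; every size and hyperparameter below is a made-up illustration, orders of magnitude smaller than the real thing.

```python
# Toy polygenic prediction via sparse (L1-penalized) regression on simulated data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_snps, n_causal = 2000, 5000, 50  # toy sizes; real studies use ~10^5-10^6

# Genotypes coded 0/1/2 (copies of the minor allele), then standardized per SNP.
X = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Simulate a sparse additive trait: a few causal SNPs plus environmental noise,
# scaled so genetics explains about half the variance.
beta = np.zeros(n_snps)
causal = rng.choice(n_snps, size=n_causal, replace=False)
beta[causal] = rng.normal(0.0, 1.0, size=n_causal)
y = X @ beta + rng.normal(0.0, np.linalg.norm(beta), size=n_people)
y = (y - y.mean()) / y.std()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Lasso(alpha=0.05, max_iter=10_000).fit(X_tr, y_tr)  # L1 penalty -> sparse weights
pred = model.predict(X_te)
print("out-of-sample correlation:", round(float(np.corrcoef(pred, y_te)[0, 1]), 3))
print("SNPs given nonzero weight:", int((model.coef_ != 0).sum()))
```

The L1 penalty is the design choice that makes this tractable when SNPs vastly outnumber people: it zeroes out most weights and keeps an approximately sparse set of variants. And, if I’m reading Hsu’s argument right, it is also why predictor quality jumps sharply once the sample size crosses a trait-dependent threshold; his “million genomes” figure for IQ is exactly that kind of estimate.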