This final chapter in the history of the planet’s mounted nomads played out in the full light of American history

January 20th, 2023

America had its own steppe nomads, Razib Khan reminds us:

On June 25–26th of 1876, at Little Bighorn in Montana, a coalition of Sioux, Cheyenne and Arapaho led by Sitting Bull and Crazy Horse defeated General George Custer. The outcome shocked the world; the Plains tribes stared down the might of the modern world and then ably dispatched it. But theirs was a Pyrrhic victory. The US government just raised more troops, and all that elan and courage was eventually no match for raw numbers. Across the cold windswept plains of the Dakotas, the Sioux and their allies had denied the American armies outright victory from the 1850’s into the 1870’s. Meanwhile, to the south, in Texas, the Comanche “Empire of the Summer Moon” had been the bane of the Spaniards, and later the Mexicans, for over a century. They first battled the Spanish Empire to a draw in the 1700’s, and continued to periodically pillage Mexico after independence in the 1820’s. Only after the region’s annexation by the US in the 1840’s did the Comanche meet their match, as they were finally defeated in the 1870’s by American forces. If Americans today remember the Battle of Little Bighorn and the subjugation of the Comanche, it tends to be as the denouement of decades of warfare across the vast North American prairie. But if you zoom out a little, it also marks the end of a 5,000-year saga: the rise and fall of America’s steppe nomads, for that is what all those fearsome tribes of the Plains Indians had become.

Today Americans view these wars with ambivalence, as the expansionist US, seeking its “Manifest Destiny,” conquered the doomed underdog natives of the continent with wanton brutality. But the Plains Indians were themselves a people of conquest, hardened and cruel, and would have bridled at the mantle of the underdog. They espoused an ethos exemplified by their warrior braves who wasted no pity on their enemies and expected none in return. In S.C. Gwynne’s book, Empire of the Summer Moon, he notes that during Comanche raids all “the men were killed, and any men who were captured alive were tortured; the captive women were gang raped. Babies were invariably killed.” Comanche brutality was not total; young boys and girls were captured and enslaved during these raids, but could eventually be adopted into the tribe if they survived a trial by fire: showing courage and toughness even in the face of ill-treatment as slaves. Quanah Parker, the last chief of the Comanche, was the son of a white woman who had been kidnapped when she was nine.

These tribes were warlike because the mobilization of cadres of violent young men was instrumental to the organization of their societies. They were all patrilineal and patriarchal, for though women were not chattel, tribal identity passed from the father to the son. A Sioux or Comanche was by definition the offspring of a Sioux or Comanche father. The birth of a Comanche boy warranted special congratulations for the father, reflecting the importance of sons genealogically for the line to continue. It was the sons who would grow up to feed the tribe through mass-scale horseback buffalo hunts. It was the sons who undertook daring raids and came home draped in plunder. The religion of these warriors was victory, and they stoically accepted that defeat meant death.

These mounted warrior societies of the Plains Indians were a recent product of the Columbian Exchange, forged by the same forces of globalization that birthed the hostile colonial nations hungrily encroaching ever further into their domains from both south and east. The early 1700’s had seen the adoption of horses from the Spaniards, along with the flourishing of rich colonial societies all along the continent’s rim, always ripe for raiding. Together, these catalyzed the rebirth of native nations that lived by the deeds of their predatory cavalry. The warriors of America’s prairies became such adept horsemen in a matter of generations that Comanche boys were reputed to learn riding almost before they learned to walk, echoing Roman observations about the Huns 1,500 years earlier. The introduction of Eurasian horses to their cultures transmuted the farmers and foragers of the Great Plains within a generation into fearsome centaur-like hordes that terrorized half a continent for 150 years, recapitulating the transformation wrought by their distant relatives on the Eurasian Steppe 5,000 years ago.

That this final chapter in the history of the planet’s mounted nomads played out in the full light of American history allows us to vividly imagine the lives of their prehistoric cultural forebears. Just as the Sioux and the Comanche were ruled by the passions of their fearless braves, who were driven to seek glory and everlasting fame on the battlefield, so bands of youth out of the great grassland between Hungary and Mongolia had long ago wreaked havoc on Eurasia from the Atlantic to the Pacific, and the tundra to the Indian Ocean. These feral werewolves of the steppe resculpted the cultural topography of the known world three to five thousand years ago. Their ethos was an eagerly grasping pursuit not of what was theirs by right, but of anything they could grab by might. Where the Sioux and Comanche were crushed by the organized might of a future world power, their reign soon consigned to a historical footnote, the warriors of yore marched from victory to conquest. They remade the world in their brutal image, inadvertently laying the seedbeds for gentler ages to come, when roving bands of youth were recast as the barbarian enemy beyond the gates, when peace and tranquility, not a glorious death in battle, became the highest good.

S.C. Gwynne’s Empire of the Summer Moon is excellent, by the way.

An FGC-9 with a craft-produced, ECM-rifled barrel exhibited impressive accuracy

January 19th, 2023

The FGC-9 stands out from previous 3D-printed firearms designs, in part because it was specifically designed to circumvent European gun regulations:

Thus, unlike its predecessors, the FGC-9 does not require the use of any commercially produced firearm parts. Instead, it can be produced using only unregulated commercial off-the-shelf (COTS) components. For example, instead of an industrially produced firearms barrel, the FGC-9 uses a piece of pre-hardened 16 mm O.D. hydraulic tubing. The construction files for the FGC-9 also include instructions on how to rifle the hydraulic tubing using electrochemical machining (ECM). The FGC-9 uses a hammer-fired blowback self-loading action, firing from the closed-bolt position. The gun uses a commercially available AR-15 trigger group. In the United States, these components are unregulated. In the European Union and other countries—such as Australia—the FGC-9 can also be built with a slightly modified trigger group used by ‘airsoft’ toys of the same general design. This design choice provides a robust alternative to a regulated component, but also means that the FGC-9 design only offers semi-automatic fire, unless modified. The FGC-9 Mk II files also include a printable AR-15 fire-control group, which may be what was used in this case, as airsoft and ‘gel blaster’ toys are also regulated in Western Australia.


In tests performed by ARES, an FGC-9 with a craft-produced, ECM-rifled barrel exhibited impressive accuracy: the firearm shot groups of 60 mm at 23 meters, with no signs of tumbling or unstable flight. Further, in forensic tests with FGC-9 models seized in Europe, the guns generally exhibited good durability. One example, described as not being particularly well built, was able to fire more than 2,000 rounds without a catastrophic failure—albeit with deteriorating accuracy. The cost of producing an FGC-9 can be very low, and even with a rifled barrel and the purchase of commercial components, the total price for all parts, materials, and tools to produce such a firearm is typically less than $1,000 USD. As more firearms are made, the cost per firearm decreases significantly. In a 2021 case in Finland, investigators uncovered a production facility geared up to produce multiple FGC-9 carbines. In this case, the criminal group operating the facility had purchased numerous Creality Ender 3 printers—each sold online for around $200. In recent months, complete FGC-9 firearms have been offered for sale for between approximately 1,500 and 3,500 USD (equivalent), mostly via Telegram groups.

The result was a precociously unified and homogenous polity

January 18th, 2023

Davis Kedrosky explains how institutional reforms built the British Empire:

In 1300, few English institutions actively promoted economic growth. The vast majority of the rural population was composed of unfree peasants bonded either to feudal lords or plots of land. Urban artisans were organized in guilds that regulated who could enter trades like glassblowing, leatherwork, and blacksmithing.

The English state was in turmoil following a century of conflict between Parliament and the Crown, and though nominally strong, it was deficient in fiscal capacity and infrastructural power. The regime lacked both the will and the means to pursue national development aims: integrating domestic markets, acquiring foreign export zones, securing private property, and encouraging innovation, entrepreneurship, and investment. England resembled what has been called a “natural state,” in which violence between factions determined the character of governance. Institutions pushed the meager spoils of an impoverished land into the pockets of rentiers.

By 1800, all this had changed. Britain’s rural life was characterized by agrarian capitalism, in which tenant farmers rented land from landowners and employed free wage labor, incentivizing investment and experimentation with new crops and methods. The preceding two centuries had seen the waning of the guilds, which now served more as organizations for social networking. Elites that had mostly earned their income by collecting taxes were now engaging in commercial enterprises themselves.

The state was now better-financed than any before in history, thanks to an effective tax administration and the ability to contract a mountain of public debt at modest interest rates. This allowed Britain to fund the world’s strongest navy to defend its interests from New York to Calcutta. The British government also intervened frequently in economic life, from enclosure acts to estate bills, and had limited its absolutist and rentier tendencies through the establishment of a strong parliament and professional bureaucracy.

Mark Koyama called the five centuries of institutional evolution the “long transition from a natural state to a liberal economic order.” The state capacity Britain built up during this early modern period went side by side with its emergence as a major commercial power and, within a few years, the first nation to endogenously achieve modern economic growth. Twenty-first-century economists increasingly deem institutions an “ultimate cause” of industrial development. The differences between North and South Korea, for example, are not the result of geographical disparities or long-standing cultural cleavages on either side of the 38th parallel. While it’s not exactly clear which kinds of institutions cause growth, it’s pretty obvious that some sorts inhibit it, if not stifle it altogether. The story of Britain’s rise to global power, then, is also the story of a 500-year-long transformation that saw institutional changes to law, property ownership, the organization of labor, and eventually the makeup of the British elite itself.

In his 1982 book The Rise and Decline of Nations, Mancur Olson argued that societies are engulfed in a perpetual struggle between producers and rent-seekers. The former invent and start businesses, increasing the national income; the latter try to profit off of the producers’ hard work by lobbying for special privileges like monopolies and tax farms. In contrast to Douglass North, who emphasized the importance of secure property rights for economic growth, Olson distinguished between good and bad forms. Bad property rights entitled a specific group to subsidies or protections that imposed costs on consumers and inhibited growth—like, say, a local monopoly on woolen cloth weaving allowing a guild to suppress machinery in favor of labor-intensive hand labor, lowering productivity and output.

Backed by its elite commercial and landed classes, the English and eventually British state came to favor the removal of the barriers to growth that had plagued most pre-modern economies. “Peace and easy taxes,” contra Smith, isn’t a sufficient condition for endogenous development, but its inverse—domestic chaos and rent-seeking—may be sufficient for its absence. But Britain’s real achievement was that its elite class, over time, began to align themselves with market liberalization. In France, by contrast, the nobility and king were constantly at odds, and the monarchy actually supported strong peasant tenures in opposition to large landowners. The pre-1914 Russian Empire would do the same thing.

Applying Olson’s framework to the seventeenth century, what we see is a decline of “rent-seeking distributional coalitions” like guilds, which helps to explain England’s “invention” of modern economic growth. “The success of the British experiment,” write the economists Joel Mokyr and John Nye,

was the result of the emergence of a progressive oligarchic regime that divided the surpluses generated by the new economy between the large landholders and the newly rising businessmen, and that tied both groups to a centralized government structure that promoted uniform rules and regulations at the expense of inefficient relics of an economic ancien régime.

Mokyr and Nye theorize that the state’s demand for revenues led it to strike a bargain with mercantile elites: if you pay taxes, you can use our ships and guns. This was the basis of a grand alliance between “Big Land” and “Big Commerce” who used the government as a broom to sweep away local interests. It manifested in projects like the Virginia Company, whose investors involved both the nobility and mercantile venture capitalists.

Parliament was the instrument for fulfilling the pact, issuing a raft of legislation altering local property rights to open up markets throughout the 1700s. Estate acts, for example, allowed landowners to improve, sell, and lease their plots. Statutory authorities permitted private organizations to set up turnpikes and canals, helping to unify the English market. This allowed firms to increase production, exploit economies of scale, and compete with local artisans. Enclosure acts, meanwhile, provided for the transformation of open-field farming communities, in which decisions were made at the village level, into fully private property.

The origins of this process, however, are deeper than Mokyr and Nye suggest. The development of a national state began soon after the Norman invasion of 1066. William the Conqueror replaced the Anglo-Saxon aristocracy with a Norman one, redistributing the country’s lands to his soldiers and generating a mostly uniform feudal society. The result was a precociously unified and homogenous polity—as opposed to France, which grew by absorbing linguistically distinct territories. English kings who were seeking to fund domestic or military projects called councils with individuals, usually the great barons of the nobility, whose cooperation and money they needed. With the waxing of the late medieval “commercial revolution,” they eventually included representatives of the ports, merchants, and Jewish financiers. Kings would make “contracts” with these factions—often customary restrictions on arbitrary taxation or the granting of other privileges—in exchange for resources. These councils later became Parliament.

The salaries of airmen in the US and UK depended on understanding that strategic bombing could work, would work, and would be a war winner

January 17th, 2023

Strategic airpower aims to win the war on its own, Bret Devereaux explains:

Aircraft cannot generally hold ground, administer territory, build trust, establish institutions, or consolidate gains, so using airpower rapidly becomes a question of ‘what to bomb’ because delivering firepower is what those aircraft can do.

[…]

Like many theorists at the time, Douhet was thinking about how to avoid a repeat of the trench stalemate, which as you may recall was particularly bad for Italy. For Douhet, there was a geometry to this problem; land warfare was two dimensional and thus it was possible to simply block armies. But aircraft – specifically bombers – could move in three dimensions; the sky was not merely larger than the land but massively so as a product of the square-cube law. To stop a bomber, the enemy must find the bomber and in such an enormous space finding the bomber would be next to impossible, especially as flight ceilings increased. In Britain, Stanley Baldwin summed up this vision by famously quipping, “no power on earth can protect the man in the street from being bombed. Whatever people may tell him, the bomber will always get through.” And technology seemed to be moving this way as the possibility for long-range aircraft carrying heavy loads at high altitudes became more and more a reality in the 1920s and early 1930s.
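
To get a feel for the geometry Douhet was relying on, here is a toy calculation of my own (every number in it is an invented, merely plausible assumption, not a historical figure) comparing the airspace a defender must watch with what a single patrolling fighter can actually see.

```python
# Toy illustration of Douhet's geometric argument: without radar, interception
# is a needle-in-a-haystack search. All figures below are invented assumptions.
import math

front_km, depth_km, ceiling_km = 500, 100, 9        # airspace the defender must cover
airspace_km3 = front_km * depth_km * ceiling_km     # volume the bombers might occupy

visual_range_km = 8                                  # rough visual spotting range
watched_km3 = (4 / 3) * math.pi * visual_range_km ** 3   # sphere one fighter can watch

print(f"airspace to cover:        {airspace_km3:,.0f} km^3")
print(f"one fighter 'sees' about: {watched_km3:,.0f} km^3")
print(f"fighters aloft just to blanket it: ~{airspace_km3 / watched_km3:,.0f}")
```

Even on these generous assumptions the defender needs a couple of hundred fighters already at altitude and perfectly spaced at every moment, which is why the radar systems discussed below changed the calculus so completely.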

Consequently, Douhet assumed there could be no effective defense against fleets of bombers (and thus little point in investing in air defenses or fighters to stop them). Rather than wasting time on the heavily entrenched front lines, stuck in the stalemate, they could fly over the stalemate to attack the enemy directly. In this case, Douhet imagined these bombers would target – with a mix of explosive, incendiary and poison gas munitions – the “peacetime industrial and commercial establishment; important buildings, private and public; transportation arteries and centers; and certain designated areas of civilian population.” This onslaught would in turn be so severe that the populace would force its government to make peace to make the bombing stop. Douhet went so far as to predict (in 1928) that just 300 tons of bombs dropped on civilian centers could end a war in a month; in The War of 19– he offered a scenario in which, in a renewed war between Germany and France, the latter surrendered under bombing pressure before it could even mobilize. Douhet imagined this, somewhat counterintuitively, as a more humane form of war: while the entire effort would be aimed at butchering as many civilians as possible, he thought doing so would end wars quickly and thus result in less death.

Clever ideas to save lives by killing more people are surprisingly common and unsurprisingly rarely turn out to work.

Now before we move forward, I think we want to unpack that vision just a bit, because there are actually quite a few assumptions there. First, Douhet is assuming that there will be no way to locate or intercept the bombers in the vastness of the sky, that they will be able to accurately navigate to and strike their targets (which are, in the event, major cities) and be able to carry sufficient explosive payloads to destroy those targets. But the largest assumption of all is that the application of explosives to cities would lead to collapsing civilian morale and peace; it was a wholly untested assumption, which was about to become an extremely well-tested assumption. But for Douhet’s theory to work, all of those assumptions in the chain – lack of interception, effective delivery of munitions, sufficient munitions to deliver and bombing triggering morale collapse – needed to be true. In the event, none of them were.

What Douhet couldn’t have known was that one of those assumptions would already be in the process of collapsing before the next major war. The British Tizard Commission tested the first Radio Detection and Finding device successfully in 1935, what we tend to now call radar (for RAdio Detection And Ranging). Douhet had assumed the only way to actually find those bombers would be the venerable Mk. 1 Eyeball and indeed they made doing so a formidable task (the Mk. 1 Ear was actually a more useful device in many cases). But radar changed the game, allowing the detection of flying objects at much greater range and with a fair degree of precision. The British started planning and building a complete network of radar stations covering the coastline in 1936, what would become the ‘Chain Home’ system. The bomber was no longer untrackable.

That was in turn matched by changes in the design of the bomber’s great enemy, fighters. Douhet had assumed big, powerful bombers could not only be undetected, but would fly at altitudes and speeds which would render them difficult to intercept. Fighter designs, however, advanced just as fast. First flown in 1935, the Hawker Hurricane could fly at 340mph and up to 36,000 feet, plenty fast and high enough to catch the bombers of the day. The German Bf 109, deployed in 1937 (the same year the Hurricane saw widespread deployment) was actually a touch faster and could make it to 39,000 feet. If the bomber could be found, it could absolutely be engaged by such planes and those fighters, being faster and more maneuverable could absolutely shoot the bomber down. Indeed, when it came to it over Britain and Germany, bombers proved to be horribly vulnerable to fighters if they weren’t well escorted by their own long-range fighters.

Cracks were thus already appearing in Douhet’s vision of wars won entirely through the air. But the question had already become tied up in institutional rivalries in quite a few countries, particularly Britain and the United States. After all, if future wars would be won by the air, that implied that military spending – a scarce and shrinking commodity in the interwar years – ought to be channeled away from ground or naval forces and towards fledgling air forces like the Royal Air Force (RAF) or the US Army Air Corps (soon to be the US Army Air Forces, then to be the US Air Force), either to fund massive fleets of bombers or fancy new fighters to intercept massive fleets of bombers or, ideally both. Just as importantly, if airpower could achieve independent strategic effects, it made no sense to tie the air arm to the ground by making it a subordinate part of a country’s army; the generals would always prioritize the ground war. Consequently, strategic airpower, as distinct from any other kind of airpower, became the crucial argument for both the funding and independence of a country’s air arm. That matters of course because, while we are discussing strategic airpower here, it is not – as you will recall from above – the only kind. But it was the only kind which could justify a fully independent Air Force.

Upton Sinclair once quipped that, “It is difficult to get a man to understand something, when his salary depends on him not understanding it.” Increasingly the salaries of airmen in the United States and Britain depended on understanding that strategic bombing – again, distinct from other forms of airpower – could work, would work and would be a war winner.

I’ve mentioned this question of “Why do we have an Air Force?” before.

Public choice theory is even more useful in understanding foreign policy

January 16th, 2023

Public choice theory was developed to understand domestic politics, but Richard Hanania argues — in Public Choice Theory and the Illusion of Grand Strategy — that public choice is actually even more useful in understanding foreign policy:

First, national defence is “the quintessential public good” in that the taxpayers who pay for “national security” compose a diffuse interest group, while those who profit from it form concentrated interests. This calls into question the assumption that American national security is directly proportional to its military spending (America spends more on defence than most of the rest of the world combined).

Second, the public is ignorant of foreign affairs, so those who control the flow of information have excess influence. Even politicians and bureaucrats are ignorant: most(!) counterterrorism officials, including the chief of the FBI’s national security branch and a seven-term congressman then serving as the vice chairman of a House intelligence subcommittee, did not know the difference between Sunnis and Shiites. The same favoured interests exert influence at all levels of society, including at the top; for example, intelligence agencies are discounted if they contradict what leaders think they know through personal contacts and publicly available material, as was the case in the run-up to the Iraq War.

Third, unlike policy areas like education, it is legitimate for governments to declare certain foreign affairs information to be classified, i.e. the public has no right to know. Top officials leaking classified information to the press is normal practice, so they can be extremely selective in manipulating public knowledge.

Fourth, it’s difficult to know who possesses genuine expertise, so foreign policy discourse is prone to capture by special interests. History runs only once — cause and effect in foreign policy are hard to generalise into measurable forecasts; as demonstrated by Tetlock’s superforecasters, geopolitical experts are worse than informed laymen at predicting world events. Unlike those who have fought the tobacco companies that denied the harms of smoking, or oil companies that denied global warming, the opponents of interventionists may never be able to muster evidence clear enough to win against those in power and the special interests backing them.

Hanania’s special interest groups are the usual suspects: government contractors (weapons manufacturers [1]), the national security establishment (the Pentagon [2]), and foreign governments [3] (not limited to electoral intervention).

What doesn’t have comparable influence, despite what some IR theorists argue, is business interests more broadly. Unlike weapons manufacturers, other business interests have to overcome the collective action problem, especially when some businesses benefit from protectionism.

None of the precursors were in place

January 15th, 2023

Once you understand how the Industrial Revolution came about, it’s easy to see why there was no Roman Industrial Revolution — none of the precursors were in place:

The Romans made some use of mineral coal as a heating element or fuel, but it was decidedly secondary to their use of wood and where necessary charcoal. The Romans used rotational energy via watermills to mill grain, but not to spin thread. Even if they had the spinning wheel (and they didn’t; they’re still spinning with drop spindles), the standard Mediterranean period loom, the warp-weighted loom, was roughly an order of magnitude less efficient than the flying shuttle loom, so the Roman economy couldn’t have handled all of the thread the spinning wheel could produce.

And of course the Romans had put functionally no effort into figuring out how to make efficient pressure-cylinders, because they had absolutely no use for them. Remember that by the time Newcomen is designing his steam engine, the kings and parliaments of Europe have been effectively obsessed with who could build the best pressure-cylinder (and then plug it at one end, making a cannon) for three centuries because success in war depended in part on having the best cannon. If you had given the Romans the designs for a Newcomen steam engine, they couldn’t have built it without developing whole new technologies for the purpose (or casting every part in bronze, which introduces its own problems) and then wouldn’t have had any profitable use to put it to.

All of which is why simple graphs of things like ‘global historical GDP’ can be a bit deceptive: there’s a lot of particularity beneath the basic statistics of production because technologies are contingent and path dependent.

The Industrial Revolution happened largely in one place

January 14th, 2023

The Industrial Revolution was more than simply an increase in economic production, Bret Devereaux explains:

Modest increases in economic production are, after all, possible in agrarian economies. Instead, the industrial revolution was about accessing entirely new sources of energy for broad use in the economy, thus drastically increasing the amount of power available for human use. The industrial revolution thus represents not merely a change in quantity, but a change in kind from what we might call an ‘organic’ economy to a ‘mineral’ economy. Consequently, I’d argue, the industrial revolution represents probably just the second time in human history that as a species we’ve undergone a radical change in our production; the first being the development of agriculture in the Neolithic period.

However, unlike farming which developed independently in many places at different times, the industrial revolution happened largely in one place, once and then spread out from there, largely because the world of the 1700s AD was much more interconnected than the world of c. 12,000BP (‘before present,’ a marker we sometimes use for the very deep past). Consequently while we have many examples of the emergence of farming and from there the development of complex agrarian economies, we really only have one ‘pristine’ example of an industrial revolution. It’s possible that it could have occurred with different technologies and resources, though I have to admit I haven’t seen a plausible alternative development that doesn’t just take the same technologies and systems and put them somewhere else.

[…]

Fundamentally this is a story about coal, steam engines, textile manufacture and above all the harnessing of a new source of energy in the economy. That’s not the whole story, by any means, but it is one of the most important through-lines and will serve to demonstrate the point.

The specificity matters here because each innovation in the chain required not merely the discovery of the principle, but also the design and an economically viable use-case to all line up in order to have impact.

[…]

So what was needed was not merely the idea of using steam, but also a design which could actually function in a specific use case. In practice that meant both a design that was far more efficient (though still wildly inefficient) and a use case that could tolerate the inevitable inadequacies of the 1.0 version of the device. The first design to actually square this circle was Thomas Newcomen’s atmospheric steam engine (1712).

[…]

Now that design would be iterated on subsequently to produce smoother, more powerful and more efficient engines, but for that iteration to happen someone needs to be using it, meaning there needs to be a use-case for repetitive motion at modest-but-significant power in an environment where fuel is extremely cheap so that the inefficiency of the engine didn’t make it a worse option than simply having a whole bunch of burly fellows (or draft animals) do the job. As we’ll see, this was a use-case that didn’t really exist in the ancient world and indeed existed almost nowhere but Britain even in the period where it worked.

But fortunately for Newcomen the use case did exist at that moment: pumping water out of coal mines. Of course a mine that runs below the local water-table (as most do) is going to naturally fill with water which has to be pumped out to enable further mining. Traditionally this was done with muscle power, but as mines get deeper the power needed to pump out the water increases (because you need enough power to lift all of the water in the pump system in each movement); cheaper and more effective pumping mechanisms were thus very desirable for mining. But the incentive here can’t just be any sort of mining, it has to be coal mining because of the inefficiency problem: coal (a fuel you can run the engine on) is of course going to be very cheap and abundant directly above the mine where it is being produced and for the atmospheric engine to make sense as an investment the fuel must be very cheap indeed. It would not have made economic sense to use an atmospheric steam engine over simply adding more muscle if you were mining, say, iron or gold and had to ship the fuel in; transportation costs for bulk goods in the pre-railroad world were high. And of course trying to run your atmospheric engine off of local timber would only work for a very little while before the trees you needed were quite far away.
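
The physics behind “deeper mine, more power” is just the work needed to lift water against gravity. A minimal sketch, with inflow and depth figures that are purely illustrative assumptions rather than historical data:

```python
# Minimum (frictionless) power to lift groundwater out of a mine:
#   P = rho * g * Q * h   (water density * gravity * inflow rate * lift height)
# The inflow rate and depths below are illustrative assumptions, not data.
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2
WATTS_PER_HP = 746.0

def pump_power_watts(inflow_m3_per_s: float, depth_m: float) -> float:
    """Power needed to raise the given inflow of water from depth_m to the surface."""
    return RHO_WATER * G * inflow_m3_per_s * depth_m

for depth_m in (30, 60, 120):                        # progressively deeper workings
    watts = pump_power_watts(0.01, depth_m)          # 10 litres of seepage per second
    print(f"{depth_m:>4} m deep: about {watts / WATTS_PER_HP:.1f} hp, around the clock")
```

The power requirement scales linearly with depth and the pump must run continuously, which is why a fuel-hungry but tireless engine sitting directly on top of a coal mine made economic sense where nothing else did.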

But that in turn requires you to have large coal mines, mining lots of coal deep under ground. Which in turn demands that your society has some sort of bulk use for coal. But just as the Newcomen Engine needed to out-compete ‘more muscle’ to get a foothold, coal has its own competitor: wood and charcoal. There is scattered evidence for limited use of coal as a fuel from the ancient period in many places in the world, but there needs to be a lot of demand to push mines deep to create the demand for pumping. In this regard, the situation on Great Britain (the island, specifically) was almost ideal: most of Great Britain’s forests seem to have been cleared for agriculture in antiquity; by 1000 only about 15% of England (as a geographic sub-unit of the island) was forested, a figure which continued to decline rapidly in the centuries that followed (down to a low of around 5%). Consequently wood as a heat fuel was scarce and so beginning in the 16th century we see a marked shift over to coal as a heating fuel for things like cooking and home heating. Fortunately for the residents of Great Britain there were surface coal seams in abundance making the transition relatively easy; once these were exhausted deep mining followed which at last by the late 1600s created the demand for coal-powered pumps finally answered effectively in 1712 by Newcomen: a demand for engines to power pumps in an environment where fuel efficiency mattered little.

With a use-case in place, these early steam engines continue to be refined to make them more powerful, more fuel efficient and capable of producing smooth rotational motion out of their initially jerky reciprocal motions, culminating in James Watt’s steam engine in 1776. But so far all we’ve done is gotten very good at pumping out coal mines – that has in turn created steam engines that are now fuel efficient enough to be set up in places that are not coal mines, but we still need something for those engines to do to encourage further development. In particular we need a part of the economy where getting a lot of rotational motion is the major production bottleneck.

What could be a more interesting question?

January 13th, 2023

There are people who are really trying to either kill or at least studiously ignore all of the progress in genomics, Stephen Hsu reports — from first-hand experience:

My research group solved height as a phenotype. Give us the DNA of an individual with no other information other than that this person lived in a decent environment—wasn’t starved as a child or anything like that—and we can predict that person’s height with a standard error of a few centimeters. Just from the DNA. That’s a tour de force.

Then you might say, “Well, gee, I heard that in twin studies, the correlation between twins in IQ is almost as high as their correlation in height. I read it in some book in my psychology class 20 years ago before the textbooks were rewritten. Why can’t you guys predict someone’s IQ score based on their DNA alone?”

Well, according to all the mathematical modeling and simulations we’ve done, we need somewhat more training data to build the machine learning algorithms to do that. But it’s not impossible. In fact, we predicted that if you have about a million genomes and the cognitive scores of those million people, you could build a predictor with a standard error of plus or minus 10 IQ points. So you can ask, “Well, since you guys showed you could do it for height, and since there are 30, or 40, or 50, different disease conditions that we now have decent genetic predictors for, why isn’t there one for IQ?”
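
For readers wondering what such a predictor looks like mechanically, here is a minimal synthetic-data sketch of the general approach: a sparse linear model fit to a genotype matrix. It is not Hsu’s actual pipeline (the published predictors use related sparse-regression methods trained on far larger samples of real genomes), and every size and parameter below is a toy assumption.

```python
# Toy sketch of a polygenic predictor: sparse linear regression on a genotype
# matrix (rows = people, columns = SNPs coded 0/1/2 copies of the minor allele).
# Entirely synthetic data; real studies use far more people and SNPs.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_people, n_snps, n_causal = 4000, 1000, 100

X = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)
beta = np.zeros(n_snps)
beta[rng.choice(n_snps, n_causal, replace=False)] = rng.normal(0.0, 1.0, n_causal)
genetic = X @ beta
phenotype = genetic + rng.normal(0.0, genetic.std(), n_people)   # ~50% heritable trait

train, test = slice(0, 3000), slice(3000, None)
model = LassoCV(cv=5).fit(X[train], phenotype[train])    # L1 penalty -> sparse weights
pred = model.predict(X[test])

print("correlation with held-out phenotype:", round(float(np.corrcoef(pred, phenotype[test])[0, 1]), 2))
print("standard error of prediction:", round(float((pred - phenotype[test]).std()), 2))
```

The sample-size point in the quote is exactly the constraint this kind of model runs into: with too few training rows relative to the number of SNPs, the regression cannot separate causal variants from noise, which is why the quoted estimate for a usable IQ predictor is on the order of a million genomes.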

Well, the answer is there’s zero funding. There’s no NIH, NSF, or any agency that would take on a proposal saying, “Give me X million dollars to genotype these people, and also measure their cognitive ability or get them to report their SAT scores to me.” Zero funding for that. And some people get very, very aggressive upon learning that you’re interested in that kind of thing, and will start calling you a racist, or they’ll start attacking you. And I’m not making this up, because it actually happened to me.

What could be a more interesting question? Wow, the human brain—that’s what differentiates us from the rest of the animal species on this planet. Well, to what extent is brain development controlled by DNA? Wouldn’t it be amazing if you could actually predict individual variation in intelligence from DNA just as we can with height now? Shouldn’t that be a high priority for scientific discovery? Isn’t this important for aging, because so many people undergo cognitive decline as they age? There are many, many reasons why this subject should be studied. But there’s effectively zero funding for it.

The internet wants to be fragmented

January 12th, 2023

“You know,” Noah Smith quipped, “fifteen years ago, the internet was an escape from the real world. Now the real world is an escape from the internet.”

When I first got access to the internet as a kid, the very first thing I did was to find people who liked the same things I liked — science fiction novels and TV shows, Dungeons and Dragons, and so on. In the early days, that was what you did when you got online — you found your people, whether on Usenet or IRC or Web forums or MUSHes and MUDs. Real life was where you had to interact with a bunch of people who rubbed you the wrong way — the coworker who didn’t like your politics, the parents who nagged you to get a real job, the popular kids with their fancy cars. The internet was where you could just go be a dork with other dorks, whether you were an anime fan or a libertarian gun nut or a lonely Christian 40-something or a gay kid who was still in the closet. Community was the escape hatch.

Then in the 2010s, the internet changed. It wasn’t just the smartphone, though that did enable it. What changed is that internet interaction increasingly started to revolve around a small number of extremely centralized social media platforms: Facebook, Twitter, and later Instagram.

From a business perspective, this centralization was a natural extension of the early internet — people were getting more connected, so just connect them even more.

[…]

Putting everyone in the world in touch through a single network is what we did with the phone system, and everyone knows that the value of a network scales as the square of the number of users. So centralizing the whole world’s social interaction on two or three platforms would print loads of money while also making for a happier, more connected world.

[…]

It started with the Facebook feed. On the old internet, you could show a different side of yourself in every forum or chat room; but on your Facebook feed, you had to be the same person to everyone you knew. When social unrest broke out in the mid-2010s this got even worse — you had to watch your liberal friends and your conservative friends go at it in the comments of your posts, or theirs. Friendships and even family bonds were destroyed in those comments.

[…]

The early 2010s on Twitter were defined by fights over toxicity and harassment versus early-internet ideals of free speech. But after 2016 those fights no longer mattered, because everyone on the platform simply adopted the same patterns of toxicity and harassment that the extremist trolls had pioneered.

[…]

Why did this happen to the centralized internet when it hadn’t happened to the decentralized internet of previous decades? In fact, there were always Nazis around, and communists, and all the other toxic trolls and crazies. But they were only ever an annoyance, because if a community didn’t like those people, the moderators would just ban them. Even normal people got banned from forums where their personalities didn’t fit; even I got banned once or twice. It happened. You moved on and you found someone else to talk to.

Community moderation works. This was the overwhelming lesson of the early internet. It works because it mirrors the social interaction of real life, where social groups exclude people who don’t fit in. And it works because it distributes the task of policing the internet to a vast number of volunteers, who provide the free labor of keeping forums fun, because to them maintaining a community is a labor of love. And it works because if you don’t like the forum you’re in — if the mods are being too harsh, or if they’re being too lenient and the community has been taken over by trolls — you just walk away and find another forum. In the words of the great Albert O. Hirschman, you always have the option to use “exit”.

[…]

They tinkered at the edges of the platform, but never touched their killer feature, the quote-tweet, which Twitter’s head of product called “the dunk mechanism.” Because dunks were the business model — if you don’t believe me, you can check out the many research papers showing that toxicity and outrage drive Twitter engagement.

[…]

Humanity does not want to be a global hive mind. We are not rational Bayesian updaters who will eventually reach agreement; when we receive the same information, it tends to polarize us rather than unite us. Getting screamed at and insulted by people who disagree with you doesn’t take you out of your filter bubble — it makes you retreat back inside your bubble and reject the ideas of whoever is screaming at you. No one ever changed their mind from being dunked on; instead they all just doubled down and dunked harder. The hatred and toxicity of Twitter at times felt like the dying screams of human individuality, being crushed to death by the hive mind’s constant demands for us to agree with more people than we ever evolved to agree with.

I love to quote-tweet approvingly. I suppose that’s one of my eccentricities.

What are the skills that you really want out of a college graduate?

January 11th, 2023

Stephen Hsu was the most senior administrator who reviewed all the tenure and promotion cases at his university:

We have 50,000 students here. It’s one of the biggest universities in the United States. Each year, there are about 150 faculty who are coming up for promotion from associate professor to full professor or assistant to associate with tenure. And there are sometimes situations where you know what the system wants you to do with a particular person, but there’s a question of your personal integrity—whether you want to actually uphold the standards of the institution in those circumstances.

It’s funny, because the president who hired me actually wanted me to do that. She wanted someone who was very rigorous to control this process. But I knew I was gradually making enemies. Sometimes there’s a popular person, and maybe there’s some diversity goal or gender equality goal. So you have this person maybe who hasn’t done that well with their research, or hasn’t been well-funded with external grants, or maybe their teaching evaluations aren’t that great, but some people really want them promoted. And if you impose the regular standard and they don’t get promoted, you’ve made a lot of enemies.

So if I just thought to myself, “I’m not going to be at Michigan State 10 years from now—let them handle the problems if all these people who are not so good get promoted. Let them deal with it,” that would be the smart thing if I were a careerist or self-interested person. Don’t make waves, just put your finger in the wind and say: “Which way is the wind blowing? I’ll just go with that.” But I didn’t do that. Because I thought, “What’s the point of doing this job if you’re not going to do it right?” Now imagine how many congressmen are doing this, imagine how many have really deeply held principles that they’re trying to advance. Maybe it’s 10 percent? I don’t know, but it’s nowhere near 100 percent.

It’s the same in higher ed. There’s something called the Collegiate Learning Assessment. It’s a standardized test that was developed over the last 20 years. And it’s supposed to evaluate the skills that were learned by students during college. For less prestigious directional state universities this would be a very good tool, because the subset of graduates who did well on the CLA could get hired by General Motors or whatever with the same confidence as the kid from Harvard, the University of Michigan, or anywhere else. So there was interest in building something like the CLA.

In order not to do it in a vacuum, the people who were developing it went to all these big corporations and said “Well, what are the skills that you really want out of a college graduate?” And not surprisingly, they wanted things like being able to read an article in The Economist and write a good summary. Or to look at graphs and make some inferences. Nothing ivory tower—it was all very reasonable, practical stuff. And so they commissioned this huge study by RAND. Twenty universities participated, including MIT, Michigan, some historically black colleges, some directional state universities—a huge spectrum covering all of American higher education.

They found that graduating students’ CLA scores were very highly correlated with their incoming SAT scores. Well, if you knew anything about psychometrics, it’s no surprise that the delta between your freshman year and your senior year on the CLA score is minimal. So what are kids buying when they go to college for four years? Are they getting skills that GM or McKinsey want, or are they just repackaging themselves?
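
The psychometric point here (a very high correlation between exit scores and entry scores leaves little room for a college-specific “delta”) can be seen in a quick simulation; the effect sizes below are invented purely for illustration:

```python
# If the exit test largely re-measures entering ability, any college-specific
# gain is small relative to differences students arrive with. Invented numbers.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
ability = rng.normal(0.0, 1.0, n)         # what the incoming SAT captures (standardized)
college_gain = rng.normal(0.0, 0.3, n)    # person-specific gain over four years
cla_exit = ability + college_gain         # senior-year CLA score

r = np.corrcoef(ability, cla_exit)[0, 1]
print(f"corr(incoming SAT, exit CLA) = {r:.2f}")            # about 0.96 here
print(f"variance already explained at entry = {r**2:.0%}")  # about 92% here
```

Read in reverse, an observed correlation that high implies that whatever students gain in four years barely registers against the differences they arrived with, which is Hsu’s point about what the CLA data imply.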

I showed the results of this RAND CLA study to my colleagues, the senior administrators at Michigan State University, and I tried to get them to understand: “Guys, do you realize that maybe we’re not doing what we think we’re doing on this campus? You probably go out and tell alums and donors, moms and dads that we’re building skills for these kids at Michigan State, so they can be great employees of Ford Motor Company and Andersen Consulting when they get out. But the data doesn’t actually say that we do that.” I’m not talking about specialist majors like accounting or engineering, where we can see the kids are coming out with skills they didn’t enter with. I’m talking about generalist learning and “critical thinking” that schools say they teach, but the CLA says otherwise.

I have all my emails from when I was in that job, so I can tell you exactly how much intellectual curiosity and updating of priors there was among these vice presidents and higher at major Big 10 universities. Now, they could have come back and said, “Steve, I don’t believe this RAND study. My son Johnny learned a lot when he was at Illinois,” or something. They could have come back and contested the findings. Did any of them contest the findings with me? Zero.

Did any of them care about what was revealed about the business that we’re actually in, about what is actually going on on our campus? One or two well-meaning VPs emailed me saying “Wow, that’s incredible. I never would have thought…” One of the women who emailed me back had a college-aged kid, and this actually impacted some decisions that were going on in her family at the time.

But overall there was very little concern about the findings, and very little pushback even denying them. Those are the people running your institutions of higher education. I discussed these findings with lots of other top administrators at other universities and very few people cared. They’ve got their careers; they’re just doing their thing.

The group was elitist, but it was also meritocratic

January 10th, 2023

Sputnik’s success created an overwhelming sense of fear that permeated all levels of U.S. society, including the scientific establishment:

As John Wheeler, a theoretical physicist who popularized the term “black hole” would later tell an interviewer: “It is hard to reconstruct now the sense of doom when we were on the ground and Sputnik was up in the sky.”

Back on the ground, the event spurred a mobilization of American scientists unseen since the war. Six weeks after the launch of Sputnik, President Dwight Eisenhower revived the President’s Science Advisory Committee (PSAC). It was a group of 16 scientists who reported directly to him, granting them an unprecedented amount of influence and power. Twelve weeks after Sputnik, the Department of Defense launched the Advanced Research Projects Agency (ARPA), which was later responsible for the development of the internet. Fifteen months after Sputnik, the Office of the Director of Defense Research and Engineering (ODDRE) was launched to oversee all defense research. A 36-year-old physicist who worked on the Manhattan Project, Herb York, was named head of the ODDRE. There, he reported directly to the president and was given total authority over all defense research spending.

It was the beginning of a war for technological supremacy. Everyone involved understood that in the nuclear age, the stakes were existential.

It was not the first time the U.S. government had mobilized the country’s leading scientists. World War II had come to be known as “the physicists’ war.” It was physicists who developed proximity fuzes and the radar systems that rendered previously invisible enemy ships and planes visible, enabling them to be targeted and destroyed, and it was physicists who developed the atomic bombs that ended the war. The prestige conferred by their success during the war positioned physicists at the top of the scientific hierarchy. With the members of the Manhattan Project now aging, getting the smartest young physicists to work on military problems was of intense interest to York and the ODDRE.

Physicists saw the post-Sputnik era as an opportunity to do well for themselves. Many academic physicists more than doubled their salaries working on consulting projects for the DOD during the summer. A source of frustration to the physicists was that these consulting projects were awarded through defense contractors, who were making twice as much as the physicists themselves. A few physicists based at the University of California Berkeley decided to cut out the middleman and form a company they named Theoretical Physics Incorporated.

Word of the nascent company spread quickly. The U.S.’s elite physics community consisted of a small group of people who all went to the same small number of graduate programs and were faculty members at the same small number of universities. These ties were tightened during the war, when many of those physicists worked closely together on the Manhattan Project and at MIT’s Rad Lab.

Charles Townes, a Columbia University physics professor who would later win a Nobel Prize for his role in inventing the laser, was working for the Institute for Defense Analyses (IDA) at the time and reached out to York when he learned of the proposed company. York knew many of the physicists personally and immediately approved $250,000 of funding for the group. Townes met with the founders of the company in Los Alamos, where they were working on nuclear-rocket research. Appealing to their patriotism, he convinced them to make their project a department of IDA.

A short while later the group met in Washington D.C., where they fleshed out their new organization. They came up with a list of the top people they would like to work with and invited them to Washington for a presentation. Around 80 percent of the people invited joined the group; they were all friends of the founders, and they were all high-level physicists. Seven of the first members, or roughly one-third of its initial membership, would go on to win the Nobel Prize. Other members, such as Freeman Dyson, who published foundational work on quantum field theory, were some of the most renowned physicists to never receive the Nobel.

The newly formed group was dubbed “Project Sunrise” by ARPA, but the group’s members disliked the name. The wife of one of the founders proposed the name JASON, after the Greek mythological hero who led the Argonauts on a quest for the golden fleece. The name stuck and JASON was founded in December 1959, with its members being dubbed “Jasons.”

The key to the JASON program was that it formalized a unique social fabric that already existed among elite U.S. physicists. The group was elitist, but it was also meritocratic. As a small, tight-knit community, many of the scientists who became involved in JASON had worked together before. It was a peer network that maintained strict standards for performance. With permission to select their own members, the Jasons were able to draw from those who they knew were able to meet the expectations of the group.

This expectation superseded existing credentials; Freeman Dyson never earned a PhD, but he possessed an exceptionally creative mind. Dyson became known for his involvement with Project Orion, which aimed to develop a starship design that would be powered through a series of atomic bombs, as well as his Dyson Sphere concept, a hypothetical megastructure that completely envelops a star and captures its energy.

Another Jason was Nick Christofilos, an engineer who developed particle accelerator concepts in his spare time when he wasn’t working at an elevator maintenance business in Greece. Christofilos wrote to physicists in the U.S. about his ideas, but was initially ignored. But he was later offered a job at an American research laboratory when physicists found that some of the ideas in his letters pre-dated recent advances in particle accelerator design. Dyson’s and Christofilos’s lack of formal qualifications would preclude an academic research career today, but the scientific community at the time was far more open-minded.

JASON was founded near the peak of what became known as the military-industrial complex. When President Eisenhower coined this term during his farewell address in 1961, military spending accounted for nine percent of the U.S. economy and 52 percent of the federal budget; 44 percent of the defense budget was being spent on weapons systems.

But the post-Sputnik era entailed a golden age for scientific funding as well. Federal money going into basic research tripled from 1960 to 1968, and research spending more than doubled overall. Meanwhile, the number of doctorates awarded in physics doubled. Again, meritocratic elitism dominated: over half of the funding went to 21 universities, and these universities awarded half of the doctorates.

With a seemingly unlimited budget, the U.S. military leadership had started getting some wild ideas. One general insisted a moon base would be required to gain the ultimate high ground. Project Iceworm proposed to build a network of mobile nuclear missile launchers under the Greenland ice sheet. The U.S. Air Force sought a nuclear-powered supersonic bomber under Project WS-125 that could take off from U.S. soil and drop hydrogen bombs anywhere in the world. There were many similar ideas and each military branch produced analyses showing that not only were the proposed weapons technically feasible, but they were also essential to winning a war against the Soviet Union.

Before joining the Jasons, some of the group’s scientists had made radical political statements that could have made them vulnerable to having their analysis discredited. Fortunately, JASON’s patrons were willing to take a risk and overlook political offenses in order to ensure that the right people were included in the group. Foreseeing the potential political trap, Townes proposed a group of senior scientific advisers, about 75 percent of whom were well-known conservative hawks. Among this group was Edward Teller, known as the “father of the hydrogen bomb.” This senior layer could act as a political shield of sorts in case opponents attempted to politically tarnish JASON members.

Every spring, the Jasons would meet in Washington D.C. to receive classified briefings about the most important problems facing the U.S. military, then decide for themselves what they wanted to study. JASON’s mandate was to prevent “technological surprise,” but no one at the Pentagon presumed to tell them how to do it.

In July, the group would reconvene for a six-week “study session,” initially alternating yearly between the east and west coasts. Members later recalled these as idyllic times for the Jasons, with the group becoming like an extended family. The Jasons rented homes near each other. Wives became friends, children grew up like cousins, and the community put on backyard plays at an annual Fourth of July party. But however idyllic their off hours, the physicists’ workday revolved around contemplating the end of the world. Questions concerning fighting and winning a nuclear war were paramount. The ideas the Jasons were studying approached the level of what had previously been science fiction.

Some of the first JASON studies focused on ARPA’s Defender missile defense program. The Jasons furthered ideas for detecting incoming nuclear attacks through the infrared signature of missiles, applied newly discovered astronomical techniques to distinguish nuclear-armed missiles from decoys, and worked on the concept of shooting what were essentially directed lightning bolts through the atmosphere to destroy incoming nuclear missiles.

The lightning bolt idea, known today as directed energy weapons, came from Christofilos, who was described by an ARPA historian as mesmerizing JASON physicists with the “kind of ideas that nobody else had.” Some of his other projects included a fusion machine called Astron, a high-altitude nuclear explosion test codenamed Operation Argus that was dubbed the “greatest scientific experiment ever conducted,” and explorations of a potential U.S. “space fleet.”

The Jasons’ analysis of the effects of nuclear explosions in the upper atmosphere, water, and underground, as well as of methods for detecting these explosions, was credited with being critical to the U.S. government’s decision to sign the Limited Test Ban Treaty with the Soviet Union. Because of their analysis, the U.S. government felt confident it could verify treaty compliance; the treaty resulted in a large decline in the concentration of radioactive particles in the atmosphere.

The success of JASON over its first five years increased its influence within the U.S. military and spurred attempts by U.S. allies to copy the program. Britain tried for years to create a version of JASON, even enlisting the help of JASON’s leadership. But the effort failed: British physicists simply did not seem to desire involvement. Earlier attempts by British leaders like Winston Churchill to create a British MIT had run into the same problems.

The difference was not ability, but culture. Unlike their European peers, American physicists had no disdain for the applied sciences. They were comfortable working as advisors on military projects and were employed by institutions that depended on DOD funding. Over 20 percent of Caltech’s budget in 1964 came from the DOD, and Caltech was only the 15th-largest recipient of such funding; MIT was first and received twelve times as much money. The U.S. military and scientific elite were enmeshed in a way that had no parallel in the rest of the world, then or now.

They are very, very careerist people

January 9th, 2023

Stephen Hsu worked for a time as a vice president of a university and notes that administrators are a different group:

The top level administrators at universities are usually drawn from the faculty, or from faculty at other universities. After being a top level administrator at a Big 10 university, and meeting provosts and presidents at the other top universities, I have a pretty good feel for this particular collection of people.

You can imagine what it is that makes someone who’s already a tenured professor in biochemistry decide they want to take on this huge amount of responsibility and maybe even shut down their own research program. They are very, very careerist people. And that is a huge problem, because incentives are heavily misaligned.

The incentive for me as a senior administrator is not to make waves and keep everything kind of calm. Calm down the crazy professor who’s doing stuff, assuage the students that are protesting, make the donors happy, make the board of trustees happy. I found that the people who were in the role so they could advance their career, versus those trying to advance the interests of the institution, were very different. There were times when I felt like I had to do something very dangerous for me career-wise, but it was absolutely essential for the mission of the university. I had to do that repeatedly.

And I told the president who hired me, “I don’t know how long I’m going to last in this job, because I’m going to do the right thing. If I do the right thing and I’m bounced out, that’s fine. I don’t care.” But most people are not like that.

In economics, there’s something called the principal-agent problem. Let’s say you hire a CEO to manage your company. Unless his compensation is completely determined by some long-dated stock options or something, his interests are not aligned with the long-term growth of your company. He can ship all your manufacturing off to China, have a great few quarters, and get a huge bonus, even if, on a long timescale, it’s really bad for your bottom line.

So there’s a principal-agent problem here. Anytime you give centralized power to somebody, you have to be sure that their incentives — or their personal integrity — are aligned with what you want them to promote at the institution. And generally, it’s not well done in the universities right now.

It’s not like it used to be that, “Oh, if Joe or Jane is going to become university president, you can bet that their highest value is higher education and truth, that’s the American way.” It was probably never true. But they don’t claw back your compensation as a president of the university if it later turns out that you really screwed something up. You know, they don’t really even do that with CEOs.
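To make the incentive mismatch Hsu describes concrete, here is a minimal sketch of the principal-agent problem, with payoff numbers invented purely for illustration (none of this comes from Hsu): an agent paid on short-horizon results will rationally pick the action that maximizes his own bonus even when it destroys long-run value, unless he also holds a meaningful long-dated stake.

```python
# Minimal principal-agent sketch. All payoffs are made-up illustrative numbers.
actions = {
    # action: (short_term_profit, long_term_value)
    "offshore everything": (100, -300),   # great quarters, hollowed-out company
    "invest in the business": (20, 400),  # weak quarters, durable growth
}

def agent_payoff(short_term, long_term, bonus_rate=0.1, equity_stake=0.0):
    """Agent's pay: a bonus on short-term profit plus any long-dated equity value."""
    return bonus_rate * short_term + equity_stake * long_term

def principal_payoff(short_term, long_term):
    """The owner ultimately bears both the short-term profit and the long-term value."""
    return short_term + long_term

for stake in (0.0, 0.2):  # no long-dated stake vs. a meaningful one
    agent_pick = max(actions, key=lambda a: agent_payoff(*actions[a], equity_stake=stake))
    owner_pick = max(actions, key=lambda a: principal_payoff(*actions[a]))
    print(f"equity stake {stake:.0%}: agent picks '{agent_pick}', owner prefers '{owner_pick}'")
```

With no long-dated stake the agent’s ranking of actions flips relative to the owner’s; give the agent a real stake in long-term value, or claw back pay when things later blow up, and the rankings line up. That is exactly the alignment Hsu says universities fail to build into the job of president or provost.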

This is James Daunt’s super power

January 8th, 2023

Ted Gioia recently visited a Barnes & Noble store for the first time since the pandemic, saw a lot of interesting books, and bought a couple:

I plan to go back again.

But I’m not the only one.

The turnaround has delivered remarkable results. Barnes & Noble opened 16 new bookstores in 2022, and now will double that pace of openings in 2023. In a year of collapsing digital platforms, this 136-year-old purveyor of print media is enjoying boom times.

How did they fix things?

It’s amazing how much difference a new boss can make.

I’ve seen that firsthand so many times. I now have a rule of thumb: “There is no substitute for good decisions at the top—and no remedy for stupid ones.”

It’s really that simple. When the CEO makes foolish blunders, all the wisdom and hard work of everyone else in the company is insufficient to compensate. You only fix these problems by starting at the top.

In the case of Barnes & Noble, the new boss was named James Daunt. And he had already turned around Waterstones, a struggling book retailing chain in Britain.

Back when he was 26, Daunt had started out running a single bookstore in London—and it was a beautiful store. He had to borrow the money to do it, but he wanted a store that was a showplace for books. And he succeeded despite breaking all the rules.

For a start, he refused to discount his books, despite intense price competition in the market. If you asked him why, he had a simple answer: “I don’t think books are overpriced.”

After taking over Waterstones, he did something similar. He stopped all the “buy-two-books-and-get-one-free” promotions. He had a simple explanation for this too: When you give something away for free, it devalues it.

But the most amazing thing Daunt did at Waterstones was this: He refused to take any promotional money from publishers.

This seemed stark raving mad. But Daunt had a reason. Publishers give you promotional money in exchange for purchase commitments and prominent placement—but once you take the cash, you’ve made your deal with the devil. You now must put stacks of the promoted books in the most visible parts of the store, and sell them like they’re the holy scripture of some new cure-all creed.

Those promoted books are the first things you see when you walk by the window. They welcome you when you step inside the front door. They wink at you again next to the checkout counter.

Leaked emails show ridiculous deals. Publishers give discounts and thousands of dollars in marketing support, but the store must buy a boatload of copies—even if the book sucks and demand is weak—and push them as aggressively as possible.

Publishers do this in order to force-feed a book onto the bestseller list, using the brute force of marketing money to drive sales. If you flog that bad boy ruthlessly enough, it might compensate for the inferiority of the book itself. Booksellers, for their part, sweep up the promo cash, and maybe even get a discount that allows them to under-price Amazon.

Everybody wins. Except maybe the reader.

Daunt refused to play this game. He wanted to put the best books in the window. He wanted to display the most exciting books by the front door. Even more amazing, he let the people working in the stores make these decisions.

This is James Daunt’s super power: He loves books.

“Staff are now in control of their own shops,” he explained. “Hopefully they’re enjoying their work more. They’re creating something very different in each store.”

This crazy strategy proved so successful at Waterstones that returns fell almost to zero—97% of the books placed on the shelves were purchased by customers. That’s an amazing figure in the book business.

On the basis of this success, Daunt was put in charge of Barnes & Noble in August 2019.

I almost never need a new book right away, so it feels wrong to pay full price when I could so easily “get the second marshmallow” by waiting — but I must admit that I enjoy browsing physical books.

What always struck me about bookstores was how random the inventory seemed, especially in a section like Sci-Fi and Fantasy, where you’d find books two and five of a nine-part series and no guidance as to where to start in the genre.

If you sense that NSF or NIH have a view on something, it’s best not to fight city hall

January 7th, 2023

Stephen Hsu gives an example of how politics constrains the scientific process:

This individual is one of the most highly decorated, well-known climate simulators in the world. To give you his history, he did a PhD in general relativity in the UK and then decided he wanted to do something else, because he realized that even though general relativity was interesting, he didn’t feel like he was going to have a lot of impact on society. So he got involved in meteorology and climate modeling and became one of the most well known climate modelers in the world in terms of prizes and commendations. He’s been a co-author on all the IPCC reports going back multiple decades. So he’s a very well-known guy. But he was one of the authors of a paper in which he made the point that climate models are still far from perfect.

To do a really good job, you need a small basic cell size, which sets the scale of the features that can be captured inside the simulation. The required size is actually quite small because of all kinds of nonlinear phenomena: turbulence, convection, the transport of heat and moisture, and everything else that goes into the making of weather and climate.

And so he made this point that we’re nowhere near actually being able to properly simulate the physics of these very important features. It turns out that the transport of water vapor, which is related to the formation of clouds, is important. And it turns out high clouds reflect sunlight, and have the opposite sign effect on climate change compared to low clouds, which trap infrared radiation. So whether moisture in the atmosphere or additional carbon in the atmosphere causes more high cloud formation versus more low cloud formation is incredibly important, and it carries the whole day in these models.

In no way are these microphysics of cloud formation being modeled right now. And anybody who knows anything knows this. And the people who really understand physics and do climate modeling know this.

So he wrote a paper saying that governments are going to spend billions, maybe trillions of dollars on policy changes or geoengineering. If you’re trying to fix the climate change problem, can you at least spend a billion dollars on the supercomputers that we would need to really do a more definitive job forecasting climate change?
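A bit of back-of-the-envelope arithmetic, mine rather than anything from the paper Hsu is describing, shows why this is a hardware problem. Assume, as a rough standard scaling, that refining a model’s horizontal grid by a factor r multiplies compute cost by about r³: r² for the two horizontal dimensions, times another factor of r because the time step has to shrink along with the cell size. Global models today use cells on the order of 100 km, while convection and cloud-forming motions happen at roughly kilometer scales.

```python
# Rough scaling sketch (my assumption, not a figure from the quoted climate paper):
# refining the horizontal grid by a factor r costs ~r**3 more compute
# (r**2 for the two horizontal dimensions, times r for the shorter time step).
# Vertical resolution and richer cloud physics would only push the number higher.

def relative_cost(coarse_km: float, fine_km: float) -> float:
    """Rough multiplier on compute cost when refining the grid spacing."""
    r = coarse_km / fine_km
    return r ** 3

for target_km in (25, 10, 1):
    print(f"100 km -> {target_km} km cells: ~{relative_cost(100, target_km):,.0f}x more compute")
```

On this crude scaling, resolving cloud-scale motions globally is roughly a million-fold jump in compute, which is why the ask is for dedicated supercomputers rather than cleverer code.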

And so that paper he wrote was controversial because people in the community maybe knew he was right, but they didn’t want him talking about this. But as a scientist, I fully support what he’s trying to do. It’s intellectually honest. He’s asking for resources to be spent where they really will make a difference, not in some completely speculative area where we’re not quite sure what the consequences will be. This is clearly going to improve climate modeling and is clearly necessary to do accurate climate modeling. But the anecdote gives you a sense of how fraught science is when there are large scale social consequences. There are polarized interest groups interacting with science.

[…]

It was controversial because, in a way, he was airing some well known dirty laundry that all the experts knew about. But many of them would say it’s better to hide the laundry for the greater good, because a bad guy—somebody who’s very anti-CO2-emissions-reduction—could seize on this guy’s article and say “Look, the leading guy in your field says that you can’t actually do the simulations he wants, and yet you’re trying to shove some very precise policy goal down my throat. This guy’s revealing those numbers have literally no basis.” That would be an extreme version of the counter-utilization of my colleague’s work.

[…]

In my lifetime, the way science is conducted has changed radically, because now it’s accepted—particularly by younger scientists—that we are allowed to make ad hominem attacks on people based on what could be their entirely sincere scientific belief. That was not acceptable 20 or 30 years ago. If you walked into a department, even if it had something to do with the environment or human genetics or something like that, people were allowed to have their contrary opinion as long as the arguments they made were rational and supported by data. There was not a sense that you’re allowed to impute bad moral character to somebody based on some analytical argument that they’re making. It was not socially acceptable to do that. Now people are in danger of losing their jobs.

[…]

I could list a bunch of factors that I think contributed, and one of them is that scientists are under a lot of pressure to get money to fund their labs and pay their graduate students. If you sense that NSF or NIH have a view on something, it’s best not to fight city hall. It’s like fighting the Fed—you’re going to lose. So that enforces a certain kind of conformism.

[…]

As far as how science relates to the outside world, here’s the problem: for some people, when science agrees with their cherished political belief, they say “Hey, you know what? This is the Vulcan Science Academy, man. These guys know what they’re doing. They debated it, they looked at all the evidence, that’s a peer-reviewed paper, my friend—it was reviewed by peers. They’re real scientists.” When they like the results, they’re going to say that.

When they don’t like it, they say, “Oh, come on, those guys know they have to come to that conclusion or they’re going to lose their NIH grant. These scientists are paid a lot of money now and they’re just feathering their own nests, man. They don’t care about the truth. And by the way, papers in this field don’t replicate. Apparently, if you look back at the most prominent papers over the last 10 years and check whether subsequent papers that were better powered, used better technology, and had larger sample sizes actually replicated them, the replication rate was like 50 percent. So, you can throw half the papers that are published in top journals in the trash.”

As it turned and ran the ice axe fell out of his head

January 6th, 2023

Clint Adams was mountain goat hunting on Alaska’s Baranof Island in October with his friend, Matt Ericksen, his girlfriend, Melody Orozco, and their guide, when he heard the guide yell three words that nobody ever wants to hear in bear country:

“Oh, fuck. Run!”

By the time Adams realized what was happening, his guide was already running past him and reaching for the .375 H&H bolt-action rifle that was slung over his shoulder. Adams’ own rifle was strapped to his pack, and the only weapon at hand was the ice axe he’d been using to claw his way up the mountain. When the big boar chased after the guide and passed within arm’s reach of Adams, he took the ice axe and swung with both hands, burying the pointy end in the bear’s skull just behind its ear.

[…]

Adams then watched as the bear tackled the guide from behind, and the two rolled down to a flat spot below. The guide was on his back trying to shoulder the rifle as the eight- to nine-foot boar reared back on its hind legs. That’s when Adams saw that the axe was still lodged in the bear’s head.

Adams is 6’6” and weighs 285 pounds.

The impaled bear then reared up over the guide, who shouldered his rifle and fired a shot straight up into the air. Adams says he distinctly remembers seeing the muzzle blast ruffle the bear’s fur. The shot spooked the bear just enough for it to step back and hesitate. At this point, Ericksen drew the .357 revolver strapped to his chest and fired three shots at the bear through the brush.

The boar charged the guide again, and the guide leveled his rifle and shot a second time. Ericksen fired two more rounds from his pistol. Adams says they still don’t know if any of those shots even hit the bear, but they all kept screaming and eventually the bear ran off. They never saw the bear again, and although the guide reported the incident, Adams has no idea if the bear died or not. He did, however, get his ice axe back.

“After that second shot [from the guide], the bear looped down and got level with me about 30 yards away,” Adams says. “We’re making a ton of noise at that point, and it bluff charged once or twice. It took two steps forward, two steps back, and as it turned and ran the ice axe fell out of his head.”

[…]

Adams also says the whole experience opened his eyes to how gunshots help stop a charging bear. He says that because they were in dense brush in tight quarters, bear spray would have been useless, and he thinks that the muzzle blast from the guide’s rifle might have deterred the bear even more than the bullet.

“This might sound silly, but after going through that and seeing how the bear responded, I honestly would feel the most safe from a charging bear with a foghorn in my hand,” Adams says. “When I saw that .375 go off, it was not only the sound, but more so it was the air that hit the bear in the face. It was just amazing how that bear reacted when it got hit with the muzzle blast.”

He adds that, in his opinion, if you’re going to carry a pistol in bear country—which, of course, you should—your best bet would be to carry a 10mm Glock with a 19-round magazine and “make as many bangs as you can.”

Posturing is an important part of fighting. With that in mind, a compensated pistol might be especially effective.

Speaking of Glocks and bears:

Sam Kezar reckons he’d be either dead or disfigured if he hadn’t spent all summer fast-drawing his Glock. He bases that conclusion on a sobering calculus of time and distance—the two seconds required for a Wyoming grizzly bear to cover 20 yards—and the fact that Kezar somehow managed to get off seven shots from his 10mm in that span of time as he was staring terror in the face, as the bear closed fast and he backpedaled into the unknown.