Public choice theory is even more useful in understanding foreign policy

Monday, January 16th, 2023

Public choice theory was developed to understand domestic politics, but Richard Hanania argues — in Public Choice Theory and the Illusion of Grand Strategy — that public choice is actually even more useful in understanding foreign policy:

First, national defence is “the quintessential public good” in that the taxpayers who pay for “national security” compose a diffuse interest group, while those who profit from it form concentrated interests. This calls into question the assumption that American national security is directly proportional to its military spending (America spends more on defence than most of the rest of the world combined).

Second, the public is ignorant of foreign affairs, so those who control the flow of information have outsized influence. Even politicians and bureaucrats are ignorant: most(!) counterterrorism officials, including the chief of the FBI’s national security branch and a seven-term congressman then serving as vice chairman of a House intelligence subcommittee, did not know the difference between Sunnis and Shiites. The same favoured interests exert influence at all levels of society, including at the top: intelligence agencies are discounted when they contradict what leaders think they know through personal contacts and publicly available material, as was the case in the run-up to the Iraq War.

Third, unlike policy areas like education, it is legitimate for governments to declare certain foreign affairs information classified, i.e. the public has no right to know. At the same time, top officials routinely leak classified information to the press, so they can be extremely selective in shaping public knowledge.

Fourth, it’s difficult to know who possesses genuine expertise, so foreign policy discourse is prone to capture by special interests. History runs only once — cause and effect in foreign policy are hard to generalise into measurable forecasts, and as demonstrated by Tetlock’s superforecasters, geopolitical experts are worse than informed laymen at predicting world events. Unlike those who have fought the tobacco companies that denied the harms of smoking, or the oil companies that denied global warming, the opponents of interventionists may never be able to muster evidence clear enough to win against those in power and the special interests backing them.

Hanania’s special interest groups are the usual suspects: government contractors (weapons manufacturers [1]), the national security establishment (the Pentagon [2]), and foreign governments [3] (not limited to electoral intervention).

What doesn’t have comparable influence, contrary to what many IR theorists argue, is business interests in general. Unlike weapons manufacturers, other business interests have to overcome the collective action problem, especially when some businesses benefit from protectionism.

None of the precursors were in place

Sunday, January 15th, 2023

Once you understand how the Industrial Revolution came about, it’s easy to see why there was no Roman Industrial Revolution — none of the precursors were in place:

The Romans made some use of mineral coal as a heating element or fuel, but it was decidedly secondary to their use of wood and where necessary charcoal. The Romans used rotational energy via watermills to mill grain, but not to spin thread. Even if they had the spinning wheel (and they didn’t; they’re still spinning with drop spindles), the standard Mediterranean period loom, the warp-weighted loom, was roughly an order of magnitude less efficient than the flying shuttle loom, so the Roman economy couldn’t have handled all of the thread the spinning wheel could produce.

And of course the Romans had put functionally no effort into figuring out how to make efficient pressure-cylinders, because they had absolutely no use for them. Remember that by the time Newcomen is designing his steam engine, the kings and parliaments of Europe have been effectively obsessed with who could build the best pressure-cylinder (and then plug it at one end, making a cannon) for three centuries because success in war depended in part on having the best cannon. If you had given the Romans the designs for a Newcomen steam engine, they couldn’t have built it without developing whole new technologies for the purpose (or casting every part in bronze, which introduces its own problems) and then wouldn’t have had any profitable use to put it to.

All of which is why simple graphs of things like ‘global historical GDP’ can be a bit deceptive: there’s a lot of particularity beneath the basic statistics of production because technologies are contingent and path dependent.

The Industrial Revolution happened largely in one place

Saturday, January 14th, 2023

The Industrial Revolution was more than simply an increase in economic production, Bret Devereaux explains:

Modest increases in economic production are, after all, possible in agrarian economies. Instead, the industrial revolution was about accessing entirely new sources of energy for broad use in the economy, thus drastically increasing the amount of power available for human use. The industrial revolution thus represents not merely a change in quantity, but a change in kind from what we might call an ‘organic’ economy to a ‘mineral’ economy. Consequently, I’d argue, the industrial revolution represents probably just the second time in human history that as a species we’ve undergone a radical change in our production; the first being the development of agriculture in the Neolithic period.

However, unlike farming which developed independently in many places at different times, the industrial revolution happened largely in one place, once and then spread out from there, largely because the world of the 1700s AD was much more interconnected than the world of c. 12,000BP (‘before present,’ a marker we sometimes use for the very deep past). Consequently while we have many examples of the emergence of farming and from there the development of complex agrarian economies, we really only have one ‘pristine’ example of an industrial revolution. It’s possible that it could have occurred with different technologies and resources, though I have to admit I haven’t seen a plausible alternative development that doesn’t just take the same technologies and systems and put them somewhere else.

[…]

Fundamentally this is a story about coal, steam engines, textile manufacture and above all the harnessing of a new source of energy in the economy. That’s not the whole story, by any means, but it is one of the most important through-lines and will serve to demonstrate the point.

The specificity matters here because each innovation in the chain required not merely the discovery of the principle, but also the design and an economically viable use-case to all line up in order to have impact.

[…]

So what was needed was not merely the idea of using steam, but also a design which could actually function in a specific use case. In practice that meant both a design that was far more efficient (though still wildly inefficient) and a use case that could tolerate the inevitable inadequacies of the 1.0 version of the device. The first design to actually square this circle was Thomas Newcomen’s atmospheric steam engine (1712).

[…]

Now that design would be iterated on subsequently to produce smoother, more powerful and more efficient engines, but for that iteration to happen someone needs to be using it, meaning there needs to be a use-case for repetitive motion at modest-but-significant power in an environment where fuel is extremely cheap so that the inefficiency of the engine didn’t make it a worse option than simply having a whole bunch of burly fellows (or draft animals) do the job. As we’ll see, this was a use-case that didn’t really exist in the ancient world and indeed existed almost nowhere but Britain even in the period where it worked.

But fortunately for Newcomen the use case did exist at that moment: pumping water out of coal mines. Of course a mine that runs below the local water-table (as most do) is going to naturally fill with water which has to be pumped out to enable further mining. Traditionally this was done with muscle power, but as mines get deeper the power needed to pump out the water increases (because you need enough power to lift all of the water in the pump system in each movement); cheaper and more effective pumping mechanisms were thus very desirable for mining. But the incentive here can’t just be any sort of mining, it has to be coal mining because of the inefficiency problem: coal (a fuel you can run the engine on) is of course going to be very cheap and abundant directly above the mine where it is being produced and for the atmospheric engine to make sense as an investment the fuel must be very cheap indeed. It would not have made economic sense to use an atmospheric steam engine over simply adding more muscle if you were mining, say, iron or gold and had to ship the fuel in; transportation costs for bulk goods in the pre-railroad world were high. And of course trying to run your atmospheric engine off of local timber would only work for a very little while before the trees you needed were quite far away.
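
To make the pumping arithmetic concrete, here is a minimal back-of-envelope sketch (my own illustrative numbers, not Devereaux’s) of the continuous power needed to lift water out of a mine, using the standard hydraulic formula P = ρgQh:

```python
# Hydraulic power needed to lift water out of a flooded mine:
# P = rho * g * Q * h  (water density * gravity * volumetric flow * lift height).
# Flow rate and depths below are illustrative assumptions, not historical figures.

RHO = 1000.0   # kg/m^3, density of water
G = 9.81       # m/s^2, gravitational acceleration

def pump_power_watts(flow_m3_per_hour: float, depth_m: float) -> float:
    """Mechanical power (watts) to lift the given flow of water from the given depth."""
    flow_m3_per_s = flow_m3_per_hour / 3600.0
    return RHO * G * flow_m3_per_s * depth_m

for depth in (20, 50, 100):  # metres
    watts = pump_power_watts(flow_m3_per_hour=10.0, depth_m=depth)
    print(f"{depth:>3} m deep: {watts/746:.1f} hp continuous")  # 746 W per horsepower
```

The point is the scaling: doubling the depth doubles the continuous power required, which quickly outgrows what a team of burly fellows or draft animals can comfortably supply around the clock, and that is exactly the niche the inefficient but coal-fed Newcomen engine filled.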

But that in turn requires you to have large coal mines, mining lots of coal deep under ground. Which in turn demands that your society has some sort of bulk use for coal. But just as the Newcomen Engine needed to out-compete ‘more muscle’ to get a foothold, coal has its own competitor: wood and charcoal. There is scattered evidence for limited use of coal as a fuel from the ancient period in many places in the world, but there needs to be a lot of demand to push mines deep to create the demand for pumping. In this regard, the situation on Great Britain (the island, specifically) was almost ideal: most of Great Britain’s forests seem to have been cleared for agriculture in antiquity; by 1000 only about 15% of England (as a geographic sub-unit of the island) was forested, a figure which continued to decline rapidly in the centuries that followed (down to a low of around 5%). Consequently wood as a heat fuel was scarce and so beginning in the 16th century we see a marked shift over to coal as a heating fuel for things like cooking and home heating. Fortunately for the residents of Great Britain there were surface coal seams in abundance making the transition relatively easy; once these were exhausted deep mining followed which at last by the late 1600s created the demand for coal-powered pumps finally answered effectively in 1712 by Newcomen: a demand for engines to power pumps in an environment where fuel efficiency mattered little.

With a use-case in place, these early steam engines continue to be refined to make them more powerful, more fuel efficient and capable of producing smooth rotational motion out of their initially jerky reciprocal motions, culminating in James Watt’s steam engine in 1776. But so far all we’ve done is gotten very good at pumping out coal mines – that has in turn created steam engines that are now fuel efficient enough to be set up in places that are not coal mines, but we still need something for those engines to do to encourage further development. In particular we need a part of the economy where getting a lot of rotational motion is the major production bottleneck.

What could be a more interesting question?

Friday, January 13th, 2023

There are people who are really trying to either kill or at least studiously ignore all of the progress in genomics, Stephen Hsu reports — from first-hand experience:

My research group solved height as a phenotype. Give us the DNA of an individual with no other information other than that this person lived in a decent environment—wasn’t starved as a child or anything like that—and we can predict that person’s height with a standard error of a few centimeters. Just from the DNA. That’s a tour de force.

Then you might say, “Well, gee, I heard that in twin studies, the correlation between twins in IQ is almost as high as their correlation in height. I read it in some book in my psychology class 20 years ago before the textbooks were rewritten. Why can’t you guys predict someone’s IQ score based on their DNA alone?”

Well, according to all the mathematical modeling and simulations we’ve done, we need somewhat more training data to build the machine learning algorithms to do that. But it’s not impossible. In fact, we predicted that if you have about a million genomes and the cognitive scores of those million people, you could build a predictor with a standard error of plus or minus 10 IQ points. So you can ask, “Well, since you guys showed you could do it for height, and since there are 30, or 40, or 50, different disease conditions that we now have decent genetic predictors for, why isn’t there one for IQ?”

Well, the answer is there’s zero funding. There’s no NIH, NSF, or any agency that would take on a proposal saying, “Give me X million dollars to genotype these people, and also measure their cognitive ability or get them to report their SAT scores to me.” Zero funding for that. And some people get very, very aggressive upon learning that you’re interested in that kind of thing, and will start calling you a racist, or they’ll start attacking you. And I’m not making this up, because it actually happened to me.

What could be a more interesting question? Wow, the human brain—that’s what differentiates us from the rest of the animal species on this planet. Well, to what extent is brain development controlled by DNA? Wouldn’t it be amazing if you could actually predict individual variation in intelligence from DNA just as we can with height now? Shouldn’t that be a high priority for scientific discovery? Isn’t this important for aging, because so many people undergo cognitive decline as they age? There are many, many reasons why this subject should be studied. But there’s effectively zero funding for it.
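
As an aside on what “building a predictor” means mechanically: these genomic predictors are essentially sparse linear models over a large panel of SNPs, fit with penalized regression (Hsu’s published height work used L1-penalized, compressed-sensing-style methods). Below is a minimal, purely illustrative sketch on synthetic data; real predictors are trained on biobank-scale cohorts with far more markers and careful quality control and validation.

```python
# Minimal sketch of a polygenic predictor: a sparse linear model mapping
# genotypes (0/1/2 minor-allele counts per SNP) to a phenotype.
# Everything here is synthetic and illustrative, not a real analysis pipeline.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_snps, n_causal = 5000, 2000, 100

X = rng.binomial(2, 0.3, size=(n_people, n_snps)).astype(float)   # genotype matrix
beta = np.zeros(n_snps)
beta[rng.choice(n_snps, n_causal, replace=False)] = rng.normal(0, 1, n_causal)
genetic_value = X @ beta
y = genetic_value + rng.normal(0, np.std(genetic_value), n_people) # ~50% heritability

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = Lasso(alpha=0.05).fit(X_tr, y_tr)          # L1 penalty -> sparse SNP weights
resid = y_te - model.predict(X_te)
print("out-of-sample prediction standard error:", float(resid.std()))
```

The bottleneck Hsu describes is not the algorithm but the labeled data: on his account, the same machinery that already works for height and for dozens of disease risks would work for cognitive scores given a comparable training set.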

The internet wants to be fragmented

Thursday, January 12th, 2023

“You know,” Noah Smith quipped, “fifteen years ago, the internet was an escape from the real world. Now the real world is an escape from the internet.”

When I first got access to the internet as a kid, the very first thing I did was to find people who liked the same things I liked — science fiction novels and TV shows, Dungeons and Dragons, and so on. In the early days, that was what you did when you got online — you found your people, whether on Usenet or IRC or Web forums or MUSHes and MUDs. Real life was where you had to interact with a bunch of people who rubbed you the wrong way — the coworker who didn’t like your politics, the parents who nagged you to get a real job, the popular kids with their fancy cars. The internet was where you could just go be a dork with other dorks, whether you were an anime fan or a libertarian gun nut or a lonely Christian 40-something or a gay kid who was still in the closet. Community was the escape hatch.

Then in the 2010s, the internet changed. It wasn’t just the smartphone, though that did enable it. What changed is that internet interaction increasingly started to revolve around a small number of extremely centralized social media platforms: Facebook, Twitter, and later Instagram.

From a business perspective, this centralization was a natural extension of the early internet — people were getting more connected, so just connect them even more.

[…]

Putting everyone in the world in touch through a single network is what we did with the phone system, and everyone knows that the value of a network scales as the square of the number of users. So centralizing the whole world’s social interaction on two or three platforms would print loads of money while also making for a happier, more connected world.
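
That square law is Metcalfe’s law, and the back-of-envelope case for centralization follows directly from it. A tiny sketch with purely illustrative numbers:

```python
# Metcalfe's law: value of a network with n users ~ k * n^2 (k is an arbitrary constant).
# Compare one centralized platform to the same users split into many communities.
def metcalfe_value(n_users: float, k: float = 1e-9) -> float:
    return k * n_users ** 2

total_users = 300_000_000
one_platform = metcalfe_value(total_users)
thousand_forums = 1000 * metcalfe_value(total_users / 1000)

print(f"one platform:     {one_platform:,.0f}")
print(f"1000 communities: {thousand_forums:,.0f}")  # 1000x smaller "value" on paper
```

On paper, one platform holding everyone looks a thousand times more valuable than the same users split into a thousand communities, which is the arithmetic Smith says the platforms ran; the rest of the piece is about everything that arithmetic leaves out.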

[…]

It started with the Facebook feed. On the old internet, you could show a different side of yourself in every forum or chat room; but on your Facebook feed, you had to be the same person to everyone you knew. When social unrest broke out in the mid-2010s this got even worse — you had to watch your liberal friends and your conservative friends go at it in the comments of your posts, or theirs. Friendships and even family bonds were destroyed in those comments.

[…]

The early 2010s on Twitter were defined by fights over toxicity and harassment versus early-internet ideals of free speech. But after 2016 those fights no longer mattered, because everyone on the platform simply adopted the same patterns of toxicity and harassment that the extremist trolls had pioneered.

[…]

Why did this happen to the centralized internet when it hadn’t happened to the decentralized internet of previous decades? In fact, there were always Nazis around, and communists, and all the other toxic trolls and crazies. But they were only ever an annoyance, because if a community didn’t like those people, the moderators would just ban them. Even normal people got banned from forums where their personalities didn’t fit; even I got banned once or twice. It happened. You moved on and you found someone else to talk to.

Community moderation works. This was the overwhelming lesson of the early internet. It works because it mirrors the social interaction of real life, where social groups exclude people who don’t fit in. And it works because it distributes the task of policing the internet to a vast number of volunteers, who provide the free labor of keeping forums fun, because to them maintaining a community is a labor of love. And it works because if you don’t like the forum you’re in — if the mods are being too harsh, or if they’re being too lenient and the community has been taken over by trolls — you just walk away and find another forum. In the words of the great Albert O. Hirschman, you always have the option to use “exit”.

[…]

They tinkered at the edges of the platform, but never touched their killer feature, the quote-tweet, which Twitter’s head of product called “the dunk mechanism.” Because dunks were the business model — if you don’t believe me, you can check out the many research papers showing that toxicity and outrage drive Twitter engagement.

[…]

Humanity does not want to be a global hive mind. We are not rational Bayesian updaters who will eventually reach agreement; when we receive the same information, it tends to polarize us rather than unite us. Getting screamed at and insulted by people who disagree with you doesn’t take you out of your filter bubble — it makes you retreat back inside your bubble and reject the ideas of whoever is screaming at you. No one ever changed their mind from being dunked on; instead they all just doubled down and dunked harder. The hatred and toxicity of Twitter at times felt like the dying screams of human individuality, being crushed to death by the hive mind’s constant demands for us to agree with more people than we ever evolved to agree with.

I love to quote-tweet approvingly. I suppose that’s one of my eccentricities.

What are the skills that you really want out of a college graduate?

Wednesday, January 11th, 2023

Stephen Hsu was the most senior administrator who reviewed all the tenure and promotion cases at his university:

We have 50,000 students here. It’s one of the biggest universities in the United States. Each year, there are about 150 faculty who are coming up for promotion from associate professor to full professor or assistant to associate with tenure. And there are sometimes situations where you know what the system wants you to do with a particular person, but there’s a question of your personal integrity—whether you want to actually uphold the standards of the institution in those circumstances.

It’s funny, because the president who hired me actually wanted me to do that. She wanted someone who was very rigorous to control this process. But I knew I was gradually making enemies. Sometimes there’s a popular person, and maybe there’s some diversity goal or gender equality goal. So you have this person maybe who hasn’t done that well with their research, or hasn’t been well-funded with external grants, or maybe their teaching evaluations aren’t that great, but some people really want them promoted. And if you impose the regular standard and they don’t get promoted, you’ve made a lot of enemies.

So if I just thought to myself, “I’m not going to be at Michigan State 10 years from now—let them handle the problems if all these people who are not so good get promoted. Let them deal with it,” that would be the smart thing if I were a careerist or self-interested person. Don’t make waves, just put your finger in the wind and say: “Which way is the wind blowing? I’ll just go with that.” But I didn’t do that. Because I thought, “What’s the point of doing this job if you’re not going to do it right?” Now imagine how many congressmen are doing this, imagine how many have really deeply held principles that they’re trying to advance. Maybe it’s 10 percent? I don’t know, but it’s nowhere near 100 percent.

It’s the same in higher ed. There’s something called the Collegiate Learning Assessment (CLA). It’s a standardized test that was developed over the last 20 years, and it’s supposed to evaluate the skills that students learned during college. For less prestigious directional state universities this would be a very good tool, because the subset of graduates who did well on the CLA could get hired by General Motors or whatever with the same confidence as a kid from Harvard, the University of Michigan, or anywhere else. So there was interest in building something like the CLA.

In order not to do it in a vacuum, the people who were developing it went to all these big corporations and said “Well, what are the skills that you really want out of a college graduate?” And not surprisingly, they wanted things like being able to read an article in The Economist and write a good summary. Or to look at graphs and make some inferences. Nothing ivory tower—it was all very reasonable, practical stuff. And so they commissioned this huge study by RAND. Twenty universities participated, including MIT, Michigan, some historically black colleges, some directional state universities—a huge spectrum covering all of American higher education.

They found that graduating students’ CLA scores were very highly correlated with their incoming SAT scores. Well, if you know anything about psychometrics, it’s no surprise that the delta between your freshman-year and senior-year CLA scores is minimal. So what are kids buying when they go to college for four years? Are they getting skills that GM or McKinsey want, or are they just repackaging themselves?
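
The psychometric point is that if senior-year CLA scores are almost entirely predictable from incoming SAT scores, then the measured “value added” of four years of college is whatever small residual is left. A minimal simulation of that situation (my own made-up numbers, not the RAND data) shows the pattern Hsu is describing:

```python
# If senior-year scores mostly reflect incoming ability plus noise, the SAT
# already predicts the CLA and the four-year "gain" is small.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
ability = rng.normal(0, 1, n)                       # latent general ability (SD units)
sat = ability + rng.normal(0, 0.3, n)               # incoming SAT, a noisy measure of ability
cla_freshman = ability + rng.normal(0, 0.3, n)      # CLA taken in freshman year
cla_senior = ability + 0.1 + rng.normal(0, 0.3, n)  # assume a small 0.1 SD average gain

print("SAT vs senior-CLA correlation:", round(float(np.corrcoef(sat, cla_senior)[0, 1]), 2))
print("mean freshman-to-senior gain:", round(float((cla_senior - cla_freshman).mean()), 2), "SD")
```

Under assumptions like these you get a very high SAT-CLA correlation and a small average gain, so the senior score tells an employer little beyond what the entrance exam already did.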

I showed the results of this RAND CLA study to my colleagues, the senior administrators at Michigan State University, and I tried to get them to understand: “Guys, do you realize that maybe we’re not doing what we think we’re doing on this campus? You probably go out and tell alums and donors, moms and dads that we’re building skills for these kids at Michigan State, so they can be great employees of Ford Motor Company and Andersen Consulting when they get out. But the data doesn’t actually say that we do that.” I’m not talking about specialist majors like accounting or engineering, where we can see the kids are coming out with skills they didn’t enter with. I’m talking about generalist learning and “critical thinking” that schools say they teach, but the CLA says otherwise.

I have all my emails from when I was in that job, so I can tell you exactly how much intellectual curiosity and updating of priors there was among these vice presidents and higher at major Big 10 universities. Now, they could have come back and said, “Steve, I don’t believe this RAND study. My son Johnny learned a lot when he was at Illinois,” or something. They could have come back and contested the findings. Did any of them contest the findings with me? Zero.

Did any of them care about what was revealed about the business that we’re actually in, about what is actually going on on our campus? One or two well-meaning VPs emailed me saying “Wow, that’s incredible. I never would have thought…” One of the women who emailed me back had a college-aged kid, and this actually impacted some decisions that were going on in her family at the time.

But overall there was very little concern about the findings; there wasn’t even much pushback denying them. Those are the people running your institutions of higher education. I discussed these findings with lots of other top administrators at other universities and very few people care. They’ve got their career, they’re just doing their thing.

The group was elitist, but it was also meritocratic

Tuesday, January 10th, 2023

Sputnik’s success created an overwhelming sense of fear that permeated all levels of U.S. society, including the scientific establishment:

As John Wheeler, a theoretical physicist who popularized the term “black hole,” would later tell an interviewer: “It is hard to reconstruct now the sense of doom when we were on the ground and Sputnik was up in the sky.”

Back on the ground, the event spurred a mobilization of American scientists unseen since the war. Six weeks after the launch of Sputnik, President Dwight Eisenhower revived the President’s Science Advisory Committee (PSAC). It was a group of 16 scientists who reported directly to him, granting them an unprecedented amount of influence and power. Twelve weeks after Sputnik, the Department of Defense launched the Advanced Research Projects Agency (ARPA), which was later responsible for the development of the internet. Fifteen months after Sputnik, the Office of the Director of Defense Research and Engineering (ODDRE) was launched to oversee all defense research. A 36-year-old physicist who had worked on the Manhattan Project, Herb York, was named to head the ODDRE. There, he reported directly to the president and was given total authority over all defense research spending.

It was the beginning of a war for technological supremacy. Everyone involved understood that in the nuclear age, the stakes were existential.

It was not the first time the U.S. government had mobilized the country’s leading scientists. World War II had come to be known as “the physicists’ war.” It was physicists who developed proximity fuzes and the radar systems that rendered previously invisible enemy ships and planes visible, enabling them to be targeted and destroyed, and it was physicists who developed the atomic bombs that ended the war. The prestige conferred by their success during the war positioned physicists at the top of the scientific hierarchy. With the members of the Manhattan Project now aging, getting the smartest young physicists to work on military problems was of intense interest to York and the ODDRE.

Physicists saw the post-Sputnik era as an opportunity to do well for themselves. Many academic physicists more than doubled their salaries working on consulting projects for the DOD during the summer. A source of frustration to the physicists was that these consulting projects were awarded through defense contractors, who were making twice as much as the physicists themselves. A few physicists based at the University of California Berkeley decided to cut out the middleman and form a company they named Theoretical Physics Incorporated.

Word of the nascent company spread quickly. The U.S.’s elite physics community consisted of a small group of people who all went to the same small number of graduate programs and were faculty members at the same small number of universities. These ties were tightened during the war, when many of those physicists worked closely together on the Manhattan Project and at MIT’s Rad Lab.

Charles Townes, a Columbia University physics professor who would later win a Nobel Prize for his role in inventing the laser, was working for the Institute for Defense Analysis (IDA) at the time and reached out to York when he learned of the proposed company. York knew many of the physicists personally and immediately approved $250,000 of funding for the group. Townes met with the founders of the company in Los Alamos, where they were working on nuclear-rocket research. Appealing to their patriotism, he convinced them to make their project a department of IDA.

A short while later the group met in Washington D.C., where they fleshed out their new organization. They came up with a list of the top people they would like to work with and invited them to Washington for a presentation. Around 80 percent of the people invited joined the group; they were all friends of the founders, and they were all high-level physicists. Seven of the first members, or roughly one-third of its initial membership, would go on to win the Nobel Prize. Other members, such as Freeman Dyson, who published foundational work on quantum field theory, were some of the most renowned physicists to never receive the Nobel.

The newly formed group was dubbed “Project Sunrise” by ARPA, but the group’s members disliked the name. The wife of one of the founders proposed the name JASON, after the Greek mythological hero who led the Argonauts on a quest for the golden fleece. The name stuck and JASON was founded in December 1959, with its members being dubbed “Jasons.”

The key to the JASON program was that it formalized a unique social fabric that already existed among elite U.S. physicists. The group was elitist, but it was also meritocratic. As a small, tight-knit community, many of the scientists who became involved in JASON had worked together before. It was a peer network that maintained strict standards for performance. With permission to select their own members, the Jasons were able to draw from those who they knew were able to meet the expectations of the group.

This expectation superseded existing credentials; Freeman Dyson never earned a PhD, but he possessed an exceptionally creative mind. Dyson became known for his involvement with Project Orion, which aimed to develop a starship design that would be powered through a series of atomic bombs, as well as his Dyson Sphere concept, a hypothetical megastructure that completely envelops a star and captures its energy.

Another Jason was Nick Christofilos, an engineer who developed particle accelerator concepts in his spare time when he wasn’t working at an elevator maintenance business in Greece. Christofilos wrote to physicists in the U.S. about his ideas, but was initially ignored. He was later offered a job at an American research laboratory when physicists found that some of the ideas in his letters pre-dated recent advances in particle accelerator design. Dyson’s and Christofilos’s lack of formal qualifications would preclude an academic research career today, but the scientific community at the time was far more open-minded.

JASON was founded near the peak of what became known as the military-industrial complex. When President Eisenhower coined this term during his farewell address in 1961, military spending accounted for nine percent of the U.S. economy and 52 percent of the federal budget; 44 percent of the defense budget was being spent on weapons systems.

But the post-Sputnik era entailed a golden age for scientific funding as well. Federal money going into basic research tripled from 1960 to 1968, and research spending more than doubled overall. Meanwhile, the number of doctorates awarded in physics doubled. Again, meritocratic elitism dominated: over half of the funding went to 21 universities, and these universities awarded half of the doctorates.

With a seemingly unlimited budget, the U.S. military leadership had started getting some wild ideas. One general insisted a moon base would be required to gain the ultimate high ground. Project Iceworm proposed to build a network of mobile nuclear missile launchers under the Greenland ice sheet. The U.S. Air Force sought a nuclear-powered supersonic bomber under Project WS-125 that could take off from U.S. soil and drop hydrogen bombs anywhere in the world. There were many similar ideas and each military branch produced analyses showing that not only were the proposed weapons technically feasible, but they were also essential to winning a war against the Soviet Union.

Prior to joining the Jasons, some of its scientists had made radical political statements that could make them vulnerable to having their analysis discredited. Fortunately, JASON’s patrons were willing to take a risk and overlook political offenses in order to ensure that the right people were included in the group. Foreseeing the potential political trap, Townes proposed a group of senior scientific advisers, about 75 percent of whom were well-known conservative hawks. Among this group was Edward Teller, known as the “father of the hydrogen bomb.” This senior layer could act as a political shield of sorts in case opponents attempted to politically tarnish JASON members.

Every spring, the Jasons would meet in Washington D.C. to receive classified briefings about the most important problems facing the U.S. military, then decide for themselves what they wanted to study. JASON’s mandate was to prevent “technological surprise,” but no one at the Pentagon presumed to tell them how to do it.

In July, the group would reconvene for a six-week “study session,” initially alternating yearly between the east and west coasts. Members later recalled these as idyllic times for the Jasons, with the group becoming like an extended family. The Jasons rented homes near each other. Wives became friends, children grew up like cousins, and the community put on backyard plays at an annual Fourth of July party. But however idyllic their off hours, the physicists’ workday revolved around contemplating the end of the world. Questions concerning fighting and winning a nuclear war were paramount. The ideas the Jasons were studying approached the level of what had previously been science fiction.

Some of the first JASON studies focused on ARPA’s Defender missile defense program. The Jasons furthered ideas for detecting incoming nuclear attacks through the infrared signatures of missiles, applied newly discovered astronomical techniques to distinguish nuclear-armed missiles from decoys, and worked on the concept of shooting what were essentially directed lightning bolts through the atmosphere to destroy incoming nuclear missiles.

The lightning bolt idea, known today as directed energy weapons, came from Christofilos, who was described by an ARPA historian as mesmerizing JASON physicists with the “kind of ideas that nobody else had.” Some of his other projects included a fusion machine called Astron, a high-altitude nuclear explosion test codenamed Operation Argus that was dubbed the “greatest scientific experiment ever conducted,” and explorations of a potential U.S. “space fleet.”

The Jasons’ analysis on the effects of nuclear explosions in the upper atmosphere, water, and underground, as well as methods of detecting these explosions, was credited with being critical to the U.S. government’s decision to sign the Limited Test Ban Treaty with the Soviet Union. Because of their analysis, the U.S. government felt confident it could verify treaty compliance; the treaty resulted in a large decline in the concentration of radioactive particles in the atmosphere.

The success of JASON over its first five years increased its influence within the U.S. military and spurred attempts by U.S. allies to copy the program. Britain tried for years to create a version of JASON, even enlisting the help of JASON’s leadership. But the effort failed: British physicists simply did not seem to desire involvement. Earlier attempts by British leaders like Winston Churchill to create a British MIT had run into the same problems.

The difference was not ability, but culture. American physicists did not have a disdain for the applied sciences, unlike their European peers. They were comfortable working as advisors on military projects and were employed by institutions that were dependent on DOD funding. Over 20 percent of Caltech’s budget in 1964 came from the DOD, and it was only the 15th largest recipient of funding; MIT was first and received twelve times as much money. The U.S. military and scientific elite were enmeshed in a way that had no parallel in the rest of the world then or now.

They are very, very careerist people

Monday, January 9th, 2023

Stephen Hsu worked for a time as a vice president of a university and notes that administrators are a different group:

The top level administrators at universities are usually drawn from the faculty, or from faculty at other universities. After being a top level administrator at a Big 10 university, and meeting provosts and presidents at the other top universities, I have a pretty good feel for this particular collection of people.

You can imagine what it is that makes someone who’s already a tenured professor in biochemistry decide they want to take on this huge amount of responsibility and maybe even shut down their own research program. They are very, very careerist people. And that is a huge problem, because incentives are heavily misaligned.

The incentive for me as a senior administrator is not to make waves and keep everything kind of calm. Calm down the crazy professor who’s doing stuff, assuage the students that are protesting, make the donors happy, make the board of trustees happy. I found that the people who were in the role so they could advance their career, versus those trying to advance the interests of the institution, were very different. There were times when I felt like I had to do something very dangerous for me career-wise, but it was absolutely essential for the mission of the university. I had to do that repeatedly.

And I told the president who hired me, “I don’t know how long I’m going to last in this job, because I’m going to do the right thing. If I do the right thing and I’m bounced out, that’s fine. I don’t care.” But most people are not like that.

In economics, there’s something called the principal-agent problem. Let’s say you hire a CEO to manage your company. Unless his compensation is completely determined by some long-dated stock options or something, his interests are not aligned with the long-term growth for your company. He can have a great quarter by shipping all your manufacturing off to China, have a great few quarters, and get a huge bonus. Even if, on a long timescale, it’s really bad for your bottom line.
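
A toy version of that numeric logic (my own illustrative figures, not Hsu’s) makes the misalignment explicit:

```python
# Toy principal-agent example from the quote: a CEO paid on near-term profit
# can prefer a move that boosts the next few quarters but destroys long-run value.
# All numbers are made up for illustration.
quarterly_boost = 5.0      # extra profit per quarter ($M) from offshoring manufacturing
boost_quarters = 4         # how long the boost lasts
long_run_damage = 100.0    # eventual loss of firm value ($M)
bonus_rate = 0.10          # CEO captures 10% of the near-term extra profit
ceo_horizon_quarters = 4   # CEO expects to be judged (and paid) on the next year

ceo_payoff = bonus_rate * quarterly_boost * min(boost_quarters, ceo_horizon_quarters)
owner_payoff = quarterly_boost * boost_quarters - long_run_damage

print(f"CEO gains   ${ceo_payoff:.1f}M")     # positive, so the CEO does it
print(f"Owners lose ${-owner_payoff:.1f}M")  # negative for the firm overall
```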

So there’s a principal-agent problem here. Anytime you give centralized power to somebody, you have to be sure that their incentives — or their personal integrity — are aligned with what you want them to promote at the institution. And generally, it’s not well done in the universities right now.

It’s not like it used to be that, “Oh, if Joe or Jane is going to become university president, you can bet that their highest value is higher education and truth, that’s the American way.” It was probably never true. But they don’t claw back your compensation as a president of the university if it later turns out that you really screwed something up. You know, they don’t really even do that with CEOs.

This is James Daunt’s super power

Sunday, January 8th, 2023

Ted Gioia recently visited a Barnes & Noble store for the first time since the pandemic, saw a lot of interesting books, and bought a couple:

I plan to go back again.

But I’m not the only one.

The turnaround has delivered remarkable results. Barnes & Noble opened 16 new bookstores in 2022, and now will double that pace of openings in 2023. In a year of collapsing digital platforms, this 136-year-old purveyor of print media is enjoying boom times.

How did they fix things?

It’s amazing how much difference a new boss can make.

I’ve seen that firsthand so many times. I now have a rule of thumb: “There is no substitute for good decisions at the top—and no remedy for stupid ones.”

It’s really that simple. When the CEO makes foolish blunders, all the wisdom and hard work of everyone else in the company is insufficient to compensate. You only fix these problems by starting at the top.

In the case of Barnes & Noble, the new boss was named James Daunt. And he had already turned around Waterstones, a struggling book retailing chain in Britain.

Back when he was 26, Daunt had started out running a single bookstore in London—and it was a beautiful store. He had to borrow the money to do it, but he wanted a store that was a showplace for books. And he succeeded despite breaking all the rules.

For a start, he refused to discount his books, despite intense price competition in the market. If you asked him why, he had a simple answer: “I don’t think books are overpriced.”

After taking over Waterstones, he did something similar. He stopped all the “buy-two-books-and-get-one-free” promotions. He had a simple explanation for this too: When you give something away for free, it devalues it.

But the most amazing thing Daunt did at Waterstones was this: He refused to take any promotional money from publishers.

This seemed stark raving mad. But Daunt had a reason. Publishers give you promotional money in exchange for purchase commitments and prominent placement—but once you take the cash, you’ve made your deal with the devil. You now must put stacks of the promoted books in the most visible parts of the store, and sell them like they’re the holy scripture of some new cure-all creed.

Those promoted books are the first things you see when you walk by the window. They welcome you when you step inside the front door. They wink at you again next to the checkout counter.

Leaked emails show ridiculous deals. Publishers give discounts and thousands of dollars in marketing support, but the store must buy a boatload of copies—even if the book sucks and demand is weak—and push them as aggressively as possible.

Publishers do this in order to force-feed a book on to the bestseller list, using the brute force of marketing money to drive sales. If you flog that bad boy ruthlessly enough, it might compensate for the inferiority of the book itself. Booksellers, for their part, sweep up the promo cash, and maybe even get a discount that allows them to under-price Amazon.

Everybody wins. Except maybe the reader.

Daunt refused to play this game. He wanted to put the best books in the window. He wanted to display the most exciting books by the front door. Even more amazing, he let the people working in the stores make these decisions.

This is James Daunt’s super power: He loves books.

“Staff are now in control of their own shops,” he explained. “Hopefully they’re enjoying their work more. They’re creating something very different in each store.”

This crazy strategy proved so successful at Waterstones that returns fell almost to zero—97% of the books placed on the shelves were purchased by customers. That’s an amazing figure in the book business.

On the basis of this success, Daunt was put in charge of Barnes & Noble in August 2019.

I almost never need a new book right now, so it feels wrong to pay full price, when I could so easily “get the second marshmallow” by waiting — but I must admit that I enjoy browsing physical books.

What always struck me about bookstores was how random the inventory seemed, especially in a section like Sci-Fi and Fantasy, where you’d find books two and five of a nine-part series and no guidance as to where to start in the genre.

If you sense that NSF or NIH have a view on something, it’s best not to fight city hall

Saturday, January 7th, 2023

Stephen Hsu gives an example of how politics constrains the scientific process:

This individual is one of the most highly decorated, well-known climate simulators in the world. To give you his history, he did a PhD in general relativity in the UK and then decided he wanted to do something else, because he realized that even though general relativity was interesting, he didn’t feel like he was going to have a lot of impact on society. So he got involved in meteorology and climate modeling and became one of the most well known climate modelers in the world in terms of prizes and commendations. He’s been a co-author on all the IPCC reports going back multiple decades. So he’s a very well-known guy. But he was one of the authors of a paper in which he made the point that climate models are still far from perfect.

To do a really good job, you need to have a small basic cell size, one small enough to capture the features being modeled inside the simulation. The required cell size is actually quite small because of all kinds of nonlinear phenomena: turbulence, convection, the transport of heat and moisture, and everything else that goes into the making of weather and climate.

And so he made this point that we’re nowhere near actually being able to properly simulate the physics of these very important features. It turns out that the transport of water vapor, which is related to the formation of clouds, is important. And it turns out low clouds mostly reflect sunlight and cool the climate, while high clouds mostly trap infrared radiation and have the opposite sign effect on climate change. So whether moisture in the atmosphere or additional carbon in the atmosphere causes more high cloud formation versus more low cloud formation is incredibly important, and it carries the whole day in these models.

In no way are these microphysics of cloud formation being modeled right now. And anybody who knows anything knows this. And the people who really understand physics and do climate modeling know this.

So he wrote a paper saying that governments are going to spend billions, maybe trillions of dollars on policy changes or geoengineering. If you’re trying to fix the climate change problem, can you at least spend a billion dollars on the supercomputers that we would need to really do a more definitive job forecasting climate change?
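
To see why the ask is a billion dollars of supercomputers rather than a software tweak: the usual back-of-envelope scaling (my gloss, not from the interview) is that shrinking the grid cell in a 3D atmospheric model multiplies the number of cells by the cube of the refinement factor and, through the timestep stability limit, adds roughly another factor on top, so compute cost grows roughly as the fourth power of resolution:

```python
# Rough cost scaling for an atmospheric model when you shrink the grid cell:
# cells ~ (1/dx)^3 in 3D, and the stable timestep shrinks ~ dx (CFL condition),
# so total work ~ (1/dx)^4. Illustrative only; real models add physics packages.
def relative_cost(cell_km: float, reference_km: float = 100.0) -> float:
    return (reference_km / cell_km) ** 4

for cell in (100, 25, 10, 1):  # km
    print(f"{cell:>4} km cells: ~{relative_cost(cell):,.0f}x the compute of 100 km cells")
```

Getting from the tens-of-kilometres cells of current global models down to the roughly kilometre scale where convection and cloud processes begin to be resolved is a factor of somewhere between 10^4 and 10^8 in compute, which is the kind of gap the paper was asking governments to fund.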

And so that paper he wrote was controversial because people in the community maybe knew he was right, but they didn’t want him talking about this. But as a scientist, I fully support what he’s trying to do. It’s intellectually honest. He’s asking for resources to be spent where they really will make a difference, not in some completely speculative area where we’re not quite sure what the consequences will be. This is clearly going to improve climate modeling and is clearly necessary to do accurate climate modeling. But the anecdote gives you a sense of how fraught science is when there are large scale social consequences. There are polarized interest groups interacting with science.

[…]

It was controversial because, in a way, he was airing some well known dirty laundry that all the experts knew about. But many of them would say it’s better to hide laundry for the greater good, because a bad guy—somebody who’s very anti-CO2 emissions reduction—could seize on this guy’s article and say “Look, the leading guy in your field says that you can’t actually do the simulations he wants, and yet you’re trying to shove some very precise policy goal down my throat. This guy’s revealing those numbers have literally no basis.” That would be an extreme version of the counter-utilization of my colleague’s work.

[…]

In my lifetime, the way science is conducted has changed radically, because now it’s accepted—particularly by younger scientists—that we are allowed to make ad hominem attacks on people based on what could be their entirely sincere scientific belief. That was not acceptable 20 or 30 years ago. If you walked into a department, even if it had something to do with the environment or human genetics or something like that, people were allowed to have their contrary opinion as long as the arguments they made were rational and supported by data. There was not a sense that you’re allowed to impute bad moral character to somebody based on some analytical argument that they’re making. It was not socially acceptable to do that. Now people are in danger of losing their jobs.

[…]

I could list a bunch of factors that I think contributed, and one of them is that scientists are under a lot of pressure to get money to fund their labs and pay their graduate students. If you sense that NSF or NIH have a view on something, it’s best not to fight city hall. It’s like fighting the Fed—you’re going to lose. So that enforces a certain kind of conformism.

[…]

As far as how science relates to the outside world, here’s the problem: for some people, when science agrees with their cherished political belief, they say “Hey, you know what? This is the Vulcan Science Academy, man. These guys know what they’re doing. They debated it, they looked at all the evidence, that’s a peer-reviewed paper, my friend—it was reviewed by peers. They’re real scientists.” When they like the results, they’re going to say that.

When they don’t like it, they say, “Oh, come on, those guys know they have to come to that conclusion or they’re going to lose their NIH grant. These scientists are paid a lot of money now and they’re just feathering their own nests, man. They don’t care about the truth. And by the way, papers in this field don’t replicate. Apparently, if you do a study where you look back at the most prominent papers over the last 10 years, and you check to see whether subsequent papers which were better powered, had better technology, and more sample size actually replicated, the replication rate was like 50 percent. So, you can throw half the papers that are published in top journals in the trash.”

As it turned and ran the ice axe fell out of his head

Friday, January 6th, 2023

Clint Adams was mountain goat hunting on Alaska’s Baranof Island in October with his friend, Matt Ericksen, his girlfriend, Melody Orozco, and their guide, when he heard the guide yell three words that nobody ever wants to hear in bear country:

“Oh, fuck. Run!”

By the time Adams realized what was happening, his guide was already running past him and reaching for the .375 H&H bolt-action rifle that was slung over his shoulder. Adams’ own rifle was strapped to his pack, and the only weapon at hand was the ice axe he’d been using to claw his way up the mountain. When the big boar chased after the guide and passed within arm’s reach of Adams, he took the ice axe and swung with both hands, burying the pointy end in the bear’s skull just behind its ear.

[…]

Adams then watched as the bear tackled the guide from behind, and the two rolled down to a flat spot below. The guide was on his back trying to shoulder the rifle as the eight- to nine-foot boar reared back on its hind legs. That’s when Adams saw that the axe was still lodged in the bear’s head.

Adams is 6’6” and weighs 285 pounds.

The impaled bear then reared up over the guide, who shouldered his rifle and fired a shot straight up into the air. Adams says he distinctly remembers seeing the muzzle blast ruffle the bear’s fur. The shot spooked the bear just enough for it to step back and hesitate. At this point, Ericksen drew the .357 revolver strapped to his chest and fired three shots at the bear through the brush.

The boar charged the guide again, and the guide leveled his rifle and shot a second time. Ericksen fired two more rounds from his pistol. Adams says they still don’t know if any of those shots even hit the bear, but they all kept screaming and eventually the bear ran off. They never saw the bear again, and although the guide reported the incident, Adams has no idea if the bear died or not. He did, however, get his ice axe back.

“After that second shot [from the guide], the bear looped down and got level with me about 30 yards away,” Adams says. “We’re making a ton of noise at that point, and it bluff charged once or twice. It took two steps forward, two steps back, and as it turned and ran the ice axe fell out of his head.”

[…]

Adams also says the whole experience opened his eyes to how gunshots help stop a charging bear. He says that because they were in dense brush in tight quarters, bear spray would have been useless, and he thinks that the muzzle blast from the guide’s rifle might have deterred the bear even more than the bullet.

“This might sound silly, but after going through that and seeing how the bear responded, I honestly would feel the most safe from a charging bear with a foghorn in my hand,” Adams says. “When I saw that .375 go off, it was not only the sound, but more so it was the air that hit the bear in the face. It was just amazing how that bear reacted when it got hit with the muzzle blast.”

He adds that, in his opinion, if you’re going to carry a pistol in bear country—which, of course, you should—your best bet would be to carry a 10mm Glock with a 19-round magazine and “make as many bangs as you can.”

Posturing is an important part of fighting. With that in mind, a compensated pistol might be especially effective.

Speaking of Glocks and bears:

Sam Kezar reckons he’d be either dead or disfigured if he hadn’t spent all summer fast-drawing his Glock. He bases that conclusion on a sobering calculus of time and distance—the two seconds required for a Wyoming grizzly bear to cover 20 yards—and the fact that Kezar somehow managed to get off seven shots from his 10mm in that span of time as he was staring terror in the face. As the bear was closing fast, and he was backpedaling into the unknown.

Strange things have been happening to the human body over the last few decades

Thursday, January 5th, 2023

Strange things have been happening to the human body over the last few decades:

Why have human body temperatures declined in the United States over the last 150 years? Or why has the age of first puberty been declining among teenagers since the mid-nineteenth century, from 16.5 years in 1840 to 13 years in 1995?

Or—to take a more troubling and immediate case—why have rates of autism been increasing so dramatically? After having been very rare a few decades prior, the rate has grown from about 1 in 150 children in 2000 to 1 in 44 in 2018, according to the Centers for Disease Control and Prevention. The standard explanation for this increase—changing diagnostic criteria and increased awareness—simply does not explain how sustained the uptick has been, nor does it explain the first-hand accounts of the increase by teachers. In fact, studies have found that changing diagnostic criteria account for only one-fourth of observed increases. Something else is causing the rest.

Or consider something as seemingly straightforward as obesity. In 1975, about 12 percent of American adults were obese; now that figure sits above 40 percent. The standard explanations for the remarkable increase in obesity over the last few decades—the “big two,” more calories and less physical exertion—have an intuitive appeal, but they do not seem to capture the full picture. Between 1999 and 2017, per capita caloric intake among Americans did not change, while the rate of obesity increased by about a third. The increase is so dramatic that a drop-off in physical exertion in so brief a period is unlikely to be the sole explanation, especially since the majority of human energy expenditure is non-behavioral.

Obesity thus remains, in the words of an article in the American Journal of Clinical Nutrition, an “unexplained epidemic.” This is why many scientists have sought to locate contributing factors to the secular increase in obesity, from the decline in cigarette use to increases in atmospheric CO2 levels.

There are many conditions like this: allergies, irritable bowel syndrome, eczema, and autoimmune conditions like juvenile arthritis are other notable examples. These are not the well-known “diseases of modernity,” like heart disease or Type 2 diabetes, whose causes are reasonably well-known. Disturbingly, there seem to be connections between all of these conditions: the “autistic enterocolitis” gut disorder that resembles Crohn’s disease in autistic children, the obesity-asthma link, the irritable bowel syndrome-eczema link, the eczema-allergies link. These “diseases of postmodernity” appear to be a package deal: autistic children report higher rates of stomach pain, and obese people are more likely to develop eczema-like skin diseases. There is some common root underlying these conditions.

A wealth of scientific work mostly done in the last decade by scientists like Martin Blaser of Rutgers may point to the answer. The origin lies in the extraordinary pressure we have been placing on a part of the body about which we know and think little: the microbiome of the human gut.

[…]

We have known for a long time that antibiotics induce rapid weight gain in everything from mice to humans. The specific dynamic—antibiotics cause gut dysbiosis, and gut dysbiosis leads to obesity and other diseases—is now becoming increasingly clear. Similar studies for conditions like asthma or juvenile arthritis, all conducted only in the last few years, have found the same link.

This is especially worrying because antibiotics are everywhere.

[…]

Consider animal agriculture, the main driver of antibiotic pollution in the United States. Antibiotics are now crucial to the industrial production of chicken, pig, and cow protein; in recent years antibiotics have even begun to be used in aquaculture. The reasons are simple: antibiotics used prophylactically can prevent and suppress infectious diseases, like bovine footrot and anaplasmosis, that are common in the claustrophobic quarters of concentrated animal-feeding operations (CAFOs). More insidiously, antibiotics can make livestock larger by disrupting their gut microbiomes and metabolisms, allowing them to be slaughtered at younger ages and at greater weights. In 2019, of the antibiotics sold in the United States, only about a third went to humans, with the rest consumed by livestock.

Antibiotics have been used in American animal agriculture since the late 1940s. It was then that Thomas Jukes, a biologist for the pharmaceutical company Lederle Laboratories, discovered that treating chickens with even trace amounts of the antibiotic chlortetracycline—a drug that had been discovered in 1945 at Lederle—caused them to gain much more weight. The more chlortetracycline the birds got, the larger they were; the chickens that had gotten the highest doses weighed two and a half times more than the ones that hadn’t gotten anything.

[…]

Per capita consumption of chicken—once a rare and expensive kind of meat, typically consumed as a Sunday treat—more than tripled between 1960 and 2020, growing from a relatively marginal part of the American diet in the first few decades of the twentieth century into the country’s premier staple protein.

[…]

As with chickens, the biological effects on cows were significant. The year that monensin was licensed, the average weight of cows at slaughter was 1,047 pounds; by 2005, it had grown thirty percent, to 1,369 pounds. By 2017, American cattle producers used about 171 milligrams of antibiotic per kilogram of livestock—four times as much as in France, and six times as much as in the United Kingdom.

[…]

As a result of this mass pharmaceutical use in animal agriculture, natural bodies of water now contain remarkable amounts of antibiotic waste. One study of a river in Colorado found that “the only site at which no antibiotics were detected was the pristine site in the mountains before the river had encountered urban or agricultural landscapes.” Antibiotics like macrolides and tetracyclines have been found in chlorinated drinking water, while the antibiotic triclosan has been found in rivers and streams around the world. This effluent trickles into everything else: research has detected uptake of veterinary antibiotics in carrots and lettuce, as well as in human breast milk.

[…]

It was not until 2017, well after European countries had strictly limited the use of antibiotics, that the FDA was finally able to ban the use of antibiotics for growth promotion in livestock, mandating that all antibiotics given to cattle needed a prescription. After peaking in 2015, antibiotic use on farms has declined by about 40 percent, with most of the effect taking place in the year of the ban.

But antibiotic use remains elevated, above an average of 100 milligrams per kilogram per year—far more than the 50-milligram-per-kilogram limit that reports on antibiotic resistance have proposed, and several times more than is normal in European countries like France or Norway. The reason, Lewis believes, goes back to his anaplasmosis episode. He believes that anaplasmosis is commonly used as a pretext for administering growth-promoting antibiotics, and that this is an open secret among farmers and livestock veterinarians. The “motorway veterinarian,” dependent on the business of growth-hungry farmers, remains alive and well.

[…]

One study in Science found that 42 percent of lots that were certified by the Department of Agriculture as “Raised Without Antibiotics” actually contained cattle that had been given antibiotics, with five percent of lots being composed entirely of cattle raised on antibiotics.

The Overfitted Brain Hypothesis explains why dreams are so dreamlike

Wednesday, January 4th, 2023

None of the leading hypotheses about the purpose of dreaming are convincing, Erik Hoel explains:

E.g., some scientists think the brain replays the day’s events during dreams to consolidate the day’s new memories with the existing structure. Yet, such theories face the seemingly insurmountable problem that only in the most rare cases do dreams involve specific memories. So if true, they would mean that the actual dreams themselves are merely phantasmagoric effluvia, a byproduct of some hazily-defined neural process that “integrates” and “consolidates” memories (whatever that really means). In fact, none of the leading theories of dreaming fit well with the phenomenology of dreams—what the experience of dreaming is actually like.

First, dreams are sparse in that they are less vivid and detailed than waking life. As an example, you rarely if ever read a book or look at your phone screen in dreams, because the dreamworld lacks the resolution for tiny scribblings or icons. Second, dreams are hallucinatory in that they are often unusual, either by being about unlikely events or by involving nonsensical objects or borderline categories. People who are two people, places that are both your home and a spaceship. Many dreams could be short stories by Kafka, Borges, Márquez, or some other fabulist. A theory of dreams must explain why every human, even the most unimaginative accountant, has within them a surrealist author scribbling away at night.

To explain the phenomenology of dreams I recently outlined a scientific theory called the Overfitted Brain Hypothesis (OBH). The OBH posits that dreams are an evolved mechanism to avoid a phenomenon called overfitting. Overfitting, a statistical concept, is when a neural network learns overly specifically, and therefore stops being generalizable. It learns too well. For instance, artificial neural networks have a training data set: the data that they learn from. All training sets are finite, and often the data comes from the same source and is highly correlated in some non-obvious way. Because of this, artificial neural networks are in constant danger of becoming overfitted. When a network becomes overfitted, it will be good at dealing with the training data set but will fail at data sets it hasn’t seen before. All learning is basically a tradeoff between specificity and generality in this manner. Real brains, in turn, rely on the training set of lived life. However, that set is limited in many ways, highly correlated in many ways. Life alone is not a sufficient training set for the brain, and relying solely on it likely leads to overfitting.
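
To make the overfitting idea concrete, here is a minimal sketch (mine, not Hoel’s) that uses a polynomial fit as a stand-in for a neural network; the sine “signal,” the sample sizes, and the polynomial degrees are arbitrary illustrative choices:

```python
# Overfitting in miniature: an over-flexible model matches its small,
# noisy training set almost perfectly but does worse on unseen data.
import numpy as np

rng = np.random.default_rng(0)

def noisy_samples(n):
    # a simple "true" signal plus noise, standing in for lived experience
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(scale=0.3, size=n)
    return x, y

x_train, y_train = noisy_samples(10)   # small, noisy training set
x_test, y_test = noisy_samples(500)    # data the model has never seen

for degree in (2, 8):                  # a modest model vs. an over-flexible one
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

The higher-degree fit will typically drive its training error well below the noise level while doing worse on the held-out points; that gap between training and test error is what “learning too well” means.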

Common practices in deep learning, where overfitting is a constant concern, lend support to the OBH. One such practice is “dropout,” in which a random portion of the network’s units (or of its inputs) is zeroed out during training, making its activity sparse and forcing it to generalize. This is exactly like the sparseness of dreams. Another example is the practice of “domain randomization,” where during training the data is warped and corrupted along particular dimensions, often leading to hallucinatory or fabulist inputs. Other practices include things like feeding the network its own outputs when it’s undergoing random or biased activity.
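
The two practices named above are small enough to write out directly. The sketch below is my own plain-NumPy illustration, not code from the essay or from any particular library; the function names, the dropout rate, and the noise scale are arbitrary assumptions:

```python
# Two common regularizers written out by hand: inverted dropout and a
# crude input-noise stand-in for domain randomization.
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p=0.5):
    # randomly zero a fraction p of units, rescaling the survivors so the
    # expected activation stays the same ("inverted dropout")
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

def domain_randomize(inputs, noise_scale=0.3):
    # corrupt the inputs along random dimensions so the learner cannot
    # latch onto overly specific details of the training data
    return inputs + rng.normal(scale=noise_scale, size=inputs.shape)

batch = rng.normal(size=(4, 8))   # a toy batch: 4 examples, 8 features

# during training these corruptions are applied; at evaluation time
# (the analogue of waking life) both are switched off
print(dropout(batch))
print(domain_randomize(batch))
```

In the OBH analogy, the zeroed-out units play the role of the sparseness of dreams and the injected noise plays the role of their hallucinatory content; both exist only during “training” and are absent when the system is actually used.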

What the OBH suggests is that dreams represent the biological version of a combination of such techniques, a form of augmentation or regularization that occurs after the day’s learning—but the point is not to reinforce the day’s memories, but rather to combat the detrimental effects of their memorization. Dreams warp and play with always-ossifying cognitive and perceptual categories, stress-testing and refining them. The inner fabulist shakes up the categories of the plastic brain. The fight against overfitting every night creates a cyclical process of annealing: during wake the brain fits to its environment via learning; then, during sleep, the brain “heats up” through dreams that prevent it from clinging to suboptimal solutions and models and incorrect associations.

The OBH fits with the evidence from human sleep research: sleep seems to be associated not so much with assisting pure memorization, as other hypotheses about dreams would posit, but with an increase in abstraction and generalization. There’s also the famous connection between dreams and creativity, which also fits with the OBH. Additionally, if you stay awake too long you will begin to hallucinate (perhaps because your perceptual processes are becoming overfitted). Most importantly, the OBH explains why dreams are so, well, dreamlike.

This connects to another question. Why are we so fascinated by things that never happened?

If the OBH is true, then it is very possible writers and artists, not to mention the entirety of the entertainment industry, are in the business of producing what are essentially consumable, portable, durable dreams. Literally. Novels, movies, TV shows—it is easy for us to suspend our disbelief because we are biologically programmed to surrender it when we sleep.

[…]

Just like dreams, fictions and art keep us from overfitting our perception, models, and understanding of the world.

[…]

There is a sense in which something like the hero myth is actually more true than reality, since it offers a generalizability impossible for any true narrative to possess.

Galton’s disappearance from collective memory would have been surprising to his contemporaries

Tuesday, January 3rd, 2023

Some people get famous for discovering one thing, Adam Mastroianni notes, like Gregor Mendel:

Some people get super famous for discovering several things, like Einstein and Newton.

So surely if one person came up with a ton of different things — say, correlation, standard deviation, regression to the mean, “nature vs. nurture,” questionnaires, twin studies, the wisdom of the crowd, fingerprinting, the first map of Namibia, synesthesia, weather maps, anticyclones, the best method to cut a round cake, and eugenics (yikes) — they’d be super DUPER famous.

But most people have never heard of Sir Francis Galton (1822-1911). Psychologists still use many of the tools he developed, but the textbooks barely mention him. Charles Darwin, Galton’s half-cousin, seems to get a new biography every other year; Galton has had three in a century.

Galton’s disappearance from collective memory would have been surprising to his contemporaries. Karl Pearson (of correlation coefficient fame) thought Galton might ultimately be bigger than Darwin or Mendel:

Twenty years ago, no one would have questioned which was the greater man [...] If Darwinism is to survive the open as well as covert attacks of the Mendelian school, it will only be because in the future a new race of biologists will arise trained up in Galtonian method and able to criticise from that standpoint both Darwinism and Mendelism, for both now transcend any treatment which fails to approach them with adequate mathematical knowledge [...] Darwinism needs the complement of Galtonian method before it can become a demonstrable truth…

So, what happened? How come this dude went from being mentioned in the same breath as Darwin to never being mentioned at all? Psychologists are still happy to talk about the guy who invented “penis envy,” so what did this guy do to get scrubbed from history?

I started reading Galton’s autobiography, Memories of My Life, because I thought it might be full of juicy, embarrassing secrets about the origins of psychology. I’m telling you about it today because it is, and it’s full of so much more. There are adventures in uncharted lands, accidental poisonings, brushes with pandemics, some dabbling in vivisection, self-induced madness, a dash of blood and gore, and some poo humor for the lads. And, ultimately, a chance to wonder whether moral truth exists and how to find it.

Readers of this blog — certainly the ones of proper breeding — will already know what Galton did “wrong” to end up down the memory hole.

I felt a bit embarrassed that I’d never read his biography, but I doubt I’ve ever come across a physical copy.

Gone With the Wind is the new improved Vanity Fair

Monday, January 2nd, 2023

Lex Fridman’s reading list doesn’t include William Makepeace Thackeray’s Vanity Fair, but Steve Sailer’s review has me intrigued:

It’s extremely enjoyable. Despite a fairly rambling plot covering almost 800 pages from roughly 1813 to 1828, it’s a page-turner because the characters and situations are interesting enough that you want to find out what happens.

I’d describe Vanity Fair as the precursor to Gone With the Wind, in that it centers on two young women, the nice but mopey Amelia (the precursor of Melanie Hamilton, played by Olivia de Havilland) and the not nice but more interesting Becky Sharp (Scarlett O’Hara, played by Vivien Leigh).

[…]

The male characters in both tend to be army officers who go off to a big battle, Waterloo in VF and Gettysburg in GWTW.

Overall, I’d say that GWTW is the new improved VF, with more memorable characters and settings. Margaret Mitchell always denied having read Vanity Fair, but Gone With the Wind sure seems like a punched-up version of Vanity Fair, with Mitchell raising the stakes wherever Thackeray was inclined to let them ride.

For instance, while the British win at Waterloo and so English society mostly goes on as before, the Southerners lose at Gettysburg and soon the old society is, like the title says, gone with the wind. The Southerners need to learn a lot of hard new lessons about life. Melanie and her husband Ashley Wilkes fail to adapt to the new world, while Scarlett, despite her self-centered sense of entitlement and general knuckleheadedness, eventually succeeds.

In contrast, from the first page of Vanity Fair, Becky Sharp, a poor orphan, is smarter than the rich people around her. Thackeray points out near the beginning of the book, when she claims to love children, that she would soon learn not to make claims so easily disproved. “The little adventuress” seldom learns over the 800 pages because she was already supremely worldly wise from a tender age.

In contrast to Scarlett, Becky is always rational to the point of being cold-blooded. Becky wants material comfort and to rise in status, but she lacks particular passions (until late in the book when she starts to develop a gambling problem). She has no Ashley Wilkes to pine over.

Indeed, Becky is so reasonable that she often behaves surprisingly nicely to the other characters because, having calculated all the factors, she doesn’t see how it could cost her much.

And Mitchell takes Thackeray’s admirable but stiff Captain Dobbin, who spends the whole book lovelorn over Amelia, who foolishly ignores him, and turns him into the pirate king Rhett Butler (Clark Gable), who instead of being lovelorn over Melanie is lovelorn over Scarlett. This creates the 20th century’s most popular fictional couple.