The self-described dark elf who yearns for a king

Tuesday, November 29th, 2022

Andrew Prokop of Vox recently spoke with Curtis Yarvin, the monarchist, anti-democracy blogger that many of us still remember as Mencius Moldbug:

When I first asked to speak with Yarvin, he requested that I prove my “professional seriousness as a current historian” by “reading or at least skimming” three books, and I complied. One of them, Public Opinion by Walter Lippmann — a classic of the journalism school canon — describes how people can respond when their previous beliefs about how the world works are called into question.

“Sometimes, if the incident is striking enough, and if he has felt a general discomfort with his established scheme, he may be shaken to such an extent as to distrust all accepted ways of looking at life, and to expect that normally a thing will not be what it is generally supposed to be,” Lippmann wrote. “In the extreme case, especially if he is literary, he may develop a passion for inverting the moral canon by making Judas, Benedict Arnold, or Caesar Borgia the hero of his tale.”

There, I thought of Yarvin — the self-described dark elf who yearns for a king.

The Moon is a Harsh Mistress is not an instruction manual

Saturday, November 19th, 2022

In what ways, Casey Handmer asks, does The Moon is a Harsh Mistress (and other novels in the genre) fail as an instruction manual?

We know that a Moon city is not a good place to grow plants, that water is relatively abundant on the surface near the poles, and that underground construction is pointlessly difficult. So any future Moon city will have to be structured around some other premise, which is to say its foundational architecture on both a social and technical level will be completely different.

We know that AIs are pretty good at tweaking our amygdala, but strictly speaking we don’t need to build one on the Moon, and I would hope its existence is strictly orthogonal to the question of political control.

Lunar cities, and all other space habitats, are tremendously vulnerable to physical destruction. This means that, for all practical purposes, Earthling power centers hold absolute escalation dominance. No combination of sneaky AIs, secret mass drivers, or sabotage would be enough to attain political independence through force. If space habitats want some degree of political autonomy, they will have to obtain it through non-violent means. Contemporary science fiction author Kim Stanley Robinson makes this argument powerfully in this recent podcast, when discussing how he structured the revolutions in his Mars trilogy.

Lastly, the “Brass cannon” story is like “Starship Troopers” – a falsifiably satirical critique of popular conceptions of political control. For some reason, libertarians swarm Heinlein novels and space advocacy conferences like aphids in spring. I will resist the temptation to take easy shots, but point out merely that every real-world attempt at implementation of libertarianism as the dominant political culture has failed, quickly and predictably. This is because libertarianism, like many other schools of thought that fill out our diverse political scene, functions best as an alternative actually practiced by very few people. It turns out a similar thing occurs in salmon mating behavior.

Opioid prescriptions are not correlated with drug-related deaths

Friday, November 18th, 2022

Six years ago, the Centers for Disease Control and Prevention (CDC) issued guidelines that discouraged doctors from prescribing opioids for pain and encouraged legislators to restrict the medical use of such drugs, based on the assumption that overprescribing was responsible for rising drug-related deaths:

Using data for 2010 through 2019, Aubry and Carr looked at the relationship between prescription opioid sales, measured by morphine milligram equivalents (MME) per capita, and four outcomes: total drug-related deaths, total opioid-related deaths, deaths tied specifically to prescription opioids, and “opioid use disorder” treatment admissions. “The analyses revealed that the direct correlations (i.e., significant, positive slopes) reported by the CDC based on data from 1999 to 2010 no longer exist,” they write. “The relationships between [the outcome variables] and Annual Prescription Opioid Sales (i.e., MME per Capita) are either non-existent or significantly negative/inverse.”

Those findings held true in “a strong majority of states,” Aubry and Carr report. From 2010 through 2019, “there was a statistically significant negative correlation (95% confidence level) between [opioid deaths] and Annual Prescription Opioid Sales in 38 states, with significant positive correlations occurring in only 2 states. Ten states did not exhibit significant (95% confidence level) relationships between overdose deaths and prescription opioid sales during the 2010–2019 time period.”

During that period, MME per capita dropped precipitously, falling by nearly 50 percent between 2009 and 2019. By 2021, prescription opioid sales had fallen to the lowest level in two decades.

Policies and practices inspired by the CDC’s 2016 guidelines contributed to that downward trend. Aubry and Carr note that “forty-seven states and the District of Columbia” now “have laws that set time or dosage limits for controlled substances.” In a 2019 survey by the American Board of Pain Medicine, the American Medical Association reports, “72 percent of pain medicine specialists” said they had been “required to reduce the quantity or dose of medication” they prescribed as a result of the CDC guidelines.

The consequences for patients have not been pretty. They include undertreatment, reckless “tapering” of pain medication, and outright denial of care.

Most of the time, the road is far too big, and the rest of the time, it’s far too small

Sunday, November 13th, 2022

Casey Handmer did a bunch of transport economics when he worked at Hyperloop:

Let’s not bury the lede here. As pointed out in The Original Green blog, the entire city of Florence, in Italy, could fit inside one Atlanta freeway interchange. For centuries, Florence was one of Europe’s largest, most powerful, and most culturally important cities, with a population exceeding 100,000 people. For readers who have not yet visited this incredible city, one can walk, at a fairly leisurely pace, from one side to the other in 45 minutes.


There are thousands of cities on Earth and not a single one where mass car ownership hasn’t led to soul-destroying traffic congestion.

Cars are both amazing and terrible:

Imagine there existed a way to move people, children, and almost unlimited quantities of cargo point to point, on demand, using an existing public network of graded and paved streets practically anywhere on Earth, in comfort, style, speed, and safety. Practically immune to weather. Operable by nearly any adult with only basic training, regardless of physical (dis)ability. Anyone who has made a habit of camping on backpacking trips knows well the undeniable luxury of sitting down in air-conditioned comfort and watching the scenery go by. At roughly $0.10/passenger mile, cars are also incredibly cheap to operate.


Some American cities have nearly 60% of their surface area devoted to cars, and yet they are the most congested of all. Would carving off another 10% of land, worth trillions in unimproved value alone, solve the problem? No. According to simulations I’ve run professionally, latent demand for surface transport in large cities exceeds supply by a factor of 30. Not 30%. 3000%. That is, Houston could build freeways to every corner of the city 20 layers deep and they would still suffer congestion during peak hours.

Why is that? Roads and freeways are huge, and expensive to build and maintain, but they actually don’t move very many people around. Typically peak capacity is about 1000 vehicles per lane per hour. In most cities, that means 1000 people/lane/hour. This is a laughably small number. All the freeways in LA over the four hour morning peak move perhaps 200,000 people, or ~1% of the overall population of the city. 30x capacity would enable 30% of the population to move around simultaneously.
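The capacity arithmetic above can be checked in a few lines. This is a sketch: the “greater Los Angeles” gloss is my reading of the quote’s own numbers, not a figure from the post.

```python
# Sanity-check of the quoted freeway arithmetic, using only the quote's figures.
peak_movers = 200_000        # people moved by all LA freeways over the 4-hour morning peak
share_of_population = 0.01   # "~1% of the overall population of the city"

implied_population = peak_movers / share_of_population
print(f"{implied_population:,.0f}")  # 20,000,000 — roughly greater Los Angeles

latent_demand_factor = 30    # "latent demand ... exceeds supply by a factor of 30"
moving_share_at_30x = latent_demand_factor * share_of_population
print(f"{moving_share_at_30x:.0%}")  # 30% of the population moving simultaneously
```

The numbers are internally consistent: 30 times today’s ~1% throughput is the 30% figure the post cites.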


Spacing between the bicycles, while underway, is a few meters, compared to 100 m for cars with a 3.7 m lane width. Bicycles and pedestrians take up roughly the same amount of space.


Like a lot of public infrastructure, the cost comes down to patterns of utilization. For any given service, avoiding congestion means building enough capacity to meet peak demand. But revenue is a function of average demand, which may be 10x lower than the peak. This problem occurs in practically all areas of life that involve moving or transforming things. Roads. Water. Power. Internet. Docks. Railways. Computing. Organizational structures. Publishing. Tourism. Engineering.

This effect is intuitively obvious for roads. Most of the time, the roads in my sleepy suburb of LA are lifeless expanses of steadily crumbling asphalt baking in the sun. The adjacent houses command property prices as high as $750/sqft, and yet every house has half a basketball court’s worth of nothing just sitting there next to it. Come peak hour, the road is now choked with cars all trying to get home, because even half a basketball court per house isn’t enough to fit all the cars that want to move there at that moment. And of an evening, onstreet parking is typically overwhelmed because now every car, which spends >95% of its life empty and unused, now needs 200 sqft of kerb to hang out. Most of the time, the road is far too big, and the rest of the time, it’s far too small.

People often underestimate the cost of having resources around that they aren’t currently using. And since our culture expects roads and parking to be limitless, available, and free, we can’t rely on market mechanisms to correctly price and trade the cost. Seattle counted how many parking spaces were in the city and came up with 1.6 million. That’s more than five per household! Obviously most of them are vacant most of the time, just sitting there consuming space, and yet there will never be enough when they are needed!
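A minimal sketch of the peak-versus-average problem and the Seattle figure. Only the 1.6 million parking spaces comes from the post; the household count of roughly 300,000 is my assumption for illustration.

```python
# Seattle parking: 1.6M counted spaces against an assumed ~300,000 households.
parking_spaces = 1_600_000
households = 300_000                           # assumed, not from the post
spaces_per_household = parking_spaces / households
print(round(spaces_per_household, 1))          # 5.3 — "more than five per household"

# Peak-vs-average: capacity must be sized for the peak, but revenue tracks
# average demand, which the post says may be 10x lower than the peak.
peak_demand = 1.0
average_demand = peak_demand / 10
utilization = average_demand / peak_demand
print(f"average utilization: {utilization:.0%}")  # 10%
```

That 10% utilization is the crux: nine-tenths of the capital sits idle so that the peak hour can clear.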

At long last, we have created the Torment Nexus

Tuesday, November 8th, 2022

One year ago, Alex Blechman made this now-classic tweet:

Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale

Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus

They would verify the treaty without on-site inspections, using their own assets

Monday, November 7th, 2022

In 1972, the United States and Soviet Union signed the Anti-Ballistic Missile Treaty and the Interim Agreement, collectively known as SALT I:

This was an agreement by the two parties that they would verify the treaty without on-site inspections, using their own assets. Both sides also agreed not to interfere with these “national technical means.”

“National technical means” served as a euphemism for each country’s technical intelligence systems. Although these assets included ground, airborne, and other intelligence collection systems, the primary intelligence collectors for treaty verification were satellites, which both countries had been operating for over a decade, but neither country publicly discussed, certainly not with each other.


Surprisingly, there appears to have been little initial skepticism on the American side about the ability to verify strategic arms control treaties using satellites. In fact, there are indications that by the early 1970s their capabilities were overestimated, although the people who developed and operated them were concerned both about their limitations and about the gap between what the satellites were believed to do and what they could actually do.

I’ve mentioned before that I always assumed that spy satellites used TV cameras, and it was a real shock to learn that they didn’t start out that way:

The first successful American photo-reconnaissance mission took place in August 1960 as part of the CORONA program. CORONA involved orbiting satellites equipped with cameras and film and recovering that film for processing. The early satellites orbited for approximately a day before their film was recovered, and it could take several days for that film to be transported and processed before it could be looked at by photo-interpreters in Washington, DC. Although the system was cumbersome, the intelligence data produced by each CORONA mission was substantial, revealing facilities and weapons systems throughout the vast landmass of the Soviet Union.

CORONA’s images were low resolution, capable of revealing large objects like buildings, submarines, aircraft, and tanks, but not providing technical details about many of them. In 1963, the National Reconnaissance Office launched the first GAMBIT satellite, which took photographs roughly equivalent to those taken by the U-2 spyplane that could not penetrate Soviet territory. Both CORONA and GAMBIT returned their film to Earth in reentry vehicles. By 1966, CORONA was equipped with two reentry vehicles, and GAMBIT was equipped with one, increased to two reentry vehicles by August 1969. The existence of multiple reentry vehicles on satellites and missiles was to become a source of concern for NRO officials as new arms control treaties were negotiated.

The two satellites complemented each other: CORONA covered large amounts of territory, locating the targets, and GAMBIT took detailed photographs of a small number of them, enabling analysts to make calculations about their capabilities such as the range of a missile or the carrying capability of a bomber. These photographic reconnaissance satellites provided a tremendous amount of data about the Soviet Union. That data was combined with other intelligence, such as interceptions of Soviet missile telemetry, to produce assessments of Soviet strategic capabilities. Signals and communications intelligence, collected by American ground stations around the world as well as satellites operated by the NRO, also contributed to the overall intelligence collection effort.

By the mid-to-late 1960s, these intelligence collection systems, particularly the photo-reconnaissance satellites, had dramatically improved American understanding of Soviet strategic forces and capabilities. A 1968 intelligence report definitively declared, “No new ICBM complexes have been established in the USSR during the past year.” As a CIA history noted, “This statement was made because of the confidence held by the analysts that if an ICBM was there, then CORONA photography would have disclosed them.” This kind of declared confidence in the ability of satellite reconnaissance to detect Soviet strategic weapons soon proved key to signing arms control treaties.

Why does everybody lie about social mobility?

Thursday, November 3rd, 2022

Why does everybody lie about social mobility?, Peter Saunders asks:

The answer [to a growing concern that the UK was squandering vast pools of potential working-class talent that it could ill afford to lose], addressed in the 1944 Education Act, was to make all state-aided secondary schools, including grammar schools, free for all pupils. A new national examination — the ’11-plus’ — was introduced, and those scoring high-enough marks were selected for grammar schools, regardless of their parents’ means. From now on, children from different social class backgrounds would be given an equal opportunity to get to grammar schools. The only selection criterion was intellectual ability.

It didn’t take long, however, for critics to notice that children from middle-class homes were still out-competing those from working-class backgrounds in the 11-plus competition for grammar school places. The possibility that this might be because middle-class kids are on average brighter than working-class kids was ruled out from the start.


In 1965, the (privately-educated) Labour Education Secretary, Anthony Crosland, issued an instruction to all local education authorities to close down their grammar schools and replace them with ‘comprehensives’ which would be forbidden to select pupils by ability. Within a few years, all but 163 of nearly 1,300 grammar schools in the UK disappeared.


But very rapidly, the familiar pattern reappeared. Middle-class children clustered in disproportionate numbers in the higher streams of the comprehensive schools, and they continued to out-perform working-class children in post-16 examinations and university entry.

One response to this was to weaken or abolish streaming.


The minimum leaving age was raised to 16 in 1972 to force under-performing working-class children to stay in school longer, and when that didn’t make much difference to the attainment gap, the Blair government legislated in 2008 to force everyone to stay in education or training up until the age of 18. Yet still the social class imbalance in educational achievement persisted.


With nearly half of all youngsters getting degrees, more demanding employers started recruiting only from the top universities. Politicians responded to this by putting pressure on the top universities to admit more lower-class applicants.


Looking back over this sorry half-century history of educational reform and upheaval, we see that we have increased coercion (restricting school choice by parents, forcing kids to stay in education even if they don’t want to, limiting the autonomy of universities to select their own students), diluted standards (dumbing down GCSEs, A-levels and degrees), and undermined meritocracy (forcing universities and employers to favour applicants from certain kinds of backgrounds at the expense of others who may be better qualified). What we have conspicuously failed to do, however, is flatten social class differences in educational achievement.

American geneticists now face an even more drastic form of censorship

Thursday, October 27th, 2022

A policy of deliberate ignorance has corrupted top scientific institutions in the West, James Lee suggests:

It’s been an open secret for years that prestigious journals will often reject submissions that offend prevailing political orthodoxies — especially if they involve controversial aspects of human biology and behavior — no matter how scientifically sound the work might be. The leading journal Nature Human Behaviour recently made this practice official in an editorial effectively announcing that it will not publish studies that show the wrong kind of differences between human groups.

American geneticists now face an even more drastic form of censorship: exclusion from access to the data necessary to conduct analyses, let alone publish results. Case in point: the National Institutes of Health now withholds access to an important database if it thinks a scientist’s research may wander into forbidden territory. The source at issue, the Database of Genotypes and Phenotypes (dbGaP), is an exceptional tool, combining genome scans of several million individuals with extensive data about health, education, occupation, and income. It is indispensable for research on how genes and environments combine to affect human traits. No other widely accessible American database comes close in terms of scientific utility.

My colleagues at other universities and I have run into problems involving applications to study the relationships among intelligence, education, and health outcomes. Sometimes, NIH denies access to some of the attributes that I have just mentioned, on the grounds that studying their genetic basis is “stigmatizing.” Sometimes, it demands updates about ongoing research, with the implied threat that it could withdraw usage if it doesn’t receive satisfactory answers. In some cases, NIH has retroactively withdrawn access for research it had previously approved.

Note that none of the studies I am referring to include inquiries into race or sex differences. Apparently, NIH is clamping down on a broad range of attempts to explore the relationship between genetics and intelligence.

That’s ten times the population-to-restaurant ratio

Wednesday, October 26th, 2022

Thai restaurants are everywhere in America, but why?

With over 36 million Mexican-Americans and around five million Chinese-Americans, it’s no surprise that these populations’ cuisines have become woven into America’s cultural fabric. Comparatively, according to a representative from the Royal Thai Embassy in DC, there are just 300,000 Thai-Americans — less than 1 percent the size of the Mexican-American population. Yet there are an estimated 5,342 Thai restaurants in the United States, compared to around 54,000 Mexican restaurants; that’s ten times the population-to-restaurant ratio. So, why are there so many Thai restaurants in the US?
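A quick check of the ratio claim, using the figures from the quote (the rounding down to “ten times” is the article’s):

```python
# Restaurants per capita for each community, using the quoted figures.
thai_restaurants, thai_americans = 5_342, 300_000
mexican_restaurants, mexican_americans = 54_000, 36_000_000

thai_rate = thai_restaurants / thai_americans          # restaurants per Thai-American
mexican_rate = mexican_restaurants / mexican_americans # restaurants per Mexican-American
ratio = thai_rate / mexican_rate
print(round(ratio, 1))  # 11.9 — roughly "ten times" the population-to-restaurant ratio
```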


Using a tactic now known as gastrodiplomacy or culinary diplomacy, the government of Thailand has intentionally bolstered the presence of Thai cuisine outside of Thailand to increase its export and tourism revenues, as well as its prominence on the cultural and diplomatic stages. In 2001, the Thai government established the Global Thai Restaurant Company, Ltd., in an effort to establish at least 3,000 Thai restaurants worldwide. At the time, Thai deputy commerce minister Goanpot Asvinvichit told the Wall Street Journal that the government hoped the chain would be “like the McDonald’s of Thai food.” Apparently, the government had been training chefs at its culinary training facilities to send abroad for the previous decade, but this project formalized and enhanced these efforts significantly.


At the time of the Global Thai program’s launch, there were about 5,500 Thai restaurants beyond Thailand’s borders; today there are over 15,000. The number in the US increased from around 2,000 to over 5,000.


Inspired by Thailand’s success, South Korea, for example, has earmarked tens of millions of dollars beginning in 2009 for its Korean Cuisine to the World campaign. Taiwan has followed suit, as has Peru with its Cocina Peruana Para el Mundo (“Peruvian Cuisine for the World;” quite creative) initiative, as well as Malaysia (“Malaysia Kitchen for the World 2010” — clearly there’s a pattern here).

The power to control the creation of money has moved from central banks to governments

Wednesday, October 19th, 2022

Russell Napier, who experienced the Asian Financial Crisis 25 years ago at first hand at the brokerage house CLSA in Hong Kong, wrote for years about the deflationary power of the globalised world economy — before predicting inflation two years ago:

This is structural in nature, not cyclical. We are experiencing a fundamental shift in the inner workings of most Western economies. In the past four decades, we have become used to the idea that our economies are guided by free markets. But we are in the process of moving to a system where a large part of the allocation of resources is not left to markets anymore. Mind you, I’m not talking about a command economy or about Marxism, but about an economy where the government plays a significant role in the allocation of capital. The French would call this system «dirigiste». This is nothing new, as it was the system that prevailed from 1939 to 1979. We have just forgotten how it works, because most economists are trained in free market economics, not in history.

Why is this shift happening?

The main reason is that our debt levels have simply grown too high. Total private and public sector debt in the US is at 290% of GDP. It’s at a whopping 371% in France and above 250% in many other Western economies, including Japan. The Great Recession of 2008 has already made clear to us that this level of debt was way too high.

How so?

Back in 2008, the world economy came to the brink of a deflationary debt liquidation, where the entire system was at risk of crashing down. We’ve known that for years. We can’t stand normal, necessary recessions anymore without fearing a collapse of the system. So the level of debt – private and public – to GDP has to come down, and the easiest way to do that is by increasing the growth rate of nominal GDP. That was the way it was done in the decades after World War II.

What has triggered this process now?

My structural argument is that the power to control the creation of money has moved from central banks to governments. By issuing state guarantees on bank credit during the Covid crisis, governments have effectively taken over the levers to control the creation of money. Of course, the pushback to my prediction was that this was only a temporary emergency measure to combat the effects of the pandemic. But now we have another emergency, with the war in Ukraine and the energy crisis that comes with it.

You mean there is always going to be another emergency?

Exactly, which means governments won’t retreat from these policies. Just to give you some statistics on bank loans to corporates within the European Union since February 2020: Out of all the new loans in Germany, 40% are guaranteed by the government. In France, it’s 70% of all new loans, and in Italy it’s over 100%, because they migrate old maturing credit to new, government-guaranteed schemes. Just recently, Germany has come up with a huge new guarantee scheme to cover the effects of the energy crisis. This is the new normal. For the government, credit guarantees are like the magic money tree: the closest thing to free money. They don’t have to issue more government debt, they don’t need to raise taxes, they just issue credit guarantees to the commercial banks.

And by controlling the growth of credit, governments gain an easy way to control and steer the economy?

It’s easy for them in the way that credit guarantees are only a contingent liability on the balance sheet of the state. By telling banks how and where to grant guaranteed loans, governments can direct investment where they want it, be it energy, projects aimed at reducing inequality, or general investments to combat climate change. By guiding the growth of credit and therefore the growth of money, they can control the nominal growth of the economy.

And given that nominal growth consists of real growth plus inflation, the easiest way to do this is through higher inflation?

Yes. Engineering a higher nominal GDP growth through a higher structural level of inflation is a proven way to get rid of high levels of debt. That’s exactly how many countries, including the US and the UK, got rid of their debt after World War II.
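A minimal sketch of the mechanism Napier describes: when nominal GDP compounds faster than the debt stock, the debt-to-GDP ratio melts away. Only the 290% starting ratio comes from the interview; both growth rates are assumed for illustration.

```python
# If nominal GDP grows faster than debt, the debt-to-GDP ratio falls each year
# by the factor (1 + debt_growth) / (1 + nominal_gdp_growth).
debt_to_gdp = 2.90          # US total debt at 290% of GDP (from the interview)
nominal_gdp_growth = 0.08   # assumed: structural inflation plus some real growth
debt_growth = 0.03          # assumed: debt still grows, but more slowly

for year in range(15):
    debt_to_gdp *= (1 + debt_growth) / (1 + nominal_gdp_growth)

print(f"{debt_to_gdp:.0%}")  # 142% of GDP — roughly halved in 15 years
```

This is, in miniature, the post-World War II playbook the interview refers to: no defaults and no austerity, just a ratio eroded by years of nominal growth outrunning the debt stock.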


What tells you that this is in fact happening today?

When I see that we are headed into a significant growth slowdown, even a recession, and bank credit is still growing. The classic definition of a banker used to be that he lends you an umbrella but would take it away at the first sight of rain. Not this time. Banks keep lending, they even reduce their provisions for bad debt. The CFO of Commerzbank was asked about this fact in July, and she said that the government would not allow large debtors to fail. That, to me, was a transformational statement. If you are a banker who believes in private sector credit risk, you stop lending when the economy is headed into a recession. But if you are a banker who believes in government guarantees, you keep lending. This is happening today. Banks keep lending, and nominal GDP will keep growing. That’s why, in nominal terms, we won’t see an economic contraction.

Telemedicine is rehospitalized

Saturday, October 15th, 2022

Before Covid, telehealth accounted for less than 1% of outpatient care. Then it shot up — to as high as 40% of outpatient visits for mental health and substance use. Now telemedicine is declining:

Over the past year, nearly 40 states and Washington, D.C., have ended emergency declarations that made it easier for doctors to use video visits to see patients in another state, according to the Alliance for Connected Care, which advocates for telemedicine use.

Alex Tabarrok knows people who have had to travel over the Virginia–Maryland border just to find a wifi spot to have a telemedicine appointment with their Maryland physician.

When people get richer, they get more resilient

Friday, October 14th, 2022

We are incessantly told about disasters — heat waves, floods, wildfires, and storms — when people have become much, much safer from all these weather events over the past century:

In the 1920s, around half a million people were killed by weather disasters, whereas in the last decade the death toll averaged around 18,000. This year, like both 2020 and 2021, is tracking below that. Why? Because when people get richer, they get more resilient.

Weather-fixated television news would make us think disasters are all getting worse. They’re not. Around 1900, about 4.5 per cent of the land area of the world burned every year. Over the last century, this declined to about 3.2 per cent. In the last two decades, satellites show even further decline: in 2021 just 2.5 per cent burned. This has happened mostly because richer societies prevent fires. Models show that by the end of the century, despite climate change, human adaptation will mean even less burning.

And despite what you may have heard about record-breaking costs from weather disasters — mainly because wealthier populations build more expensive houses along coastlines — damage costs are actually declining, not increasing, as a per cent of GDP.

But it’s not only weather disasters that are getting less damaging despite dire predictions. A decade ago, environmentalists loudly declared that Australia’s magnificent Great Barrier Reef was nearly dead, killed by bleaching caused by climate change. The Guardian newspaper even published an obituary. This year, scientists revealed that two-thirds of the Great Barrier Reef shows the highest coral cover seen since records began in 1985. The good-news report got a fraction of the attention the bad news did.

Not long ago, environmentalists constantly used pictures of polar bears to highlight the dangers of climate change. Polar bears even featured in Al Gore’s terrifying movie An Inconvenient Truth. But the reality is that polar bear numbers have been increasing — from somewhere between 5,000 and 10,000 polar bears in the 1960s up to around 26,000 today. We don’t hear this news, however. Instead, campaigners just quietly stopped using polar bears in their activism.

Scientific authority is one of the foundations of power in our society

Monday, October 10th, 2022

Scientific authority is one of the foundations of power in our society, Samo Burja notes:

Consider a scientific study demonstrating a new medicine to be safe and efficacious. An FDA official can use this study to justify the medicine’s approval, and a doctor can use it to justify a patient’s treatment plan. The study has this legitimacy even when incorrect.

In contrast, even if a blog post by a detail-oriented self-experimenter contained accurate facts, those facts would not have the same legitimacy: a doctor may be sued for malpractice, or the FDA may spark public outcry, if they based their decisions on reports of this sort. The blog itself would also risk demonetization for violating terms of service, which as a matter of policy usually favor particular “authoritative” sources.

Intellectual authority is too useful to power centers to be ignored:

It will be deployed, one way or another. Social engineers have used it to guide behavior, loyalties, and flows of resources for all of recorded history, and likely long before as well. The most impressive example is the Catholic Church, which built its authority on the interpretation of religious matters, synthesizing human psychology, law, and metaphysics. The state church of the Roman Empire outlived the empire by many centuries. By the 11th century, Church authority was sufficient to organize and pursue political aims at the highest level. It was sufficient to force the Holy Roman Emperor Henry IV to kneel for three days as a blizzard raged, waiting for the Pope’s forgiveness. Few military and political victories are as clear. The Pope was revealed to be more powerful than kings.

The power of the Pope didn’t rest primarily in his wealth, armies, or charisma. Rather, it rested on a claim of final authority in matters of theology, a field then considered as prestigious as, or even more prestigious than, cosmology is today. This can be compared to the transnational influence of contemporary academia on policy and credibility.

Such exercises of power weren’t completely unopposed. Today it is often forgotten that Martin Luther’s ninety-five theses and debates such as that at the Diet of Worms were first a challenge of intellectual authority, and only consequently a political struggle. The centuries-long consequences of the Protestant Reformation are myriad, but one of them is the negative connotation of the word “authority” in the English-speaking West. Protestant pamphlets had harsh and at times vulgar critiques of Papal “authority.” Merely making a word carry a negative connotation didn’t stop Protestant nations such as England or Sweden from creating their own state churches with much the same structure as the Catholic Church. Their new institutional authority was then a transformation of the old, using much the same social technology, rather than a revolution.

Inheritance of such authority shows some surprising patterns. The Anglican Church would famously have its own dissenters who ended up settling in the North American colonies. America’s Ivy League universities run on a bequeathment of intellectual authority which they first acquired as divinity schools serving different denominations of the many experiments in theocracy that made up the initial English colonies of the region. Harvard’s founding curriculum conformed to the tenets of Puritanism and used the University of Cambridge as its model. Amusingly, the enterprising Massachusetts colonists decided to rename the colony of Newtowne to Cambridge a mere two years after Harvard’s founding. Few attempts to bootstrap intellectual authority by associating with a good name are quite as brazen!

Many know that the University of Pennsylvania served the Quakers of Pennsylvania, since the colony and consequently the university was named after its founder, the Quaker thinker William Penn. But fewer know Yale was founded as a school for Congregationalist ministers and that Princeton was founded because of Yale professors and students who disagreed with prevalent Congregationalist views. The intellectual authority of modern academia can be traced back to an era when theology was the basis of its intellectual authority. Today, theology has nothing to do with it and the authority has been re-justified on new grounds. This shows that intellectual authority can be inherited by institutions even as they change the intellectual justification of that authority.

That such jumps are possible allows for interesting use of social technology, such as the King of Sweden bestowing credibility on physicists through the Nobel Prize or Elon Musk ensuring that non-technical employees at his companies listen to engineers through designing the right kind of performance art. Different types of intellectual authority are easily conflated for both good and bad. This also explains why we see uncritical belief in those who wear the trappings of science without doing science itself. When medicine suffered a worse reputation than science in the 19th century, doctors adapted by starting to wear white lab coats. This trick in particular continues to work in the present day.

Intellectual golden ages occur when new intellectual authority is achievable for those at the frontiers of knowledge. This feat of social engineering that legitimizes illegible but intellectually productive individuals is then upstream of material incentives, which is why a merely independently wealthy person cannot just throw money at any new scientific field or institution and expect it to grow in legitimacy. It ultimately rests on political authority. The most powerful individuals in a society must lend their legitimacy to the most promising scientific minds and retract it only when they fail as scientists, rather than as political players. The society in which science can not just exist, but flourish, is one where powerful individuals can elevate people with crazy new ideas on a whim.

The dreams of automating scientific progress with vast and well-funded bureaucracies have evidently failed. This is because bureaucracies are only as dynamic as the live players who pilot them. Without a live player at the helm who is a powerful individual in control of the bureaucracy, the existing distribution of legitimacy is just frozen in place, and more funding works only to keep it more frozen rather than to drive scientific progress forward. Powerful individuals will not always make the right bets on crazy new ideas and the crazy people who come up with them, but individuals have a chance to make the right bets, whereas bureaucracies can only pretend to make them. Outsourcing science to vast and well-funded bureaucracies then gives us the impression of intense work on the cutting edge of science, but without any of the substance.

The solution is not just to grant more funding and legitimacy to individual scientists rather than scientific bureaucracies, but to remind powerful individuals, and especially those with sovereign authority, that if they don’t grant this legitimacy, no one else will. Science lives or dies on personal endorsement by powerful patrons. Only the most powerful individuals in society can afford to endorse the right immature and speculative ideas, which is where all good ideas begin their life cycle.

There are no Black Valyrians

Sunday, October 9th, 2022

“Game of Thrones” superfans Linda Antonsson and Elio M. García Jr. have been collaborating with George R.R. Martin since before HBO’s hit adaptation of his “A Song of Ice and Fire” books, but when Martin publicized their new book on social media, they were “called out” for their supposed racism:

Soon after Antonsson and García created an online forum in 1999, Martin recruited them as fact-checkers for his book “A Feast for Crows.” In 2014, they served as coauthors on “The World of Ice & Fire,” an illustrated companion book for the series of novels.

Critics have taken issue with Antonsson’s blog posts, some dating to more than a decade ago, in which she decries the casting of people of color in “Game of Thrones” to play characters that are white in Martin’s books. In one post from March 2012, for example, Antonsson complained about Nonso Anozie, a Black man, getting cast in the role of Xaro Xhoan Daxos, who is described as pale in the books. Five months later, she celebrated the fact that white actor Ed Skrein was cast to play Daario Naharis, despite a rumor claiming the network was looking to fill the role with someone of another ethnicity.

More recently, Antonsson wrote that the character of Corlys, portrayed by Steve Toussaint on “House of the Dragon,” was miscast. “There are no Black Valyrians and there should not be any in the show,” she said of the common ancestors of Velaryons and Targaryens.

Antonsson contends that upset fans are criticizing “cherry-picked statements stripped of context.” She tells Variety that it bothers her to be “labeled a racist, when my focus has been solely on the world building.” According to the author, she has no issue with inclusive casting, but she strongly believes that “diversity should not trump story.”

“If George had indeed made the Valyrians Black instead of white, as he mused on his ‘Not a Blog’ in 2013, and this new show proposed to make the Velaryons anything other than Black, we would have had the same issue with it and would have shared the same opinion,” Antonsson says.

The Disney version is devoid of any moral teaching whatsoever

Saturday, October 1st, 2022

The last four generations of Americans have been swimming in a sea of feminist propaganda our whole lives, Rachel Wilson argues:

We don’t even notice the feminist themes and messaging bombarding us daily. They feel like universal truths because that’s all we’ve ever known. A fish doesn’t know it has always lived in water until it somehow ends up on dry land. De-programming feminism works much the same way. This analogy is a brilliant segue for me to ruin one of people’s favorite childhood movies, Disney’s The Little Mermaid.

The Little Mermaid was originally a Danish folk tale which told of a young mermaid who lives under the sea with her widowed grandmother and five sisters. She rescues a handsome prince from drowning and falls in love with him. She learns from her grandmother that humans have a much shorter lifespan than the mermaids’ 300 years, but that humans have eternal souls and can enter heaven, while mermaids become sea foam and cease to exist upon death. In the original story, the longing for an eternal soul is just as much a part of the Little Mermaid’s desire to become human as her infatuation with the prince. In the folk tale, the Little Mermaid does gain human legs in exchange for her beautiful voice, but she always feels as if she is walking on knives. She is also only able to gain a human soul through marriage to the prince; if he marries another, she will die of a broken heart and turn to sea foam. In this version, the prince does not marry the Little Mermaid, but chooses to marry a princess from a neighboring kingdom. The Little Mermaid despairs, thinking of how much she sacrificed and of her imminent demise. She is offered one final chance when her sisters bring her a dagger from the sea witch: if she kills the prince and lets his blood drop onto her feet, she can become a mermaid once more and return to her life in the sea. The Little Mermaid cannot bring herself to do this and instead throws herself and the dagger into the sea. Because of her selflessness, she is granted an afterlife as an earthbound ghost. She can earn an immortal soul by doing 300 years of good works for mankind, and she watches over the prince and his wife.

In the Disney version, Ariel is a little girl with big dreams. She has a stern patriarchal father who wants to keep her under lock and key for the sake of tradition, societal expectation, and safety. Yes, she falls in love with Prince Eric, but her main motivation for wanting to live on land is her dream of independence and liberation from her father’s rules. She is a privileged princess who has everything, but wants only the one thing she can’t have: life on land as a human. When she expresses this to her father, he destroys her secret trove of human treasures and forbids her to return to the surface, knowing that it would likely spell her demise.

Ariel’s signature song, “Part of Your World,” was an anthem for rebellion against the patriarchy. In fact, that is the central theme of the Disney version of the story. Ariel disobeys her father, uses witchcraft to do exactly what her father warned her not to do, and ends up getting herself kidnapped by the sea witch. Her father, King Triton, then has to intervene and save her by allowing himself to be captured in her place. Because of this, the whole sea kingdom ends up under the dominion of the evil sea witch, spelling certain doom for the merfolk. Eric risks his own life to kill the sea witch and frees King Triton. The king then (absurdly) apologizes for trying to stand in the way of his sixteen-year-old daughter’s foolish dreams. He forgives her disobedience and recklessness, which almost got the whole kingdom annihilated. Ariel gets everything she wants, and the message sent to young girls everywhere is that your dad is a big meanie head who just doesn’t want you to be independent and have fun. All the men in your life must sacrifice their very lives, and even all of society, if that’s what will make their little princess happy. Also, there are no negative consequences for being disobedient, lying, deceiving others, or practicing witchcraft, as long as it makes you happy. The “happiness” of young, beautiful women is all that really matters. The men will rescue you from all the trouble YOU are responsible for causing, because that’s all they’re good for. The End.

In contrast to the original Danish folk story, we see that the Disney version is devoid of any moral teaching whatsoever. The original story teaches that the most moral path possible, the one that leads to eternal salvation, is self-sacrifice for the love of others. It teaches young women that a life of service to those they love and to humankind is what saves them. The original Little Mermaid was willing to sacrifice her own life and even her chance at a soul to prevent harm to the man she loved, even if he married someone else. There was nothing in it for her whatsoever. Her motivation couldn’t be purer, and this is what saved her in the end, even though she did not receive temporal reward in this life. This is in line with Christian morality, which is probably why Disney, run by Jeffrey Katzenberg at the time, completely inverted it. The meaning and moral of the story were turned into the polar opposite of the original.