When mainstream media speaks, it is reality that gets mugged

Wednesday, February 1st, 2023

Richard Hanania recently decided to argue that everyone is wrong, and the media is actually good and honest. Bryan Caplan agrees that Hanania zeroes in on the right question — “Compared to what?” — and answers, the mainstream media is awful compared to silence:

The problem isn’t limited to race, gender, and sexual orientation, where Richard agrees that the media is crazy. The problem isn’t specific factual errors, either. The central problem is that the mainstream media’s standard operating procedure is to use selective presentation to spread absurd views about practically everything that matters.

Nor is this a recent failing! Mainstream media has been deplorable for as long as I can remember. Let me list some of its chief sins against the Big Picture.

Endlessly complaining about alleged social problems. Poverty, the environment, racism, Covid, Ukraine, terrorism, immigration, education, drugs, Elon… Even if all of the coverage were true, the media is still — per Huemer — aggressively promoting the absurd view that life is on balance terrible and reliably getting worse.

Painting government intervention as the obvious solution to social problems. Often the media openly asks loaded questions to this effect, like “Why isn’t the government doing more about this?!” with an exasperated tone. The rest of the time, they rely on heavy-handed insinuation, like “The people of Flint, Michigan feel like they’ve been forgotten.” Forgotten by who? Government, Our Savior, of course. Mainstream media barely considers whether past government policies have worked, or how much they cost, or whether they have important downsides.

Spreading innumeracy. The media endlessly shows grotesque stories about ultra-rare problems like terrorism, plane crashes, police murdering innocents, school shootings, toddlers dying of Covid, and the like. They show almost nothing about statistically common problems like car crashes or death by old age. The media doesn’t just spread paranoia; it spreads inverted paranoia.

Promoting Social Desirability Bias. The media standardly talks as if stuff that superficially sounds good is reliably good, and stuff that superficially sounds bad is reliably bad. As a result, they foment hostility to good stuff that sounds bad, and engender support for bad stuff that sounds good. If a firm downsizes due to technological change, what are the odds that the media chides, “This is how progress works! Tractors put a lot of farmers out of work, too, you know”? If the government cuts spending, what are the odds that the media muses, “We could interview the visible losers, but that’s hardly fair unless we interview the invisible winners. Which we can’t do, so let’s just move on.”

Whipping up support for the latest crusade. Like me, Hanania wasn’t happy with the media’s Covid coverage. But I say the problem goes way back. Just in my living memory, the media has promoted mass hysterias about Islamist Iran (“the hostage crisis”), the War on Drugs, “Free Kuwait,” the War on Terror, the Iraq War, the 2008 financial crisis, Covid, Black Lives Matter, and now the Ukraine War. The mere fact that they keep these topics in the news for months or years, with almost no skeptical or apathetic voices, is a thinly-veiled declaration that “These are the most important problems on Earth – and we should all enthusiastically be on the bandwagon to solve them.” Yet in hindsight, the problems the media deems important are highly arbitrary, and the bandwagon usually turns out to be a major problem in itself.

An old joke talks about being “mugged by reality.” When mainstream media speaks, it is reality that gets mugged. Truly, mainstream media is the great Mugger of Reality. Even when its individual stories are rock solid, it promotes a deeply false Big Picture of the world. And unless you have the intellectual steel to constantly remind yourself, “This is a horribly misleading perspective,” consuming media tends to make you believe this deeply false Big Picture.

Crowds can beat smart people, but crowds of smart people do best of all

Saturday, January 28th, 2023

Last January, Scott Alexander — along with amateur statisticians Sam Marks and Eric Neyman — solicited predictions from 508 people:

Contest participants assigned percentage chances to 71 yes-or-no questions, like “Will Russia invade Ukraine?” or “Will the Dow end the year above 35000?”

[…]

Are some people really “superforecasters” who do better than everyone else? Is there a “wisdom of crowds”? Does the Efficient Markets Hypothesis mean that prediction markets should beat individuals? Armed with 508 people’s predictions, can we do math to them until we know more about the future (probabilistically, of course) than any ordinary mortal?

After 2022 ended, Sam and Eric used a technique called log-loss scoring to grade everyone’s probability estimates. Lower scores are better. The details are hard to explain, but for our contest, guessing 50% for everything would give a score of 40.21, and complete omniscience would give a perfect score of 0.

[…]

As mentioned above: guessing 50% corresponds to a score of 40.2. This would have put you in the eleventh percentile (yes, 11% of participants did worse than chance).
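The post doesn’t spell out the scoring formula, but standard log-loss is simple to sketch. The question count of 58 below is my own back-solved assumption: 58 × ln 2 ≈ 40.2, which matches the quoted chance score (so the contest apparently used natural-log loss over the questions that resolved).

```python
import math

def log_loss(prob: float, outcome: bool) -> float:
    """Penalty for one question: -ln(probability assigned to what actually happened)."""
    p = prob if outcome else 1.0 - prob
    return -math.log(p)

def score(probs, outcomes) -> float:
    """Total log-loss across all questions; lower is better, omniscience scores 0."""
    return sum(log_loss(p, o) for p, o in zip(probs, outcomes))

# Guessing 50% costs ln(2) per question, regardless of how each question resolves.
n_questions = 58  # assumption: 58 * ln(2) ~= 40.2, the quoted chance score
chance = score([0.5] * n_questions, [True] * n_questions)
print(round(chance, 2))  # -> 40.2
```

Note the asymmetry that makes overconfidence so costly: assigning 99% to something that doesn’t happen costs −ln(0.01) ≈ 4.6, versus ≈ 0.7 for an honest coin flip.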

Philip Tetlock and his team have identified “superforecasters” — people who seem to do surprisingly well at prediction tasks, again and again. Some of Tetlock’s picks kindly agreed to participate in this contest and let me test them. The median superforecaster outscored 84% of other participants.

The “wisdom of crowds” hypothesis says that averaging many ordinary people’s predictions produces a “smoothed-out” prediction at least as good as experts. That proved true here. An aggregate created by averaging all 508 participants’ guesses scored at the 84th percentile, equaling superforecaster performance.

There are fancy ways to adjust people’s predictions before aggregating them that have outperformed simple averaging in previous experiments. Eric tried one of these methods, and it scored at the 85th percentile, barely better than the simple average.
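As a sketch of the two aggregation approaches (the forecaster numbers and the alpha parameter here are mine, not from the contest): simple averaging takes the mean of everyone’s probabilities, while the “fancy” adjustments are typically extremizing transforms that push the crowd mean away from 50%, on the theory that each forecaster holds only part of the available evidence.

```python
def mean(xs):
    return sum(xs) / len(xs)

def extremize(p: float, alpha: float = 2.5) -> float:
    """Push an aggregate probability away from 50%; alpha > 1 increases confidence."""
    return p**alpha / (p**alpha + (1 - p)**alpha)

# Hypothetical probabilities from five forecasters on a single yes-or-no question:
guesses = [0.6, 0.7, 0.65, 0.8, 0.55]
simple = mean(guesses)        # plain wisdom-of-crowds average
adjusted = extremize(simple)  # extremized aggregate, more confident than the mean
print(round(simple, 2), round(adjusted, 2))
```

With these numbers the mean is 0.66 and the extremized aggregate lands noticeably closer to 1 — which helps when the crowd leans the right way and hurts when it doesn’t, consistent with the adjustment being only “barely better” here.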

Crowds can beat smart people, but crowds of smart people do best of all. The aggregate of the 12 participating superforecasters scored at the 97th percentile.

Prediction markets did extraordinarily well during this competition, scoring at the 99.5th percentile — ie they beat 506 of the 508 participants, plus all other forms of aggregation. But this is an unfair comparison: our participants were only allowed to spend five minutes max researching each question, but we couldn’t control prediction market participants; they spent however long they wanted. That means prediction markets’ victory doesn’t necessarily mean they’re better than other aggregation methods — it might just mean that people who can do lots of research beat people who do less research. Next year’s contest will have some participants who do more research, and hopefully provide a fairer test.

The single best forecaster of our 508 participants got a score of 25.68. That doesn’t necessarily mean he’s smarter than aggregates and prediction markets. There were 508 entries, ie 508 lottery tickets to outperform the markets by coincidence. Most likely he won by a combination of skill and luck. Still, this is an outstanding performance, and must have taken extraordinary skill, regardless of how much luck was involved.

Accounting is a wonderful tool for converting tautologies into useful information

Wednesday, January 25th, 2023

Accounting is a wonderful tool for converting tautologies into useful information:

Here, for example, is a tautology: when a company spends money, somebody receives that money. And here is a useful mental model that helps investors think about booms and busts, time industry cycles, and spot second- and third-order outcomes of news: one company’s expenditures are, very often, another company’s revenue.

[…]

Higher returns to capital are a subsidy for reinvestment, and economies require a lot of reinvestment to keep going. Roads, railroads, ports, airports, power plants, factories, and homes are all long-lived assets with high upfront costs. For a country to have a lot of them, a smaller share of national income has to go to consumption so a larger share can go to investment instead. Importantly, shifting more returns to capital does not necessarily make all the capitalists rich (though it can have that effect!). It means there’s a race to identify good investments fast since there’s more money chasing them, and when this kind of policy continues for too long, the wave of capital looking for a return can end up subsidizing spending that simply doesn’t make economic sense. For example, in China circa 1980, pretty much any piece of physical infrastructure was probably worth either fixing up or tearing down and rebuilding entirely, so the country got good returns from holding wages down while reinvesting the proceeds of exports. Now that China is a richer country, with lots of infrastructure, it’s harder to find good homes for incremental money — but the money continues to flow.

[…]

High-income workers tend to save more money, and their savings rate goes up when they experience windfall gains. Lower-income workers are usually scrimping, deferring some purchases, and missing out on things they’d like to spend on, so higher wages for them tend to increase consumer spending.

[…]

When fab utilization is low, new demand just means that existing fabs need to run extra shifts. But when utilization gets high enough, it means the world needs more fabs, and needs more $200m-apiece EUV lithography machines to fill them.

This tends to be the big takeaway from looking at the world from a supply-chain perspective. When there’s slack in the system, or an ability to immediately respond to incremental spending, we see a pretty steady impact on every link in the supply chain: a surprise 1% increase in datacenter spending produces a 1% increase in spending on datacenter chips, which in turn leads replacement purchases of chipmaking equipment to tick up by about 1% — not because additional equipment was needed to increase supply, but because more is in use, which means more will need to be replaced.

But when there isn’t slack in the system, a small incremental increase in final demand can produce massive changes in total production capacity. The rough way to approximate this is to look at the useful life of the relevant investment, invert it into a depreciation rate, and then compare changes in demand to that depreciation rate. So if there’s some kind of asset that lasts for 10 years, another way to look at it is that in a given year, 10% of those assets are getting replaced as they wear out. A 2% increase in demand for whatever those assets produce, if they’re all being used at full capacity, means a 20% increase in demand for the assets.
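The rough approximation described above can be made concrete with the passage’s own numbers (a 10-year asset life and a 2% demand surprise):

```python
def capex_demand_change(useful_life_years: float, final_demand_growth: float) -> float:
    """Growth in demand for a capital asset, assuming full utilization.

    In steady state, 1/useful_life of the asset stock is replaced each year.
    A demand bump at full capacity must be met with net new assets stacked
    on top of that replacement flow, so asset orders swing far more than
    final demand does.
    """
    replacement_rate = 1.0 / useful_life_years           # 10-year life -> 10%/yr replaced
    new_purchases = replacement_rate + final_demand_growth
    return new_purchases / replacement_rate - 1.0        # proportional jump in asset orders

# A 2% rise in final demand for the output of a fully utilized 10-year-lived asset:
print(round(capex_demand_change(10, 0.02), 2))  # -> 0.2, ie a 20% jump in asset demand
```

This is the classic accelerator effect: the longer-lived the asset (the smaller the replacement flow), the more violently its order book amplifies small changes in final demand.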

The result was a precociously unified and homogenous polity

Wednesday, January 18th, 2023

Davis Kedrosky explains how institutional reforms built the British Empire:

In 1300, few English institutions actively promoted economic growth. The vast majority of the rural population was composed of unfree peasants bonded either to feudal lords or plots of land. Urban artisans were organized in guilds that regulated who could enter trades like glassblowing, leatherwork, and blacksmithing.

The English state was in turmoil following a century of conflict between Parliament and the Crown, and though nominally strong, it was deficient in fiscal capacity and infrastructural power. The regime lacked both the will and the means to pursue national development aims: integrating domestic markets, acquiring foreign export zones, securing private property, and encouraging innovation, entrepreneurship, and investment. England resembled what has been called a “natural state,” in which violence between factions determined the character of governance. Institutions pushed the meager spoils of an impoverished land into the pockets of rentiers.

By 1800, all this had changed. Britain’s rural life was characterized by agrarian capitalism, in which tenant farmers rented land from landowners and employed free wage labor, incentivizing investment and experimentation with new crops and methods. The preceding two centuries had seen the waning of the guilds, which now served more as organizations for social networking. Elites that had mostly earned their income by collecting taxes were now engaging in commercial enterprises themselves.

The state was now better-financed than any before in history, thanks to an effective tax administration and the ability to contract a mountain of public debt at modest interest rates. This allowed Britain to fund the world’s strongest navy to defend its interests from New York to Calcutta. The British government also intervened frequently in economic life, from enclosure acts to estate bills, and had limited its absolutist and rentier tendencies through the establishment of a strong parliament and professional bureaucracy.

Mark Koyama called the five centuries of institutional evolution the “long transition from a natural state to a liberal economic order.” The state capacity Britain built up during this early modern period went side by side with its emergence as a major commercial power and, within a few years, the first nation to endogenously achieve modern economic growth. Twenty-first-century economists increasingly deem institutions an “ultimate cause” of industrial development. The differences between North and South Korea, for example, are not the result of geographical disparities or long-standing cultural cleavages on either side of the 38th parallel. While it’s not exactly clear which kinds of institutions cause growth, it’s pretty obvious that some sorts inhibit it, if not stifle it altogether. The story of Britain’s rise to global power, then, is also the story of a 500-year-long transformation that saw institutional changes to law, property ownership, the organization of labor, and eventually the makeup of the British elite itself.

In his 1982 book The Rise and Decline of Nations, Mancur Olson argued that societies are engulfed in a perpetual struggle between producers and rent-seekers. The former invent and start businesses, increasing the national income; the latter try to profit off of the producers’ hard work by lobbying for special privileges like monopolies and tax farms. In contrast to Douglass North, who emphasized the importance of secure property rights for economic growth, Olson distinguished between good and bad forms. Bad property rights entitled a specific group to subsidies or protections that imposed costs on consumers and inhibited growth—like, say, a local monopoly on woolen cloth weaving allowing a guild to suppress machinery in favor of labor-intensive hand labor, lowering productivity and output.

Backed by its elite commercial and landed classes, the English and eventually British state came to favor the removal of the barriers to growth that had plagued most pre-modern economies. “Peace and easy taxes,” contra Smith, isn’t a sufficient condition for endogenous development, but its inverse—domestic chaos and rent-seeking—may be sufficient for its absence. But Britain’s real achievement was that its elite class, over time, began to align themselves with market liberalization. In France, by contrast, the nobility and king were constantly at odds, and the monarchy actually supported strong peasant tenures in opposition to large landowners. The pre-1914 Russian Empire would do the same thing.

Applying Olson’s framework to the seventeenth century, what we see is a decline of “rent-seeking distributional coalitions” like guilds, which helps to explain England’s “invention” of modern economic growth. “The success of the British experiment,” write the economists Joel Mokyr and John Nye,

was the result of the emergence of a progressive oligarchic regime that divided the surpluses generated by the new economy between the large landholders and the newly rising businessmen, and that tied both groups to a centralized government structure that promoted uniform rules and regulations at the expense of inefficient relics of an economic ancien régime.

Mokyr and Nye theorize that the state’s demand for revenues led it to strike a bargain with mercantile elites: if you pay taxes, you can use our ships and guns. This was the basis of a grand alliance between “Big Land” and “Big Commerce” who used the government as a broom to sweep away local interests. It manifested in projects like the Virginia Company, whose investors involved both the nobility and mercantile venture capitalists.

Parliament was the instrument for fulfilling the pact, issuing a raft of legislation altering local property rights to open up markets throughout the 1700s. Estate acts, for example, allowed landowners to improve, sell, and lease their plots. Statutory authorities permitted private organizations to set up turnpikes and canals, helping to unify the English market. This allowed firms to increase production, exploit economies of scale, and compete with local artisans. Enclosure acts, meanwhile, provided for the transformation of open-field farming communities, in which decisions were made at the village level, into fully private property.

The origins of this process, however, are deeper than Mokyr and Nye suggest. The development of a national state began soon after the Norman invasion of 1066. William the Conqueror replaced the Anglo-Saxon aristocracy with a Norman one, redistributing the country’s lands to his soldiers and generating a mostly uniform feudal society. The result was a precociously unified and homogenous polity—as opposed to France, which grew by absorbing linguistically distinct territories. English kings who were seeking to fund domestic or military projects called councils with individuals, usually the great barons of the nobility, whose cooperation and money they needed. With the waxing of the late medieval “commercial revolution,” they eventually included representatives of the ports, merchants, and Jewish financiers. Kings would make “contracts” with these factions—often customary restrictions on arbitrary taxation or the granting of other privileges—in exchange for resources. These councils later became Parliament.

Public choice theory is even more useful in understanding foreign policy

Monday, January 16th, 2023

Public choice theory was developed to understand domestic politics, but Richard Hanania argues — in Public Choice Theory and the Illusion of Grand Strategy — that public choice is actually even more useful in understanding foreign policy:

First, national defence is “the quintessential public good” in that the taxpayers who pay for “national security” compose a diffuse interest group, while those who profit from it form concentrated interests. This calls into question the assumption that American national security is directly proportional to its military spending (America spends more on defence than most of the rest of the world combined).

Second, the public is ignorant of foreign affairs, so those who control the flow of information have excess influence. Even politicians and bureaucrats are ignorant: for example, most(!) counterterrorism officials, including the chief of the FBI’s national security branch and a seven-term congressman then serving as vice chairman of a House intelligence subcommittee, did not know the difference between Sunnis and Shiites. The same favoured interests exert influence at all levels of society, including at the top: for example, intelligence agencies are discounted when they contradict what leaders think they know through personal contacts and publicly available material, as was the case in the run-up to the Iraq War.

Third, unlike policy areas like education, it is legitimate for governments to declare certain foreign affairs information to be classified i.e. the public has no right to know. Top officials leaking classified information to the press is normal practice, so they can be extremely selective in manipulating public knowledge.

Fourth, it’s difficult to know who possesses genuine expertise, so foreign policy discourse is prone to capture by special interests. History runs only once — cause and effect in foreign policy are hard to generalise into measurable forecasts; as Tetlock’s research demonstrated, geopolitical experts are worse than informed laymen at predicting world events. Unlike those who fought the tobacco companies that denied the harms of smoking, or the oil companies that denied global warming, the opponents of interventionists may never be able to muster evidence clear enough to win against those in power and the special interests backing them.

Hanania’s special interest groups are the usual suspects: government contractors (weapons manufacturers [1]), the national security establishment (the Pentagon [2]), and foreign governments [3] (not limited to electoral intervention).

What doesn’t have comparable influence, contrary to what many IR theorists argue, is business interests generally. Unlike weapons manufacturers, other businesses have to overcome the collective action problem, especially when some of them benefit from protectionism.

None of the precursors were in place

Sunday, January 15th, 2023

Once you understand how the Industrial Revolution came about, it’s easy to see why there was no Roman Industrial Revolution — none of the precursors were in place:

The Romans made some use of mineral coal as a heating element or fuel, but it was decidedly secondary to their use of wood and where necessary charcoal. The Romans used rotational energy via watermills to mill grain, but not to spin thread. Even if they had the spinning wheel (and they didn’t; they’re still spinning with drop spindles), the standard Mediterranean period loom, the warp-weighted loom, was roughly an order of magnitude less efficient than the flying shuttle loom, so the Roman economy couldn’t have handled all of the thread the spinning wheel could produce.

And of course the Romans had put functionally no effort into figuring out how to make efficient pressure-cylinders, because they had absolutely no use for them. Remember that by the time Newcomen is designing his steam engine, the kings and parliaments of Europe have been effectively obsessed with who could build the best pressure-cylinder (and then plug it at one end, making a cannon) for three centuries because success in war depended in part on having the best cannon. If you had given the Romans the designs for a Newcomen steam engine, they couldn’t have built it without developing whole new technologies for the purpose (or casting every part in bronze, which introduces its own problems) and then wouldn’t have had any profitable use to put it to.

All of which is why simple graphs of things like ‘global historical GDP’ can be a bit deceptive: there’s a lot of particularity beneath the basic statistics of production because technologies are contingent and path dependent.

The Industrial Revolution happened largely in one place

Saturday, January 14th, 2023

The Industrial Revolution was more than simply an increase in economic production, Bret Devereaux explains:

Modest increases in economic production are, after all, possible in agrarian economies. Instead, the industrial revolution was about accessing entirely new sources of energy for broad use in the economy, thus drastically increasing the amount of power available for human use. The industrial revolution thus represents not merely a change in quantity, but a change in kind from what we might call an ‘organic’ economy to a ‘mineral’ economy. Consequently, I’d argue, the industrial revolution represents probably just the second time in human history that as a species we’ve undergone a radical change in our production; the first being the development of agriculture in the Neolithic period.

However, unlike farming which developed independently in many places at different times, the industrial revolution happened largely in one place, once and then spread out from there, largely because the world of the 1700s AD was much more interconnected than the world of c. 12,000BP (‘before present,’ a marker we sometimes use for the very deep past). Consequently while we have many examples of the emergence of farming and from there the development of complex agrarian economies, we really only have one ‘pristine’ example of an industrial revolution. It’s possible that it could have occurred with different technologies and resources, though I have to admit I haven’t seen a plausible alternative development that doesn’t just take the same technologies and systems and put them somewhere else.

[…]

Fundamentally this is a story about coal, steam engines, textile manufacture and above all the harnessing of a new source of energy in the economy. That’s not the whole story, by any means, but it is one of the most important through-lines and will serve to demonstrate the point.

The specificity matters here because each innovation in the chain required not merely the discovery of the principle, but also the design and an economically viable use-case to all line up in order to have impact.

[…]

So what was needed was not merely the idea of using steam, but also a design which could actually function in a specific use case. In practice that meant both a design that was far more efficient (though still wildly inefficient) and a use case that could tolerate the inevitable inadequacies of the 1.0 version of the device. The first design to actually square this circle was Thomas Newcomen’s atmospheric steam engine (1712).

[…]

Now that design would be iterated on subsequently to produce smoother, more powerful and more efficient engines, but for that iteration to happen someone needs to be using it, meaning there needs to be a use-case for repetitive motion at modest-but-significant power in an environment where fuel is extremely cheap so that the inefficiency of the engine didn’t make it a worse option than simply having a whole bunch of burly fellows (or draft animals) do the job. As we’ll see, this was a use-case that didn’t really exist in the ancient world and indeed existed almost nowhere but Britain even in the period where it worked.

But fortunately for Newcomen the use case did exist at that moment: pumping water out of coal mines. Of course a mine that runs below the local water-table (as most do) is going to naturally fill with water which has to be pumped out to enable further mining. Traditionally this was done with muscle power, but as mines get deeper the power needed to pump out the water increases (because you need enough power to lift all of the water in the pump system in each movement); cheaper and more effective pumping mechanisms were thus very desirable for mining. But the incentive here can’t just be any sort of mining, it has to be coal mining because of the inefficiency problem: coal (a fuel you can run the engine on) is of course going to be very cheap and abundant directly above the mine where it is being produced and for the atmospheric engine to make sense as an investment the fuel must be very cheap indeed. It would not have made economic sense to use an atmospheric steam engine over simply adding more muscle if you were mining, say, iron or gold and had to ship the fuel in; transportation costs for bulk goods in the pre-railroad world were high. And of course trying to run your atmospheric engine off of local timber would only work for a very little while before the trees you needed were quite far away.

But that in turn requires you to have large coal mines, mining lots of coal deep under ground. Which in turn demands that your society has some sort of bulk use for coal. But just as the Newcomen Engine needed to out-compete ‘more muscle’ to get a foothold, coal has its own competitor: wood and charcoal. There is scattered evidence for limited use of coal as a fuel from the ancient period in many places in the world, but there needs to be a lot of demand to push mines deep to create the demand for pumping. In this regard, the situation on Great Britain (the island, specifically) was almost ideal: most of Great Britain’s forests seem to have been cleared for agriculture in antiquity; by 1000 only about 15% of England (as a geographic sub-unit of the island) was forested, a figure which continued to decline rapidly in the centuries that followed (down to a low of around 5%). Consequently wood as a heat fuel was scarce and so beginning in the 16th century we see a marked shift over to coal as a heating fuel for things like cooking and home heating. Fortunately for the residents of Great Britain there were surface coal seams in abundance, making the transition relatively easy; once these were exhausted deep mining followed, which at last by the late 1600s created the demand for coal-powered pumps finally answered effectively in 1712 by Newcomen: a demand for engines to power pumps in an environment where fuel efficiency mattered little.

With a use-case in place, these early steam engines continue to be refined to make them more powerful, more fuel efficient and capable of producing smooth rotational motion out of their initially jerky reciprocal motions, culminating in James Watt’s steam engine in 1776. But so far all we’ve done is gotten very good at pumping out coal mines – that has in turn created steam engines that are now fuel efficient enough to be set up in places that are not coal mines, but we still need something for those engines to do to encourage further development. In particular we need a part of the economy where getting a lot of rotational motion is the major production bottleneck.

Most of the time, the road is far too big, and the rest of the time, it’s far too small

Sunday, November 13th, 2022

Casey Handmer did a bunch of transport economics when he worked at Hyperloop:

Let’s not bury the lede here. As pointed out in The Original Green blog, the entire city of Florence, in Italy, could fit inside one Atlanta freeway interchange. One of the most powerful, culturally important, and largest cities for centuries in Europe with a population exceeding 100,000 people. For readers who have not yet visited this incredible city, one can walk, at a fairly leisurely pace, from one side to the other in 45 minutes.

[…]

There are thousands of cities on Earth and not a single one where mass car ownership hasn’t led to soul-destroying traffic congestion.

Cars are both amazing and terrible:

Imagine there existed a way to move people, children, and almost unlimited quantities of cargo point to point, on demand, using an existing public network of graded and paved streets practically anywhere on Earth, in comfort, style, speed, and safety. Practically immune to weather. Operable by nearly any adult with only basic training, regardless of physical (dis)ability. Anyone who has made a habit of camping on backpacking trips knows well the undeniable luxury of sitting down in air-conditioned comfort and watching the scenery go by. At roughly $0.10/passenger mile, cars are also incredibly cheap to operate.

[…]

Some American cities have nearly 60% of their surface area devoted to cars, and yet they are the most congested of all. Would carving off another 10% of land, worth trillions in unimproved value alone, solve the problem? No. According to simulations I’ve run professionally, latent demand for surface transport in large cities exceeds supply by a factor of 30. Not 30%. 3000%. That is, Houston could build freeways to every corner of the city 20 layers deep and they would still suffer congestion during peak hours.

Why is that? Roads and freeways are huge, and expensive to build and maintain, but they actually don’t move very many people around. Typically peak capacity is about 1000 vehicles per lane per hour. In most cities, that means 1000 people/lane/hour. This is a laughably small number. All the freeways in LA over the four hour morning peak move perhaps 200,000 people, or ~1% of the overall population of the city. 30x capacity would enable 30% of the population to move around simultaneously.
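The quoted figures hang together arithmetically. This sketch back-solves the implied number of lane-equivalents running at capacity; the metro population of ~20 million is my assumption, chosen so that “perhaps 200,000” comes out to the ~1% in the passage:

```python
PEOPLE_PER_LANE_HOUR = 1_000   # quoted peak throughput per freeway lane
PEAK_HOURS = 4                 # quoted morning peak window
PEOPLE_MOVED = 200_000         # quoted total moved by all LA freeways over the peak
METRO_POPULATION = 20_000_000  # assumption: makes 200k come out to ~1% as quoted

lane_equivalents = PEOPLE_MOVED / (PEOPLE_PER_LANE_HOUR * PEAK_HOURS)
share_of_city = PEOPLE_MOVED / METRO_POPULATION

print(lane_equivalents)  # -> 50.0 lane-equivalents saturated for the whole peak
print(share_of_city)     # -> 0.01, ie the ~1% of the population in the passage
```

Fifty saturated lane-equivalents is a plausible figure for LA’s freeway network, which is what makes the 1% number so damning: even a vast network at full throughput moves only a sliver of the city.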

[…]

Spacing between the bicycles, while underway, is a few meters, compared to 100 m for cars with a 3.7 m lane width. Bicycles and pedestrians take up roughly the same amount of space.
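Throughput is just speed divided by headway, which is why those spacing numbers matter so much. A quick sketch (the speeds here are my own illustrative assumptions, not figures from the original):

```python
# Lane throughput = speed / spacing between vehicles.

def vehicles_per_hour(speed_kmh: float, spacing_m: float) -> float:
    return speed_kmh * 1000 / spacing_m

cars = vehicles_per_hour(100, 100)  # freeway speed, 100 m headway
bikes = vehicles_per_hour(15, 3)    # cycling speed, 3 m headway

print(cars, bikes)  # 1000.0 vehicles/hour vs 5000.0 bicycles/hour
```

Note that the car figure reproduces the ~1000 vehicles/lane/hour peak capacity quoted earlier, while bicycles, despite being far slower, move several times as many people through the same lane width.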

[…]

Like a lot of public infrastructure, the cost comes down to patterns of utilization. For any given service, avoiding congestion means building enough capacity to meet peak demand. But revenue is a function of average demand, which may be 10x lower than the peak. This problem occurs in practically all areas of life that involve moving or transforming things. Roads. Water. Power. Internet. Docks. Railways. Computing. Organizational structures. Publishing. Tourism. Engineering.

This effect is intuitively obvious for roads. Most of the time, the roads in my sleepy suburb of LA are lifeless expanses of steadily crumbling asphalt baking in the sun. The adjacent houses command property prices as high as $750/sqft, and yet every house has half a basketball court’s worth of nothing just sitting there next to it. Come peak hour, the road is choked with cars all trying to get home, because even half a basketball court per house isn’t enough to fit all the cars that want to move there at that moment. And of an evening, on-street parking is typically overwhelmed, because every car, which spends >95% of its life empty and unused, now needs 200 sqft of kerb to hang out. Most of the time, the road is far too big, and the rest of the time, it’s far too small.

People often underestimate the cost of having resources around that they aren’t currently using. And since our culture expects roads and parking to be limitless, available, and free, we can’t rely on market mechanisms to correctly price and trade the cost. Seattle counted how many parking spaces were in the city and came up with 1.6 million. That’s more than five per household! Obviously most of them are vacant most of the time, just sitting there consuming space, and yet there will never be enough when they are needed!
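The peak-versus-average mismatch can be made concrete with a toy demand profile. The hourly numbers below are invented purely for illustration:

```python
import statistics

# Hypothetical trips per hour over one day, with morning and evening peaks.
hourly_demand = [2]*7 + [10, 40, 25] + [8]*7 + [35, 30, 12] + [4]*4  # 24 hours

peak = max(hourly_demand)
average = statistics.mean(hourly_demand)

# Avoiding congestion means building enough capacity for the peak...
capacity = peak
# ...but revenue (and typical utilization) tracks the average.
utilization = average / capacity

print(f"peak={peak}, average={average:.1f}, utilization={utilization:.0%}")
```

In this toy profile the infrastructure sits roughly three-quarters idle on average, yet any smaller build would congest twice a day, which is exactly the bind described above for roads, water, power, and the rest.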

The power to control the creation of money has moved from central banks to governments

Wednesday, October 19th, 2022

Russell Napier, who experienced the Asian Financial Crisis 25 years ago at first hand at the brokerage house CLSA in Hong Kong, wrote for years about the deflationary power of the globalised world economy — before predicting inflation two years ago:

This is structural in nature, not cyclical. We are experiencing a fundamental shift in the inner workings of most Western economies. In the past four decades, we have become used to the idea that our economies are guided by free markets. But we are in the process of moving to a system where a large part of the allocation of resources is not left to markets anymore. Mind you, I’m not talking about a command economy or about Marxism, but about an economy where the government plays a significant role in the allocation of capital. The French would call this system «dirigiste». This is nothing new, as it was the system that prevailed from 1939 to 1979. We have just forgotten how it works, because most economists are trained in free market economics, not in history.

Why is this shift happening?

The main reason is that our debt levels have simply grown too high. Total private and public sector debt in the US is at 290% of GDP. It’s at a whopping 371% in France and above 250% in many other Western economies, including Japan. The Great Recession of 2008 has already made clear to us that this level of debt was way too high.

How so?

Back in 2008, the world economy came to the brink of a deflationary debt liquidation, where the entire system was at risk of crashing down. We’ve known that for years. We can’t stand normal, necessary recessions anymore without fearing a collapse of the system. So the level of debt – private and public – to GDP has to come down, and the easiest way to do that is by increasing the growth rate of nominal GDP. That was the way it was done in the decades after World War II.

What has triggered this process now?

My structural argument is that the power to control the creation of money has moved from central banks to governments. By issuing state guarantees on bank credit during the Covid crisis, governments have effectively taken over the levers to control the creation of money. Of course, the pushback to my prediction was that this was only a temporary emergency measure to combat the effects of the pandemic. But now we have another emergency, with the war in Ukraine and the energy crisis that comes with it.

You mean there is always going to be another emergency?

Exactly, which means governments won’t retreat from these policies. Just to give you some statistics on bank loans to corporates within the European Union since February 2020: Out of all the new loans in Germany, 40% are guaranteed by the government. In France, it’s 70% of all new loans, and in Italy it’s over 100%, because they migrate old maturing credit to new, government-guaranteed schemes. Just recently, Germany has come up with a huge new guarantee scheme to cover the effects of the energy crisis. This is the new normal. For the government, credit guarantees are like the magic money tree: the closest thing to free money. They don’t have to issue more government debt, they don’t need to raise taxes, they just issue credit guarantees to the commercial banks.

And by controlling the growth of credit, governments gain an easy way to control and steer the economy?

It’s easy for them in the way that credit guarantees are only a contingent liability on the balance sheet of the state. By telling banks how and where to grant guaranteed loans, governments can direct investment where they want it to, be it energy, projects aimed at reducing inequality, or general investments to combat climate change. By guiding the growth of credit and therefore the growth of money, they can control the nominal growth of the economy.

And given that nominal growth consists of real growth plus inflation, the easiest way to do this is through higher inflation?

Yes. Engineering a higher nominal GDP growth through a higher structural level of inflation is a proven way to get rid of high levels of debt. That’s exactly how many countries, including the US and the UK, got rid of their debt after World War II.
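The mechanism is simple compounding: if the debt stock grows more slowly than nominal GDP, the ratio shrinks year after year. A sketch with illustrative growth rates (the 290% starting point is from the interview; the growth rates are my own example, not Napier’s):

```python
# Debt/GDP erosion through higher nominal growth. Illustrative numbers only.

debt_to_gdp = 2.90         # the 290%-of-GDP US figure quoted above
nominal_gdp_growth = 0.08  # e.g. 2% real growth plus 6% inflation
debt_growth = 0.04         # debt still grows, but more slowly than nominal GDP

for year in range(10):
    debt_to_gdp *= (1 + debt_growth) / (1 + nominal_gdp_growth)

print(f"debt/GDP after 10 years: {debt_to_gdp:.0%}")
```

With even a modest four-point gap between nominal growth and debt growth, the ratio falls by roughly a third in a decade, without any default or austerity.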

[…]

What tells you that this is in fact happening today?

When I see that we are headed into a significant growth slowdown, even a recession, and bank credit is still growing. The classic definition of a banker used to be that he lends you an umbrella but would take it away at the first sight of rain. Not this time. Banks keep lending, they even reduce their provisions for bad debt. The CFO of Commerzbank was asked about this fact in July, and she said that the government would not allow large debtors to fail. That, to me, was a transformational statement. If you are a banker who believes in private sector credit risk, you stop lending when the economy is headed into a recession. But if you are a banker who believes in government guarantees, you keep lending. This is happening today. Banks keep lending, and nominal GDP will keep growing. That’s why, in nominal terms, we won’t see an economic contraction.

Telemedicine is rehospitalized

Saturday, October 15th, 2022

Before Covid, telehealth accounted for less than 1% of outpatient care. Then it shot up — to as high as 40% of outpatient visits for mental health and substance use. Now telemedicine is declining:

Over the past year, nearly 40 states and Washington, D.C., have ended emergency declarations that made it easier for doctors to use video visits to see patients in another state, according to the Alliance for Connected Care, which advocates for telemedicine use.

Alex Tabarrok knows people who have had to travel over the Virginia–Maryland border just to find a Wi-Fi spot to have a telemedicine appointment with their Maryland physician.

When people get richer, they get more resilient

Friday, October 14th, 2022

We are incessantly told about disasters — heat waves, floods, wildfires, and storms — when people have become much, much safer from all these weather events over the past century:

In the 1920s, around half a million people were killed by weather disasters, whereas in the last decade the death toll averaged around 18,000. This year, like both 2020 and 2021, is tracking below that. Why? Because when people get richer, they get more resilient.

Weather-fixated television news would make us think disasters are all getting worse. They’re not. Around 1900, about 4.5 per cent of the land area of the world burned every year. Over the last century, this declined to about 3.2 per cent. In the last two decades, satellites show even further decline: in 2021 just 2.5 per cent burned. This has happened mostly because richer societies prevent fires. Models show that by the end of the century, despite climate change, human adaptation will mean even less burning.

And despite what you may have heard about record-breaking costs from weather disasters — mainly because wealthier populations build more expensive houses along coastlines — damage costs are actually declining, not increasing, as a per cent of GDP.

But it’s not only weather disasters that are getting less damaging despite dire predictions. A decade ago, environmentalists loudly declared that Australia’s magnificent Great Barrier Reef was nearly dead, killed by bleaching caused by climate change. The Guardian newspaper even published an obituary. This year, scientists revealed that two-thirds of the Great Barrier Reef shows the highest coral cover seen since records began in 1985. The good-news report got a fraction of the attention the bad news did.

Not long ago, environmentalists constantly used pictures of polar bears to highlight the dangers of climate change. Polar bears even featured in Al Gore’s terrifying movie An Inconvenient Truth. But the reality is that polar bear numbers have been increasing — from somewhere between 5,000 and 10,000 polar bears in the 1960s up to around 26,000 today. We don’t hear this news, however. Instead, campaigners just quietly stopped using polar bears in their activism.

Talent and not money is the truly scarce variable

Tuesday, October 4th, 2022

Rob Henderson finds Tyler Cowen’s latest book, written with Daniel Gross, thorough yet breezy, providing useful tips for how to develop a talent-spotting mindset with insights from psychometrics, management, economics, and sociology:

Cowen and Gross note that in the U.S., from 1980 to 2000, the main cause of income inequality was whether a person graduated from college. But from 2000 to 2017, income inequality primarily existed within educational groupings. In other words, talent appears to be more responsible than education for economic returns.

Cowen and Gross each describe how often they reject proposals, and they conclude that “talent and not money is the truly scarce variable.” But where does it come from? They acknowledge that talent can differ between individuals, but they also stress the importance of practice. Indeed, those with the potential to cultivate serious talent sometimes practice to the point of obsession. Discussing which attributes predict eminence in a field, psychology professor David Lubinski has said that passion for work is key, and that highly creative people tend to be “almost myopically” fixated on work.

Relatedly, Cowen and Gross observe, “If you are hiring a writer, look for signs that the person is writing literally every day. If you are hiring an executive, try to discern what they are doing all the time to improve networking, decision-making, and knowledge of the sectors they work in.” Developing the habit of practice and self-discipline — the authors describe it as “sturdiness” — is critical for talent acquisition. “Sturdiness is the quality of getting work done every day, with extreme regularity and without long streaks of non-achievement,” they write. “If you are a writer, sturdiness is a very powerful virtue, even if you do not always feel you are being extremely productive.”

Accordingly, the book cites research indicating that perseverance is a stronger predictor of success than passion: when it comes to achievement, persistence pays off more than pure enthusiasm.

The authors’ favorite interview question, about browser tabs, is meant to probe whether a person spends his or her free time practicing. What the book describes as “downtime revealed preferences” are more interesting than “stories about your prior jobs.” For instance, asking what newsletters or subreddits a person reads is often more illuminating than asking what the person did at their previous job.

The book is very much about identifying high performers, as opposed to average workers. This is particularly true of its interview section, which gives guidance on unstructured, as opposed to structured, interviews. Most research indicates that interviews are more effective for higher-level jobs.

Talent provides several fascinating questions designed to yield interesting answers. How did you prepare for this interview? What’s a story one of your references might tell me when I call them? Which of your beliefs are you most likely wrong about? Whether the candidate can draw on intellectual and emotional resources to answer is a sign of broader stores of intellect and energy that he or she will bring to the job. The authors suggest that interviewers should not be afraid to let a question hang in the air after asking it; better to hold the tension to make clear you expect an answer.

The authors suggest using challenging and unusual questions to identify those with more style than substance. As they put it, “Beware of verbally adept storytellers.” Most of us have a bias toward well-spoken and articulate individuals. Bear this in mind, for it can lead you to hire what the authors describe as “glib but unsubstantial people.” They conclude this line of advice with, “Do not overestimate the importance of a person’s articulateness.”

Grids have excess capacity 95% of the time

Monday, August 29th, 2022

There are many ways Texas’s grid could have avoided disaster during winter storm Uri:

Being synchronized to one of the other wide-area grids in the US is one way. Another is not to have ~50% of its households rely on electric heat.

Cold weather causes demand to spike while also hampering supply. ERCOT is not the only grid to have suffered significant supply outages during cold weather. But other grids like PJM in 2014 were bailed out by imports and lower shares of customers using electric heating.

Customers using electric heat don’t pay the costs of their impact on the grid when they only pay a fixed price per kilowatt-hour. Electric resistance heaters and air-source heat pumps see power usage spike dramatically during the coldest events. The spike shows up as only a slight increase in kilowatt-hours on the monthly bill, but peak power might be two or three times higher than the norm.
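A rough sketch of why the peak runs so far ahead of the average: heating power scales roughly with the indoor–outdoor temperature difference. The home parameters below are hypothetical, not utility data:

```python
INDOOR_F = 68
HEAT_LOSS_KW_PER_DEG_F = 0.1  # hypothetical home: 0.1 kW per deg F of difference

def heating_load_kw(outdoor_f: float) -> float:
    # Power needed to hold indoor temperature against heat loss.
    return HEAT_LOSS_KW_PER_DEG_F * max(0.0, INDOOR_F - outdoor_f)

typical_winter = heating_load_kw(40)  # an ordinary cold day: ~2.8 kW
cold_snap = heating_load_kw(10)       # an event like Uri: ~5.8 kW

print(cold_snap / typical_winter)  # roughly 2x, in line with the quote
```

A flat per-kWh tariff bills both days at the same rate, so the customer never sees the cost of that doubled peak; the grid does.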

The owner can by right operate a bar, a restaurant, a boutique, a small workshop on the ground floor

Monday, August 22nd, 2022

There are a number of reasons small business in Tokyo is so vibrant:

A huge one is a question you can ask of cities around the world: how many flexible microspaces are available across your city? By microspaces, I mean small little nooks and crannies in the commercial or residential sectors of the city that you can do a lot of different things with, without needing to pay a huge amount of money in rent.

This is going to sound wild to anyone who lives in the US, but for any two-story rowhouse in Tokyo, the owner can by right operate a bar, a restaurant, a boutique, a small workshop on the ground floor — even in the most residentially zoned sections of the city. That means you have an incredible supply of potential microspaces. Any elderly homeowner could decide to rent out the bottom floor of their place to some young kid who wants to start a coffee shop, for example. When you look at what we call yokocho alleyways — charming, dingy alleyways that grew out of the black markets post-World War II, which are some of the most iconic and beloved sections of the city now — it’s all of these tiny little bars and restaurants just crammed into every available space.

[...]

Liquor licenses are extremely cheap and easy to get. A liquor license in an American city can sometimes run up to $500,000. At that price, you’re not going to have a little four-seat, mom-and-pop bar for the locals. So those regulatory and policy choices that we make fundamentally determine what our cities are going to feel like.

Crime’s costs are even higher than we thought

Tuesday, August 16th, 2022

How bad is crime?, Ben Southwood asks:

In the paper, whose calculations were done in 2006, Americans were willing to pay $25,000 to avert a burglary across their society, $70,000 to avoid a serious assault, and nearly $10m to avoid a murder.

A more practical situation arises when juries award money to ‘make people whole’ for physical injury, pain, suffering, mental anguish, shock, and discomfort that they have experienced due to some illegal action. For example, one 68-year-old lady was shot through the spine in a drive-by shooting and left paraplegic — a jury gave her $2.7m in addition to her medical costs.

If you combine these awards, in a large sample, with separate ‘physician impairment ratings’ — basically how bad doctors think the injury is compared to death — then this is another method of estimating the statistical value of a life, something we have hundreds of estimates for, which typically comes out somewhere above $5m, depending on the wealth of the country and the methodology.
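The method boils down to a one-line calculation: divide the award by the physician-rated severity fraction of the injury relative to death. The 50% rating below is a made-up illustration, not a figure from the paper:

```python
# Implied value of a statistical life (VSL) from a single jury award.
# The impairment fraction is hypothetical, for illustration only.

award = 2_700_000          # the paraplegia award quoted above
impairment_fraction = 0.5  # hypothetical physician rating: injury vs. death

implied_vsl = award / impairment_fraction
print(implied_vsl)  # 5.4 million, consistent with the ">$5m" range quoted
```

In the actual studies this division is done across large samples of awards and ratings, which is what makes the resulting VSL estimates stable.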

[...]

Their central estimate is that crime costs America $2.6 trillion annually, mostly coming from violent crime. This is about 12 percent of US GDP. By this metric, it would be, in GDP terms, one of the US’s biggest problems, on par with housing. For a country like the UK with a murder rate about five times lower, the problem is probably about five times smaller.

I actually think the American problem is considerably bigger than this estimate, because this study only includes the costs of crimes that actually get committed. However, people try their damnedest to avoid being the victims of crime. This leads to many extremely socially costly behaviours.

What are some of these extremely socially costly behaviors?

For example, one study by Julie Cullen and Steven Levitt finds that when crime rates across the city rise ten percent, city centre populations fall one percent — with people generally moving to the suburbs. One crime tends to push one person out of the city centre, on average.

Quantifying this in terms of a real-world city, the roughly 400 percent increase in New York City’s murders from 1955 to 1975 (from around 300 to over 1,500 per year) would have been expected to empty the densest parts of the city out by about 40 percent, assuming that other crimes rose in line with murder. And indeed, the population of the centre city — Manhattan — fell about 35 percent over that period, while the population and physical extent of the suburbs grew rapidly.
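The arithmetic behind that prediction is just the Cullen–Levitt elasticity applied to the murder figures, assuming (as the text does) that other crimes rose in line with murders:

```python
# 10% more crime -> 1% fewer city-centre residents (Cullen & Levitt),
# i.e. an elasticity of 0.1.
ELASTICITY = 0.1

murders_1955 = 300
murders_1975 = 1500
crime_increase = (murders_1975 - murders_1955) / murders_1955  # 4.0, i.e. 400%

predicted_pop_decline = ELASTICITY * crime_increase  # 0.40, i.e. 40%
print(predicted_pop_decline)  # compare Manhattan's observed ~35% decline
```

The predicted 40 percent decline lands remarkably close to the ~35 percent Manhattan actually lost over those two decades.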

Murders in New York City peaked in 1990 at over 2,000 per year, roughly as population reached its nadir in the city centre. They have since cratered by over three quarters, to about 300. This should have driven city-centre population back up massively, far more than it actually recovered, but building restrictions have prevented anything like a full rebound, driving up prices instead.

So this story implies that crime in city cores drives people to the suburbs, creating urban sprawl. If so, then crime’s costs are even higher than we thought.