Their overriding goal is not enlightenment

Thursday, March 14th, 2019

The admissions scandal is an opportunity to separate the lofty mythology of college from the sordid reality:

Despite the grand aspirations that students avow in their admissions essays, their overriding goal is not enlightenment, but status.

Consider why these parents would even desire to fake their kids’ SAT scores. We can imagine them thinking, I desperately want my child to master mathematics, writing and history — and no one teaches math, writing and history like Yale does! But we all know this is fanciful. People don’t cheat because they want to learn more. They cheat to get a diploma from Yale or Stanford — modernity’s preferred passport to great careers and high society.

What, then, is the point of sneaking into an elite school, if you lack the ability to master the material? If the cheaters planned to major in one of the rare subjects with clear standards and well-defined career paths — like computer science, electrical engineering or chemistry — this would be a show-stopping question. Most majors, however, ask little of their students — and get less. Standards were higher in the 1960s, when typical college students toiled about 40 hours a week. Today, however, students work only two-thirds as hard. Full-time college has become a part-time job.

If computer-science students slacked off like this, employers would soon notice. Most of their peers, however, have little reason to dread a day of reckoning — because, to be blunt, most of what college students study is irrelevant in the real world. Think of all the math, history, science, poetry and foreign language you had to study in school — if you can. Indeed, you’ve probably long since forgotten most of what you learned about these subjects. Few of us use it, so almost all of us lose it. The average high school student studies a foreign language for a full two years, but, according to my own research, less than 1% of American adults even claim they gained fluency in a classroom.

Why do employers put up with such a dysfunctional educational system? Part of the answer is that government and donors lavish funding on the status quo with direct subsidies, student loans and alumni donations. As a result, any unsubsidized alternative, starved of resources, must be twice as good to do half as well. The deeper answer, though, is that American higher education tolerably performs one useful service for American business: certification. Most students at places like Yale and Stanford aren’t learning much, but they’re still awesome to behold if you’re looking to fill a position. Ivy Leaguers are more than just smart; when tangible rewards are on the line, they’re hardworking conformists. They hunger for conventional success. From employers’ point of view, it doesn’t matter if college fosters these traits or merely flags them. As long as elite students usually make excellent employees, the mechanism doesn’t matter.

So why cheat your kid into the Ivy League or a similarly elite school? For the lifelong benefits of corrupt certification. When I was in high school, my crusty health teacher loved to single out a random teen and scoff, “You’re wanted … for impersonating a student.” If you can get your less-than-brilliant, less-than-driven child admitted, he’ll probably get to impersonate a standardly awesome Ivy League graduate for the rest of his life. Of course, the superrich parents the FBI is accusing could have just let their kids skip college and live off their trust funds, but it’s not merely a matter of money. It’s also about youthful self-esteem — and parental bragging rights.

The Complexity of the World repeatedly makes fools of them

Thursday, February 7th, 2019

Bryan Caplan is a fan of dystopian fiction, but until last December he had overlooked Henry Hazlitt’s The Great Idea (subsequently republished as Time Will Run Back), fearing a long-winded, clunky version of Economics in One Lesson. He gave it a chance, and his gamble paid off:

I read the whole thing (almost 400 pages) on a red-eye flight – feeling wide awake the whole way.

The book’s premise: Centuries hence, mankind groans under a world Communist government centered in Moscow. People live in Stalinist fear and penury. Censorship is so extreme that virtually all pre-revolutionary writings have been destroyed; even Marx has been censored, to prevent anyone from reverse engineering whatever “capitalism” was. However, due to a marital dispute, Peter Uldanov, the dictator’s son, has been raised in an island paradise, free of both the horrors and the rationalizations of his dystopian society. When the dictator nears death, he brings Peter to Moscow and appoints him his heir. The well-meaning but naive Peter is instantly horrified by Communism and sets out to fix it. In time, he rediscovers free-market economics and sets the world to rights.

Yes, this sounds trite to me, too. But Hazlitt is a master of pacing. It takes almost 200 pages before any of Peter’s reforms start to work. Until then, it’s one false start after another, because so many of the seemingly dysfunctional policies of the Stalinist society are remedies for other dysfunctional policies.

[...]

In most literary dialogues, at least one of the characters has the answers. (“Yes, Socrates, you are quite right!”) What’s novel about Hazlitt’s dialogues is that all the characters are deeply confused. Even when they sound reasonable, the Complexity of the World repeatedly makes fools of them.

The Great Idea was originally published in 1951. Stalin was still alive.

Pave the muddy paths

Monday, February 4th, 2019

We often think of “law” and “legislation” as synonyms, Mike Munger notes, but Hayek argued otherwise:

Habits that are shared might be called “customs,” informal rules that might be written down nowhere. These are agreements, in the sense that we all agree that is the way we do things, even though we never actually sat down and signed anything.

A while back I wrote about the Pittsburgh left turn as an example of such a custom. It is important that the habit of waiting for someone to turn left in front of you be “agreed” on, in the sense that the expectation is widely shared — and met — because otherwise it wouldn’t be effective in making traffic move faster. These customs can come to govern behavior, however, precisely because they shape expectations, and violating expectations may be expensive or dangerous.

Those customs, if they consistently lead to useful outcomes, are “laws.” They are discoverable by experience and emerge in the form of traditions. But it is useful to write them down so that they can be enforced more effectively and can be easily learned by new generations. Laws that are written down are rules, commands, and prohibitions we call “legislation.”

The problem is that legislation need not arise from law at all.

Hayek was rightly concerned about the conceit that experts know what is best for everyone else:

I often illustrate this with what I call the Hayek Sidewalk Plan. Imagine that a new university has been built, and you are on the committee charged with laying out the sidewalks. What would you do?

You might walk around, look at aerial maps of the campus, and draw lines to try to guess where people will want to walk. Or you might take a purely aesthetic view of the problem, and put the sidewalks in places or patterns that are pleasing to the eye as you look out the windows of the administration building.

But all of that is legislation. No individual, or small committee of individuals, could possibly have enough information or foresight to be able to know in advance where people are going to want to walk. After all, universities are peopled by broadly diverse groups, with heterogeneous plans and purposes. People are often willing to walk on the sidewalks, if that serves their purpose at that point. But you probably don’t want to build a sidewalk from every doorway to every other doorway on the campus.

What would a law look like, in this setting? No one person, after all, has any effect walking on the grass, and all the different plans and purposes, taken one at a time, contain no information that you can use. But there is a physical manifestation of the aggregation of all these plans and purposes working themselves out over time. I don’t intend to make a path, and neither do you. But if enough of us, over time, find it useful to walk in the same place to accomplish our own idiosyncratic purposes, a visible record of the shared pattern emerges: a muddy path.

So, the law for the Hayek Sidewalk Plan committee will be discoverable if we adjourn for six months or so and then have a drone take some overhead photographs. It is clear now where people, acting as individuals but observable together in the shared result called a muddy path, want the sidewalks to be placed. And the task of the committee is simply to “legislate” by paving the muddy paths.

If we think of the process of discovering law as “looking for the muddy paths,” and legislation as “paving the muddy paths,” we have a simple but quite powerful way of thinking about the rule of law.

Affordability has its costs

Saturday, February 2nd, 2019

Besides its obvious shortcomings, Los Angeles has a number of subtle problems that go back to decisions made long ago:

Much of the Los Angeles area would be better today if early city fathers had realized how valuable the property would eventually become. Los Angeles has quite high population density these days, but lacks urban amenities. The San Fernando Valley on the north side of the city of Los Angeles, for instance, was built up under the assumption that it would remain a rural retreat from the big city, but it now has over 1.75 million residents.

In contrast, Chicago was laid out after its 1871 fire by men like Daniel Burnham who took “Make no little plans” as their motto. L.A. wasn’t. And it’s hard to fix urban-planning mistakes afterward.

To take a seemingly trivial example, Chicago, where I lived from 1982 to 2000, was set up with most streets having sidewalks, and the sidewalks are usually wide enough for two people to walk abreast while conversing. In contrast, sidewalks on residential streets in Los Angeles often peter out at the developers’ whims, and those that exist are usually a little too narrow for two people. So pedestrians end up conversing over their shoulders.

One reason for the sidewalk shortage is that Los Angeles was the first major city in America to develop after the advent of the automobile.

Another is that much of it was laid out to be affordable after the stock-market crash of 1929. That introduced a more democratic, less elitist ethos. There’s a lot to be said for the remarkable living standards of average people in postwar L.A., but the city is paying the price today for cutting corners back then.

Chicago, in contrast, was mostly built during the era before the New Deal when upscale bourgeois values dominated tastes. For instance, my Chicago condo was in a three-story brick building on an elegant block of other three-story brick buildings. It was a very respectable-looking block, with every building striving to live up to proper bourgeois standards.

This doesn’t mean that everybody can keep up appearances at all times. My Chicago condo had been built in 1923 with optimistic touches like nine-foot ceilings. During the Depression, the owners must have been ruined, as the units were each split into two apartments. But a couple of generations later, the building was rehabbed, and the tall ceilings and other generous touches were still there.

Los Angeles, in contrast, reflects an odd combination of mass-market needs and celebrity tastes.

In 1915, Charlie Chaplin, rapidly becoming the most famous man in the world, lived in Chicago a couple of blocks from where my old condo would go up. But in 1916, as filmmakers realized the advantages of sunshine, he moved from Chicago to Los Angeles.

The movies did in any chance of Los Angeles developing physically along bourgeois lines. Film people valued privacy and self-expression. Screenwriter Nathanael West’s 1939 novel The Day of the Locust complained of the excessive diversity of Hollywood houses:

But not even the soft wash of dusk could help the houses. Only dynamite would be of any use against the Mexican ranch houses, Samoan huts, Mediterranean villas, Egyptian and Japanese temples, Swiss chalets, Tudor cottages, and every possible combination of these styles that lined the slopes of the canyon.

One of the most popular architects of celebrity homes was an African-American named Paul Revere Williams, whose view, in contrast to that of the more academically celebrated Los Angeles architects such as Schindler and Neutra, was that his movie-star clients paid him to make their whims come true. So if, say, Frank Sinatra desired a Japanese Modern house with superb acoustics for his state-of-the-art stereo, Williams would figure out how to give the client what he wanted.

Another need celebrities have is privacy from tourists. Not having a sidewalk in front of your house for your stalkers to assemble upon makes sense if you are a world-famous actor.

The peculiar needs of movie stars influence everybody else’s tastes in L.A., with generally unfortunate results. If you are in constant danger of being pestered by crazed fans, it can be a good idea to go everywhere by car. But not being able to walk down your own street without risking being hit by traffic is a dumb idea if you are a nobody.

One lesson from Los Angeles ought to be that it’s hard to undo urban-planning mistakes made for reasons of affordability and expedience.

For example, the Los Angeles River, which is dry most of the year, almost washed the city away in the 1938 flood. The Army Corps of Engineers was called in and rapidly built the notorious concrete ditch that is now the L.A. River to keep, say, Lockheed from being carried out to sea in the next deluge, causing America to lose the upcoming war.

After the war, newer desert communities like Scottsdale and Palm Springs realized that it makes more sense to convert natural flood channels into parks and golf courses that can absorb runoff. Moreover, the 1994 earthquake in Los Angeles demonstrated that putting up apartment buildings on the old sand and gravel riverbed had been a bad idea: numerous buildings near the river collapsed.

For decades, public-spirited Angelenos have generated countless plans to replace the ugly concrete culvert. But to do that would require a broader channel, which would demand using eminent domain to purchase all the very expensive real estate along the river. And so nothing ever gets done.

Similarly, it’s hard to undo affordable-housing construction, unless it happens to be in a hugely valuable location, such as along the beach. Gentrification is most likely where there’s something to gentrify.

For instance, Van Nuys, in the heart of the San Fernando Valley, was built as an affordable place for people who couldn’t afford cars. I recall it being a dump in the 1960s.

Driving through Van Nuys last week, I found it still the same dump.

Affordability has its costs.

If some idiot from the South tried to be polite, the system broke down

Friday, February 1st, 2019

As you travel the world, some of the local rules you can look up or read about, but often the rules are just assumed because “everyone” knows them:

I described an experience of mine in Erlangen, Germany, in an earlier column, where I didn’t know about the practice of collecting a deposit on shopping carts. No one told me about this, and I thought I recognized the context of “grocery store” as familiar, one where I knew the rules. But I didn’t.

I had another experience in Germany, one that made me think of the importance of what Hayek called “the particular circumstances of time and place.” Erlangen, where I taught at Friedrich Alexander University, is a city of bicycles. There are roads, but most are narrow and there are so many bikes that it can be frustrating to drive.

The bike riders, as is true in many American cities, paid little attention to the traffic lights. Often, there were so many bikes that it was not possible to cross the street without getting in the way. But I noticed that people did cross, just walking right out into the street.

I tried this several times during my first stay in Erlangen. But being from the southern United States, I’m polite and deferential. So, I would start across the street, but then look up the street, and if a bike was close and coming fast I’d stop.

And get hit by a large, sturdy German on a large, sturdy German bicycle. And then I got yelled at, in German. What had I done wrong? Eventually, I figured it out: there had evolved a convention for crossing the street and for riding bicycles. The pedestrian simply walked at a constant speed, without even looking. The bicyclist would ride directly at the pedestrian, actually aiming at the spot where the pedestrian was at that point in time. Since the pedestrian kept moving in a predictable fashion, the cyclist would pass directly and safely behind the pedestrian.
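The geometry of the convention is easy to check. Here is a minimal kinematic sketch in Python; the speeds and distances are made-up numbers chosen for illustration, not measurements from Erlangen.

```python
# Minimal sketch of the Erlangen crossing convention.
# All numbers are illustrative assumptions.
walk_speed = 1.4      # m/s, a typical walking pace (assumed)
bike_speed = 5.0      # m/s (assumed)
aim_distance = 20.0   # metres from the cyclist to the aim point (assumed)

# The cyclist aims at the spot where the pedestrian is *now*.
time_to_aim_point = aim_distance / bike_speed        # 4.0 s
pedestrian_lead = walk_speed * time_to_aim_point     # 5.6 m

print(f"Cyclist reaches the aim point after {time_to_aim_point:.1f} s;")
print(f"the pedestrian is {pedestrian_lead:.1f} m past it, so the bike passes behind.")

# If the pedestrian stops 'politely', the lead shrinks toward zero:
# the pedestrian is still standing at the aim point when the bike arrives.
```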

If some idiot from the southern United States, in an effort to impose his own views of “polite” behavior on people whose evolved rules were different, tried to be polite and stop, the system broke down. Though that idiot (me) was stopping to avoid being hit, I was actually being rude by violating the rules. These rules were not written down and could not easily be changed.

In fact, a number of my German colleagues even denied that it was a rule, at first. But then they would say, “Well, right, you can’t stop. That would be dumb. So, okay, I guess it is a rule, after all.”

More precisely, this rule — like many other important rules you encounter in “foreign” settings — is really a convention. A convention, according to Lewis (1969), is a persistent (though not necessarily permanent) regularity in the resolution of recurring coordination problems, in situations characterized by recurrent interactions where outcomes are (inter)dependent.

Conventions, then, exist when people all agree on a rule of behavior, even if no one ever said the rule out loud or wrote it down. No one actor can choose an outcome, and no actor can challenge the regularity by unilaterally deviating from the conventional behavior. But deviation can result in substantial harm, as when someone tries to drive on the left in a country where “we” drive on the right, or in social sanction, as when there is intentional punishment by other actors if deviation is observed and publicized.
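Lewis’s definition has a natural game-theoretic reading: a convention is one of several self-enforcing equilibria of a recurring coordination game. Here is a toy sketch in Python with assumed symmetric payoffs (the numbers are illustrative, not from Lewis):

```python
# Toy coordination game: two drivers each choose a side of the road.
# Matching is fine; mismatching is a crash. Payoffs are assumed.
payoff = {
    ("left", "left"): 1, ("right", "right"): 1,
    ("left", "right"): -10, ("right", "left"): -10,
}

def is_stable_convention(rule):
    """A rule is self-enforcing if a lone deviator does no better than
    someone who follows the rule while everyone else follows it."""
    other = "right" if rule == "left" else "left"
    return payoff[(rule, rule)] >= payoff[(other, rule)]

for rule in ("left", "right"):
    print(rule, "is a stable convention:", is_stable_convention(rule))

# Both print True: either regularity works, which is exactly why the
# rule must be learned from the people around you rather than deduced
# from first principles.
```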

According to David Hume, convention is

a general sense of common interest; which sense all the members of the society express to one another, and which induces them to regulate their conduct by certain rules. I observe that it will be to my interest [e.g.] to leave another in the possession of his goods, provided he will act in the same manner with regard to me. When this common sense of interest is mutually expressed and is known to both, it produces a suitable resolution and behavior. And this may properly enough be called a convention or agreement betwixt us, though without the interposition of a promise; since the actions of each of us have a reference to those of the other, and are performed upon the supposition that something is to be performed on the other part. (Hume, 1978; III.ii.2)

Notice how different this is from the “gamer” conception of laws and rules. For the gamer, all the rules can be — in fact, must be — written down and can be examined and rearranged. For the world traveler, the experience of finding out the rules can involve trial and error, and even the natives likely do not fully understand that the rules and norms of their culture are unique.

One of my favorite examples is actually from the United States, the so-called Pittsburgh Left Turn. In an article in the Pittsburgh City Paper in 2006, Chris Potter wrote:

As longtime residents know, the Pittsburgh Left takes place when two or more cars — one planning to go straight, and the other to turn left — face off at a red light without a “left-turn only” lane or signal. The Pittsburgh Left occurs when the light turns green, and the driver turning left takes the turn without yielding to the oncoming car.

Pittsburgh is an old city, many of whose streets were designed before automobiles held sway. [That means] that street grids are constricted, with little room for amenities like left-turn-only lanes. The absence of such lanes means drivers have to solve traffic problems on their own. Instead of letting one car at the head of an intersection bottle up traffic behind it, the Pittsburgh Left gives the turning driver a chance to get out of everyone else’s way. In exchange for a few seconds of patience, the Pittsburgh Left allows traffic in both directions to move smoothly for the duration of the signal. Of course, the system only works if both drivers know about it. No doubt that’s why newcomers find it so vexing.

The Pittsburgh Left is a very efficient convention. On two-lane streets, turning left can block traffic as the turning car waits for an opening. And left-turn arrows are expensive and add time to each traffic light cycle. Far better to let the left turners — if there are any — go first. If there are no left turners, traffic just proceeds normally, not waiting on a left arrow.

Of course, if some idiot from the southern United States (yes, me again) is driving in Pittsburgh, that person expects to go when the light turns green. I blew my horn when two cars turned left in front of me. And people on the sidewalk yelled at me, as did the left-turning drivers. Once again, I didn’t know the rules, because I was a foreigner, at least in terms of the rules of the road in Pittsburgh.

Actually, it’s worse than that. The Pittsburgh Left is technically illegal, according to the Pennsylvania Driver’s Handbook (p. 47): “Drivers turning left must yield to oncoming vehicles going straight ahead.” The written rules, the gamer rules, appear to endorse one pattern of action. But the actual rules, the ones you have to travel around to learn, may be quite different. Real rules are not written down, and the people living in that rule system may not understand either the nature or effects of the rules. It is very difficult to change conventions, because they represent the expectations people have developed in dealing with each other over years or decades.

Hayek understood this clearly, and argued for what I have called the “world traveler” conception over what I have called the “gamer” conception of rules and laws. As Hayek said in 1988, in The Fatal Conceit:

To understand our civilisation, one must appreciate that the extended order resulted not from human design or intention but spontaneously: it arose from unintentionally conforming to certain traditional and largely moral practices, many of which men tend to dislike, whose significance they usually fail to understand, whose validity they cannot prove, and which have nonetheless fairly rapidly spread by means of an evolutionary selection — the comparative increase of population and wealth — of those groups that happened to follow them.… This process is perhaps the least appreciated facet of human evolution.

Throw out your used books

Sunday, January 27th, 2019

You should simply throw out your used books, Tyler Cowen argues, instead of gifting them:

If you donate the otherwise-trashed book somewhere, someone might read it. OK, maybe that person will read one more book in life but more likely that book will substitute for that person reading some other book instead. Or substitute for watching a wonderful movie.

So you have to ask yourself — this book — is it better on average than what an attracted reader might otherwise spend time with? Even within any particular point of view most books simply aren’t that good, and furthermore many books end up being wrong. These books are traps for the unwary, and furthermore gifting the book puts some sentimental value on it, thereby increasing the chance that it is read. Gift very selectively! And ponder the margin.

You should be most likely to give book gifts to people whose reading taste you don’t respect very much. That said, sometimes a very bad book can be useful because it might appeal to “bad” readers and lure them away from even worse books. Please make all the appropriate calculations.

Few even had wallets

Tuesday, January 22nd, 2019

A century ago the market economy was important, but a lot of economic activity still took place within the family, Peter Frost notes, especially in rural areas:

In the late 1980s I interviewed elderly French Canadians in a small rural community, and I was struck by how little the market economy mattered in their youth. At that time none of them had bank accounts. Few even had wallets. Coins and bills were kept at home in a small wooden box for special occasions, like the yearly trip to Quebec City. The rest of the time these people grew their own food and made their own clothes and furniture. Farms did produce food for local markets, but this surplus was of secondary importance and could just as often be bartered with neighbors or donated to the priest. Farm families were also large and typically brought together many people from three or four generations.

By the 1980s things had changed considerably. Many of my interviewees were living in circumstances of extreme social isolation, with only occasional visits from family or friends. Even among middle-aged members of the community there were many who lived alone, either because of divorce or because of relationships that had never gone anywhere. This is a major cultural change, and it has occurred in the absence of any underlying changes to the way people think and feel.

Whenever I raise this point I’m usually told we’re nonetheless better off today, not only materially but also in terms of enjoying varied and more interesting lives. That argument made sense back in the 1980s — in the wake of a long economic boom that had doubled incomes, increased life expectancy, and improved our lives through labor-saving devices, new forms of home entertainment, and stimulating interactions with a broader range of people.

Today, that argument seems less convincing. Median income has stagnated since the 1970s and may even be decreasing if we adjust for monetization of activities, like child care, that were previously nonmonetized. Life expectancy too has leveled off and is now declining in the U.S. because of rising suicide rates among people who live alone. Finally, cultural diversity is having the perverse effect of reducing intellectual diversity. More and more topics are considered off-limits in public discourse and, increasingly, in private conversation.

Liberalism is no longer delivering the goods — not only material goods but also the goods of long-term relationships and rewarding social interaction.

Previously they had been a lumpenproletariat of single men and women

Monday, January 21st, 2019

Liberal regimes tend to erode their own cultural and genetic foundations, thus undermining the cause of their success:

Liberalism emerged in northwest Europe. This was where conditions were most conducive to dissolving the bonds of kinship and creating communities of atomized individuals who produce and consume for a market. Northwest Europeans were most likely to embark on this evolutionary trajectory because of their tendency toward late marriage, their high proportion of adults who live alone, their weaker kinship ties and, conversely, their greater individualism. This is the Western European Marriage Pattern, and it seems to go far back in time. The market economy began to take shape at a later date, possibly with the expansion of North Sea trade during early medieval times and certainly with the take-off of the North Sea trading area in the mid-1300s (Note 1).

Thus began a process of gene-culture coevolution: people pushed the limits of their phenotype to exploit the possibilities of the market economy; selection then brought the mean genotype into line with the new phenotype. The cycle then continued anew, with the mean phenotype always one step ahead of the mean genotype.

This gene-culture coevolution has interested several researchers. Gregory Clark has linked the demographic expansion of the English middle class to specific behavioral changes in the English population: increasing future time orientation; greater acceptance of the State monopoly on violence and consequently less willingness to use violence to settle personal disputes; and, more generally, a shift toward bourgeois values of thrift, reserve, self-control, and foresight. Heiner Rindermann has presented the evidence for a steady rise in mean IQ in Western Europe during the late medieval and early modern era. Henry Harpending and I have investigated genetic pacification during the same timeframe in English society. Finally, hbd*chick has written about individualism in relation to the Western European Marriage Pattern (Note 2).

This process of gene-culture coevolution came to a halt in the late 19th century. Cottage industries gave way to large firms that invested in housing and other services for their workers, and this corporate paternalism eventually became the model for the welfare state, first in Germany and then elsewhere in the West. Working people could now settle down and have families, whereas previously they had largely been a lumpenproletariat of single men and women. Meanwhile, middle-class fertility began to decline, partly because of the rising cost of maintaining a middle-class lifestyle and partly because of sociocultural changes (increasing acceptance and availability of contraception, feminism, etc.).

This reversal of class differences in fertility seems to have reversed the gene-culture coevolution of the late medieval and early modern era.

Liberalism delivered the goods

Sunday, January 20th, 2019

How did liberalism become so dominant?

In a word, it delivered the goods. Liberal regimes were better able to mobilize labor, capital, and raw resources over long distances and across different communities. Conservative regimes were less flexible and, by their very nature, tied to a single ethnocultural community. Liberals pushed and pushed for more individualism and social atomization, thereby reaping the benefits of access to an ever larger market economy.

The benefits included not only more wealth but also more military power. During the American Civil War, the North benefited not only from a greater capacity to produce arms and ammunition but also from a more extensive railway system and a larger pool of recruits, including young migrants of diverse origins — one in four members of the Union army was an immigrant (Doyle 2015).

During the First World War, Britain and France could likewise draw on not only their own manpower but also that of their colonies and elsewhere. France recruited half a million African soldiers to fight in Europe, and Britain over a million Indian troops to fight in Europe, the Middle East, and East Africa (Koller 2014; Wikipedia 2018b). An additional 300,000 laborers were brought to Europe and the Middle East for non-combat roles from China, Egypt, India, and South Africa (Wikipedia 2018a). In contrast, the Central Powers had to rely almost entirely on their own human resources. The Allied powers thus turned a European civil war into a truly global conflict.

The same imbalance developed during the Second World War. The Allies could produce arms and ammunition in greater quantities and far from enemy attack in North America, India, and South Africa, while recruiting large numbers of soldiers overseas. More than a million African soldiers fought for Britain and France, their contribution being particularly critical to the Burma campaign, the Italian campaign, and the invasion of southern France (Krinninger and Mwanamilongo 2015; Wikipedia 2018c). Meanwhile, India provided over 2.5 million soldiers, who fought in North Africa, Europe, and Asia (Wikipedia 2018d). India also produced armaments and resources for the war effort, notably coal, iron ore, and steel.

Liberalism thus succeeded not so much in the battle of ideas as on the actual battlefield.

If you make a community truly open it will eventually become little more than a motel

Saturday, January 19th, 2019

The emergence of the middle class was associated with the rise of liberalism and its belief in the supremacy of the individual:

John Locke (1632–1704) is considered to be the “father of liberalism,” but belief in the individual as the ultimate moral arbiter was already evident in Protestant and pre-Protestant thinkers going back to John Wycliffe (1320s–1384) and earlier. These are all elaborations and refinements of the same mindset.

Liberalism has been dominant in Britain and its main overseas offshoot, the United States, since the 18th century. There is some difference between right-liberals and left-liberals, but both see the individual as the fundamental unit of society and both seek to maximize personal autonomy at the expense of kinship-based forms of social organization, i.e., the nuclear family, the extended family, the kin group, the community, and the ethnie. Right-liberals are willing to tolerate these older forms and let them gradually self-liquidate, whereas left-liberals want to use the power of the State to liquidate them. Some left-liberals say they simply want to redefine these older forms of sociality to make them voluntary and open to everyone. Redefine, however, means eliminate. If you make a community truly “open” it will eventually become little more than a motel: a place where people share space, where they may or may not know each other, and where very few if any are linked by longstanding ties — certainly not ties of kinship.

For a long time, liberalism was merely dominant in Britain and the U.S. The market economy coexisted with kinship as the proper way to organize social and economic life. The latter form of sociality was even dominant in some groups and regions, such as the Celtic fringe, Catholic communities, the American “Bible Belt,” and rural or semi-rural areas in general. Today, those subcultures are largely gone. Opposition to liberalism is for the most part limited, ironically, to individuals who act on their own.

This is the mindset that enabled northwest Europeans to exploit the possibilities of the market economy

Friday, January 18th, 2019

There is reason to believe that northwest Europeans were pre-adapted to the market economy:

They were not the first to create markets, but they were the first to replace kinship with the market as the main way of organizing social and economic life. Already in the fourteenth century, their kinship ties were weaker than those of other human populations, as attested by marriage data going back to before the Black Death and in some cases to the seventh century (Frost 2017). The data reveal a characteristic pattern:

  • men and women marry relatively late
  • many people never marry
  • children usually leave the nuclear family to form new households
  • households often have non-kin members

This behavioral pattern was associated with a psychological one:

  • weaker kinship and stronger individualism;
  • framing of social rules in terms of moral universalism and moral absolutism, as opposed to kinship-based morality (nepotism, amoral familialism);
  • greater tendency to use internal controls on behavior (guilt proneness, empathy) than external controls (public shaming, community surveillance, etc.)

This is the mindset that enabled northwest Europeans to exploit the possibilities of the market economy. Because they could more easily move toward individualism and social atomization, they could go farther in reorganizing social relationships along market-oriented lines. They could thus mobilize capital, labor, and raw resources more efficiently, thereby gaining more wealth and, ultimately, more military power.

This new cultural environment in turn led to further behavioral and psychological changes. Northwest Europeans have adapted to it just as humans elsewhere have adapted to their own cultural environments, through gene-culture coevolution.

[...]

Northwest Europeans adapted to the market economy, especially those who formed the nascent middle class of merchants, yeomen, and petty traders. Over time, this class enjoyed higher fertility and became demographically more important, as shown by Clark (2007, 2009a, 2009b) in his study of medieval and post-medieval England: the lower classes had negative population growth and were steadily replaced, generation after generation, by downwardly mobile individuals from the middle class. By the early 19th century most English people were either middle-class or impoverished descendants of the middle class.

This demographic change was associated with behavioral and psychological changes to the average English person. Time orientation shifted toward the future, as seen in an increased willingness to save money and defer gratification. There was also a long-term decline in personal violence, with male homicide falling steadily from 1150 to 1800 and, parallel to this, a decline in blood sports and other violent though legal practices (cock fighting, bear and bull baiting, public executions). This change can largely be attributed to the State’s monopoly on violence and the consequent removal of violence-prone individuals through court-ordered or extrajudicial executions. Between 1500 and 1750, court-ordered executions removed 0.5 to 1.0% of all men of each generation, with perhaps just as many dying at the scene of the crime or in prison while awaiting trial (Clark 2007; Frost and Harpending 2015).

Similarly, Rindermann (2018) has argued that mean IQ steadily rose in Western Europe during late medieval and post-medieval times. More people were able to reach higher stages of mental development. Previously, the average person could learn language and social norms well enough, but their ability to reason was hindered by cognitive egocentrism, anthropomorphism, finalism, and animism (Rindermann 2018, p. 49). From the sixteenth century onward, more and more people could better understand probability, cause and effect, and the perspective of another person, whether real or hypothetical. This improvement preceded universal education and improvements in nutrition and sanitation (Rindermann 2018, pp. 86-87).
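The execution figures quoted above compound noticeably. As rough arithmetic, assume ten 25-year generations between 1500 and 1750 (the generation length is my assumption, not Frost’s) and a removal rate of 1 to 2% per generation, counting both executions and deaths in prison or at the scene of the crime:

```python
# Cumulative removal of violence-prone men over ten generations,
# at the 1-2% per-generation rates implied by the quoted passage.
generations = 10  # 1500-1750 at an assumed 25 years per generation
for rate in (0.01, 0.02):
    removed = 1 - (1 - rate) ** generations
    print(f"{rate:.0%} per generation -> ~{removed:.0%} removed overall")
```

Even the low-end rate compounds to removing roughly a tenth of men over the period, which gives the selection effect Frost describes room to work.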

Macroeconomics is a combination of voodoo complex systems and politics

Wednesday, January 16th, 2019

In a recent interview, Shane Parrish asked Naval Ravikant, “What big ideas have you changed your mind on in the last few years?”

There’s a lot on kind of the life level. There’s a couple, obviously, on the business level. I think on a more practical basis, I’ve just stopped believing in macroeconomics. I studied economics and computer science in school. There was a time when I thought I was going to be a PhD in economics and all of that. The further I get, the more I realize macroeconomics is a combination of voodoo complex systems and politics. You can find macroeconomists who take every side of every argument. I think that discipline has become corrupted because it doesn’t make falsifiable predictions, which is the hallmark of science.

You never have the counterexample on the economy. You can never take the US economy and run two different experiments at the same time. Because there’s so much data, people kind of cherry-pick for whatever political narrative they’re trying to push. To the extent that people spend all their time watching the macroeconomy or the Fed forecasts or which way the stocks are going to go the next year, is it going to be a good year or bad year, that’s all junk. It’s no better than astrology. In fact, it’s probably even worse because it’s less entertaining. It’s just more stress-inducing. I think of macroeconomics as a junk science. All apologies to macroeconomists.

That said, microeconomics and game theory are fundamental. I don’t think you can be successful in business or even navigating through most of our modern capitalist society without an extremely good understanding of supply and demand and labor versus capital and game theory and tit for tat and those kinds of things. Macroeconomics is a religion that I gave up, but there are many others. I’ve changed my mind on death, on the nature of life, on the purpose of life, on marriage. I was originally not someone who wanted to be married and have kids. There have been a lot of fundamental changes. The most practical one is I gave up macro and I embraced micro.

I would say that’s not just true in macroeconomics, that’s true in everything. I don’t believe in macro-environmentalism, I believe in micro-environmentalism. I don’t believe in macro-charity. I believe in micro-charity.

I don’t believe in macro improving the world. There’s a lot of people out there who get really fired up about I’m going to change the world, I’m going to change this person, I’m going to change the way people think.

I think it’s all micro. It’s like change yourself, then maybe change your family and your neighbor before you get into abstract concepts about I’m going to change the world.
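Naval’s passing mention of “tit for tat” refers to the classic strategy for the iterated prisoner’s dilemma. Here is a minimal sketch, using the standard textbook payoffs (the particular numbers are an assumption):

```python
# Iterated prisoner's dilemma with standard (assumed) payoffs:
# (my move, their move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each player sees the other's history
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```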

Culture is too important to be left to the sociologists

Wednesday, December 19th, 2018

Culture matters, Virginia Postrel reminds us:

The mid-20th century period in which the modern libertarian movement arose is now looked upon with great nostalgia, especially in the United States. As my friend Brink Lindsey puts it, the right wants to live there and the left wants to work there.

When Donald Trump says “Make America Great Again,” the again refers to the world in which he grew up. The war was over, standards of living were rising, and new technologies from vaccines to synthetic fibers promised a better future.

Social critics of the day deplored mass production, mass consumption, and mass media, but the general public enjoyed their fruits. The burgeoning middle class happily replaced tenements with “little boxes made of ticky-tacky.” Snobs might look down on the suburbs, but families were delighted to settle in them. Faith in government was high, and other institutions—universities, churches, corporations, unions, and civic groups—enjoyed widespread respect.

It looked like a satisfactory equilibrium. But it wasn’t. The 1950s, after all, produced the 1960s.

Consider a series of best-selling books: The Lonely Crowd, by David Riesman, published in 1950; Atlas Shrugged by Ayn Rand and The Organization Man by William Whyte, both published in 1957; and The Feminine Mystique by Betty Friedan, published in 1963. All of these books, and undoubtedly others I’ve overlooked, took up the same essential theme: the frustration of the person of talent and integrity in a society demanding conformity and what Riesman called “other-directedness.”

These books succeeded in the economic marketplace, as well as the marketplace of ideas, because they tapped a growing sense of discontent with the prevailing social and business ethos. Their audience might have been a minority of the population, but it was a large, gifted, and ultimately influential one. Despite the era’s prosperity—or perhaps because of it—many people had come to resent social norms that demanded that they keep their heads down, do what was expected of them, and be content to be treated as homogeneous threads in the social fabric. The ensuing cultural upheaval, which peaked in the late 1970s, took many different forms, with unanticipated results.

One of the most paradoxical examples I’ve run across comes from Dana Thomas’s 2015 book Gods and Kings, about the fashion designers Alexander McQueen and John Galliano. Galliano was born in Gibraltar and grew up in South London as the son of a plumber. His career, Thomas comments in passing, was made possible by two cultural phenomena: Thatcherism and punk.

How could that be? After all, Thatcherism and punk are usually seen as antagonistic. I asked Thomas about it in an interview. “Both were breaking down British social rules and constraints,” she said. Punk brought together kids of all classes, while Thatcher’s economic reforms encouraged entrepreneurship.

    If you had an idea and you had the backing then you could make it happen, no matter what your dad did in life or your mother did in life or where you came from or what your background was, or where you grew up or what your accent sounded like. These were all barriers before. So it double-whammied for Galliano. It was great. Because it allowed him to get out of South London, get into a good art school and be seen as a bona fide talent on his own standing, as opposed to where he came from. And he was also able to get the backing to start his company, because there was more money out there. It gave him more freedom. Before punk and before Thatcherism, chances were the son of a plumber was not going to wind up being the head of a couture house.

If you care about the open society, how could you not be interested in a phenomenon like that? How exactly do such transformations take place, and what are their unexpected ripple effects? What processes of experimentation and feedback are at work? Could a young designer do the same thing today and, if not, why not? Are these moments of cultural and economic opportunity inherently fleeting?

(Hat tip to Arnold Kling.)

Why is American mass transit so bad?

Friday, December 14th, 2018

Why is American mass transit so bad? It’s a long story:

One hundred years ago, the United States had a public transportation system that was the envy of the world. Today, outside a few major urban centers, it is barely on life support. Even in New York City, subway ridership is well below its 1946 peak. Annual per capita transit trips in the U.S. plummeted from 115.8 in 1950 to 36.1 in 1970, where they have roughly remained since, even as population has grown.

This has not happened in much of the rest of the world.

[...]

What happened? Over the past hundred years the clearest cause is this: Transit providers in the U.S. have continually cut basic local service in a vain effort to improve their finances. But they only succeeded in driving riders and revenue away. When the transit service that cities provide is not attractive, the demand from passengers that might “justify” its improvement will never materialize.

[...]

[The Age of Rail] was an era when transit could usually make money when combined with real-estate speculation on the newly accessible lands, at least in the short term. But then as now, it struggled to cover its costs over the long term, let alone turn a profit. By the 1920s, as the automobile became a fierce competitor, privately run transit struggled.

But public subsidy was politically challenging: There was a popular perception of transit as a business controlled by rapacious profiteers—as unpopular as cable companies and airlines are today. In 1920, the President’s Commission on Electric Railways described the entire industry as “virtually bankrupt,” thanks to rapid inflation in the World War I years and the nascent encroachment of the car.

The Depression crushed most transit companies, and the handful of major projects that moved forward in the 1930s were bankrolled by the New-Deal-era federal government: See the State and Milwaukee-Dearborn subways in Chicago, the South Broad Street subway in Philadelphia, and the Sixth Avenue subway in New York. But federal infrastructure investment would soon shift almost entirely to highways.

[...]

It is not a coincidence that, while almost every interurban and streetcar line in the U.S. failed, nearly every grade-separated subway or elevated system survived. Transit agencies continued to provide frequent service on these lines so they remained viable, and when trains did not have to share the road and stop at intersections, they could also be time competitive with the car. The subways and els of Chicago, Philadelphia, New York, and Boston are all still around, while the vast streetcar and interurban networks of Los Angeles, Minneapolis, Atlanta, Detroit, and many others are long gone. Only when transit didn’t need to share the road with the car, and frequent service continued, was it able to survive.

[...]

All of these [systems introduced in the 1970s] featured fast, partially automated trains running deep into the suburbs, often in the median of expressways. With their plush seating and futuristic design, they were designed to attract people who could afford to drive.

But these high-tech systems were a skeleton without a body, unable to provide access to most of the urban area without an effective connecting bus network. The bus lines that could have fed passengers to the stations had long atrophied, or they never existed at all. In many cases, the new rapid transit systems weren’t even operated by the same agency as the local buses, meaning double fares and little coordination. With no connecting bus services and few people within walking distance in low-density suburbs, the only way to get people to stations was to provide vast lots for parking. But even huge garages can’t fit enough people to fill a subway. Most people without cars were left little better off than they had been before the projects, and many people with cars chose to drive the whole way rather than parking at the station and getting on the train.

[...]

Service drives demand. When riders started to switch to the car in the early postwar years, American transit systems almost universally cut service to restore their financial viability. But this drove more people away, producing a vicious cycle until just about everybody who could drive, drove. In the fastest-growing areas, little or no transit was provided at all, because it was deemed to be not economically viable. Therefore, new suburbs had to be entirely auto-oriented.

Do the rich capture all the gains from economic growth?

Tuesday, November 13th, 2018

Do the rich capture all the gains from economic growth? Russ Roberts explains why it matters how you measure these things:

But the biggest problem with the pessimistic studies is that they rarely follow the same people to see how they do over time. Instead, they rely on a snapshot at two points in time. So for example, researchers look at the median income of the middle quintile in 1975 and compare that to the median income of the middle quintile in 2014, say. When they find little or no change, they conclude that the average American is making no progress.

But the people in the snapshots are not the same people. These snapshots fail to correct for changes in the composition of workers and changes in household structure that distort the measurement of economic progress. There is immigration. There are large changes in the marriage rate over the period being examined. And there is economic mobility as people move up and down the economic ladder as their luck and opportunities fluctuate.

How important are these effects? One way to find out is to follow the same people over time. When you follow the same people over time, you get very different results about the impact of the economy on the poor, the middle, and the rich.

Studies that use panel data — data that is generated from following the same people over time — consistently find that the largest gains over time accrue to the poorest workers and that the richest workers get very little of the gains. This is true in survey data. It is true in data gathered from tax returns.
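Roberts’s point is easy to demonstrate with a toy simulation. The sketch below uses entirely made-up parameters: log-normal incomes, partial mean reversion in individual luck, and uniform 20% economy-wide growth. The snapshot comparison credits the bottom quintile with only the aggregate growth, while following the same people shows a much larger gain, purely because of mobility:

```python
# Snapshot vs. panel measurement of income growth.
# All parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
mu, sigma, rho = 10.5, 0.8, 0.7  # mean/spread of log income; luck persistence

# Period 1: log-normal incomes.
log_y1 = rng.normal(mu, sigma, n)

# Period 2: luck partially mean-reverts; the whole economy grows 20%.
shock = rng.normal(0, sigma * np.sqrt(1 - rho**2), n)
log_y2 = rho * log_y1 + (1 - rho) * mu + shock + np.log(1.20)

y1, y2 = np.exp(log_y1), np.exp(log_y2)

# Snapshot: median of whoever occupies the bottom quintile each period.
cut1, cut2 = np.quantile(y1, 0.2), np.quantile(y2, 0.2)
snap1, snap2 = np.median(y1[y1 <= cut1]), np.median(y2[y2 <= cut2])

# Panel: follow the same people who started in the bottom quintile.
panel2 = np.median(y2[y1 <= cut1])

print(f"snapshot gain: {snap2 / snap1 - 1:+.0%}")  # about the +20% aggregate growth
print(f"panel gain:    {panel2 / snap1 - 1:+.0%}")  # much larger, thanks to mobility
```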