The Complexity of the World repeatedly makes fools of them

Thursday, February 7th, 2019

Bryan Caplan is a fan of dystopian fiction, but he had overlooked Henry Hazlitt’s The Great Idea (subsequently republished as Time Will Run Back) until last December, fearing a long-winded, clunky fictionalization of Economics in One Lesson. He gave it a chance anyway, and his gamble paid off:

I read the whole thing (almost 400 pages) on a red-eye flight – feeling wide awake the whole way.

The book’s premise: Centuries hence, mankind groans under a world Communist government centered in Moscow. People live in Stalinist fear and penury. Censorship is so extreme that virtually all pre-revolutionary writings have been destroyed; even Marx has been censored, to prevent anyone from reverse engineering whatever “capitalism” was. However, due to a marital dispute, Peter Uldanov, the dictator’s son, was raised in an island paradise, free of both the horrors and the rationalizations of his dystopian society. When the dictator nears death, he brings Peter to Moscow and appoints him his heir. The well-meaning but naive Peter is instantly horrified by Communism, and sets out to fix it. In time, he rediscovers free-market economics, and sets the world to right.

Yes, this sounds trite to me, too. But Hazlitt is a master of pacing. It takes almost 200 pages before any of Peter’s reforms start to work. Until then, it’s one false start after another, because so many of the seemingly dysfunctional policies of the Stalinist society are remedies for other dysfunctional policies.

[...]

In most literary dialogues, at least one of the characters has the answers. (“Yes, Socrates, you are quite right!”) What’s novel about Hazlitt’s dialogues is that all the characters are deeply confused. Even when they sound reasonable, the Complexity of the World repeatedly makes fools of them.

The Great Idea was originally published in 1951. Stalin was still alive.

Leftist mobs burned convents and churches, while Republican police stood by

Wednesday, February 6th, 2019

Rod Dreher recently watched a 1983 British television documentary about the Spanish Civil War and came away with some scattered impressions:

Maybe it’s an American thing, but it’s hard to look at a conflict like this without imposing a simple moralistic narrative on it, between the Good Guys and the Bad Guys. Certainly the received history of the conflict frames it as an unambiguous fight between democracy and fascism — and the evil fascists won. The truth is far more complicated.

In fact, the filmmakers make a point of saying that ideologues and others who project certain narratives onto the conflict do so by ignoring aspects of it that were particularly Spanish. That is to say, though the civil war did become a conflict between fascism and communism (and therefore a proxy war between Nazi Germany and the Soviet Union), that’s not the whole story. Its roots have a lot to do with the structure and history of Spain itself.

The first episode covers the years 1931-35, which covers the background to the war. In 1930, the military dictatorship was overthrown, and municipal elections across the country the next year led to a big win for combined parties of left and right who favored a democratic republic. (N.B., not all leftists and rightists wanted a republic!) After the vote, the king abdicated, and the Republic was declared. Later that spring, leftist mobs burned convents and churches in various cities, while Republican police stood by doing nothing. This sent a deep shock wave through Spanish Catholicism.

The Republic, in typical European fashion, was strongly anticlerical. It quickly passed laws stripping the Catholic Church of property and the right to educate young people. There were other anticlerical measures taken. Anti-Christian laws, and violent mob action, were present at the beginning of the Republic. Prior to watching this documentary, I assumed they happened as part of the civil war itself. Imagine what it was like to see a new constitutional order (the Republic) come into being, and suddenly you can’t give your children a religious education, and your churches and convents are being torched. How confident would you be in the new order?

According to the film, Spain was still in the 19th century, in terms of economics. It was largely agrarian, with a massive peasantry that was underfed, and tended to be religious and traditional. On the other hand, they were dependent on large landowners who favored the semi-feudal conditions. These landowners were extremely conservative. Their interests clashed, obviously, and became violent when the land reform promised by the liberal Republicans did not materialize fast enough for the peasantry. Mind you, the Republic was declared in the middle of the global Great Depression, with all the political and economic turmoil that came with it.

The urban working class was organized along Marxist lines, though the left was badly fractured, and unstable. There were democratic socialists, but also communists who hewed closely to the Stalinist line. Plus, anarchists were a really significant force in Spain, something unique in Europe at the time. They competed politically, and usually aligned with the left in fighting the right. But they refused to compromise their principles by taking formal power, even when the defense of the Republic required it.

Regional autonomy also played a role in defining sides. When the civil war started, Catholics supported the Nationalist side (the Francoists) … but not in the Basque Country, which was religious, but which wanted more self-rule — something the Nationalists despised. Catalonia also wanted more independence, which meant it was firmly Republican. Barcelona, the Catalan capital, was a Republican stronghold for left-wing reasons, to be sure. I bring up the situation with the Basques and the Catalans simply to illustrate the complexity of the conflict.

Anyway, the 1933 elections resulted in a swing back to the right, with a coalition of center-right and far-right parties winning control, and reversing some of the initiatives of the previous government. Socialists, anarchists, and coal miners in the province of Asturias rebelled against the Republic. They murdered priests and government officials; the military, led by Gen. Franco, brutally suppressed the uprising. All of this radicalized the left even more.

By 1935, left-right opinion had become so polarized that there was practically no middle ground left. Both sides came to distrust democracy because it was the means by which their enemies might take power. And, as one Nationalist interviewed in the documentary puts it, people on the left and right just flat out hated each other. The whole country was a powder keg.

By the 1936 campaign, the centrist parties had practically disappeared. A leftist coalition won the vote, but deadly violence between left and right began ramping up. A far-right fascist militia, the Falange, formed. Mutual assassinations on both sides, and street fighting between Falangists and Republican forces, triggered a military coup against the government. The coup failed to overthrow the Republic, but it did divide the country, and spark a civil war between Nationalists and Republicans. Gen. Francisco Franco quickly emerged as the Nationalist leader.

I give you all that history to show what was news to me: that this was by no means a simple case of right-wing military figures trying to overthrow a democratically elected government — though it was that too!

The series devotes an hour each to the complicated internal politics of both the left and the right. All my life I’ve heard Franco and the Nationalist side described as “fascist,” but it’s not accurate. True, the Nationalists had real fascists in their ranks — that was the Falange — but Franco exploited and controlled them. The Falange’s founder, José Antonio Primo de Rivera, was killed by the Republicans and turned into a martyr by the Nationalists. Doing so allowed Franco to embrace the Falange but also to defang them as a political force. In the film, an elderly Falangist complains that Franco was not a real fascist, and he wouldn’t seriously implement the Falange’s program (e.g., Falangism’s opposition to capitalism).

The documentary says Franco ought to be understood as a hard-right conservative authoritarian, not a fascist. Mussolini was a big supporter, and sent troops and military aid, but was frustrated by Franco’s failure to be affirmatively fascist. Hitler sent lots of military aid, which was critically important to the Nationalist victory, but was angry at Franco for not being willing to be more Nazi-like. The truth is, Franco was trying to lead a reactionary coalition of fascists, monarchists, traditionalist Catholics, and others on the Right. The Spanish Right by and large did not trust the Spanish fascists, who were revolutionary modernists. This is an example of the filmmakers’ point that you can’t get a true grasp on what was happening in Spain at the time by imposing a narrative that overlooks particularly Spanish characteristics of the conflict.

Franco managed to unite the right, but the left remained hopelessly mired in internal rivalry. If you’ve read Orwell’s Homage To Catalonia — which I did in the early 1990s, and forgot all about — you know something about how fissiparous and treacherous left-wing politics were in the Spanish Civil War. Orwell went to Spain to fight with the POUM, the democratic socialists. They were set upon and betrayed by Spanish communists loyal to the Soviet Union. The Soviets were open supporters, military and otherwise, of the Republicans, but also instructed their Spanish followers to undermine the non-communist left.

Two things struck me about the left. I mentioned earlier the role of the anarchist militias, and how they were both crucial to the Republican war effort — they were fierce fighters — but also an Achilles heel, because they were obstinately principled. There’s a passage in the film in which a Republican veteran talks about how hard it was to get the anarchists to take military orders (naturally!). They would stand around debating about whether or not they should obey an order, while the far more disciplined Nationalists would be making gains. Isn’t that cartoonish, in a herding-cats way? But it happened.

The other thing — and this, to me, was the more important thing — was how off-the-hook crazy the Spanish left was. In 1936, after the start of the war, the anarchists and left-wing supporters led a revolution within the Republic. Here’s Orwell describing revolutionary Barcelona:

It was the first time that I had ever been in a town where the working class was in the saddle. Practically every building of any size had been seized by the workers and was draped with red flags and with the red and black flag of the Anarchists; every wall was scrawled with the hammer and sickle and with the initials of the revolutionary parties; almost every church had been gutted and its images burnt. Churches here and there were being systematically demolished by gangs of workmen. Every shop and cafe had an inscription saying that it had been collectivized; even the bootblacks had been collectivized and their boxes painted red and black.

That’s from Orwell, but this is reported in the Granada documentary too. It’s this kind of thing that made me aware that had I been alive then, I would have 100 percent supported the Nationalists. It was truly a revolution, and violently anti-Christian to the core. It was brought low by the communists, on Moscow’s orders, on the grounds that defeating fascism had to come before the revolution. The communists were right.

There is no trace to follow

Tuesday, February 5th, 2019

The Internet is full of commercial activity, not all of it legal. Dropgangs may be the future of darknet markets:

To prevent the problems of customer binding, and losing business when darknet markets go down, merchants have begun to leave the specialized and centralized platforms and instead ventured to use widely accessible technology to build their own communications and operational back-ends.

Instead of using websites on the darknet, merchants are now operating invite-only channels on widely available mobile messaging systems like Telegram. This allows the merchant to control the reach of their communication better and be less vulnerable to system take-downs. To further stabilize the connection between merchant and customer, repeat customers are given unique messaging contacts that are independent of shared channels and thus even less likely to be found and taken down. Channels are often operated by automated bots that allow customers to inquire about offers and initiate the purchase, often even allowing a fully bot-driven experience without human intervention on the merchant’s side.

The use of messaging platforms provides a much better user experience to the customers, who can now reach their suppliers with mobile applications they are already used to. It also means that a larger part of the communication isn’t routed through the Tor or I2P networks anymore; instead each side — merchant and customer — employs its own protection technology, often widely available VPNs.

The other major change is the use of “dead drops” instead of the postal system, which has proven vulnerable to tracking and interception. Now goods are hidden in publicly accessible places like parks, and the location is given to the customer on purchase. The customer then goes to the location and picks up the goods. This means that delivery becomes asynchronous for the merchant: he can hide a lot of product in different locations for future, not-yet-known purchases. For the client the time to delivery is significantly shorter than waiting for a letter or parcel shipped by traditional means — he has the product in his hands in a matter of hours instead of days. Furthermore this method does not require the customer to give any personally identifiable information to the merchant, who in turn no longer has to safeguard it. Less data means less risk for everyone.

The use of dead drops also significantly reduces the risk of the merchant being discovered through tracking within the postal system. He does not have to visit any easily surveilled post office or letter box; instead the whole public space becomes his hiding territory.

Cryptocurrencies are still the main means of payment, but due to the stronger customer binding and the merchant’s vetting process, escrows are seldom employed. Usually only multi-party transactions between customer and merchant are established, and often not even that.

Marketing and initial vetting of both merchant and customer now happen in darknet forums and chat channels that themselves aren’t involved in any deal anymore. In these places merchants and customers take part in the discussion of best procedures, methods, and prices. The market connects and develops best practices by sharing experience. Furthermore these places also serve as a record of reputation, though in a still very primitive way.

Other than allowing much more secure and efficient business for both sides of the transaction, this has also led to changes in the organizational structure of merchants:

Instead of the flat hierarchies witnessed with darknet markets, merchants today employ hierarchical structures again. These consist of a procurement layer, a sales layer, and a distribution layer. The people constituting each layer usually do not know the identities of those in the layers above them, nor are they ever in personal contact with them. All interaction is digital — messaging systems and cryptocurrencies again; product moves only through dead drops.

The procurement layer purchases product wholesale and smuggles it into the region. It is then sold for cryptocurrency to select people who operate the sales layer. After that transaction the risks of the procurement and sales layers are isolated from each other.

The sales layer divides the product into smaller units and gives the location of those dead drops to the distribution layer. The distribution layer then divides the product again and places typical sales quantities into new dead drops. The location of these dead drops is communicated to the sales layer which then sells these locations to the customers through messaging systems.

To prevent theft by the distribution layer, the sales layer randomly tests dead drops by tasking different members of the distribution layer with picking up product from a dead drop and hiding it somewhere else, after verification of the contents. Usually each unit of product is tagged with a piece of paper containing a unique secret word which is used to prove to the sales layer that a dead drop was found. Members of the distribution layer have to post security — in the form of cryptocurrency — to the sales layer, and they lose part of that security with every dead drop that fails the testing, and with every dead drop they failed to test. So far, no reports of violence being used to ensure the performance of members of these structures have become known.

This concept of using messaging, cryptocurrency, and dead drops even within the merchant structure allows the members of each layer to be completely isolated from each other, knowing nothing about the higher layers at all. There is no trace to follow if a distribution-layer member is captured while servicing a dead drop. He will often not even be distinguishable from a regular customer. This makes these structures extremely secure against infiltration, takeover, and capture. They are inherently resilient.

Furthermore the members of the sales layer often employ advanced physical tradecraft to prevent surveillance by the procurement layer when they pick up product. This makes it very hard to dismantle such a structure from the top.

If members of such a structure are captured they usually have no critical information to share, no information about persons, places, times of meeting. No interaction that would make this information necessary ever takes place.

It is because of the use of dead drops and hierarchical structures that we call this kind of organization a Dropgang.

The result of this evolution is a highly decentralized, specialized and resilient method of running black market commerce. Less information is acquired, shipments are faster, isolation between participants is high, and multiple independent sales channels are established.

Pave the muddy paths

Monday, February 4th, 2019

We often think of “law” and “legislation” as synonyms, Mike Munger notes, but Hayek argued otherwise:

Habits that are shared might be called “customs,” informal rules that might be written down nowhere. These are agreements, in the sense that we all agree that is the way we do things, even though we never actually sat down and signed anything.

A while back I wrote about the Pittsburgh left turn as an example of such a custom. It is important that the habit of waiting for someone to turn left in front of you be “agreed” on, in the sense that the expectation is widely shared — and met — because otherwise it wouldn’t be effective in making traffic move faster. These customs can come to govern behavior, however, precisely because they shape expectations, and violating expectations may be expensive or dangerous.

Those customs, if they consistently lead to useful outcomes, are “laws.” They are discoverable by experience and emerge in the form of traditions. But it is useful to write them down so that they can be enforced more effectively and can be easily learned by new generations. Laws that are written down are rules, commands, and prohibitions we call “legislation.”

The problem is that legislation need not arise from law at all.

Hayek was rightly concerned about the conceit that experts know what is best for everyone else:

I often illustrate this with what I call the Hayek Sidewalk Plan. Imagine that a new university has been built, and you are on the committee charged with laying out the sidewalks. What would you do?

You might walk around, look at aerial maps of the campus, and draw lines to try to guess where people will want to walk. Or you might want to have a purely aesthetic conception of the problem, and put the sidewalks in places or in patterns that are pleasing to the eye as you look out the windows of the administration building.

But all of that is legislation. No individual, or small committee of individuals, could possibly have enough information or foresight to be able to know in advance where people are going to want to walk. After all, universities are peopled by broadly diverse groups, with heterogeneous plans and purposes. People are often willing to walk on the sidewalks, if that serves their purpose at that point. But you probably don’t want to build a sidewalk from every doorway to every other doorway on the campus.

What would a law look like, in this setting? No one person, after all, has any effect walking on the grass, and all the different plans and purposes, taken one at a time, contain no information that you can use. But there is a physical manifestation of the aggregation of all these plans and purposes working themselves out over time. I don’t intend to make a path, and neither do you. But if enough of us, over time, find it useful to walk in the same place to accomplish our own idiosyncratic purposes, a visible record of the shared pattern emerges: a muddy path.

So, the law for the Hayek Sidewalk Plan committee will be discoverable if we adjourn for six months or so and then have a drone take some overhead photographs. It is clear now where people, acting as individuals but observable together in the shared result called a muddy path, want the sidewalks to be placed. And the task of the committee is simply to “legislate” by paving the muddy paths.

If we think of the process of discovering law as “looking for the muddy paths,” and legislation as “paving the muddy paths,” we have a simple but quite powerful way of thinking about the rule of law.
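Munger’s thought experiment is simple enough to simulate. Here is a minimal sketch (my toy model, not anything from Munger or Hayek; the grid, doorways, and walker counts are invented for illustration): walkers cross a lawn between random pairs of doorways, each taking their own shortest route, and we count how often each patch of grass is trodden. No individual walker plans a path, yet the most-trafficked patches are exactly the “muddy paths” the committee should pave.

```python
import random
from collections import Counter

# Toy model (illustrative assumptions, not Munger's): walkers cross a
# 20x20 lawn between fixed doorways, each taking an L-shaped shortest
# route. Nobody plans a path; the traffic counts reveal one anyway.
random.seed(1)
doorways = [(0, 3), (0, 16), (19, 5), (19, 14), (10, 0), (10, 19)]
traffic = Counter()

def walk(start, end):
    """Step from start toward end, counting every cell trodden on."""
    x, y = start
    while (x, y) != end:
        traffic[(x, y)] += 1
        if x != end[0]:
            x += 1 if end[0] > x else -1
        else:
            y += 1 if end[1] > y else -1
    traffic[end] += 1

for _ in range(2000):  # many walkers, each with idiosyncratic purposes
    a, b = random.sample(doorways, 2)
    walk(a, b)

# The "muddy paths" are the most-trodden cells: the emergent record
# that the sidewalk committee would simply pave.
for cell, count in traffic.most_common(10):
    print(cell, count)
```

The committee’s “legislation” is then just reading off the top of the traffic count, which is the whole point of paving rather than planning.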

Affordability has its costs

Saturday, February 2nd, 2019

Besides its obvious shortcomings, Los Angeles has a number of subtle problems that go back to decisions made long ago:

Much of the Los Angeles area would be better today if early city fathers had realized how valuable the property would eventually become. Los Angeles has quite high population density these days, but lacks urban amenities. The San Fernando Valley on the north side of the city of Los Angeles, for instance, was built up under the assumption that it would remain a rural retreat from the big city, but it now has over 1.75 million residents.

In contrast, Chicago was laid out after its 1871 fire by men like Daniel Burnham who took “Make no little plans” as their motto. L.A. wasn’t. And it’s hard to fix urban-planning mistakes afterward.

To take a seemingly trivial example, Chicago, where I lived from 1982 to 2000, was set up with most streets having sidewalks, and the sidewalks are usually wide enough for two people to walk abreast while conversing. In contrast, sidewalks on residential streets in Los Angeles often peter out at the developers’ whims, and those that exist are usually a little too narrow for two people. So pedestrians end up conversing over their shoulders.

One reason for the sidewalk shortage is that Los Angeles was the first major city in America to develop after the automobile.

Another is that much of it was laid out to be affordable after the stock-market crash of 1929. That introduced a more democratic, less elitist ethos. There’s a lot to be said for the remarkable living standards of average people in postwar L.A., but the city is paying the price today for cutting corners back then.

Chicago, in contrast, was mostly built during the era before the New Deal when upscale bourgeois values dominated tastes. For instance, my Chicago condo was in a three-story brick building on an elegant block of other three-story brick buildings. It was a very respectable-looking block, with every building striving to live up to proper bourgeois standards.

This doesn’t mean that everybody can keep up appearances at all times. My Chicago condo had been built in 1923 with optimistic touches like nine-foot ceilings. During the Depression, the owners must have been ruined as the units were split up into two apartments. But a couple of generations later, the building was rehabbed, and the tall ceilings and other generous touches were still there.

Los Angeles, in contrast, reflects an odd combination of mass-market needs and celebrity tastes.

In 1915, Charlie Chaplin, rapidly becoming the most famous man in the world, lived in Chicago a couple of blocks from where my old condo would go up. But in 1916, as filmmakers realized the advantages of sunshine, he moved from Chicago to Los Angeles.

The movies did in the chance of Los Angeles developing physically along bourgeois lines. Film people valued privacy and self-expression. Screenwriter Nathanael West’s 1939 novel The Day of the Locust complained of the excessive diversity of Hollywood houses:

But not even the soft wash of dusk could help the houses. Only dynamite would be of any use against the Mexican ranch houses, Samoan huts, Mediterranean villas, Egyptian and Japanese temples, Swiss chalets, Tudor cottages, and every possible combination of these styles that lined the slopes of the canyon.

One of the most popular architects of celebrity homes was an African-American named Paul Revere Williams whose view, in contrast to the more academically celebrated Los Angeles architects such as Schindler and Neutra, was that his movie-star clients paid him to make their whims come true. So if, say, Frank Sinatra desired a Japanese Modern house with superb acoustics for his state-of-the-art stereo, Williams would figure out how to give the client what he wanted.

Another need celebrities have is privacy from tourists. Not having a sidewalk in front of your house for your stalkers to assemble upon makes sense if you are a world-famous actor.

The peculiar needs of movie stars influence everybody else’s tastes in L.A., with generally unfortunate results. If you are in constant danger of being pestered by crazed fans, it can be a good idea to go everywhere by car. But not being able to walk down your own street without risking being hit by traffic is a dumb idea if you are a nobody.

One lesson from Los Angeles ought to be that it’s hard to retrofit urban-planning mistakes made for reasons of affordability and expedience.

For example, the Los Angeles River, which is dry most of the year, almost washed the city away in the 1938 flood. The Army Corps of Engineers was called in and rapidly built the notorious concrete ditch that is now the L.A. River to keep, say, Lockheed from being carried out to sea in the next deluge, causing America to lose the upcoming war.

After the war, newer desert communities like Scottsdale and Palm Springs realized that it makes more sense to convert natural flood channels into parks and golf courses that can absorb runoff. Moreover, the 1994 earthquake in Los Angeles demonstrated that putting up apartment buildings on the old sand and gravel riverbed had been a bad idea, as numerous apartment buildings near the river collapsed.

For decades, public-spirited Angelenos have generated countless plans to replace the ugly concrete culvert. But to do that would require a broader channel, which would demand using eminent domain to purchase all the very expensive real estate along the river. And so nothing ever gets done.

Similarly, it’s hard to undo affordable-housing construction, unless it happens to be in a hugely valuable location, such as along the beach. Gentrification is most likely where there’s something to gentrify.

For instance, Van Nuys in the heart of the San Fernando Valley was built as an affordable place for people who couldn’t afford cars. I recall it in the 1960s being a dump.

Driving through Van Nuys last week, I found it still the same dump.

Affordability has its costs.

If some idiot from the South tried to be polite, the system broke down

Friday, February 1st, 2019

As you travel the world, some of the local rules you can look up or read about, but often the rules are just assumed because “everyone” knows them:

I described an experience of mine in Erlangen, Germany, in an earlier column, where I didn’t know about the practice of collecting a deposit on shopping carts. No one told me about this, and I thought I recognized the context of “grocery store” as familiar, one where I knew the rules. But I didn’t.

I had another experience in Germany, one that made me think of the importance of what Hayek called “the particular circumstances of time and place.” Erlangen, where I taught at Friedrich Alexander University, is a city of bicycles. There are roads, but most are narrow and there are so many bikes that it can be frustrating to drive.

The bike riders, as is true in many American cities, paid little attention to the traffic lights. Often, there were so many bikes that it was not possible to cross the street without getting in the way. But I noticed that people did cross, just walking right out into the street.

I tried this, several times, during my first stay in Erlangen. But being from the southern United States, I’m polite and deferential. So, I would start across the street, but then look up the street, and if a bike was close and coming fast I’d stop.

And get hit by a large, sturdy German on a large, sturdy German bicycle. And then I got yelled at, in German. What had I done wrong? Eventually, I figured it out: there had evolved a convention for crossing the street and for riding bicycles. The pedestrian simply walked at a constant speed, without even looking. The bicyclist would ride directly at the pedestrian, actually aiming at the spot where the pedestrian was at that point in time. Since the pedestrian kept moving in a predictable fashion, the cyclist would pass directly and safely behind the pedestrian.

If some idiot from the southern United States, in an effort to impose his own views of “polite” behavior on people whose evolved rules were different, tried to be polite and stop, the system broke down. Though that idiot (me) was stopping to avoid being hit, I was actually being rude by violating the rules. These rules were not written down and could not easily be changed.

In fact, a number of my German colleagues even denied that it was a rule, at first. But then they would say, “Well, right, you can’t stop. That would be dumb. So, okay, I guess it is a rule, after all.”

More precisely, this rule — like many other important rules you encounter in “foreign” settings — is really a convention. A convention, according to Lewis (1969), is a persistent (though not necessarily permanent) regularity in the resolution of recurring coordination problems, in situations characterized by recurrent interactions where outcomes are (inter)dependent.

Conventions, then, exist when people all agree on a rule of behavior, even if no one ever said the rule out loud or wrote it down. No one actor can choose an outcome, and no actor can challenge the regularity by unilaterally deviating from the conventional behavior. But deviation can result in substantial harm, as when someone tries to drive on the left in a country where “we” drive on the right, or social sanction, as when there is intentional punishment on behalf of other actors if deviation is observed and publicized.

According to David Hume, convention is

a general sense of common interest; which sense all the members of the society express to one another, and which induces them to regulate their conduct by certain rules. I observe that it will be to my interest [e.g.] to leave another in the possession of his goods, provided he will act in the same manner with regard to me. When this common sense of interest is mutually expressed and is known to both, it produces a suitable resolution and behavior. And this may properly enough be called a convention or agreement betwixt us, though without the interposition of a promise; since the actions of each of us have a reference to those of the other, and are performed upon the supposition that something is to be performed on the other part. (Hume, 1978; III.ii.2)

Notice how different this is from the “gamer” conception of laws and rules. For the gamer, all the rules can be — in fact, must be — written down and can be examined and rearranged. For the world traveler, the experience of finding out the rules can involve trial and error, and even the natives likely do not fully understand that the rules and norms of their culture are unique.

One of my favorite examples is actually from the United States, the so-called Pittsburgh Left Turn. In an article in the Pittsburgh City Paper in 2006, Chris Potter wrote:

As longtime residents know, the Pittsburgh Left takes place when two or more cars — one planning to go straight, and the other to turn left — face off at a red light without a “left-turn only” lane or signal. The Pittsburgh Left occurs when the light turns green, and the driver turning left takes the turn without yielding to the oncoming car.

Pittsburgh is an old city, many of whose streets were designed before automobiles held sway. [That means] that street grids are constricted, with little room for amenities like left-turn-only lanes. The absence of such lanes means drivers have to solve traffic problems on their own. Instead of letting one car at the head of an intersection bottle up traffic behind it, the Pittsburgh Left gives the turning driver a chance to get out of everyone else’s way. In exchange for a few seconds of patience, the Pittsburgh Left allows traffic in both directions to move smoothly for the duration of the signal. Of course, the system only works if both drivers know about it. No doubt that’s why newcomers find it so vexing.

The Pittsburgh Left is a very efficient convention. On two-lane streets, turning left can block traffic as the turning car waits for an opening. And left-turn arrows are expensive and add time to each traffic light cycle. Far better to let the left turners — if there are any — go first. If there are no left turners, traffic just proceeds normally, not waiting on a left arrow.

Of course, if some idiot from the southern United States (yes, me again) is driving in Pittsburgh, that person expects to go when the light turns green. I blew my horn when two cars turned left in front of me. And people on the sidewalk yelled at me, as did the left-turning drivers. Once again, I didn’t know the rules, because I was a foreigner, at least in terms of the rules of the road in Pittsburgh.

Actually, it’s worse than that. The Pittsburgh Left is technically illegal, according to the Pennsylvania Driver’s Handbook (p. 47): “Drivers turning left must yield to oncoming vehicles going straight ahead.” The written rules, the gamer rules, appear to endorse one pattern of action. But the actual rules, the ones you have to travel around to learn, may be quite different. Real rules are not written down, and the people living in that rule system may not understand either the nature or effects of the rules. It is very difficult to change conventions, because they represent the expectations people have developed in dealing with each other over years or decades.

Hayek understood this clearly, and argued for what I have called the “world traveler” conception over what I have called the “gamer” conception of rules and laws. As Hayek said in 1988, in The Fatal Conceit:

To understand our civilisation, one must appreciate that the extended order resulted not from human design or intention but spontaneously: it arose from unintentionally conforming to certain traditional and largely moral practices, many of which men tend to dislike, whose significance they usually fail to understand, whose validity they cannot prove, and which have nonetheless fairly rapidly spread by means of an evolutionary selection — the comparative increase of population and wealth — of those groups that happened to follow them.… This process is perhaps the least appreciated facet of human evolution.
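Lewis’s definition is easier to see written out as a game. Below is a minimal sketch (my illustration; the payoff numbers are invented, not from Lewis or Munger) of the driving convention as a two-player coordination game. Both “everyone drives left” and “everyone drives right” are stable equilibria, a unilateral deviation only hurts the deviator, and nothing in the payoffs favors one equilibrium over the other; only shared expectation settles it.

```python
# Toy coordination game (illustrative payoffs, my assumption): two
# drivers each choose a side of the road. Matching is safe; a
# mismatch is a crash.
payoff = {
    ("left", "left"): (1, 1),
    ("right", "right"): (1, 1),
    ("left", "right"): (-10, -10),
    ("right", "left"): (-10, -10),
}

def is_equilibrium(a, b):
    """True if neither driver gains by unilaterally deviating."""
    sides = ("left", "right")
    best_a = all(payoff[(a, b)][0] >= payoff[(x, b)][0] for x in sides)
    best_b = all(payoff[(a, b)][1] >= payoff[(a, y)][1] for y in sides)
    return best_a and best_b

for a in ("left", "right"):
    for b in ("left", "right"):
        print(a, b, is_equilibrium(a, b))
# Only (left, left) and (right, right) print True: two equally good
# conventions, and no way to choose between them except expectation.
```

The Pittsburgh Left has the same structure: once everyone expects the left-turner to go first, the dangerous move is the one that violates that expectation, however “correct” it looks to an outsider.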

Standing on the shoulders of jerks

Thursday, January 31st, 2019

Eric Weinstein discusses the origin of the Intellectual Dark Web:

Widespread use would provide an entire new category for the Darwin Awards

Thursday, January 31st, 2019

The Four Thieves Vinegar Collective is a volunteer network of anarchists and hackers developing DIY medicines:

Four Thieves claims to have successfully synthesized five different kinds of pharmaceuticals, all of which were made using the MicroLab. The device attempts to mimic, for a fraction of the price, an expensive machine usually found only in chemistry laboratories, using readily available off-the-shelf parts. In the case of the MicroLab, the reaction chambers consist of a small mason jar mounted inside a larger mason jar with a 3D-printed lid whose printing instructions are available online. A few small plastic hoses and a thermistor to measure temperature are then attached through the lid to circulate fluids through the contraption and induce the chemical reactions necessary to manufacture various medicines. The whole process is automated using a small computer that costs about $30.

To date, Four Thieves has used the device to produce homemade Naloxone, a drug used to reverse opiate overdoses, better known as Narcan; Daraprim, a drug that treats infections in people with HIV; Cabotegravir, a preventative HIV medicine that may only need to be taken four times per year; and mifepristone and misoprostol, two chemicals needed for pharmaceutical abortions.

[...]

As for the DEA, none of the pharmaceuticals produced by the collective are controlled substances, so their possession is only subject to local laws about prescription medicines. If a person has a disease and a prescription for the drug to treat that disease, they shouldn’t run into any legal issues if they were to manufacture their own medicine. Four Thieves is effectively just liberating information on how to manufacture certain medicines at home and developing the open-source tools to make it happen. If someone decides to make drugs using the collective’s guides, then that’s their own business, but Four Thieves doesn’t pretend that the information it releases is for “educational purposes only.”

[...]

The catalyst for Four Thieves Vinegar Collective was a trip Laufer took to El Salvador in 2008, when he was still in graduate school. While visiting a rural medical clinic as part of an envoy documenting human rights violations in the country, he learned that it had run out of birth control three months prior. When the clinic contacted the central hospital in San Salvador, it was informed the other hospital had also run out of birth control. Laufer told me he was stunned that the hospitals were unable to source birth control, a relatively simple drug to manufacture that’s been around for over half a century. He figured if drug dealers in the country were able to use underground labs to manufacture illicit drugs, a similar approach could be taken to life-saving medicines.

This doesn’t seem wise:

Eric Von Hippel, an economist at MIT who researches “open innovation,” is enthusiastic about the promise of DIY drug production, but only under certain conditions. He cited a pilot program in the Netherlands that is exploring the independent production of medicines tailor-made for individual patients as a good example of safe DIY drug production. These drugs are made in the hospital by trained experts. Von Hippel believes it can be dangerous when patients undertake drug production on their own.

“If one does not do chemical reactions under just-right conditions, one can easily create dangerous by-products along with the drug one is trying to produce,” von Hippel told me in an email. “Careful control of reactor conditions is unlikely in DIY chemical reactors such as the MicroLab design offered for free by the Four Thieves Vinegar Collective.”

His colleague, Harold DeMonaco, a visiting scientist at MIT, agreed. DeMonaco suggested that a more rational solution to the problems addressed would be for patients to work with compounding pharmacies. Compounding pharmacies prepare personalized medicine for their customers and DeMonaco said they are able to synthesize the same drugs Four Thieves is producing at low costs, but with “appropriate safeguards.”

“Unless the system is idiot proof and includes validation of the final product, the user is exposed to a laundry list of rather nasty stuff,” DeMonaco told me in an email. “Widespread use [of Four Thieves’ devices] would provide an entire new category for the Darwin Awards.”

It was the usual horror story

Friday, January 25th, 2019

I can’t say I know much about Mother Jones, but I was surprised to see them publish a “scary” look into the science of smoking pot:

It’s been a few years since Alex Berenson has “committed journalism,” as he likes to say. As a New York Times reporter, Berenson did two tours covering the Iraq War, an experience that inspired him to write his first of nearly a dozen spy novels. Starting with the 2006 Edgar Award-winning The Faithful Spy, his books were so successful that he left the Times in 2010 to write fiction full time. But his latest book, out January 8, strays far from the halls of Langley and the jihadis of Afghanistan. Tell Your Children is nonfiction that takes a sledgehammer to the promised benefits of marijuana legalization, and cannabis enthusiasts are not going to like it one bit.

The book was seeded one night a few years ago when Berenson’s wife, a psychiatrist who evaluates mentally ill criminal defendants in New York, started talking about a horrific case she was handling. It was “the usual horror story, somebody who’d cut up his grandmother or set fire to his apartment — typical bedtime chat in the Berenson house,” he writes. But then, his wife added, “Of course he was high, been smoking pot his whole life.”

Berenson, who smoked a bit in college, didn’t have strong feelings about marijuana one way or another, but he was skeptical that it could bring about violent crime. Like most Americans, he thought stoners ate pizza and played video games — they didn’t hack up family members. Yet his Harvard-trained wife insisted that all the horrible cases she was seeing involved people who were heavy into weed. She directed him to the science on the subject.

We look back and laugh at Reefer Madness, which was pretty over-the-top, after all, but Berenson found himself immersed in some pretty sobering evidence: Cannabis has been associated with legitimate reports of psychotic behavior and violence dating at least to the 19th century, when a Punjabi lawyer in India noted that 20 to 30 percent of patients in mental hospitals were committed for cannabis-related insanity. The lawyer, like Berenson’s wife, described horrific crimes — including at least one beheading — and attributed far more cases of mental illness to cannabis than to alcohol or opium. The Mexican government reached similar conclusions, banning cannabis sales in 1920 — nearly 20 years before the United States did — after years of reports of cannabis-induced madness and violent crime.

Over the past couple of decades, studies around the globe have found that THC — the active compound in cannabis — is strongly linked to psychosis, schizophrenia, and violence. Berenson interviewed far-flung researchers who have quietly but methodically documented the effects of THC on serious mental illness, and he makes a convincing case that a recreational drug marketed as an all-around health product may, in fact, be really dangerous — especially for people with a family history of mental illness and for adolescents with developing brains.

A 2002 study in BMJ (formerly the British Medical Journal) found that people who used cannabis by age 15 were four times as likely to develop schizophrenia or a related syndrome as those who’d never used. Even when the researchers excluded kids who had shown signs of psychosis by age 11, they found that the adolescent users had a threefold higher risk of demonstrating symptoms of schizophrenia later on. One Dutch marijuana researcher that Berenson spoke with estimated, based on his own work, that marijuana could be responsible for as much as 10 percent of psychosis in places where heavy use is common.

These studies are hardly Reagan-esque, drug warrior hysteria. In 2017, the National Academies of Sciences, Engineering, and Medicine issued a report nearly 500 pages long on the health effects of cannabis and concluded that marijuana use is strongly associated with the development of psychosis and schizophrenia. The researchers also noted that there’s decent evidence linking pot consumption to worsening symptoms of bipolar disorder and to a heightened risk of suicide, depression, and social anxiety disorders: “The higher the use, the greater the risk.”

Given that marijuana use is up 50 percent over the past decade, if the studies are accurate, we should be experiencing a big increase in psychotic diseases. And we are, Berenson argues. He reports that from 2006 to 2014, the most recent year for which data is available, the number of ER visitors co-diagnosed with psychosis and a cannabis use disorder tripled, from 30,000 to 90,000.

Legalization advocates would say Berenson and the researchers have it backwards: Pot doesn’t cause mental illness; mental illness drives self-medication with pot. But scientists find that theory wanting. Longitudinal studies in New Zealand, Sweden, and the Netherlands spanning several decades identified an association between cannabis and mental illness even when accounting for prior signs of mental illness. In an editorial published alongside the influential 2002 BMJ study on psychosis and marijuana, two Australian psychiatrists wrote that these and other findings “strengthen the argument that use of cannabis increases the risk of schizophrenia and depression, and they provide little support for the belief that the association between marijuana use and mental health problems is largely due to self-medication.”

One of the book’s most convincing arguments against the self-medication theory is that psychosis and schizophrenia are diseases that typically strike people during adolescence or in their early 20s. But with increasing pot use, the number of people over 30 coming into the ER with psychosis has also shot up, suggesting that cannabis might be a cause of mental illness in people with no prior history of it.

Malcolm Gladwell wrote a similar piece in the New Yorker, emphasizing how little we know about marijuana compared to legal drugs, and Berenson himself has an opinion piece in the New York Times, where he points out that many of the same people pressing for marijuana legalization argued that the risks of opioid addiction could be easily managed.

Few even had wallets

Tuesday, January 22nd, 2019

A century ago the market economy was important, but a lot of economic activity still took place within the family, Peter Frost notes, especially in rural areas:

In the late 1980s I interviewed elderly French Canadians in a small rural community, and I was struck by how little the market economy mattered in their youth. At that time none of them had bank accounts. Few even had wallets. Coins and bills were kept at home in a small wooden box for special occasions, like the yearly trip to Quebec City. The rest of the time these people grew their own food and made their own clothes and furniture. Farms did produce food for local markets, but this surplus was of secondary importance and could just as often be bartered with neighbors or donated to the priest. Farm families were also large and typically brought together many people from three or four generations.

By the 1980s things had changed considerably. Many of my interviewees were living in circumstances of extreme social isolation, with only occasional visits from family or friends. Even among middle-aged members of the community there were many who lived alone, either because of divorce or because of relationships that had never gone anywhere. This is a major cultural change, and it has occurred in the absence of any underlying changes to the way people think and feel.

Whenever I raise this point I’m usually told we’re nonetheless better off today, not only materially but also in terms of enjoying varied and more interesting lives. That argument made sense back in the 1980s — in the wake of a long economic boom that had doubled incomes, increased life expectancy, and improved our lives through labor-saving devices, new forms of home entertainment, and stimulating interactions with a broader range of people.

Today, that argument seems less convincing. Median income has stagnated since the 1970s and may even be decreasing if we adjust for monetization of activities, like child care, that were previously nonmonetized. Life expectancy too has leveled off and is now declining in the U.S. because of rising suicide rates among people who live alone. Finally, cultural diversity is having the perverse effect of reducing intellectual diversity. More and more topics are considered off-limits in public discourse and, increasingly, in private conversation.

Liberalism is no longer delivering the goods — not only material goods but also the goods of long-term relationships and rewarding social interaction.

Language in this exchange does not reflect Carolina’s values

Monday, January 21st, 2019

The same nonprofit that is suing Harvard University for racial discrimination — against Asians and Whites — is now suing the University of North Carolina:

In their filing, the plaintiffs said their analysis of UNC’s admissions data showed race is a “determinative” factor for many underrepresented minorities, particularly African-American and Hispanic applicants from outside the state.

In a 2003 ruling, the Supreme Court said universities can use race as a “plus” factor in admissions, but must evaluate each applicant individually and not consider race as the defining feature of the application.

The plaintiffs also say the school has violated Supreme Court precedent by failing to seriously attempt race-neutral alternatives to achieving diversity.

Lawyers for UNC said in Friday’s filing that race is not a dominant factor in admissions. UNC said it uses a holistic approach to admissions, with application readers scoring applicants in five categories: academic program, academic performance, extracurricular activities, essays and personal qualities, like “curiosity, integrity, and history of overcoming obstacles.”

The school said race has no numerical weight at any point in the review.

[...]

For applicants to UNC-Chapel Hill in 2012, the average SAT score for admitted Asian or Asian-American students was 1431, compared with 1360 for white applicants and 1229 for African Americans, according to the plaintiffs. They said that differential, as well as a similar gap in grade-point averages, shows the school gives an unfair tip to applicants of certain races or ethnicities, despite weaker academic credentials.

In Friday’s filing, the plaintiffs also said UNC admissions readers frequently highlight the applicant’s race, citing one reader’s comment that even with an ACT score of 26, they should “give these brown babies a shot at these merit $$.” Another reader wrote, “Stellar academics for a Native Amer/African Amer kid,” the plaintiffs said.

Steve Farmer, the university’s vice provost for enrollment and undergraduate admissions, said in response: “Language in this exchange does not reflect Carolina’s values or our admissions process.”

Previously they had been a lumpenproletariat of single men and women

Monday, January 21st, 2019

Liberal regimes tend to erode their own cultural and genetic foundations, thus undermining the cause of their success:

Liberalism emerged in northwest Europe. This was where conditions were most conducive to dissolving the bonds of kinship and creating communities of atomized individuals who produce and consume for a market. Northwest Europeans were most likely to embark on this evolutionary trajectory because of their tendency toward late marriage, their high proportion of adults who live alone, their weaker kinship ties and, conversely, their greater individualism. This is the Western European Marriage Pattern, and it seems to go far back in time. The market economy began to take shape at a later date, possibly with the expansion of North Sea trade during early medieval times and certainly with the take-off of the North Sea trading area in the mid-1300s (Note 1).

Thus began a process of gene-culture coevolution: people pushed the limits of their phenotype to exploit the possibilities of the market economy; selection then brought the mean genotype into line with the new phenotype. The cycle then continued anew, with the mean phenotype always one step ahead of the mean genotype.

This gene-culture coevolution has interested several researchers. Gregory Clark has linked the demographic expansion of the English middle class to specific behavioral changes in the English population: increasing future time orientation; greater acceptance of the State monopoly on violence and consequently less willingness to use violence to settle personal disputes; and, more generally, a shift toward bourgeois values of thrift, reserve, self-control, and foresight. Heiner Rindermann has presented the evidence for a steady rise in mean IQ in Western Europe during the late medieval and early modern era. Henry Harpending and I have investigated genetic pacification during the same timeframe in English society. Finally, hbd*chick has written about individualism in relation to the Western European Marriage Pattern (Note 2).

This process of gene-culture coevolution came to a halt in the late 19th century. Cottage industries gave way to large firms that invested in housing and other services for their workers, and this corporate paternalism eventually became the model for the welfare state, first in Germany and then elsewhere in the West. Working people could now settle down and have families, whereas previously they had largely been a lumpenproletariat of single men and women. Meanwhile, middle-class fertility began to decline, partly because of the rising cost of maintaining a middle-class lifestyle and partly because of sociocultural changes (increasing acceptance and availability of contraception, feminism, etc.).

This reversal of class differences in fertility seems to have reversed the gene-culture coevolution of the late medieval and early modern era.

Liberalism delivered the goods

Sunday, January 20th, 2019

How did liberalism become so dominant?

In a word, it delivered the goods. Liberal regimes were better able to mobilize labor, capital, and raw resources over long distances and across different communities. Conservative regimes were less flexible and, by their very nature, tied to a single ethnocultural community. Liberals pushed and pushed for more individualism and social atomization, thereby reaping the benefits of access to an ever larger market economy.

The benefits included not only more wealth but also more military power. During the American Civil War, the North benefited not only from a greater capacity to produce arms and ammunition but also from a more extensive railway system and a larger pool of recruits, including young migrants of diverse origins — one in four members of the Union army was an immigrant (Doyle 2015).

During the First World War, Britain and France could likewise draw on not only their own manpower but also that of their colonies and elsewhere. France recruited half a million African soldiers to fight in Europe, and Britain over a million Indian troops to fight in Europe, the Middle East, and East Africa (Koller 2014; Wikipedia 2018b). An additional 300,000 laborers were brought to Europe and the Middle East for non-combat roles from China, Egypt, India, and South Africa (Wikipedia 2018a). In contrast, the Central Powers had to rely almost entirely on their own human resources. The Allied powers thus turned a European civil war into a truly global conflict.

The same imbalance developed during the Second World War. The Allies could produce arms and ammunition in greater quantities and far from enemy attack in North America, India, and South Africa, while recruiting large numbers of soldiers overseas. More than a million African soldiers fought for Britain and France, their contribution being particularly critical to the Burma campaign, the Italian campaign, and the invasion of southern France (Krinninger and Mwanamilongo 2015; Wikipedia 2018c). Meanwhile, India provided over 2.5 million soldiers, who fought in North Africa, Europe, and Asia (Wikipedia 2018d). India also produced armaments and resources for the war effort, notably coal, iron ore, and steel.

Liberalism thus succeeded not so much in the battle of ideas as on the actual battlefield.

Longines Chronoscope with Princess Alexandra Kropotkin

Saturday, January 19th, 2019

Longines Chronoscope with Princess Alexandra Kropotkin sounds like the title of a steampunk novel, but it’s actually a 1951 television interview with the daughter of Peter Kropotkin, one of the most prominent left-anarchist figures of the late 19th and early 20th centuries. As Jesse Walker of Reason notes, sometimes the very fact that something exists is reason enough to watch it:

Henry Hazlitt, author of Economics in One Lesson, makes a brief appearance.

If you make a community truly open it will eventually become little more than a motel

Saturday, January 19th, 2019

The emergence of the middle class was associated with the rise of liberalism and its belief in the supremacy of the individual:

John Locke (1632–1704) is considered to be the “father of liberalism,” but belief in the individual as the ultimate moral arbiter was already evident in Protestant and pre-Protestant thinkers going back to John Wycliffe (1320s–1384) and earlier. These are all elaborations and refinements of the same mindset.

Liberalism has been dominant in Britain and its main overseas offshoot, the United States, since the 18th century. There is some difference between right-liberals and left-liberals, but both see the individual as the fundamental unit of society and both seek to maximize personal autonomy at the expense of kinship-based forms of social organization, i.e., the nuclear family, the extended family, the kin group, the community, and the ethnie. Right-liberals are willing to tolerate these older forms and let them gradually self-liquidate, whereas left-liberals want to use the power of the State to liquidate them. Some left-liberals say they simply want to redefine these older forms of sociality to make them voluntary and open to everyone. Redefine, however, means eliminate. If you make a community truly “open” it will eventually become little more than a motel: a place where people share space, where they may or may not know each other, and where very few if any are linked by longstanding ties — certainly not ties of kinship.

For a long time, liberalism was merely dominant in Britain and the U.S. The market economy coexisted with kinship as the proper way to organize social and economic life. The latter form of sociality was even dominant in some groups and regions, such as the Celtic fringe, Catholic communities, the American “Bible Belt,” and rural or semi-rural areas in general. Today, those subcultures are largely gone. Opposition to liberalism is for the most part limited, ironically, to individuals who act on their own.