Self-consciously tall young men who went on to study at Cambridge

Saturday, August 18th, 2018

While listening to the audiobook version of The Hitchhiker’s Guide to the Galaxy, I started thinking about the writing style, and I was immediately reminded of Monty Python’s Flying Circus.

I did a little digging, and it turns out that Douglas Adams was a self-consciously tall young man who went on to study at Cambridge — just like John Cleese, whose autobiography, So, Anyway…, I very much enjoyed, especially as an audiobook, with Cleese himself narrating.

Adams went on to be discovered by Graham Chapman — tall, Cambridge grad — and co-wrote a Monty Python sketch with him:

Adams is one of only two people other than the original Python members to get a writing credit (the other being Neil Innes).

Adams had two brief appearances in the fourth series of Monty Python’s Flying Circus. At the beginning of episode 42, “The Light Entertainment War”, Adams is in a surgeon’s mask (as Dr. Emile Koning, according to on-screen captions), pulling on gloves, while Michael Palin narrates a sketch that introduces one person after another but never gets started. At the beginning of episode 44, “Mr. Neutron”, Adams is dressed in a pepper-pot outfit and loads a missile onto a cart driven by Terry Jones, who is calling for scrap metal (“Any old iron…”). The two episodes were broadcast in November 1974.

Anyway, I found Adams’ style very, very English, and thus Stephen Fry’s narration fit it very, very well. What’s that? Why, yes, Stephen Fry is conspicuously tall, isn’t he? I wonder where he went to… Oh! Cambridge! Fancy that.

When the West started losing wars

Saturday, August 18th, 2018

When the West took up the “White Man’s Burden” is when the West started losing wars:

It led the British general who was invading Afghanistan to believe he was doing Afghans a favor, and that if he was sufficiently nice to them they would throw flowers at his troops. So he forbade his troops to take necessary measures for self-defense, and, as a result, he and his troops died.

The white man’s burden was profoundly counterproductive to social cohesion, because it led the British to sacrifice near (British officers and troops) for far (Afghan officers and troops).

If it is a burden, then you proceed to conspicuously display your holiness by burden carrying — which is apt to mean making your troops carry burdens.

Before the British intervened in Afghanistan, the most recent news that most people had of it was records of Alexander’s army passing through two millennia ago.

The empire of the East India company was expanding, and the empire of the Russias was expanding, and it was inevitable that the two would meet. And so it came to pass that the Kings of Afghanistan encountered both, and played each against the other.

When the British became aware of Afghanistan, they interpreted its inhabitants as predominantly white or whitish – as descendants of Alexander’s troops and camp followers and/or descendants of Jews converted to Islam at swordpoint.

Afghanistan was, and arguably still is, an elective monarchy, and the fractious electors tended to fight each other and elect weak kings who could scarcely control their followers, and so it has been ever since Alexander’s troops lost Alexander.

Mister Mountstuart Elphinstone, in his account of his mission to Kabul in 1809, says he once urged upon a very intelligent old man of the tribe of Meankheile the superiority of a quiet life under a powerful monarch over the state of chaotic anarchy that so frequently prevailed.

The reply was “We are content with alarms, we are content with discord, we are content with blood, but we will never be content with a master!”

As Machiavelli observed, such places are easy to conquer, but hard to hold, and so it proved.

To conquer and hold such places, one must massacre, castrate, or enslave all of the ruling elite that seems fractious, which is pretty much all of them, and replace them with your own people, speaking your own language, and practicing your own customs, as the Normans did in England, and the French did in Algeria, starting in 1830. The British of 1840, however, had no stomach for French methods, and were already starting to fall short of the population growth necessary for such methods.

So what the British could have done is pay the occasional visit to kill any king that they found obnoxious, kill his friends, family, children, and leading supporters, install a replacement king, and leave. The replacement king would have found his throne shaky, because Afghan Kings have usually found their thrones shaky, but the British did not need to view that as their problem, knowing the solution to that problem to be drastic and extreme. If the throne has been shaky for two thousand years, it is apt to be difficult to stop it from rocking.

After a long period of disorderly violence, where brother savagely tortured brother to death, and all sorts of utterly horrifying crimes were committed, King Dost Mahomed Khan took power in Kabul in 1826, and proceeded to rule well, creating order, peace, and prosperity, and receiving near universal support from the fractious and quarreling clans of Afghanistan.

The only tax under his rule was a tariff of one fortieth on goods entering and leaving the country. This and the Jizya poll tax are the only taxes allowed by the Koran, at least as Islamic law is interpreted in this rebellious country, which has historically been disinclined to pay taxes. Because this tax was actually paid, it brought him unprecedented revenues. On paying this tax, “the merchant may travel without guard or protection from one border to the other, an unheard of circumstance.”

However, he did not rule Herat, which was controlled by one of his enemies, who had been King before and had ambitions to be King again. He therefore offered Herat to the Shah of Persia in return for the Shah’s support against another of his enemies, Runjeet Singh. He was probably scarcely aware that Runjeet Singh was allied to the British, and the Shah was allied to the Tsar of all the Russias.

Notice that this deal was remarkably tight-fisted, as was infamously typical of deals made by Dost Mahomed Khan. He would give the Persians that which he did not possess, in return for them taking care of one of his enemies and helping him against another.

The British East India Company, however, saw this as Afghanistan moving into the Russian empire, though I am pretty sure that neither the Shah of Persia nor the King of Afghanistan thought they were part of anyone’s empire.

So Russia and the East India Company sent ambassadors to the King of Afghanistan, who held a bidding contest asking which of them could best protect him against Runjeet Singh. He then duplicitously accepted both bids, which was too clever by half, though absolutely typical of the deals he made with his neighbors.

Dost Mahomed Khan was a very clever king, but double crossing the East India Company was never very clever at all. No one ever got ahead double crossing the East India Company. It is like borrowing money from the Mafia and forgetting to pay them back.

Russia and England then agreed to not get overly agitated over the doings of unreliable and duplicitous proxies that they could scarcely control – which agreement the East India Company took as permission to hold a gun to the head of the Shah of Persia. The East India Company seized control of the Persian Gulf, an implicit threat to invade if the Shah intervened in Afghanistan to protect Dost Mahomed Khan. It then let Runjeet Singh off the leash, and promised to support his invasion of Afghanistan.

So far, so sane. If someone double-crosses you, you make a horrible example of him, and no one will do it again. Then get out, and whoever rules in Afghanistan, if anyone does manage to rule, will refrain from pissing you off a second time.

The British decided to give a large part of Afghanistan to Runjeet Singh, and install Shah Shoudjah-ool-Moolk, a kinglet with somewhat plausible pretensions to the Afghan throne, in place of Dost Mahomed Khan.

Up to this point everything the East India Company is doing is sane, honorable, competent, just, and wonderfully eighteenth century.

Unfortunately, it is the nineteenth century. And the nineteenth century is when the rot set in.

His Majesty Shah Shoudjah-ool-Moolk will enter Afghanistan, surrounded by his own troops, and will be supported against foreign interference, and factious opposition, by the British Army. The Governor-general confidently hopes, that the Shah will be speedily replaced on his throne by his own subjects and adherents, and that the independence and integrity of Afghanistan established, the British army will be withdrawn. The Governor-general has been led to these acts by the duty which is imposed upon him, of providing for the security of the possessions of the British crown, but he rejoices, that, in the discharge of this duty, he will be enabled to assist in restoring the union and prosperity of the Afghan people.

So: The English tell themselves and each other: We are not smacking Afghans against a wall to teach them not to play games with the East India Company. On the contrary, we are doing them a favor. A really big favor. Because we love everyone. We even love total strangers in far away places very different from ourselves. We are defending the independence of Afghanistan by removing the strongest King it has had in centuries and installing our puppet, and defending its integrity by arranging for invasion, conquest, rape and pillage by its ancient enemies the Sikhs, in particular Runjeet Singh. Because we love far away strangers who speak a language different from our own and live in places we cannot find on the map. We just love them to pieces. And when we invade, we will doubtless be greeted by people throwing flowers at us.

You might ask: who would believe such guff? Obviously not the Afghans, who are being smacked against the wall. Obviously not the Russians. Obviously not the Persians. Obviously not the British troops, who are apt to notice they are not being pelted with flowers.

The answer is, the commanding officer believed this guff. And not long thereafter, he and his troops died of it, the first great defeat of British colonialism. And, of course, the same causes are today leading to our current defeat in Afghanistan.

The commanding officer of the British expedition made a long series of horrifyingly evil and stupid decisions, which decisions only made sense if he was doing the Afghans a big favor, if the Afghans were likely to appreciate the big favor he was doing them, and if his troops were being pelted with flowers, or Afghans were likely to start pelting them with flowers real soon now. The East India Company was no stranger to evil acts, being in the business of piracy, brigandry, conquest, and extortion, but people tend to forgive evil acts that lead to success, prosperity, good roads, safe roads, and strong government. These evil acts, the evil acts committed by the British expedition to Afghanistan, are long remembered because they led to failure, defeat, lawlessness, disorder, and weak government.

As a result, he, his men, and their camp followers, were all killed.

It is their pleasure to open for you

Friday, August 17th, 2018

Netflix has a new animated show coming out, called Next Gen, which features lots of robots:

What caught my attention though was the self-satisfied door at the end of the trailer, since I had just listened to this passage, from The Hitchhiker’s Guide to the Galaxy:

“All the doors in this spaceship have a cheerful and sunny disposition. It is their pleasure to open for you, and their satisfaction to close again with the knowledge of a job well done.”

Psychological stress induces neural inflammation and thus depression

Friday, August 17th, 2018

A group of Japanese researchers has discovered that neural inflammation caused by our innate immune system plays an unexpectedly important role in stress-induced depression:

Previous research had already hinted at the link between inflammation and depression: increased levels of inflammation-related cytokines in the blood of patients suffering from depression, activation of microglia (inflammation-related cells in the brain) in depressive patients, and a high percentage of depression outbreaks in patients suffering from chronic inflammatory disease. However, the exact relationship between depression and inflammation still contains many unknowns. Psychological stress caused by social and environmental factors can trigger a variety of changes in both mind and body. Moderate levels of stress will provoke a defensive response, while extreme stress can lower our cognitive functions, cause depression and elevated anxiety, and is a risk factor for mental illnesses. The research team focused on repeated social defeat stress (a type of environmental stress) with the aim of clarifying the mechanism that causes an emotional response to repeated stress.

First, they looked at changes of gene expression in the brain caused by repeated social defeat stress and found that repeated stress increased a putative ligand for the innate immune receptors TLR2 and TLR4 (TLR2/4) in the brain. Their next step was to investigate the role of TLR2/4 in repeated stress using a mouse with the TLR2/4 genes deleted. They found that TLR2/4-deficient mice did not show social avoidance or extreme anxiety when exposed to repeated stress. Repeated stress usually triggers microglial activation in specific areas of the brain such as the medial prefrontal cortex, causing impaired response and atrophy of neurons, but these responses were not present in the TLR2/4-deficient mice.

The research team then developed a method to selectively block the expression of TLR2/4 in the microglia of specific areas of the brain. By blocking the expression of TLR2/4 in the microglia of the medial prefrontal cortex, they managed to suppress depressive behavior in response to repeated social defeat stress. They found that repeated stress induced the expression of inflammation-related cytokines IL-1a and TNFa in the microglia of the medial prefrontal cortex via TLR2/4. The depressive behavior was suppressed by treating the medial prefrontal cortex with neutralizing antibodies for the inflammation-related cytokines.

These results show that repeated social defeat stress activates microglia in the medial prefrontal cortex via the innate immune receptors TLR2/4. This triggers the expression of inflammation-related cytokines IL-1a and TNFa, leading to the atrophy and impaired response of neurons in the medial prefrontal cortex, and causing depressive behavior.

Removing all barriers to communication between different races and cultures

Thursday, August 16th, 2018

I still enjoy the Babel Fish segment from the old BBC version of The Hitchhiker’s Guide To The Galaxy:

That last witty bit seems more darkly humorous than I remembered:

Meanwhile, the poor Babel fish, by effectively removing all barriers to communication between different races and cultures, has caused more and bloodier wars than anything else in the history of creation.

From Terman to Today

Thursday, August 16th, 2018

David Lubinski of Vanderbilt University reviews a century of findings on intellectual precocity:

As Terman launched his longitudinal study in 1921, Hollingworth, Pressey, Thorndike, and others advocated for the special educational needs of intellectually precocious students and the importance of studying them (Witty, 1951). In a compelling publication in Science, “The Gifted Student and Research,” Seashore (1922) argued that, for every 100 incoming college freshmen chosen at random, the top five assimilate five times as much information as the bottom five, and stressed that these differences necessitate different opportunities for meeting their respective needs. He emphasized that optimal learning environments for all students avoided the undesirable extremes of frustration and boredom destined for appreciable numbers of students when inflexible, lock-step learning environments were enforced upon all.

Adjusting the depth and pace of the curriculum to the rate at which each student learned would “keep each student busy at his highest level of achievement in order that he may be successful, happy, and good” (italics in original, Seashore, 1922, p. 644). For the gifted, Seashore recommended that instead of whipping them into line, we “whip them out of line.” Seashore (1930, 1942) leveraged this idea when he marshaled his campaign for establishing honors colleges throughout major U.S. universities. Although his name does not always surface in historical treatments of the gifted movement, Seashore’s impact was profound (Miles, 1956). He traveled to 46 of the contiguous states within the United States meeting with university officials to discuss the importance of honors colleges and more challenging curricula and opportunities for the most talented university students.

Large-scale empirical evidence for these considerations was introduced a few years later by the extensive longitudinal findings of Learned and Wood (1928, 1938). Figure 1 is reproduced from their extensive analysis of tens of thousands of high school and college students, many of whom were tracked for years and systematically assessed on academic knowledge. For decades, major textbooks on individual differences (Anastasi, 1958; Tyler, 1965; Willerman, 1979) and policy recommendations for restructuring classrooms (Benbow & Stanley, 1996; Pressey, 1949; Terman, 1954a) cited this important study. It was cited as empirical evidence for why instruction needs to be adjusted to the individual learning needs of each student — and intellectually precocious students, in particular.

[Figure 1, reproduced from Learned and Wood (1938)]

When Terman (1939) reviewed Learned and Wood (1938) for the Journal of Higher Education, he regarded it as the most relevant research contribution that addressed higher education problems in the United States. Terman (1939, p. 111) maintained it “warrants a thorough overhauling of our educational procedures,” because it documented the extent to which vast knowledge differentials exist among students in lock-step systems. It demonstrated that the range of individual differences in knowledge among high school seniors, college sophomores, and college seniors, across wide varieties of professionally developed achievement tests, was vast. For example, about 10% of 12th-grade students younger than 18 years of age had more scientific knowledge than the average college senior. Within all grade levels, younger students were more knowledgeable than the older students. And, if graduation from college were based on demonstrated knowledge rather than time in the educational system, a full 15% of the entering freshmen class would be deemed ready to graduate. Indeed, they would make the top 20% cut on the broad-spectrum 1,200-item achievement test in the combined (Freshman + Sophomore + Junior + Senior) college sample.

(Hat tip to Eric Raymond.)

His job is not to wield power but to draw attention away from it

Wednesday, August 15th, 2018

The newish Hitchhiker’s Guide to the Galaxy movie (which is on HBO Now through the end of the month) didn’t catch my fancy, but the audiobook (narrated by Stephen Fry) did, and this prescient passage caught my attention:

The President in particular is very much a figurehead — he wields no real power whatsoever. He is apparently chosen by the government, but the qualities he is required to display are not those of leadership but those of finely judged outrage. For this reason the President is always a controversial choice, always an infuriating but fascinating character. His job is not to wield power but to draw attention away from it. On those criteria Zaphod Beeblebrox is one of the most successful Presidents the Galaxy has ever had — he has already spent two of his ten presidential years in prison for fraud.

Has the United States now arrived at the brink of a veritable civil war?

Wednesday, August 15th, 2018

How, when, and why, Victor Davis Hanson asks, has the United States now arrived at the brink of a veritable civil war?

Globalization

Globalization had an unfortunate effect of undermining national unity. It created new iconic billionaires in high tech and finance, and their subsidiaries of coastal elites, while hollowing out the muscular jobs largely in the American interior.

Ideologies and apologies accumulated to justify the new divide. In a reversal of cause and effect, losers, crazies, clingers, American “East Germans,” and deplorables themselves were blamed for driving industries out of their neighborhoods (as if the characters out of Duck Dynasty or Ax Men turned off potential employers). Or, more charitably to the elites, the muscular classes were too racist, xenophobic, or dense to get with the globalist agenda, and deserved the ostracism and isolation they suffered from the new “world is flat” community. London and New York shared far more cultural affinities than did New York and Salt Lake City.

Meanwhile, the naturally progressive, more enlightened, and certainly cooler and hipper transcended their parents’ parochialism and therefore plugged in properly to the global project. And they felt that they were rightly compensated for both their talent and their ideological commitment to building a better post-American, globalized world.

One cultural artifact was that as our techies and financiers became rich, as did those who engaged in electric paper across time and space (lawyers, academics, insurers, investors, bankers, bureaucratic managers), the value of muscularity and the trades was deprecated. That was a strange development. After all, prestige cars, kitchen upgrades, gentrified home remodels, and niche food were never more in demand by the new elite. But who exactly laid the tile, put the engine inside the cars, grew the arugula, or put slate on the new hip roof?

In this same era, a series of global financial shocks, from the dot-com bust to the more radical 2008 near–financial meltdown, reflected a radical ongoing restructuring in American middle-class life, characterized by stagnant net income, family disintegration, and eroding consumer confidence. No longer were youth so ready to marry in their early twenties, buy a home, and raise a family of four or five. Compensatory ideology made the necessary adjustments to explain the economic doldrums and began to characterize what was impossible first as undesirable and later as near toxic. Pajama Boy sipping hot chocolate in his jammies, and the government-subsidized Life of Julia profile, became our new American Gothic.

High Tech

The mass production of cheap consumer goods, most assembled abroad, redefined wealth or, rather, disguised poverty. Suddenly the lower middle classes and the poor had in their palms the telecommunications power of the Pentagon of the 1970s, the computing force of IBM in the 1980s, and the entertainment diversity of the rich of the 1990s. They could purchase big screens for a fraction of what their grandparents paid for black-and-white televisions and with a computer be entertained just as well cocooning in their basement as by going out to a concert, movie, or football game.

The Campus

Higher education surely helped split the country in two. In the 1980s, the universities embraced two antithetical agendas, both costly and reliant on borrowed money. On the one hand, campuses competed for scarcer students by styling themselves as Club Med–type resorts with costly upscale dorms, tony student-union centers, lavish gyms, and an array of in loco parentis social services. The net effect was to make colleges responsible not so much for education, but more for shielding now-fragile youth from the supposed reactionary forces that would buffet them after graduation.

An entire generation of students left college with record debt, mostly ignorant of the skills necessary to read, write, and argue effectively, lacking a general body of shared knowledge — and angry. They were often arrogant in their determination to actualize the ideologies of their professors in the real world. A generation ignorant, arrogant, and poor is a prescription for social volatility.

Illegal Immigration

Immigration was recalibrated hand-in-glove by progressives who wanted a new demographic to vote for leftist politicians and by Chamber of Commerce conservatives who wished an unlimited pool of cheap unskilled labor. The result was waves of illegal, non-diverse immigrants who arrived at precisely the moment when the old melting pot was under cultural assault.

The Obama Project

We forget especially the role of Barack Obama. He ran as a Biden Democrat renouncing gay marriage, saying, “I believe marriage is between a man and a woman. I am not in favor of gay marriage.” Then he “evolved” on the question and created a climate in which to agree with this position could get one fired. He promised to close the border and reduce illegal immigration: “We will try to do more to speed the deportation of illegal aliens who are arrested for crimes, to better identify illegal aliens in the workplace. We are a nation of immigrants. But we are also a nation of laws.” Then he institutionalized the idea that to agree with that now-abandoned agenda was a career-ender.

Read the whole thing. (I edited down each point.)

Surely You’re Joking, Mr. Feynman! is free to read via Prime Reading

Tuesday, August 14th, 2018

If you recently enjoyed Feynman’s anecdote about an early wartime engineering job he had, you should know that Amazon’s Prime Reading program just added Surely You’re Joking, Mr. Feynman! to its list of free Kindle reads.

Peterson comes across as pompous, self-absorbed, and not very self-aware

Tuesday, August 14th, 2018

Like many folks recently, Robin Hanson decided to learn more about Jordan Peterson, so he read Maps of Meaning:

He doesn’t offer readers any degree of certainty in his claims, nor distinguish in which claims he’s more confident. He doesn’t say how widely others agree with him, he doesn’t mention any competing accounts to his own, and he doesn’t consider examples that might go against his account. He seems to presume that the common underlying structures of past cultures embody great wisdom for human behavior today, yet he doesn’t argue for that explicitly, he doesn’t consider any other forces that might shape such structures, and he doesn’t consider how fast their relevance declines as the world changes. The book isn’t easy to read, with overly long and obscure words, and way too much repetition. He shouldn’t have used his own voice for his audiobook.

In sum, Peterson comes across as pompous, self-absorbed, and not very self-aware. But on the one key criterion by which such a book should most be judged, I have to give it to him: the book offers insight. The first third of the book felt solid, almost self-evident: yes such structures make sense and do underlie many cultural patterns. From then on the book slowly became more speculative, until at the end I was less nodding and more rolling my eyes. Not that most things he said even then were obviously wrong, just that it felt too hard to tell if they were right. (And alas, I have no idea how original is this book’s insight.)

Hanson shares one of his own insights that he had while reading the book:

It occurs to me that this [evolvability] is also an advantage of traditional ways of encoding cultural values. An explicit formal encoding of values, such as found in modern legal codes, is far less redundant. Most random changes to such an abstract formal encoding create big bad changes to behavior. But when values are encoded in many stories, histories, rituals, etc., a change to any one of them needn’t much change overall behavior. So the genotype can drift until it is near a one-step change to a better phenotype. This allows culture to evolve more incrementally, and avoid local maxima.

Implicit culture seems more evolvable, at least to the extent slow evolution is acceptable. We today are changing culture quite rapidly, and often based on pretty abstract and explicit arguments. We should worry more about getting stuck in local maxima.

See if you detect any testiness, confusion, or exasperation

Monday, August 13th, 2018

James Fallows — who trained for and got his instrument rating at Boeing Field in Seattle in 1999, and flew frequently in Seattle airspace when he lived there in 1999 and 2000 — reviews the Seattle plane crash:

The specifics: The most useful overall summary I’ve seen is in The Aviationist. It gives details about the plane (a Horizon Air Bombardier Dash 8, with no passengers aboard but capable of carrying more than 70); the route of flight; the response of air traffic control; and the dispatch of two F-15 fighter jets from the Oregon Air National Guard’s base, in Portland, which broke the sound barrier en route toward Seattle and were prepared if necessary to shoot down the errant plane.

The real-time drama: A video of the plane’s barrel rolls and other maneuvers, plus the F-15 interception, from John Waldon of KIRO, is here. The recordings of the pilot’s discussions with air traffic control (ATC) are absolutely riveting. A 10-minute summary, featuring the pilot’s loopy-sounding stream-of-consciousness observations in what were his final moments of life, is here. A 25-minute version, which includes the other business the Seattle controllers were doing at the same time, is here. The pilot makes his final comments at around time 19:00 of this longer version. A few minutes later, you hear the controllers telling other waiting airline pilots that the “ground hold” has been lifted and normal operations have resumed. In between, the controllers have learned that the pilot they were talking to has flown his plane into the ground.

How did he do it? Part 1: The Dash 8, which most airline passengers would think of as a “commuter” or even a “puddle jumper” aircraft, differs from familiar Boeing or Airbus longer-haul planes in having a built-in staircase. When the cabin door opens, a set of stairs comes out, and you can walk right onto the plane. This is a very basic difference from larger jets. The big Boeing and Airbus planes require a “Jetway” connection with the terminal, which is the normal way that passengers, flight crew, and maintenance staff get on and off, or an external set of stairs. Also, big jets usually require an external tug to pull or push them away from the Jetway and the terminal, before they can taxi to the runway. They cannot just start up and drive away, as the Dash 8 did. Was the Dash 8’s door already open, and the stairs down, so a ground-staff member could just walk on? Did he have to open the door himself? I don’t know. But either way, anyone who has been to a busy airport knows that it’s normal rather than odd to see ground-crew members getting into planes.

How did he do it? Part 2: However the pilot started the plane (switches? spare set of keys?), the available ATC recordings suggest he didn’t fool the Seattle controllers into giving him permission to taxi to the runway or take off. He just started taxiing, rolled onto the runway, accelerated, and left. As you can hear from the 25-minute recording, ATC at big, busy airports is an elaborately choreographed set of permissions—to push back from the gate, to taxi to a specific runway, to move onto the runway, to take off. For safety reasons (avoiding collisions on the runway), in this case the Seattle controllers had to tell normal traffic to freeze in position, as the unknown rogue plane barged through.

How did he do it? Part 3: In the 10-minute ATC version, you can hear the pilot asking what different dials mean, saying that he knows about airplanes only from flight simulators, and generally acting surprised about where he finds himself. But the video shows him performing maneuvers that usually require careful training—for instance, leveling off the plane after completing a barrel roll. Was this just blind luck? The equivalent of movie scenes of a child at the wheel of a speeding car, accidentally steering it past danger? Was his simulator training more effective than he thought? Did he have more flying background than he let on? At the moment I’ve seen no explanation of this discrepancy.

How everyone else did: I challenge anyone to listen to the ATC tapes, either the condensed or (especially) the extended version, and not come away impressed by the calm, humane, sophisticated, utterly unflappable competence of the men and women who talked with the pilot while handling this emergency. My wife, Deb, has written often about the respect she’s gained for controllers by talking with them in our travels over the years. These are public employees, faced with a wholly unprecedented life-and-death challenge, and comporting themselves in a way that does enormous credit to them as individuals and to the system in which they work. In addition to talking to the hijacker pilot, Seattle ATC was talking with the scores of other airline pilots whose flights were affected by the emergency. See if you detect any testiness, confusion, or exasperation in those pilots’ replies.

We all know that the voice of the airline pilot is calm, not testy.

(Hat tip to my father.)

Robertson’s The Last Utopians is instructive and touching, if sometimes inadvertently funny

Monday, August 13th, 2018

Michael Robertson’s The Last Utopians: Four Late Nineteenth-Century Visionaries and Their Legacy is instructive and touching, Adam Gopnik says, if sometimes inadvertently funny:

The instructive parts rise from Robertson’s evocation and analysis of a series of authors who aren’t likely to be well known to American readers, even those of a radical turn of mind. All four wrote books and imagined ideal societies with far more of an effect on their time than we now remember. The touching parts flow from the quixotic and earnest imaginations of his heroes and heroine: the pundit Edward Bellamy, the designer William Morris, the pioneering gay writer Edward Carpenter, and the feminist social reformer Charlotte Perkins Gilman. His utopians showed enormous courage in imagining and, to one degree or another, trying to create new worlds against the grain of the one they had inherited. They made blueprints of a better place, detailed right down to the wallpaper, and a pleasing aura of pious intent rises from these pages.

The comedy, which is inadvertent, springs from Robertson’s absence of common sense about these utopian projects, pious intent being very different from pragmatic achievement. Hugely sympathetic to his subjects, he discovers again and again as he inspects their projects that, for all the commendable bits that anticipate exactly the kinds of thing we like now, there are disagreeable bits right alongside, of exactly the kinds that we don’t like now. The utopian feminists are also eugenicists and anti-Semites; the men who dream of a perfect world where same-sex attraction is privileged also unconsciously mimic the hierarchy of patriarchy, putting effeminate or cross-dressing “Uranians” at the bottom of their ladder. The socialists are also sexists, and the far-seeing anarchists are also muddle-headed, mixed-up mystics.

The sensible lesson one might draw from this is that the human condition is one in which the distribution of bad and good is forever in flux, and so any blueprint of perfection is doomed to failure. Instead, Robertson assumes that if we can just add to the utopian visions of 1918 the progressive pieties of 2018 — if we reform their gender essentialism and their implicit hierarchism and several other nasty isms — then we will at last arrive at the right utopia.

Read the whole thing.

Organisation is suppression

Sunday, August 12th, 2018

Nick Land argues that organisation is suppression in a 1997 Wired UK interview:

According to Dr. Nick Land, lecturer in Continental Philosophy at the University of Warwick (a title that he hates), pretty much everything the Western tradition has come up with in the way of thinking about itself and the world around it is not only wrong but bad. Using the work of French writers Gilles Deleuze and Felix Guattari as a jumping off point, Land substitutes a vision of a world of flux forever constructing and reconstructing itself via the operations of countless “machinic processes” for the models supplied by the linear, rationalist thought of the classical, modernist and postmodernist traditions. He draws parallels between the processes of late twentieth century capitalism, fascism, and schizophrenia, and strongly resists attempts to categorise his work, ridiculing the notion that there is even such a thing as “philosophy”. He has no time for the academic consensus that you have to produce a turgid tome every two years to prove that you are “serious”. At present, his favoured medium is multimedia performance, and he works closely with arts collective Orphan Drift.

James Flint: Why is it that much of the content on the Internet, this supposedly amazingly democratic, anarchic forum, is becoming dull and corporate and organised?

Nick Land: Your question suggests that there’s some pre-existing social pool of liberatory, revolutionary, emancipatory creative potential that could be expected to spontaneously express itself as soon as it had an opportunity to do so. But there is no such intrinsic power of innovation latent in the human organism that’s just waiting to bounce out onto the web. So the question really is what are the assemblages that are emerging? And correspondingly to what extent are distributed systems becoming operative as such?

JF: So how do systems which are initially freeform and distributed give way to centralised power structures?

NL: You have to understand that organisation involves subordinating low level units to some higher level functional program. In the most extreme cases, like in biological organisms, every cell is defunctionalised, turned off, except for that one specialised function that it is allocated by the organic totality. And hence the preponderant part of its potential is deactivated in the interests of some higher level unity. That’s why the more organised things get, the less interesting their behaviour becomes — “interesting” simply meaning here how freely they explore a range of possible behaviours, or how “nomadic” they are.

JF: I take it from that that you are not as keen on the idea of “self-organisation” as some thinkers.

NL: Organisation is suppression. It’s more accurate to say that systems which avoid self-organisation whilst maintaining trajectories of productive innovation end up parasitically inhabited by organisms of all kinds, whether those organisms are biological organisms, corporations or state systems. The history of life on this planet right through to Microsoft is of the successive suppression of distributed, innovative systems.

JF: Can you give me an example?

NL: Well, first of all one has autocatalytic chemical systems that are subject to code control by RNA. When RNA begins to complicate enough to start exhibiting various kinds of lateral interference and experimental deviations, it becomes overcoded by DNA. The absolute crucial event in the whole history of the planet is the point at which the earth’s bacterial life system — which is very loosely code controlled, comparatively — is subjected to exterminatory gassing by oxygen-emitting, massively highly structured securo-maniac metazoan organisms. Many of the bacteria disappear except insofar as they are captured as productive subcomponents of highly organised, nucleated, concentrational systems which are now what dominate all life on the planet and have done for five hundred million years.

JF: So how would you interpret the classical picture of evolution as a tree-like structure?

NL: The bacterial net is successively suppressed by levels of organisation, tiers of control that have a tree-like structure. But that tree-like structure is not at all inherent, it’s actually produced by organisation. It’s incredibly similar to the relation between corporations and markets, in the sense that markets are potentially open ended, distributed transaction systems which are subjected to regularisation, hierarchical structuralisation, specialisation and concentration by the corporate structures that superimpose themselves upon them.

JF: Might the widespread use of computers and the net challenge these structures?

NL: The thing about the potentialities of massively distributed computation capacity is that they disperse productive potential. And there’s a certain sense in which the personal computer introduces a fundamental break in the traditional structure of investment by being simultaneously a piece of consumer electronics and a piece of productive apparatus. But although this is the case, the old structures are being artificially maintained.

JF: How?

NL: Buying a personal computer is treated as productive investment if it is done by a corporate entity and as a piece of personal consumption if it is done by dis-integrated [sic] consumers. And presumably this kind of trompe l’oeil is getting results, because the intersection between software, broadcast media and telecommunications is at the moment in an absolute orgiastic state of capital concentration. And clearly the key actors in this sector think that their strategies are based upon some viable avenue of continued advantage — a continuation of the modernist situation of economies of scale, if you like. Their picture is clearly not one of disintegration into small scale horizontal agents.

JF: But can’t the net itself help us overcome these illusions, through increasingly universal access to knowledge and communication?

NL: Certainly the great potential in the technical infrastructure of the net is the telecoms base rather than the broadcasting base. This is not a very original thought, but nevertheless it seems of crucial importance. Capitalist and state organisations have an absolutely immense investment in disabling the telecoms dynamics of the forthcoming digital media system. But that doesn’t mean that much has yet been done that is particularly exciting with this telecoms infrastructure. The more of it the better, the more that you have a multi-switched high bandwidth communications oriented digital system rather than a one to many broadcast oriented, media-production-media-consumption oriented system, the more chance there is of actually eliciting innovative behaviour out of innovative systems. But I’d be very cynical with regard to the extent to which we have seen any of that yet.

Extraordinarily pessimistic, and yet still extraordinarily motivational

Saturday, August 11th, 2018

Peter Thiel speaks to Die Weltwoche, in English — after beginning the conversation in German with an American accent:

At the moment, Silicon Valley still looks all-powerful.

The big question is: Will the future of the computer age be decentralized or centralized? Back in the 60s, you had this Star Trek idea of an IBM computer running a planet for thousands of years, where people were happy but unfree. Today, again we are thinking that it is going to be centralized: Big companies, big governments, surveillance states like China. When we started Paypal in 1999, it was exactly the opposite: This vision of a libertarian, anarchistic internet. History tells me that the pendulum has swung back and forth. So, today I would bet on decentralization and on more privacy. I don’t think we are at the end of history and it’s just going to end in the world surveillance state.

What has become the problem with Silicon Valley?

One of the paradoxes of Silicon Valley is that this internet technology revolution is supposed to get rid of the tyranny of place and geography. And yet, it was all happening in one place. There is, however, always a tipping point with network effects. At the beginning, they are very positive, but at some point they can become negative. In economic terms, they become negative when the costs get too high. If you have to pay 2000 dollars a month for a one-bedroom apartment in San Francisco, maybe that is a sign of the boom. But when it is 4000 dollars a month – with a city government where the police don’t work, the roads don’t work, the schools don’t work – 4000 dollars is just a very high tax, in effect. There is also a cultural component: At one point, the wisdom of crowds tips into the madness of crowds – and you end up with a sort of conformity, lemming-like behavior. It actually becomes a somewhat less creative place.

You label yourself a “contrarian”. How did you become one? How does one become a contrarian?

It is a label that has been given to me, not one that I give normally to myself. I don’t think a contrarian per se is the right thing to be. A pure contrarian just attaches a minus sign to whatever the crowd thinks. I don’t think it should be as simple as that. What I think is important for people is to try to think very hard for oneself. But yes, I do deeply mistrust all these kinds of almost hypnotic mass and crowd phenomena and I think they happen to a disturbing degree.

Why do they happen in a supposedly enlightened society?

The advanced technological civilization of the early 21st century is a complicated world where it is not possible for anybody to think through everything for themselves. You cannot be a polymath in quite the way people were in the 18th century enlightenments. You cannot be like Goethe. So there is some need to listen to experts, to defer to other people. And then, there is always the danger of that going too far and people not thinking critically. This happens in spades in Silicon Valley. There is certainly something about it that made it very prone to the dotcom bubble in the nineties or to the cleantech bubble in the last decade.

Tell us about how your support for Donald Trump for president of the United States was received in the Silicon Valley.

That was quite striking. My support for Donald Trump was, on some level, the least contrarian thing I have ever done. If it is half the country, it cannot be that contrarian. And yet, in the Silicon Valley context it has felt extraordinarily contrarian. It is not that politics is the most important thing. I think there are many things that are much more important than politics: Science is more important, technology is more important, philosophy, religion… We normally think that political correctness is literally about politics. But politics is sort of a natural place to start. If you cannot even have differences of opinion in politics, that’s a sign that things are very unhealthy.

What was unique about the Trump campaign?

Republican candidates have always been way too glibly optimistic about everything. I’ve thought for many years that it was critical for the Republicans to somehow run a more pessimistic candidate just because that was a more honest description of what was going on. It is very hard to know how to do that because if you are too pessimistic, you demotivate people: If everything is just going down the drain, no point even voting for me. Somehow, the genius of Trump was that it was extraordinarily pessimistic, and yet still extraordinarily motivational. The slogan “Make America Great Again”, the most pessimistic slogan of any presidential candidate in a hundred years: The country used to be great, it is no longer great. That is a shocking, shocking statement!

Another issue that is debated very controversially is Trump’s trade policy. People are shocked by his imposition of tariffs.

At the center of this is the question of China. The US exports something like 100 bn a year to China; we import 475 bn. What’s extraordinary is that if we had a globalizing world, we would actually expect the reverse to hold: you would expect the US to have trade surpluses with China and current account surpluses, because we would expect that there is a higher return in China because it is a faster growing country than the US. This is what it looked like, let’s say, in 1900, when Great Britain had a trade surplus of 2 percent and a current account surplus of 4 percent of GDP. And the extra capital was invested in Argentinean railroads or Russian bonds.

The fact that the US does not have a surplus, that actually it has a massive deficit, tells you that something is completely wrong with the standard globalization picture that we have. It is sort of like: Chinese peasants are saving money and it is flowing uphill into low-return investments in the US and bonds in Europe with negative interest rates. There is something completely crazy about that dynamic.

What’s your view on Switzerland?

Switzerland is an extraordinarily well-functioning country. I don’t like the neighborhood it is in, but it is really remarkable. If you compare Switzerland with Austria or Scandinavia, human capital is equally good, but the per capita income in Switzerland is 50 to 100 percent higher. It does tell you that there is something that people are doing that is dramatically better. The question is whether its cities are big enough. If you are a talented young person: Do you move to Geneva or do you move to London? It would be good if Switzerland had a somewhat better answer to that sort of question. But as I stated at the beginning, I think the technology will be more decentralized, and so I think what has been a limitation for Switzerland will be much less of one going forward.

Are you jealous that you didn’t invent Bitcoin?

It is hard to be jealous of something that you weren’t remotely capable of doing. I have to acknowledge I would never have come up with anything like that. So I can’t even be jealous. I was very interested in all these virtual currencies in the late 90s. We started Paypal thinking about that, but at the end it was a payment system for existing fiat money. Somehow, that experience weirdly primed me to underestimate Bitcoin early on. It was on the radar in 2011 and there were people telling me I should buy it, and we didn’t really get involved until 2014. When you have experiences and you learn things, it is often very dangerous and my experience in the late 90s was that cryptocurrencies didn’t work. And it was largely correct, but you always have to be open to think about it.

(Those are just some of the questions and answers.)

Primates managed to keep most of their neurons the same size

Friday, August 10th, 2018

Eugène Dubois gathered the brain and body weights of several dozen animal species and calculated the mathematical rate at which brain size expands relative to body size:

Dubois reasoned that as body size increases, the brain must expand for reasons of neural housekeeping: Bigger animals should require more neurons just to keep up with the mounting chores of running a larger body. This increase in brain size would add nothing to intelligence, he believed. After all, a cow has a brain at least 200 times larger than a rat, but it doesn’t seem any smarter. But deviations from that mathematical line, Dubois thought, would reflect an animal’s intelligence. Species with bigger-than-predicted brains would be smarter than average, while those with smaller-than-predicted brains would be dumber. Dubois’s calculations suggested that his Java Man was indeed a smart cookie, with a relative brain size — and intelligence — that fell somewhere between modern humans and chimpanzees.

Dubois’s formula was later revised by other scientists, but his general approach, which came to be known as “allometric scaling,” persisted. More modern estimates have suggested that the mammalian brain mass increases by an exponent of two-thirds compared to body mass. So a dachshund, weighing roughly 27 times more than a squirrel, should have a brain about 9 times bigger — and in fact, it does. This concept of allometric scaling came to permeate the discussion of how brains relate to intelligence for the next hundred years.

Seeing this uniform relationship between body and brain mass, scientists developed a new measure called encephalization quotient (EQ). EQ is the ratio of a species’s actual brain mass to its predicted brain mass. It became a widely used shorthand for intelligence. As expected, humans led the pack with an EQ of 7.4 to 7.8, followed by other high achievers such as dolphins (about 5), chimpanzees (2.2 to 2.5), and squirrel monkeys (roughly 2.3). Dogs and cats fell in the middle of the pack, with EQs of around 1.0 to 1.2, while rats, rabbits, and oxen brought up the rear, with values of 0.4 to 0.5. This way of thinking about brains and intelligence has been “very, very dominant” for decades, says Evan MacLean, an evolutionary anthropologist at the University of Arizona in Tucson. “It’s sort of a fundamental insight.”

[Chart: Comparative EQ]
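As a quick sanity check on those numbers, here is a minimal Python sketch of the two-thirds scaling rule and the EQ ratio. The scaling constant and the example masses are illustrative placeholders, not figures from Dubois or the article:

```python
# Minimal sketch of two-thirds allometric scaling and the encephalization
# quotient (EQ). The constant k and the example masses below are
# illustrative placeholders, not values from Dubois or the article.

def predicted_brain_mass(body_mass_g, k=0.12, exponent=2/3):
    """Brain mass (g) expected for a mammal of the given body mass (g)."""
    return k * body_mass_g ** exponent

def encephalization_quotient(actual_brain_mass_g, body_mass_g):
    """EQ = actual brain mass / brain mass predicted from body mass alone."""
    return actual_brain_mass_g / predicted_brain_mass(body_mass_g)

# The dachshund-vs-squirrel example: a 27-fold increase in body mass
# predicts roughly a 9-fold increase in brain mass, since 27**(2/3) == 9.
ratio = predicted_brain_mass(27_000) / predicted_brain_mass(1_000)
print(f"27x heavier body -> {ratio:.1f}x heavier predicted brain")

# By construction, an animal with twice its predicted brain mass has EQ 2.
print(encephalization_quotient(2 * predicted_brain_mass(5_000), 5_000))
```

Note that the constant k cancels out of both the ratio and the EQ, which is why the exponent is the part that carries all the weight in these comparisons.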

A century later, Suzana Herculano-Houzel found a (gruesome) way to count neurons efficiently:

An entire rat brain contains about 200 million nerve cells.

She looked at brains from five other rodents, from the 40-gram mouse to the 48-kilogram capybara (the largest rodent in the world, native to Herculano-Houzel’s home country of Brazil). Her results revealed that as brains get larger and heavier from one species of rodent to another, the number of neurons grows more slowly than the mass of the brain itself: A capybara’s brain is 190 times larger than a mouse’s, but it has only 22 times as many neurons.

Then in 2006, Herculano-Houzel got her hands on the brains of six primate species during a visit with Jon Kaas, a brain scientist at Vanderbilt University in Nashville, Tennessee. And this is where things got even more interesting.

[...]

As the primate brain expands from one species to another, the number of neurons rises quickly enough to keep pace with the growing brain size. This means that the neurons aren’t ballooning in size and taking up more space, as they do in rodents. Instead, they stay compact. An owl monkey, with a brain twice as large as a marmoset, actually has twice as many neurons — whereas doubling the size of a rodent brain often yields only 20 to 30 percent more neurons. And a macaque monkey, with a brain 11 times larger than a marmoset, has 10 times as many nerve cells.
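The contrast is easy to see if you back out the implied scaling exponents from the ratios quoted above. A rough sketch in Python, using only the article’s round numbers, so treat the exponents as approximate:

```python
# If neuron count N scales with brain mass M as N ~ M**b, then the
# implied exponent is b = log(neuron ratio) / log(mass ratio).
# The ratios below are the round numbers quoted in the article.
from math import log

def implied_exponent(mass_ratio, neuron_ratio):
    return log(neuron_ratio) / log(mass_ratio)

# Rodents: a capybara brain is ~190x a mouse's, with only ~22x the neurons.
print(f"rodent exponent  ~ {implied_exponent(190, 22):.2f}")   # ~0.59

# Primates: a macaque brain is ~11x a marmoset's, with ~10x the neurons.
print(f"primate exponent ~ {implied_exponent(11, 10):.2f}")    # ~0.96
```

An exponent near 1 means neurons stay roughly the same size as the brain grows; an exponent well below 1 means each added gram of brain buys fewer and fewer neurons.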

[...]

The usual curse of an ever-expanding neuron size may stem from the basic fact that brains function as networks in which individual neurons send signals to one another. As brains get bigger, each nerve cell must stay connected with more and more other neurons. And in bigger brains, those other neurons are located farther and farther away.

[...]

A large rodent called an agouti has eight times as many cortical nerve cells as a mouse, while its white matter takes up an astonishing 77 times as much space. But a capuchin monkey, with eight times as many cortical neurons as a small primate called a galago, has only 11 times as much white matter.

[...]

Kaas thinks that primates managed to keep most of their neurons the same size by shifting the burden of long-distance communication onto a small subset of nerve cells. He points to microscopic studies showing that perhaps 1 percent of neurons do expand in big-brained primates: These are the neurons that gather information from huge numbers of nearby cells and send it to other neurons that are far away. Some of the axons that make these long-distance connections also get thicker; this allows time-sensitive information, such as a visual image of a rapidly moving predator, or prey, to reach its destination without delay. But less-urgent information — that is, most of it — is sent through slower, skinnier axons. So in primates, the average thickness of axons doesn’t increase, and less white matter is needed.

This pattern of keeping most connections local, and having only a few cells transmit information long-distance, had huge consequences for primate evolution. It didn’t merely allow primate brains to squeeze in more neurons. Kaas thinks that it also had a more profound effect: It actually changed how the brain does its work. Since most cells communicated only with nearby partners, these groups of neurons became cloistered into local neighborhoods. Neurons in each neighborhood worked on a specific task — and only the end result of that work was transmitted to other areas far away. In other words, the primate brain became more compartmentalized. And as these local areas increased in number, this organizational change allowed primates to evolve more and more cognitive abilities.

All mammal brains are divided into compartments, called “cortical areas,” that each contain a few million neurons. And each cortical area handles a specialized task: The visual system, for example, includes different areas for spotting the simple edges of shapes and for recognizing objects. Rodent brains don’t seem to become more compartmentalized as they get larger, says Kaas. Every rodent from the bite-sized mouse to the Doberman-sized capybara has about the same number of cortical areas — roughly 40. But primate brains are different. Small primates, such as galagos, have around 100 areas; marmosets have about 170, macaques about 270 — and humans around 360.