Black Lies Matter

October 19th, 2016

The Black Lives Matter movement is based on a lie, Heather Mac Donald argues:

Last year, the police shot 990 people, the vast majority armed or violently resisting arrest, according to the Washington Post’s database of fatal police shootings. Whites made up 49.9 percent of those victims; blacks, 26 percent. That proportion of black victims is lower than what the black violent crime rate would predict.

Blacks constituted 62 percent of all robbery defendants in America’s 75 largest counties in 2009, 57 percent of all murder defendants and 45 percent of all assault defendants, according to the Bureau of Justice Statistics, even though blacks comprise only 15 percent of the population in those counties.

In New York City, where blacks make up 23 percent of the city’s population, blacks commit three-quarters of all shootings and 70 percent of all robberies, according to victims and witnesses in their reports to the New York Police Department. Whites, by contrast, commit less than 2 percent of all shootings and 4 percent of all robberies, though they are nearly 34 percent of the city’s population.

In Chicago, 80 percent of all known murder suspects were black in 2015, as were 80 percent of all known nonfatal shooting suspects, though they are a little less than a third of the population. Whites made up 0.9 percent of known murder suspects in Chicago in 2015 and 1.4 percent of known nonfatal shooting suspects, though they are about a third of the city’s residents.

Such racially skewed crime ratios are repeated in virtually all American metropolises. They mean that when officers are called to the scene of a drive-by shooting or an armed robbery, they will overwhelmingly be summoned to minority neighborhoods, looking for minority suspects in aid of minority victims.

Gang shootings occur almost exclusively in minority areas. Police use of force is most likely in confrontations with violent and resisting criminals, and those confrontations happen disproportionately in minority communities.

You would never know it from the activists, but police shootings are responsible for a lower percentage of black homicide deaths than white and Hispanic homicide deaths. Twelve percent of all whites and Hispanics who die of homicide are killed by police officers, compared to 4 percent of black homicide victims.

That disparity is driven by the greatly elevated rates of criminal victimization in the black community. More than 6,000 blacks die each year from homicide, more than the homicide victims of all other races combined. Their killers are not the police, and not whites, but other blacks. In Chicago this year through Aug. 30, 2,870 people, mostly black, were shot.

If you believed the Black Lives Matter narrative, you would assume that the assailants of those black victims were in large part cops. In fact, the police shot 17 people, most of whom were threatening lethal force, accounting for 0.6 percent of the total.

Gun-related murders of officers are up 52 percent this year through Aug. 30 compared to last year.

Police critics have never answered the question of what they think non-biased policing data should look like, in light of the vast differences in rates of criminal offending. Blacks commit homicide at eight times the rate of whites and Hispanics combined. Black males between the ages of 14 and 17 commit gun homicide at nearly 10 times the rate of white and Hispanic male teens combined.

Should police stops, arrests and those rare instances of police shootings nevertheless mirror population ratios, rather than crime ratios?
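
Her Chicago arithmetic is easy to verify from the counts quoted above — a minimal check in Python, using only the two figures taken straight from the excerpt:

```python
# Figures quoted above: 2,870 people shot in Chicago through Aug. 30, 2016,
# 17 of them shot by police.
shot_total = 2870
shot_by_police = 17

share = shot_by_police / shot_total
print(f"{share:.2%}")  # 0.59% -- the "0.6 percent of the total" in the excerpt
```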

The Usual You-Go-Girl Fare

October 18th, 2016

The creators of Zootopia explain the original concept and the big story shift that turned the film upside down.

Steve Sailer suggests that it “started out culturally rebellious but then got throttled by the test marketers and executives into the usual You-Go-Girl fare.”

Instructional Videos

October 18th, 2016

Instructional videos are popular and effective because we’re designed to learn through imitation:

Last year, it was estimated that YouTube was home to more than 135 million how-to videos. In a 2008 survey, “instructional videos” were ranked as the site’s third most popular content category — albeit a “distant third” behind “performance and exhibition” and “activism and outreach.” More recent data suggest that distance may have closed: In 2015, Google noted that “how to” searches on YouTube were increasing 70 percent annually. The genre is by now so mature that it makes for easy satire.


A 2014 study showed that when a group of marmosets were presented with an experimental “fruit” apparatus, most of those that watched a video of marmosets successfully opening it were able to replicate the task. They had, in effect, watched a “how to” video. Of the 12 marmosets who managed to open the box, just one figured it out sans video (in the human world, he might be the one making YouTube videos).


“We are built to observe,” as Proteau tells me. There is, in the brain, a host of regions that come together under a name that seems to describe YouTube itself, called the action-observation network. “If you’re looking at someone performing a task,” Proteau says, “you’re in fact activating a bunch of neurons that will be required when you perform the task. That’s why it’s so effective to do observation.”


This ability to learn socially, through mere observation, is most pronounced in humans. In experiments, human children have been shown to “over-imitate” the problem-solving actions of a demonstrator, even when superfluous steps are included (chimps, by contrast, tend to ignore these). Susan Blackmore, author of The Meme Machine, puts it this way: “Humans are fundamentally unique not because they are especially clever, not just because they have big brains or language, but because they are capable of extensive and generalised imitation.” In some sense, YouTube is catnip for our social brains. We can watch each other all day, every day, and in many cases it doesn’t matter much that there’s not a living creature involved. According to Proteau’s research, learning efficiency is unaffected, at least for simple motor skills, by whether the model being imitated is live or presented on video.

There are ways to learn better from videos:

The first has to do with intention. “You need to want to learn,” Proteau says. “If you do not want to learn, then observation is just like watching a lot of basketball on the tube. That will not make you a great free throw shooter.” Indeed, as Emily Cross, a professor of cognitive neuroscience at Bangor University, told me, there is evidence — based on studies of people trying to learn to dance or tie knots (two subjects well covered by YouTube videos) — that the action-observation network is “more strongly engaged when you’re watching to learn, as opposed to just passively spectating.” In one study, participants in an fMRI scanner asked to watch a task being performed with the goal of learning how to do it showed greater brain activity in the parietofrontal mirror system, cerebellum and hippocampus than those simply being asked to watch it. And one region, the pre-SMA (for “supplementary motor area”), thought to be linked with the “internal generation of complex movements,” was activated only in the learning condition — as if, knowing they were going to have to execute the task themselves, participants began internally rehearsing it.

It also helps to arrange for the kind of feedback that makes a real classroom work so well. If you were trying to learn one of Beyoncé’s dance routines, for example, Cross suggests using a mirror, “to see if you’re getting it right.” When trying to learn something in which we do not have direct visual access to how well we are doing — like a tennis serve or a golf swing — learning by YouTube may be less effective.


The final piece of advice is to look at both experts and amateurs. Work by Proteau and others has shown that subjects seemed to learn sample tasks more effectively when they were shown videos of both experts performing the task effortlessly, and the error-filled efforts of novices (as opposed to simply watching experts or novices alone). It may be, Proteau suggests, that in the “mixed” model, we learn what to strive for as well as what to avoid.

Blade Runner’s Uplifting Ending

October 17th, 2016

Ridley Scott discusses his way of working — and drops a fun bit of trivia about Blade Runner at the end.

Crowds and Technology

October 17th, 2016

Mobs, demagogues, and populist movements are obviously not new:

What is new and interesting is how social media has transformed age-old crowd behaviors. In the past decade, we’ve built tools that have reconfigured the traditional, centuries-old relationship between crowds and power, transforming what used to be sporadic, spontaneous, and transient phenomena into permanent features of the social landscape. The most important thing about digitally transformed crowds is this: unlike IRL crowds, they can persist indefinitely. And this changes everything.


To translate Canetti’s main observations to digital environments:

  1. The crowd always wants to grow — and always can, unfettered by physical limitations
  2. Within the crowd there is equality — but higher levels of deception, suspicion, and manipulation
  3. The crowd loves density — and digital identities can be more closely packed
  4. The crowd needs a direction — and clickbait makes directions cheap to manufacture

Translating Eric Hoffer’s ideas to digital environments is even simpler: the Internet is practically designed to enable the formation of self-serving patterns of “true belief.”

If We Want to Restore Balance

October 16th, 2016

Irrational optimism works, but Spandrell’s not very good at it, so he has been thinking about how to generate it exogenously:

The main objection people raise is that you can’t just make up a new religion. That’s a good point. It’s also a bummer, given that my shtick for 5 years has been that We Need a New Religion (see 1, 2, 3, 4, 5). But once you understand what religion is about, what it is for, it’s obvious that you can’t just make one up from thin air. Any coordination mechanism for groups, any set of ideas to generate loyalty, is more likely to work if it feeds upon previous ideas which are already out there, preferably ones of long standing. If only to make people not feel inadequate about their past ideological stances. If you want Christians to join your group, you should make them feel good about having been a Christian, at least about parts of it. Ever read the Quran? The writer was very, very familiar with Christianity and Judaism. Christianity was of course also based on Judaism. And Judaism on age-old tribal traditions of the Hebrew tribes. Hardly any religion has ever been produced ex nihilo. Japan tried to make a religion out of the (purported) tribal traditions of the Japanese people, but it just couldn’t beat centuries of Buddhist faith.

It follows that the solution would be to come up with a slightly modified version of Christianity. It would make it easier to get our natural allies on the right side of the Christian community to join the institution of a reactionary society. The problem is, as many correctly argue in the comments at Jim’s, that Christianity is a leftist cult. The teachings of Jesus are pure and simple leftist agitation. The rich go to hell. The poor will inherit the earth. Prostitutes are as noble as any of you. If some white guy wrote a Medium long-form post talking about his experiences touching and healing lepers, we would all call him a holier-than-thou virtue signaller.


What made Christianity so successful?

Well, first of all, Christianity wasn’t successful everywhere. It certainly was in Europe. But not in the Middle East. Islam surely beat it there. And the few Christian communities that remained from antiquity until the 2003 Iraq War weren’t anything to write home about.

It seems to me that Christianity, as a mildly leftist, i.e. socialist and feminist, cult, had an important role to play in the ancient and medieval world. Especially the medieval world, where barbarians roamed Europe at will. The world of a barbarian is the complete opposite of a modern one. Barbarians are manly. Very much so. There’s this Jack Donovan guy pulling a Yukio Mishima and translating his gayness into poetry about how cool the barbarian Way of Man is, how awesome the men it produces are. Which it is. We all love Conan. It’s cool. It looks like tons of fun.

It’s still messed up in many ways. In modern parlance, the barbarian world is a world of toxic masculinity. It’s a world where men do whatever the hell they want. In my parlance, it’s a world of bro signalling spirals. Which is a lot of fun for men. But it produces pretty crappy societies. It’s stupidly violent. It despises menial, boring work. It despises family life for the pursuit of vainglory and pussy. It’s nasty, brutish and short. That’s what you get when men do what they feel like.

In that kind of world, having Christian institutions try to get men to stop hunting for a while and just fucking till the land and feed their children is actually a pretty good idea. Shaming a man into sticking with his ugly and nagging wife even though she’s a total bitch is a pretty good idea if you want children to survive and a food surplus to be grown. Getting elite men to not shoot each other over stupid slights, to not drink too much and moderate their appetites, to not spend their inheritance on women and parties… was pretty much hopeless for the most part. But to the extent it succeeded it had a civilizing effect.

So to speak in modern terms, if you have a society which is, due to its historical background or its technological level, naturally shifted to the right, having a pole of lefty ideas produces a pretty healthy balance, one where men get a bit of what they want, women get a bit of what they want, and we’re all better off thanks to it.

That’s obviously not what we have today. The situation in 2016 is one where feminism is the law of the land, men doing what men do by nature (cf. Trump) is illegal and strictly punished, and every single institution with some power just pushes the same leftist ideas. Women are better, open borders are good, everybody has the right to organize and fight for their selfish interests except white men. In these circumstances, if we want to restore some balance, if we want civilization to work, we need the complete opposite of what Christianity was. We need a big fat magnet of rightist ideas, a rightist pole to exert the same influence on our feminized society that Christianity had on the manly society of the Middle Ages.

It seems to me that Christianity can’t possibly be that. What could be? Your guess is as good as mine. If you’ve been reading this blog you probably know one answer. But again I like it as little as you do. For all purposes I’m still for a New Religion.

Albion’s Ashes

October 16th, 2016

J.D. Vance’s Hillbilly Elegy is not a tale of economic privation among the Kentucky Scots-Irish exodus:

It is closer to the opposite: His Kentucky-exile grandparents are secure and prosperous in spite of their own humble origins and a long period of alcohol-fueled domestic strife; they own a nice, four-bedroom home and drive new high-end cars — convertibles, even. Growing up in a small town in Ohio in the 1990s, Vance lived in a household with an annual income exceeding $100,000, or the equivalent of about $175,000 a year in today’s dollars. He had a close-knit extended family, including a grandmother who read to him and a grandfather who helped him get ahead of the other children in math, which served him well: After college and law school — at Yale — Vance went on to become the principal of a Silicon Valley investment firm. He is 31 years old.

His family was indeed miserable, but theirs wasn’t the misery of poverty and privation. It was the misery of people determined to be miserable at any price. The great American bounty was wheeled out for their enjoyment like room service at the Ritz-Carlton, and they decided they preferred Wendy’s and Night Train and OxyContin and desultory sex with strangers from bars.

Nothing happened to them — they happened.

The main difference between Vance and his unhappy forebears with their Byzantine marital histories and “Mountain Dew mouth” — exactly what it sounds like — is that he had the good sense to say yes to the happiness that was offered him.

What’s interesting about his story — his only real excuse for writing a memoir, in fact — is that he almost said no, and that he is one of those unusual men who actually understands the decisions he has made, why and how he made them, and the effects they have had.

Vance was saved by the intervention of certain “loving people”:

That is not usually how one hears Marine drill instructors described.

Vance had the good sense to delay college and enlist in the Marine Corps instead. And the Marine Corps is one of the few remaining American institutions that delivers more or less exactly as advertised. Vance entered boot camp pudgy, disorganized, immature, and lacking in confidence. He left it harder, wiser, and more capable. His account of his time in the Marines is in fact one of the most interesting sections of the book, and the one that points to both the promise and the shortcomings of public-policy interventions to counter the dysfunction of the white underclass. As Vance puts it, the Marines take in new recruits under an assumption of maximum ignorance, i.e., that they do not know the basics of anything, from personal hygiene to keeping a schedule. The Marine Corps interferes in Vance’s life in intensely invasive and personal ways: When he decides he needs to buy a car, an older Marine is dispatched to make sure he doesn’t buy something stupid and stops him from signing a high-interest financing contract with the dealer, steering him instead toward a much better deal available through the Marines’ credit union.

The man who did not know how to handle automotive financing works in finance today. By his own account, he did not know that “finance” was an industry and a career option until well into his college education. Things like how to dress for a job interview and how to conduct himself at a business dinner — he’s flummoxed to learn that there’s more than one kind of white wine — simply were not within his experience.

That sort of thing is awkward, and there are tens of millions of Americans who have had such fish-out-of-water experiences on their way up. The truth is, our schools and other institutions do a pretty good job of identifying the J.D. Vances of the world, thanks in no small part to standardized testing, though of course committed and engaged teachers play an indispensable role, too. But consider what it took to turn Vance’s life around and get him ready for Ohio State and Yale. Short of universal or near-universal military conscription — something that would be resisted both by the public and by the military, which is still resisting the politicians’ efforts to transform it entirely into a social-services agency — what policy options do we have to intervene in the lives of young men and women who come from backgrounds like Vance’s, but who are even worse off in both economic and social-capital terms, and who do not have the innate intelligence to cut it in Silicon Valley or who lack comparable skills and talents? We know what to do about poor kids with IQs of 120 — what about the ones with IQs of 100? What about those with IQs of 90?

See What I Did There?

October 15th, 2016

Alan Moore progresses from eccentric comic-book writer to insane novel writer with his latest work, Jerusalem:

Like Joyce’s “Ulysses,” “Jerusalem” largely hinges on the events of a single day (in this case May 26, 2006) and a particular place: the Boroughs, the depressed neighborhood in Northampton where Moore grew up. (The Jerusalem of the title is the metaphorical one William Blake imagined building “in England’s green and pleasant land.”) As with “Ulysses,” Moore shifts his narrative technique and point of view from chapter to chapter. And, as with “Ulysses,” no detail, however minute, is purely decorative; it’s all part of the mammoth Rube Goldberg machinery, including an actual mammoth (or, rather, its ghost) that sets the story’s denouement into motion.

The equivalent of Stephen Dedalus here — Moore’s stand-in — is a painter in her 50s named Alma Warren (her name is a clear play on the author’s), who comes from a long line of artists, lunatics and “deathmongers,” that being a Northampton tradition of midwife/morticians. The moment during which the characters and their actions converge is the eve of Alma’s opening reception for a series of paintings inspired by her brother’s recollections of a near-death experience from when he choked on a cough drop at the age of 3. But then there’s also a chapter concerning the then-unknown Charlie Chaplin’s experiences in Northampton in 1909, and one in which a Christian pilgrim brings a relic to “Hamtun” (as it was then called) in 810, and one about how Alma’s great-great-grandfather lost his mind in 1865 when the fresco he was repairing in St. Paul’s Cathedral started talking to him, and so forth.

That’s all to prime the reader for the central third of “Jerusalem,” which takes place above time itself, in “Mansoul” (as in John Bunyan’s allegory “The Holy War”), where “The Dead Dead Gang,” a crew of ghostly children led by a girl in a cape made of decomposing rabbits, are having adventures and investigating mysteries. (Their Northampton accents are augmented by “wiz” and “wizzle,” the afterlife’s conflation of “was,” “is” and “will be.”) One advantage of being dead, it turns out, is that you can perceive space-time from the outside, as when the gang encounters the Platonic form of a Northampton landmark:

“The Guildhall, the Gilhalda of Mansoul, was an immense and skyscraping confection of warm-colored stone, completely overgrown with statues, carven tableaux and heraldic crests. It was as if an architecture-bomb had gone off in slow motion, with countless historic forms exploding out of nothingness and into solid granite. Saints and Lionhearts and poets and dead queens looked down on them through the blind pebbles of their emery-smoothed eyes and up above it all, tall as a lighthouse, were the sculpted contours of the Master Builder, Mighty Mike, the local champion.” (That would be the Archangel Michael, who is engaged in an eternal metaphysical snooker tournament that determines the fates of the city’s residents.)

Read that passage out loud, and you can’t miss its galumphing iambic rhythm. Moore, in fact, keeps that meter running for the entire length of the novel, and that’s just where his acrobatic wordplay begins. One chapter takes the form of rhymed stanzas. Another is blank verse, run together into paragraphs but pausing for breath every 10 syllables. A third is a play whose central seam is a conversation between Thomas Becket and Samuel Beckett.

The novel’s most difficult and wittiest chapter is written in a convincing pastiche of Joyce’s portmanteau-mad language from “Finnegans Wake,” and concerns Joyce’s daughter, Lucia, who spent her final decades in a Northampton mental hospital. At one point, the malign spirit of the River Nene tries to persuade her to drown herself: “It is a ferry splashionable wayter go, I’m trold, for laydies of o blitterary inclinocean. But then fameills of that sport are oftun willd, vergin’ near wolf, quereas with you there’s fomething vichy gugling on.” (Note the allusion to Virginia Woolf, who did drown herself.) Lucia declines, and goes on to encounter Dusty Springfield (“Dust’ny Singfeeld”), with whom she has sex while Number 6 from “The Prisoner” looks on. Yes, this is relevant to the plot, more or less.

Books this forbiddingly steep need to be entertaining in multiple ways to make them worth the climb, and Moore keeps lobbing treats to urge his readers onward: luscious turns of phrase, unexpected callbacks and internal links, philosophical digressions, Dad jokes, fantastical inventions like the flower resembling a cluster of fairies — the “Puck’s Hat” or “Bedlam Jenny” — that is the only food the dead can eat. Those who have read Moore’s comics will recognize some of his favorite themes too. Snowy Vernall, who experiences his life as predestined, is in the same boat as Dr. Manhattan from “Watchmen”; there’s a strain of Ripperology left over from “From Hell”; the demon Asmodeus, who appeared in “Promethea,” plays a prominent role here in a different guise.

If cleverness were all that mattered, “Jerusalem” would be everything. Its pyrotechnics never let up, and Moore never stops calling attention to them. Again and again, he threatens to crash into the slough of See What I Did There?, then comes up with another idea so clever he pulls out of the dive. (When the book, in its homestretch, hasn’t yet demonstrated much of a connection to William Blake, Alma Warren effectively engages a detective to work one out, in the person of the real-world actor Robert Goodman jokingly pretending to be a private eye called “Studs.”) The only way to endure “Jerusalem” is to surrender to its excesses — its compulsion to outdo any challenger in its lushness of language, grandness of scope, sheer monomaniacal duration — and confess it really is as ingenious as it purports to be.

What redeems the relentless spectacle, though, is that it’s in the service of a passionate argument. Behind all the formalism and eccentric virtuosity, there’s personal history from a writer who has rarely put himself into his own fiction before: the family legends and tragedies that Moore has blown up to mythical size to preserve them from the void, and the streets and buildings, lost and soon to be lost, whose every cracked stone is holy to him. Northampton, Moore suggests, is the center of all meaning, because so is every other place.

The Ending of the Liberal Interregnum

October 15th, 2016

Razib Khan shares a talk from Alice Dreger, author of Galileo’s Middle Finger: Heretics, Activists, and One Scholar’s Search for Justice, and notes a passage where she waxes eloquent about the Enlightenment and freedom of thought:

At a certain point the cultural Left no longer made any pretense to being liberal, and transformed themselves into “progressives.” They have taken Marcuse’s thesis in Repressive Tolerance to heart.

Though I hope that Dreger and her fellow travelers succeed in rolling back the clock, I suspect that the battle here is lost. She points out, correctly, that the total politicization of academia will destroy its existence as a producer of truth in any independent and objective manner. More concretely, she suggests it is likely that conservatives will simply start to defund and direct higher education even more stridently than they do now, because they will correctly see higher education as purely a tool toward the politics of their antagonists. I happen to be a conservative, and one who is pessimistic about the persistence of a public liberal space for ideas that offend. If progressives give up on liberalism of ideas, and it seems that many are (the most famous defenders of the old ideals are people from earlier generations, such as Nadine Strossen and Wendy Kaminer, with Dreger being a young example), I can’t see those of us in the broadly libertarian wing of conservatism making the last stand alone.

Honestly, I don’t want any of my children learning “liberal arts” from the high priests of the post-colonial cult. In the near future the last resistance on the Left to the ascendancy of identity politics will probably be extinguished, as the old guard retires and dies naturally. The battle will be lost. Conservatives who value learning, and intellectual discourse, need to regroup. Currently there is a populist mood in conservatism that has been cresting for a generation. But the wave of identity politics is likely to swallow the campus Left with its intellectual nihilism. Instead of expanding outward it is almost certain that academia will start cannibalizing itself in internecine conflict when all the old enemies have been vanquished.

Let the private universities, such as Oberlin, wallow in their identity politics contradictions. Dreger already points to the path we will probably have to take: gut the public universities even more than we have. Leave STEM and some professional schools intact, and transform them for all practical purposes into technical universities. All the other disciplines? Some private universities, the playgrounds of the rich and successful, will continue to be traditionalist in maintaining “liberal arts,” which properly parrot the latest post-colonial cant. But much learning will be privatized, and knowledge will spread through segregated “safe spaces.” Those of us who read and think will continue to read and think, like we always have. We just won’t have institutional backing, because there’s not going to be a societal consensus for such support.

I hope I’m wrong.

He shares two more conclusions in a comment:

It’s getting worse, not better, and it’s not about tenure or money. It’s about social sanction and approval. So, two sad conclusions:

1) Truth can only move in hidden channels now if it conflicts with power. No one gives a shit if you appeal to truth; they know that it has no intrinsic value except in the service of status and power. I admire Heterodox Academy, but part of me wonders if they’d be better served by being stealthy and just creating a secret society that doesn’t put the academy on notice that some people know that reality is different from the official narratives.

2) The post-modernists are right to a first approximation: everything is power. So “we” have to capture and crush; it’s only victory or defeat. The odds are irrelevant. I put we in quotes because it doesn’t matter who you are, the game is on, whether you think you are a player or not.

Open data and crowd-sourcing mean that a whole ecosystem of knowledge can emerge that doesn’t need to be nakedly exposed and put people’s livelihoods and reputations at risk from the kommissars.

Some of my friends have argued this for a long time, and I resisted because I’m a liberal in the old sense. But reality is reality, and the fact is that no one wants the truth, and they’ll destroy you to deny it.

For every Alice Dreger there are 1,000 who support her. But they’ll stand aside while the 100 tear her to shreds, and talk sadly amongst themselves about what happened to her career…

Todd Orr, Bear Attack Survivor

October 14th, 2016

When Todd Orr’s post-bear-attack video went viral, I had no idea he was competitive shooter Mike Seeklander’s cousin — until Mike interviewed him.

Chuck Yeager Describes How He Broke The Sound Barrier

October 14th, 2016

Chuck Yeager describes how he broke the sound barrier:

Everything was set inside X-1 as Cardenas started the countdown. Frost assumed his position and the mighty crack from the cable release hurled the X-1 into the abyss. I fired chamber No. 4, then No. 2, then shut off No. 4 and fired No. 3, then shut off No. 2 and fired No. 1. The X-1 began racing toward the heavens, leaving the B-29 and the P-80 far behind. I then ignited chambers No. 2 and No. 4, and under a full 6000 pounds of thrust, the little rocket plane accelerated instantly, leaving a contrail of fire and exhaust. From .83 Mach to .92 Mach, I was busily engaged testing stabilizer effectiveness. The rudder and elevator lost their grip on the thinning air, but the stabilizer still proved effective, even as speed increased to .95 Mach. At 35,000 ft., I shut down two of the chambers and continued to climb on the remaining two. We were really hauling! I was excited and pleased, but the flight report I later filed maintained that outward cool: “With the stabilizer setting at 2 degrees, the speed was allowed to increase to approximately .95 to .96 Mach number. The airplane was allowed to continue to accelerate until an indication of .965 on the cockpit Machmeter was obtained. At this indication, the meter momentarily stopped and then jumped up to 1.06, and the hesitation was assumed to be caused by the effect of shock waves on the static source.”

I had flown at supersonic speeds for 18 seconds. There was no buffet, no jolt, no shock. Above all, no brick wall to smash into. I was alive.

And although it was never entered in the pilot report, the casualness of invading a piece of space no man had ever visited was best reflected in the radio chatter. I had to tell somebody, anybody, that we’d busted straight through the sound barrier. But transmissions were restricted. “Hey Ridley!” I called. “Make another note. There’s something wrong with this Machmeter. It’s gone completely screwy!”

“If it is, we’ll fix it,” Ridley replied, catching my drift. “But personally, I think you’re seeing things.”

The Deep Roots of Prosperity

October 14th, 2016

Today’s rich countries tend to be in East Asia, Northern and Western Europe — or are heavily populated by people who came from those two regions:

The major exceptions are oil-rich countries. East Asia and Northwest Europe are precisely the areas of the world that made the biggest technological advances over the past few hundred years. These two regions experienced “civilization,” an ill-defined but unmistakable combination of urban living, elite prosperity, literary culture, and sophisticated technology. Civilization doesn’t mean kindness, it doesn’t mean respect for modern human rights: It means the frontier of human artistic and technological achievement. And over the extremely long run, a good predictor of your nation’s current economic behavior is your nation’s ancestors’ past behavior. Exceptions exist, but so does the rule.

Recently, a small group of economists have found more systematic evidence on how the past predicts the present. Overall, they find that where your nation’s citizens come from matters a lot. From “How deep are the roots of economic development?” published in the prestigious Journal of Economic Literature:

A growing body of new empirical work focuses on the measurement and estimation of the effects of historical variables on contemporary income by explicitly taking into account the ancestral composition of current populations. The evidence suggests that economic development is affected by traits that have been transmitted across generations over the very long run.

From “Was the Wealth of Nations determined in 1000 B.C.?” (coauthored by the legendary William Easterly):

[W]e are measuring the association of the place’s technology today with the technology in 1500 AD of the places from where the ancestors of the current population came from…[W]e strongly confirm…that history of peoples matters more than history of places.

And finally, from “Post-1500 Population Flows and the Economic Determinants of Economic Growth and Inequality,” published in Harvard’s Quarterly Journal of Economics:

The positive effect of ancestry-adjusted early development on current income is robust…The most likely explanation for this finding is that people whose ancestors were living in countries that developed earlier (in the sense of implementing agriculture or creating organized states) brought with them some advantage—such as human capital, knowledge, culture, or institutions—that raises the level of income today.

To sum up some of the key findings of this new empirical literature: There are three major long-run predictors of a nation’s current prosperity, which combine to make up a nation’s SAT score:

S: How long ago the nation’s ancestors lived under an organized state.

A: How long ago the nation’s ancestors began to use Neolithic agriculture techniques.

T: How much of the world’s available technology the nation’s ancestors were using in 1000 B.C., 0 A.D., or 1500 A.D.

When estimating each nation’s current SAT score, it’s important to adjust for migration: Indeed, all three of these papers do some version of that. For instance, without adjusting for migration, Australia has quite a low ancestral technology score: Aboriginal Australians used little of the world’s cutting edge technology in 1500 A.D. But since Australia is now overwhelmingly populated by the descendants of British migrants, Australia’s migration-adjusted technology score is currently quite high.

On average, nations with high migration-adjusted SAT scores are vastly richer than nations with lower SAT scores: Countries in the top 10% of migration-adjusted technology (T) in 1500 are typically at least 10 times richer than countries in the bottom 10%. If instead you mistakenly tried to predict a country’s income today based on who lived there in 1500, the relationship would only be about one-third that size. The migration adjustment matters crucially: Whether in the New World, across Southeast Asia, or in Southern Africa, one can do a better job predicting today’s prosperity when you keep track of who moved where. It looks like at least in the distant past, migrants shaped today’s prosperity.
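
The migration adjustment described above is simple to make concrete: a country’s ancestry-adjusted score is a population-share-weighted average of its source populations’ historical scores. A minimal sketch in Python — the shares and scores below are invented for illustration, not taken from the papers:

```python
def ancestry_adjusted_score(ancestry_shares, source_scores):
    """Population-share-weighted average of the historical scores of the
    source populations that make up a country's current population."""
    assert abs(sum(ancestry_shares.values()) - 1.0) < 1e-9
    return sum(share * source_scores[pop] for pop, share in ancestry_shares.items())

# Hypothetical illustration in the spirit of the Australia example above
# (all numbers invented, on a 0-1 technology-use scale for 1500 A.D.):
source_scores = {"British": 0.9, "Aboriginal Australian": 0.1}

unadjusted = source_scores["Aboriginal Australian"]    # who lived there in 1500
adjusted = ancestry_adjusted_score(
    {"British": 0.95, "Aboriginal Australian": 0.05},  # who lives there now
    source_scores,
)
print(unadjusted, adjusted)  # 0.1 vs. ~0.86 -- the adjustment flips the prediction
```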

Wealth, Health, and Child Development

October 13th, 2016

Swedish researchers looked at wealth, health, and child development by studying lottery players:

We use administrative data on Swedish lottery players to estimate the causal impact of substantial wealth shocks on players’ own health and their children’s health and developmental outcomes. Our estimation sample is large, virtually free of attrition, and allows us to control for the factors conditional on which the prizes were randomly assigned.

In adults, we find no evidence that wealth impacts mortality or health care utilization, with the possible exception of a small reduction in the consumption of mental health drugs. Our estimates allow us to rule out effects on 10-year mortality one sixth as large as the cross-sectional wealth-mortality gradient.

In our intergenerational analyses, we find that wealth increases children’s health care utilization in the years following the lottery and may also reduce obesity risk. The effects on most other child outcomes, including drug consumption, scholastic performance, and skills, can usually be bounded to a tight interval around zero.

Overall, our findings suggest that in affluent countries with extensive social safety nets, causal effects of wealth are not a major source of the wealth-mortality gradients, nor of the observed relationships between child developmental outcomes and household income.
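
The design lends itself to a transparent regression: conditional on the factors that determined the draw, prize size is as good as randomly assigned. Here is a minimal sketch of that identification idea on simulated data — the variable names, distributions, and “cell” structure are all invented, not the authors’ code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 10_000

df = pd.DataFrame({
    # Stratum within which prizes were effectively randomly assigned
    # (e.g. number of tickets held)
    "cell": rng.integers(0, 50, size=n),
    # Prize size in millions of SEK (toy distribution)
    "prize": rng.exponential(scale=1.0, size=n),
})
# Simulate the headline result: no causal wealth effect on adult health
df["health_index"] = rng.standard_normal(n)

# Within-cell comparison: regress the outcome on prize with cell fixed effects
fit = smf.ols("health_index ~ prize + C(cell)", data=df).fit(cov_type="HC1")
print(fit.params["prize"], fit.bse["prize"])  # estimate near zero, as in the study
```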

Insulin and Alzheimer’s

October 13th, 2016

Insulin resistance may be a powerful force in the development of Alzheimer’s Disease:

In the body, one of insulin’s responsibilities is to unlock muscle and fat cells so they can absorb glucose from the bloodstream. When you eat something sweet or starchy that causes your blood sugar to spike, the pancreas releases insulin to usher the excess glucose out of the bloodstream and into cells. If blood sugar and insulin spike too high too often, cells will try to protect themselves from overexposure to insulin’s powerful effects by toning down their response to insulin — they become “insulin resistant.” In an effort to overcome this resistance, the pancreas releases even more insulin into the blood to try to keep glucose moving into cells. The more insulin levels rise, the more insulin resistant cells become. Over time, this vicious cycle can lead to persistently elevated blood glucose levels, or type 2 diabetes.

In the brain, it’s a different story. The brain is an energy hog that demands a constant supply of glucose. Glucose can freely leave the bloodstream, waltz across the blood-brain barrier, and even enter most brain cells — no insulin required. In fact, the level of glucose in the cerebrospinal fluid surrounding your brain is always about 60% as high as the level of glucose in your bloodstream — even if you have insulin resistance — so, the higher your blood sugar, the higher your brain sugar.

Not so with insulin — the higher your blood insulin levels, the more difficult it can become for insulin to penetrate the brain. This is because the receptors responsible for escorting insulin across the blood-brain barrier can become resistant to insulin, restricting the amount of insulin allowed into the brain. While most brain cells don’t require insulin in order to absorb glucose, they do require insulin in order to process glucose. Cells must have access to adequate insulin or they can’t transform glucose into the vital cellular components and energy they need to thrive.

Despite swimming in a sea of glucose, brain cells in people with insulin resistance literally begin starving to death.

Which brain cells go first? The hippocampus is the brain’s memory center. Hippocampal cells require so much energy to do their important work that they often need extra boosts of glucose. While insulin is not required to let a normal amount of glucose into the hippocampus, these special glucose surges do require insulin, making the hippocampus particularly sensitive to insulin deficits. This explains why declining memory is one of the earliest signs of Alzheimer’s, despite the fact that Alzheimer’s Disease eventually destroys the whole brain.
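
The “vicious cycle” in the first paragraph of that excerpt is a compounding feedback loop, which a toy loop can make concrete. This is purely illustrative — the constants and update rule are invented, not a physiological model:

```python
# Toy version of the cycle: each glucose spike triggers an insulin surge scaled
# by current resistance, and each oversized surge blunts the response further.
resistance = 1.0               # 1.0 = normal insulin sensitivity (invented units)
for spike in range(1, 9):      # a run of high-glycemic meals
    insulin = 10 * resistance  # pancreas compensates for resistance
    resistance *= 1.05         # chronic insulin exposure raises resistance
    print(f"spike {spike}: insulin response {insulin:5.1f}, resistance {resistance:4.2f}")
```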

Can War Foster Cooperation?

October 12th, 2016

Can war foster cooperation? Of course it can:

In the past decade, nearly 20 studies have found a strong, persistent pattern in surveys and behavioral experiments from over 40 countries: individual exposure to war violence tends to increase social cooperation at the local level, including community participation and prosocial behavior. Thus while war has many negative legacies for individuals and societies, it appears to leave a positive legacy in terms of local cooperation and civic engagement. We discuss, synthesize and reanalyze the emerging body of evidence, and weigh alternative explanations. There is some indication that war violence especially enhances in-group or “parochial” norms and preferences, a finding that, if true, suggests that the rising social cohesion we document need not promote broader peace.

Hat tip to Tyler Cowen, who adds:

That is an all-star line-up of authors, and no this doesn’t mean any of those individuals are in favor of war. That would be the fallacy of mood affiliation, and we all know that MR readers never commit the fallacy of mood affiliation…