Real potential benefits without being a panacea

February 11th, 2017

The empiricists’ anti-charter arguments that were trotted out against Betsy DeVos weren’t particularly empirical, Ross Douthat notes:

There’s no evidence that DeVos-backed charters actually visited disaster on Detroit’s students. Instead, the very studies that get cited to critique her efforts actually show the city’s charters modestly outperforming public schools.

That “modestly” is important, because it tracks with much of what we know about school choice in general — that it offers real potential benefits without being a panacea. Decades of experiments suggest that choice can save money, improve outcomes for very poor kids whose public options are disastrous, and increase parental satisfaction. (The last is no small thing!) But the available evidence also suggests that choice alone won’t revolutionize schools or turn slow learners into geniuses, that the clearest success stories are hard to replicate, and some experiments in privatization (like Louisiana’s recent voucher push) can badly disappoint.

So in DeVos, we have an education secretary who perhaps errs a little too much on the side of choice-as-panacea, overseeing (with limited powers) an American education bureaucracy that pretty obviously errs the other way. And wherever you come down on striking the right balance, it’s hard to see this situation as empirically deserving the level of political controversy that’s attached to it.

What happened when the U.S. got rid of guest workers?

February 10th, 2017

What happened when the U.S. got rid of guest workers?

A team of economists looked at the mid-century “bracero” program, which allowed nearly half a million seasonal farm workers per year into the U.S. from Mexico. The Johnson administration terminated the program in 1964, creating a large-scale natural experiment in labor supply and demand.

The result wasn’t good news for American workers. Instead of hiring more native-born Americans at higher wages, farmers automated, changed crops or reduced production.

What Steve Bannon Wants You to Read

February 10th, 2017

I wasn’t familiar with Steve Bannon, the White House’s chief strategist, before the Trump campaign, but his reading list feels awfully familiar:

Bannon, described by one associate as “the most well-read person in Washington,” is known for recommending books to colleagues and friends, according to multiple people who have worked alongside him. He is a voracious reader who devours works of history and political theory “in like an hour,” said a former associate whom Bannon urged to read Sun Tzu’s The Art of War. “He’s like the Rain Man of nationalism.”

[...]

Bannon’s 2015 documentary, “Generation Zero,” drew heavily on one of his favorite books, “The Fourth Turning” by William Strauss and Neil Howe. The book explains a theory of history unfolding in 80- to 100-year cycles, or “turnings,” the fourth and final stage of which is marked by periods of cataclysmic change in which the old order is destroyed and replaced—a current period that, in Bannon’s view, was sparked by the 2008 financial crisis and has now been manifested in part by the rise of Trump.

[...]

Many political onlookers described Trump’s election as a “black swan” event: unexpected but enormously consequential. The term was popularized by Nassim Taleb, the best-selling author whose 2014 book Antifragile—which has been read and circulated by Bannon and his aides—reads like a user’s guide to the Trump insurgency.

It’s a broadside against big government, which Taleb faults for suppressing the randomness, volatility and stress that keeps institutions and people healthy. “As with neurotically overprotective parents, those who are trying to help us are hurting us the most,” he writes. Taleb also offers a withering critique of global elites, whom he describes as a corrupt class of risk-averse insiders immune to the consequences of their actions: “We are witnessing the rise of a new class of inverse heroes, that is, bureaucrats, bankers, Davos-attending members of the I.A.N.D (International Association of Name Droppers), and academics with too much power and no real downside and/or accountability. They game the system while citizens pay the price.”

[...]

Curtis Yarvin, the self-proclaimed “neoreactionary” who blogs under the name “Mencius Moldbug,” attracted a following in 2008 when he published a wordy treatise asserting, among other things, that “nonsense is a more effective organizing tool than the truth.” When the organizer of a computer science conference canceled his appearance following an outcry over his blogging under his nom de web, Bannon took note: Breitbart News decried the act of censorship in an article about the programmer-blogger’s dismissal.

[...]

If Taleb and Yarvin laid some of the theoretical groundwork for Trumpism, the most muscular and controversial case for electing him president—and the most unrelenting attack on Trump’s conservative critics—came from Michael Anton, a onetime conservative intellectual writing under the pseudonym Publius Decius Mus.

Thanks to an entrée from Thiel, Anton now sits on the National Security Council staff. Initial reports indicated he would serve as a spokesman, but Anton is set to take on a policy role, according to a source with knowledge of the situation. A former speechwriter for Rudy Giuliani and George W. Bush’s National Security Council, Anton most recently worked as a managing director for BlackRock, the Wall Street investment firm.

Hiring Anton puts one of the key intellectual forces behind Trump in the West Wing. In his blockbuster article “The Flight 93 Election,” a 4,300-plus-word tract published in September 2016 under his pseudonym, Anton strikes many of the same notes as Taleb and Yarvin. “America and the West are on a trajectory toward something very bad,” he writes. He blasts conservatives as “keepers of the status quo” for refusing to take account of the need for “truly fundamental” change—especially a crackdown on immigration that he argues is promoting “ethnic separatism” and risks entrenching a permanent Democratic majority.

Strauss and Howe? Taleb? Moldbug? I think I’ve heard of ‘em.

You always have to have a plan B

February 9th, 2017

Everybody fails, but not everybody responds to failure the same way, Mike Riggs notes, as he interviews Megan McArdle about The Up Side of Down:

Mike: You use the writing profession as an example of this.

Megan: You have to accept that being bad is part of learning to write. Most people who end up approaching professional writer status were always better at it than other kids. Then they get into the professional landscape and realize everyone else in the industry was also better at it than the other kids. This can be very traumatic for a lot of writers, and I’ve seen some of them just freeze. They don’t turn stuff in because as long as they haven’t turned it in, it’s not bad yet.

How do you hack that thinking? You say to yourself, “Look, I can rewrite garbage, I can’t rewrite nothing.”

Mike: It’s the iteration paradox. You miss 100% of the shots you don’t take, but you also miss a ton of the shots you do take when you’re first starting out. You have to do a thing over and over to get good at it, while somehow dealing with the fact that it’s really embarrassing and discomfiting to try hard at something and still be bad.

Megan: And the only way around that is to accept that failure is an essential part of the process.

You are not supposed to sit down and be Proust on your first pass. Proust wasn’t even Proust on the first pass. That means you have to see doing something badly as better than not doing anything at all. I won’t get fired for handing in 1,000 bad words. I will definitely get fired for not handing in anything.

After that, the next step is learning to recognize where and why you’re bad without rolling around on the floor, saying, “This is terrible, I’m obviously the world’s worst writer.” And you do that by looking at your bad work as a dipstick that measures where you can improve rather than one that measures your innate talents.

Mike: This speaks to the idea that learning how to do something new is good for you even if it doesn’t necessarily turn into a career.

Megan: We learn by doing stuff not well. That’s how people learn to play tennis. You don’t become good at it by creating a really elaborate theory of tennis ball physics, or else MIT would win Wimbledon every year. You hit a ball, you try to guess where it will go. It doesn’t go where you expect and then on the 100th time you finally hit it right. By hitting it wrong all those times, you learn to hit it right.

If you’ve never done anything you weren’t good at, you can’t learn the valuable skill of sucking at something but continuing to do it, which is how people get good at anything. And we have to make ourselves do it because doing something you aren’t good at is usually less rewarding than things that come more easily.

[...]

Mike Riggs: It seems like the best way to hedge against that kind of collapse at the institutional level is to be as diversified as possible at a personal level. Try things that are difficult, save as much as you can, contribute to a 401k. But even that is hard for lots of people.

Megan McArdle: The fact is you can’t assume nothing bad will happen. You could get hit by a truck tomorrow. Your company could go under. We should prepare for failure, which is why I always tell my readers to save 20% of their gross income. As you can imagine, this is not a popular suggestion with my readers.

I also advise people to have a year’s worth of expenses in an emergency fund. This was viewed, even by financial advisors, as quite conservative. But I spent two years being unemployed after getting what was supposed to be the golden ticket to a guaranteed job, which was an MBA from a top-five school. And that taught me there’s no such thing as a golden ticket. You always have to have a plan B. You always have to be thinking about what you’ll do if your company fails. Where will you go next? You should be maintaining connections in that industry, but you should also be living below your means. You should have a smaller mortgage than what you can afford. You should have more savings than you really need.

If you end up dying of cancer at the age of 40, you’ll have over-saved. But if you die of cancer at the age of 40, your biggest regret is not going to be that you didn’t spend more money while you were healthy. Your biggest regret is going to be about relationships and the people you didn’t call, so call your mother.

Introductory psychology textbooks lean left

February 8th, 2017

Introductory psychology textbooks lean left:

Writing in Current Psychology, Christopher Ferguson at Stetson University and his colleagues at Texas A&M International University conclude that intro textbooks often have difficulty covering controversial topics with care, and that whether intentionally or not, they are frequently presenting students with a liberal-leaning, over-simplified perspective, as well as propagating or failing to challenge myths and urban legends.

[...]

Ferguson and his team examined textbook coverage of seven areas of research consisting of findings which might be considered particularly appealing or unappealing to textbook authors with liberal leanings, and/or which could be prone to alarmist interpretation. This included research on whether media violence incites aggression; the stereotype threat (the notion that performance differences between groups are exaggerated by the fear of confirming stereotypes); the narcissism epidemic (the idea that today’s youth are more narcissistic than youth in the past); that smacking/spanking children leads to aggression and other negative outcomes; that there are multiple intelligences; that human behaviour is explained by evolutionary theories related to mate selection and sexual competition (in this case, the authors assumed liberal authors would prefer not to cover this research); and controversy around antidepressant medication.

The researchers looked to see if textbook authors presented the evidence as more definitive than it is in these areas, or only presented one side of the arguments. They found that there was biased treatment of media violence and stereotype threat by half or more of the books, and of multiple intelligences and spanking by a third. A quarter of books failed to deal with controversy around antidepressants. Evolutionary theories were neglected by a fifth of the books and presented in biased fashion by one quarter. “We believe that these errors are consistent with an indoctrination, however unintentional, into certain beliefs or hypotheses that may be ‘dear’ to a socio-politically homogenous psychological community,” Ferguson and his colleagues said.

They also looked at textbook treatment of various psychology myths and urban legends, including the frequently exaggerated story of the murder of Kitty Genovese, which is often cited as a perfect example of the “bystander effect”: our reduced likelihood of intervening to help when in the company of a greater number of other people who could help. Nearly half the books perpetuated the myth that 38 witnesses watched the killing of Genovese without doing anything to help her. Meanwhile, nearly three quarters of the books failed to challenge the popular misconception that we only use ten per cent of our brains, or that listening to Mozart makes us smarter. And 70 per cent of the books gave the French neurologist Paul Broca undue credit for localising speech function in the brain: the researchers say that the theory of the cortical localisation of speech was first put forward by Ernest Auburtin. “It is surprising to see so few textbooks addressing common misconceptions about psychology,” they said.

[...]

After all, in recent years, we’ve also covered research by Richard Griggs at the University of Florida that’s found biased textbook treatment of Milgram’s classic studies on obedience, outdated accounts of the story of Phineas Gage, biased coverage of Asch’s studies of conformity, and of Zimbardo’s Stanford Prison Experiment. Psychology students: if you’re looking for a rounded and accurate introduction to the field, you could consider supplementing your textbook reading with regular visits to our Research Digest blog. Or maybe you do that already.

Shell shock after all

February 7th, 2017

Scientists assumed that explosive blasts affect the brain in much the same way as concussions from football or car accidents:

No one had done a systematic post-mortem study of blast-injured troops. That was exactly what the Pentagon asked Perl to do in 2010, offering him access to the brains they had gathered for research. It was a rare opportunity, and Perl left his post as director of neuropathology at the medical school at Mount Sinai to come to Washington.

Perl and his lab colleagues recognized that the injury that they were looking at was nothing like concussion. The hallmark of C.T.E. is an abnormal protein called tau, which builds up, usually over years, throughout the cerebral cortex but especially in the temporal lobes, visible across the stained tissue like brown mold. What they found in these traumatic-brain-injury cases was totally different: a dustlike scarring, often at the border between gray matter (where synapses reside) and the white matter that interconnects it. Over the following months, Perl and his team examined several more brains of service members who died well after their blast exposure, including a highly decorated Special Operations Forces soldier who committed suicide. All of them had the same pattern of scarring in the same places, which appeared to correspond to the brain’s centers for sleep, cognition and other classic brain-injury trouble spots.

Then came an even more surprising discovery. They examined the brains of two veterans who died just days after their blast exposure and found embryonic versions of the same injury, in the same areas, and the development of the injuries seemed to match the time elapsed since the blast event. Perl and his team then compared the damaged brains with those of people who suffered ordinary concussions and others who had drug addictions (which can also cause visible brain changes) and a final group with no injuries at all. No one in these post-mortem control groups had the brown-dust pattern.

Perl’s findings, published in the scientific journal The Lancet Neurology, may represent the key to a medical mystery first glimpsed a century ago in the trenches of World War I. It was first known as shell shock, then combat fatigue and finally PTSD, and in each case, it was almost universally understood as a psychic rather than a physical affliction.

[...]

A blast begins simply: A detonator turns a lump of solid matter into a deadly fireball. Within that moment, three distinct things happen. The first is the blast wave, a wall of static pressure traveling outward in all directions faster than the speed of sound. Next, a blast wind fills the void and carries with it any objects it encounters. This is the most manifestly destructive part of the blast, capable of hurling cars, people and shrapnel against buildings and roadsides. The remaining effects include fire and toxic gases, which can sear, poison and asphyxiate anyone within range.

The effects of all of this on the human body are myriad and more complicated than the blast itself. People who have been exposed to blasts at close range usually describe it as an overpowering, full-body experience unlike anything they have ever known. Many soldiers do not recall the moment of impact: it gets lost in the flash of light, the deafening sound or unconsciousness. Those who do remember it often speak of a simultaneous punching and squeezing effect, a feeling at once generalized and intensely violent, as if someone had put a board against your body and then struck it with dozens of hammers.

[...]

Very quickly [after WWI began], soldiers began emerging with bizarre symptoms; they shuddered and gibbered or became unable to speak at all. Many observers were struck by the apparent capacity of these blasts to kill and maim without leaving any visible trace. The British journalist Ellis Ashmead-Bartlett famously described the sight of seven Turks at Gallipoli in 1915, sitting together with their rifles across their knees: “One man has his arm across the neck of his friend and a smile on his face as if they had been cracking a joke when death overwhelmed them. All now have the appearance of being merely asleep; for of the several I can only see one who shows any outward injury.”

For those who survived a blast and suffered the mysterious symptoms, soldiers quickly coined their own phrase: shell shock.

[...]

One British doctor, Frederick Mott, believed the shock was caused by a physical wound and proposed dissecting the brains of men who suffered from it. He even had some prescient hunches about the mechanism of blast’s effects: the compression wave, the concussion and the toxic gases. In a paper published in The Lancet in February 1916, he posited a “physical or chemical change and a break in the links of the chain of neurons which subserve a particular function.” Mott might not have seen anything abnormal in the soldiers’ brains, even if he had examined them under a microscope; neuropathology was still in its infancy. But his prophetic intuitions made him something of a hero to Perl.

Mott’s views were soon eclipsed by those of other doctors who saw shell shock more as a matter of emotional trauma. This was partly a function of the intellectual climate; Freud and other early psychologists had recently begun sketching provocative new ideas about how the mind responds to stress.

[...]

Cernak became convinced [after the Balkans conflict of the 1990s] that blast ripples through the body like rings on a pond’s surface. Its speed changes when it encounters materials of different density, like air pockets or the border between the brain’s gray and white matter, and can inflict greater damage in those places. As it happens, physicists would later theorize some very similar models for how blast damages the brain. Several possibilities have now been explored, including surges of blood upward from the chest; shearing loads on brain tissue; and the brain bouncing back and forth inside the skull, as happens with concussion. Charles Needham, a renowned authority on blast physics, told me post-mortems on blast injuries have lent some support to all of those theories, and the truth may be that several are at play simultaneously.

A decade after her initial battlefield surveys in the Balkans, Cernak took a position at Johns Hopkins University in Baltimore, where she did animal research that bolstered her conviction about blast’s full-body effects. She found that even if an animal’s head is protected during a blast, the brain can sustain damage, because the blast wave transfers through the body via blood and tissue. Cernak also came to believe that blast injuries to the brain were cumulative and that even small explosions with no discernible effects could, if repeated, produce terrible and irreversible damage. Much of this would later be confirmed by other scientists.

This all sounds quite credible — but it doesn’t explain the many cases of PTSD from troops who never faced combat or suffered blast injuries.

The Stages of Grief at the Frontier

February 6th, 2017

Jakub J. Grygiel lays out the stages of geopolitical grief along the unquiet frontier:

Recounted in a biography written by Eugippius, Saint Severinus’s peregrinations along the Danubian frontier illustrate different stages of coping with a growing insecurity on a frontier that was gradually abandoned by Roman forces and harassed by small tribes roaming the area.

First, there is the gradual recognition that imperial forces were not what they used to be. The tangible presence of the empire was disappearing, and the towns were losing their main security providers. “So long as the Roman dominion lasted, soldiers were maintained in many towns at the public expense to guard the boundary wall. When this custom ceased, the squadrons of soldiers and the boundary wall were blotted out together.” But the gradual withdrawal of Roman troops did not seem to have had a shocking impact on the locals, who perhaps did not notice immediately that their security required the presence of armed men. Indeed, few consider how security and deterrence are maintained while peace reigns.

The Roman troop at Batavis (modern-day Passau), however, held out. The place was itself a military base rather than a town; located at the confluence of two important rivers, the Danube and the Inn, it occupied important strategic real estate that most likely was deemed more valuable than other towns east of it. It was a remnant of a string of military outposts, and the soldiers there seemed to be severed from the bulk of the legions. At some point, “some soldiers of this troop had gone to Italy to fetch the final pay to their comrades.” They did not make it far because the barbarians marauding in the area killed them. For a while no one was aware of this massacre, but “one day, as Saint Severinus was reading in his cell, he suddenly closed the book and began to sigh greatly and to weep. He ordered the bystanders to run out with haste to the river, which he declared was in that hour besprinkled with human blood; and straightway word was brought that the bodies of the soldiers mentioned above had been brought to land by the current of the river.”

That was a shock.

The role of these few Roman soldiers was first and foremost one of reassurance. They could not have defended the small towns in case of a prolonged barbarian assault. They also did not maintain the safety of the surrounding areas, leaving them open to small but frequent barbarian incursions — and as the violent end of the few soldiers heading to obtain the overdue pay indicates, they could not even protect their own forces. Finally, these scarce imperial forces certainly did not serve as a “tripwire” because it was unlikely that, in case of a barbarian attack on them, Roman legions would have marched north in retaliation. In brief, they did not deter the barbarians. But they were there to reassure the locals. They were good enough to reassure, even if not good enough to deter and defend. And that is why when imperial forces melted away the locals were discouraged.

Second, after the reassuring presence of imperial might has vanished, the next stage does not include calls for defense or balancing or stronger walls. No. It is the stage of disbelief and self-delusion. As Roman power waned, the locals comforted themselves with the delusion that the threats did not exist or, if they did, that the menace was not great. Perhaps the enemies would seek other targets. Perhaps the walls would suffice. Perhaps the barbarians liked peace and commerce as much as they did. Perhaps they would just go away. Perhaps they would peacefully blend in. The list of possible justifications for this delusion is as long as it is wrong.

In the first town he visited, Asturis, Severinus warned the population that the enemy was indeed near and dangerous. They should repent, he told them. They should pray and fast, and they should unite by abandoning the search for the selfish fulfillment of material desires. Of course, as was to be expected from a complacent and materially satisfied polity, Severinus was laughed out of town.

People who are deluded — and do not see higher reasons for their own existence — will gladly justify their material self-satisfaction. Severinus left “in haste from a stubborn town that shall swiftly perish.” And perish Asturis did.

Third, in the next town, Comagenis, Severinus had more luck — the locals were on their next stage of grief. Because one man escaped from Asturis bringing the terrible news, the people of Comagenis could no longer ignore the hard fact that the barbarians were near and in search of destruction.

They recognized that security was a creation of force, not a self-sustaining reality. But even before the technical question of how to defend themselves, the locals needed a reason to do it. They needed what Roman troops, however scant, had provided before: some reassurance. And this was Severinus’s greatest contribution: he reassured the local populations. He supplied the surviving towns with a firm motivation to resist and defend themselves, a reassurance that defense was worthwhile. With his presence the frontier “castles felt no danger. The trusty cuirass of fasting, and praiseworthy humility of heart, with the aid of the prophet, had armed them boldly against the fierceness of the enemy.”

This stage of geopolitical grief can be productive because it is characterized by the nascent desire to engage in the competition at hand. Security, these frontier towns realized, was not guaranteed by impersonal forces, but needed to be underwritten by somebody. And they had to do it themselves.

The problem at this stage is that the passage from delusion and panic to the desire to produce indigenous defense is not automatic. Before the “how” and the “where” of defending oneself, it is necessary to have a clear and firm answer to the “why.” A polity can have all the technical marvels, logistical supplies, and tactical skills, but without a strong motivation to defend itself they will all be useless. A castellum can be architecturally pleasing and surrounded by thick walls, but if the people inside it do not know who they are and why they should fight, it is as undefended as a wide open field.

In one of the Danubian towns, the local commander Mamertinus was concerned that the forces at his disposal were insufficient. (He was also a future bishop — a pattern that replicated itself elsewhere in the decaying western Roman Empire. Bishops quickly became the main city authorities, caring not only for the spiritual life but also for the material survival of their flocks.) Mamertinus told Severinus: “I have soldiers, a very few. But I dare not contend with such a host of enemies. However, if thou commandest it, venerable father, though we lack the aid of weapons yet we believe that through thy prayers we shall be victorious.” Material capabilities are important, indeed essential; yet motivation and morale are even more so. Severinus stiffened their spines. Go out and engage the enemy, he told them. “Even if thy soldiers are unarmed, they shall now be armed from the enemy. For neither numbers nor fleshly courage is required, when everything proves that God is our champion.” Mamertinus’s troops went out, found some of the barbarians, attacked, and succeeded in routing most of them while obtaining a stash of their abandoned weapons.

Parasites and Piety

February 5th, 2017

In This Is Your Brain on Parasites, Kathleen McAuliffe examines how tiny creatures manipulate our behavior and shape society, with a chapter on parasites and piety. One passage recalls Chapter 4 of John Durant’s Paleo Manifesto, “Moses the Microbiologist”:

It took thousands of years for agriculture to take off. Few cities in the Middle East, where the movement began, had more than 50,000 inhabitants prior to biblical times. So the perfect storm was slow to gather but, when it hit, a health crisis of unimaginable disruption and trauma ensued. These new diseases were far more lethal and terrifying than the versions manifested in the untreated and unvaccinated today. We are the heirs of exceptionally hardy people who were unusual in having immune systems that could repel these virulent germs. Those at the forefront of these epidemics likely fared far worse on average than our more recent ancestors. Consider the fate that awaited some of the first people to get syphilis: pustules popped up on their skin from their heads to their knees, then their flesh began to fall off their bodies, and within three months they were dead. Those lucky enough to survive the ravages of never-before-encountered germs rarely came away unscathed. Many were crippled, paralysed, disfigured, blinded or otherwise maimed.

It was exactly at this critical juncture that our forefathers went from being not particularly spiritual to embracing religion — and not just passing fads, but some of the most widely followed faiths in the world today, whose gods promised to reward the good and punish the evil. One of the oldest of these belief systems is Judaism, whose most hallowed prophet, Moses, is equally revered in Christianity and in Islam (in the Quran, he goes by the name Musa and is referred to more times than Muhammad). Half the world’s population follows religions derived from Mosaic Law — that is, God’s commandments as communicated to Moses.

Not surprisingly, given its vintage, Mosaic Law is obsessed with matters related to cleanliness and lifestyle factors that we now know play a key role in the spread of disease. Just as villages in the Fertile Crescent were giving rise to filthy, crowded cities, and outbreaks of illness were becoming an everyday horror, Mosaic Law decreed that Jewish priests should wash their hands — to this day, one of the most effective public-health measures known to science.

The Torah contains much more medical wisdom — not merely its famous admonishments to avoid eating pork (a source of trichinosis, a parasitic disease caused by a roundworm) and shellfish (filter feeders that concentrate contaminants), and to circumcise sons (bacteria can collect under the foreskin flap). Jews were instructed to bathe on the Sabbath (every Saturday); cover their wells (which kept out vermin and insects); engage in cleansing rituals if exposed to bodily fluids; quarantine people with leprosy and other skin diseases and, if infection persisted, burn that person’s clothes; bury the dead quickly before corpses decomposed; submerge dishes and eating utensils in boiling water after use; never consume the flesh of an animal that had died of natural causes (as it might have been felled by illness) or eat meat more than two days old (likely on the verge of turning rancid).

When it came time for divvying up the spoils of war, Jewish doctrine required any metal booty that could withstand intense heat — objects made of gold, silver, bronze or tin — to ‘be put through fire’ (sterilised by high temperatures). What could not endure fire was to be washed with ‘purifying water’: a mixture of water, ash and animal fat (an early soap recipe).

Equally prescient from the standpoint of modern disease control, Mosaic Law has numerous injunctions specifically related to sex. Parents were admonished not to allow their daughters to become prostitutes, and premarital sex, adultery, male homosexuality and bestiality were all discouraged, if not banned outright.

Very, very bad at gun journalism

February 4th, 2017

The mainstream media lobbies hard for gun control, but it is very, very bad at gun journalism:

It might be impossible ever to bridge the divide between the gun-control and gun-rights movements. But it’s impossible to start a dialogue when you don’t know what the hell you are talking about.

Media stories in the wake of mass shootings typically feature a laundry list of mistakes that reflect their writers’ inexperience with guns and gun culture. Some of them are small but telling: conflating automatic and semi-automatic weapons, assault rifle and assault weapon, caliber and gauge—all demonstrating a general lack of familiarity with firearms. Some of them are bigger. Like calling for “common-sense gun control” and “universal background checks” after instances in which a shooter purchased a gun legally and passed background checks. Or focusing on mass shootings involving assault weapons—and thereby ignoring statistics that show that far more people die from handguns.

What Trump’s Immigration Order Says

February 3rd, 2017

Lyman Stone explains the visa ban:

The media has focused on the blanket ban on all visas for all people (except diplomats) with citizenship from Iran, Iraq, Libya, Syria, Sudan, Somalia, and Yemen. This means no tourists, no students, no immigrants, no refugees, no nothing. The EO does include permission for Customs to give “case-by-case” exceptions, but there do not appear to have been many exceptions yet (I could find only one documented case), and no guidance was given to Customs about what rules to use for making such exceptions.

The ban is not permanent, lasting only 90 days, but, as with the refugee ban, it can be renewed or extended. Indeed, Section 3(e) of the EO actually orders the Department of Homeland Security (DHS) to come up with a list of countries for a more permanent ban. So this EO is teeing up a more permanent ban.

Some critics have claimed this EO is a “Muslim ban.” That’s debatable. The countries selected were based on a list provided by the Obama administration, and the Obama administration had already imposed stricter visa screening requirements on those countries.

However, former New York City mayor Rudy Giuliani has claimed that President Trump did explicitly say he wanted to ban Muslims. Yet most Muslims will be unaffected. The vast majority of Muslims and Muslim countries are in Africa, South Asia, Southeast Asia, or Central Asia. Within the Middle East, large countries like Egypt, Turkey, and Saudi Arabia were not restricted.

Some EO supporters have claimed the seven banned nations were selected due to a unique terrorist threat. This is not quite true. The Obama administration did identify them as places of concern, and most do have active sectarian conflicts and terrorist activity, but, the truth is, they have no common thread. Many unstable or violent places were not included (Chad, Central African Republic, Mali, Egypt, Ukraine, Nigeria, etc.). Several of these even have large-scale jihadist insurgencies similar to those observed in the banned countries. Iran, meanwhile, has no violent insurgency at all.

Furthermore, not a single American has died as a result of terrorist attacks committed by any citizen of the seven banned countries in this millennium.* Of course, this doesn’t mean, in the absence of a ban, no attack would occur in the future, but these countries have not posed a unique risk in the past. Additionally, countries whose citizens have perpetrated attacks, like Pakistan or Saudi Arabia, were not banned.

EO critics have claimed these countries were selected to avoid Trump’s properties, implicitly rewarding countries for doing business with the Trump Organization. This view is likewise hard to support with facts. Many countries with no presence of the Trump Organization but with violent insurgencies were not banned, like Chad or South Sudan. Many Muslim countries with no Trump properties were not banned, like Afghanistan or Oman.

The truth is, there is no single rational factor that correlates with the seven banned countries. They do not share close religious similarities (Iran, Yemen, and Iraq have large Shi’a populations; Syria is largely Alawi and Sunni; Libya and Somalia are heavily Sunni). They do not all have insurgencies. Their governments are not all enemies of the United States; some, like Iraq, are even our close wartime allies!

Aside from the arbitrary selection of countries, the EO was poorly administered. It became effective almost immediately upon issuance, giving Customs no time to develop rules and practices or train personnel. It applied even to people who had boarded planes before the president signed the order.

Plus, it was unclear who should be banned. What if a person served as a U.S. military translator in Iraq? Is he or she banned? Thus far, the answer is yes. What if they have dual citizenship between the United Kingdom and Syria? Banned too! What about foreigners who are lawful permanent residents of the United States? They were initially banned as well, but DHS has since announced they will be allowed in. It is unclear if the White House supports this change.

It is reasonable for the administration to restrict admission of people from countries of unique concern. The president has the power to do this. Both President Bush and President Obama used this power in moments of crisis to ensure national security. But that power must be exercised wisely: government agencies need clear guidance, not “case-by-case” exceptions with no rules about who gets in and who doesn’t. They need time to prepare implementation, and we need a consistent policy, not one that waffles every few hours as the protests and judicial orders ebb and flow.

Westernization leads to de-Westernization

February 3rd, 2017

Westernization of less developed societies eventually leads to a form of de-Westernization, Samuel P. Huntington argues, in The Clash of Civilizations and the Remaking of World Order:

Initially, Westernization and modernization are closely linked, with the non-Western society absorbing substantial elements of Western culture and making slow progress towards modernization. As the pace of modernization increases, however, the rate of Westernization declines and the indigenous culture goes through a revival. Further modernization then alters the civilizational balance of power between the West and the non-Western society, bolsters the power and self-confidence of that society, and strengthens commitment to the indigenous culture.

In the early phases of change, Westernization thus promotes modernization. In the later phases, modernization promotes de-Westernization and the resurgence of indigenous culture in two ways. At the societal level, modernization enhances the economic, military and political power of the society as a whole and encourages the people of that society to have confidence in their culture and to become culturally assertive. At the individual level, modernization generates feelings of alienation and anomie as traditional bonds and social relations are broken and leads to crises of identity to which religion provides an answer.

The Church of Electronic Culture

February 2nd, 2017

Long before there were hackers and makers, there were tinkerers, and long before magazines like Wired and Mondo 2000 pushed a vision of the cyber-future, magazines like Amazing Stories and Science Wonder Stories pushed a vision of the electronic future. Hugo Gernsback was the tinkerer who coined the term scientifiction and published many of the magazines that blended science and fiction:

First, though, he was a radio man, immersed in and obsessed with the new technology of wireless communication. He was an inventor in the turn-of-the-century generation inspired by Thomas Edison; among his eighty patents are “Radio Horn”; “Detectorium”; “Luminous Electric Mirror”; “Ear Cushion” (for telephone receivers); “Combined Electric Hair Brush and Comb” (“may also be used as a massage instrument”). He formed the first radio hobbyist group, the Wireless Association of America, when he was twenty-five years old, and incorporated its successor, the Radio League of America, six years later; created Radio News magazine; and started one of New York’s first stations, WRNY, broadcasting from atop the Roosevelt Hotel on Madison Avenue. The station and the league promoted the magazine, and the magazine promoted the station and the league, and all promoted Gernsback. He was an evangelist for the church we might call electronic culture. Most of us are its parishioners nowadays, with our magic boxes.

Gernsback left a trail of technical writings, patents, interviews, newspaper clippings, and prophetic essays, and the best of these have now been gathered into a beautifully illustrated compendium and sourcebook titled The Perversity of Things: Hugo Gernsback on Media, Tinkering, and Scientifiction, by Grant Wythoff, a Columbia University historian of media studies.

Hugo Gernsback wearing his Isolator

Born Hugo Gernsbacher, the son of a wine merchant in a Luxembourg suburb before electrification, he started tinkering as a child with electric bell-ringers. When he emigrated to New York City at the age of nineteen, in 1904, he carried in his baggage a design for a new kind of electrolytic battery. A year later, styling himself in Yankee fashion “Huck Gernsback,” he published his first article in Scientific American, a design for a new kind of electric interrupter. That same year he started his first business venture, the Electro Importing Company, selling parts and gadgets and a “Telimco” radio set by mail order to a nascent market of hobbyists and soon claiming to be “the largest makers of experimental Wireless material in the world.”

His mail-order catalogue of novelties and vacuum tubes soon morphed into a magazine, printed on the same cheap paper but now titled Modern Electrics. It included articles and editorials, like “The Wireless Joker” (it seems pranksters had fun with the new communications channel) and “Signaling to Mars.” It was hugely successful, and Gernsback was soon a man about town, wearing a silk hat, dining at Delmonico’s and perusing its wine list with a monocle.

Public awareness of science and technology was new and in flux. “Technology” was barely a word and still not far removed from magic. “But wireless was magical to Gernsback’s readers,” writes Wythoff, “not because they didn’t understand how the trick worked but because they did.” Gernsback asked his readers to cast their minds back “but 100 years” to the time of Napoleon and consider how far the world has “progressed” in that mere century. “Our entire mode of living has changed with the present progress,” he wrote in the first issue of Amazing Stories, “and it is little wonder, therefore, that many fantastic situations — impossible 100 years ago — are brought about today.”

So for Gernsback it was completely natural to publish Science Wonder Stories alongside Electrical Experimenter. He returned again and again to the theme of fact versus fiction — a false dichotomy, as far as he was concerned. Leonardo da Vinci, Jules Verne, and H. G. Wells were inventors and prophets, their fantastic visions giving us our parachutes and submarines and spaceships. “In time to come,” he wrote in one editorial, “there is no question that science fiction will be looked upon with considerable respect by every thinking person.” He declared, and believed, that science fiction would be the true literature of the future.

People just give up trying to improve

February 2nd, 2017

Anders Ericsson — of deliberate practice fame — began his career helping to push the boundaries of working memory:

Most people can repeat back a seven-digit phone number, but not a ten-digit one. He recruited Steve Faloon, an average Carnegie Mellon University student, and they set about systematically working to get better. After about 200 hours of effort, Faloon could repeat back 82 digits, by far a world record at the time. Faloon wasn’t destined for such greatness. Rather, Ericsson’s takeaway is that performance has no inherent limit. “Instead, I’ve found that people more often just give up and stop trying to improve,” he writes. Work constantly at the edge of your ability, though, and your brain changes in a way that makes better performance possible.

The Decline of the Western

February 1st, 2017

Molly Brigid Flynn laments the decline of the Western, as she contrasts the original Magnificent Seven against the recent remake:

In the original Magnificent Seven, a Mexican village beset by bandits cannot count on the absentee rurales (mounted police). The Old Man advises the farmers to buy guns north of the border — “guns are plentiful there” — but they buy gunmen instead. The seven hired loners lead the village’s defense against Calvera (Eli Wallach) and his gang. The film displays the superiority of the quietly industrious village over the Old West town. Yet, the farmers’ settled, communal life requires defense by unsettled, strong individuals, naturally drawn to other goods.

In an early scene, a traveling salesman (ladies’ corsets) passing through the Old West town does “what any decent man would” — pays the coroner after watching people step over the corpse of Old Sam in the street. But some townsmen object to the Indian’s burial in the potter’s field filled with white murderers and robbers. “How long has this been going on?” the salesman asks. “Since the town got civilized,” the coroner responds, apologetically.

“I don’t like it,” he adds. “I’ve always treated every man the same — just as another future customer.” The mixed blessings of capitalism, encapsulated in a sentence. Whether from decency or morbid self-interest, the two businessmen rise above bigotry, but still need tough guys Chris (Yul Brynner) and Vin (Steve McQueen), who volunteer to drive the hearse past the shotguns. This one scene in the old movie packs more thought about commerce and civilization than the new movie’s entire 133 minutes.

In their youthful independence, Chris and Vin’s main objection to civilization is that it’s boring. But once their gang arrives to defend the village, the quiet life becomes charming, admirable, worth defending. The American individualists gradually appreciate its wholesome excellence. Like midlife, civilization has its goods — but so do youth and independence. Superior in one way, inferior in another, Chris and Vin ride off after saving the village, while Chico — in love — stays for the long haul of settled life.

Erasing these reflections on capitalism and civilization, community and character, Antoine Fuqua’s new Magnificent Seven hunts smaller game.

The new movie only superficially displays a contemporary liberalism. Much has been made of its ostentatiously diverse seven, “a rainbow coalition.” An African American leads the team, which includes a Native American, a Mexican, an Asian American, and a minority of white guys (all three die). As Anthony Lane comments in The New Yorker, “It was difficult to ignore the patronizing tone of Sturges’ tale, in which helpless Mexican villagers in white blouses are saved and blessed by the intervention of American tough guys, so the new version is wise to recruit a Latino gunslinger to the front line.”

Here Lane betrays a common prejudice against midcentury America. In Sturges’ film, Chico is Mexican, “from a village just like that one,” and Bernardo half-Mexican, even though the actors playing Chico and Bernardo (Horst Buchholz and Charles Bronson) were not. Also, in Sturges’ version the problem was not that Mexicans cannot be “tough guys.” The trouble was that the wrong people were tough. Westerns often emphasize the fact — a truth across ethnicities and a difficulty for all civilizations — that good people are less likely to be good fighters. Worse still, lost on Lane and director Fuqua is that the 1960 film asserts the Mexican village’s superiority over the American town.

Schizophrenic Attackers

January 31st, 2017

Dr. Jeroen Ensink, 41, a lecturer at the London School of Hygiene and Tropical Medicine, was stabbed to death by Nigerian-born student Femi Nandap, who was a cannabis-abusing psychotic (in the strict, clinical sense):

Despite attacking a police officer in May last year and then being caught in possession of two kitchen knives, Nandap was twice given bail, before prosecutors decided to drop the charges against him.

Six days later he attacked and killed Dr Ensink, telling police who tried to intervene that he was the “black messiah”.

The current academic wisdom is that it would be impossible to prevent such murders without restricting large numbers of patients — say 35,000 of them:

The very paper which provides the “35,000” figure for stranger murder also gives figures for assault, shown below, and these put things into a more manageable context. The annual rates for assault and violent crime are extraordinarily high, almost unbelievably so. Given the very high base rate, screening and monitoring are worthwhile.

Positive Predictive Value for the Detection of Adverse Events in Schizophrenia

As the event becomes more rare, the positive predictive value of the risk-categorization becomes lower, and the error rate higher, with progressively more people needing to be monitored to prevent one rare event. However, to prevent an assault would require that 3 schizophrenic patients be monitored, calling them in to check they are taking their medication, and presumably (hardest part) searching for them if they failed to show up. Easier would be to link up with the Police, so that if a patient is brought in for violent behaviour of any sort there can be coordinated management of the offender. Devoutly to be wished, often denied, but in the manageable range given the will and the resources. It would provide a good service for the patients, reducing suicide attempts, improving the quality of their lives, and reducing threats to others. It would certainly be worth testing it out in a London Borough, and checking that the above figures, derived from the best sources, hold up on further examination.
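The base-rate effect driving those numbers is just Bayes’ theorem, and a small worked example makes it concrete. The sketch below is mine, not the paper’s: the 0.75 sensitivity/specificity and the two base rates are illustrative assumptions chosen only to show the shape of the trade-off.

    # Positive predictive value (PPV) via Bayes' theorem.
    # Sensitivity, specificity, and base rates below are illustrative
    # assumptions, not figures from the paper discussed above.

    def ppv(sensitivity, specificity, base_rate):
        """Share of positive screens that are true positives."""
        true_pos = sensitivity * base_rate
        false_pos = (1 - specificity) * (1 - base_rate)
        return true_pos / (true_pos + false_pos)

    # A common adverse event (assault, hypothetical 10% annual rate)
    # versus a rare one (stranger murder, hypothetical 1 in 10,000):
    for event, base_rate in [("assault", 0.10), ("stranger murder", 0.0001)]:
        p = ppv(0.75, 0.75, base_rate)
        print(f"{event}: PPV = {p:.4f}; ~{1 / p:.0f} flagged per true case")

On those assumptions the screen flags about 4 patients per true case of assault, but over 3,000 per true case of stranger murder, which is why monitoring looks manageable for the common event and hopeless for the rare one.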

None of the media coverage goes into the question that arises out of normal curiosity: is psychotic behaviour more common among Africans in the UK? The picture above shows murderer and victim, an all too common pairing. The answer to the African question is: 6 to 9 times higher.