The Deep Roots of Prosperity

Friday, October 14th, 2016

Today’s rich countries tend to be in East Asia, Northern and Western Europe — or are heavily populated by people who came from those two regions.

The major exceptions are oil-rich countries. East Asia and Northwest Europe are precisely the areas of the world that made the biggest technological advances over the past few hundred years. These two regions experienced “civilization,” an ill-defined but unmistakable combination of urban living, elite prosperity, literary culture, and sophisticated technology. Civilization doesn’t mean kindness, and it doesn’t mean respect for modern human rights: it means the frontier of human artistic and technological achievement. And over the extremely long run, a good predictor of your nation’s current economic behavior is your nation’s ancestors’ past behavior. Exceptions exist, but so does the rule.

Recently, a small group of economists has found more systematic evidence on how the past predicts the present. Overall, they find that where your nation’s citizens come from matters a lot. From “How deep are the roots of economic development?” published in the prestigious Journal of Economic Literature:

A growing body of new empirical work focuses on the measurement and estimation of the effects of historical variables on contemporary income by explicitly taking into account the ancestral composition of current populations. The evidence suggests that economic development is affected by traits that have been transmitted across generations over the very long run.

From “Was the Wealth of Nations determined in 1000 B.C.?” (coauthored by the legendary William Easterly):

[W]e are measuring the association of the place’s technology today with the technology in 1500 AD of the places from where the ancestors of the current population came from…[W]e strongly confirm…that history of peoples matters more than history of places.

And finally, from “Post-1500 Population Flows and the Long-Run Determinants of Economic Growth and Inequality,” published in Harvard’s Quarterly Journal of Economics:

The positive effect of ancestry-adjusted early development on current income is robust…The most likely explanation for this finding is that people whose ancestors were living in countries that developed earlier (in the sense of implementing agriculture or creating organized states) brought with them some advantage—such as human capital, knowledge, culture, or institutions—that raises the level of income today.

To sum up some of the key findings of this new empirical literature: There are three major long-run predictors of a nation’s current prosperity, which combine to make up a nation’s SAT score:

S: How long ago the nation’s ancestors lived under an organized state.

A: How long ago the nation’s ancestors began to use Neolithic agriculture techniques.

T: How much of the world’s available technology the nation’s ancestors were using in 1000 B.C., 0 A.D., or 1500 A.D.

When estimating each nation’s current SAT score, it’s important to adjust for migration: Indeed, all three of these papers do some version of that. For instance, without adjusting for migration, Australia has quite a low ancestral technology score: Aboriginal Australians used little of the world’s cutting edge technology in 1500 A.D. But since Australia is now overwhelmingly populated by the descendants of British migrants, Australia’s migration-adjusted technology score is currently quite high.
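Just to make the mechanics concrete, here is a minimal sketch in Python of that kind of ancestry adjustment, computed as a weighted average. Every name and number below is invented for illustration; the papers themselves build the weights from a full matrix of post-1500 population flows.

```python
# Hypothetical sketch: a migration-adjusted technology (T) score as an
# ancestry-weighted average. All names and numbers here are invented;
# the actual papers derive the weights from post-1500 migration data.

def adjusted_score(ancestry_shares, ancestral_scores):
    """Weight each source population's historical score by its share of
    the current population's ancestry."""
    assert abs(sum(ancestry_shares.values()) - 1.0) < 1e-9
    return sum(share * ancestral_scores[origin]
               for origin, share in ancestry_shares.items())

# Invented technology-use scores in 1500 A.D., on a 0-to-1 scale:
t_1500 = {"Britain": 0.95, "Aboriginal Australia": 0.10}

# Who lived in Australia in 1500 versus the ancestry of Australians today:
australia_in_1500 = {"Aboriginal Australia": 1.0}
australia_today = {"Britain": 0.90, "Aboriginal Australia": 0.10}

print(adjusted_score(australia_in_1500, t_1500))  # 0.10  (unadjusted: low)
print(adjusted_score(australia_today, t_1500))    # 0.865 (adjusted: high)
```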

On average, nations with high migration-adjusted SAT scores are vastly richer than nations with lower SAT scores: countries in the top 10% of migration-adjusted technology (T) in 1500 are typically at least 10 times richer than countries in the bottom 10%. If you instead mistakenly tried to predict a country’s income today based on who lived there in 1500, the relationship would be only about one-third that size. The migration adjustment matters crucially: whether in the New World, across Southeast Asia, or in Southern Africa, you can do a better job of predicting today’s prosperity if you keep track of who moved where. It looks like migrants, at least those of the distant past, shaped today’s prosperity.

Wealth, Health, and Child Development

Thursday, October 13th, 2016

Swedish researchers looked at wealth, health, and child development by studying lottery players:

We use administrative data on Swedish lottery players to estimate the causal impact of substantial wealth shocks on players’ own health and their children’s health and developmental outcomes. Our estimation sample is large, virtually free of attrition, and allows us to control for the factors conditional on which the prizes were randomly assigned.

In adults, we find no evidence that wealth impacts mortality or health care utilization, with the possible exception of a small reduction in the consumption of mental health drugs. Our estimates allow us to rule out effects on 10-year mortality one sixth as large as the cross-sectional wealth-mortality gradient.

In our intergenerational analyses, we find that wealth increases children’s health care utilization in the years following the lottery and may also reduce obesity risk. The effects on most other child outcomes, including drug consumption, scholastic performance, and skills, can usually be bounded to a tight interval around zero.

Overall, our findings suggest that in affluent countries with extensive social safety nets, causal effects of wealth are not a major source of the wealth-mortality gradients, nor of the observed relationships between child developmental outcomes and household income.

Insulin and Alzheimer’s

Thursday, October 13th, 2016

Insulin resistance may be a powerful force in the development of Alzheimer’s Disease:

In the body, one of insulin’s responsibilities is to unlock muscle and fat cells so they can absorb glucose from the bloodstream. When you eat something sweet or starchy that causes your blood sugar to spike, the pancreas releases insulin to usher the excess glucose out of the bloodstream and into cells. If blood sugar and insulin spike too high too often, cells will try to protect themselves from overexposure to insulin’s powerful effects by toning down their response to insulin — they become “insulin resistant.” In an effort to overcome this resistance, the pancreas releases even more insulin into the blood to try to keep glucose moving into cells. The more insulin levels rise, the more insulin resistant cells become. Over time, this vicious cycle can lead to persistently elevated blood glucose levels, or type 2 diabetes.

In the brain, it’s a different story. The brain is an energy hog that demands a constant supply of glucose. Glucose can freely leave the bloodstream, waltz across the blood-brain barrier, and even enter most brain cells — no insulin required. In fact, the level of glucose in the cerebrospinal fluid surrounding your brain is always about 60% as high as the level of glucose in your bloodstream — even if you have insulin resistance — so the higher your blood sugar, the higher your brain sugar.

Not so with insulin — the higher your blood insulin levels, the more difficult it can become for insulin to penetrate the brain. This is because the receptors responsible for escorting insulin across the blood-brain barrier can become resistant to insulin, restricting the amount of insulin allowed into the brain. While most brain cells don’t require insulin in order to absorb glucose, they do require insulin in order to process glucose. Cells must have access to adequate insulin or they can’t transform glucose into the vital cellular components and energy they need to thrive.

Despite swimming in a sea of glucose, brain cells in people with insulin resistance literally begin starving to death.

Which brain cells go first? The hippocampus is the brain’s memory center. Hippocampal cells require so much energy to do their important work that they often need extra boosts of glucose. While insulin is not required to let a normal amount of glucose into the hippocampus, these special glucose surges do require insulin, making the hippocampus particularly sensitive to insulin deficits. This explains why declining memory is one of the earliest signs of Alzheimer’s, despite the fact that Alzheimer’s Disease eventually destroys the whole brain.

Can War Foster Cooperation?

Wednesday, October 12th, 2016

Can war foster cooperation? Of course it can:

In the past decade, nearly 20 studies have found a strong, persistent pattern in surveys and behavioral experiments from over 40 countries: individual exposure to war violence tends to increase social cooperation at the local level, including community participation and prosocial behavior. Thus while war has many negative legacies for individuals and societies, it appears to leave a positive legacy in terms of local cooperation and civic engagement. We discuss, synthesize and reanalyze the emerging body of evidence, and weigh alternative explanations. There is some indication that war violence especially enhances in-group or “parochial” norms and preferences, a finding that, if true, suggests that the rising social cohesion we document need not promote broader peace.

Hat tip to Tyler Cowen, who adds:

That is an all-star line-up of authors, and no this doesn’t mean any of those individuals are in favor of war. That would be the fallacy of mood affiliation, and we all know that MR readers never commit the fallacy of mood affiliation…

Looking into the Brains of Habitual Short Sleepers

Wednesday, October 12th, 2016

A recent study looked into the brains of habitual short sleepers:

The team compared data from people who reported a normal amount of sleep in the past month with those who reported sleeping six hours or less a night. They further divided the short sleepers into two groups: those who reported daytime dysfunction, such as feeling too drowsy to perform common tasks or to keep up enthusiasm, and those who reported feeling fine.

Both groups of short sleepers exhibited connectivity patterns more typical of sleep than wakefulness while in the MRI scanner. Anderson says that although people are instructed to stay awake while in the scanner, some short sleepers may have briefly drifted off, even those who denied dysfunction. “People are notoriously poor at knowing whether they’ve fallen asleep for a minute or two,” he says. For the short sleepers who deny dysfunction, one hypothesis is that their wake-up brain systems are perpetually in overdrive. “This leaves open the possibility that, in a boring fMRI scanner they have nothing to do to keep them awake and thus fall asleep,” says Jones. This hypothesis has public safety implications, according to Curtis. “Other boring situations, like driving an automobile at night without adequate visual or auditory stimulation, may also put short sleepers at risk of drowsiness or even falling asleep behind the wheel,” he says.

Looking specifically at differences in connectivity between brain regions, the researchers found that short sleepers who denied dysfunction showed enhanced connectivity between sensory cortices, which process external sensory information, and the hippocampus, a region associated with memory. “That’s tantalizing because it suggests that maybe one of the things the short sleepers are doing in the scanner is performing memory consolidation more efficiently than non-short sleepers,” Anderson says. In other words, some short sleepers may be able to perform sleep-like memory consolidation and brain tasks throughout the day, reducing their need for sleep at night. Or they may be falling asleep during the day under low-stimulation conditions, often without realizing it.

Maybe Companies Aren’t Too Focused on the Short Term

Tuesday, October 11th, 2016

There are plenty of stories of how U.S. corporations live for the short term, obsessing over the next quarterly earnings statement to the neglect of their longer-run prospects:

Still, it’s not been established that American corporations are on average more short-term in their thinking than they ought to be.

Perhaps most importantly, it is often easier and better to plan for the shorter term. In information technology, the average life of a corporate asset is about six years, in health care it is about 11 years, and for consumer products it runs about 12 to 15. Very often it is hard for a company to plan its operations beyond those time periods, as the U.S. economy is no longer based on durable manufacturing machines. Production has shifted toward service sectors with relatively short asset lives, and that may call for a shorter-term orientation in response.

Companies often see their short-term problems staring them in the face — think of the need to fire an incompetent manager or lease more office space. It is harder to predict the market 20 years hence, especially when information technology is involved, and thus planning so far out can involve a lot of expense and risk.

Plenty of companies have made big mistakes from thinking too big and too long-term; for instance, a lot of mergers were based on notions of long-run synergies that never materialized. In reality, short-term improvements are often the best way to get to a good long-run plan.

[...]

Equity markets do not seem to neglect the longer run. Amazon has a high share price even though its earnings reports have usually failed to show a profit. Possibly the market judgment is wrong, but it’s hardly the case that investors are ignoring the long-run prospects of the company.

Many tech startups have high valuations even though revenue is zero or low. Again, those judgments may or may not be correct, but clearly investors are trying to estimate longer-run prospects. During the dot-com bubble of the 1990s, there was too much long-run, pie-in-the-sky thinking and not enough focus on the concrete present.

Economics Nobel laureate Eugene Fama once said, “In hindsight, every price is wrong.” With electric and driverless cars, investors are thinking long and hard about what the future might look like and investing in equities accordingly, with share prices to be revised as events develop. If the long-run thinking of the market were systematically defective, it would be possible to profit simply through superior patience. But it is not an easy matter to see further than others.

Where Creativity Comes From

Tuesday, October 11th, 2016

The old adage about inventiveness is that it stems from necessity:

Based on his studies of orangutans, primatologist Carel van Schaik of the University of Zurich has come to a very different view. “When food is scarce, orangutans go into energy-saving mode. They minimize movement and focus on unappealing fall-back foods,” he observed. Their strategy in this scenario is quite the opposite of innovation, but it makes sense. “Trying something new can be risky — you can get injured or poisoned — and it requires a serious investment of time, energy and attention, while the outcome is always uncertain,” van Schaik explains.

Research on humans faced with scarcity echoes van Schaik’s orangutan findings. In 2013, Science published a study by economist Sendhil Mullainathan of Harvard University and psychologist Eldar Shafir of Princeton University describing how reminding people with a low income of their financial trouble reduced their capacity to think logically and solve problems in novel situations. A subsequent study found that Indian sugarcane farmers performed much better on the same cognitive performance test after receiving the once-a-year payment for their produce, temporarily resolving their monetary concerns. (Farmers who did not take the test previously did comparably well after getting paid, so it is unlikely that the improvement was simply the consequence of prior experience with the test.) People will do whatever it takes to survive, of course, which may occasionally lead to innovations. But as these and other studies suggest, if one’s mind is constantly occupied with urgent problems, such as finding food or shelter or paying bills, there will not be much capacity left to come up with long-term solutions to better one’s livelihood.

So where does creativity come from? Insights have come from the surprising observation that orangutans can be incredibly creative in captivity. “If food is provided for and predators are absent, they suddenly have a lot of time on their hands, free from such distractions,” van Schaik explains. Furthermore, in their highly controlled environments, exploration rarely has unpleasant consequences, and there are many unusual objects to play around with. Under such circumstances, orangutans appear to lose their usual fear of the unknown. In a study published in the American Journal of Primatology in 2015, van Schaik and his colleagues compared the response of wild and captive orangutans to a newly introduced object, a small platform in the shape of an orangutan nest. While captive orangutans approached the new object almost immediately, most wild orangutans, though habituated to the presence of humans, didn’t even go near it during several months of testing — only one eventually dared to touch it. Such fear of novelty may pose a significant obstacle to creativity: if an animal avoids approaching any new objects, innovations become rather unlikely. “So if you ask me, opportunity is the mother of invention,” van Schaik remarks.

Big Viking Families

Monday, October 10th, 2016

In saga-era Iceland, killers had three times as many biological relatives and in-laws as their victims:

In the three sagas, a total of 66 individuals caused 153 deaths; two or more attackers sometimes participated in the same killing. No killers were close biological relatives of their victims (cousins or closer), but one victim was a sister-in-law of her killer.

Two-thirds or more of killers had more biological kin on both sides of their families, and more in-laws, than their victims did.

Six men accounted for about 45 percent of all murders, each killing between five and 19 people. Another 23 individuals killed two to four people. The rest killed once. Frequent killers had many more social relationships, through biological descent and marriage, than their victims did, suggesting that they targeted members of families in vulnerable situations, the researchers say.

Thought in its First, Molten State

Monday, October 10th, 2016

Philosophy never seems to be making progress:

One of Gottlieb’s central insights is that, as he wrote in his previous volume, “The Dream of Reason,” which covered thought from the Greeks to the Renaissance, “the history of philosophy is more the history of a sharply inquisitive cast of mind than the history of a sharply defined discipline.” You might say that philosophy is what we call thought in its first, molten state, before it has had a chance to solidify into a scientific discipline, like psychology or cosmology. When scientists ask how people think or how the universe was created, they are addressing the same questions posed by philosophy hundreds or even thousands of years earlier. This is why, Gottlieb observes, people complain that philosophy never seems to be making progress: “Any corner of it that comes generally to be regarded as useful soon ceases to be called philosophy.”

Growing Plants on Mars

Sunday, October 9th, 2016

Growing plants on Mars ain’t easy:

Drew Palmer, an assistant professor of Biological Sciences, Brooke Wheeler, an assistant professor at the College of Aeronautics, and astrobiology majors from the Department of Physics and Space Sciences are growing Outredgeous lettuce (a variety of red romaine) in different settings — Earth soil, analog Martian surface material known as regolith simulant, and regolith simulant with nutrients added — to find the magic formula for the type and amount of nutrients needed to grow a plant in inhospitable Martian dirt.

“We have to get the regolith right or anything we do won’t be valid,” said Andy Aldrin, director of the Buzz Aldrin Space Institute.

Unlike Earth soil, Martian regolith contains no helpful organic matter and has fewer minerals plants need for food, such as phosphates and nitrates. Adding to the challenges, real Martian regolith in its pure state is harmful for both plants and humans because of high chlorine content in the form of perchlorates.

The current Mars regolith simulant isn’t perfect. Until a real sample of Mars dirt comes back to Earth, which could happen on a mission estimated to be at least 15 years from now, Florida Tech researchers will spend the next year trying to create an accurate regolith analogue by applying chemical sensing data from the Mars rovers.

Eventually, it may be possible, with the addition of fertilizer and the removal of the perchlorates, to grow various plants in Martian soil. Florida Tech scientists are partnering with NASA scientists who have experience growing plants on the International Space Station to help figure out ways to make Martian farming a reality.

The Mr. Rogers of Painting

Sunday, October 9th, 2016

The Mr. Rogers of Painting popularized “wet on wet” oil painting:

The only public record of any tumult in Ross’ life is in his relationship with the man who taught him to paint, William Alexander. Alexander, who had his own PBS show, The Magic of Oil Painting, claimed to be the originator of wet-on-wet oil painting, the technique Ross used on The Joy of Painting.

Classical oil painting is a time-consuming process of building up slow-drying glazes to produce a luminescent effect. If a layer is insufficiently dry, colors blend and become muddy. The wet-on-wet technique speeds this process up considerably by controlling the mixing of colors on the canvas so highlights and shadows are created with a quick series of gestures. Once the technique is mastered, it becomes easy to whip out a competent, representational image in a very short amount of time.


Alexander never got over what he felt was Ross’ theft. Without speaking to whether Ross unfairly took credit, it’s fair to say that it’s not the method of paint application that caused Ross to surpass his former mentor in popularity. Watching Alexander highlights just how much Ross’ personality benefits his show: not simply his widely quoted peacenik platitudes (as endearing as those are) but also the rhythm he folds into his tutorials. Ross maintains a smooth, unbroken cadence as he shifts from instructions on brush placement or color mixture to a completely unrelated observation about the epileptic squirrel he’s rehabilitating. Each comment, punctuated by the rhythmic pat of the brush or the scraping of the palette knife against the canvas, becomes almost musical.

Ross’ teacher didn’t have nearly the same on-camera ease. He spoke distractedly in a thick German accent and mused on how a surplus of imagination could drive artists to acts like Van Gogh slicing off his own ear. A host who contemplates self-mutilation in a brusque Teutonic inflection just won’t find the same level of enthusiasm among public television viewers as the softcore hippiedom of Bob Ross.

Marc Andreessen’s Library

Saturday, October 8th, 2016

The lobby at Andreessen Horowitz is also Marc Andreessen’s library, and it’s full of books about Hollywood:

In 1908, the country’s nine largest filmmakers formed the Motion Picture Patents Company, insisting that no one else could make movies because they controlled the patents on the original movie camera, co-created by Thomas Edison at his lab in New Jersey. The patents belonged to Edison, and he backed the Patents Company. So a new wave of filmmakers moved to the West Coast, where the courts were less friendly to Edison. Hollywood became a place to make movies in part because it was so sunny — you could film outdoors more often and with fewer lights — but also because it was so far away from New Jersey.

Who the Devil Made It — an oral history of Hollywood collected by the director and film historian Peter Bogdanovich — begins in the days of the Patents Company. Allan Dwan, who started making movies in the 1910s, tells Bogdanovich that as independent filmmakers moved west, the Patents Company hired strongmen to enforce its patents. Dwan remembers snipers climbing trees overlooking movie sets and taking shots at the cameras they deemed illegal. He would film as far as he could from the railroad stops, so he and his crew were harder to find.

The story of early Hollywood is very much the story of Silicon Valley, full of innovators fleeing the old rules in search of the new. It only makes sense that the lobby of Andreessen Horowitz is stocked with books on early Hollywood, including Who the Devil Made It. Bogdanovich and Dwan tell a story not unlike the one told in What the Dormouse Said, in which a group of freethinkers rises up in the 1960s and creates the personal computer, pushing against entrenched giants like IBM.


As The New Yorker explains, Andreessen and Horowitz are pals with [Michael] Ovitz, the guy behind CAA, one of Hollywood’s biggest talent agencies. When they started their firm, they went to Ovitz for advice.

“Call everyone a partner, offer services the others don’t, and help people who aren’t your clients,” he said. “Disrupt to differentiate by becoming a dream-execution machine.” They did all that. And, in contrast to typical Silicon Valley VCs, they hired a whole team of publicists who guided Andreessen Horowitz stories into Fortune and Forbes. They hung some Rauschenbergs around the office — just like CAA. And when people pitched them, they drank from glassware rather than plastic. The books complement the Rauschenbergs and the glassware. They, too, lend authority.

The Mindful Child

Saturday, October 8th, 2016

Meditation training may be most helpful for children:

It’s long been known that meditation helps children feel calmer, but new research is helping quantify its benefits for elementary school-age children. A 2015 study found that fourth- and fifth-grade students who participated in a four-month meditation program showed improvements in executive functions like cognitive control, working memory, cognitive flexibility — and better math grades. A study published recently in the journal Mindfulness found similar improvements in mathematics in fifth graders with attention deficit hyperactivity disorder. And a study of elementary school children in Korea showed that eight weeks of meditation lowered aggression, social anxiety and stress levels.

These investigations, along with a review published in March that combed the developmental psychology and cognitive neuroscience literature, illustrate how meditative practices have the potential to actually change the structure and function of the brain in ways that foster academic success.

Fundamental principles of neuroscience suggest that meditation can have its greatest impact on cognition when the brain is in its earliest stages of development.

This is because the brain develops connections in prefrontal circuits at its fastest rate in childhood. It is this extra plasticity that creates the potential for meditation to have greater impact on executive functioning in children. Although meditation may benefit adults more in terms of stress reduction or physical rejuvenation, its lasting effects on things like sustained attention and cognitive control are significant but ultimately less robust in adults than in children.

A clinical study published in 2011 in The Journal of Child and Family Studies demonstrates this concept superbly. The research design allowed adults and children to be compared directly, since they were enrolled in the same mindfulness meditation program and assessed identically. Children between 8 and 12 who had A.D.H.D. diagnoses, along with their parents, were enrolled in an eight-week mindfulness-training program. The results showed that mindfulness meditation significantly improved attention and impulse control in both groups, but the improvements were considerably more robust in the children.

Outside of the lab, many parents report on the benefits of early meditation. Heather Maurer of Vienna, Va., who was trained in transcendental meditation, leads her 9-year-old daughter, Daisy, through various visualization techniques and focused breathing exercises three nights a week, and says her daughter has become noticeably better at self-regulating her emotions, a sign of improved cognitive control. “When Daisy is upset, she will sit herself down and concentrate on her breathing until she is refocused,” Ms. Maurer said.

Amanda Simmons, a mother who runs her own meditation studio in Los Angeles, has seen similar improvements in her 11-year-old son, Jacob, who is on the autism spectrum. Jacob also has A.D.H.D. and bipolar disorder, but Ms. Simmons said many of his symptoms have diminished since he began daily meditation and mantra chants six months ago. “The meditation seems to act like a ‘hard reboot’ for his brain, almost instantly resolving mood swings or lessening anger,” Ms. Simmons said. She believes it has enabled him to take a lower dose of Risperdal, an antipsychotic drug used to treat bipolar disorder.

Whether children are on medication or not, meditation can help instill self-control and an ability to focus. Perhaps encouraging meditation and mind-body practices will come to be recognized as being as essential to smart parenting as teaching your child to work hard, eat healthfully and exercise regularly.

To learn some meditation techniques you can teach your child, read Three Ways for Children to Try Meditation at Home.

Likely Radicalized

Friday, October 7th, 2016

Jason Falconer — who works part time for the Avon, Minn., police department and owns a business called Tactical Advantage — won’t face charges for shooting the “radicalized” Somali knife attacker at a Minnesota mall:

Adan, who worked as a security guard at another business and was wearing his uniform, appeared to have been radicalized, and the attack was premeditated, said Richard T. Thornton, special agent in charge of the Federal Bureau of Investigation’s Minneapolis Division.

[...]

During a news conference on Thursday, officials showed multiple videos of the attack. The videos showed Mr. Falconer’s pursuit of the suspect and Adan’s efforts to continue the attack after he had been shot multiple times.

The city received 95 calls to 911 during the incident, which injured 10 people, including a pregnant woman stabbed in the parking lot, officials said.

During the incident, Adan asked several of the victims, including Mr. Falconer, whether they were Muslim, officials said.

Ten shots were fired in all, and six bullets were found in Adan’s body.

Ms. Kendall described several of the stabbings, including a father and son stabbed outside an electronics store. Adan tried to enter a Target and a candy store, both of which had already closed their doors because of the commotion.

She said Mr. Falconer had finished shopping at Bath & Body Works when he heard the commotion and screams in the hallway. As he left the store, he came across Adan, who asked Mr. Falconer if he was Muslim.

Mr. Falconer said no, and Adan turned away from him, the prosecutor said. Mr. Falconer then noticed that Adan had two steak knives in his hands. Mr. Falconer drew his weapon, identified himself as a police officer and ordered Adan to stop.

Instead, Adan ran away toward Macy’s, and Mr. Falconer chased after him, Ms. Kendall said.

Inside Macy’s, Adan ducked behind a clothing rack and then charged at the officer, who fired several shots.

Adan went down, got up and continued to try to attack the officer, first running straight ahead, then turning his back to the bullets while still continuing toward the officer, the prosecutor said.

Witnesses were confused by the sight of Mr. Falconer in plain clothes shooting at the suspect in a security-guard uniform, so Mr. Falconer pulled out his badge, Ms. Kendall said.

When Adan was nearly incapacitated, he tried to get up again, eventually crawling toward the officer and trying to stand up on a display rack.

When Adan finally stopped moving, Mr. Falconer stood and waited for police, who arrived within four minutes, the prosecutor said.

Mr. Thornton, the FBI special agent, said that Adan returned home from work at around 3 p.m. that day but didn’t take off his uniform or take a nap, as he usually did. He told his family he had work to do, even though he wasn’t expected back at work until 10 p.m., Mr. Thornton said. The attack took place around 8 p.m.

Adan stopped by a convenience store shortly before 7 p.m. that evening. When a worker said, “See you later,” Adan replied: “You won’t be seeing me again.”

Any competitive shooters out there must be wondering how he hit just six out of ten times. Also, what was he shooting? And did he have more rounds?

How Everything Became War and the Military Became Everything

Friday, October 7th, 2016

While working at the Pentagon, Rosa Brooks saw How Everything Became War and the Military Became Everything:

The White House wants a surveillance drone to monitor an evolving showdown over human rights in Kyrgyzstan. A member of staff at the National Security Council calls the author, Rosa Brooks, at the Pentagon to tell her to send it on its way. Ms Brooks explains that this is not how the chain of command works in the military. Where would the drone come from? Which job would it no longer be doing? Who was going to pay for it? Whose airspace would it operate from? The incredulous response: “We’re talking about like, one drone. You’re telling me you can’t just call some colonel at CentCom and make this happen?”

The story illustrates two themes in an interesting and worrying book, “How Everything Became War and the Military Became Everything”. The first is the growing tendency of politicians and bureaucrats in Washington to turn to the armed forces when something, almost anything, needs doing. The second, despite or perhaps because of this, is the gulf in understanding that is making civil-military relations increasingly fraught. But Ms Brooks has a wider purpose, which is to examine what happens to institutions and legal processes when the distinctions between war and peace become blurred and the space between becomes the norm, as has happened in America in the decade and a half since the attacks of September 11th 2001.

[...]

What she found [at the Pentagon] is that as the money available for conventional diplomacy and development aid precipitously declines, so the armed forces with their relatively inexhaustible resources are called upon to fill the gap. As one general puts it, the American military is becoming “a Super Walmart with everything under one roof”. Because its culture is proudly can-do, it gets on with the demands made on it without much complaint.

One consequence is that actual fighting has become something that only a small minority of soldiers do. Ms Brooks finds that through the recent, long wars most soldiers have spent their time supervising the building of wells, sewers and bridges, resolving community disputes, working with local police, writing press releases, analysing intelligence and so on. In many ways, Ms Brooks finds this admirable. The problem, she says, is that soldiers are not necessarily the best people to do this kind of work, lacking the inclination, the training or the experience to be much good at it.

The hope in the Pentagon nowadays is that it can return to its core purpose of deterring and preparing for proper, high-tech state-on-state wars. Counter-insurgency and nation-building have fallen out of fashion. Hillary Clinton has recently echoed Barack Obama in promising no “boots on the ground” in Iraq (despite the fact that there are about 5,000 pairs of them there and twice as many in Afghanistan). The reality is that you do not always get to choose the kind of wars you fight or how you fight them.