Riptides

Saturday, January 24th, 2015

Riptides are dangerous currents that drag people away from shore — but they also drag people back to shore:

Using GPS devices, he’s tracked them, and found that they circulate, coming back towards land. The accepted wisdom on what to do if you’re caught in a riptide is to swim parallel to the shore to get away from the current, but Professor Jamie MacMahan calculated that that gives you a good chance of being swept out to sea. Treading water for three and a half minutes yields a 90% chance of being brought back to shore.

Collapsing Capitals

Friday, January 23rd, 2015

Ancient capital cities often grew for centuries, reached a Golden Age, and then collapsed rapidly:

Angkor was flourishing in the late 13th century when Zhou Daguan visited; a little over a century later, it was all but abandoned. Researchers are beginning to see similarities in how these ancient low-density cities failed — and this is of particular interest today because, even as our cities grow in extent and population, their densities are falling.

[...]

There had long been a debate about what led to the decline of Angkor and the southward move of the Khmer seat of power. Proposed explanations included the strain on theocratic rule of Hindu-Buddhist jostling; attacks by Thai armies; and changes brought about by maritime trade. But the Greater Angkor Project added a significant new possibility: extreme climate instability. Analysis of tree rings in neighbouring Vietnam showed long periods of droughts followed by periods of unusually wet monsoons in the 14th and 15th centuries.

The upheaval caused by flooding during mega-monsoons is clearly visible in remote sensing images produced by the project: erosion channels show rapidly moving water breaching a dam, crashing into the wall of a reservoir, then tearing away the edge of a residential area, flowing at a high level through housing, and later damaging a bridge. Perhaps the scenes in Angkor were not very different from those seen in recent years in New Orleans or Fukushima.

Sand accumulated in Angkor’s canals, and parts of the water network were cut off from each other. Damage to an old, complex water management system meant the city became less resilient in intervening periods of drought. Angkor, with its large population and broken infrastructure, would have found it hard to sustain itself.

The pattern of urbanism at Angkor was hardly unique: the Mayan cities that Pottier’s maps of Angkor reminded Fletcher of have long been recognised as low-density agrarian settlements. The lack of the wheel and the absence of draught animals meant that large quantities of food could not be transported, and cities had to be largely self-sufficient, growing maize, varieties of beans, squash, manioc and other staples of the region.

The city of Tikal, in present-day Guatemala, was one of the most important of these Mayan centres. In what is called its Late Classic Period, around 600 AD, there was a flowering of art and architecture: large plazas, palaces, pyramid temples, sculpture and painted ceramics (of the many structures still found in Tikal, a 65-metre-high pyramid is one of the tallest man-made structures in all pre-Columbian America). Conservative estimates put the city’s population at around 45,000 during this period; the city extended over 160 square kilometres. Then, in the middle of the ninth century, Tikal collapsed.

Tikal

Originally, the area of Tikal was around 70% upland tropical rainforest, and the rest swampy wetland. An extended family would build their houses in a cluster, with cultivable land attached. In all, the people of Tikal cleared around two-thirds of the rainforest to create their monuments and homes, and to fuel their fires. “In many ways they were managing the forest very effectively,” says Lentz. “But they weren’t aware that cutting down a forest reduces the amount of precipitation in the region. Then suddenly a horrible drought comes along, and they can’t figure out why they can’t supplicate their gods adequately to prevent it.”

It didn’t help that Tikal’s water management system had become increasingly reliant on collecting rainwater in reservoirs, at the cost of groundwater. “As Tikal grew and grew,” Lentz says, “they created all these pavements around the city, from which they’d divert water to the reservoirs. But this cut off the recharge capacity of the springs. When there was no longer any rainfall to fill up their reservoirs, the springs had dried up too.”

For centuries, the Maya at Tikal had been erecting stelae — upright stone slabs with hieroglyphs and depictions of gods and rulers. The last one is dated 869. Soon after, there are signs of what might today be called urban decay, with palaces being occupied by squatters. Charred, gnawed human bones from this late period suggest desperate times. Then, the city went quiet.

[...]

Lentz draws a comparison with a neighbouring city called El Zotz, which had a smaller population and didn’t modify its landscape as drastically, and was thus able to survive the drought that felled Tikal.

[...]

Tikal, Angkor and Anuradhapura (which foundered in the 10th century after thriving for more than a millennium) were very different cities in their geography, environment and social and political functioning. But, Fletcher points out, they all had operational similarities: extensive land clearance, sprawling low-density settlement patterns, massive infrastructure — all of which are attributes of modern cities. The extended infrastructure of Angkor and Tikal proved vulnerable to a changing climate, something else that may be upon us.

Mental Fatigue

Thursday, January 15th, 2015

Mental fatigue leads to physical fatigue, so mental training can improve physical performance:

In the twelve-week study, two groups of fourteen soldiers each trained on stationary bikes. The first half trained three times a week for one hour at a moderate aerobic pace. The second half did exactly the same intensity of training for the same duration, so the physiological work was the same. But while this second group pedaled, they were also doing a mentally fatiguing task.

The results at the end of the study were mind-blowing. The two groups saw similar increases in their VO2 max, meaning the physiological effects of the training were about the same. But when you asked them to do what’s called a “time to exhaustion” test, in which they rode at a specific percentage of their VO2 max until they couldn’t go on, the differences were profound. The control group saw the time to exhaustion improve 42 percent from their results before the training started. The group that combined training with mental exercise saw an improvement of 115 percent, almost three times the improvement that the control group saw. Combining the physical and mental stress led to a quantum leap in performance.

From Faster, Higher, Stronger, by Mark McClusky.

Inflammaging

Monday, January 12th, 2015

As we get older, our immune systems begin to malfunction, leading to inflammaging:

This condition is characterized by increased production of inflammatory cytokines, as well as lower immune function. Cortisol is produced to counteract the inflammation, and this has deleterious consequences as well.

Inflammaging and sarcopenia — loss of muscle mass — are closely linked:

It turns out that a number of things can be done to counteract sarcopenia. A recent study found, for instance, that old rats given ibuprofen had their anabolic resistance abolished and their muscle mass restored to levels seen in younger rats. Their levels of muscle protein synthesis rose by 25%. The authors of the study make clear the connection between inflammation and anabolic resistance, noting that “inflammatory markers and cytokines levels were significantly improved in treated old rats”.

That may all be well and good, but does this work in humans? In Influence of acetaminophen and ibuprofen on skeletal muscle adaptations to resistance exercise in older adults, the researchers put older adults (mid-sixties) on a resistance training program, and gave two groups of them either acetaminophen or ibuprofen. What happened next will shock you: those on the anti-inflammatory drugs gained more muscle and more strength than controls.

That did in fact shock me, because I’d read previously that anti-inflammatories reduced or eliminated the body’s response to resistance training — no pain, no gain.

In Faster, Higher, Stronger, Mark McClusky has more to say about NSAIDs:

Athletes love these drugs. A study of players in the 2002 and 2006 soccer World Cup found that more than half of them took an NSAID during the tournament. Ten percent of players overall were taking them before every match — on one squad, twenty-two of the twenty-three players were doing so. In endurance sports, ibuprofen use is so prevalent — up to half of competitors in one popular ultramarathon race took ibuprofen during the run — that it’s often known as “vitamin I.”

There are a couple of problems with this type of widespread use. The first is that taking ibuprofen before an event doesn’t help with performance. In fact, there have been studies that have shown that cyclists perform about 4.2 percent worse in a ten-mile time trial when they’ve taken ibuprofen before the effort as compared to a placebo. Furthermore, animal studies have shown that taking ibuprofen during training can lead to a reduction in the benefits you get from it — even if you increase your training volume, you don’t get the same results as you would without the ibuprofen. Ibuprofen seems to, paradoxically, increase the amount of inflammation seen in the body during exercise. And then there are the problems that chronic ibuprofen use can cause with the liver and gastrointestinal system.

[...]

Acetaminophen might be a different story, however. First of all, the drug operates differently than ibuprofen and other NSAIDs. It isn’t a strong anti-inflammatory, so it doesn’t have the same negative effects on training adaptation that ibuprofen does. More interesting, however, are the possible effects that acetaminophen might have if you take it before you exercise.

A study at the University of Exeter took a group of thirteen well-trained cyclists, gave them either a placebo or 1,500 mg of acetaminophen, and asked them to ride a ten-mile time trial. After taking the drug, riders were 2 percent faster than those who had gotten the placebo. But that’s not all. When the riders had taken acetaminophen, they rode at a higher heart rate and produced more lactate, but had the same perception of effort as when they took the placebo. That’s to say, they rode harder, but it didn’t feel like it.

The lab, led by Alexis Mauger, has gone on to show that acetaminophen also provided a group of recreational cyclists with an increase in sprint performance on the order of 5 percent, mostly because repeated sprints didn’t suffer as large a drop in performance as without the drug. And they have also shown that acetaminophen increases performance in hot (86 degrees Fahrenheit) conditions, by helping keep the riders’ core temperatures lower due to the drug’s antipyretic effects. The riders didn’t just feel cooler as they exercised; their bodies actually stayed cooler during the effort.

Bad Policies Based on Fragile Science

Sunday, January 11th, 2015

Bold policies have been based on fragile science, and the long-term results may be terrible, Richard Smith says — speaking of diets:

By far the best of the books I’ve read to write this article is Nina Teicholz’s The Big Fat Surprise, whose subtitle is “Why butter, meat, and cheese belong in a healthy diet.” The title, the subtitle, and the cover of the book are all demeaning, but the forensic demolition of the hypothesis that saturated fat is the cause of cardiovascular disease is impressive. Indeed, the book is deeply disturbing in showing how overenthusiastic scientists, poor science, massive conflicts of interest, and politically driven policy makers can make deeply damaging mistakes. Over 40 years I’ve come to recognise what I might have known from the beginning: that science is a human activity with the error, self deception, grandiosity, bias, self interest, cruelty, fraud, and theft that is inherent in all human activities (together with some saintliness), but this book shook me.

Teicholz begins her examination by pointing out that the Inuit, the Masai, and the Samburu people of Uganda all originally ate diets that were 60-80% fat and yet were not obese and did not have hypertension or heart disease.

The hypothesis that saturated fat is the main dietary cause of cardiovascular disease is strongly associated with one man, Ancel Benjamin Keys, a biologist at the University of Minnesota. He was clearly a remarkable man and a great salesman, described by his colleague Henry Blackburn (whom I’ve had the privilege to meet) as “possessing a very quick, bright intelligence” but also “direct to the point of bluntness, and critical to the point of skewering.”

Keys launched his “diet-heart hypothesis” at a meeting in New York in 1952, when the United States was at the peak of its epidemic of heart disease, with his study showing a close correlation between deaths from heart disease and proportion of fat in the diet in men in six countries (Japan, Italy, England and Wales, Australia, Canada, and the United States). Keys studied few men and did not have a reliable way of measuring diets, and in the case of the Japanese and Italians he studied them soon after the second world war, when there were food shortages. Keys could have gathered data from many more countries and people (women as well as men) and used more careful methods, but, suggests Teicholz, he found what he wanted to find. A subsequent study by other researchers of 22 countries found little correlation between death rates from heart disease and fat consumption, and these authors suggested that there could be other causes, including tobacco and sugar consumption.

At a World Health Organization meeting in 1955 Keys’s hypothesis was met with great criticism, but in response he designed the highly influential Seven Countries Study, which was published in 1970 and showed a strong correlation between saturated fat (Keys had moved on from fat to saturated fat) and deaths from heart disease. Keys did not select countries (such as France, Germany, or Switzerland) where the correlation did not seem so neat, and in Crete and Corfu he studied only nine men. Critics pointed out that although there was a correlation between countries, there was no correlation within countries and nor was there a correlation with total mortality. Furthermore, although the study had 12,770 participants, the food they ate was evaluated in only 3.9%, and some of the studies in Greece were during Lent, when the Greek Orthodox Church proscribes the eating of animal products. A follow-up study by Keys published in 1984 showed that variation in saturated fat consumption could not explain variation in heart disease mortality.

An analysis of the data from the Seven Countries Study in 1999 showed a higher correlation of deaths from heart disease with sugar products and pastries than with animal products. John Yudkin from London had since the late 1950s proposed that sugar might be more important than fat in causing heart disease, but Keys dismissed his hypothesis as a “mountain of nonsense” and a “discredited tune.” Many scientists were sceptical about the saturated fat hypothesis, but as the conviction that the hypothesis was true gripped the leading scientific bodies, policy makers, and the media in the US these critics were steadily silenced, not least through difficulty getting funding to challenge the hypothesis and test other hypotheses.
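
The statistical criticism running through this passage is that a hand-picked subset of countries can show a tight correlation that largely vanishes in fuller data. Here is a minimal sketch in Python, using entirely invented numbers (none of this is Keys’s data or the later 22-country data), showing how readily a flattering six-country subset can be found:

    from itertools import combinations

    import numpy as np

    # Invented data for 22 hypothetical countries: by construction, fat intake
    # is only weakly related to heart-disease mortality across the full set.
    rng = np.random.default_rng(0)
    n = 22
    fat = rng.uniform(10, 40, n)                 # % of calories from fat (made up)
    mortality = 0.2 * fat + rng.normal(0, 5, n)  # weak signal buried in noise

    all_r = np.corrcoef(fat, mortality)[0, 1]

    # What a motivated analyst could do: keep only the 6 countries that line up best.
    best_r = max(
        np.corrcoef(fat[list(idx)], mortality[list(idx)])[0, 1]
        for idx in combinations(range(n), 6)
    )

    print(f"correlation across all 22 countries:    {all_r:+.2f}")
    print(f"best correlation in a 6-country subset: {best_r:+.2f}")

With a weak underlying relationship, the best-looking six-country subset routinely shows a near-perfect correlation, which is why the broader comparisons and the within-country checks described above carry so much weight.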

The Fantastic Fur of Sea Otters

Friday, January 9th, 2015

How do sea otters stay lean yet keep warm? Through their fantastic fur, KQED’s Deep Look explains.

Winter Never Comes

Thursday, January 8th, 2015

The Metabolic Winter hypothesis suggests that obesity is partly due to lack of exercise, but mostly due to chronic overnutrition and chronic warmth:

Seven million years of human evolution were dominated by two challenges: food scarcity and cold. “In the last 0.9 inches of our evolutionary mile,” they write, pointing to the fundamental lifestyle changes brought about by refrigeration and modern transportation, “we solved them both.” Other species don’t exhibit nearly as much obesity and chronic disease as we warm, overfed humans and our pets do. “Maybe our problem,” they continue, “is that winter never comes.”

Political Diversity Will Improve Social Psychological Science

Tuesday, January 6th, 2015

Political diversity will improve social psychological science, some (daring) social psychologists suggest:

Psychologists have demonstrated the value of diversity — particularly diversity of viewpoints — for enhancing creativity, discovery, and problem solving. But one key type of viewpoint diversity is lacking in academic psychology in general and social psychology in particular: political diversity. This article reviews the available evidence and finds support for four claims: 1) Academic psychology once had considerable political diversity, but has lost nearly all of it in the last 50 years; 2) This lack of political diversity can undermine the validity of social psychological science via mechanisms such as the embedding of liberal values into research questions and methods, steering researchers away from important but politically unpalatable research topics, and producing conclusions that mischaracterize liberals and conservatives alike; 3) Increased political diversity would improve social psychological science by reducing the impact of bias mechanisms such as confirmation bias, and by empowering dissenting minorities to improve the quality of the majority’s thinking; and 4) The underrepresentation of nonliberals in social psychology is most likely due to a combination of self-selection, hostile climate, and discrimination. We close with recommendations for increasing political diversity in social psychology.

I enjoyed this passage:

Fourth, we note for the curious reader that the collaborators on this article include one liberal, one centrist, two libertarians, one whose politics defy a simple left-right categorization, and one neo-positivist contrarian who favors a don’t-ask-don’t-tell policy in which scholarship should be judged on its merits. None identifies as conservative or Republican.

(Hat tip to Bryan Caplan.)

Ebola’s Reservoir Host

Tuesday, January 6th, 2015

The Ebola virus only occasionally spills over into humans from its reservoir host — but what is Ebola’s reservoir host?

Surveying wildlife in forests [near the borders of Liberia and Ivory Coast], the scientists found no evidence of a die-off among larger animals, such as duikers, monkeys, and chimpanzees, that are also susceptible to Ebola. This suggested that perhaps the virus had spilled over directly from its reservoir host into humans, without passing through other animals hunted or scavenged for food.

The team then focused on a village called Méliandou, in Guinea — the index village, where the human outbreak began. A young boy, Emile Ouamouno, was the earliest known victim. He died with Ebola-like symptoms in Méliandou back in December 2013, followed soon by his mother, sister, and grandmother. No adult males died in the first wave of the outbreak, another clue that seemed to point away from hunted wildlife as the origin of the virus.

During eight days in Méliandou, Leendertz’s team gathered testimony from survivors and collected samples, including blood and tissues from captured bats. From these data emerged the new hypothesis: Maybe the reservoir host was a bat, yes — but a very different sort of bat, in a different ecological relationship with humans.

While fruit bats are abundant in southeastern Guinea, they don’t roost in large aggregations near Méliandou. But the village did harbor a sizable number of small, insectivorous bats, which roosted under the roofs of houses and in natural recesses, such as hollow trees. The locals call them lolibelo.

“These bats are reportedly targeted by children,” the new paper recounts, “who regularly hunt and grill them over small fires.” Imagine a marshmallow roast, except the marshmallows are mouse-size bats devoured by protein-hungry children.

Dissected Angolan Free-Tailed Bat

The researchers then uncovered another clue: a large hollow tree, which had recently been set afire, producing as it burned what someone recalled as “a rain of bats.” Leendertz’s team collected soil samples at the base of that tree, which eventually yielded traces of DNA assignable to Mops condylurus, commonly called the Angolan free-tailed bat.

That species matched the villagers’ descriptions of lolibelo. What’s more, the big hollow tree had reportedly been a favorite play spot for small children of the village, including the deceased little boy, despite — or perhaps because of — the fact that it was full of little bats.

Heroes

Wednesday, December 31st, 2014

Only a few soldiers do most of the fighting, David Grossman (On Killing) notes, and it seems to be in their nature:

Swank and Marchand’s World War II study noted the existence of 2 percent of combat soldiers who are predisposed to be “aggressive psychopaths” and apparently do not experience the normal resistance to killing or the resultant psychiatric casualties associated with extended periods of combat. The negative connotation associated with the term “psychopath” or its modern equivalent, “sociopath,” is inappropriate here, since this behavior is a generally desirable one for soldiers in combat, but there does seem to be some foundation for a belief that a very small percentage of all combatants are doing a tremendously disproportionate amount of the killing.

[...]

The presence of aggression, combined with the absence of empathy, results in sociopathy. The presence of aggression, combined with the presence of empathy, results in an individual completely different from the sociopath.

One veteran I interviewed told me that he thought of most of the world as sheep: gentle, decent, kindly creatures who are essentially incapable of true aggression. In this veteran’s mind there is another human subspecies (of which he was a member) that is a kind of dog: faithful, vigilant creatures who are very much capable of aggression when circumstances require. But, according to his model, there are wolves (sociopaths) and packs of wild dogs (gangs and aggressive armies) abroad in the land, and the sheepdogs (the soldiers and policemen of the world) are environmentally and biologically predisposed to be the ones who confront these predators.

[...]

Some may think of them as sheepdogs, and that is a good analogy, but I prefer another term, another analogy. There is a model, an “archetype,” which, according to Jung, exists deep in the “collective unconscious” — an inherited, unconscious reservoir of images derived from our ancestors’ universal experiences and shared by the whole human race. These powerful archetypes can drive us by channeling our libidinal energy. They include such Jungian concepts as the mother, the wise old man, and the hero. I think that Jung might refer to these people as heroes, not as sheepdogs.

According to Gwynne Dyer (War), United States Air Force research concerning aggressive killing behavior determined that 1 percent of USAF fighter pilots in World War II did nearly 40 percent of the air-to-air killing, and the majority of their pilots never even tried to shoot anyone down. This 1 percent of World War II fighter pilots, Swank and Marchand’s 2 percent, Griffith’s low Napoleonic and Civil War killing rates, and Marshall’s low World War II firing rates can all be at least partially explained if only a small percentage of these combatants were actually willing to actively kill the enemy in these combat situations. Call them sociopaths, sheepdogs, or heroes as you please, but they exist, they are a distinct minority, and in time of war our nation needs them desperately.

Sports and Creativity

Monday, December 29th, 2014

Researchers explored the relationship between childhood leisure activities and creativity in young adults, and the results were stark:

Time spent playing informal sports was significantly and positively related to overall creativity, while time spent playing organized sports was significantly and negatively related to overall creativity.

Perhaps even more interestingly, the difference between those participants whose scores placed them into the “above-average” creativity bracket and the rest of the sample was only about two hours per week of unstructured sport participation throughout their school-age years.

What could account for such distal results? On a theoretical (and, frankly, intuitive) level, informal sports played in unstructured, unsupervised environments capture many of the elements that are linked with the developmental benefits of play for children. These environments offer children the freedom to self-govern, create rules, problem-solve and resolve social conflicts on their own terms.

Organized sports, on the other hand, tend to replicate hierarchical and militaristic models aimed at obedience, replication, adherence to authority, and a number of other qualities that, on a theoretical level, would be unlikely to be conducive to creative development.

[...]

Perhaps the single most intriguing finding from our analysis was the fact that those individuals whose scores on the creativity assessment identified them as “above-average” were not children who eschewed organized sports in favor of the activities we traditionally associate with creativity (art, music, theater, etc.). Instead, the respondents with “above-average” creativity simply appeared to strike more of a balance between their time spent in organized and unstructured sport settings.

In fact, those scoring in the “above-average” creativity bracket reported spending 15% of their total childhood leisure time playing informal sports versus 13% playing organized sports. The participants with “below-average” creativity, on the other hand, spent only 10% of their childhood leisure time playing informal sports versus 22% in organized sports.

The Demands of Authority

Monday, December 29th, 2014

The powerful resistance to killing fellow human beings, David Grossman (On Killing) argues, can be overcome through the demands of authority:

The mass needs, and we give it, leaders who have the firmness and decision of command proceeding from habit and an entire faith in their unquestionable right to command as established by tradition, law and society.
— Ardant du Picq

In Milgram’s study the demands of authority were represented by an individual with a clipboard and a white lab coat. This authority figure stood immediately behind the individual inflicting shocks and directed that he increase the voltage each time the victim answered a series of (fake) questions incorrectly. When the authority figure was not personally present but called over a phone, the number of subjects who were willing to inflict the maximum shock dropped sharply. This process can be generalized to combat circumstances and operationalized into the following sub-factors:

Proximity of the authority figure to the subject. Marshall noted many specific World War II incidents in which almost all soldiers would fire their weapons while their leaders observed and encouraged them in a combat situation; when the leaders left, however, the firing rate immediately dropped to 15 to 20 percent.

Killer’s subjective respect for authority figure. To be truly effective, soldiers must bond to their leader just as they must bond to their group. Compared to an established and respected leader, an unknown or discredited leader has much less chance of gaining compliance from soldiers in combat.

Intensity of the authority figure’s demands for killing behavior. The leader’s mere presence is not always sufficient to ensure killing activity. The leader must also communicate a clear expectancy of killing behavior.

Legitimacy of the authority figure’s authority and demands. Leaders with legitimate, societally sanctioned authority have greater influence on their soldiers; and legitimate, lawful demands are more likely to be obeyed than illegal or unanticipated demands. Gang leaders and mercenary commanders have to work carefully around their shortcomings in this area, but military officers (with their trappings of power and the legitimate authority of their nation behind them) have tremendous potential to cause their soldiers to overcome individual resistance and reluctance in combat.

Distance from the Victim

Sunday, December 28th, 2014

There is a powerful resistance in most individuals to killing their fellow human beings, David Grossman (On Killing) argues, but it can be overcome through a number of factors, including distance from the victim.

Grossman starts with physical distance:

To fight from a distance is instinctive in man. From the first day he has worked to this end, and he continues to do so.
— Ardant du Picq

The physical distance between the actual aggressor and the victim was created in Milgram’s studies by placing a barrier between the subject and the individual he was shocking. This same process can be generalized to and observed in historical combat circumstances, as portrayed in Figure 3. John Keegan in The Face of Battle notes that “only a fraction of one percent of all wounds” at the Battle of the Somme in World War I were inflicted with edged weapons — and most of those in the back. Interviews and research reveal countless incidents in which combatants confronted with an enemy soldier at close range did not fire, but when faced with an enemy who could be attacked with a hand grenade, or who could be engaged at medium range or long range, the incidence of nonfiring behavior goes down significantly. At the greatest range, among high-altitude bombers or artillery crews, incidents of refusal to fire are extraordinarily rare.

Units with a history and tradition of close-combat, hand-to-hand killing inspire special dread and fear in an enemy by capitalizing upon this natural aversion to the “hate” manifested in this determination to engage in close-range interpersonal aggression. The British Gurkha battalions have been historically effective at this (as can be seen in the Argentineans’ dread of them during the Falklands War), but any unit that puts a measure of faith in the bayonet has grasped a little of the natural dread with which an enemy responds to the possibility of facing an opponent determined to come within “skewering range.”

What these units (or at least their leaders) must understand is that actual “skewering” almost never happens; but the powerful human revulsion to the threat of such activity, when confronted with superior posturing represented by a willingness or at least a reputation for participation in close-range killing, has a devastating effect upon the enemy’s morale. This powerful revulsion to being killed with cold steel could be observed when mutinous Indian soldiers captured during the Sepoy Mutiny “begged for the bullet,” pleading to be executed with a rifle shot rather than the bayonet.

The combination of closeness with uncertainty (especially at night) helps explain why flank and rear attacks shatter the enemy’s will to fight. The assumption that the enemy is very close raises the level of uncertainty. This closeness and uncertainty combine and conspire with the darkness’ lack of mutual surveillance in such a manner as to erode and destroy the enemy’s will to fight.

Emotional distance also matters:

Combat at close quarters does not exist. At close quarters occurs the ancient carnage when one force strikes the other in the back.
— Ardant du Picq

One of the more interesting processes to occur in the area of emotional distance is the psychological leverage gained by not having to see the victim’s face. Israeli research has determined that hooded hostages and blindfolded kidnapping victims have a significantly greater chance of being killed by their captors. This demonstrates the difficulty associated with killing an individual whose face you can see, even when that individual represents a significant threat by being able to later identify you in court.

This same enabling process explains why Nazi, communist, and gangland executions are traditionally conducted with a bullet in the back of the head, and individuals being executed by hanging or firing squad are traditionally blindfolded or hooded. Not having to look at the face of the victim provides a form of psychological distance which enables the execution party and assists in their subsequent denial and/or rationalization and acceptance of having killed a fellow human being.

In combat the enabling value of psychological distance can be observed in the fact that casualty rates increase significantly after the enemy forces have turned their backs and begin to flee. Clausewitz and du Picq both expound at length on the fact that the vast majority of casualties in historical battles were inflicted upon the losing side during the pursuit that followed the victory. In this vein du Picq holds out the example of Alexander the Great, whose forces, during all his years of warfare, lost fewer than 700 men “to the sword.” They suffered so few casualties simply because they never lost a battle and therefore had to endure only the very minor casualties inflicted by reluctant combatants in close combat and never had to suffer the very significant losses associated with being pursued by a victorious enemy.

The killing during the pursuit has also traditionally been conducted by cavalry, chariot, or tank units, and these have their own form of psychological distance, which enables their killing activity. In combat a good horseman becomes one with his mount and is transformed into a remarkable new species. He is no longer a man, but is instead a ten-foot tall, half-ton, four-legged, centaur-like “pseudospecies” that has no hesitation to slay the lesser creatures that scurry about beneath him — especially if these lesser beings are being pursued and have their backs turned.

Emotional distance also includes:

  • Cultural distance, such as racial and ethnic differences, which permits the killer to dehumanize the victim.
  • Moral distance, which takes into consideration the kind of intense belief in moral superiority and vengeful/vigilante actions associated with many civil wars.
  • Social distance, which considers the impact of a lifetime of practice in thinking of a particular class as less than human in a socially stratified environment.
  • Mechanical distance, which includes the sterile “Nintendo Game” unreality of killing through a TV screen, a thermal sight, a sniper sight, or some other kind of mechanical buffer that permits the killer to deny the humanity of his victim.

A Profoundly Traumatic Experience

Saturday, December 27th, 2014

Killing in close combat is a profoundly traumatic experience, David Grossman (On Killing) claims:

Years of research in this field have convinced me that there is a powerful resistance in most individuals to killing their fellow human beings. I have become equally convinced that there is a set of circumstances and pressures that can cause most human beings to overcome this resistance.

The factors that overcome this resistance are the same factors Milgram found in his infamous electric-shock experiments:

  1. Distance from the Victim
  2. Demands of Authority
  3. Group Absolution

Psyched Up

Friday, December 26th, 2014

“Be it agreeable or terrible, the less something is foreseen, the more does it cause pleasure or dismay. This is nowhere better illustrated than in war where every surprise strikes terror even to those who are much stronger,” Xenophon says.

Human beings generally need to be emotionally prepared in order to engage in aggressive behavior, David Grossman (On Killing) says:

The combat soldier, in particular, needs to be “psyched up” for a confrontation. An attack launched at a time and place when the soldier thought he was safe takes advantage of the stress of uncertainty, destroys his sense of being in control of his environment, and greatly increases the probability that he will opt for flight (i.e., a rout) or submission (i.e., mass surrender). A highly mobile, fluid enemy who can launch surprise attacks in what the enemy believes is his rear area is particularly daunting and confusing, and the presence of such interpersonal hostility can be disproportionately destructive to the will to fight.

Viewed in another way, attacking at an unexpected and unprepared location results in the defender’s inability to orient himself. The defender’s observation-orientation-decision-action cycle, or his “OODA Loop,” has thus been stalled, and he cannot respond. Having been caught off balance, the defender panics and attempts to gain time by fleeing, or simply submits by surrendering in confusion to his assailant.

Psychological research in the area of information processing and human decision making has established a broad base of understanding of normal psychological responses to an “information overload” environment. As too much information comes in, the typical reaction is to fall back initially on heuristic, or “rule of thumb,” responses. These heuristic responses involve processes such as: “anchoring” on early information to the exclusion of later, possibly conflicting, or more accurate data; making decisions based on their “availability” or the ease with which a particular response comes to mind (e.g., repeating a recently executed maneuver); or falling into a “confirmation bias” in which only information that confirms or supports the current working hypothesis is processed and contrary information is filtered out of consciousness. If these heuristic responses fail (as they are quite likely to), then the normal human response is to become trapped into a “cascading effect” in which he reacts with increasingly inappropriate actions and either fails completely (i.e., is destroyed by the enemy) or completely stops trying and falls into a paralyzed state sometimes referred to by psychologists as “learned helplessness” but always referred to by soldiers as “surrender.”

A classical example of this kind of maneuver warfare operation can be observed in Nathan Bedford Forrest’s campaign against William Tecumseh Sherman’s forces during Sherman’s march to the sea in the American Civil War. Forrest, with only a few thousand cavalry, forced Sherman to leave more than 80,000 men to guard his supply centers and his 340-mile-long supply line. On several occasions Forrest fell on unprepared units three times his size and inflicted disproportionate casualties upon his hapless enemies. His primary weapon was surprise. The rear-echelon units he was attacking were not humanly capable of maintaining a fighting pitch at all times, while Forrest’s troops entered battle having already attained “morale superiority” since they had plenty of time to prepare themselves emotionally prior to launching their surprise attacks.