Lone Survivor

October 24th, 2014

When the Lone Survivor movie came out, I read up on Operation Red Wings, but I only just got around to watching the movie.

The original plan kicked off with a six-man team of Marine Scout Snipers walking in under cover of darkness, but, because SOF air elements were going to be involved in later stages, that turned into a four-man team of SEALs inserting by helicopter — something the original planners thought would compromise the mission by revealing coalition presence in this area.

The movie depicts all the SEALs as fully kitted out and visibly encumbered, but they’re not wearing helmets, and it’s not clear that they’re wearing body armor, either. Two are armed with suppressed sniper rifles, while the other two have carbines with grenade launchers.

They make their way to a decent vantage point from which they can spot and identify their (surveillance) target, Ahmad Shah, and his surprisingly large “army” of fighters.

They’re just out of rifle range and aren’t on a mission to take out Shah themselves. One of the spotters asks, “You make that shot?” and the sniper replies, “Negative. Wouldn’t have authority anyway.”

(I couldn’t help but wonder, what could four designated marksmen, all armed with higher-caliber semi-auto rifles, do to a few dozen insurgents caught in the open like that, before they could respond?)

In the mountainous, wooded terrain, the SEALs have “comms” problems and can’t report back their findings, call in support, or request an extraction — but they came expecting comms problems, so they don’t panic. On the other hand, they don’t seem to have a solid plan for handling the local situation without support.

In particular, they don’t seem to have a solid plan for handling a few locals stumbling upon their position. No one on the team speaks the local language, and no one has a plan for dealing with semi-hostile locals. When you don’t have a plan, you don’t make good decisions. I’m not sure what a good decision would have been, but both shooting the locals and letting them go have obvious downsides. I suppose they didn’t bring zip-ties? Paracord? A few extra hours could have made a big difference. Letting the enemy know you’re there, and that there are only four of you, seems like something you should put off as long as possible.

By the time the pursuers catch up to them, the SEALs are deep in rough, wooded terrain — where they do not have clear lines of sight for long-range shots.

One thing the movie drives home is just how physical modern combat can be. The SEALs take a beating from scrambling through the rocky, wooded terrain, take some terrible falls, and then, on top of that, get cut to pieces by fragments from RPGs, mortar bombs, ricochets, etc. And then they actually get shot. The through-and-through shots to the arms and legs don’t seem to slow them down much, but it all adds up.

If the film starts to feel more “Hollywood” by the end, that’s because it diverges from the book — and reality.

If you “enjoyed” Black Hawk Down, you should see Lone Survivor, too.

Sergeant-at-Arms Kevin Vickers receives standing ovation

October 24th, 2014

I was watching Sergeant-at-Arms Kevin Vickers receive a standing ovation, thinking, you could not create a more fitting conservative hero — silver-haired sergeant, in traditional costume, doing his duty — when I heard the announcer mention that Vickers “found his weapon in his office” before gunning down the Muslim extremist attacking Parliament. Might I suggest having the Sergeant-at-Arms armed — with more than a mace?

Get There First

October 24th, 2014

It took five or six Shermans to take out a single Tiger tank — or did it?

Examining 98 engagements in the Ardennes, Army researchers discovered something rather interesting.

The study concluded that the single most important factor in tank-versus-tank fighting was which side spotted the enemy first, engaged first and hit first. This gave the defender a distinct advantage, since the defending tanks were typically stationary in a well-chosen ambush position. …

The side that saw first and hit first usually had the advantage in the first critical minute … the overall record suggests that the Sherman was 3.6 times more effective than the Panther … popular myths that Panthers enjoyed a 5-to-1 kill ratio against Shermans or that it took five Shermans to knock out a Panther have no basis in historical records. The outcome of tank-versus-tank fighting was more often determined by the tactical situation than the technical situation.

Since the Shermans were more numerous and mechanically reliable, they typically got to the key terrain first. They kept going whereas the Panthers and Tigers could only road march short distances from their transporters and railheads. Thus, in most engagements the Shermans could get set up because there were so many of them and they tended to run reliably.

If there was a hill to be grabbed, a road to be blocked, the Shermans would get there first. By contrast, the German tanks were mechanically fragile. For all their power, they were, on average, late to the party. Therefore, on a fluid battlefield the Shermans would almost always arrive first on the key terrain and bushwhack the panzers.

None of the experts are experts

October 23rd, 2014

There are whole fields in which none of the experts are experts, Gregory Cochran notes:

At the high point of Freudian psychoanalysis in the US,  I figure that a puppy had a significantly positive effect on your mental health, while the typical psychiatrist of the time did not.  We (the US) listened to psychologists telling us how to deal with combat fatigue: the Nazis and Soviets didn’t, and had far less trouble with it than we did.

Fidel Castro, a jerk,  was better at preventive epidemiology (with AIDS) than the people running the CDC.

In the 1840s, highly educated doctors knew that diseases were not spread by contagion, but old ladies in the Faeroe Islands (along with many other people) knew that some were.

In 2003, the ‘experts’ (politicians, journalists, pundits, spies) knew that Saddam had a nuclear program, but the small number of people that actually knew anything about nuclear weapons development and something about Iraq (at the World Almanac level, say) knew that wasn’t so.

The educationists know that heredity isn’t a factor in student achievement, and they dominate policy — but they’re wrong.  Some behavioral geneticists and psychometricians know better.

In many universities, people were and are taught that there really are no cognitive or behavioral differences between the sexes — in part because of ‘experts’ like John Money.  Anyone with children tends to learn better.

Infected by Politics

October 23rd, 2014

The public-health establishment has been infected by politics, Heather Mac Donald explains:

The public-health establishment has unanimously opposed a travel and visa moratorium from Ebola-plagued West African countries to protect the U.S. population. To evaluate whether this opposition rests on purely scientific grounds, it helps to understand the political character of the public-health field. For the last several decades, the profession has been awash in social-justice ideology. Many of its members view racism, sexism, and economic inequality, rather than individual behavior, as the primary drivers of differential health outcomes in the U.S. According to mainstream public-health thinking, publicizing the behavioral choices behind bad health—promiscuous sex, drug use, overeating, or lack of exercise—blames the victim.

The Centers for Disease Control and Prevention’s Healthy Communities Program, for example, focuses on “unfair health differences closely linked with social, economic or environmental disadvantages that adversely affect groups of people.” CDC’s Healthy People 2020 project recognizes that “health inequities are tied to economics, exclusion, and discrimination that prevent groups from accessing resources to live healthy lives,” according to Harvard public-health professor Nancy Krieger. Krieger is herself a magnet for federal funding, which she uses to spread the message about America’s unjust treatment of women, minorities, and the poor. To study the genetic components of health is tantamount to “scientific racism,” in Krieger’s view, since doing so overlooks the “impact of discrimination” on health. And of course the idea of any genetic racial differences is anathema to Krieger and her left-wing colleagues.

Super-Intelligent Humans Are Coming

October 23rd, 2014

Super-intelligent humans are coming, Stephen Hsu argues:

The Social Science Genetic Association Consortium, an international collaboration involving dozens of university labs, has identified a handful of regions of human DNA that affect cognitive ability. They have shown that a handful of single-nucleotide polymorphisms in human DNA are statistically correlated with intelligence, even after correction for multiple testing of 1 million independent DNA regions, in a sample of over 100,000 individuals.

If only a small number of genes controlled cognition, then each of the gene variants should have altered IQ by a large chunk—about 15 points of variation between two individuals. But the largest effect size researchers have been able to detect thus far is less than a single point of IQ. Larger effect sizes would have been much easier to detect, but have not been seen.

This means that there must be at least thousands of IQ alleles to account for the actual variation seen in the general population. A more sophisticated analysis (with large error bars) yields an estimate of perhaps 10,000 in total.

Each genetic variant slightly increases or decreases cognitive ability. Because it is determined by many small additive effects, cognitive ability is normally distributed, following the familiar bell-shaped curve, with more people in the middle than in the tails. A person with more than the average number of positive (IQ-increasing) variants will be above average in ability. The number of positive alleles above the population average required to raise the trait value by a standard deviation—that is, 15 points—is proportional to the square root of the number of variants, or about 100. In a nutshell, 100 or so additional positive variants could raise IQ by 15 points.
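
The square-root step is easy to check with a toy simulation. Here is a minimal sketch, assuming equal-effect variants each present at 50% frequency — the parameters are illustrative, not values from the consortium’s data:

```python
import numpy as np

# Toy additive model: N equal-effect variants, each inherited with
# probability 1/2; the trait is simply the count of positive variants.
# N and the sample size are illustrative assumptions, not study values.
N = 10_000
trait = np.random.binomial(N, 0.5, size=100_000)  # 100,000 individuals

print(f"SD of trait: {trait.std():.0f} variants")  # ~ sqrt(N)/2 = 50
print(f"sqrt(N):     {np.sqrt(N):.0f}")            # order of magnitude ~100
```

Under these toy assumptions the binomial standard deviation is sqrt(N)/2 = 50 variants; unequal effect sizes and allele frequencies shift the constant, which is presumably why Hsu says “about 100” — but the square-root scaling is the point.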

Given that there are many thousands of potential positive variants, the implication is clear: If a human being could be engineered to have the positive version of each causal variant, they might exhibit cognitive ability which is roughly 100 standard deviations above average. This corresponds to more than 1,000 IQ points.

Inside Gamergate’s (successful) attack on the media

October 23rd, 2014

Caitlin Dewey of the Washington Post looks inside Gamergate’s (successful) attack on the media:

It’s about fighting what they see as a massive, progressive conspiracy among female game developers, feminists and sympathetic, left-leaning media outlets, all of whom are purportedly bent on the destruction of the traditional “gamer” lifestyle.

[...]

The attack strategy is two-part: first, boycott the sites in question; second, pressure their advertisers to do the same.

The “operation,” as organizers have dubbed it, is called Disrespectful Nod, and it’s steadily picked up steam since it launched quietly in early September. According to the group’s records, half a dozen advertisers — including significant international companies, such as Unilever and Scottrade — have been persuaded to drop major media buys within the past six weeks.

[...]

But the incident still demonstrates a worrying new trend among the Gamergate crowd: curbing the speech of reporters they don’t like by threatening their advertisers. For a media empire, such as Gawker, of course, one advertiser won’t necessarily make or break operations. But for targeted sites like Gamasutra, a smaller, gaming industry news site, or Gameranx, a five-person operation, targeting advertisers isn’t just a form of protest: It’s a threat to their very existence.

Now, where have we seen these tactics used before?

It’s Impossible to Build on Failure

October 23rd, 2014

It’s impossible to build on failure, Tony Robbins says:

You build only on success. I turned around the United States Army pistol shooting program. I made certain that the first time someone shot a pistol, instead of shooting the .45 caliber pistol from 50 feet away — which is what they were starting these guys out at — I brought the target literally five feet in front of the students. I wouldn’t let them fire the gun until they had rehearsed over and over again the exact perfect shooting form for two hours. By the time they held the gun, they had every technique perfected, so when they fired, they succeeded. BAM!

At first the Army thought it was stupid, but it put ignition into the students’ brain — “WOW! I’ve succeeded!” — versus shooting bullets into the ceiling or floor the first few times. It created an initial sense of certainty.

I believe in setting people up to win. Many instructors believe in setting them up to fail so they stay humble and they are more motivated. I disagree radically. There is a time for that but not in the beginning. People’s actions are very limited when they think they have limited potential. If you have limited belief, you are going to use limited potential, and you are going to take limited action.

Student-Athletes

October 22nd, 2014

I am shocked — shocked! — to find cheating going on at UNC!

A blistering report into an academic fraud scandal at the University of North Carolina released Wednesday found that for nearly two decades two employees in the African and Afro-American Studies department ran a “shadow curriculum” of hundreds of fake classes that never met but for which students, many of them Tar Heels athletes, routinely received A’s and B’s.

Nearly half the students in the classes were athletes, the report found, often deliberately steered there by academic counselors to bolster their worrisomely low grade-point averages and to allow them to continue playing on North Carolina’s teams.

I’m so glad we’ve ferreted out this one isolated program, and America’s student-athletes can continue their long tradition of academic excellence.

The Greatness of George Orwell

October 22nd, 2014

Bruce Charlton discusses the greatness of George Orwell — and his fatal flaw:

My generation was fed Orwell at school from our mid-teens — some of the essays such as Shooting an Elephant and Boys’ Weeklies; excerpts from the documentary books such as Down and Out… and …Wigan Pier; and the two late political novels Animal Farm and 1984.

That Orwell was mostly correct about things was not really argued, but assumed; on the basis that he seemed obviously correct to almost everybody; so far as the English were concerned, Orwell was simply expressing the national character better than we ourselves could have done.

Orwell was claimed both by the Left — on the basis that he was explicitly a socialist through most of his life; and he was claimed by the Right — on the basis that his two best known novels are anti-communist warnings against totalitarianism.

In sum: Orwell’s influence was much as any writer reasonably could have hoped for. And his warnings about the dangers of Leftism and the operations of totalitarianism were as lucid, as explicit, and as forceful as any writer could have made them.

And yet Britain today is an ‘Orwellian’ society to a degree which would have seemed incredible even 25 years ago. The same applies to the USA, where Orwell was also revered.

In particular, the exact types of abuses, manipulations and distortions of language which Orwell spelled-out in fiery capital letters 100 feet high have come to pass; have become routine and unremarked — and they are wholly-successful, barely-noticed, stoutly-defended — and to point them out is regarded either as trivial nitpicking or evasive rhetoric.

The current manifestations of the sexual revolution, deploying the most crudely Orwellian appropriations and taboos of terminology, go further than even Orwell envisaged. The notion that sexual differences could so easily be subverted, and their evaluations so swiftly reversed, apparently at will and without any apparent limit, would — I think — have gone beyond the possibilities Orwell could have realistically imagined.

(Indeed, it is characteristic of the Kafka-esque absurdity of modern Western life that a plain description of everyday reality — say in a state bureaucracy, the mass media or university — is simply disbelieved, it ‘does not compute’ and is rejected by the mind. And by this, nihilistic absurdity is safeguarded.)

I think Orwell would never have believed that people would accept, en masse, and so readily go along with (willingly embrace and enforce, indeed), the negative relabelling of normal biological reality, and the substitution of arbitrary and rapidly changing inverted norms: for Orwell, The Proles were sexually normal, like animals, and would continue so. The elites, whatever their personal preferences and practices, left them alone in this — presumably because sexuality was seen as a kind of bedrock.

And this leads to Orwell’s fatal flaw — which was exactly sexuality.

Projected Recoilless Improvised Grenade

October 22nd, 2014

The Projected Recoilless Improvised Grenade (PRIG) was a shoulder-fired weapon developed by the Provisional Irish Republican Army (PIRA) for use against lightly armored vehicles:

The launcher consisted of a length of steel tube adapted to accept a charge of black powder in the middle by way of a capped-off perforated pipe welded in place. The charge was wired to a simple circuit, often using a light-bulb holder as an arming switch, and fired by a long-arm micro-switch that served as a trigger.

PRIG Diagram

The warhead itself consisted of a standard food tin filled with 600g of Semtex, complete with a frontal explosive lens to create an armor-piercing shaped charge. The round was designed to explode on impact, an adaptation of an earlier improvised stick grenade known as a ‘drogue bomb’, which was sometimes fitted with a trash bag to act as a guide parachute.

PRIG Round

To the rear of the launcher was placed the ‘counter-shot’, incorporated to exploit the recoilless principle (reducing recoil to as little as a .22 LR rifle’s, according to some!). This consisted of two packets of digestive tea biscuits, wrapped in J-cloth.

PRIG Counter-Shot Digestive Biscuits
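
The physics of the counter-shot is plain momentum conservation: if the biscuits leave the back of the tube with the same momentum as the warhead leaves the front, the launcher itself barely kicks. A rough sketch — only the 600g of Semtex comes from the description above; every other number is a guess for illustration:

```python
# Recoilless-launcher momentum balance: the tube stays (nearly) still when
# the forward and rearward momenta cancel. Only the 0.6 kg of Semtex comes
# from the text; all other masses and velocities are illustrative guesses.
warhead_kg = 1.2       # assumed: food tin + 0.6 kg Semtex + fittings
warhead_mps = 50.0     # assumed muzzle velocity

countershot_kg = 0.8   # assumed: two packets of biscuits plus wrapping
countershot_mps = warhead_kg * warhead_mps / countershot_kg

print(f"counter-shot exits at ~{countershot_mps:.0f} m/s rearward")  # ~75 m/s
```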

Transportation, Divergence, and the Industrial Revolution

October 21st, 2014

Nick Szabo explores transportation, divergence, and the Industrial Revolution:

After about 1000 AD northwestern Europe started a gradual switch from using oxen to using horses for farm traction and transportation.  This trend culminated in an eighteenth-century explosion in roads carrying horse-drawn carriages and wagons, as well as in canals, and works greatly extending the navigability of rivers, both carrying horse-drawn barges. This reflected a great rise in the use of cultivated fodder, a hallmark of the novel agricultural system that was evolving in northwestern Europe from the start of the second millennium: stationary pastoralism.  During the same period, and especially in the seventeenth through nineteenth centuries, most of civilized East Asia, and in particular Chinese civilization along its coast, navigable rivers, and canals, faced increasing Malthusian pressures and evolved in the opposite direction: from oxen towards far more costly and limited human porters. Through the early middle ages China had been far ahead, in terms of division of labor and technology, of the roving bandits of northern Europe, but after the latter region’s transition to stationary pastoralism that gap closed and Europe surged ahead, a growth divergence that culminated in the industrial revolution.  In the eighteenth century Europe, and thus in the early industrial revolution, muscle power was the engine of land transportation, and hay was its gasoline.

Metcalfe’s Law states that the value of a network is proportional to the square of the number of its nodes. In an area where good soils, mines, and forests are randomly distributed, the number of nodes valuable to an industrial economy is proportional to the area encompassed. The number of such nodes that can be economically accessed scales as the inverse square of the cost per mile of transportation. Combine this with Metcalfe’s Law and we reach a dramatic but solid mathematical conclusion: the potential value of a land transportation network scales as the inverse fourth power of the cost of that transportation. A reduction in transportation costs in a trade network by a factor of two increases the potential value of that network by a factor of sixteen. While a power of exactly 4.0 will usually be too high, due to redundancies, this does show how the cost of transportation can have a radical nonlinear impact on the value of the trade networks it enables. This formalizes Adam Smith’s observations: the division of labor (and thus the value of an economy) increases with the extent of the market, and the extent of the market is heavily influenced by transportation costs (as he extensively discussed in his Wealth of Nations).
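
Szabo’s arithmetic is easy to verify. A minimal sketch of the scaling argument, with all constants normalized to one:

```python
def network_value(cost_per_mile: float) -> float:
    # Valuable nodes are spread evenly over the land, so the number that
    # can be economically reached scales as (1 / cost)^2; Metcalfe's Law
    # then makes value scale as nodes^2, i.e. as cost^-4.
    nodes = 1.0 / cost_per_mile ** 2
    return nodes ** 2

# Halving transportation cost multiplies potential network value by 2^4:
print(network_value(0.5) / network_value(1.0))  # 16.0
```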

Q.E.D.

October 21st, 2014

American political and social life today is pretty much one great big Q.E.D. for the two main theses of The Bell Curve, Charles Murray argues:

Those theses were, first, that changes in the economy over the course of the 20th century had made brains much more valuable in the job market; second, that from the 1950s onward, colleges had become much more efficient in finding cognitive talent wherever it was and shipping that talent off to the best colleges. We then documented all the ways in which cognitive ability is associated with important outcomes in life — everything from employment to crime to family structure to parenting styles. Put those all together, we said, and we’re looking at some serious problems down the road.

Gian-Carlo Rota’s Ten Lessons

October 21st, 2014

Gian-Carlo Rota of MIT shares ten lessons he wishes he had been taught:

  1. Lecturing
  2. Blackboard Technique
  3. Publish the same results several times.
  4. You are more likely to be remembered by your expository work.
  5. Every mathematician has only a few tricks.
  6. Do not worry about your mistakes.
  7. Use the Feynman method.
  8. Give lavish acknowledgments.
  9. Write informative introductions.
  10. Be prepared for old age.

His lesson on lecturing:

The following four requirements of a good lecture do not seem to be altogether obvious, judging from the mathematics lectures I have been listening to for the past forty-six years.

Every lecture should make only one main point
The German philosopher G. W. F. Hegel wrote that any philosopher who uses the word “and” too often cannot be a good philosopher. I think he was right, at least insofar as lecturing goes. Every lecture should state one main point and repeat it over and over, like a theme with variations. An audience is like a herd of cows, moving slowly in the direction they are being driven towards. If we make one point, we have a good chance that the audience will take the right direction; if we make several points, then the cows will scatter all over the field. The audience will lose interest and everyone will go back to the thoughts they interrupted in order to come to our lecture.

Never run overtime
Running overtime is the one unforgivable error a lecturer can make. After fifty minutes (one microcentury as von Neumann used to say) everybody’s attention will turn elsewhere even if we are trying to prove the Riemann hypothesis. One minute overtime can destroy the best of lectures.

Relate to your audience
As you enter the lecture hall, try to spot someone in the audience with whose work you have some familiarity. Quickly rearrange your presentation so as to manage to mention some of that person’s work. In this way, you will guarantee that at least one person will follow with rapt attention, and you will make a friend to boot.

Everyone in the audience has come to listen to your lecture with the secret hope of hearing their work mentioned.

Give them something to take home
It is not easy to follow Professor Struik’s advice. It is easier to state what features of a lecture the audience will always remember, and the answer is not pretty. I often meet, in airports, in the street and occasionally in embarrassing situations, MIT alumni who have taken one or more courses from me. Most of the time they admit that they have forgotten the subject of the course, and all the mathematics I thought I had taught them. However, they will gladly recall some joke, some anecdote, some quirk, some side remark, or some mistake I made.

How Palmer Luckey Created Oculus Rift

October 20th, 2014

If there is a case to be made that unconventional schooling, without busywork or fixed schedules, helps unleash creativity, Palmer Luckey, creator of the Oculus Rift, might well be Exhibit A for the prosecution:

His mother, Julie, home-schooled all four of her children during a period of each of their childhoods (Luckey’s father, Donald, is a car salesman), but Palmer was the only one of the kids who never went back; he liked the flexibility too much. In his ample free time, he devoted most of his considerable energy to teaching himself how to build electronics from scratch.

No one else in Luckey’s family was especially interested in technology, but his parents were happy to give over half of the garage at their Long Beach, California, home to his experiments. There, Luckey quickly progressed from making small electronics to “high-voltage stuff” like lasers and electromagnetic coilguns. Inevitably, there were mishaps. While working on a live Tesla coil, Luckey once accidentally touched a grounded metal bed frame, and blew himself across the garage; another time, while cleaning an infrared laser, he burned a gray spot into his vision.

When Luckey was 15, he started “modding” video game equipment: taking consoles like the Nintendo GameCube, disassembling them, and modifying them with newer parts, to transform them into compact, efficient and hand-crafted devices. “Modding was more interesting than just building things entirely using new technologies,” Luckey told me. “It was this very special type of engineering that required deeply understanding why people had made the decisions they made in designing the hardware.”

Luckey soon became obsessed with PC gaming. How well, he wondered, could he play games? “Not skill level,” he clarified to me, “but how good could the experience be?” By this time, Luckey was making good money fixing broken iPhones, and he spent most of it on high-end gaming equipment in order to make the experience as immersive as possible. At one point, his standard gaming setup consisted of a mind-boggling six-monitor arrangement. “It was so sick,” he recalled.

But it wasn’t enough. Luckey didn’t just want to play on expensive screens; he wanted to jump inside the game itself. He knew the military sometimes trained soldiers using virtual reality headsets, so he set out to buy some — on the cheap, through government auctions. “You’d read that these VR systems originally cost hundreds of thousands of dollars, and you thought, clearly if they’re that expensive, they must be really good,” Luckey said. Instead, they fell miles short of his hopes. The field of view on one headset might be so narrow that he’d feel as if he was looking through a half-opened door. Another might weigh ten pounds, or have preposterously long lag between his head moving and the image reacting onscreen — a feature common to early VR that literally makes users nauseated.

So Luckey decided to do what he’d been doing for years with game consoles: He’d take the technology apart, figure out where it was falling short and modify it with new parts to improve it. Very quickly, he realized that this wasn’t going to be simple. “It turned out that a lot of the approaches the old systems were taking were dead ends,” he said.

The problem was one of fundamental design philosophy. In order to create the illusion of a three-dimensional digital world from a single flat screen, VR manufacturers had typically used complex optical apparatuses that magnified the onscreen image to fill the user’s visual field while also correcting for any distortion. Because these optics had to perform a variety of elaborate tricks to make the magnified image seem clear, they were extremely heavy and costly to produce.

Luckey’s solution to this dilemma was ingeniously simple. Why use bulky, expensive optics, he thought, when he could put in cheap, lightweight lenses and then use software to distort the image, so that it came out clear through them? Plus, he quickly realized that he could combine these lenses with screens from mobile phones, which the smartphone arms race had made bigger, crisper and less expensive than ever before. “That let me make something that was a lot lighter and cheaper, with a much wider field of view, than anything else out there,” he said.
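
The pre-warp Luckey describes looks, in outline, like the radial distortion function below: rather than sampling the rendered frame uniformly, you sample further from the lens center as you move outward, so the displayed image is barrel-distorted and the lens’s pincushion distortion cancels it. This is a generic sketch of the idea — the coefficients are illustrative, not the Rift’s actual calibration:

```python
def distortion_sample(u: float, v: float, k1: float = 0.22, k2: float = 0.24):
    """Map a lens-centered screen coordinate to the texture coordinate to
    sample. The scale grows with radius, which barrel-distorts the displayed
    image so the lens's pincushion distortion cancels it. k1 and k2 are
    illustrative coefficients, not Oculus's real calibration values."""
    r2 = u * u + v * v                    # squared distance from lens center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2  # 1 at center, larger at the edges
    return u * scale, v * scale

# A point near the edge of view samples from further out in the frame:
print(distortion_sample(0.8, 0.0))  # (~0.99, 0.0)
```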

From 2009 to 2012, while also taking college classes and working at the University of Southern California’s VR-focused Institute for Creative Technologies, Luckey poured countless hours into creating a working prototype from this core vision. He tinkered with different screens, mixed and matched parts from his collection of VR hardware, and refined the motion tracking equipment, which monitored the user’s head movements in real-time. Amazingly, considering the eventual value of his invention, Luckey was also posting detailed reports about his work to a 3-D gaming message board. The idea was sitting there for anyone to steal.

But, as Brendan Iribe put it to me, “Maybe his name is Luckey for a reason.” By that point, no one was interested in throwing more money away on another doomed virtual reality project.

Then, in early 2012, luck struck again when the legendary video game programmer John Carmack stumbled onto his work online and asked Luckey if he could buy one of his prototypes. Luckey sent him one for free. “I played it super cool,” he assured me. Carmack returned the favor in a big way: At that June’s E3 convention — the game industry’s gigantic annual commercial carnival — he showed off the Rift prototype to a flock of journalists, using a repurposed version of his hit game “Doom 3” for the demonstration. The response was immediate and ecstatic. “I was in Boston at a display conference at the time,” Luckey said, “and people there were like, ‘Dude, Palmer, everyone’s writing articles about your thing!’”

The rest, as they say, is virtual history: Over the next 21 months, Luckey partnered with Iribe, Antonov and Mitchell, launched a Kickstarter campaign that netted $2.4 million in funding — nearly ten times its initial goal — and joined the Facebook empire, thereby ensuring the company the kind of financial backing that most early-stage tech companies can only dream of.

The Oculus Rift is now entering its final stages of development — it’s slated for commercial release next year — and this fall Samsung will release a scaled-down product for developers and enthusiasts, powered by Oculus technology, that will clip over the company’s Galaxy Note 4 smartphone. But Luckey knows that success is by no means assured. “To this point, there has never been a successful commercial VR product, ever,” Luckey told me. “Nobody’s actually managed to pull this off.” Spend a few minutes inside the Rift, though, and one can’t help but believe that Luckey will be the one to do it.