Robbery under Law

Monday, January 26th, 2009

Lawrence Auster asks, Where is Moldbug really at?, and Mencius replies with a brief discussion of Evelyn Waugh’s Robbery under Law (available in Waugh Abroad):

Maybe one way to answer is to quote the closing words of Evelyn Waugh’s Robbery under Law, written about Mexico in 1939. Waugh describes the destruction, already apparent by 1939 but trivial in comparison to the chaos of Mexico today, that American secular liberalism wreaked on Mexico’s old Catholic polity. Since WWII this tragedy has been replicated around the planet, creating the horror we now know as the Third World.

Waugh concludes:

A conservative is not merely an obstructionist who wishes to resist the introduction of novelties; nor is he, as was assumed by most 19th-century parliamentarians, a brake to frivolous experiment. He has positive work to do, whose value is particularly emphasized by the plight of Mexico. Civilization has no force of its own beyond what is given it from within. It is under constant assault and it takes most of the energies of civilized man to keep going at all. There are criminal ideas and a criminal class in every nation and the first action of every revolution, figuratively and literally, is to open the prisons. Barbarism is never finally defeated; given propitious circumstances, men and women who seem quite orderly, will commit every conceivable atrocity. The danger does not come merely from habitual hooligans; we are all potential recruits for anarchy. Unremitting effort is needed to keep men living together at peace; there is only a margin of energy left over for experiment however beneficent. Once the prisons of the mind have been opened, the orgy is on. There is no more agreeable position than that of dissident from a stable society. Theirs are all the solid advantages of other people’s creation and preservation, and all the fun of detecting hypocrisies and inconsistencies. There are times when dissidents are not only enviable but valuable. The work of preserving society is sometimes onerous, sometimes almost effortless. The more elaborate the society, the more vulnerable it is to attack, and the more complete its collapse in case of defeat. At a time like the present it is notably precarious. If it falls we shall see not merely the dissolution of a few joint-stock corporations, but of the spiritual and material achievements of our history. There is nothing, except ourselves, to stop our own countries becoming like Mexico. That is the moral, for us, of her decay.

While obviously I agree with this and I suspect you do as well, it has little to do with the word “conservative” as used by most Americans today. Even Waugh in this passage is trying, obviously without success, to redefine “conservative.” Therefore I prefer the word “reactionary,” meaning one who hopes to see the decay not just stop but reverse.

The American Rebellion

Monday, January 26th, 2009

Rather than discuss the American Revolution, in which good triumphed over evil, Mencius Moldbug prefers to discuss the American Rebellion, in which evil triumphed over good:

Let’s call our first witness. His name is Thomas Hutchinson, and he is the outstanding Loyalist figure of the prerevolutionary era. His Strictures upon the Declaration of the Congress at Philadelphia is here. It is not long. Please do him the courtesy of reading it in full, then continue below.

Now: what do you notice about Hutchinson’s Strictures? Well, the first thing you notice is: before today, you had never read it. Or even heard of it. Or probably even its author. What is the ratio of the number of people who have read the Declaration to the number who have read the Strictures? 10⁵? 10⁶? Something like that. Isn’t that just slightly creepy?

The second thing we notice about the Strictures is its tone — very different from the Declaration. The Declaration shouts at us. The Strictures talk to us. Hutchinson speaks quietly, with just the occasional touch of snark. He adopts the general manner of a sober adult trapped in an elevator with a drunk, knife-wielding teenager.
[...]
What we learn from the Strictures is that, as in the rest of American history, there is absolutely no guarantee that a detailed and rational argument about a substantive factual question will prevail, whether through means military, political, or educational, over a meretricious tissue of lies [like the Declaration].

What’s with the Economics Profession?

Sunday, January 25th, 2009

What’s with the Economics Profession? Arnold Kling cites Will Wilkinson:

When I see DeLong more or less indiscriminately trashing everyone at Chicago, or Krugman trashing Barro, etc., what doesn’t arise in my mind is a sense that some of these guys really know what they’re talking about while some of them are idiots. What arises in my mind is the strong suspicion that economic theory, as it is practiced and taught at the world’s leading institutions, is so far from consensus on certain fundamental questions that it is basically useless for adjudicating many profoundly important debates about economic policy.

Abstraction Leads to Procrastination

Sunday, January 25th, 2009

A team of researchers led by Sean McCrea of the University of Konstanz, in Germany, has found that people act in a timely way when given concrete tasks but dawdle when they view them in abstract terms:

Dr McCrea and his colleagues conducted three separate studies. First they recruited 34 students who were offered €2.50 ($3.30) for completing a questionnaire within the subsequent three weeks. Half of the students were then sent an email asking them to write a couple of sentences on how they might go about various activities, such as opening a bank account or keeping a diary. The others were asked to write about why someone might want to open a bank account or keep a diary.

For their second study, Dr McCrea and his colleagues recruited 50 students, who were offered the same sums and timespans as the first lot. Half of these students were asked to provide examples of members of a group, for example, naming any type of bird. The task was inverted for the other students, who were asked to name a category to which birds belong.

Finally the researchers asked 51 students, who were again offered cash and given a deadline, to examine a copy of “La Parade” by Georges Seurat, a 19th-century French artist. Half were given information about pointillism, the technique Seurat used to create the impression of solid colours from small dots of paint. The others were told that the painting was an example of neo-impressionism in which the artist had used colour to evoke harmony and emotion. Both groups were then asked to rate the importance of colour in 13 other works of art.

As the team report in Psychological Science, in all three studies, those who were presented with concrete tasks and information responded more promptly than did those who were asked to think in an abstract way. Moreover, almost all the students who had been prompted to think in concrete terms completed their tasks by the deadline while up to 56% of students asked to think in abstract terms failed to respond at all.

How low can homes go? Try $0

Sunday, January 25th, 2009

How low can homes go? Try $0 — in Detroit:

Detroit real estate agent Ian Mason has sold homes for $1.

When I asked him to check the listings for other properties at that price, he found four more.

He then took me to a white, clapboard-sided house that his company, Bearing Group Real Estate Brokerage, has listed.

“If you want this house, you can have it,” he said. “I’ll just give it to you.”

“I’m not allowed to accept anything of value from a source,” I told him.

“Who said I was giving you anything of value?” he replied.

The median price of a home sold in Detroit last month was $7,500:

Mason counted 1,228 homes listed for under $10,000, 209 of which were under $1,000.

“Many of them are in pretty decent shape,” he said, “and some can be lived in.”
[...]
In the neighborhood where Mason offered me a $0 house (not including closing costs, escrow, taxes, etc.), almost every dwelling was in shambles. Boarded windows. Abandoned cars. Collapsed porches. Ubiquitous graffiti.

The home across the street was charred, likely by arsonists.

We drove through snow nobody would ever plow.

“What’s this place like in the summer?” I asked.

“You wouldn’t be driving through here,” Mason said. “There’s a small chance you’d field a bullet.”

Police stopped patrolling these neighborhoods years ago.

“So if I buy a $1 house, I’m going to need to hire some security?”

“Not necessarily,” Mason said. “Some of these neighborhoods are so desolate, crime isn’t much of a concern.”

“Really?”

“I could take you to 30 square blocks of urban prairie.”

The Motor City had more residents in the 1930s than it does today. About a million people have left since the 1950s, leaving less than a million today.

Enormous buildings sit vacant downtown, their hulking shadows darkening city streets at night. Unemployment is tallied in double digits. And this is how it is before Chrysler, General Motors, Ford and associated companies possibly file for bankruptcy this year.

Is Social Conservatism Necessary?

Sunday, January 25th, 2009

Is Social Conservatism Necessary? Social conservative James Kalb says, yes, and explains why the conservatives of a previous era didn’t dwell on social issues:

The social issues weren’t issues for Eisenhower because they weren’t public issues at all then. Nobody in public life favored abortion or homosexuality, and people thought of Christianity and the more-or-less traditional family as good things we should all support. They were seen as basic to the social background against which political disputes played out. No one was demanding their eradication as a violation of inclusiveness and tolerance.

That was then and this is now. The situation was changing rapidly by the time of the Goldwater candidacy, and by the late sixties the transformation of social relations had become a driving force of politics. “Change” is now the slogan, and that means giving political will a free field of action.
[...]
You don’t get social justice if you don’t deal with social issues. Those who want “change” are the ones most concerned with them. That is why “getting government out of our bedrooms” has turned out to mean sensitivity training, sexual harassment law, compulsory radical redefinition of marriage, and training children to put condoms on cucumbers.

Kalb believes that leftism demands “supporting the system and otherwise minding our own business by concerning ourselves only with tolerant and private goals”:

It is therefore basic to the liberal view that people must be made to view non-liberal goods and institutions as wrong and shameful. In particular, they must be taught to reject with disgust distinctions not related to the functioning of liberal institutions. That’s what “inclusion” and “tolerance” mean.

For example, the system depends on certified expertise, so it’s OK to distinguish between high school grads and college grads, or even between Harvard grads like Obama and State U grads like Palin. There’s nothing wrong with that. In contrast, distinctions related to family, culture, religion, and inherited community must be suppressed. They have at least as much effect as formal education on what we are and do, but they’re bad because they offer an alternative method of social organization and so threaten liberalism. That is why those who make distinctions based on sex, marital status, or community and cultural background must be squashed.

Hence the extraordinary moralism and intolerance of liberalism, its tendency to treat any tolerance for non-liberal standards and distinctions as the worst human quality imaginable. People become intolerant and moralistic when they confront views and conduct that they believe threaten the basis and functioning of social order. And liberals confront such things everywhere. All history, all nature, all culture, and all religion threaten the basis and functioning of a liberal social order.

Atheistic Theocracy

Sunday, January 25th, 2009

Mencius Moldbug considers our modern government an atheistic theocracy:

You see, the problem is not just that our present system of government — which might be described succinctly as an atheistic theocracy — is accidentally similar to Puritan Massachusetts. As anatomists put it, these structures are not just analogous. They are homologous. This architecture of government — theocracy secured through democratic means — is a single continuous thread in American history.

An excellent historical description of this continuity is George McKenna’s Puritan Origins of American Patriotism — it gets a little confused in the 20th century, but this is to be expected. However, as a demonstration, I am particularly partial to one particular primary source — this article from 1942, which I found somehow in Time Magazine’s wonderful free archive.

The nice thing about reading a primary source from 1942 is that you are assured of its “period” credentials, unless of course someone has hacked Time’s archive. The author cannot possibly know anything about 1943. If you find a text from 1942 that describes the H-bomb, you know that the H-bomb was known in 1942. One such text is entirely sufficient.

What’s great about the “American Malvern” article is that, while it describes a political program you will place instantly, it describes it in a very odd way. You are used to thinking of this perspective, which is obviously somewhere toward the left end of your NPR dial, as representative of a political movement. Instead, the anonymous Time reporter describes it as a religious (“super-protestant,” to be exact) program. Isn’t that just bizarre?

We have caught the worm in the act of turning. The political program and perspective that we think of as progressive is, or is at least descended from, the program of a religious sect. Unsurprisingly, this sect, best known as ecumenical mainline Protestantism, is historically the most powerful form of American Christianity — and happens to be the direct, linear descendant of Professor Staloff’s Puritans. (You can also see it in abolitionism, the Social Gospel, the Prohibitionists, and straight on down to global warming. The mindset never changes.)

For a brief snapshot of where it is today, try this article. Note that Congregationalist and Puritan are basically synonyms, and American Unitarianism is a spinoff of Congregationalism. Of course, these belief systems have evolved since the time when these labels meant anything. Since the 1960s they have merged into one warm, mushy, NPR-flavored whole, which we here at UR sometimes refer to as Universalism. Michael Lerner is perhaps the ultimate Universalist.

Thus we see the whole, awful picture merge together. It is Cthulhu. We don’t just live in something vaguely like a Puritan theocracy. We live in an actual, genuine, functioning if hardly healthy, 21st-century Puritan theocracy.

What this means is that you can trust hardly any of your beliefs. You were educated by this system, which purports to be a truth machine but is clearly nothing of the sort. Since the US is not the Soviet Union, hard scientific facts — physics, chemistry, and biology — are unlikely to be wrong. But the Soviet Union actually did pretty well with hard science.

Other than that, you have no rational reason to trust anything coming out of the Cathedral — that is, the universities and press. You have no more reason to trust these institutions than you have to trust, say, the Vatican. In fact, they are motivated to mislead you in ways that the Vatican is not, because the Vatican does not have deep, murky, and self-serving connections in the Washington bureaucracy. They claim to be truth machines. Why wouldn’t they?

Earthquake Lessons

Saturday, January 24th, 2009

Stewart Brand (The Whole Earth Catalog, How Buildings Learn) shares a number of lessons learned from the 1989 Loma Prieta earthquake, where volunteer rescuers in San Francisco’s Marina District outnumbered professionals three-to-one during the critical first few hours:

  • Collect thoughts, then collect tools! These are some of the tools that have proven useful for earthquake search and rescue and for fighting fires while they’re still small:
    • Gas-powered saws
    • Hand saws
    • Axes
    • Ladders
    • Crow bars and pry bars
    • Bolt cutters
    • Wrenches for gas valves
    • Flashlights, miner’s lights, lanterns, extra batteries
    • Portable generator and power tools and work lights
    • Jacks, blocks, and shoring material such as 4 x 4 lumber
    • Rope
    • Shovels
    • Work gloves, boots
    • Loud hailers
    • Buckets

    “A lot of people don’t know it, but the best fire extinguisher in the world is a garden hose with a hand shut-off nozzle and enough hose to reach any part of your building. If you don’t have a hose, use a bucket.” — Bob Jabs

  • In any collapsed building, assume there are people trapped alive. Locate them, let them know everything will be done to get them out.
  • Searching a building, call out. “Anybody in here? Anybody need help? Shout or bang on something if you can hear my voice.”
  • After an earthquake, further collapse is not the main danger. Fire is.
  • If you want to lend your help, ask! If you want to be helped, ask!
  • Fire fighting is a series of mistakes, corrected as soon as possible.
  • Bystanders make the convenient assumption that everything is being taken care of by the people already helping. That’s seldom accurate.
  • Join a team or start a team. Divide up the tasks. Help leadership emerge.

Was There Ever a Default on U.S. Treasury Debt?

Saturday, January 24th, 2009

Was There Ever a Default on U.S. Treasury Debt? Well, yes, Alex J. Pollock explains:

The United States quite clearly and overtly defaulted on its debt as an expediency in 1933, the first year of Franklin Roosevelt’s presidency. This was an intentional repudiation of its obligations, supported by a resolution of Congress and later upheld by the Supreme Court.

Granted, the circumstances were somewhat different in those days, since government finance still had a real tie to gold. In particular, U.S. bonds, including those issued to finance the American participation in the First World War, provided the holders of the bonds with an unambiguous promise that the U.S. government would give them the option to be repaid in gold coin.

Nobody doubted the clarity of this “gold clause” provision or the intent of both the debtor, the U.S. Treasury, and the creditors, the bond buyers, that the bondholders be protected against the depreciation of paper currency by the government.

Unfortunately for the bondholders, when President Roosevelt and the Congress decided that it was a good idea to depreciate the currency in the economic crisis of the time, they also decided not to honor their unambiguous obligation to pay in gold.

This went to the Supreme Court, which ruled in favor of Congress by a vote of 5 to 4. The majority opinion, written by Chief Justice Hughes, made this point:

Contracts, however express, cannot fetter the constitutional authority of the Congress.

Justice McReynolds, writing on behalf of the four dissenting justices, made these points:

  • The enactments here challenged will bring about the confiscation of property rights and repudiation of national obligations.
  • The holder of one of these certificates was owner of an express promise by the United States to deliver gold coin of the weight and fineness established.
  • Congress really has inaugurated a plan primarily designed to destroy private obligations, repudiate national debts, and drive into the Treasury all gold within the country in exchange for inconvertible promises to pay, of much less value.
  • Loss of reputation for honorable dealing will bring us unending humiliation.

None of this would have shocked David Hume, who said the following in his Of Public Credit:

Contracting debt will almost infallibly be abused in every government. It would scarcely be more imprudent to give a prodigal son a credit in every banker’s shop in London, than to empower a statesman to draw bills upon posterity.

Good Old-Fashioned Nanotechnology

Saturday, January 24th, 2009

Ed Regis believes that good old-fashioned nanotechnology has the potential to change everything:

I specify the old-fashioned version because nanotechnology is decidedly no longer what it used to be. Back in the mid-1980s when Eric Drexler first popularized the concept in his book Engines of Creation, the term referred to a radical and grandiose molecular manufacturing scheme. The idea was that scientists and engineers would construct vast fleets of “assemblers,” molecular-scale, programmable devices that would build objects of practically any arbitrary size and complexity, from the molecules up. Program the assemblers to put together an SUV, a sailboat, or a spacecraft, and they’d do it — automatically, and without human aid or intervention. Further, they’d do it using cheap, readily-available feedstock molecules as raw materials.

The idea sounds fatuous in the extreme…until you remember that objects as big and complex as whales, dinosaurs, and sumo wrestlers got built in a moderately analogous fashion: they began as minute, nanoscale structures that duplicated themselves, and whose successors then differentiated off into specialized organs and other components. Those growing ranks of biological marvels did all this repeatedly until, eventually, they had automatically assembled themselves into complex and functional macroscale entities. And the initial seed structures, the gametes, were not even designed, built, or programmed by scientists: they were just out there in the world, products of natural selection. But if nature can do that all by itself, then why can’t machines be intelligently engineered to accomplish relevantly similar feats?
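The arithmetic behind that biological analogy is plain exponential doubling, and it is worth seeing how short the path from nanoscale to macroscale really is. A minimal Python sketch, using an assumed, rough figure of 10¹⁴ cells for a large adult organism:

    import math

    cells_in_adult = 1e14   # assumed order of magnitude for a large mammal
    doublings = math.ceil(math.log2(cells_in_adult))
    print(doublings)        # 47 -- fewer than fifty doubling generations

    # The assembler scheme leans on the same arithmetic: a device that can
    # copy itself is only a few dozen generations away from a macroscale fleet.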

Latter-day “nanotechnology,” by contrast, is nothing so imposing. In fact, the term has been co-opted, corrupted, and reduced to the point where what it refers to is essentially just small-particle chemistry. And so now we have “nano-particles” in products ranging from motor oils to sunscreens, lipstick, car polish and ski wax, and even a $420 “Nano Gold Energizing Cream” that its manufacturer claims transports beneficial compounds into the skin. Nanotechnology in this bastardized sense is largely a marketing gimmick, not likely to change anything very much, much less “everything.”

But what if nanotechnology in the radical and grandiose sense actually became possible? What if, indeed, it became an operational reality? That would be a fundamentally transformative development, changing forever how manufacturing is done and how the world works. Imagine all of our material needs being produced at trivial cost, without human labor, and with no waste. No more sweat shops, no more smoke-belching factories, no more grinding workdays or long commutes. The magical molecular assemblers will do it all, permanently eliminating poverty in the process.

I can already hear the young indy tech fans sighing, “I only like old nano.”

The Chomskian Transformation

Saturday, January 24th, 2009

Mencius Moldbug considers his point of view to be roughly the opposite of modern academia’s — he likens the modern university system to the medieval church — and of Noam Chomsky’s in particular. Here he describes the Chomskian transformation:

To the bishops of the Cathedral [the university system and the mainstream media], anything that strengthens their influence is a good thing, and vice versa. The analysis is completely reflexive, far below the conscious level. Consider this comparison of the coverage between the regime of Pinochet and that of Castro. Despite atrocities that are comparable at most — not to mention a much better record in providing responsible and effective government — Pinochet receives the full-out two-minute hate, whereas the treatment of Castro tends to have, at most, a gentle and wistful disapproval.

This is because Pinochet’s regime was something completely alien to the American intellectual, whereas — the relationship between Puritan divines and Bolshevism being exactly as the mad Arab, Abdul Alhazred, says — Castro’s regime was something much more understandable. If you sketch the relative weights of the social networks connecting Pinochet to the Cathedral, versus Castro to the Cathedral, you are comparing a thread to a bicep.

We also see the nature of the blue pill here. After completing the UR treatment, it is interesting to go back and read your Chomsky. What you’ll see is that Chomsky is, in every case, demanding that all political power be in the hands of the Cathedral. The American system is very large and complex, and this is certainly not the case. The least exception or (God forbid) reversal, and Chomsky is on the case, deploying the old principle of “this animal is very dangerous; when attacked, it defends itself.” The progressive is always the underdog in his own mind. Yet, in objective reality, he always seems to win in the end.

In other words, the Chomskian transformation is to interpret any resistance, by a party which is inherently much weaker, as oppression by a magic force of overwhelming strength. For example, we can ask: which set of individuals exerts more influence over American journalists? American professors, or American CEOs? American diplomats, or American generals? In both cases, the answer is clearly the former. Yet any hint of corporate or military influence over the press is, of course, anathema.

If anyone is in an obvious position to manufacture consent, it is (as Walter Lippmann openly proposed) first the journalists themselves, and next the universities which they regard as authoritative. Yet, strangely, the leftist has no interest whatsoever in this security hole. This can only be because it is already plugged with his worm. The complaint of the Chomskian, in other words, always occurs when the other team is impudent enough to try to manufacture a bit of its own consent. Hence: the blue pill.

Michael Lewis on the Death of Journalism

Saturday, January 24th, 2009

In a recent Atlantic interview, Michael Lewis answered a question about the dying magazine business:

Well my personal experience has been very nice. The market for me has only gotten better!
[...]
Well it makes it a little hard for me to prophesize doom. And I hate spinning theories to which I’m an exception. So my sense is, there’ll always be a hunger for long-form journalism, and that it’s just a question of how it’s packaged. And that people will always figure out how to make it sort of viable. It’s never going to be a hugely profitable business: it’s more like the movie business or the car business in that there are all sorts of good non-economic reasons to be involved in it. The economic returns will always probably be driven down by too many people wanting to be in it.

But I don’t feel gloomy about the magazine business at all.
[...]
It’s always inherently in a state of turmoil of one form or another. But let me put it this way: when I write a long magazine piece that gets attention I feel like it’s more widely read now than it was ten years ago, by a long way. In fact, it feels excessively well read. Twenty years ago I might get a couple of notes in the mail and I’d hear about it maybe at a dinner party. And that would be the end of it, and it would go away very quickly. Ten years ago it would get passed around by email, and it would seem to have a life to me that would go on a little longer. Now the blogosphere picks it up and it becomes almost like a book: it lives for months. I’m getting responses to it for months. And I don’t think the journalism has gotten any better. It’s just the environment you publish it in is more able to rapidly get it to the people who are or might be interested in it. They’re more likely to see it. So the demand side of things is not a problem. People really want to read this stuff. The question is how you monetize that.

And there are still magazines that make plenty of money. Vanity Fair makes plenty of money. Huge sums of money. The New York Times Magazine makes plenty of money, it’s just buried inside this institution that doesn’t.

Robots at War: The New Battlefield

Friday, January 23rd, 2009

When it comes to Robots at War, everyone’s reluctant to talk about what P. W. Singer calls The Issue That Must Not Be Discussed:

What happens to the human role in war as we arm ever more intelligent, more capable, and more autonomous robots?

When this issue comes up, both specialists and military folks tend to change the subject or speak in absolutes. “People will always want humans in the loop,” says Eliot Cohen, a noted military expert at Johns Hopkins who served in the State Department under President George W. Bush. An Air Force captain similarly writes in his service’s professional journal, “In some cases, the potential exists to remove the man from harm’s way. Does this mean there will no longer be a man in the loop? No. Does this mean that brave men and women will no longer face death in combat? No. There will always be a need for the intrepid souls to fling their bodies across the sky.”

All the rhetoric ignores the reality that humans started moving out of “the loop” a long time before robots made their way onto battlefields. As far back as World War II, the Norden bombsight made calculations of height, speed, and trajectory too complex for a human alone when it came to deciding when to drop a bomb. By the Persian Gulf War, Captain Doug Fries, a radar navigator, could write this description of what it was like to bomb Iraq from his B-52: “The navigation computer opened the bomb bay doors and dropped the weapons into the dark.”
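To see why the bomb-release decision outgrew the unaided human, it helps to work the toy version of the calculation. A minimal Python sketch, assuming the simplest possible model — no drag, level flight, made-up numbers — where the real Norden also had to correct for wind, drag, and trajectory:

    import math

    def release_range_m(altitude_m, ground_speed_mps, g=9.81):
        """Distance short of the target at which to release the bomb, in a
        no-drag, level-flight model: it falls for sqrt(2h/g) seconds while
        keeping the aircraft's forward speed."""
        fall_time_s = math.sqrt(2 * altitude_m / g)
        return ground_speed_mps * fall_time_s

    # Hypothetical run: 6,000 m altitude at 70 m/s ground speed.
    print(round(release_range_m(6000, 70)), "m before the target")  # ~2448

Even this idealized version has to be solved continuously as speed and altitude drift, which is exactly the work the bombsight mechanized.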

In the Navy, the trend toward computer autonomy has been in place since the Aegis computer system was introduced in the 1980s. Designed to defend Navy ships against missile and plane attacks, the system operates in four modes, from “semi-automatic,” in which humans work with the system to judge when and at what to shoot, to “casualty,” in which the system operates as if all the humans are dead and does what it calculates is best to keep the ship from being hit. Humans can override the Aegis system in any of its modes, but experience shows that this capability is often beside the point, since people hesitate to use this power. Sometimes the consequences are tragic.

The most dramatic instance of a failure to override occurred in the Persian Gulf on July 3, 1988, during a patrol mission of the U.S.S. Vincennes. The ship had been nicknamed “Robo-cruiser,” both because of the new Aegis radar system it was carrying and because its captain had a reputation for being overly aggressive. That day, the Vincennes’s radars spotted Iran Air Flight 655, an Airbus passenger jet. The jet was on a consistent course and speed and was broadcasting a radar and radio signal that showed it to be civilian. The automated Aegis system, though, had been designed for managing battles against attacking Soviet bombers in the open North Atlantic, not for dealing with skies crowded with civilian aircraft like those over the gulf. The computer system registered the plane with an icon on the screen that made it appear to be an Iranian F-14 fighter (a plane half the size), and hence an “assumed enemy.”

Though the hard data were telling the human crew that the plane wasn’t a fighter jet, they trusted the computer more. Aegis was in semi-automatic mode, giving it the least amount of autonomy, but not one of the 18 sailors and officers in the command crew challenged the computer’s wisdom. They authorized it to fire. (That they even had the authority to do so without seeking permission from more senior officers in the fleet, as their counterparts on any other ship would have had to do, was itself a product of the fact that the Navy had greater confidence in Aegis than in a human-crewed ship without it.) Only after the fact did the crew members realize that they had accidentally shot down an airliner, killing all 290 passengers and crew, including 66 children.
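Schematically, the mode-plus-veto arrangement Singer describes reduces to a small decision table, with the catch that the veto is a human action that, as the Vincennes showed, tends not to be taken. A minimal sketch — the class and function names are mine, and only the two modes named in the passage are modeled:

    from enum import Enum

    class AegisMode(Enum):
        SEMI_AUTOMATIC = "semi-automatic"  # humans share the fire decision
        CASUALTY = "casualty"              # system acts as if the crew were dead

    def engages(classified_hostile, mode, crew_confirms=False, crew_vetoes=False):
        """Fire decision with a standing human veto in every mode."""
        if crew_vetoes:                    # the override always exists...
            return False
        if mode is AegisMode.CASUALTY:     # ...but here no one is asked first
            return classified_hostile
        return classified_hostile and crew_confirms

    # Flight 655, schematically: least-autonomous mode, misclassified track,
    # and a crew that confirms rather than challenges the computer.
    print(engages(True, AegisMode.SEMI_AUTOMATIC, crew_confirms=True))  # True

The veto branch is live in every mode; the failure was never the code path, but the humans’ unwillingness to take it.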

The tragedy of Flight 655 was no isolated incident. Indeed, much the same scenario was repeated a few years ago, when U.S. Patriot missile batteries accidentally shot down two allied planes during the Iraq invasion of 2003. The Patriot systems classified the craft as Iraqi rockets. There were only a few seconds to make a decision. So machine judgment trumped any human decisions. In both of these cases, the human power “in the loop” was actually only veto power, and even that was a power that military personnel were unwilling to use against the quicker (and what they viewed as superior) judgment of a computer.

The point is not that the machines are taking over, Matrix-style, but that what it means to have humans “in the loop” of decision making in war is being redefined, with the authority and autonomy of machines expanding. There are myriad pressures to give war-bots greater and greater autonomy. The first is simply the push to make more capable and more intelligent robots. But as psychologist and artificial intelligence expert Robert Epstein notes, this comes with a built-in paradox. “The irony is that the military will want [a robot] to be able to learn, react, etc., in order for it to do its mission well. But they won’t want it to be too creative, just like with soldiers. But once you reach a space where it is really capable, how do you limit them? To be honest, I don’t think we can.”

Simple military expediency also widens the loop. To achieve any sort of personnel savings from using unmanned systems, one human operator has to be able to “supervise” (as opposed to control) a larger number of robots. For example, the Army’s long-term Future Combat Systems plan calls for two humans to sit at identical consoles and jointly supervise a team of 10 land robots. In this scenario, the humans delegate tasks to increasingly autonomous robots, but the robots still need human permission to fire weapons. There are many reasons, however, to believe that this arrangement will not prove workable.

Researchers are finding that humans have a hard time controlling multiple units at once (imagine playing five different video games simultaneously). Even having human operators control two UAVs at a time rather than one reduces performance levels by an average of 50 percent. As a NATO study concluded, the goal of having one operator control multiple vehicles is “currently, at best, very ambitious, and, at worst, improbable to achieve.” And this is with systems that aren’t shooting or being shot at. As one Pentagon-funded report noted, “Even if the tactical commander is aware of the location of all his units, the combat is so fluid and fast paced that it is very difficult to control them.” So a push is made to give more autonomy to the machine.
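In software terms, the supervisory arrangement is a permission-gated queue: the robots act autonomously except for weapons release, which blocks on a human grant, and the human is the serial bottleneck. A minimal sketch of that structure, with all names hypothetical:

    import queue

    class Supervisor:
        """One operator 'supervising' many robots: weapons release queues
        up and waits on an explicit human grant."""
        def __init__(self):
            self.pending = queue.Queue()

        def request_fire(self, robot_id, target):
            self.pending.put((robot_id, target))   # robot keeps operating meanwhile

        def review(self, approve):
            granted = []
            while not self.pending.empty():
                req = self.pending.get()
                if approve(req):                   # the human, serially
                    granted.append(req)
            return granted

    sup = Supervisor()
    for i in range(10):                            # the FCS ratio: ten robots per console pair
        sup.request_fire(f"ugv-{i}", "contact")
    print(len(sup.review(lambda r: True)), "fire decisions stacked on one human")

If two UAVs already cut operator performance in half, a queue like this either stalls the robots or pressures the human to rubber-stamp — the push toward autonomy, in code form.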

And then there is the fact that an enemy is involved. If the robots aren’t going to fire unless a remote operator authorizes them to, then a foe need only disrupt that communication. Military officers counter that, while they don’t like the idea of taking humans out of the loop, there has to be an exception, a backup plan for when communications are cut and the robot is “fighting blind.” So another exception is made.

Even if the communications link is not broken, there are combat situations in which there is not enough time for the human operator to react, even if the enemy is not functioning at digital speed. For instance, a number of robot makers have added “counter sniper” capabilities to their machines, enabling them to automatically track down and target with a laser beam any enemy that shoots. But those precious seconds while the human decides whether to fire back could let the enemy get away. As one U.S. military officer observes, there is nothing technical to prevent one from rigging the machine to shoot something more lethal than light. “If you can automatically hit it with a laser range finder, you can hit it with a bullet.”

This creates a powerful argument for another exception to the rule that humans must always be “in the loop,” that is, giving robots the ability to fire back on their own. This kind of autonomy is generally seen as more palatable than other types. “People tend to feel a little bit differently about the counterpunch than the punch,” Noah Shachtman notes. As Gordon Johnson of the Army’s Joint Forces Command explains, such autonomy soon comes to be viewed as not only logical but quite attractive. “Anyone who would shoot at our forces would die. Before he can drop that weapon and run, he’s probably already dead. Well now, these cowards in Baghdad would have to pay with blood and guts every time they shot at one of our folks. The costs of poker went up significantly. The enemy, are they going to give up blood and guts to kill machines? I’m guessing not.”

Each exception, however, pushes one further and further from the absolute of “never” and instead down a slippery slope. And at each step, once robots “establish a track record of reliability in finding the right targets and employing weapons properly,” says John Tirpak, executive editor of Air Force Magazine, the “machines will be trusted.”

The reality is that the human location “in the loop” is already becoming, as retired Army colonel Thomas Adams notes, that of “a supervisor who serves in a fail-safe capacity in the event of a system malfunction.” Even then, he thinks that the speed, confusion, and information overload of modern-day war will soon move the whole process outside “human space.” He describes how the coming weapons “will be too fast, too small, too numerous, and will create an environment too complex for humans to direct.” As Adams concludes, the new technologies “are rapidly taking us to a place where we may not want to go, but probably are unable to avoid.”

The irony is that for all the claims by military, political, and scientific leaders that “humans will always be in the loop,” as far back as 2004 the U.S. Army was carrying out research that demonstrated the merits of armed ground robots equipped with a “quick-draw response.” Similarly, a 2006 study by the Defense Safety Working Group, in the Office of the Secretary of Defense, discussed how the concerns over potential killer robots could be allayed by giving “armed autonomous systems” permission to “shoot to destroy hostile weapons systems but not suspected combatants.” That is, they could shoot at tanks and jeeps, just not the people in them. Perhaps most telling is a report that the Joint Forces Command drew up in 2005, which suggested that autonomous robots on the battlefield would be the norm within 20 years. Its title is somewhat amusing, given the official line one usually hears: Unmanned Effects: Taking the Human Out of the Loop.

So, despite what one article called “all the lip service paid to keeping a human in the loop,” autonomous armed robots are coming to war. They simply make too much sense to the people who matter.

Michael Lewis on Credit Default Swaps

Friday, January 23rd, 2009

The Atlantic has a new business channel, which features an interview with Michael Lewis — author of Liar’s Poker, Moneyball, and The Blind Side, and editor of the recent Panic — in which he attacks certain financial innovations as tools for obfuscation:

Well, there’s probably no innovation that’s entirely useless. But there are some innovations whose use value is so trivial — except as a tool for disguising risk and enabling reckless innovation. A really good example of this is credit default swaps, which everyone has seen mentioned. Credit default swaps are not that complicated on the surface. On the surface they’re just bond insurance. If you buy a credit default swap from me, you’re buying insurance against a municipal bond or a corporate bond or a subprime bond or a treasury bond going bust.

The difference, I guess, being that a third party can buy the swap.

That’s right. And that the value of the insurance can be many times the value of the original bond. So let’s say there’s some really dodgy subprime bond out there that everybody knows is going to go bust but that the market is still pretending is a triple-A bond. You might have insurance that is 100 times the value of the actual bond. So let’s say there’s a million dollars in a bond out there. You might have 100 million dollars in insurance contracts on it. So it’s obviously not insurance at that point. It’s something else. It’s a way to bet on the bond. And it’s a very simple and clean way to bet on the bond.

And one of the really weird things about this instrument is — well, back away from it and think about it for a second. Let’s take a bond, let’s say a General Electric bond. A General Electric bond trades at some spread over Treasuries. So let’s say you get, I dunno, in normal times, 75 basis points over Treasuries, or 100 basis points over Treasuries, over the equivalent maturity in Treasury bonds. So you get paid more investing in GE. And what does that represent? You get paid more because you’re taking the risk that GE is going to welsh on its debts. That the GE bond is going to default. So the bond market is already pricing the risk of owning General Electric bonds. So then these credit default swaps come along. Someone will sell you a credit default swap — what enables the market is that it’s cheaper than that 75 basis point spread — and he’s saying that in doing this he knows GE is less likely to default than the bond market believes.
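Lewis’s pricing point can be made concrete with the standard back-of-envelope “credit triangle,” spread ≈ default probability × (1 − recovery). A sketch under that approximation — the 75 basis point spread is from the passage, while the CDS quote and recovery rate are made up:

    def implied_default_prob(spread_bps, recovery=0.4):
        """Credit-triangle approximation: the spread compensates for expected
        loss, so spread ~= p(default) * (1 - recovery)."""
        return (spread_bps / 10_000) / (1 - recovery)

    bond_spread_bps = 75   # GE over Treasuries, per the passage
    cds_quote_bps = 50     # hypothetical cheaper protection quote

    print(f"bond market implies {implied_default_prob(bond_spread_bps):.2%}/yr")  # 1.25%/yr
    print(f"CDS seller implies  {implied_default_prob(cds_quote_bps):.2%}/yr")    # 0.83%/yr
    # Writing protection below the bond spread amounts to claiming the bond
    # market has overpriced GE's default risk.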

Why does he know that? Well, he doesn’t know that. What really happened was that traders on Wall Street have the risk on their books measured by their bosses, by an abstruse formula called Value at Risk. And if you’re a trader on Wall Street you will be paid more if your VaR is lower — if you are supposedly taking less risk for any given level of profit that you generate. The firm will reward you for that.

Well, one way to lower your Value at Risk as a trader is to sell a lot of credit default insurance because the VaR formula doesn’t count it as risk. Because it’s so unlikely to happen, the formula doesn’t grab it. The formula thinks you’re doing business that is essentially riskless. And the formula is screwed up. So this encouraged traders to sell lots and lots of default insurance because, while they get a small premium for it, it doesn’t matter to them because the firm is essentially saying, “Do it, because we’re not going to regard this risk you’re taking as actual risk.”
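That blind spot is easy to reproduce. A 99% value-at-risk is just a loss percentile, so a trade that collects a small premium in 99.7% of scenarios and loses the whole notional in the rest can show a negative VaR. A toy simulation, with made-up numbers:

    import random

    random.seed(0)
    premium, notional, p_default = 0.002, 100.0, 0.003   # assumed: 20 bp premium, 0.3% default odds

    # Simulated one-period P&L from selling default protection.
    pnl = sorted(premium * notional - (notional if random.random() < p_default else 0.0)
                 for _ in range(100_000))

    var_99 = -pnl[int(0.01 * len(pnl))]    # 99% historical VaR: loss at the 1st percentile
    print(f"99% VaR: {var_99:.2f}   worst case: {-pnl[0]:.2f}")
    # The VaR comes out negative (a gain of 0.20) while the worst case loses
    # 99.80: the 0.3% tail sits past the percentile, so the formula "doesn't grab it."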

It’s insane. That market is huge as a result. But if people actually had to have the capital, like a real insurer, to back up the contracts they’re writing, the market would shrink by — who knows? Who knows what would be left of it?

MRSA rising in kids’ ear, nose, throat infections

Thursday, January 22nd, 2009

MRSA rising in kids' ear, nose, throat infections:

Researchers say they found an “alarming” increase in children’s ear, nose and throat infections nationwide caused by dangerous drug-resistant staph germs. Other studies have shown rising numbers of skin infections in adults and children caused by these germs, nicknamed MRSA, but this is the first nationwide report on how common they are in deeper tissue infections in the head and neck, the study authors said. These include certain ear and sinus infections, and abscesses that can form in the tonsils and throat.

The study found a total of 21,009 pediatric head and neck infections caused by staph germs from 2001 through 2006. The percentage caused by hard-to-treat MRSA bacteria more than doubled during that time from almost 12 percent to 28 percent.

“In most parts of the United States, there’s been an alarming rise,” said study author Dr. Steven Sobol, a children’s head and neck specialist at Emory University.

The study appears in January’s Archives of Otolaryngology, released Monday.

It is based on nationally representative information from an electronic database that collects lab results from more than 300 hospitals nationwide.

MRSA, or methicillin-resistant Staphylococcus aureus, can cause dangerous, life-threatening invasive infections and doctors believe inappropriate use of antibiotics has contributed to its rise.