On Letting a Computer Help with the Work

Thursday, November 15th, 2012

When Thomas Schelling first worked out his game-theoretic models of segregation, he used coins on a grid to manually demonstrate how micromotives could lead to macrobehaviors — and he recommended against using the primitive computers of the time:

I cannot too strongly urge you to get the nickels and pennies and do it yourself. I can show you an outcome or two. A computer can do it for you a hundred times, testing variations in neighborhood demands, overall ratios, sizes of neighborhoods, and so forth. But there is nothing like tracing it through for yourself and seeing the process work itself out. It takes about five minutes — no more time than it takes me to describe the result you would get. In an hour you can do it several times and experiment with different rules of behavior, sizes and shapes of boards, and (if you turn some of the coins heads and some tails) subgroups of nickels and pennies that make different demands on the color compositions of their neighborhoods (Schelling 1974, 48).
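Schelling's coin exercise translates almost line for line into code. Below is a minimal sketch in Python; the parameters (board size, demand fraction, coin counts, iteration cap) are illustrative assumptions of mine, not figures from Schelling's text. Each "nickel" or "penny" stays put if at least a third of its neighbors are its own kind, and otherwise moves to a random empty square:

```python
import random

SIZE = 8          # an 8x8 board, like Schelling's checkerboard
DEMAND = 1 / 3    # fraction of like neighbors each coin insists on

def neighbors(board, r, c):
    """The occupants of the up-to-eight squares adjacent to (r, c)."""
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= rr < SIZE and 0 <= cc < SIZE and board[rr][cc]:
                out.append(board[rr][cc])
    return out

def content(board, r, c):
    """A coin is content if at least DEMAND of its neighbors match it."""
    nbrs = neighbors(board, r, c)
    return not nbrs or sum(n == board[r][c] for n in nbrs) / len(nbrs) >= DEMAND

def step(board):
    """Move each discontented coin to a random empty square; count the moves."""
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if not board[r][c]]
    moved = 0
    for r in range(SIZE):
        for c in range(SIZE):
            if board[r][c] and not content(board, r, c):
                dest = random.choice(empties)
                board[dest[0]][dest[1]], board[r][c] = board[r][c], None
                empties.remove(dest)
                empties.append((r, c))
                moved += 1
    return moved

# Scatter 25 nickels and 25 pennies at random, leaving 14 empty squares.
random.seed(0)
cells = ["N"] * 25 + ["P"] * 25 + [None] * (SIZE * SIZE - 50)
random.shuffle(cells)
board = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

for _ in range(200):       # cap the passes; the board typically settles long before
    if step(board) == 0:   # nobody moved, so every coin is content
        break
```

Run it from different seeds and the same qualitative result typically appears: mild individual demands produce sharply clustered boards, which is the micromotives-to-macrobehavior point of the exercise.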

Later, when computers had advanced enough to give real-time feedback, he changed his mind, learned BASIC, and learned what programming does to the programmer:

First, programming requires a decomposition of a whole that is given before by a more or less intuitive description. Simple components have to be identified and specified. Their interplay, the parameters and their possible values, all that has to be exactly defined in a precise language with a rigorous grammar. Whoever started to program, realises after some lines of coding the ambiguities and the holes in informal descriptions that, beforehand, he or she considered perfectly clear and complete. Programming implies to resolve the ambiguities and to fill the holes. In short: Programming forces the programmer to sharpen his or her view on the subject.

Second, programming has an inherent tendency, if not irresistible seduction, to generalisation: it often transforms the original subject, its features or components, into instances of something much more general.

A first example: In an informal description we may have two groups, blacks and whites. In the program they are represented by two lists. But why to stop with just two. Why not three, four, five … ? Whoever commands the natural numbers, can’t avoid asking that question, and in that moment the idea of a generalised group structure with m groups of possibly different group size is born. Such a generalised view may require not even a single additional line of code, because the original case (blacks and whites only), was technically realised in such a way that, whether two or m groups, is simply a question of just one parameter value in the lines that were already written to cope with two groups. It is a frequent programming experience that code, written to implement a specific feature of the subject matter, unintentionally (!) realises the particular as an instance of a generalisation that goes far beyond the original feature.

Another example: While programming rules of movement, starting from some intuitive descriptions, it becomes obvious that there are much more and totally different possible rules. For instance, rules that require consent of the neighbourhood that someone wants to enter. Almost never can we implement all alternatives. But from now on we know that whatever we implement is just an instance of something more general that might be called a migration regime. Again, programming has changed the view. In short: Programming is an eye opener; by programming — for the most part unintentionally — we get to a more general point of view.
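Both generalisations described above — two groups becoming m groups, and movement rules becoming instances of a "migration regime" — can be made concrete in a short sketch. Everything here, from the integer group encoding to the consent quorum, is an illustrative assumption of mine, not code from the quoted author:

```python
def wants_to_leave(agent, block, demand=0.5):
    """An exit rule: leave if fewer than `demand` of the neighbors share the agent's group."""
    return bool(block) and sum(n == agent for n in block) / len(block) < demand

def admits(agent, block, quorum=0.5):
    """A consent rule: residents admit the agent only if at least `quorum` share its group."""
    return not block or sum(n == agent for n in block) / len(block) >= quorum

def migrate(agent, old, new, regime):
    """A migration regime is just a predicate over (agent, old block, new block)."""
    return regime(agent, old, new)

# Two regimes as instances of the one structure:
exit_only = lambda a, old, new: wants_to_leave(a, old)
exit_with_consent = lambda a, old, new: wants_to_leave(a, old) and admits(a, new)

# Groups are plain integers, so "m groups" needs no additional code at all.
agent = 2
assert migrate(agent, [1, 1, 1, 2], [2, 2, 1], exit_only)              # flees a hostile block
assert migrate(agent, [1, 1, 1, 2], [2, 2, 1], exit_with_consent)      # destination consents
assert not migrate(agent, [1, 1, 1, 2], [1, 1, 1], exit_with_consent)  # destination refuses
```

Because the regime is a function passed in and groups are bare integers, moving from two groups to m, or from unilateral flight to consent-based entry, changes a parameter rather than the structure — which is exactly the "eye opener" the passage describes.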

A Positive Account of Rights

Wednesday, December 22nd, 2010

David Friedman describes three kinds of rights — including his own idiosyncratic definition of positive rights:

If I have a normative right not to be killed, that means that if you kill me you have acted badly, are a bad person, and ought to feel guilty. If I have a legal right, that means that killing me is against the law. If I have a positive right not to be killed, that means that the consequences to you of killing me are such that you probably won’t. Normative rights are moral claims. Positive rights, as I use the term, are descriptions of behavior.

A positive right could, of course, be the consequence of belief in a normative right. If enough people think that killing me is bad and are unwilling to do bad things, I am unlikely to be killed. Alternatively, a positive right could be the result of a legal right — people don’t kill me because if they believe that if they do they will be arrested, tried, convicted, and hanged.

That last notion — that “an interest qualifies as a right when an effective legal system treats it as such by using collective resources to defend it,” a claim Stephen Holmes and Cass Sunstein make in The Cost of Rights — is, according to Friedman, widely held and demonstrably false:

The simplest evidence that it is false is the fact that positive rights, in the form of territorial behavior, predate not merely human government but the human species. Since birds and fish do not have governments or legal systems, those cannot be the source of that behavior or of the associated right.

The logic of territorial behavior is simple and relevant. An individual of a territorial species claims a territory by marking it in a way recognizable to other members of that species. Other members of the species, as a rule, either do not trespass or retreat when confronted by the owner. What enforces this pattern of behavior is a commitment strategy. The claimant has somehow committed himself to fight a trespasser more and more desperately the farther the trespasser penetrates into the territory. Unless one of the two combatants is much more formidable than the other, a fight to the death is a loss for the winner as well as the loser. Hence the trespasser, perceiving the commitment strategy, realizes that continued trespass is a mistake and retreats. The result is a positive property right in the sense in which I have just defined it.
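Friedman's trespasser logic is, in effect, a two-move game solved by backward induction. A toy sketch follows; the payoff numbers are purely illustrative assumptions, not values from the text:

```python
TERRITORY = 10   # value of the contested ground (illustrative)
FIGHT_COST = 25  # cost to each side of a desperate, escalating fight (illustrative)

def proprietor_response(committed):
    """A committed proprietor fights any trespass; an uncommitted one yields."""
    return "fight" if committed else "yield"

def trespasser_payoff(advance, committed):
    if not advance:
        return 0                        # retreat: keep what you already have
    if proprietor_response(committed) == "fight":
        return TERRITORY - FIGHT_COST   # the fight costs more than the prize is worth
    return TERRITORY                    # uncontested gain

def best_move(committed):
    """The trespasser reasons backward from the proprietor's known response."""
    if trespasser_payoff(True, committed) > trespasser_payoff(False, committed):
        return "advance"
    return "retreat"

assert best_move(committed=True) == "retreat"   # a perceived commitment deters
assert best_move(committed=False) == "advance"  # without it, trespass pays
```

So long as the fight costs more than the territory is worth, the trespasser's best reply to a credibly committed proprietor is retreat — the positive property right in Friedman's sense.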

Its source is not a legal right. Could it be a normative right? One cannot dismiss out of hand the possibility that species other than ours feel moral obligations, although it is unlikely that they have moral philosophers to analyze them. But in the case of territorial behavior, it seems natural to interpret any moral feelings involved — guilt felt by the trespasser, shame felt by a proprietor who fails to enforce his claim — as consequence rather than cause. Given the logic of the commitment strategy, a potential trespasser who is unwilling to trespass will be more likely to survive and reproduce than one who is not. Given that potential trespassers recognize commitment strategies and their strength, the potential proprietor whose strategy is supported by what in a human would be considered moral considerations is more believable, hence less likely to have to either make good his commitment to defense or lose his territory — and, quite possibly, his opportunity to reproduce. So it may make sense to think of some moral feelings in animals as patterns of behavior produced by Darwinian evolution in the context of territorial behavior — and perhaps of other moral feelings, including those of humans, as produced in a similar way in other contexts.

Territorial behavior in animals is a particularly clear case, but humans provide lots of examples of positive rights enforced by non-legal means, often involving commitment strategies. Consider a feud society such as saga-period Iceland, pre-Islamic Bedouin society, or modern-day Romanichal Gypsies. What enforces my right not to be robbed is that potential robbers know that I will go to a good deal of trouble to revenge myself against them. What enforces my right not to be killed is the knowledge that anyone who kills me will either have to make a large damage payment (wergild in the Icelandic system) to my kin or risk their killing him, or possibly his kin, in retaliation. In the Icelandic case the commitment strategies were filtered through a legal system — if I brought my claim against you to court and lost the case, I might no longer feel obligated to enforce it. But the court system provided no enforcement mechanism — there was nothing corresponding to an executive branch of government. What enforced the court’s judgment was the plaintiff’s commitment to do so, supported by the commitments of his kin and allies.

Rights in human societies, including modern ones, are based on the same pattern of behavior as territorial behavior in animals or enforcement via feud and the threat of feud, even if less obviously so. Each individual has a view of his entitlements and is willing to bear unreasonably large costs in defense of them. As long as those views are mutually consistent, as long as it is uncommon for two people to believe they own, and be willing to fight for, the same object, we have a reasonably peaceful and orderly society. The form of fighting varies from case to case, society to society — one form of combat in our society is to sue someone, knowing that both parties will bear sizable legal costs as a result. But the underlying logic of the structure is the same.

Civil order is maintained by an elaborate Schelling point, Friedman suggests, a set of imaginary lines defining what each of us believes he is entitled to and is willing to bear large costs to defend:

Where that order clashes with the order that the legal rules purport to maintain, the informal order not uncommonly prevails. The process has been documented by Robert Ellickson in the context of the privately enforced norms of present-day Shasta County (and modern academics) and routinely observed in the unsuccessful attempts to enforce, without individual support, laws that prohibit activities many individuals want to engage in, such as alcohol and marijuana use.

The same pattern can be observed on a larger and cruder scale in international relations. The United Kingdom was willing to bear very large costs in order to defend a few sparsely inhabited islands near the South Pole because those islands were theirs. That was the result of a rational commitment strategy; its absence would put other and more valuable territories at risk, resulting in either losing them or having to bear more and larger costs in their defense.

Not all patterns of rights are equally workable. What many call negative rights — variations on the right to be left alone — are quite workable, while what many people, other than Friedman, call positive rights — the right to a living wage, etc. — are not so workable:

Negative rights are, for the most part, rights that can be defended by individual commitment strategies with only a small risk of clashes due to inconsistent claims. Positive rights — in [this] sense — are open-ended claims against the world, hence almost inevitably inconsistent with each other. My right to control my body is relatively easy to enforce, since it takes substantial effort to violate it. A right by me to control your body in order to provide me with an outcome I claim a right to would be much harder to enforce. The whole structure of rights is built on two interrelated technologies — one determining what claims humans can commit themselves to defend and one determining the costs of defending, or violating, such claims.

The Genesis of Dr. Strangelove

Wednesday, October 20th, 2010

Dan Lindley provides a study guide to Kubrick’s Dr. Strangelove, which discusses the genesis of the film:

Dr. Strangelove is based on Red Alert by Peter George (who used the pen name Peter Bryant). George was an RAF major in military intelligence. While he was serving at a U.S. airbase in the U.K., a B-47 roared overhead, shaking a precariously perched coffee cup and sending it crashing to the floor. Someone said, “That’s the way World War III will start,” and George was off to the races with an idea to write Red Alert. George wrote the book in three weeks.

The story of how Red Alert inspired the film goes back to 1958 when someone handed Thomas Schelling the book during an airplane flight. As the first detailed scenario of how someone might start a nuclear war, Schelling found the book sufficiently interesting to purchase and give away around four dozen copies. Over lunch with a magazine editor, Schelling discussed writing an article on accidental nuclear war, and mentioned Red Alert. The editor suggested opening up the article with a review of the literature on WWIII. So, Schelling wrote the article and reviewed Red Alert, On the Beach, and Alas Babylon. The magazine rejected the article, but it was soon published in the Bulletin of the Atomic Scientists. (36) A friend of Schelling who wrote for the Observer of London got the Bulletin article reprinted in full as the lead story in the features section. Stanley Kubrick read the newspaper story, then the Bulletin article, called up the publishers of Red Alert, and got in touch with George. Kubrick, Schelling, and George then sat down for an afternoon to discuss how to make the movie.

When the book was written, intercontinental missiles were not a factor in the strategic balance. But by the time they discussed the movie, both ground- and submarine-launched missiles were gaining in importance compared to bombers. Kubrick, Schelling, and George spent much time trying to see if they could start the war and play out the crisis with missiles. They could not. Only bombers provided enough time to make all the war room scenes possible. In particular, they wanted to create the strategic choice of whether the President would exploit the bomber launch to send in follow-on forces. (37) With missiles, the war would have started much too quickly. One theme of the book was how hard it was to actually start a nuclear war. Schelling noted that this theme got a bit lost in the film.

According to Schelling, another concern of Kubrick’s was to avoid insulting or attacking the U.S. Air Force. (38) Kubrick found himself in a bind on this because he couldn’t start the war without a psychopathic officer. This was one reason the characters in the film are at times so exaggerated and unbelievable. In the end, a major reason the film is so comedically effective is the way it alternates between absolute realism (such as in its military standard operating procedures and terminology) and incredible zaniness. (39) According to Terry Southern, George’s Red Alert helped set the stage for deadpan realism in Dr. Strangelove: “Perhaps the best thing about the book was the fact that the national security regulations in England, concerning what could and could not be published, were extremely lax by American standards. George had been able to reveal details concerning the “fail-safe” aspect of nuclear deterrence (for example, the so-called black box and the CRIM [sic] Discriminator) — revelations that, in the spy-crazy U.S.A. of the Cold War era, would have been downright treasonous. Thus the entire complicated technology of nuclear deterrence in Dr. Strangelove was based on a bedrock of authenticity that gave the film what must have been its greatest strength: credibility.” (40)

George was concerned that his American friends would hold the film against him. (41) Schelling wrote to reassure him, to say that was not true, that he liked the film and would be welcome as a friend on any future visit to the U.S. Later, Schelling wrote another letter saying he would be bringing his family to London, but George’s wife wrote back that George would not be responding…

Peter George committed suicide in June of 1966, perhaps in part because he suffered “fear and pain about the threat of nuclear war.” (42) One theme of this paper is that many of the fears raised by Peter George and in Dr. Strangelove were remarkably close to reality. The film makes fun of it, but the world was (and still is) a very scary place. Hopefully this article has made this clear, especially in its sections on the logic of deterrence and the devolution of authority, civil-military relations, pre-emption, the precariousness of MAD, and in the comparisons of film language to real language. After much scholarship and experience, these dangers are more easily seen in the year 2000. But in the late 1950s and early 1960s, Peter George was a pioneer in helping make us aware of these dangers. We should be grateful.

In that Bulletin piece, Meteors, Mischief, and War, famed game-theorist Schelling sings the praises of Red Alert, “one of the niftiest little analyses to come along”:

The author does not frighten us with how loosely SAC might be organized and how easily the system could be subverted; what makes this book good fiction is what makes a good mystery — the author has used his ingenuity to make the problem hard.

(Hat tip to Kalim Kassam.)

Both an Atheist and a Believer in Divine-Right Monarchy

Wednesday, November 19th, 2008

Mencius Moldbug half-jokes that he is both an atheist and a believer in divine-right monarchy, citing Sir Robert Filmer’s Patriarcha, before explaining his own less-theocratic take:

But an atheist, such as myself, has a simpler way of getting to the same result. Really, what Filmer is saying, is: if you want stable government, accept the status quo as the verdict of history. There is no reason at all to inquire as to why the Bourbons are the Kings of France. The rule is arbitrary. Nonetheless, it is to the benefit of all that this arbitrary rule exists, because obedience to the rightful king is a Schelling point of nonviolent agreement. And better yet, there is no way for a political force to steer the outcome of succession — at least, nothing comparable to the role of the educational authorities in a democracy.

The Nukes of October

Saturday, March 15th, 2008

The Nukes of October looks at Richard Nixon’s secret plan to bring peace to Vietnam:

Codenamed Giant Lance, Nixon’s plan was the culmination of a strategy of premeditated madness he had developed with national security adviser Henry Kissinger. The details of this episode remained secret for 35 years and have never been fully told. Now, thanks to documents released through the Freedom of Information Act, it’s clear that Giant Lance was the leading example of what historians came to call the “madman theory”: Nixon’s notion that faked, finger-on-the-button rage could bring the Soviets to heel.

Nixon and Kissinger put the plan in motion on October 10, sending the US military’s Strategic Air Command an urgent order to prepare for a possible confrontation: They wanted the most powerful thermonuclear weapons in the US arsenal readied for immediate use against the Soviet Union. The mission was so secretive that even senior military officers following the orders — including the SAC commander himself — were not informed of its true purpose.

After their launch, the B-52s pressed against Soviet airspace for three days. They skirted enemy territory, challenging defenses and taunting Soviet aircraft. The pilots remained on alert, prepared to drop their bombs if ordered. The Soviets likely knew about the threat as it was unfolding: Their radar picked up the planes early in their flight paths, and their spies monitored American bases. They knew the bombers were armed with nuclear weapons, because they could determine their weight from takeoff patterns and fuel use. In past years, the US had kept nuclear-armed planes in the air as a possible deterrent (if the Soviets blew up all of our air bases in a surprise attack, we’d still be able to respond). But in 1968, the Pentagon publicly banned that practice — so the Soviets wouldn’t have thought the 18 planes were part of a patrol. Secretary of Defense Melvin Laird, who opposed the operation, worried that the Soviets would interpret Giant Lance either as an attack, causing catastrophe, or as a bluff, making Washington look weak.

The madman theory was an extension of [the "flexible response"] doctrine. If you’re going to rely on the leverage you gain from being able to respond in flexible ways — from quiet nighttime assassinations to nuclear reprisals — you need to convince your opponents that even the most extreme option is really on the table. And one way to do that is to make them think you are crazy.

Consider a game that theorist Thomas Schelling described to his students at Harvard in the ’60s: You’re standing at the edge of a cliff, chained by the ankle to another person. As soon as one of you cries uncle, you’ll both be released, and whoever remained silent will get a large prize. What do you do? You can’t push the other person off the cliff, because then you’ll die, too. But you can dance and walk closer and closer to the edge. If you’re willing to show that you’ll brave a certain amount of risk, your partner may concede — and you might win the prize. But if you convince your adversary that you’re crazy and liable to hop off in any direction at any moment, he’ll probably cry uncle immediately. If the US appeared reckless, impatient, even insane, rivals might accept bargains they would have rejected under normal conditions. In terms of game theory, a new equilibrium would emerge as leaders in Moscow, Hanoi, and Havana contemplated how terrible things could become if they provoked an out-of-control president to experiment with the awful weapons at his disposal.
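The cliff game's logic can be put as a rough expected-value comparison. In the sketch below, every probability and payoff is an illustrative assumption; the point is only the sign flip:

```python
PRIZE = 100       # reward for whoever stays silent longest (illustrative)
DEATH = -10_000   # going over the cliff together (illustrative)

def value_of_holding_out(p_fall, p_partner_quits, rounds=50):
    """Expected value of staying silent while the chained partner dances near the edge.

    p_fall: per-round chance the dancing pulls both over the cliff.
    p_partner_quits: per-round chance the partner cries uncle first.
    """
    ev, survive = 0.0, 1.0
    for _ in range(rounds):
        ev += survive * p_fall * DEATH                          # the dance goes wrong
        ev += survive * (1 - p_fall) * p_partner_quits * PRIZE  # the partner gives in
        survive *= (1 - p_fall) * (1 - p_partner_quits)         # stalemate continues
    return ev

# Against a visibly careful partner, holding out beats conceding (worth 0);
# against one who seems liable to hop off in any direction, it does not.
careful = value_of_holding_out(p_fall=0.0001, p_partner_quits=0.10)
madman = value_of_holding_out(p_fall=0.05, p_partner_quits=0.01)
assert careful > 0 > madman
```

Crying uncle at once is worth exactly zero, so a partner who merely looks reckless enough to push the expected value of holding out below zero wins immediately — the madman theory in miniature.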

Dobrynin recounted Nixon’s threatening words in his report to the Kremlin: The president said “he will never (Nixon twice emphasized that word) accept a humiliating defeat or humiliating terms. The US, like the Soviet Union, is a great nation, and he is its president. The Soviet leaders are determined persons, but he, the president, is the same.”

Dobrynin warned Soviet leaders that “Nixon is unable to control himself even in a conversation with a foreign ambassador.” He also commented on the president’s “growing emotionalism” and “lack of balance.”

This was exactly the impression that Nixon and Kissinger had sought to cultivate. After the meeting, Kissinger reveled in their success. He wrote the president: “I suspect Dobrynin’s basic mission was to test the seriousness of the threat.” Nixon had, according to Kissinger, “played it very cold with Dobrynin, giving him one back for each he dished out.” Kissinger counseled the White House to “continue backing up our verbal warnings with our present military moves.”

Jerry Muller on Schumpeter

Monday, May 28th, 2007

Arnold Kling cites a number of passages by Jerry Muller on Schumpeter:

He argued that it was precisely the dynamism injected into capitalist society by the entrepreneur that made him an object of antipathy. For the rise of a new entrepreneur…necessarily meant the relative economic decline of those ensconced in the status quo…

In attempting to account for the appeal of socialism, Schumpeter borrowed not only from Nietzsche but from the Italian political theorist Vilfredo Pareto…Pareto’s 1901 essay “The Rise and Fall of Elites,” conveys two themes to which Schumpeter would return time and time again: the inevitability of elites, and the importance of nonrational and nonlogical drives in explaining social action. Pareto suggested that the victory of socialism was “most probable and almost inevitable.” Yet, he predicted…the reality of elites would not change. It was almost impossible to convince socialists of the fallacy of their doctrine, Pareto asserted, since they were enthusiasts of a substitute religion. In such circumstances, arguments are invented to justify actions that were arrived at before the facts were examined, motivated by nonrational drives.

It should come as no surprise that academics and politicians often dislike capitalism:

It was no accident, Schumpeter thought, that capitalism had been so productive…For it appeals to, and helps create, a system of motives that is both simple and forceful. It rewards success with wealth and, no less important, it attracts the brightest and most energetic into market-related activity: as capitalist values come to dominate, a large portion of those with “supernormal brains” move toward business, as opposed to military, governmental, cultural, or theological pursuits.

I’m afraid Dan Klein lost me with his comment:

Nice stuff. What I like especially: They highlight how entrepreneurship can be significantly discoordinating in the Schelling sense of mutual coordination, while significantly coordinating in the Coase/Hayek sense of concatenate or extensive coordination.

On this matter, I find Kirzner, Boettke, Sautet, and many others frustrating, because they resist the distinction between the two coordinations. Once you embrace the distinction, it all becomes clear.

Mr. Counterintuition

Saturday, February 17th, 2007

Michael Spence, who studied under Thomas Schelling, calls the Nobel-winning game theorist Mr. Counterintuition:

He pointed out that it took the U.S. 15 years after World War II to learn to think seriously about the security of its weapons. Before that, weapons did not have combination locks, let alone complex electronic security codes. Now, most weapons will not detonate even if given the codes unless they are at their designated targets. He recalled that a friend who had a role in developing the weapons told him that one day in the late 1950s, he got off a plane at an air base in Germany and saw a military aircraft on the tarmac with a bomb beside it guarded by a single soldier. In those days there were no locks and codes. The man strolled over and asked the soldier what this was. The answer: “I believe it is a nuclear bomb, sir.” When asked what he would do if someone started to roll the weapon away, the soldier replied that he would call his superiors for instructions. A further enquiry established that the phone was some 300 meters away.

That was the level of thinking the US had given the problem, and it spent years bringing other nuclear powers around to the idea that nuclear weapons are good for deterrence and not much else:

Terrorists, Tom insists, “also need to understand that nuclear devices are really only useful for deterrence. They would be unlikely to have the capacity to deliver them on planes or missiles, and would be more likely to smuggle them into a hostile country and hide them in cities, and then threaten to detonate them if attacked — or unless their aims and conditions are met. The object should be not to blow up a city but to deter attacks on their country, region or organization.” One is struck, once again, by the counterintuitive nature of the strategic issues related to these weapons — one has, to a large extent, a powerful strategic interest in the sophistication of one’s enemies.

We spoke, also, about bioweapons. “Three years ago,” Tom explains, “there was a lot of interest in, and concern about, the use of smallpox as a weapon. I was involved in a meeting that included a number of bioweapons experts, and after considerable discussion, I asked how long it would take for a smallpox epidemic deliberately started in the U.S. to spread around the world. The answer was ‘Not long.’ Then how practical are infectious diseases as bioweapons? Is it really likely that terrorists in the Middle East would use smallpox against a neighbor? Because of these considerations the interest in infectious diseases as weapons (as opposed to anthrax for example, which does not spread infectiously from person to person) has declined. But I was struck by the fact experts in bioweapons are not strategists, and by the thought that if our experts hadn’t thought of this, could we be sure that others, including terrorist organizations, had?” Smallpox, in a nutshell, cannot rationally be used as a weapon because it would spread too quickly, a kind of self-inflicted wound and mutually assured destruction.

The Usual Suspects

Friday, August 18th, 2006

Wretchard looks at Thomas Schelling’s game theory and its implications for modern policy, using an illustrative example from The Usual Suspects:

First described is the basic notion of commitment, which communicates to the enemy that you will do what you undertake. Commitment makes deterrence credible, and credibility is the essential problem. “The most difficult part is communicating your intentions to your enemies. They must believe that you are committed to fighting them in order to defend” what you say you will defend for them to take you seriously. As Verbal Kint put it, “to be in power, you didn’t need guns or money or even numbers. You just needed the will to do what the other guy wouldn’t.” To accomplish it no matter what. Schelling taught that threats are more credible if you “burn your bridges or ships,” thereby making it clear that you have only one option: fight. When the Hungarian mob invaded Söze’s home to intimidate him into submitting, he simply killed his family first, illustrating Schelling’s point that to truly be believed “you must get yourself into a position where you cannot fail to react as you said you would.” Such is the power of commitment that when the fictional Keyser Söze demonstrated it absolutely, he ceased to be simply a man and became a force of nature.

Tom Schelling’s key contribution was to establish on a sound mathematical basis the role of will — expressed as commitment — in war. Deterrence was not simply a matter of possessing advanced weapons. That was only half the equation. The other half was to establish that you were absolutely ready to use those weapons to your purpose. And given a choice between superiority in weapons and ascendance in will, weapons always came in second. Die Welt relates the experience of an Israeli officer who fought Hezbollah during the early 1980s. Israel had artillery, tanks, and airplanes to Hezbollah’s guns and knives. But Israel was a liberal democracy and Hezbollah a ruthless criminal organization. The overmatch in will made knives more powerful than tanks, because Hezbollah was willing to use them unhesitatingly. “Hezbollah’s barbarism is legendary. Gen. Effe Eytam, an Israeli veteran of that first Lebanon war, tells of how — after Israel had helped bring “Doctors without Borders” into a village in the 1980s to treat children — local villagers lined up 50 kids the next day to show Eytam the price they pay for cooperating with the West. Each of the children had had their pinky finger cut off.”

None of the weapons in the IDF arsenal could level this disparity in will.

Wretchard then goes on to cite Alexander Solzhenitsyn, who made this comment in his speech to the Harvard class of 1978:

No weapons, no matter how powerful, can help the West until it overcomes its loss of willpower. In a state of psychological weakness, weapons become a burden for the capitulating side. To defend oneself, one must also be ready to die; there is little such readiness in a society raised in the cult of material well-being. Nothing is left, then, but concessions, attempts to gain time and betrayal.

Iranian Nuke Would Be Suicide Bomb

Sunday, February 12th, 2006

In Iranian Nuke Would Be Suicide Bomb, Nobel Prize-winning economist and game-theorist Thomas Schelling shares his thoughts:

Hope for the future rests on the fact that, despite plenty of opportunities to use the bomb in these past few decades — whether the United States in Korea or Vietnam, or Israel when Egyptian troops crossed the Suez in 1973, or the Soviets in Afghanistan — it wasn’t used.

This reality ought to impress India or Pakistan or anyone else who acquires nuclear weapons. By looking at these foregone opportunities, they will realize for their own case that using the bomb would incur universal opprobrium, if not bring devastation down on their own house.

By calling this record to the attention of the Iranian leadership in particular, I hope it will see that any actual use of nuclear weapons other than holding them in reserve for deterrence would cause it to lose any friend it has and multiply their enemies.

How an economic theory beat the atomic bomb

Thursday, October 13th, 2005

Tim Harford named his piece on Thomas Schelling How an economic theory beat the atomic bomb:

If you want to win a Nobel prize without doing technical research, Mr Schelling’s winning formula is simple: find hidden patterns or puzzles of everyday life that nobody else can see, show how they illuminate the biggest questions of the day and write it all up in the most sparkling prose.

The Great Game

Tuesday, October 11th, 2005

In The Great Game, David Henderson summarizes Thomas Schelling’s work in game theory:

Many of the problems he discusses occur, he notes, because it’s too difficult to enter an exchange. Mr. Schelling put it beautifully: “Small children learn to trade stamps with an acumen that the real estate fraternity can only envy, but their parents can travel incommunicado behind a slow truck on a mountain grade without finding a way to make it worth the truck driver’s time to pull off the road for 15 seconds.”

Thomas Schelling, New Nobel Laureate

Monday, October 10th, 2005

Tyler Cowen notes that his former mentor at Harvard, Thomas Schelling, is a new Nobel Laureate. His contributions:

  1. The idea of precommitment.
  2. The paradox of nuclear deterrence.
  3. Focal points.
  4. Behavioral economics and the theory of self-constraint.
  5. The economics of segregation.

When I heard the news, I dug up my copies of Schelling’s Micromotives and Macrobehavior and The Strategy of Conflict.

Evolutionary economics

Monday, July 18th, 2005

In Evolutionary economics, Bob Rowthorn reviews Paul Ormerod’s latest book, Why Most Things Fail:

Ormerod gives many examples of social interaction leading to outcomes which are impossible to predict. The most striking example is Schelling’s model of residential segregation. In the US, there are few racially mixed communities and most blacks and whites live in neighbourhoods which are populated almost entirely by their own kind. This might suggest that there is a strong antipathy between the two groups. Yet a large amount of evidence suggests that this is not the case. Most blacks and whites would like to live in neighbourhoods where their racial group is in a majority, but they are perfectly happy to have a large minority of people from the other group as neighbours.

To explore the implication of such preferences, Schelling ran a number of simulations in which individuals were allowed to move house if they found themselves surrounded by too many of the other racial group. These simulations demonstrated two things. In the course of time, the typical result was that blacks and whites spontaneously relocated themselves into highly segregated neighbourhoods. It was impossible to predict where the boundaries of these neighbourhoods would lie or where any particular individual would end up. But it was a safe bet that the bulk of people would end up surrounded largely by people of their own race. This outcome showed clearly that social interaction may magnify small variations into very large differences. It also showed the limitations of the conventional approach to social phenomena, which assumes that large differences must have large causes.
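The mechanics Rowthorn describes are simple enough to sketch in a few dozen lines, which is part of why the model became so famous. Here is a minimal Python version; the grid size, vacancy rate, tolerance threshold, and step count are illustrative choices of mine, not Schelling’s original parameters:

```python
import random

def schelling(size=15, vacancy=0.1, threshold=0.3, steps=5000, seed=0):
    """Minimal sketch of Schelling's segregation model on a toroidal grid.

    Cells hold 'B', 'W', or None (vacant). An agent is unhappy when fewer
    than `threshold` of its occupied neighbors share its group; each step,
    one unhappy agent moves to a random vacant cell. Returns the final
    average same-group share among each agent's occupied neighbors.
    """
    rng = random.Random(seed)
    cells = ['B', 'W'] * int(size * size * (1 - vacancy) / 2)
    cells += [None] * (size * size - len(cells))
    rng.shuffle(cells)
    grid = {(r, c): cells[r * size + c]
            for r in range(size) for c in range(size)}

    def occupied_neighbors(r, c):
        near = [grid[((r + dr) % size, (c + dc) % size)]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)]
        return [n for n in near if n is not None]

    def unhappy(pos):
        agent = grid[pos]
        if agent is None:
            return False
        occ = occupied_neighbors(*pos)
        if not occ:
            return False
        return sum(n == agent for n in occ) / len(occ) < threshold

    for _ in range(steps):
        movers = [p for p in grid if unhappy(p)]
        if not movers:            # everyone is content: equilibrium reached
            break
        mover = rng.choice(movers)
        vacant = rng.choice([p for p in grid if grid[p] is None])
        grid[vacant], grid[mover] = grid[mover], None

    # Measure segregation: mean same-group share among occupied neighbors.
    shares = []
    for pos, agent in grid.items():
        if agent is None:
            continue
        occ = occupied_neighbors(*pos)
        if occ:
            shares.append(sum(n == agent for n in occ) / len(occ))
    return sum(shares) / len(shares)
```

Even with agents tolerating a two-thirds majority of the other group, the final same-group share climbs well above the 30 percent each individual demanded, and well above the roughly 50 percent a random mix would give. That gap between micromotives and macrobehavior is exactly the point Rowthorn draws out.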

Thomas Schelling

Tuesday, May 10th, 2005

Tyler Cowen passes along a Federal Reserve Bank of Richmond interview with Thomas Schelling, where Schelling answers a number of interesting questions about his work in economics and game theory — such as, Why do some types of criminal activity become organized while others do not?:

Part of this is semantic. Let’s say you have a group of automobile thieves. They may be organized, but we don’t call that ‘organized crime.’ Instead, when we use that term we are almost always referring to a small group of activities: gambling, prostitution, and drugs are the big ones. My question was: What is it that characterizes those things we call ‘organized crime’? The answer is that they all exist as monopolies. There is strong demand for each of the activities I mentioned before, but each of them is illegal. So the people who work in those markets are relatively easy to extort because they cannot turn to the police. As a result, it is possible to gain something approaching monopoly power in those markets. So the bookmakers, prostitutes, and drug dealers are not really the perpetrators of organized crime. They are the victims.