Everybody’s Second Choice

Sunday, August 3rd, 2008

Michael Schrage cites former Chrysler vice chairman Robert Lutz on prototyping to avoid being everybody’s second choice:

When we showed the early prototype for the new “big-rig-inspired” Dodge Ram pickup to consumer focus groups in the early ’90s, the reaction was so polarized that the room practically vibrated with magnetism. A whopping 80 percent of the respondents disliked the bold new drop-fendered design. A lot even hated it! They wanted their pickups to keep on resembling the horizontal cornflake boxes they were used to, not to be striking or bold. According to traditional consumer research strategy, we should have thrown that design out on its ear, or at least toned it down to placate the hatemongers. But that would have been looking through the wrong end of the telescope, for the remaining 20 percent of the clinic participants were saying they were truly, madly, deeply in love with the design. And since the old Ram had only about 4 percent of the market at the time, we figured, What the hell! Even if only half of those positive respondents actually buy, we’ll more than double our share! The result? Our share of the pickup market shot up to 20 percent on the radical new design.
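
Lutz’s back-of-the-envelope bet is easy to check; this sketch uses only the figures quoted in the passage above:

```python
# Figures from Lutz's account of the Dodge Ram focus groups.
current_share = 0.04   # the old Ram's share of the pickup market
loved_it = 0.20        # fraction of clinic participants who loved the design
conversion = 0.50      # Lutz's pessimistic guess: only half of them actually buy

projected_share = loved_it * conversion
print(projected_share)                      # 0.1 -- 10 percent of pickup buyers
print(projected_share > 2 * current_share)  # True: "more than double our share"
```

Even under the pessimistic half-convert assumption, the projection comes out at 10 percent, two and a half times the old share; the actual result, 20 percent, beat even that.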

Serious Power Trips

Saturday, August 2nd, 2008

Serious games and simulations can lead to serious power trips:

Policy and custom expressly forbid the president of the United States from active participation in decision making during national-security war games. The secretary of state plays, the joint chiefs play, the director of the Central Intelligence Agency plays, only the president does not. The U.S. national-security establishment has decreed that no one should know how the president might react to speculative scenarios. Presidential advisers in real national-security emergencies should not be influenced by prior knowledge of how the president responded to a simulated crisis. Nor should potential enemies of the United States. The president should in effect be above simulated frays.

This points to a larger problem:

The Pentagon discovered during 1960s-era war-game exercises that pitting officers of different ranks against one another didn’t work. Well-meaning military innovators had believed that mixed-rank exercises would enhance career development and encourage nonhierarchical communication, but the actual result was rivalry and recrimination. “The Pentagon found out that you don’t play generals against colonels,” recalls defense-simulation designer Clark Abt. “You play peer to peer.”

In fact, Princeton mathematical economist Martin Shubik has pointed out that all models have two sets of rules: the rules of the model itself and the rules of the larger world it inhabits. Beating the boss is a Pyrrhic victory.

STRIPS and Black Boxes

Friday, August 1st, 2008

Michael Schrage warns of the dangers of black boxes:

In 1991, Kidder hired Joseph Jett to arbitrage treasury bonds and STRIPS (separate trading of registered interest and principal of securities, i.e., bonds stripped of their coupon payments). Such arbitrage is theoretically a riskless transaction and would thus not need to be tracked by Kidder’s standard market and credit risk management systems. The firm relied on a computerized expert system that allowed traders to model and simulate their trades in accordance with software rules about valuing such transactions in the bond market. The software also automatically updated the firm’s inventory, position, and profit-and-loss (P&L) statement. In keeping with market conventions, the system valued the STRIPS lower than their associated bonds. This difference was reflected in the firm’s P&L statement, which was also the basis for assessing trader bonuses. By entering into forward transactions on the synthetic STRIPS, Jett was able to defer when the actual losses were recognized on the P&L statement by taking up still larger positions in STRIPS and then digitally reconstituting synthetic STRIPS already in the system.

In 1993 Jett enjoyed STRIPS profits in excess of $150 million; he received a $12-million bonus and the chairman’s “Man of the Year” award. By March 1994, when Jett’s positions included $47 billion worth of STRIPS and $42 billion worth of reconstituted STRIPS, Kidder management decided to figure out Jett’s secret. A month later, the firm announced that Jett had falsely inflated his profits in excess of $350 million. He was fired and sued for fraud.

Seriously Unwelcome Surprises

Thursday, July 31st, 2008

Michael Schrage notes that the real value of a model or simulation stems from its power to generate useful surprise:

Louis Pasteur once remarked that “chance favors the prepared mind.” It holds equally true that chance favors the prepared prototype: models and simulations can and should be media to create and capture surprise and serendipity. Yet surprises are not always welcome.

Indeed, surprises are not always welcome:

Clark Abt of Abt Associates, a pioneer in applying simulation games to public policy, recalls running a simulation for the Agency for International Development (AID) involving sustainable economic development in a developing country. “The simulation was biased in favor of saving the forests, while still allowing for a growing population and increasing the standard of living,” Abt recalls. The overt goal was, in his words, “to learn how to save the environment in a politically responsible way while having healthy economic development.” But practically every run of every simulation led to the relatively rapid destruction of the ecologically cherished but commercially irresistible forests. “By the end of the day, the forests were all gone,” Abt remembers. “The AID types were really pissed off.”

So what did AID do in the ugly face of this consistent and politically incorrect outcome?

I think you already know what they did:

The agency shut down the exercise.

Abt makes a few amusing points about models and simulations:

  • “You know you have something when the model has a life of its own.”
  • Abt compares models to women’s skirts: “They should be long enough to cover the subject but short enough to be interesting.”

Serious Accidents and Teamwork

Wednesday, July 30th, 2008

In Serious Play, Michael Schrage describes how a life-or-death management issue was uncovered by accident, when regulators went to test the safety of pilots working longer shifts in the newly deregulated air-travel market of the 1980s:

The researchers tested two groups of test crews: those who flew the scenario after a minimum of two days off, as if it were the first leg of a three-day trip (preduty), and those who flew the scenario as the last segment of a three-day trip (postduty). The scenario was characterized by poor weather that forced a missed approach to a landing. The missed approach was further complicated by a hydraulic-system failure that created a high-speed, high-workload situation. The two pilots had to select an alternate landing site and manually extend the plane’s gear and flaps while flying an approach at higher-than-normal speed.

As expected, the postduty crews had had less presimulation sleep and reported significantly more fatigue. But, to the researchers’ astonishment, “fatigued crews were rated as performing significantly better and made fewer serious operational errors than the rested, preduty crews.”

As NASA’s researchers commented, “in hindsight, the finding shouldn’t have been a surprise at all. By the very nature of the scheduling, most crews in the postduty condition had just completed three days of operation as a team. By contrast, those in the preduty condition normally did not have the benefit of recent experience with their other crew members.”

When the researchers reanalyzed their data, fatigue was found to be a far less statistically significant safety factor than whether the crews had recently flown together. The simulation findings indicate that crew schedules resulting in frequent mixing of pilot teams can have significant operational implications. The NASA researchers noted that no fewer than three of the worst 1980s-era accidents — a stall under icy conditions, an aborted takeoff that landed the plane in the water, and a runway collision in dense fog — all involved crews paired for the first time.

Zucchiniware

Tuesday, July 29th, 2008

It has been a while since I mentioned Michael Schrage’s Serious Play, but I thought I’d share the story of Zucchiniware:

One of the dullest low-level tasks in creating software at Microsoft is managing “the daily build,” which is, in practice, a daily prototype of the product in process. The person performing the daily build collects all the code from the programmers on the product team and puts it on a single computer to see if it all works together. For years, this task was performed by an entry-level person and regarded as mind-numbing grunt work. One manager changed that in a way that made the process more efficient and more effective. Instead of delegating the task to a grunt, the manager gave the daily-build responsibilities to the people writing the code. Each day the programmers would give their code to one “buildmeister,” who put it all together. If the code wasn’t compatible, the person whose software “broke the build” became buildmeister as punishment until someone else’s code broke the build. In the summer of 1996, the buildmeister was also given an enormous zucchini — “the zucchini of questionable freshness” — sometimes with Groucho Marx glasses and a fake nose, to keep until the next buildmeister was named.

Delegating the task of buildmeister to the team changed Microsoft’s daily prototyping process for the better. More developers got to see how their work fit together, or didn’t. No one wanted to be buildmeister, so an extra incentive to hand in quality code was created. What’s more, the unpleasant task of build management was equitably shared by everyone in the group. Accountability, responsibility, and quality were thus aligned.

The realignment had other important repercussions. The smartest and savviest high-level software developers hated being buildmeisters and wanted to spend as little time on the task as possible. But instead of weaseling out, they wrote tools to automate the task of buildmeister. The result? Microsoft developers now manage the build with a fraction of the friction and in a fraction of the time they did in the mid-1990s.

Look, But Don’t Touch

Monday, January 7th, 2008

I’ve been discussing Michael Schrage’s Serious Play, which examines how organizations use models, simulations, and prototypes to stimulate innovation:

Can Detroit’s lagging competitiveness in the 1980s be blamed in part on its prototyping media? Absolutely. Intricate and expensive clay models didn’t lend themselves to easy modification or rapid iteration. The sheer effort required to craft them actually made them more like untouchable works of art than malleable platforms for creative interaction. The medium’s message is, Look but don’t touch. “American automobile companies didn’t have an iterative culture,” says IDEO’s David Kelley. “Clay…was like God’s tablets.” GVO’s Michael Barry agrees: “When a model starts to harden up,” he says, “so does a lot of the thinking.” Clay was more than a medium; it was a metaphor for management.

Daniel Whitney of MIT’s Draper Labs, who has studied the use of computer-aided design tools in Japan, observes that until the 1990s, U.S. car companies attempted to use clay models as inputs for their computer-aided design systems. This approach combined the worst of both media worlds: it was labor-intensive and imprecise, analogous to typing a handwritten novel into a word processor, editing the printout by hand, and retyping the final version into a computerized typesetting system. The cost in time, labor, and errors was painfully high.

Serious Politics

Saturday, January 5th, 2008

It’s been a while since I last mentioned Michael Schrage’s Serious Play, which examines how organizations use models, simulations, and prototypes to stimulate innovation.

One of his key points is that prototypes are always political, because knowledge is power, and this warps how prototypes get made and shared:

As we have seen, some prototypes raise political questions that the organization is unwilling or unable to answer. A primary reason for the failure of the IBM PCjr home computer in the mid-1980s was that IBM management had decided it might cannibalize sales from IBM’s popular line of personal computers. The product of a spec-driven culture, the PCjr was deliberately hobbled in the prototyping process to thwart that possibility. Less than two years after its introduction, the PCjr was withdrawn. IBM’s internal politics of prototyping killed it.

Serious Legislative Innumeracy

Friday, December 7th, 2007

As I’ve already mentioned, in Serious Play, Michael Schrage, of the MIT Media Lab, examines how organizations use models, simulations, and prototypes to stimulate innovation.

Sometimes even a valid model doesn’t guarantee useful communication:

A congressman who favored a “soft-technology” approach to U.S. energy needs was discussing demand projection with a modeler. He pointed out that adequate conservation measures and modest lifestyles could reduce growth of electrical demand to 2 percent per year. “But Congressman,” said the modeler, “even at 2 percent per year, electrical demand will double in thirty-five years.”

“That’s your opinion!” exclaimed the congressman.
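
The modeler’s figure is not an opinion at all; it falls straight out of compound growth, and the familiar “rule of 70” shortcut gives the same answer. A quick check, using nothing beyond the 2 percent figure from the anecdote:

```python
import math

growth = 0.02  # 2 percent annual growth in electrical demand

# Exact doubling time under compound growth: solve (1 + g)^t = 2 for t.
years = math.log(2) / math.log(1 + growth)
print(round(years, 1))  # 35.0 -- the modeler's "thirty-five years"

# The rule-of-70 shortcut: doubling time is roughly 70 / (growth rate in %).
print(70 / (growth * 100))  # 35.0
```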

Serious Taboos

Wednesday, December 5th, 2007

As I’ve already mentioned, in Serious Play, Michael Schrage, of the MIT Media Lab, examines how organizations use models, simulations, and prototypes to stimulate innovation.

He notes that when we want to learn about an organization, we should fight the impulse to look at what it puts into its models and simulations:

“I’ve learned that you learn far more about an organization from what they won’t model than from what they do,” asserts political scientist Garry Brewer, coauthor of the classic study of U.S. military simulations The War Game. “What I’ve observed — in both the military and private industry — is that organizations frequently leave out the very assumptions that are most important or most threatening to their sense of themselves. They always have a ‘good reason’ for this…. As a result, many organizations expend an extraordinary amount of effort developing models that can never be as useful or as valid as they say they want.”

For example:

In its war games during the 1980s, for example, the U.S. Navy would not allow aircraft carriers — its biggest, most expensive, and perhaps most controversial weapons platform — to be sunk hypothetically. This taboo persisted even after the Argentines successfully sank a British carrier during the Falklands War. It held fast even when the navy’s own submariners argued that carriers were particularly vulnerable to under-sea attack. For a variety of budgetary, political, interservice-rivalry and national-security reasons, the navy was permitted to run extensive war games and simulations in which its biggest and most vulnerable carriers were given a pass. The taboo was tacitly respected in virtually all formal reviews. External efforts to simulate conflicts in which carriers were destroyed were met with threats of security classification. One result, documented in Thomas B. Allen’s War Games, a popular history of U.S. war gaming, is that the navy acquired a reputation for cheating that undermined the credibility of naval proposals and exacerbated interservice rivalries. This particular taboo was deeply ironic because, as Harvard’s Stephen Peter Rosen ably documents in Winning the Next War, simulations and war games had been largely responsible for encouraging the navy to adopt aircraft carriers in the first place.

I discussed the U.S. Navy’s effective use of war games in Learning to Learn to Fight.

Erratum: I bow to mon frère’s superior war-nerditry, for he caught this error in Schrage’s text: the HMS Sheffield was a destroyer, not a carrier.

Serious Shell Games

Wednesday, December 5th, 2007

As I’ve already mentioned, in Serious Play, Michael Schrage, of the MIT Media Lab, examines how organizations use models, simulations, and prototypes to stimulate innovation.

One of the most important lessons is that it matters how we use those models, simulations, and prototypes:

At Royal Dutch/Shell, the world’s second-largest oil company, senior executives used to be urged to come up with three scenarios whenever they considered a strategic course of action. Each scenario was typically a small jewel of narrative analysis and foresight. But there was a catch. “The problem was we always chose the middle one,” Shell UK head Chris Fay told the Financial Times. “So now we only put forward two.”

Serious Play Between the Spreadsheets

Tuesday, December 4th, 2007

As I’ve already mentioned, in Serious Play, Michael Schrage, of the MIT Media Lab, examines how organizations use models, simulations, and prototypes to stimulate innovation.

One of the most important tools for serious play is the spreadsheet — which may not seem particularly playful for those outside the world of finance:

“Spreadsheets totally changed the financial business,” observes George Gould, a cofounder of the Donaldson, Lufkin, Jenrette investment-banking firm and undersecretary of the Treasury in the Reagan Administration. “Certainly, spreadsheets made CFOs more powerful than they used to be — a fact that is reflected in their pay scales.”

Low-cost spreadsheet software effectively launched the largest and most significant experiment in rapid prototyping and simulation in the history of business. [...] Financial models that had once cost thousands of dollars to design and build now cost thousands of pennies. [...] Within five years of the 1979 introduction of VisiCalc, the first electronic spreadsheet for personal computers, over 1 million software spreadsheets were being sold annually.

Here’s where things get interesting:

Operationally, Gould asserts, spreadsheets affected every significant facet of finance. “They were the great leveraged-buyout tool of that [1980s] era,” he notes. “They turned what had been a traditional financial analysis into a blueprint of how to run the business to maximize cash flow. Mergers and acquisitions once driven by long-term investment-banking relationships were now being driven by aggressive young bankers with even more aggressive spreadsheet models. But they were seen as credible models, so boards of directors were legally obligated to take them seriously.”

Spreadsheets turned financial analysis into a blueprint for running the company. But that’s not the main reason they caught on, at least not initially:

Dan Bricklin, the Harvard Business School student who created VisiCalc with MIT’s Bob Frankston, attributes the success of his software to the speed with which it paid for itself. Bricklin observes that well-heeled Wall Street analysts — thoroughly sick and tired of recalculating spreadsheet after spreadsheet on paper — would cheerfully shell out over $2,500 to buy VisiCalc and an Apple II personal computer simply to be able to reduce the time and tedium associated with the manual approach. “For most of these guys,” Bricklin recalls, “the payback for their investment was under a week.”