The Ice Cream Ordering Sequence

Friday, September 21st, 2007

Joe Sugarman, in Triggers, uses his Ice Cream Ordering Sequence to explain a sales technique:

In the late 1950s I was working in New York selling printing equipment. One day after dinner, I decided to stop by a small ice cream parlor to have a dish of ice cream. I sat down at the counter and the waitress asked me for my order.

I requested my favorite dessert, “I’ll have a dish of chocolate ice cream with whipped cream.”

The waitress looked at me with a puzzled expression, “You mean a chocolate sundae?”

“No, I want a dish of chocolate ice cream with whipped cream,” was my response.

“Well, that’s a chocolate sundae without the syrup,” replied the waitress.

“Isn’t it just chocolate ice cream with whipped cream? What’s the difference?” I inquired.

“Well, a sundae is 35 cents and plain ice cream is 25 cents. What you want is a sundae without the syrup,” replied the waitress, with a rather smug expression on her face.

“OK, I want chocolate ice cream with whipped cream, so if you have to charge me 10 cents more, go ahead,” was my reply. (This took place in the ’50s when a dollar was worth a lot more than it is today.)
[...]
And for the next few weeks, each time I ordered my favorite dessert, regardless of the restaurant, I’d still go through the same hassle.

One evening, after having worked really hard during the day, I was finishing my meal in a restaurant in mid-town Manhattan when the waitress looked at me and asked, “Would you like dessert?”

I really wanted my favorite, but I just didn’t feel like going through the entire verbal routine that I had been experiencing for the last few weeks. “I’ll have a dish of chocolate ice cream,” was my response. I didn’t ask for the whipped cream. This was a simple request — one I didn’t expect a hassle over.

As the waitress was walking away, I thought to myself, in what must have been a fraction of a second, how much I really wanted chocolate ice cream with whipped cream and that I should not let myself be intimidated by a waitress. “Hey, miss,” I called, as the waitress was still walking away, “could you put whipped cream on that ice cream?”

“Sure,” was her response. “No problem.”

When the check came, I noticed that I had been charged just 25 cents for the ice cream and whipped cream — something for which I had been charged 35 cents before.

How is this used in sales?

A good example of this can be seen at car dealerships. The salesperson tallies your entire order, gets approval from the general manager, and then has you sign the purchase contract. As she is walking away to get the car prepped and ready for you to drive it away, she turns to you and says, “And you do want that undercoating, don’t you?” You instinctively nod your head. The charge is added to your invoice. “And you’ll also want our floor mats to keep your car clean as well, won’t you?”

Once a commitment is made, the tendency is to act consistently with that commitment. The customer nods his head.
[...]
One of the important points to remember is to always make that first sale simple. Once the prospect makes the commitment to purchase from you, you can then easily offer more to increase your sales. This is very true for products sold from a mail order ad or from a TV infomercial. I have learned to keep the initial offer extremely simple. Then, once the prospect calls and orders the product I am offering, and while the prospect is on the phone, I offer other items and end up with a larger total sale. An additional sale occurs over 50% of the time, depending on my added offer.

Physicist shows how steroids can fuel home runs

Friday, September 21st, 2007

Physicist shows how steroids can fuel home runs — with some fairly simple math:

Calculations show that, by putting on 10 percent more muscle mass, a batter can swing about 5 percent faster, increasing the ball’s speed by 4 percent as it leaves the bat.

Depending on the ball’s trajectory, this added speed could take it into home run territory 50 percent more often, said Roger Tobin of Tufts University in Boston.

“A 4 percent increase in ball speed, which can reasonably be expected from steroid use, can increase home run production by anywhere from 50 percent to 100 percent,” said Tobin, whose study will be published in an upcoming issue of the American Journal of Physics.

You don’t have to increase your average hitting distance much to double the tiny fraction of your hits that go over the wall.
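
Here’s a toy calculation of that threshold effect, with numbers of my own invention (not Tobin’s): assume fly-ball distances are roughly normal, with a 330-foot mean and a 40-foot spread, a 400-foot fence, and distance scaling roughly linearly with ball speed over this small range.

    import math

    def normal_tail(z):
        """P(Z > z) for a standard normal, via the complementary error function."""
        return 0.5 * math.erfc(z / math.sqrt(2))

    # Toy numbers, not Tobin's: fly-ball distances ~ Normal(330 ft, 40 ft),
    # with a 400 ft fence.
    mu, sigma, fence = 330.0, 40.0, 400.0

    def hr_fraction(speed_factor=1.0):
        # A ball that would have traveled d feet now travels d * speed_factor,
        # so it clears the fence when d exceeds fence / speed_factor.
        z = (fence / speed_factor - mu) / sigma
        return normal_tail(z)

    base, juiced = hr_fraction(1.00), hr_fraction(1.04)
    print(f"home runs: {base:.1%} of fly balls -> {juiced:.1%} "
          f"(+{juiced / base - 1:.0%})")  # roughly a doubling, like Tobin's upper estimate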

Soccer beats jogging for fitness

Friday, September 21st, 2007

Soccer beats jogging for fitness — which does not surprise me at all:

The researchers selected men with similar health profiles aged 31 to 33 and split them into groups of soccer players, joggers, and couch potatoes — who not surprisingly ended the three-month study in the worst shape.

Each period of exercise lasted about one hour and took place three times a week. After 12 weeks, researchers found that the body fat percentage in the soccer players dropped by 3.7 percent, compared to about 2 percent for the joggers.

The soccer players also increased their muscle mass by almost 4.5 pounds, whereas the joggers didn’t have any significant change. Those who did no exercise registered little change in body fat and muscle mass.

“Even though the football (soccer) players were untrained, there were periods in the game that were so intense that their cardiovascular systems were maximally taxed, just like professional football (soccer) players,” said Dr. Peter Krustrup, head of Copenhagen University’s department of exercise and sport sciences, who led the study.

The soccer players and the joggers had the same average heart rate, but the soccer players got a better workout because of intense bursts of activity. Krustrup and his colleagues found there were periods during soccer matches when the players’ hearts were pumping at 90 percent of their full capacity. But the joggers’ hearts were never pushed as hard.

Unlike the soccer players, the joggers consistently thought their runs were exhausting.

“The soccer players were having more fun, so they were more focused on scoring goals and helping the team, rather than the feeling of strain and muscle pain,” Krustrup said.

"Happy Feet" director shooting "Justice League"

Friday, September 21st, 2007

“Happy Feet” director shooting “Justice League”:

“Happy Feet” director George Miller is in talks to bring the superheroes of the “Justice League of America” comic books to the big screen.

I prefer to think of it as “Mad Max” director shooting “Justice League”:

Miller wrote and directed the Mad Max movies starring Mel Gibson (Mad Max, The Road Warrior, and Mad Max Beyond Thunderdome); co-wrote Babe and wrote and directed its sequel; and co-wrote and directed Lorenzo’s Oil. He also directed The Witches of Eastwick, starring Jack Nicholson, Susan Sarandon, Cher, and Michelle Pfeiffer, as well as the fourth segment of Twilight Zone: The Movie, which was hailed as the best segment in most critical reviews.

There are some challenges to overcome:

One thorny issue the production needs to deal with is casting. Warner Bros. is in production on “The Dark Knight,” a sequel to “Batman Begins,” starring Christian Bale, and is in development on another Superman movie with Brandon Routh as Clark Kent/Superman. Those two actors will not reprise their roles for the “League” movie as the studio is intent on keeping all of its superhero movies as separate franchises. “League” also is looked at as a launchpad for other comic book movies.

As such, the studio is hoping to cast the movie with lesser-known actors, and an international search is under way.

The smaller names in the movie will help with the second issue facing the production: budget. A “League” movie was long thought impossible simply because the thinking was that any undertaking would break the bank on big-name actors and special effects. On the effects front, media like animation were considered before the studio decided to stick with live action. Miller’s “League” will be effects-intensive. Some motion capture likely will be used as well.

As you might imagine, anyone who directs both Mad Max and Babe is an interesting fellow — he was a medical doctor before becoming a filmmaker.

Airport chocolate, ReBooks, and Dune

Friday, September 21st, 2007

Orson Scott Card reviews Airport chocolate, ReBooks, and Dune — but I’ll skip to the part where he takes another look at Dune and its uncanny prescience:

There was considerable irony in Dune’s use of Arabic culture and language as the explicit basis of the “Fremen,” the desert dwellers who become the source of Paul Atreides’ power and, when he unleashes them, the scourge of the universe.

Herbert traces the roots of Fremen culture from world to world, and makes it clear that, while the specifics of Islamic belief are never laid out, the customs and culture of these people have been Muslim all along. (One of the great sources of their seething anger against the empire is that they have been denied the right to the Haj — the pilgrimage that Muslims make to Mecca.)

The emotional core of the novel, then, comes from a T.E. Lawrence-like character, Paul Atreides, coming to dwell with and learning to live as an Arab Muslim, until he is able to lead them to victorious battle.

Paul, being a non-Muslim, treats the idea of jihad as an abhorrent one; he long tries to resist the blood and horror of such a thing, though by the end of the book he has given up and realizes that the jihad will happen and cannot be prevented or even controlled.

So here’s the thought that occurred to me during such passages of Dune: What if Osama bin Laden somehow read Dune during his formative years? Or, if he did not read it himself, certainly there were Arab Muslim students in America who did read it, and the book might well have been part of the reason they became receptive to Osama’s ideas.

Because a Muslim would not read this book the same way I did. To an Arab Muslim, the Arabic words and names would leap off the page; the Fremen characters would be the ones an Arab reader would most identify with.

Such a reader would not feel any of Paul Atreides’ reluctance for jihad — on the contrary, he would be hoping Paul would fail to stop the jihad.

And when, at the end of the book, the Arab jihad is triumphant, this reader — Osama or another of his ideology — would not only feel great emotional satisfaction, he would have the blueprint for his own future.

Because the Fremen in Dune triumph, not just because of the force of their arms or their courage in battle, but because they control the only source of the “Spice,” a substance only created in the complex desert ecology of Arrakis, the planet they control. Without Spice the starships cannot navigate, and interstellar trade would grind to a halt.

The whole economy of the interstellar empire is dependent on and therefore under the ultimate control of the Fremen. Anything the offworlders do to them will hurt the offworlders far more than it hurts the Fremen. The parallel with oil is obvious.

I can just see such a reader thinking, This isn’t fiction. This is the future. This is why jihad not only can work but must work; we lack only a leader to show us the way. The novel made it a European (in culture) who comes to the poor Fremen and leads them, but this is nonsense.

To such a reader, the true founder of the victory of the Fremen is Liet Kynes, the native-born Fremen who studied offworld science and then came home and, under the noses of their colonial rulers, prepared the Fremen for jihad and victory.

Remember that Herbert wrote Dune in the 1960s, before the first oil embargo, before any Islamist government was ever formed.

Whether or not Dune had any causal influence on the rise of Al Qaeda, Herbert certainly did a superb job of predicting the rise and the power of such an ideology. I would be surprised if there were not, among the followers of Osama bin Laden, at least a few readers of Dune for whom this book feels like their future, their identity, their dream.

In other words, Herbert got it horribly right.

Meanwhile, it’s one of the seminal novels of science fiction, and one of the most important novels in the English language in the second half of the twentieth century. It’s a shame that it is only taught and discussed in classes on science fiction instead of taking its rightful place in literary studies.

It is laughable to think of some of the trivial books from the same period that are taught — by professors who sneer at all science fiction. They still celebrate literature about the adolescent “counterculture” of the 1960s, while the fiction that was capturing the imagination of the best and brightest of that generation, and which still bears a significant relationship to the real world, is ignored.

I guess that’s what the ivory tower is all about.

Velociraptor was a feathered fiend

Friday, September 21st, 2007

Velociraptor was a feathered fiend:

Scientists have suspected for several years that velociraptors were feathered beasts, but only now have they been able to identify what they believe is conclusive proof. Close analysis of a velociraptor forelimb unearthed in Mongolia in 1998 reveals that quill knobs were present on the fossilised bone. Quill knobs, which are found on many modern bird species, are where the flight or wing feathers are anchored to the bone by ligaments.

Velociraptors had short forelimbs compared with modern birds’ wings, which has led researchers to conclude that they were flightless but had probably descended from an extinct creature that had been able to fly. That the velociraptors had retained at least some feathers suggests that the feathers continued to have a role, even if not for flight.

The researchers said that one of the most likely functions of the feathers was to display to other velociraptors, perhaps in courtship rituals or as a show of strength against aggressors. Other functions could have included use as a shield to protect eggs, a temperature control to prevent the dinosaurs from getting too hot or cold, or to help them to manoeuvre while running.

Mark Norell, one of the researchers from the American Museum of Natural History, said: “The more that we learn about these animals the more we find that there is basically no difference between birds and their closely related dinosaur ancestors like velociraptor. Both have wishbones, brooded their nests, possess hollow bones and were covered in feathers. If animals like velociraptor were alive today our first impression would be that they were just very unusual looking birds.”

The fossil analysed for the study came from a velociraptor that was estimated to have been 5ft (1.5m) long, 3ft tall and weighing 33lb (15kg) when it died.

Critical Chain 4

Thursday, September 20th, 2007

I’ve been discussing Eli Goldratt’s third business novel, Critical Chain, which explores project management.

Goldratt’s primary point is that padding individual task estimates with safety just means that a lot of time gets wasted. All that safety should get pooled into a few strategically placed buffers.

But Goldratt makes a secondary point about multi-tasking — namely that it does terrible things to lead times.

Imagine that you have three tasks to get done — X, Y, and Z — and each takes 10 days to finish. Your lead time should be 10 days for each task.



If, on the other hand, you start each task, get it half done, then flit to another of the tasks, get it half done, then work on the remaining untouched task, and get it half done, before returning to the first and finishing it, then your lead time will double to 20 days for each task.
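
A quick sketch makes the arithmetic concrete; the two schedules below are the scenarios just described, and lead time means the span from a task’s first day of work to its last.

    def lead_times(schedule):
        """schedule: ordered list of (task, days) work slices.
        Lead time = start of a task's first slice to end of its last slice."""
        start, finish, clock = {}, {}, 0
        for task, days in schedule:
            start.setdefault(task, clock)
            clock += days
            finish[task] = clock
        return {task: finish[task] - start[task] for task in finish}

    sequential  = [("X", 10), ("Y", 10), ("Z", 10)]
    interleaved = [("X", 5), ("Y", 5), ("Z", 5), ("X", 5), ("Y", 5), ("Z", 5)]

    print(lead_times(sequential))   # {'X': 10, 'Y': 10, 'Z': 10}
    print(lead_times(interleaved))  # {'X': 20, 'Y': 20, 'Z': 20}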

Now, Goldratt only hints at this in Critical Chain, but how bad is extra lead time? Does it matter? Well, yes and no. For any task not on the critical path, no, it shouldn’t matter, not as long as there’s enough slack. For any task on the critical path, yes, of course, it matters a great deal.

To play devil’s advocate for a moment though, when would it make sense to switch away from Task X to Task Y? Goldratt asserts that workers multi-task simply to keep busy — and I’m sure that’s often the case — but what happens when you can work on Task X, but Task Y is critical? You work on Task X until Task Y is ready for you, then you switch to Task Y, leaving Task X half done. What happens when Task Z, which is even more critical, is ready for you? You switch to it.

The problem comes when you switch back to Task X without finishing Z, then Y, because those were the more critical tasks; otherwise you never would have — or never should have — switched to them in the first place.



In such a case, starting a low-priority task early, to keep busy, might inflate lead time numbers, but it doesn’t hurt the project’s progress at all.

The key is always knowing which tasks are critical or threatening to become critical — to have buffer-driven task priorities. By looking at each path’s relative buffer burn rate — the percentage of the buffer penetrated versus the percentage of work completed on that path — we can immediately see which paths, and thus which tasks, deserve priority.
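
As a sketch (with made-up numbers), buffer-driven priorities might look something like this: any path whose buffer is burning faster than its work is completing floats to the top of the list.

    def burn_rate(buffer_used, buffer_total, work_done, work_total):
        """Relative buffer burn rate: buffer penetration per unit of progress.
        Values above 1.0 mean the buffer is burning faster than work completes."""
        penetration = buffer_used / buffer_total
        progress = max(work_done / work_total, 1e-9)  # guard against zero progress
        return penetration / progress

    paths = {  # illustrative numbers only
        "critical path":  dict(buffer_used=3, buffer_total=10, work_done=40, work_total=80),
        "feeding path A": dict(buffer_used=4, buffer_total=5, work_done=10, work_total=40),
        "feeding path B": dict(buffer_used=1, buffer_total=5, work_done=30, work_total=40),
    }

    for name, p in sorted(paths.items(), key=lambda kv: -burn_rate(**kv[1])):
        print(f"{name}: burn rate {burn_rate(**p):.2f}")
    # Feeding path A leads: 80% of its buffer is gone with only 25% of its work done.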

The Pirates’ Code

Thursday, September 20th, 2007

James Surowiecki looks at The Pirates’ Code and George Mason University economist Peter Leeson’s research on pirate politics:

Leeson is fascinated by pirates because they flourished outside the state — and, therefore, outside the law. They could not count on higher authorities to insure that people would live up to promises or obey rules. Unlike the Mafia, pirates were not bound by ethnic or family ties; crews were as remarkably diverse as in the “Pirates of the Caribbean” films. Nor were they held together primarily by violence; while pirates did conscript some crew members, many volunteered. More strikingly, pirate ships were governed by what amounted to simple constitutions that, in greater or lesser detail, laid out the rights and duties of crewmen, rules for the handling of disputes, and incentive and insurance payments to insure that crewmen would act bravely in battle. (The rules that governed a ship that the buccaneer John Exquemelin sailed on, for instance, provided that six hundred pieces of eight would go to a man who lost his right arm.) The Pirates’ Code mentioned in the “Caribbean” series was not, in that sense, a myth, although in effect each ship had its own code.

But rules alone did not suffice. Pirates also needed to limit the risk that their leaders would put individual interests ahead of the interests of the ship. Most economists today would call this problem “self-dealing”; Leeson uses the term “captain predation.” Some pirates had turned to buccaneering after fleeing naval and merchant vessels, where the captain was essentially a dictator — “his Authority is over all that are in his Possession,” as one contemporary account had it. Royal Navy and merchant captains guaranteed themselves full rations while their men went hungry, beat crew members at their whim, and treated dissent as mutinous. So pirates were familiar with the perils of autocracy.

As a result, Leeson argues, pirate ships developed models that in many ways anticipated those of later Western democracies. First, pirates adopted a system of divided and limited power. Captains had total authority during battle, when debate and disagreement were likely to be both inefficient and dangerous. Outside of battle, the quartermaster, not the captain, was in charge — responsible for food rations, discipline, and the allocation of plunder. On most ships, the distribution of booty was set down in writing, and it was relatively equal; pirate captains often received only twice as many shares as crewmen. (Woodward writes that privateer captains typically received fourteen times as much loot as crewmen.) The most powerful check on captains and quartermasters was that they did not hold their positions by natural right or blood or success in combat; the crew elected them and could depose them. And when questions arose about the rules that governed behavior on board, interpretation was left not to the captain but to a jury of crewmen.

Leeson’s analysis of pirate governance focusses mainly on the way in which this system deterred self-dealing. But the pirate system was also based on an important insight: leaders who are great in a battle or some other crisis are not necessarily great managers, and concentrating power in one pair of hands often leads to bad decision-making.

Strategy Letter VI

Thursday, September 20th, 2007

I recommend Joel Spolsky’s Strategy Letter VI, which seeks to explain the future by looking at the past:

In the late 80s, Lotus was trying very hard to figure out what to do next with their flagship spreadsheet and graphics product, Lotus 1-2-3. There were two obvious ideas: first, they could add more features. Word processing, say. This product was called Symphony. Another idea which seemed obvious was to make a 3-D spreadsheet. That became 1-2-3 version 3.0.

Both ideas ran head-first into a serious problem: the old DOS 640K memory limitation. IBM was starting to ship a few computers with 80286 chips, which could address more memory, but Lotus didn’t think there was a big enough market for software that needed a $10,000 computer to run. So they squeezed and squeezed. They spent 18 months cramming 1-2-3 for DOS into 640K, and eventually, after a lot of wasted time, had to give up the 3D feature to get it to fit. In the case of Symphony, they just chopped features left and right.

Neither strategy was right. By the time 1-2-3 3.0 was shipping, everybody had 80386s with 2M or 4M of RAM. And Symphony had an inadequate spreadsheet, an inadequate word processor, and some other inadequate bits.

“That’s nice, old man,” you say. “Who gives a fart about some old character mode software?”

Humor me for a minute, because history is repeating itself, in three different ways, and the smart strategy is to bet on the same results.

Critical Chain 3

Thursday, September 20th, 2007

I’ve been discussing Eli Goldratt’s third business novel, Critical Chain, and established project management techniques, like PERT.

One subtle issue with charting the critical path of a project is that the individual task estimates, which start out as distributions — defined by optimistic, most likely, and pessimistic estimates — get boiled down to one-dimensional expected numbers.



This lets us define a critical path, but — for simplicity and clarity’s sake — it ignores the potential for other paths to become critical.

Let’s say we add up all the variances along our critical path — Tasks A, C, and F, in our example — and they’re fairly small, so that our critical path has a duration of 7 ± 1 days. From that we might assume that our project has a 98-percent chance of finishing in 9 days — but what we’ve calculated is the chance that the original critical path will finish in 9 days.

What if the non-critical path along Tasks B and E takes 6 ± 2 days? We only have a 93-percent chance of finishing that path in 9 days.
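
Assuming normal, independent path durations, the arithmetic is easy to check, and it shows that the project’s real odds are worse than either path’s alone:

    import math

    def p_done_by(deadline, mean, sd):
        """P(a normally distributed path duration finishes by the deadline)."""
        return 0.5 * math.erfc((mean - deadline) / (sd * math.sqrt(2)))

    p_crit  = p_done_by(9, mean=7, sd=1)  # ~97.7%
    p_other = p_done_by(9, mean=6, sd=2)  # ~93.3%
    print(f"critical path: {p_crit:.1%}, other path: {p_other:.1%}, "
          f"both (if independent): {p_crit * p_other:.1%}")  # ~91%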

Perhaps that seems academic in our toy problem, but real projects with many dependencies can demonstrate an alarming cascade effect, where every task seems to be waiting for something else to get done.

Returning to our toy problem, what happens if we replace Task A with three identical tasks, Tasks A1, A2, and A3? Our PERT analysis does not change at all, but clearly the fact that they average three days each does not mean that the dependent Task C gets to start after three days. It has to wait for the slow-poke.
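
A quick Monte Carlo shows the size of the slow-poke effect; here I assume each of the three tasks is roughly Normal(3, 0.5) days:

    import random

    random.seed(1)
    trials = [[random.gauss(3, 0.5) for _ in range(3)] for _ in range(100_000)]
    avg_one = sum(t[0] for t in trials) / len(trials)
    avg_max = sum(max(t) for t in trials) / len(trials)
    print(f"average single task: {avg_one:.2f} days; "
          f"average slowest of three: {avg_max:.2f} days")  # ~3.0 vs. ~3.4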



Another academic complaint about PERT analysis is that it uses beta distributions, which may or may not reflect the actual distributions, and that it assumes that those beta distributions are numerous enough to sum to a fairly normal distribution, via the central limit theorem.

Less academic is the concern that the task durations are not independent. If it takes longer than planned to design a component, that might very well imply that it will take longer to develop a prototype, to produce it, to test it, etc. If that’s the case, then pooling all the safeties into one buffer at the end won’t reduce the total safety needed — but it should still reduce the threats from Student syndrome and Parkinson’s Law.

A bigger issue still is that people are notoriously bad at estimating task durations, and they are notoriously overconfident in their ability to estimate. The optimistic and pessimistic estimates are supposed to book-end a range that covers almost all possibilities — 99 percent — but far more than one percent of tasks fall outside those estimated ranges.

Distribution Fitting

Wednesday, September 19th, 2007

It’s amazing how easy it is to study mathematical statistics in great detail without learning a thing about which distributions fit which natural (or unnatural) phenomena. I found this piece on Distribution Fitting, from StatSoft’s online textbook, remarkably helpful:

Variables whose values are determined by an infinite number of independent random events will be distributed following the normal distribution, whereas variables whose values are the result of an extremely rare event would follow the Poisson distribution. The major distributions that have been proposed for modeling survival or failure times are the exponential (and linear exponential) distribution, the Weibull distribution of extreme events, and the Gompertz distribution. The section on types of distributions covers a number of distributions, generally giving a brief example of what type of data would most commonly follow a specific distribution, as well as the probability density function (pdf) for each distribution.

I must admit, I hadn’t even heard of some of these distributions. The Weibull distribution sounds fascinating:

The Weibull distribution is often used in the field of life data analysis due to its flexibility — it can mimic the behavior of other statistical distributions such as the normal and the exponential. If the failure rate decreases over time, then k < 1. If the failure rate is constant over time, then k = 1. If the failure rate increases over time, then k > 1.

An understanding of the failure rate may provide insight as to what is causing the failures:

  • A decreasing failure rate would suggest “infant mortality”. That is, defective items fail early and the failure rate decreases over time as they fall out of the population.
  • A constant failure rate suggests that items are failing from random events.
  • An increasing failure rate suggests “wear out” — parts are more likely to fail as time goes on.

When k ≈ 3.4, the Weibull distribution appears similar to the normal distribution. When k = 1, it reduces to the exponential distribution.
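
The shape parameter’s effect on the failure rate is easy to see numerically. Here is a minimal sketch (mine, not StatSoft’s) of the Weibull hazard function, h(t) = (k/λ)(t/λ)^(k−1), with λ = 1:

    def hazard(t, k, lam=1.0):
        """Weibull failure (hazard) rate: h(t) = (k/lam) * (t/lam)**(k - 1)."""
        return (k / lam) * (t / lam) ** (k - 1)

    cases = [(0.5, "infant mortality: decreasing"),
             (1.0, "random failures: constant (exponential)"),
             (3.4, "wear-out: increasing, roughly normal-shaped")]

    for k, label in cases:
        rates = ", ".join(f"{hazard(t, k):.2f}" for t in (0.5, 1.0, 2.0))
        print(f"k = {k}: h(t) at t = 0.5, 1, 2 -> {rates}   ({label})")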

Hmm… I suspect I really geeked out there — even more than normal.

Be an instant middleman

Wednesday, September 19th, 2007

Be an instant middleman:

David Carter has a knack for discovering more offbeat approaches to making money. When I wrote about him for this column last year (“The Startup Facade,” October 2006), he had just stumbled into the decidedly pre-Internet business of surveying buildings for asbestos. How? He had set up a simple website and within weeks amassed so many clients wanting surveys that he took the next step: he enrolled in a two-day course in surveying and started taking orders. This past year, Carter, 48, earned $250,000 as an eight-hour-per-week surveyor.

Now he’s at it again — using the Web to create another false front that pads his bank account. His strategy now, though, revolves around saving money. The gist: squeeze out the middlemen in home remodeling by using the Web to masquerade as one.

It all began when Carter started renovating his 1930s four-bedroom house on the outskirts of Birmingham, England. He wanted to install floor-to-ceiling glass doors in the back of the house, and after some research he settled on handmade doors built by a German company (which I agreed not to name). When he called the firm to get in touch with a local supplier, Carter was told there were none in the area and to choose a window installer who could place the order on his behalf.

That was the lightbulb moment. “It was time to bring out the smoke and mirrors,” Carter says with a chuckle. He took a domain name he’d already bought (new-windows.co.uk), tossed up a quick-and-dirty website, and — poof — instantly became a “unique window brokerage service.”

He listed a phone number manned by a vendor that, for $1.50 a pop, answers calls for “New Windows UK” and sends him a text message to call back. Carter fired off a note to the German manufacturer, requesting the wholesale price for the doors he wanted. After two attempts, a quote came back with a 45 percent discount. His total savings: $7,250. “The house is sucking up all my money at the moment,” he says. “Wherever I can save, that’s what I’ll do.” The doors are en route.

Isn’t it all a bit sleazy? Carter doesn’t think so. “Who am I hurting?” he asks. He has a point: why should a window broker earn $7,250 for doing nothing but placing an order? The manufacturer makes the same money regardless. Moreover, Carter’s website says he will charge a flat fee of $500 to measure and choose windows. So far, no one has taken him up on it — but “if someone asks, will you help me with my windows for 250 quid, I’d be a fool to say no,” he says. “I’ve got a tape measure.”

Of course, applying Carter’s methods to other services may not always work. Manufacturers often insist on some form of verification. So what’s next? “I need a new roof,” he says. “Whether I can pull that off, I don’t know.” Just in case, he’s already got the domain: uk-roofing.com.

Critical Chain 2

Wednesday, September 19th, 2007

As I mentioned earlier, Eli Goldratt’s third business novel, Critical Chain, deals with project management, which has existed in its modern form since the 1950s.



One obvious issue with charting the critical path of a project is estimating all the task durations. Sure, for some projects all the tasks are well understood, but for many the tasks are new and untried.

In fact, complex design work that “should” take one month might take two, or three, or four months. Less likely, it might resemble an old, already-solved problem, and it might only take two or three weeks to finish.

So when the project manager asks a team member how long a task will take, what should the worker say? A young hotshot might give the mode of the distribution — “Yeah, I should be able to get it done in a month.” After a project or two, our chastened young worker starts giving numbers closer to the median of the distribution — estimates with a 50-50 chance of being long enough.

What the project manager probably wants is something closer to the mean of the distribution, or the expected duration of the task, which is greater than either the mode or the median in our skewed distribution.
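
For a right-skewed duration, the three averages line up as mode < median < mean. Here is a quick check using a log-normal stand-in for the task distribution (my choice of distribution, purely for illustration):

    import math
    import random

    random.seed(0)
    # Log-normal task durations: exp(Normal(0, 0.7)), so the median is exp(0) = 1.
    samples = sorted(math.exp(random.gauss(0, 0.7)) for _ in range(100_001))
    mode = math.exp(-0.7 ** 2)            # closed form for the log-normal mode
    median = samples[len(samples) // 2]
    mean = sum(samples) / len(samples)    # ~exp(0.7**2 / 2), about 1.28
    print(f"mode {mode:.2f} < median {median:.2f} < mean {mean:.2f}")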

On the other hand, the grizzled worker probably gives an estimate with plenty of safety — anyone who has missed a deadline knows it’s better to under-promise and over-deliver.

In fact — turning toward Critical Chain — that is one of Goldratt’s key points: each individual task estimate has plenty of safety built in. What’s more, each layer of management adds its own safety, too — no boss wants his team to come in late. Then upper-level management doesn’t like the cumulative estimate, so it has all the task estimates cut — but the workers know to boost their own task estimates even more to account for that.

So why doesn’t the project come in on time? Because no one finishes early. Either they procrastinate, because they have “more than enough time” to finish the task — Student syndrome — or they use the extra time to add bells and whistles — Parkinson’s Law.

Delays accumulate, while advances do not.

So what does Goldratt recommend? He recommends cutting out most of the safety from individual tasks, then pooling the collected safety into a project buffer — which does not need to be as big as the collected safeties, because delays and advances will cancel out.
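
Why can the pooled buffer be smaller than the sum of the individual safeties? Because safeties add linearly, while standard deviations add in quadrature (assuming independent tasks). A sketch with made-up numbers:

    import math

    sigmas = [1.0, 1.5, 0.5, 2.0]  # made-up per-task standard deviations, in days
    per_task_padding = sum(2 * s for s in sigmas)              # 10.0 days
    pooled_buffer = 2 * math.sqrt(sum(s * s for s in sigmas))  # ~5.5 days
    print(f"padding every task: {per_task_padding:.1f} days; "
          f"one pooled buffer: {pooled_buffer:.1f} days")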

Goldratt also recommends a feeding buffer anywhere a path merges with the critical path — but our sample project is a bit of a degenerate case, with no paths merging into the critical path until the finish pseudo-task.



I don’t know if Goldratt thought that this notion of using accurate estimates and pooling safety buffers was a new idea, but it’s found in old-school PERT — the real version, if not the version most people use — where each task is assigned not an estimate but three estimates: optimistic (or best case), most likely, and pessimistic (or worst case).

These estimates are then fed into formulas based on the beta distribution, which, with the right parameters, looks an awful lot like the log-normal distribution pictured above, but with the appealing attribute that it has a well-defined minimum and maximum.

The formulas assume six standard deviations between optimistic and pessimistic:

Expected = (Optimistic + 4 x Most likely + Pessimistic) / 6
Variance = [(Pessimistic - Optimistic) / 6]²

It’s the expected times that determine the critical path, and the cumulative variance — or rather the square root of the cumulative variance — that determines the project buffer. (PERT assumes that there are enough tasks in the critical path that the many beta distributions sum to a fairly normal distribution.)
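
Here is that arithmetic applied to a made-up three-task chain (the optimistic, most likely, and pessimistic numbers are mine, purely for illustration):

    import math

    def pert(optimistic, likely, pessimistic):
        expected = (optimistic + 4 * likely + pessimistic) / 6
        variance = ((pessimistic - optimistic) / 6) ** 2
        return expected, variance

    chain = [(2, 3, 10), (1, 2, 4), (3, 5, 12)]  # (O, M, P) days for Tasks A, C, F
    expected_total = sum(pert(*task)[0] for task in chain)
    sd_total = math.sqrt(sum(pert(*task)[1] for task in chain))

    # Two standard deviations of pooled variation makes a roughly 98% project buffer.
    print(f"expected path: {expected_total:.1f} days; "
          f"project buffer: {2 * sd_total:.1f} days")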

Whether it’s new or not, the takeaway message is this: Don’t ask for a single-number estimate but for a distribution, and don’t pad each estimate, but pool the safety buffers into larger feeder buffers and project buffers, so that advances can cancel out delays.

Soylent Green

Wednesday, September 19th, 2007

You don’t have to have seen the movie to know that Soylent Green is not made from plankton — nor is it made from a mix of soy and lentils, the original source of the name.

I’ve seen bits and pieces of Soylent Green over the years, but I finally sat down to watch the whole thing, and I was soon shocked by a scene where the rich man’s young mistress — who comes with the “furnished” apartment — is playing a video game. Is she playing Asteroids? In 1973? No, she’s playing Computer Space, which I hadn’t even heard of before, a bridge between Spacewar! and Asteroids:

Computer Space is a video arcade game released in November 1971 by Nutting Associates. Created by Nolan Bushnell and Ted Dabney, who would both later found Atari, it is generally accepted as the world’s first commercially sold coin-operated video game — and indeed, the first commercially sold video game of any kind, predating the Magnavox Odyssey by six months and Atari’s Pong by one year.

I can’t believe they were able to make a game with this hardware:

Computer Space utilizes no microprocessor, RAM, or ROM. The entire computer system is a state machine made of discrete 74-series TTL logic elements. Graphic elements are held in diode arrays. The physical configuration is made up of three PCBs interconnected through a common bus. The display is rendered on a General Electric 15″ black-and-white portable television set specially modified for Computer Space.

The video game isn’t the only interesting bit of trivia:

Charlton Heston’s tears at Sol’s death were real, as Heston was the only cast member who knew that Edward G. Robinson was dying of terminal cancer. This was the 90th and last movie in which Robinson appeared. He died nine days after the shooting was done, on January 26, 1973.