On the Hunt for Bottlenecks

Friday, March 12th, 2010

Lean manufacturing, Bill Waddell says, is the application of the old scientific management concept to the entire factory — not just the direct labor slice:

While any number of authors and ‘experts’ with little actual factory experience point out that the original Ford plant had a Time Study Department and Shigeo Shingo did not consider himself fully dressed in the morning if he did not have his stop watch, there was a huge difference. They were not timing isolated operations looking for direct labor cost savings. They were on the hunt for bottlenecks, looking for anything restricting flow. The only time that matters in a one piece flow plant is the longest time in the flow. Reducing any other time saves nothing. (I imagine Eli Goldratt used a stopwatch when he made his much publicized breakthrough in the chicken coop business. As Goldratt quite accurately points out: An improvement at the bottleneck is an improvement in the system; while an improvement anywhere else is a mirage.) Just because these fellows were carrying stopwatches does not mean they saw factories remotely like Taylor did.

Lean practitioners go from one end of the process to the other looking at every action and every cost, looking to optimize the total. The traditional approach puts direct labor and machine operations on a pedestal. Every other activity is first and foremost supposed to optimize direct labor performance to the old Taylor standard. Only after that goal is met should management then pursue the second goal of minimizing the support cost. One can almost envision the operatic soloist alone in the spotlight while the other performers and the orchestra are hidden in the shadows all doing whatever they have to in order to make the soloist look good. In the remaining traditionally managed American factories, it is the machine operator, surrounded by inventory and a gang of material handlers, inspectors and foremen all assigned the task of making sure that, come what may, that operator makes or exceeds the rate for the job. Lean looks at that and says, “Nonsense”.

Necessary But Not Sufficient

Wednesday, March 4th, 2009

I finally made the time to read Goldratt’s last business novel, Necessary But Not Sufficient, which argues that new information technology is necessary but not sufficient for reaching the goal of making more money, because extra information doesn’t do you any good if you don’t do anything differently because of it.

The thin story of the novel involves an ERP software company with a major client who’s blindsided by a “weasel” on the board — a weasel who has the temerity to demand some justification for all the money spent on this enormous IT project. How does it improve the bottom line?

That’s when the ERP software company realizes that it has no case. “Better visibility into operations” doesn’t translate nicely into a dollar figure. Quicker turnaround on quarterly financial reports is also nice, but it — ironically? — doesn’t translate nicely into a dollar figure either, especially when no one from the finance department gets laid off. In fact, most time-saving improvements don’t translate into dollar savings, because, again, no one’s actually getting laid off; labor costs aren’t going down.

The real, quantifiable payoffs come from improvements in the supply chain. When the plants know how much has been sold from each distribution center three weeks sooner than they used to, that means they can carry less inventory and suffer fewer stock-outs, reducing their carrying costs and increasing their sales.

The system also reduces invoicing errors, which means they can get their money from customers sooner, and it allows them to combine purchases made by multiple plants, which means they can get better deals on raw materials.

(Incidentally, these benefits mean much less for medium-size firms with just a few plants and distribution centers, which aren’t so spread out.)

In the end, our heroes revolutionize the entire ERP software industry with their simpler, more effective solution, based on drum-buffer-rope and buffer management — which are not explained at all in the book, but which, in this fictional scenario, boost capacity by 40 percent.

But this boon has its own downside: suddenly the regional warehouses are overstuffed with finished goods. What happened?

The target inventory levels didn’t change — they stayed at four months’ inventory — but with the plant’s increased capacity, it can now fulfill requests in a timely manner, so actual inventory levels have climbed from an average of two months’ inventory to three.

With replenishment times cut in half, they cut target inventory levels in half. But this leads to shortages: demand is still volatile, and one plant can easily burn through two months’ inventory of one product, even while it holds plenty of other products and other plants hold plenty of that one product.

In the end, the solution is to pool inventory at a warehouse near the plant and to replenish the regional warehouses overnight from there, using pull inventory — sending them as much of a product as they’ve just sold.

Then the client realizes he can take this one step further and integrate his whole supply chain, not just the vertically integrated portion he owns — and that means our heroes can sell their ERP solution to medium-size and small firms who need to integrate with big firms. Everyone loves a happy ending.

GM’s Broken Axle

Sunday, March 30th, 2008

GM’s broken axle is American Axle & Manufacturing, which GM spun off in 1994. Now labor’s on strike there, and GM is finding that its monopsony on axles is no match for American Axle’s monopoly:

GM relies heavily on parts from American Axle, buying about $2.6 billion of them, including axles and key components in most of its trucks and some passenger cars. Currently, 28 GM plants are either idled completely or have cut production thanks to the strike, which started on Feb. 26. At least one more — a factory building the Cadillac DTS and Buick Lucerne sedans, just outside Detroit’s city limits — will go down next week.

As of last week, GM lost 80,000 vehicles’ worth of production as a result of the strike, says one company insider. Because carmakers book revenue as soon as a vehicle leaves the assembly line and heads to a dealership, the strike is hitting GM’s top line. At least one analyst has dropped his first-quarter profit expectations as a result. Deutsche Bank (DB) analyst Rod Lache issued a report earlier this week boosting his forecast for GM’s quarterly loss from about $600 million to almost $1.4 billion.

“Richard Dauch [American Axle's chairman and CEO] isn’t just locking out the UAW,” says Sean McAlinden, chief economist at the Center for Automotive Research in Ann Arbor, Mich. “He’s locking out GM.”

Lache says the strike is costing GM about $890 million a month. The only mitigating factor is that the truck production being lost probably would have been cut anyway, because sales are falling. So Lache didn’t cut his earnings expectations for the year. But a two-month strike will start to have more permanent effects, he says.

Note two key points:

  • Because carmakers book revenue as soon as a vehicle leaves the assembly line and heads to a dealership, the strike is hitting GM’s top line.
  • The only mitigating factor is that the truck production being lost probably would have been cut anyway, because sales are falling. So Lache didn’t cut his earnings expectations for the year.

Goldratt (of Theory of Constraints fame) would have a field day with this. The makers mistake shipping a car for selling a car. The dealer does not assume the risk of selling or not selling the car — the maker does — so all those cars on the lot are just thinly disguised finished-goods inventory. They’re not true throughput.

On a lighter note, the strike probably isn’t hurting throughput, because axles are not a meaningful constraint on sales right now — demand is — and GM needed to cut production anyway.

Planning Fallacy

Thursday, November 15th, 2007

When I recently discussed project management — in Critical Chain 1, 2, 3, 4, 5, and 6 — I touched on the planning fallacy:

A bigger issue still is that people are notoriously bad at estimating task durations, and they are notoriously overconfident in their ability to estimate. The optimistic and pessimistic estimates are supposed to book-end a range that covers almost all possibilities — 99 percent — but far more than one percent of tasks fall outside those estimated ranges.

Eliezer Yudkowsky discusses the Planning Fallacy in much greater detail:

Buehler et al. (1995) asked their students for estimates of when they (the students) thought they would complete their personal academic projects. Specifically, the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their personal projects would be done. Would you care to guess how many students finished on or before their estimated 50%, 75%, and 99% probability levels?
  • 13% of subjects finished their project by the time they had assigned a 50% probability level;
  • 19% finished by the time assigned a 75% probability level;
  • and only 45% (less than half!) finished by the time of their 99% probability level.

As Buehler et al. (2002) wrote, “The results for the 99% probability level are especially striking: Even when asked to make a highly conservative forecast, a prediction that they felt virtually certain that they would fulfill, students’ confidence in their time estimates far exceeded their accomplishments.”

It gets worse:

A clue to the underlying problem with the planning algorithm was uncovered by Newby-Clark et al. (2000), who found that:
  • Asking subjects for their predictions based on realistic “best guess” scenarios; or
  • Asking subjects for their hoped-for “best case” scenarios…

…produced indistinguishable results.

So what’s the solution?

Unlike most cognitive biases, we know a good debiasing heuristic for the planning fallacy. It won’t work for messes on the scale of the Denver International Airport, but it’ll work for a lot of personal planning, and even some small-scale organizational stuff. Just use an “outside view” instead of an “inside view”.

People tend to generate their predictions by thinking about the particular, unique features of the task at hand, and constructing a scenario for how they intend to complete the task — which is just what we usually think of as planning. When you want to get something done, you have to plan out where, when, how; figure out how much time and how much resource is required; visualize the steps from beginning to successful conclusion. All this is the “inside view”, and it doesn’t take into account unexpected delays and unforeseen catastrophes. As we saw before, asking people to visualize the “worst case” still isn’t enough to counteract their optimism — they don’t visualize enough Murphyness.

The outside view is when you deliberately avoid thinking about the special, unique features of this project, and just ask how long it took to finish broadly similar projects in the past. This is counterintuitive, since the inside view has so much more detail — there’s a temptation to think that a carefully tailored prediction, taking into account all available data, will give better results.

But experiment has shown that the more detailed subjects’ visualization, the more optimistic (and less accurate) they become. Buehler et al. (2002) asked an experimental group of subjects to describe highly specific plans for their Christmas shopping — where, when, and how. On average, this group expected to finish shopping more than a week before Christmas. Another group was simply asked when they expected to finish their Christmas shopping, with an average response of 4 days. Both groups finished an average of 3 days before Christmas.

Likewise, Buehler et al. (2002), reporting on a cross-cultural study, found that Japanese students expected to finish their essays 10 days before deadline. They actually finished 1 day before deadline. Asked when they had previously completed similar tasks, they responded, “1 day before deadline.” This is the power of the outside view over the inside view.

A similar finding is that experienced outsiders, who know less of the details, but who have relevant memory to draw upon, are often much less optimistic and much more accurate than the actual planners and implementers.

So there is a fairly reliable way to fix the planning fallacy, if you’re doing something broadly similar to a reference class of previous projects. Just ask how long similar projects have taken in the past, without considering any of the special properties of this project. Better yet, ask an experienced outsider how long similar projects have taken.

You’ll get back an answer that sounds hideously long, and clearly reflects no understanding of the special reasons why this particular task will take less time. This answer is true. Deal with it.

Critical Chain 6

Monday, October 1st, 2007

The folks at the Theory of Constraints Center have a little project game to demonstrate how task and resource dependencies combine with a bit of unpredictability to create a cascade effect.

Imagine you have a simple project with five tasks, each of which is expected to take seven days. Since Tasks A and B can be done in parallel, and Tasks C and D can be done in parallel, we should expect the whole project to be done in 21 days.

But those five tasks don’t take exactly seven days each — they take roughly seven days each — and for our little game we roll a pair of dice for each task to represent that uncertainty.

Let’s say our random task durations come out as follows:

Task A — 5 days
Task B — 9 days
Task C — 3 days
Task D — 8 days
Task E — 6 days

Then our whole project is done in 23 days, not 21.

Hey, that’s not so bad, right? You roll high sometimes; you roll low sometimes. We expected 21, but it came out 23.

But let’s take another look. Our average roll was just 6.2 — less than 7. We rolled better than average, yet our total was worse than average. In fact, if we look at each color — I’m assuming each color represents a different resource dependency — then each color scores better than average too.

Delays accumulate, while advances do not.
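The cascade is easy to reproduce in a few lines. Here is a minimal sketch of the game, assuming the structure implied above (A and B in parallel, then C and D in parallel, then E), with each task’s duration a roll of two dice:

```python
import random

random.seed(0)
two_dice = lambda: random.randint(1, 6) + random.randint(1, 6)

def project_duration():
    # A and B run in parallel, then C and D, then E; each task is a 2d6 roll.
    a, b, c, d, e = (two_dice() for _ in range(5))
    return max(a, b) + max(c, d) + e

runs = [project_duration() for _ in range(100_000)]
print(sum(runs) / len(runs))  # comes out near 23.7, not the naive 21
```

Each max() keeps whichever parallel task came in late and discards the early finish, so the average completion time lands near 23.7 days even though the average roll is exactly 7.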

Critical Chain 5

Saturday, September 29th, 2007

It has been a while since I last discussed Eli Goldratt’s Critical Chain and project management, but there is one more element to Critical Chain Project Management that I haven’t touched on.

Critical-path analysis assumes that a task is either dependent on another, or it’s not.

But what if two tasks require the same limited resource? What if they require the same expert’s time, or the same expensive piece of machinery? The two tasks aren’t dependent on one another, but they can’t be performed in parallel.

Suddenly our wonderful critical-path analysis goes out the window, and we have to tinker with the schedule until our resource isn’t in two places at once.

With a toy problem, this is easy enough to do by inspection. With a larger program, let a computer do the work.

Of course, if you don’t have this all plotted out ahead of time, you find out the hard way that Task B is wildly behind schedule, and that your critical path has shifted from Tasks C and F to Tasks B and E.

It’s no wonder project leaders build a lot of safety into their schedules…

Critical Chain 4

Thursday, September 20th, 2007

I’ve been discussing Eli Goldratt’s third business novel, Critical Chain, which explores project management.

Goldratt’s primary point is that padding individual task estimates with safety just means that a lot of time gets wasted. All that safety should get pooled into a few strategically placed buffers.

But Goldratt makes a secondary point about multi-tasking — namely that it does terrible things to lead times.

Imagine that you have three tasks to get done — X, Y, and Z — and each takes 10 days to finish. Your lead time should be 10 days for each task.

If, on the other hand, you start each task, get it half done, then flit to another of the tasks, get it half done, then work on the remaining untouched task, and get it half done, before returning to the first and finishing it, then your lead time will double to 20 days for each task.
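A quick sketch makes the doubling concrete. The half-and-half switching pattern is the one described above; the little scheduling helper is hypothetical:

```python
def lead_times(schedule, durations):
    """schedule: (task, days) work chunks in execution order.
    Lead time = finish date minus the date the task was first touched."""
    t, start, finish = 0, {}, {}
    done = {task: 0 for task in durations}
    for task, days in schedule:
        start.setdefault(task, t)
        t += days
        done[task] += days
        if done[task] == durations[task]:
            finish[task] = t
    return {task: finish[task] - start[task] for task in durations}

durations = {"X": 10, "Y": 10, "Z": 10}
one_at_a_time = [("X", 10), ("Y", 10), ("Z", 10)]
half_and_half = [(name, 5) for name in "XYZ"] * 2  # half done, then finished
print(lead_times(one_at_a_time, durations))  # {'X': 10, 'Y': 10, 'Z': 10}
print(lead_times(half_and_half, durations))  # {'X': 20, 'Y': 20, 'Z': 20}
```

Total work is 30 days either way; only the lead times change.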

Now, Goldratt only hints at this in Critical Chain, but how bad is extra lead time? Does it matter? Well, yes and no. For any task not on the critical path, no, it shouldn’t matter, not as long as there’s enough slack. For any task on the critical path, yes, of course, it matters a great deal.

To play devil’s advocate for a moment though, when would it make sense to switch away from Task X to Task Y? Goldratt asserts that workers multi-task simply to keep busy — and I’m sure that’s often the case — but what happens when you can work on Task X, but Task Y is critical? You work on Task X until Task Y is ready for you, then you switch to Task Y, leaving Task X half done. What happens when Task Z, which is even more critical, is ready for you? You switch to it.

The problem comes when you switch back to Task X without finishing Z, then Y, because those were the more critical tasks; otherwise you never would have — or never should have — switched to them in the first place.

In such a case, starting a low-priority task early, to keep busy, might inflate lead time numbers, but it doesn’t hurt the project’s progress at all.

The key is always knowing which tasks are critical or threatening to become critical — to have buffer-driven task priorities. By looking at each path’s relative buffer burn rate — the percentage of the buffer penetrated versus the percentage of work completed on that path — we can immediately see which paths, and thus which tasks, deserve priority.
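As a rough sketch (the path names and percentages below are invented), the priority rule amounts to sorting paths by that ratio:

```python
def buffer_priority(paths):
    """Sort paths by relative buffer burn rate, most endangered first.
    Each entry: (name, % of buffer consumed, % of work completed)."""
    return sorted(paths, key=lambda p: p[1] / p[2], reverse=True)

# Hypothetical status report:
status = [("C-F", 30, 60), ("B-E", 50, 40), ("A-D", 10, 50)]
for name, burn, done in buffer_priority(status):
    print(f"{name}: burn ratio {burn / done:.2f}")
```

A ratio above 1 means the path is eating its buffer faster than it’s completing work, so its tasks get priority.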

Critical Chain 3

Thursday, September 20th, 2007

I’ve been discussing Eli Goldratt’s third business novel, Critical Chain, and established project management techniques, like PERT.

One subtle issue with charting the critical path of a project is that the individual task estimates, which start out as distributions — defined by optimistic, most likely, and pessimistic estimates — get boiled down to one-dimensional expected numbers.

This lets us define a critical path, but — for simplicity and clarity’s sake — it ignores the potential for other paths to become critical.

Let’s say we add up all the variances along our critical path — Tasks A, C, and F, in our example — and they’re fairly small, so that our critical path has a duration of 7 ± 1 days. From that we might assume that our project has a 98-percent chance of finishing in 9 days — but what we’ve calculated is the chance that the original critical path will finish in 9 days.

What if the non-critical path along Tasks B and E takes 6 ± 2 days? We only have a 93-percent chance of finishing that path in 9 days.
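Those probabilities fall straight out of the normal model; Python’s statistics module can check them:

```python
from statistics import NormalDist

critical = NormalDist(mu=7, sigma=1)      # Tasks A, C, F: 7 ± 1 days
non_critical = NormalDist(mu=6, sigma=2)  # Tasks B and E: 6 ± 2 days

print(f"{critical.cdf(9):.3f}")      # ≈ 0.977, the "98 percent"
print(f"{non_critical.cdf(9):.3f}")  # ≈ 0.933
# If the two paths were (unrealistically) independent, the chance
# that BOTH finish by day 9:
print(f"{critical.cdf(9) * non_critical.cdf(9):.3f}")  # ≈ 0.912
```

The project only finishes in 9 days if every path does, so the true odds are lower than the critical path’s 98 percent alone.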

Perhaps that seems academic in our toy problem, but real projects with many dependencies can demonstrate an alarming cascade effect, where every task seems to be waiting for something else to get done.

Returning to our toy problem, what happens if we replace Task A with three identical tasks, Tasks A1, A2, and A3? Our PERT analysis does not change at all, but clearly the fact that they average three days each does not mean that the dependent Task C gets to start after three days. It has to wait for the slow-poke.

Another academic complaint about PERT analysis is that it uses beta distributions, which may or may not reflect the actual distributions, and that it assumes that those beta distributions are numerous enough to sum to a fairly normal distribution, via the central limit theorem.

Less academic is the concern that the task durations are not independent. If it takes longer than planned to design a component, that might very well imply that it will take longer to develop a prototype, to produce it, to test it, etc. If that’s the case, then pooling all the safeties into one buffer at the end won’t reduce the total safety needed — but it should still reduce the threats from Student syndrome and Parkinson’s Law.

A bigger issue still is that people are notoriously bad at estimating task durations, and they are notoriously overconfident in their ability to estimate. The optimistic and pessimistic estimates are supposed to book-end a range that covers almost all possibilities — 99 percent — but far more than one percent of tasks fall outside those estimated ranges.

Critical Chain 2

Wednesday, September 19th, 2007

As I mentioned earlier, Eli Goldratt’s third business novel, Critical Chain, deals with project management, which has existed in its modern form since the 1950s.

One obvious issue with charting the critical path of a project is estimating all the task durations. Sure, for some projects all the tasks are well understood, but for many the tasks are new and untried.

In fact, complex design work that “should” take one month might take two, or three, or four months. Less likely, it might resemble an old, already-solved problem, and it might only take two or three weeks to finish.

So when the project manager asks a team member how long a task will take, what should the worker say? A young hotshot might give the mode of the distribution — “Yeah, I should be able to get it done in a month.” After a project or two, our chastened young worker starts giving numbers closer to the median of the distribution — estimates with a 50-50 chance of being long enough.

What the project manager probably wants is something closer to the mean of the distribution, or the expected duration of the task, which is greater than either the mode or the median in our skewed distribution.
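For a concrete example, take a log-normal duration distribution; the parameters here are made up, chosen only to show the ordering:

```python
import math

# Illustrative log-normal task-duration distribution (parameters invented).
mu, sigma = math.log(40), 0.5
mode   = math.exp(mu - sigma**2)       # the hotshot's "should take" number
median = math.exp(mu)                  # the chastened 50-50 estimate
mean   = math.exp(mu + sigma**2 / 2)   # what the schedule actually consumes
print(f"mode {mode:.1f} < median {median:.1f} < mean {mean:.1f} days")
```

In any right-skewed distribution the mode comes first, then the median, then the mean, so the hotshot’s number is the least useful of the three for scheduling.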

On the other hand, the grizzled worker probably gives an estimate with plenty of safety — anyone who has missed a deadline knows it’s better to under-promise and over-deliver.

In fact — turning toward Critical Chain — that is one of Goldratt’s key points: each individual task estimate has plenty of safety built in. In fact, each layer of management adds its own safety, too — no boss wants his team to come in late. Then upper-level management doesn’t like the cumulative estimate, so it has all the task estimates cut — but the workers know to boost their own task estimates even more to account for that.

So why doesn’t the project come in on time? Because no one finishes early. Either they procrastinate, because they have “more than enough time” to finish the task — Student syndrome — or they use the extra time to add bells and whistles — Parkinson’s Law.

Delays accumulate, while advances do not.

So what does Goldratt recommend? He recommends cutting out most of the safety from individual tasks, then pooling the collected safety into a project buffer — which does not need to be as big as the collected safeties, because delays and advances will cancel out.

Goldratt also recommends a feeding buffer anywhere a path merges with the critical path — but our sample project is a bit of a degenerate case, with no paths merging into the critical path until the finish pseudo-task.

I don’t know if Goldratt thought that this notion of using accurate estimates and pooling safety buffers was a new idea, but it’s found in old-school PERT — the real version, if not the version most people use — where each task is assigned not an estimate but three estimates: optimistic (or best case), most likely, and pessimistic (or worst case).

These estimates are then fed into formulas based on the beta distribution, which, with the right parameters, looks an awful lot like the log-normal distribution pictured above, but with the appealing attribute that it has a well-defined minimum and maximum.

The formulas assume six standard deviations between optimistic and pessimistic:

Expected = (Optimistic + 4 x Most likely + Pessimistic) / 6
Variance = [(Pessimistic - Optimistic) / 6]²

It’s the expected times that determine the critical path, and the cumulative variance — or rather its square root — that determines the project buffer. (PERT assumes that there are enough tasks in the critical path that the many beta distributions sum to a fairly normal distribution.)
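In code, the whole procedure is a few lines; the task estimates below are invented for illustration:

```python
import math

def pert(optimistic, likely, pessimistic):
    expected = (optimistic + 4 * likely + pessimistic) / 6
    variance = ((pessimistic - optimistic) / 6) ** 2
    return expected, variance

# Hypothetical critical-path tasks: (optimistic, most likely, pessimistic) days
tasks = [(2, 4, 9), (1, 3, 8), (3, 5, 12)]
expected_total = sum(pert(*t)[0] for t in tasks)
buffer = math.sqrt(sum(pert(*t)[1] for t in tasks))  # pool variances, then root
print(f"critical path ≈ {expected_total:.1f} days, buffer ≈ {buffer:.1f} days")
```

Because the variances are summed before taking the square root, the pooled buffer (about 2.2 days here) is far smaller than the roughly 15 days of safety the three pessimistic estimates carry individually.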

Whether it’s new or not, the takeaway message is this: Don’t ask for a single-number estimate but for a distribution, and don’t pad each estimate, but pool the safety buffers into larger feeder buffers and project buffers, so that advances can cancel out delays.

Critical Chain

Tuesday, September 18th, 2007

As I mentioned earlier, when I read Kevin Fox’s Blue Light anecdote, it spurred me to go back and read some old Goldratt books I hadn’t read yet, including his third business novel, Critical Chain.

Critical Chain doesn’t look at production or logistics but at project management.

Modern project management goes back to the 1950s, when Booz-Allen & Hamilton developed the Program Evaluation and Review Technique, or PERT, with Lockheed for the Polaris missile submarine program, and DuPont developed the Critical Path Method, or CPM, with Remington Rand for plant maintenance projects.

The basic idea behind these methods is to diagram out the various tasks within the larger project, along with their interdependencies and durations.

Two pseudo-tasks, start and finish, make the analysis clearer but don’t represent real work.

The first step in the formal analysis is to compute the early start and early finish dates for each task — the earliest it could start, given all its dependencies, and the earliest it could then end, given its duration — starting with the start pseudo-task and working to the right.

The second step in the formal analysis is to compute the late start and late finish dates for each task — the latest it could finish, without delaying the larger project, and the latest it could then start, given its duration — starting from the finish pseudo-task and working backward to the left.

Once you’ve done all that — or have made a computer do all that — you can see which tasks are on the critical path, with no slack. Any delays to any task on the critical path will delay the larger project. Any delays to any task not on the critical path will not delay the larger project — until all the slack for that task gets used up.
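The two passes are mechanical enough to sketch in a few lines. The toy network below is hypothetical, set up so that Tasks A, C, and F form the critical path:

```python
def cpm(tasks):
    """tasks: {name: (duration, [predecessors])}, listed in dependency order.
    Returns (project_length, set of zero-slack critical tasks)."""
    early = {}  # forward pass: name -> (early_start, early_finish)
    for name, (dur, preds) in tasks.items():
        es = max((early[p][1] for p in preds), default=0)
        early[name] = (es, es + dur)
    finish = max(ef for _, ef in early.values())

    late = {}   # backward pass: name -> (late_start, late_finish)
    for name in reversed(list(tasks)):
        dur, _ = tasks[name]
        succs = [s for s, (_, ps) in tasks.items() if name in ps]
        lf = min((late[s][0] for s in succs), default=finish)
        late[name] = (lf - dur, lf)

    critical = {n for n in tasks if late[n][0] == early[n][0]}
    return finish, critical

tasks = {"A": (3, []), "B": (2, []), "C": (4, ["A"]),
         "E": (3, ["B"]), "F": (2, ["C"])}
print(cpm(tasks))  # project length 9; critical set {'A', 'C', 'F'}
```

Tasks whose late start equals their early start have no slack; they are the critical path. Here B and E carry slack, so a small delay to either leaves the 9-day finish untouched.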

Haystack Syndrome 4

Thursday, September 13th, 2007

In The Haystack Syndrome, which I thought I was done discussing, Goldratt presents a production and marketing problem with a less-than-intuitive solution:

If you read much Goldratt, you know he’s always talking about constraints, which is a clue to how to solve the problem — it’s a thinly disguised linear programming problem.

In this day and age, solving linear programming problems is remarkably easy — if you know how to formulate the problem for Excel’s Solver add-in. This Google spreadsheet lays out the basic problem, but Google hasn’t built a Solver analog into its spreadsheet tool just yet.
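Goldratt’s specific numbers aren’t reproduced here, but a product-mix problem of the same shape can be solved without Excel. All figures below are invented. A real instance would go to a proper LP solver (Excel’s Solver, or scipy.optimize.linprog); a toy this small can simply be brute-forced over integer quantities:

```python
# Hypothetical product mix: maximize weekly throughput of products P and Q,
# subject to minutes available on two shared resources and demand caps.
best = max(
    ((p, q, 45 * p + 60 * q)        # $45/unit of P, $60/unit of Q
     for p in range(101)            # demand cap: 100 units of P
     for q in range(51)             # demand cap: 50 units of Q
     if 15 * p + 10 * q <= 2400     # resource A minutes available
     and 10 * p + 30 * q <= 2400),  # resource B minutes available
    key=lambda t: t[2])
print(best)  # (units of P, units of Q, throughput)
```

Note that it’s throughput per minute of the binding constraint, not margin per unit, that drives the answer — the classic Theory of Constraints point.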

It’s Not Luck 3

Saturday, September 8th, 2007

So, I’ve been discussing Eli Goldratt’s It’s Not Luck, in which our hero desperately seeks out-of-the-box ways for his three companies to dramatically improve their profits. What do the three companies do?

When last we looked, Pete was lamenting the fact that his printing company did not have the bigger, better, faster machines of his competitors. The obvious solution to his problem: focus on small jobs. The bigger, faster machines are only better for big jobs; they have a longer setup cost.

The not-so-obvious solution goes further. It involves recognizing that customers don’t actually want to order candy wrappers in huge quantities; they just want the low prices that come with ordering in huge quantities. In fact, Pete digs up some statistics from an industry journal showing that customers who order a six-month supply of wrappers only use the whole six months’ worth about 30 percent of the time. Most of the time, something changes — an ingredient, a promotion, a legal labeling requirement — before they’ve used up their whole stock.

So Pete’s company can offer its customers a two-month supply of wrappers, which gets completely used 90 percent of the time, for less per usable unit than the big printers can provide a six-month supply, once obsolescence gets factored in.

And that’s the deal that saves the company.
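The arithmetic behind that deal can be sketched with made-up numbers. The prices, and the assumption that an unused balance means half the stock gets scrapped, are all hypothetical; only the 30-percent and 90-percent usage rates come from the story:

```python
def cost_per_usable(price_per_unit, p_fully_used, scrapped_if_not=0.5):
    # If the supply isn't fully used, assume a fraction of it gets scrapped.
    expected_usable = p_fully_used + (1 - p_fully_used) * (1 - scrapped_if_not)
    return price_per_unit / expected_usable

big_run = cost_per_usable(1.00, 0.30)    # six-month supply: cheap, often obsolete
small_run = cost_per_usable(1.20, 0.90)  # two-month supply: pricier, little waste
print(f"{big_run:.2f} vs {small_run:.2f} per usable wrapper")
```

Even at a 20-percent-higher list price, the small run comes out cheaper per wrapper the customer actually uses.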

Bob’s cosmetics company, meanwhile, comes up with a solution based on the fact that its customers, the stores, have to discount obsolete products, are often out of the products that consumers want, and have difficulty making payments to their suppliers — companies like Bob’s.

Since these stores are given discounts for ordering in bulk, they tend to order in bulk — which explains why they didn’t take advantage of the new distribution system. So Bob shifts the discount policy to work off of the dollar amount the store orders per year, not per order. Further, Bob shifts to daily replenishment; the stores had been ordering two to six months’ stock at a time.

But that’s just the beginning. The big change is giving the merchandise to the stores on consignment — no obligation when it ships, just when it sells. The stores, which are perpetually short on cash, are delighted, but they actually end up paying sooner, because they have to pay to get their stock replenished.

Stacey’s pressure-steam company comes up with its own solution: selling pressure-steam as a service, for a monthly and per-unit-of-steam fee, not as a machine with high-margin spare parts. This means that they bring all spare parts in-house, rather than asking clients to pay huge mark-ups on a lot of safety stock. After all, it costs the pressure-steam company much, much less to maintain aggregated safety stock for all of its clients.

Ah, another win-win deal.

It’s Not Luck 2

Friday, September 7th, 2007

As I mentioned earlier, I just read Eli Goldratt’s It’s Not Luck, and I found the concrete business solutions more intriguing than Jonah’s abstract and jargon-laden Thinking Processes.

As our story opens, Alex Rogo, our hero, has just turned around three companies — divisions within a larger conglomerate, really — and he has to take them from barely profitable to very profitable. They are a printing company, headed by Pete, a cosmetics company, headed by Bob, and a pressure-steam company, headed by Stacey.

The printing company prints cereal boxes and candy wrappers, and Pete has just turned it around by implementing the changes Alex learned about in The Goal — letting non-constraint resources go idle, reducing work-in-process inventory, elevating constraints, etc. At this point Pete is lamenting that his competitors have bigger, better, newer machines that have much greater economies of scale. (Hmm…)

At the cosmetics company, Bob has just turned things around by rationalizing his distribution system — just in time too, because the cosmetics industry is changing, and they’re expecting to introduce a new product line every year, which would be a disaster with three months’ obsolete product in the distribution pipeline.

Under the old system, even with three months’ inventory, with over 600 products, they were always missing something from a customer order, to the point that they were only able to deliver complete orders 30 percent of the time, and they’d have to ship missing items later.

Under the new system, they can respond to customers in one day, and they are able to deliver complete orders 90 percent of the time — all while holding just six weeks’ inventory in the system, half as much as before.

Under the old system, plants were treated as profit centers, and they recorded any production as a sale as soon as it was shipped to the regional warehouses, where it became somebody else’s responsibility.

Under the new system, stock stays at the plant, which acts as a central warehouse, with just 20 days’ stock at the regional warehouses, replenished every three days. This allows the plant to aggregate the forecast demand across all 25 regions — and when you aggregate demand across 25 regions, you do not get 25 times as much volatility (standard deviation); you get five times as much volatility. (Five is the square root of 25.) When demand is higher than expected in one region and lower in another, those errors cancel out — but only when aggregated.
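The square-root effect is easy to verify by simulation. The per-region demand figures below are invented; only the 25 regions come from the story:

```python
import random, statistics

random.seed(1)
weekly_demand = lambda: random.gauss(100, 20)  # per-region mean 100, sd 20

n_weeks = 10_000
one_region = [weekly_demand() for _ in range(n_weeks)]
aggregated = [sum(weekly_demand() for _ in range(25)) for _ in range(n_weeks)]

print(statistics.stdev(one_region))  # ≈ 20
print(statistics.stdev(aggregated))  # ≈ 100 = 20 × √25, not 20 × 25
```

Relative to its mean of 2,500, the aggregated demand is five times steadier than any single region’s, which is exactly why the central warehouse can forecast so much better.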

Reducing inventory in the distribution pipeline doesn’t just reduce your carrying costs and obsolescence costs; it also makes production more responsive. When you have three months’ inventory in the distribution pipeline, that means you’re producing based on three-month-old forecasts.

Anyway, although the company has moved to its new distribution system, its customers, the shops, are still ordering in bulk, which puts a bigger strain on the system than if they ordered in smaller, more frequent batches.

Bob notes that they choose their inventory buffer size in the same way they’d set a buffer in front of a bottleneck in a manufacturing plant, based on expected consumption and expected replenishment time. What he does not mention is that real-life goods often have very different demand volatilities — or, more specifically, very different coefficients of variation of demand — so that 20 days’ average demand in one good may offer more than enough protection or far too little. Also, different goods present very different ratios of costs of overage (from holding too much inventory) to costs of underage (from holding too little inventory), especially when you realize that you’re not supplying independent goods but whole orders. You might want 99.9-percent safety on cheap parts that might hold up larger orders.
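That overage/underage trade-off has a standard name — the newsvendor problem — and its critical-ratio formula makes the point concrete. This framing is mine, not Goldratt’s, and the costs below are invented for illustration:

```python
from statistics import NormalDist

def buffer_for(mean_demand, sd_demand, cost_under, cost_over):
    """Newsvendor-style stock level: carry enough that the chance of
    stocking out matches the ratio of underage cost to total cost."""
    critical_ratio = cost_under / (cost_under + cost_over)
    z = NormalDist().inv_cdf(critical_ratio)
    return mean_demand + z * sd_demand, critical_ratio

# A cheap part that can hold up a whole order: running short is very costly.
cheap_stock, cheap_sl = buffer_for(100, 30, cost_under=50.0, cost_over=0.05)
# An expensive, slow-moving good: holding too much is the bigger risk.
dear_stock, dear_sl = buffer_for(100, 30, cost_under=5.0, cost_over=20.0)

print(f"cheap part:  stock {cheap_stock:.0f} units at {cheap_sl:.1%} service")
print(f"costly good: stock {dear_stock:.0f} units at {dear_sl:.1%} service")
```

The cheap part that can hold up a whole order earns itself a near-99.9-percent service level; the expensive slow mover gets stocked below its own mean demand — which is why one flat “20 days of average demand” rule can’t fit every good.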

Which brings us to Stacey’s pressure-steam company. The pressure-steam company, like others in its industry, sells its pressure-steam equipment to manufacturing plants at cost, or thereabouts, and makes its money by selling spare parts at high margins later — because by then the customers are locked in. The pressure-steam company keeps the necessary spare parts at the client site, and it requires over 95-percent safety on those parts, because a missing part means the client’s whole plant is down until the fix gets made. (Hmm…)

I wonder what the three companies will do…

It’s Not Luck

Wednesday, September 5th, 2007

As I mentioned earlier, when I read Kevin Fox’s Blue Light anecdote, it spurred me to go back and read some old Goldratt books I hadn’t read yet.

When I got to It’s Not Luck, his second novel, it reminded me why I liked The Goal so much: a narrative is a surprisingly good way to explore ideas and convey technical information. Science fiction authors have known this for years, but it also works for more down-to-earth ideas.

In The Goal, our hero desperately tries to save his factory by following the cryptic advice of his Yoda-like mentor, Jonah — who bears a striking resemblance to the author, Eli Goldratt.

In It’s Not Luck, our hero has moved up in the organization — after saving the plant, of course — and he now struggles to internalize the greater lessons of his mentor and to use his methodologies to solve problems in logistics, marketing, and strategy, not just production. In fact, he has to come up with miracles to save three different divisions in the conglomerate before they’re sold off to Wall Street sharks.

It’s Not Luck is not great literature, but it has its strengths. First, the solutions our hero and his team come up with are solid, interesting solutions, and reading about them might inspire a manager to come up with similar ideas closer to home.

Second, Goldratt has worked as a consultant long enough to write convincingly about resistance and apathy from within an organization — like Scott Adams (Dilbert), but not funny. His characters don’t ring particularly true in other ways — they are quite wooden — but the “change management” portions of the story feel grounded in reality.

What does not work is the constant advertising for Goldratt’s jargon-laden Thinking Processes. Our hero routinely explains the great importance of drawing a Cloud, or spelling out a Current Reality Tree, when it’s not at all clear that these formal tools are what allowed someone to come up with the intriguing business solutions Goldratt put in the book.

One last point: Perhaps I’ve spent too much time thinking like a Finance Guy, but when our hero and his colleagues realize that they can save their companies by making them vastly more profitable, but accepted accounting practices will make them look less profitable in the short term, I couldn’t help but think, Why aren’t they arranging a management buyout? The real reason: Goldratt isn’t a finance guy.

Haystack Syndrome 3

Wednesday, September 5th, 2007

In The Haystack Syndrome, which I was just discussing, Goldratt expands his original example by supposing that the marketing director visits Japan — the book was published in 1990, remember — where demand for Products P and Q is almost the same as in the states — but at a price 20 percent lower.

We previously calculated that each P sold was worth $3 per minute on Resource B, the bottleneck, and each Q was worth $2. Now, each P sold to Japan is worth less than $3 per minute. In fact, each P sold to Japan is worth just $27 — costs didn’t decrease by 20 percent, just the selling price — which is less than $2 per minute. So we don’t want to sell any product to Japan.
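To keep the arithmetic straight, here’s the throughput-per-bottleneck-minute calculation in code. The per-unit prices, material costs, and routing minutes aren’t quoted in this post; they’re my reconstruction from Goldratt’s familiar P-and-Q example, chosen to reproduce the per-minute figures above:

```python
# Throughput per constraint minute, with Resource B as the bottleneck.
# Per-unit prices, material costs, and minutes on B are reconstructed
# from Goldratt's well-known P-and-Q example, not quoted in the post.
products = {
    #  name          (price, material cost, minutes on B)
    "P (states)": (90.0, 45.0, 15),
    "Q (states)": (100.0, 40.0, 30),
    "P (Japan)": (90.0 * 0.8, 45.0, 15),    # price cut 20%, costs unchanged
    "Q (Japan)": (100.0 * 0.8, 40.0, 30),
}

for name, (price, material, minutes) in products.items():
    throughput = price - material            # contribution per unit sold
    print(f"{name}: ${throughput:.0f}/unit = "
          f"${throughput / minutes:.2f} per bottleneck minute")
```

With these numbers, P in the states is worth $3.00 per minute of B and Q is worth $2.00, while a P sold to Japan brings $27 per unit, or $1.80 per minute — matching the figures above.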

What happens if we increase our capacity? Let’s assume that we can buy another B machine for $100,000, and we can pay another B worker for $400 per week. (These numbers obviously aren’t real, even for 1990.)

Now we can sell 100 of Product P and 50 of Product Q in the states for a total contribution (or throughput) of $7,500. Even after we subtract out our now-higher fixed costs of $6,400, our net profit is $1,100, or $800 higher than the $300 we were making before. That machine will pay for itself in 125 weeks.

Oh, wait, maybe we can sell to Japan now. We’ve lifted the constraint on Resource B, but now Resource A has become the bottleneck. By selling 100 of Product P, we’re using 1,500 minutes of Resource A. By selling 50 of Product Q, we’re using 500 more minutes of Resource A. We only have 400 minutes left to throw at Japan. With 400 minutes at Resource A, we can produce about 26 more units of Product P, for another $700 in contribution (or throughput). That almost doubles our previous improvement.

It’s a good thing we didn’t stop our analysis too soon; it almost cost us a lot of money.

Wait a minute, our constraint shifted from Resource B to Resource A, and we didn’t recalculate everything, even though changing our constraint changes everything. If Resource A is our constraint, then Product P has a throughput of $3/minute, and Product Q has a throughput of $6/minute! Product P sold to Japan now has a throughput of less than $2/minute, and Product Q sold to Japan now has a throughput of $4/minute.

With our 2,400 minutes of Resource A, we should be making Product Q for the states (500 minutes), then Product Q for Japan (500 minutes), then Product P for the states (1,400 minutes). We should produce no Product P for Japan. Then we can bring in $3,000 + $2,000 + $4,200 = $9,200 in contribution (throughput), for a net profit of $2,800, which dramatically improves our profit again!
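The whole reprioritization fits in a few lines: rank every product-market pair by dollars per minute of Resource A and fill the 2,400 minutes greedily. The per-unit throughputs and routing minutes are again my reconstruction of the P-and-Q numbers (15 minutes of A per P and 10 per Q, as the 1,500- and 500-minute figures imply); with whole units, the greedy fill yields $9,185, which rounds to the $9,200 above:

```python
# Re-rank by dollars per minute of Resource A (now the constraint) and
# greedily fill its 2,400 weekly minutes.  Per-unit throughputs and the
# minutes on A (15 for P, 10 for Q) are reconstructed from Goldratt's
# P-and-Q example to match the post's figures.
CAPACITY_A = 2400
offers = [
    # (name, throughput per unit, minutes on A, weekly demand)
    ("Q (states)", 60.0, 10, 50),
    ("Q (Japan)",  40.0, 10, 50),
    ("P (states)", 45.0, 15, 100),
    ("P (Japan)",  27.0, 15, 100),
]

offers.sort(key=lambda o: o[1] / o[2], reverse=True)  # best $/min of A first

minutes_left, total = CAPACITY_A, 0.0
for name, tp, minutes, demand in offers:
    units = min(demand, minutes_left // minutes)      # whole units only
    minutes_left -= units * minutes
    total += units * tp
    print(f"{name}: {units} units, ${units * tp:,.0f}")

print(f"total throughput: ${total:,.0f} -> net ${total - 6400:,.0f} after fixed costs")
```

Sorted this way, Q for the states ($6/minute) and Q for Japan ($4/minute) come first, P for the states ($3/minute) takes the remaining time, and P for Japan ($1.80/minute) gets nothing — the mix the recalculation above arrives at.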

Inertia can cost you a lot of money.