Thick and thin

Tuesday, January 8th, 2013

Gregory Cochran contrasts thick and thin problem-solving styles:

Just the other day, when I was conferring, conversing and otherwise hobnobbing with my fellow physicists, I mentioned high-altitude lightning, sprites and elves and blue jets. I said that you could think of a thundercloud as a vertical dipole, with an electric field that decreased as the cube of altitude, while the breakdown voltage varied with air pressure, which declines exponentially with altitude. At which point the prof I was talking to said “and so the curves must cross!” That’s how physicists think, and it can be very effective. The amount of information required to solve the problem is not very large. I call this a thin problem.
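To make the “curves must cross” point concrete, here is a minimal numerical sketch (my own illustration, not from the post), assuming round values for the cloud’s charge moment, the sea-level breakdown field, and the atmospheric pressure scale height:

```python
# Illustrative sketch: a power-law field vs. an exponentially falling threshold.
# All numbers below are assumed, order-of-magnitude values, not measurements.
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
P = 1.0e6             # cloud charge moment, C*m (~1000 C*km, assumed)
E_BREAK_SEA = 3.0e6   # breakdown field at sea level, V/m (approximate)
H = 7.2e3             # pressure scale height, m (approximate)

def dipole_field(z):
    """On-axis field of a vertical dipole, falling off as 1/z^3."""
    return P / (2 * math.pi * EPS0 * z**3)

def breakdown_field(z):
    """Breakdown threshold, proportional to pressure, falling exponentially."""
    return E_BREAK_SEA * math.exp(-z / H)

# Scan upward from above the cloud: the exponential must eventually
# drop below the power law, so the curves cross.
for z_km in range(10, 121):
    z = z_km * 1e3
    if dipole_field(z) > breakdown_field(z):
        print(f"curves cross near {z_km} km altitude")
        break
```

The exact crossing altitude depends entirely on the assumed numbers; the point of the argument is only that an exponential falls below any power law eventually, so above some altitude the cloud’s field exceeds the breakdown threshold.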

At the other extreme, consider Darwin gathering and pondering a vast amount of natural-history information, eventually coming up with natural selection as the explanation. Some of the information in the literature wasn’t correct, and much key information that would have greatly aided his quest, such as basic genetics, was still unknown. That didn’t stop him, any more than not knowing the cause of continental drift stopped Wegener.

And now a fun example:

In another example at the messy end of the spectrum, Joe Rochefort, running Hypo in the spring of 1942, needed to figure out Japanese plans. He had an ever-growing mass of Japanese radio intercepts, some of which were partially decrypted — say, one word of five, with luck. He had data from radio direction-finding; his people were beginning to be able to recognize particular Japanese radio operators by their ‘fist’. He’d studied in Japan, knew the Japanese well. He had plenty of Navy experience — knew what was possible. I would call this a classic thick problem, one in which an analyst needs to deal with an enormous amount of data of varying quality. Being smart is necessary but not sufficient: you also need to know lots of stuff.

At this point he was utterly saturated with information about the Japanese Navy. He’d been living and breathing JN-25 for months. The Japanese were aimed somewhere, that somewhere designated by an untranslated codegroup — “AF”. Rochefort thought it meant Midway, based on many clues, plausibility, etc. OP-20-G, back in Washington, thought otherwise. They thought the main attack might be against Alaska, or Port Moresby, or even the West Coast.

Nimitz believed Rochefort — who was correct. Because of that, we managed to prevail at Midway, losing one carrier and one destroyer while the Japanese lost four carriers and a heavy cruiser. As so often happens, OP-20-G won the bureaucratic war: Rochefort embarrassed them by proving them wrong, and they kicked him out of Hawaii, assigning him to a floating drydock.

The usual explanation of Joe Rochefort’s fall is that John Redman (head of OP-20-G, the Navy’s main signals intelligence and cryptanalysis group) had the advantage of geographical proximity to Navy headquarters in the bureaucratic struggle, along with the influence of his brother, Rear Admiral Joseph Redman. That and being a shameless liar.

Personally, I wonder if part of the problem is the great difficulty of explaining the analysis of a thick problem to someone without a similar depth of knowledge. At best, they believe you because you’ve been right in the past. Or, sometimes, once you have developed the answer, there is a thin way of confirming your answer — as when Rochefort took Jasper Holmes’s suggestion and had Midway broadcast an uncoded complaint about the failure of their distillation system — soon followed by a Japanese report that “AF” was short of water.
