How To Think Real Good

Monday, October 20th, 2014

After compiling How to do research at the MIT AI Lab, David Chapman went on to write How To Think Real Good, a rather meandering piece that culminates in this list:

  • Figuring stuff out is way hard.
  • There is no general method.
  • Selecting and formulating problems is as important as solving them; these each require different cognitive skills.
  • Problem formulation (vocabulary selection) requires careful, non-formal observation of the real world.
  • A good problem formulation includes the relevant distinctions, and abstracts away irrelevant ones. This makes problem solution easy.
  • Little formal tricks (like Bayesian statistics) may be useful, but any one of them is only a tiny part of what you need.
  • Progress usually requires applying several methods. Learn as many different ones as possible.
  • Meta-level knowledge of how a field works — which methods to apply to which sorts of problems, and how and why — is critical (and harder to get).

I didn’t find that list as interesting as his pull-out points along the way:

  • Understanding informal reasoning is probably more important than understanding technical methods.
  • Finding a good formulation for a problem is often most of the work of solving it.
  • Before applying any technical method, you have to already have a pretty good idea of what the form of the answer will be.
  • Choosing a good vocabulary, at the right level of description, is usually key to understanding.
  • Truth does not apply to problem formulations; what matters is usefulness.
  • All problem formulations are “false,” because they abstract away details of reality.
  • Work through several specific examples before trying to solve the general case. Looking at specific real-world details often gives an intuitive sense for what the relevant distinctions are.
  • Problem formulation and problem solution are mutually-recursive processes.
  • Heuristics for evaluating progress are critical not only during problem solving, but also during problem formulation.
  • Solve a simplified version of the problem first. If you can’t do even that, you’re in trouble.
  • If you are having a hard time, make sure you aren’t trying to solve an NP-complete problem. If you are, go back and look for additional sources of constraint in the real-world domain.
  • You can never know enough mathematics.
  • An education in math is a better preparation for a career in intellectual field X than an education in X.
  • You should learn as many different kinds of math as possible. It’s difficult to predict what sort will be relevant to a problem.
  • If a problem seems too hard, the formulation is probably wrong. Drop your formal problem statement, go back to reality, and observe what is going on.
  • Learn from fields very different from your own. They each have ways of thinking that can be useful at surprising times. Just learning to think like an anthropologist, a psychologist, and a philosopher will beneficially stretch your mind.
  • If all you have is a hammer, everything looks like an anvil. If you only know one formal method of reasoning, you’ll try to apply it in places it doesn’t work.
  • Evaluate the prospects for your field frequently. Be prepared to switch if it looks like it is approaching its inherent end-point.
  • It’s more important to know what a branch of math is about than to know the details. You can look those up, if you realize that you need them.
  • Get a superficial understanding of as many kinds of math as possible. That can be enough that you will recognize when one applies, even if you don’t know how to use it.
  • Math only has to be “correct” enough to get the job done.
  • You should be able to prove theorems and you should harbor doubts about whether theorems prove anything.
  • Try to figure out how people smarter than you think.
  • Figure out what your own cognitive style is. Embrace and develop it as your secret weapon; but try to learn and appreciate other styles as well.
  • Collect your bag of tricks.
  • Find a teacher who is willing to go meta and explain how a field works, instead of lecturing you on its subject matter.

Comments

  1. William Newman says:

    “Before applying any technical method, you have to already have a pretty good idea of what the form of the answer will be.”

    This is usefully true in many ways, and pretty much completely true for a lot of things that we think of as technical methods (e.g. finite element analysis, or calculus of variations, or functional programming), so the claim seems to be true as it is intended to be understood. But it seems to me that there is an important exception to the claim as it is actually written.

    The old ideas of Occam’s Razor and falsifiability are fuzzy enough that they are maybe not technical methods. The new information-theoretic and statistical systematizations of those ideas (notably http://en.wikipedia.org/wiki/Minimum_description_length), however, certainly ought to qualify as “technical method” as Chapman wrote the phrase, even if his internal mental state was referring more narrowly to methods like finite element analysis. Also, some of the ideas coming in from (mostly) the microeconomics community are at least borderline-technical methods: if your idea is really meaningful and you sincerely believe in it, presumably you’d welcome a chance to bet on it or otherwise “put your money where your mouth is” in systems like hanson.gmu.edu/ideafutures.html, right?

    Chapman seems to mean “before applying any technical method to construct a specific understanding of the universe…”; the methods I have in mind, like minimum description length, are about determining whether your constructed understanding of the universe is any good. Borderline technical methods like idea futures are playing in the same space.

    Once one understands enough math that the puzzle about information theory is “why did it take so long for people to figure this out?” rather than “ow! ow! ow! what does this bafflegab mean?”, applying ideas like MDL to test things that are fondly imagined (by you and/or by others) to be meaningful truths about the world works rather well. And these technical methods slither around the requirement to “already have a pretty good idea” of the form of the answer, because their natural role is “I don’t know what the form will be, but I do know it will satisfy this criterion.” I don’t know what the form of the answer would be to macroeconomic questions about e.g. future unemployment rates; I just know that if people had a strongly valid answer, they would be able to pass MDL tests with it and make money betting on it. (That “strongly valid” weaseling is because all these criteria admit borderline cases where a pattern is completely correctly understood but reality happens to provide so few chances to test the pattern that any statistical signal is weak and ambiguous.)
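    To make that concrete, here is a toy sketch of the sort of two-part MDL comparison I mean. The coin-flip models, the crude 32-bits-per-parameter convention, and the data are all made up for illustration: each candidate is scored by the bits needed to describe the model plus the bits needed to describe the data given the model, and the shorter total wins.

        import math

        def data_bits(data, p):
            """Bits needed to encode a 0/1 sequence under a Bernoulli(p) model."""
            if not 0 < p < 1:
                return float("inf")
            return -sum(math.log2(p if x else 1 - p) for x in data)

        def mdl_score(data, p, n_params, bits_per_param=32):
            """Two-part code length: model description + data given the model."""
            return n_params * bits_per_param + data_bits(data, p)

        data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1] * 20  # mostly ones

        # Hypothesis A: a fair coin; no fitted parameters to transmit.
        score_fair = mdl_score(data, p=0.5, n_params=0)

        # Hypothesis B: a biased coin with one fitted parameter to transmit.
        p_hat = sum(data) / len(data)
        score_biased = mdl_score(data, p=p_hat, n_params=1)

        print(f"fair coin:   {score_fair:.1f} bits")
        print(f"biased coin: {score_biased:.1f} bits")  # shorter total wins here

    Note that nothing in the scoring rule requires you to know in advance what form the winning hypothesis will take; it only requires that, whatever it is, it compress the observations better than its rivals.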

    So things like MDL and idea futures play a role similar to the Turing test: the test doesn’t tell you what the form of a general AI will be; it intentionally slithers around that question completely, instead giving you a sufficient condition for recognizing a general AI when you see it. (And incidentally, the Turing test is arguably a technical method as the phrase was written, and if so it suffices to falsify the claim as written.)

  2. Aretae says:

    Ok, someone else to read. Thanks.

  3. Candide III says:

    William: I can’t help feeling that Chapman wouldn’t agree with you about automatist tools like MDL. They may be useful enough in their own domain, but you have to have language before you can present data, much less construct descriptions of it, and the idea that we can slither around this with some kind of ‘technical method’ mumbo-jumbo is at least highly non-obvious. I.e., by the time you have obtained and formalized your data sufficiently to apply MDL, most of the subtleties have been swept under the rug, usually without anybody realizing it. Data on GDP, inflation and unemployment are trivial examples.

  4. Candide III says:

    Obligatory Moldbug reference: “A reservationist epistemology”.
