“Where can a layman get an introduction to the current state-of-the-art of ‘hard AI’?” a redditor asked, and one martincmartin replied:
I worked on (baby steps toward) hard AI until about 2003, first at CMU, then MIT. I thought a lot about where the field is going, and why. Here’s a brief history of AI:
1940s:
- computer invented; clearly does things that, in people, are considered intelligent (e.g. arithmetic)
- Atomic bomb changes face of science research in America. It’s hard to overstate the impact of going from conventional bombs (killing groups of people) to atomic bombs (wiping out entire cities). Russians quickly catch up. People know we’re just scratching the surface of sub-atomic physics, and wonder what else lies hidden in the atom. Physics funding increases 10x.
1950s:
- Computers can solve calculus problems, which looks to many people like they have the intelligence of an undergraduate.
- Dartmouth conference and AI starts to gel as a discipline.
- Sputnik shows that Russians can push a button and two hours later, an atomic bomb explodes in the U.S., with no way to stop it. ARPA founded to give scientists lots of money to research basic science. Funding in physics goes up another 10x.
1960s:
- Space race, where the Russians are continually ahead of the Americans: first person in space, first probe on the moon, etc.
- Computers do more things that look intelligent: hold simple conversations, re-discover hundreds of years of math in a few hours. People worry that the Russians will be able to create intelligence thousands or millions of times greater than a human’s, and outsmart us. This peaks around 1970, as captured in the movie “Colossus: The Forbin Project.”
- The media focuses on the most outlandish predictions.
- The movie 2001 is seen as a more-or-less plausible depiction of what could happen by the year 2001. Perhaps a little optimistic, but not wildly so.
1970s:
- Russians fall behind in space & physics. Hyped AI doesn’t pan out. The low-hanging fruit in symbol-systems AI is picked, and the field settles into a hard slog. Voters grumble about all the money being spent on research. ARPA renamed DARPA and told to focus on military-specific technology.
1980s:
- The people funding AI no longer want to hear about hard AI; they want people to solve practical, near-term problems. There’s a growing consensus that the symbol-systems, hard-AI stuff doesn’t work and isn’t going to work any time soon.
- An AI professor at MIT told me that, in the 1980s, CS professors were embarrassed to say they were working in AI, and hastened to add “but not that kind of AI!”
- There’s lots of talk about what the next paradigm will be. Stuff that looks like other engineering disciplines wins. (Hidden Markov Models come from communications theory, as do Kalman filters, etc.)
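To give a flavor of the “engineering-discipline” techniques that won out: a Kalman filter estimates a hidden quantity from noisy measurements by repeatedly blending a prediction with each new observation. Below is a minimal one-dimensional sketch in Python; the function name and all the noise parameters are illustrative assumptions, not anyone’s production code.

```python
def kalman_1d(measurements, process_var=1e-3, meas_var=0.1):
    """Minimal 1-D Kalman filter: estimate a roughly constant
    value from a sequence of noisy measurements.

    process_var: how much we assume the true value drifts per step.
    meas_var:    how noisy we assume each measurement is.
    (Both are illustrative, made-up values.)
    """
    x = measurements[0]  # initial state estimate
    p = 1.0              # initial estimate uncertainty
    estimates = []
    for z in measurements:
        # Predict: state assumed constant, so only uncertainty grows.
        p += process_var
        # Update: Kalman gain decides how much to trust the measurement.
        k = p / (p + meas_var)
        x += k * (z - x)   # nudge estimate toward the measurement
        p *= (1 - k)       # uncertainty shrinks after each update
        estimates.append(x)
    return estimates

# Noisy readings of a true value near 5.0 converge toward it:
print(kalman_1d([5.2, 4.8, 5.1, 4.9, 5.0])[-1])
```

The appeal to funders was exactly this shape: a small, analyzable update rule with quantified uncertainty, rather than an open-ended symbolic reasoning system.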
1990s:
- “Machine learning” (aka applied statistics) takes over as the only game in town (in the U.S.). Essentially, AI is in its behaviorist stage, where it’s assumed that anyone who talks about strong AI is a flake.
2000s:
- More “machine learning = AI”. Perhaps the seeds of the next paradigm are being sown, but we’ll only know in retrospect.