Not everything that’s useful to do is quite so “human like”

Friday, February 24th, 2023

It’s always amazing when things suddenly just work, Stephen Wolfram remarks:

I’ve been tracking neural net technology for a long time (about 43 years, actually). And even having watched developments in the past few years I find the performance of ChatGPT thoroughly remarkable. Finally, and suddenly, here’s a system that can successfully generate text about almost anything — that’s very comparable to what humans might write. It’s impressive, and useful. And, as I’ll discuss elsewhere, I think its success is probably telling us some very fundamental things about the nature of human thinking.

But while ChatGPT is a remarkable achievement in automating the doing of major human-like things, not everything that’s useful to do is quite so “human like”. Some of it is instead more formal and structured. And indeed one of the great achievements of our civilization over the past several centuries has been to build up the paradigms of mathematics, the exact sciences—and, most importantly, now computation—and to create a tower of capabilities quite different from what pure human-like thinking can achieve.

[…]

At its core, ChatGPT is a system for generating linguistic output that “follows the pattern” of what’s out there on the web and in books and other materials that have been used in its training. And what’s remarkable is how human-like the output is, not just at a small scale, but across whole essays. It has coherent things to say, that pull in concepts it’s learned, quite often in interesting and unexpected ways. What it produces is always “statistically plausible”, at least at a linguistic level. But — impressive as that ends up being — it certainly doesn’t mean that all the facts and computations it confidently trots out are necessarily correct.
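
As an aside, the "follows the pattern" idea can be made concrete with a toy model. The sketch below is my own illustration, nothing like ChatGPT's actual transformer architecture: a tiny bigram model in Python that samples each next word from the words that followed the current word in its "training" corpus. The output is locally plausible while carrying no notion of factual correctness, which is exactly the distinction Wolfram is drawing.

    import random
    from collections import defaultdict

    # Tiny corpus standing in for "what's out there on the web and in books".
    corpus = ("the cat sat on the mat and the dog sat on the rug "
              "and the cat saw the dog").split()

    # For each word, record every word that ever followed it.
    followers = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        followers[current].append(nxt)

    def generate(start, length=10):
        """Emit one word at a time, sampled from the observed followers."""
        word, output = start, [start]
        for _ in range(length):
            candidates = followers.get(word)
            if not candidates:
                break
            word = random.choice(candidates)  # a statistically plausible next word
            output.append(word)
        return " ".join(output)

    print(generate("the"))
    # Plausible-sounding output such as "the cat sat on the rug and the dog sat",
    # with nothing anywhere checking whether it is true.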

[…]

Machine learning is a powerful method, and particularly over the past decade, it’s had some remarkable successes — of which ChatGPT is the latest. Image recognition. Speech to text. Language translation. In each of these cases, and many more, a threshold was passed — usually quite suddenly. And some task went from “basically impossible” to “basically doable”.

But the results are essentially never “perfect”. Maybe something works well 95% of the time. But try as one might, the other 5% remains elusive. For some purposes one might consider this a failure. But the key point is that there are often all sorts of important use cases for which 95% is “good enough”. Maybe it’s because the output is something where there isn’t really a “right answer” anyway. Maybe it’s because one’s just trying to surface possibilities that a human — or a systematic algorithm — will then pick from or refine.
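
That "surface possibilities, then pick or refine" workflow has a simple shape. Here is a minimal sketch, with a made-up flaky guesser standing in for any 95%-accurate model: an exact check accepts only proposals that actually hold, so occasional confident wrongness costs a retry rather than a wrong answer.

    import random

    def guess_divisor(n):
        """Stand-in for an imperfect model: right about 95% of the time."""
        true_divisors = [d for d in range(2, n) if n % d == 0]
        if true_divisors and random.random() < 0.95:
            return random.choice(true_divisors)
        return random.randrange(2, n)  # the elusive 5%: a confident wrong answer

    def verified_divisor(n, attempts=10):
        """Surface proposals, keep only those that pass an exact check."""
        for _ in range(attempts):
            d = guess_divisor(n)
            if n % d == 0:  # cheap, systematic verification of the guess
                return d
        return None  # better to admit failure than return a wrong answer

    print(verified_divisor(91))  # 7 or 13, essentially always

The point of the design is that verifying a candidate is far cheaper than guaranteeing a correct answer up front, so an unreliable generator plus a reliable filter can still be useful.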

It’s completely remarkable that a few-hundred-billion-parameter neural net that generates text a token at a time can do the kinds of things ChatGPT can. And given this dramatic — and unexpected — success, one might think that if one could just go on and “train a big enough network” one would be able to do absolutely anything with it. But it won’t work that way. Fundamental facts about computation — and notably the concept of computational irreducibility — make it clear it ultimately can’t. But what’s more relevant is what we’ve seen in the actual history of machine learning. There’ll be a big breakthrough (like ChatGPT). And improvement won’t stop. But what’s much more important is that there’ll be use cases found that are successful with what can be done, and that aren’t blocked by what can’t.
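
Computational irreducibility, the wall Wolfram points to here, can be shown in a few lines. The sketch below uses rule 30, the cellular automaton Wolfram himself often cites: as far as anyone knows there is no shortcut formula for the state after t steps, so the only way to find out is to actually run all t steps, no matter how large a network one trains.

    def rule30_step(cells):
        """One update of the rule 30 cellular automaton, zero-padded at the edges."""
        padded = [0] + cells + [0]
        # Rule 30 in Boolean form: new cell = left XOR (center OR right).
        return [padded[i - 1] ^ (padded[i] | padded[i + 1])
                for i in range(1, len(padded) - 1)]

    row = [0] * 15 + [1] + [0] * 15  # a single black cell in the middle
    for _ in range(15):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)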

Comments

  1. VXXC says:

    Actually, 95% right, with that 5% monitored by competent humans, will allow us to downsize our aging bureaucracy, HR, lawyers, and email jobbers, who are nowhere near 95% correct and closer to 5%.

    Allow us to, not will. But it’s now quite possible.

  2. Gavin Longmuir says:

    VXXC: “Allow us to, not will.”

    To quote Peter Drucker: "Nothing is so wasteful as taking the time to do well that which should not be done at all."

    We need to burn most law & bureaucracy to the ground and then salt the earth it stood on — not automate it.

  3. Buckethead says:

    One wonders what the capabilities of a ChatGPT-enhanced regulatory bureaucracy might be. Even the laziest and most incompetent bureaucrat could generate comprehensive and voluminous regulations in a matter of hours.

    Might be we still end up needing BuSab.

  4. Wang Wei Lin says:

    I had an exchange with ChatGPT a few weeks ago. It was like talking to a liberal moron. The dialogue was predictable on the usual topics. All the programmers did was make an automated NPC.
