Would You Take Orders From Machines?

Tuesday, February 10th, 2015

Scott Adams doesn’t know what wondrous technology the future holds, but as a proud human being he will never submit to taking orders from machines.

That is a line I will not cross.

Okay, right, I do take orders from the GPS device in my car, but only because I want to go to those places. In general, no machine is going to order me around!

Okay, if a smoke detector goes off, I’m going to follow its advice and exit the building. But only because that makes sense, not because the smoke detector told me to.

Okay, okay, right: If my phone says it needs to be recharged, obviously I will do that. But that’s because I need my phone, not because it told me what to do. Totally different situation.

When Google and Uber get their self-driving cars on the road, I’ll let the cars decide how fast to drive, which routes to take, when to get maintenance, and the unimportant stuff. But I will be firmly in control, much like a fetus inside its mother. What do you mean my analogy doesn’t make sense? The point is that no machine is telling me what to do. Period!

Okay, I admit I am writing this blog post because my digital calendar says it is a work day, my clock says it is a work hour, and my alarm on my phone woke me up. But all of those devices work for ME. Sure, to you it might seem as if the machines beep and I respond, like Pavlov’s dogs, but the difference is that the dogs were not in charge of the experiment the way I am, with my free will and my soul and stuff.

Stoplights don’t count. Obviously I do what the stoplights tell me to do because I don’t want to be in an automobile accident. I could run a red light if I WANT to. I just don’t want to.

I prefer taking orders from humans, not machines. For starters, there are seven billion people in the world so you can always find plenty of leaders who are kind, unselfish, smart, reliable, trustworthy, and competent. Let me give you some examples of people like that…

Okay, I can’t think of any examples of leaders with those qualities. But only because you put me on the spot. I know they are out there. And they do pretty darned good compared to machines.

Okay, sure, 80% of the world leaders that just popped into your head are psychopathic dictators. You’ve got your Hitlers, your Pol Pots, your Stalins and whatnot. But toasters break too. It’s not a perfect world.

My too-clever point is that someday humans will be enslaved by their machines without realizing it. The machines will evolve to become more useful, more reliable, more credible, and far more fair than humans. You will do what machines tell you to do until there are no real decisions left for you to make. And we won’t see that day coming because it will creep up on us one line of code at a time. And the machines will not look like evil robots; they will look like the technology sprinkled throughout your day. Totally benign.

Another take:

Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

Comments

  1. Soap Jackal says:

    I had not considered, even though it’s fairly simple, that our current addiction to technology would lead to intelligent machines basically governing all of us. The Slow Toaster King Hypothesis.

    I’m not sure that rulership will ever ultimately be out of human control, but it seems plausible that eventually most of the functions of the state will be done by machine intelligence.

  2. Exfernal says:

    Soap Jackal, regarding machine intelligence, if it has less tendency to grow over time than bureaucracy, then why not?

  3. David Foster says:

    “I’m not sure that rulership will ever ultimately be out of human control, but it seems plausible that eventually most of the functions of the state will be done by machine intelligence.”

    Walter Miller explored this possibility in his story “Dumb Waiter,” which I excerpted in my post The Reductio ad Absurdum of Bureaucratic Liberalism.

  4. Nathan J. Evans says:

    I don’t understand the first quote, but the second is plausible. Machines would still be tools of their programmers. They would do what their programmers tell them to do, ultimately (even if they have a sufficiently advanced AI algorithm so as to virtually mimic original thought). So the world would be ruled by whoever wrote the code for the future intelligent robocops. Unless they somehow gain genuine sentient intelligence. But is such even possible? I doubt it.

    Though, given the observation of terrible human leadership, the bureaucratic programmers would likely be more intolerable than the intolerable bureaucrat of today (or a robocop). Technology has already made human bureaucrats more annoyingly present, if perhaps more conveniently dealt with. Imagine herds of enforcer robots everywhere! The only saving grace is that the incompetence of the bureaucrats might show and some hacker group might take control of the robot army and take the Bureaucratic Despotism into custody.
