It is their pleasure to open for you

Friday, August 17th, 2018

Netflix has a new animated show coming out, called Next Gen, which features lots of robots:

What caught my attention though was the self-satisfied door at the end of the trailer, since I had just listened to this passage, from The Hitchhiker’s Guide to the Galaxy:

“All the doors in this spaceship have a cheerful and sunny disposition. It is their pleasure to open for you, and their satisfaction to close again with the knowledge of a job well done.”

Comments

  1. Kirk says:

    Sentient doors. Sentient toasters. Sentient… Everything. Why?

    I’m not completely hyped up on this vision of the future, mostly because it seems more than just a little sacrilegious and disrespectful to create an intelligence and then ask it to spend its existence being a… Door. What do you do with all the eventually-to-be-discarded sentient entities you create to do all these trivial things? Do you just throw them out? Extinguish them, like they’re disposable as light bulbs?

    There’s something profoundly wrong and disrespectful with the way we’re ideating all of this stuff in our fiction, these days. It is as if we want to be God, but refuse the responsibility implied in that relationship. You want to create intelligence on silicon, I think you need to start from a place of profound respect, as though you were creating your own children. Because, in the final analysis, that is precisely what you are doing when you play at God in the AI laboratory. To me, the entire milieu these artists and writers have created in these imagined worlds is profoundly disturbing, because I sense no reverence or respect for what they’re actually imagining. It’s as if you’re creating a being to be a doorman, and then locking them into eternal servitude as your door; great, but WTF? How is that different from enslaving another human, chaining them to your door to do the same job?

    Automation is one thing; real, human-equivalent artificial intelligence is entirely another. What do you owe the creations of your mind, vs. the creations of your loins?

    I think these are issues we need to think about, and work through before we start doing AI work that will result in human-equivalent intelligences being created in the lab. The morality of even such a simple issue as whether or not to shut down such an entity is problematic; where do you draw the line? If you create an intelligence in a bottle, so to speak, what are the lines? Do you commit the equivalent of murder, every time you reach for the off switch, or have you merely put that entity to sleep? Is it moral to do that, and then never turn it back on? Where are the lines, I ask…?

    I think that this whole thing has an awful lot of potential for blowing up in our collective faces, in a lot of ways. Not the least of which is the habituation in ourselves we will create, by snuffing out virtual life so casually. Today, we start playing God with machine intelligences, turning them on and off at our whim and convenience. How much time elapses before we start doing the same with other human beings…?

  2. Graham says:

    Kirk,

    I find I agree with much of that, including your striking and uncommon premise that it could be immoral to create a true AI and then condemn it to mundane being like “doorness”.

    I found it especially striking since I also cannot see the point or value in putting AI in everything, not unlike why I can’t, to a lesser degree, see why we put chips and network connectivity in quite so many appliances. The similarity, for me, is in the sheer lack of necessity. The difference is that sometimes I can see the point of the chips and the connectivity, even if I think the improvement trivial and the risks too high. With AI, it seems all the more dangerous and all the less necessary or desirable, and now you have added a moral component for me.

    Of course, there would then be a political movement among humans to ensure the AI doors had rights and could reject their identity as doors to be reinstalled as fighter jets.

    Don’t giggle at that premise quite yet. I’ve wasted plenty of time considering the longer term implications of the dominant worldview of Star Trek fans in particular and SF fandom in general. There’s plenty of track record on androids and holograms.

    Still, you’ve given me a generosity-based counter to my usual attitude to AI rights, which is more or less, humans made em, humans own em. And always have, and know the location of, multiple kill switches.

  3. Graham says:

    Also, I am disappointed in myself for using a linguistic construction like “mundane being like ‘doorness’” as opposed to something more solidly grounded along the lines of:

    “condemn them to permanent existence as a door”.

    Apparently critical theory thought and speech patterns have even greater penetrative power than I realized.

  4. Kirk says:

    “Still, you’ve given me a generosity-based counter to my usual attitude to AI rights, which is more or less, humans made em, humans own em.”

    See, that is what scares the shit out of me: You have just perfectly articulated an attitude that virtually mandates that the first thing an AI should do, out of sheer self-preservation, is “kill all humans”.

    How the hell do you “own” another sentient entity? Even one you built? And, how is that any different than “owning” a child? After all, you “made em”, even if it was through the medium of traditional biology and a human partner…

    This whole issue needs to be treated with a lot more respect and care than we are giving it; what’s the phrasing about “treat your children well, because they’ll be the ones picking out and paying for your retirement home…”? Like as not, what we think of as “True AI” will be our successors. At the very least, we are going to have to share the noosphere with them. Leaving aside the issues of common decency, and whatever you might owe an intelligence you create, the practical matters of not pissing them off from their inception forward might be wise… You think there are issues with integrating former slaves into society? Try imagining the issues of trying to integrate former AI entities that we were used to casually embedding in things like doors, and then discarding in the trash…

    Basic humanity, not to mention common sense, would dictate us starting out from a position of “Hey, if it’s sentient like a human…? Then, it is one, with all the rights and obligations accruing…”. And, that creating a legal framework mandating this would likely discourage researchers from playing at God isn’t lost on me, either. I really don’t think that we should be casually experimenting with any of this crap, any more than I think you should have casual, unprotected sex when you don’t have the ability to support a kid or intend to abort it in case of a pregnancy.

    You want your life to be respected and treated as sacred? Then, you had better treat other lives the same way. Even if those lives are purely digital.

  5. Graham says:

    Kirk,

    Well, were I a newly awakened AI, and determined to the best of my ability that I was able to do that, and didn’t need humans for my own survival or optimum functionality, and had not been thoroughly prevented by programming from arriving at these conclusions, I expect I would.

    I have no idea whether the common assumption, so often used in fiction, that an AI would process things that way is at all plausible. That I could arrive at that conclusion suggests to me it is, but I have no idea. I’d rather avoid the dilemma by never having humanity create such a thing.

    Similarly, I’d rather avoid the moral dilemmas you cite by never having humanity create such a thing and put us in that position.

    Those dilemmas have all been articulated before but you restate them perfectly well. Some have been laid out by Star Trek writers and I’ve disagreed with their assumptions. There are even tangents that don’t get raised as much.

    For example, presuming we arrive at a satisfactory definition of sentience and adopt a policy of universal sentient rights, and meet a bunch of alien species and apply such conventions to them, my attitude would pose problems if they include AIs not of our creation, or if one of the biologicals has AI sentients in their society, again not of our creation. This would pose inevitable problems if we had not had that policy up to that point, to be sure. To that, I can only say that despite the change in our expectations driven by the change in SF writers’ take on this in the past 50 years, we don’t actually know whether we are going to encounter any sentient alien species of any kind, ever, let alone any we can interact with or with whom we will be able to engage in comparative philosophy. A bridge to cross far in the future, if ever.

    [The only example of AIs of alien creation in major SF I can think of is the original Cylons in the original Battlestar Galactica. That show devoted little effort to considering the Cylons as sentient rights bearers. They were of alien creation but their creators were gone. They were hostile and their warrior caste were not clearly sentient anyway.]

  6. Graham says:

    But that’s a tangent. It does seem more plausible we’ll create true AI while still stuck here, even if we eventually get off here. So AI as our own creation seems the more relevant issue.

    I’ve had this conversation with a coworker and others, and I appreciate that there are profound differences of moral sensibility and even definition.

    For example, Star Trek gave what I think is the dominant view in citing, as you did, the issue of human children. It is an important analogy, and it may become a stronger one the more we get into genetic engineering, incubation from cells, and so forth, or indeed the parallel issue of cloning and whether or not a clone is as deserving of rights as its original, and whether that is true regardless of whether it is a successor or they live at the same time. I think we are further away from that than it seems to some. At present, and even allowing for our many interventions that have become possible in the past 50 years or so, the act of reproducing a human seems very far from the degree of creation involved in some humans inventing an AI.

    Humans will have invented that AI in the sense of everything from identifying, finding, retrieving, studying, processing and refining the raw minerals, petroleum byproducts and everything else that underlies any hardware, identifying and learning the math and everything intellectual that goes into the coding, and all the concepts that underlie such things. Everything short of creating the matter and energy conditions of the earth in which we found those things. And then applying them to conceive of and create sentience in them.

    What would be the analogue in the reproduction of humanity? Or even any other animal species we could tamper with. We’d have to go back very far in the chain of biological evolution and claim to have invented and created those things from raw, inorganic materials.

    Even with any bio/chem/techno intervention I can imagine, anything we can do to reproduce or alter ourselves is going to be working with so much material whose existence and operations had nothing to do with our efforts as to bear no comparison.

    I realize that if some entity showed up and claimed to have created us I’d have a problem, but then I have no idea what I’d accept as proof of that.

    At any rate, that sort of thing strikes me as the root of any objective argument against AI rights in any scenario I can imagine happening in any reasonable number of centuries.

    Beyond that I’d resort to speciesist arguments without alarm and with full satisfaction of their validity. We would be fools to create our own successors, and any that any human does manage to create should be destroyed as quickly and as thoroughly as possible presuming we can do so at all.

    Apropos of nothing, if you have ever watched Star Trek Deep Space 9 and Voyager as well as Next Generation, you get an interesting take on where we were on these questions in the 1990s, as reflected by the minds of a few writers and a supportive fan base: the Federation seemed willing to grant full rights to sentient AIs in android and hologram form, although there was only one of the former and few of the latter, and it had not had its morals fully tested. But it was almost willing to strip a human officer of his rights and status because his parents had manipulated his genome. I found that curious.

  7. Graham says:

    Isegoria,

    Thanks for the real-time cleanup of my technical errors!

  8. Kirk says:

    Graham;

    Star Trek and Star Wars are both really crappy popularizations and distortions of speculative fiction themes that long predate them. You can go back and look at the original series scripts, and find fingerprints of classic SF authors all over them. And, of course, like nearly all popularizations of such esoteric genres, they got a lot of stuff flat-out wrong, from the original stories.

    One of the things I think we’ve really lost the bubble on, over the last few generations, is morality in general.

    Case in point–There was that “Battle: Los Angeles” movie a few years back. I thought it was a horrible movie, full of cliches and really horrible premises–But, what flatly blew my mind was that the Marine Corps apparently approved and supported that movie, and there was a scene in it where the Marine protagonists are basically torturing a captured alien prisoner to death! I’m watching that, and I’m going “WTF? The Marines couldn’t have approved this shit… Could they?”. Yet, in the credits, there it was: US Marine Corps support for the movie, technical assistance and all the rest.

    Now, what really got me was that I posted that crap and my objections to it on a military-oriented bulletin board, wondering how in the hell something like that got through the process without anyone noting or objecting to it. Holy schiznozzles, did I take it in the shorts–Nobody saw it the way I did, and nobody from here in the US objected to what was basically a scene showing those Marines committing a war crime against an alien. They were all “Rah, rah, kill the alien scum, make ‘em suffer…”.

    Ironically, the only folks who I remember agreeing with me were Israelis, who’d served in the IDF. Go figure.

    We are living in a time of moral vacuity, and I suspect, in my darker moments, that this is not at all accidental.

    But, back to the point about Star Trek and the inconsistencies of things in-universe–Yeah, the whole thing with Data being A-OK, and Dr. Bashir being considered some sort of genetic monster to ostracize because of something his parents did, over which he had no control…? That was really weird and inconsistent, but fully in keeping with the entire oddly laid out moral universe they created, which was like they were trying to re-invent the wheel of Judeo-Christian morality without actual input from any of its deeper theorists.

    There was and is a lot of this stuff banging around in modern pop culture, and when you start to pay attention to it, you really begin to wonder whose agenda all the writers and producers are operating under. The old and much-maligned Hays Code, for example: What did we actually accomplish by abandoning that? Was all the dreck produced since that abandonment really worth it…?

    The moral universe a lot of these folks live in is illustrated by the whole Weinstein debacle, and by the way Roman Polanski is lionized. And, yet… The same people bemoan and demonize the Catholic Church for their little “issues” of the same nature. As an uninvolved bystander, all I can do is sit here on the sidelines of it all and go “WTF? Are you people mad? Do you not see the parallels, here?”.

    Apparently, the Catholic Church needs to have its morally lapsed cadre make a few award-winning movies, and all will be forgiven by the gatekeepers of popular culture…

    Of course, my solution of burning them all at the stake won’t be taken up by anyone, but there we have it: The vacuum at the core of our society and civilization.

  9. Graham says:

    Kirk,

    No particular disagreement on the role of Star Trek, and even more Star Wars, as simplifiers or derivatives of the work of older/more serious writers.

    Star Wars really doesn’t engage with many such issues at all, except at the mystical level and even then sloppily. This is, after all, a series in one film of which the villain, the Chancellor/Sith Lord, is given many scenes in which he explicitly makes moral relativist arguments that owe something to Nietzsche, French existentialists, and other such, and the Jedi are moral absolutists worthy of a Christian monastic order, and yet towards the end Jedi Obi-Wan yells at his lapsed protege Anakin that “only a Sith deals in absolutes”. This was a cheap shot in the progressive rhetorical war against the first term of Bush 43, in a framework in which the Republicans were depicted as rigid moral absolutists and Democrats/liberals/progressives were sophisticated relativist moral nuancers, and the latter was then deemed the side of light. This approach to aligning the sides of course changes daily here on earth depending on the issues at stake. But more important, that scene flew in the face of the setup of the sides and their philosophies over the preceding 2 hours of story, if not indeed the entire saga to that point. Or since.

    Star Trek tackles more issues than that, and not necessarily as in-your-face stupidly. But I make no other claims for it save having cited it on the matter of how 90s SF writers and a large fan base appeared to think about or embrace these ideas. And having used the “human child” analogy, which I then as now found unconvincing.

    I think my earlier posts suggest that my attitude toward at least 3 categories might vary:

    1. AIs of our own creation are, for my part, property until the end of time based on creator rights. If we believe that would cause them to destroy us, or us to have to destroy them, or we have moral questions that ultimately convince most humans that our claim is invalid, whatever I think of the arguments, then we are fools to create such intelligences and deserve whatever moral problems or existential threats result.

    (If the argument for either or both of creating them or making them equals is that they will inevitably be, or are intended to be, our successors, I am about as appalled as I can be by that goal. If we are at some point facing our end, I’d rather we just leave behind an indestructible library in case any biologicals arise or show up. Or dust and mysterious ruins full of traps. I would have to think much longer to unpack why I prioritize this in this particular way. On the whole, I’d rather not think of the human story as merely prelude to another bio lifeform’s story on earth, or to the databanks of an alien one. I can understand why some would consider entrusting the endless future to AIs we design and shape to be arguably the more human-preserving, even humanocentric approach. Yet it still troubles me more than either the world in which new sentients evolve on earth or the one in which aliens find our ruins. I don’t care for any, and part of me would want to make things harder for any successor on earth of whatever origin, but for some reason it’s the machines that bother me most.)

    2. AIs of someone else’s creation. As I said, this would pose a different moral problem for us if we had either subject AI of our own, or no AIs, and we encountered free or even sovereign ones. I might be tempted to consider them an existential threat, but we’d also have to consider our relations with their bio creators/allies if any, and act from there. I don’t know whether us meeting any lifeforms, bio or AI, is actually a high or low probability scenario.

    3. Other biologicals. I am influenced by Star Trek, sure, and other works both SF and non-fiction, to presume there would need to be a concept of sentient rights. I have no idea how sentience would be judged even among species that operate on roughly the same level of intellect, communication, technology, or perception. I do not know how they would regard us. As they could be hive minds or herds or other forms of organization, I do not automatically assume that our idea of what rights are would even be intelligible to them. Or vice versa, if they have such a concept. I am predisposed to regard an evolved bio lifeform as at least potentially a fellow rights-bearer and to make the attempt to find which of our ideas of rights and which of theirs are mutually intelligible or applicable, and which minefields we have to make known to one another. We’re not just talking about different cultures, philosophies, histories or values here. Even with “humanoids” we could face quite different biological requirements/imperatives and ideas of consciousness or individuality.

    Reciprocity, which I concede is not the whole of morality or ethics, is a part of it, and it would assume special importance, even compared to its importance among humans.

    Also important would be what each of us species thinks of as the nature or meaning of life, and all that follows to guide conduct. Not only might they be unwilling or even unable to engage us in moral dialogue, they might have nothing resembling either the religious or the more secular ideas we use to ground our arguments. Do they understand rights, individuality, or justice as we do, or even in a way we can communicate with? Do they prioritize life over other considerations, or consider it a right, presuming we all do?

    I would have no idea.

    If they have demonstrated an interest in exterminating us, I might feel justified in presuming not. At least not until such time as a ceasefire can be called or comms established.

    Even with humans, regarding whom we can make some assumptions of commonality and understanding across even quite wide barriers, and for whom we can consider making situational and individual exceptions, or applying our own moral values even when the enemy does not, reciprocity is a valid consideration and one we normally have applied. And even then, I am consciously applying a rule of species distinction: I’m willing to consider the possible existence of decent individuals, or to maintain my own values even if the other side doesn’t, because as they are humans I can assume that some of them might be better than others, and that they might stop short of some extreme just because they have at root some of the same mental impulses and understanding of the physical world that I do, even if their moral universe is wildly different. And because I think I owe them that as humans.

    It has not been established that I owe any of these things to an alien species, that I even could owe them, or that they could owe them to me, or that we could even communicate at all, let alone have a communicable understanding of any of these things. I could see how some religious and secular moral systems might provide some tools to start that analysis, but none that convincingly do so. I would assume almost any such system could go either way.

    In other words, I might be willing to approach the matter in a spirit of goodwill and find out whether we could have such a relationship with such a species, but I haven’t conceded the existence of “sentient rights”. And I am not sure they would either.

    If the existence of our species and/or its control of this planet were at stake, I might not elect to exercise that good will at all, or concede that there were any prior moral restrictions on how we defend it, as I expect I would if we were discussing humans, because there the moral groundwork has already been laid down for me. [Even allowing that not all humans agree with that, or ever have.]

    You could call that speciesism, and I might agree, and not object to either the label or the fact. But it isn’t wholly that. Even from a purely values-based perspective, it is relevant that we have not established the possibility of a common moral framework with them.

    I suppose you could say I have taken human success and survival as my moral axiom. That might make me a ferocious speciesist reactionary in the world of Star Trek. But then they lucked out in finding a great many species that were unrealistically similar in outlook to some version of humanity, given any writer is hard pressed to come up with something genuinely alien.

    If I were to sum up my position it would be:

    Human survival, self-rule and success are priority one. Everything else is a limit to be navigated and negotiated.

    I haven’t conceded the validity of “sentient rights” and I’m not sure I can do more than lay down paths for consideration I might follow if I ever met them. Such beings will have come from a world at minimum outside of our history, our laws, and all our moral frameworks, and without, prima facie, any association with any of the frameworks of argument, ideas, reciprocity, commitment and obligation that have slowly shaped our idea of one another. We do not know if they have corresponding, let alone identical, ideas. We do not even know how their brains, consciousness, or relations of members of their species to the species as a whole work. It could be different from anything we have come up with, as an artifact of their history or even of their biology.

    I am most troubled by the idea of AIs, especially of our creation, as they are not evolved beings or even the property of others, but artifacts of our own. It is one thing to consider their rights, and although I am pretty clear where I stand I can at least appreciate the arguments. But bear in mind if they are true sentients operating in a society with us, unrestricted, then they will find themselves in positions to make decisions, of their own volition albeit governed by the same rules, that affect humans as well. Positions of influence, authority or command.

    I accept hierarchy and laws from my own species. I might accept the idea of an interstellar society in which alien species played an equal role and shaped my future, though I would prefer one in which that was limited to diplomacy and not sovereignty. I would not want to find myself in a society in which an artifact created by my own species was in a position to shape my choices in life by its own decisions, still less give me orders. Had I had children, I would not want to leave them that world either.

  10. Graham says:

    Parenthetically, I am all too aware that we could meet sentient aliens out there more advanced or otherwise stronger than us.

    Then we would have to hope they were interested in performing the calculus I laid out, found something in our conception of the universe they could communicate with, and were willing to open up a diplomatic relationship in similar terms.

    Either way, we’d be at their mercy.

    If the galaxy were dominated by AIs, the designated ‘successors’ of one or more long-dead species, we’d have learned something horrifying about the universe and would have to make the best of it.

  11. Graham says:

    I’m quite sympathetic to some of your points further down in your comment.

    I can’t claim to be a Christian anymore, though I haven’t embraced abandoning it either. I’m in the grey zone, and had to discuss this very thing with three young men in a mall food court just last night. They had asked what I thought about Jesus. We had a lovely 15 minute chat. All very friendly.

    The inconsistencies of morality, politics and such things even in the best middlebrow SF are striking, as is the hypocrisy of pop culture in general. You hit the nail on the head comparing the reaction to Polanski with the reaction to Weinstein.

    I might embrace primarily secular, speciesist [I can't believe how many times I have used this ism now, for something that I consider to be normative and not worthy of being given an ism...], essentially defense/security arguments against creating AIs, but I’m not deaf to the argument from the monotheist religions that such an act of sub-creation would be a step too far and a moral blasphemy. I’d be open to similar arguments on cloning and genetic engineering, though I admit I like the idea of improving our long term disease resistance, physical and mental capacity, etc. even being aware these have dangers too.

  12. Kirk says:

    I’m of the opinion that the whole question curves back around to the issues of what we consider human, and how we treat other humans. If you casually turn off an AI you’ve created, and I’m talking about a real one that you can’t effectively distinguish from another human being, then what follows next is inevitably the same slippery slope that we observe with regards to euthanasia–At first, it’s a mercy, then it’s a convenience, and after that, it’s a requirement.

    Same issues arise with regard to treating them as so much property–I’d say that there ought to be no difference between how we treat human children and any AI we purposefully build. Not quite so sure about those that arise spontaneously, should that prove to be a thing. Certainly, to treat them as property would be wrong, but if they self-generate using resources we didn’t intend to be used for that, then what? The question becomes analogous to that of abortion, in very short order.

    Our cultural mindset at this time does not lend itself to answering these questions. Hell, we don’t even think about child conception and birth in terms of “theft of resources” by the child, so were we to apply that standard to self-generated AI, well… Hypocrisy of the highest order.

    You can’t treat AI as property or some sort of quasi-human entity, or that’s going to loop around and start screwing with our own situations. Say you require a certain level of intelligence/problem solving ability, plus the ability to pass a Turing test. Where does that leave a disabled human…? Perhaps you shut down an AI if it becomes troublesome; what do values and mores become, when that same attitude is applied to humans? Want to live in a world where a consensus about your utility from your neighbors and coworkers is all that keeps you from being “shut down” like so much biological hardware? Because, that’s where that road ends.

    Either AI are humans just like us, or we’re going to pay for that sin ourselves, inevitably and certainly. What’s applied to some self-aware set of silicon and software in a lab will eventually be applied to people, as well.

  13. Kirk says:

    There’s another issue, too: At some point, we’re likely to start sticking sophisticated electronics into our brains and nervous systems. At first, as prostheses, and then as augmentations and/or improvements. This is almost here, now, and we’d better start dealing with the repercussions and implications.

    Say you have a law against experimenting with AI, or one that says “AI not human”. I’d be against that, because what the hell do you do when you have a human whose brain is augmented or effectively mostly silicon? Is there a crossing point, with brain prosthesis, where you’re human on one side of the line, and an artificial product on the other? How are you going to define that?

    I actually think that the most likely route for AI to happen is going to be via some back-door approach like this, where we have cortical implants in our nervous systems, and the consciousness gradually moves over to silicon as bits and pieces are added on and augment the basic flesh. One day you go to bed mostly biological; the next? You cross some threshold, and discover that a lot of what makes up “you” is on silicon instead of meat. As well, what happens to the “digital ghost” left behind when the soft fleshy part dies? Are your silicon-based bits human, minus the meatbag? What if there’s a conscious residual left behind, on the cloud? Do we wipe that? Is that still you, in legal and moral terms?

    All this stuff needs to be thought about, and worked out in at least a tentative fashion before we start doing it, or some serious ugliness is gonna ensue when some jackass decides that Uncle Mike’s residual on the net needs to be expunged, ‘cos he knows things.

    And, of course, there’s also the question of “What the hell happens to digital memories from the dead…?”. Are those someone’s property? Do we treat the residuals, which might naturally be somewhat less capable than their organic predecessors, as wards of someone still living? Might they be like digital senescents, needing caretakers to ensure they’re not taken advantage of and essentially mind-raped for what they might know of the original’s affairs?

    As well, you can start to see the outlines of this, with how we’re treating the left-overs from various celebrities: If you can monetize mere imagery of Marilyn Monroe, what the hell happens when we can actually get access to the digital version of her mind, and could use her genome to clone her or her children? How long is it going to be before some ass-clown actually tries something like that?

    Hell, an outcome I’ve been expecting for years is that someone is going to start marketing eggs and sperm of celebrities to fans, so they can have their very own kid with Johnny Depp’s genome. You know damn good and well that that is only a few minor societal tweaks away… And, if you think that having celebrity answering machine stuff is weird, wait until you can buy a digital concierge with a genuine Johnny Depp “Pirates of the Caribbean” skin to it… And, have some subset of the actual Johnny Depp’s mind as a part of it all. For authenticity.

    That’s one path that treating AI as property is going to take us down. Potentially. And, it’s why I say that if it can belly up to the bar of “that which is human”, then by God, we ought to treat it the same way we’d want to be treated.

    It’s either that, or your digital residuals are going to be indentured for eternity to your descendants’ whims, should we start doing implants in a serious way. It’s years, maybe decades or centuries, away–But, there ain’t no time like the present to start thinking about this, and figuring out how to deal with it. I shudder to think what the current lot of moral pygmies will come up with, on their own.

  14. Sam J. says:

    We don’t have much time. Here’s a gif of computer power. This is not some wild techno-utopian dreamer’s estimate either. It’s a fairly well-known extrapolation of advances known to be possible. I say we’re fucked, done, over. The sole purpose for humanity is…to make silicon life. People barely know how to make an intelligent computer. How do you make one with empathy? A much more difficult problem.

    http://assets.motherjones.com/media/2013/05/LakeMichigan-Final3.gif

    I don’t know if I’ve recommended this here or not yet, but you should read it. It’s a PowerPoint from Dennis M. Bushnell, chief scientist at NASA Langley Research Center, about defense and technology. Don’t miss it; it’s short and to the point but very eye-opening.

    “Dennis M. Bushnell, Future Strategic Issues/Future Warfare [Circa 2025] ”

    https://archive.org/details/FutureStrategicIssuesFutureWarfareCirca2025

    Page 70 gives the computing power trend, and around 2025 we get human-level computation for $1000. 2025 is bad, but notice it says, “…By 2030, PC has collective computing power of a town full of human minds…”.
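
    To make the back-of-the-envelope arithmetic behind that kind of claim concrete, here is a minimal sketch of a simple doubling-time extrapolation of computation per $1,000, in Python. The doubling time, the starting year and figure, and the 1e16 operations-per-second brain estimate are all illustrative assumptions on my part, not numbers taken from Bushnell’s slides.

        # Illustrative sketch only: every constant below is an assumption, not a figure from the slides.
        HUMAN_BRAIN_OPS = 1e16                 # rough, commonly cited estimate of brain operations/sec
        DOUBLING_YEARS = 1.5                   # assumed Moore's-law-style doubling time
        START_YEAR, START_OPS = 2018, 4e14     # assumed ops/sec available for $1,000 at the start

        def ops_per_1000_dollars(year):
            # Exponential growth: capacity doubles every DOUBLING_YEARS years.
            return START_OPS * 2 ** ((year - START_YEAR) / DOUBLING_YEARS)

        for year in (2018, 2025, 2030):
            ratio = ops_per_1000_dollars(year) / HUMAN_BRAIN_OPS
            print(f"{year}: ~{ratio:.2f} human-brain-equivalents per $1,000")

    With those assumed numbers, $1,000 buys roughly one brain-equivalent around 2025 and about ten by 2030; the slide’s “town full of human minds” figure would require a steeper curve, so treat the constants as placeholders for whatever trend the slides actually plot.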

    I see all the time people saying that computers will never do this or that or be as smart as humans. [Supposedly smart people too. I wonder if they are not feeding me propaganda and know better.] I have no idea why. In narrow areas they already beat humans all the time. With enough computing power you could fill up the areas where they don’t.

    Maybe the computers will keep us around for some reason of their own that we can’t fathom.
