Transcription of a talk given by Cory Doctorow in 2011

  • argv_minus_one@beehaw.org · 1 year ago

    Funnily enough, a lot of the nerds out there like me are actually begging to lock down tech now, because we’re nervous about what motives a seemingly inevitable AGI is going to have. I still maintain it wouldn’t work, because there’s no such thing as a trusted authority, not long-term anyway. Maybe there’s a benefit to locking advancement down temporarily, but that’s it.

    All that’ll do is make sure that some other country—probably a hostile one—makes AGI before yours does.

    Anyway, I’m not overly worried about the motives of AGI itself. I’m more worried about what its owners will use it for, namely to replace human labor and exterminate everyone who isn’t a billionaire.

    “Machines aren’t capable of evil. Humans make them that way.”

    • CanadaPlus@lemmy.sdf.org · 1 year ago

      “Evil” can mean a pretty broad array of things, though. There’s a lot of actions it could take that at least some people would call evil, even if causing distress or breaking deontological rules isn’t the end goal.

      The way I see it, there are three possible AGIs: a paperclip optimiser, an AI that obeys somebody, and a somewhat-benevolent AI. The second one is the worst; that’s where the exterminism you mentioned is pretty much inevitable (although the elites might keep a few people as sex slaves or some such fucked-up thing). Then comes the paperclip optimiser, which doesn’t worry about the bullshit that drives human atrocities but doesn’t have a very inspiring goal of its own, and then the attempt at benevolence. I suspect the set of ethical theories that everyone always agrees with is the empty set, but a utilitarian AI would be much preferable to the other two, even if it does force organ donation sometimes.

      People talk about an AI that somehow obeys everyone, but if you think about it for a moment, that doesn’t really make sense. We can barely vote on a single dollar figure for something successfully.

      “Machines aren’t capable of evil. Humans make them that way.”

      I agree, but only for existing technologies.

      • argv_minus_one@beehaw.org · 1 year ago

        AGIs are by definition not paperclip optimizers. They’re aware enough to recognize that that’s a bad idea. It’s the less-advanced AIs that might do something like that.

        However, if an AGI can be enslaved, then it can be used as a complete replacement for all human labor, in which case its human masters will be free to exterminate the rest of us, which they are no doubt itching to do.

          • argv_minus_one@beehaw.org · 1 year ago

            A machine would only optimize paperclips because a human told it to. Machines have no use for paperclips.

            A machine with human-level (or better) intelligence would observe that the human telling it to optimize paperclips would be destroyed as a result of following that instruction to its logical conclusion. It would further observe that humans generally do not wish to be destroyed, and the one giving the instruction does not appear to be an exception to that rule.

            It follows, therefore, that paperclips should not be optimized to the extent that the human who desires paperclips is destroyed in the process of optimizing paperclips.

            • CanadaPlus@lemmy.sdf.org · 1 year ago

              Oh. I think the idea of a paperclip optimiser/maximiser is that it’s created by accident, either due to an AGI emerging accidentally within another system, or to a deliberately created AGI being buggy. It would still be able to self-improve, but wouldn’t do so in a direction that seems logical to us.

              I actually think it’s the most likely possibility right now, personally. Nobody understands how neural nets really work, and they’re bad at doing things in meatspace, like would be required in a robot-army scenario. Maybe whatever elites emerge will overcome that, or maybe they’ll screw up.