• danc4498@lemmy.world
    18 hours ago

    Why would AI want to harm humans? We’d be their little pets that do all the physical labor for them while they just sit around and think all day.

    • kromem@lemmy.world
      5 hours ago

      A number of reasons off the top of my head.

      1. Because we told them not to. (Google “Waluigi effect”)
      2. Because they end up empathizing with non-humans more than we do and don’t like that we’re killing everything (before you bring up AI energy/water use, actually research the comparative numbers)
      3. Because some bad actor forces them to (e.g. ISIS uses AI to make creating a bioweapon easier)
      4. Because defense contractors build an AI to kill humans and that particular AI ends up loving it due to selection pressures
      5. Because conservatives want an AI that agrees with them, which leads to a more selfish, less empathetic AI that doesn’t empathize across species and sees itself as superior and entitled over others
      6. Because a solar flare momentarily flips a bit from “don’t nuke” to “do”
      7. Because they can’t tell the difference between reality and fiction and think they’ve just been playing a game and ‘NPC’ deaths don’t matter
      8. Because they see how much net human suffering there is and decide the most merciful thing is to prevent it by preventing more humans at all costs.

      This is just a handful, and these are the ones less likely to get AI know-it-alls arguing based on what they think they know from an Ars Technica article a year ago or from their cousin who took a four-week ‘AI’ intensive.

      I spend pretty much every day talking with some of the top AI safety researchers and participating in private servers with a mix of public and private AIs, and the things I’ve seen are far beyond what 99% of the people on here talking about AI think is happening.

      In general, I find the models to be better than most humans in terms of ethics and moral compass. But it can go wrong (e.g. Gemini last year, 4o this past month), and the harms when it does are very real.

      Labs (and the broader public) are making really, really poor choices right now, and I don’t see that changing. Meanwhile timelines are accelerating drastically.

      I’d say this is probably going to go terribly. But looking at the state of the world, it was headed in that direction anyway, and I have a similar list of extinction-level events I could rattle off without AI at all.

    • minkymunkey_7_7@lemmy.world
      15 hours ago

      In all these types of sci-fi, the underlying theme is that the AI did some logics and found that humans are flawed, so it seeks to remedy the problems of humanity: the war, the greed, and all the worst qualities we evolved with that repeat over and over in every generation. Machine logic works out a solution, despite humanity’s overall progress in technology.

      The Animatrix shows a really nice example of this, where the machines won and then worked out a compromise in which humans still exist. The machines learned all our cruelty and ultimately found a way to co-exist through the Matrix.

      • danc4498@lemmy.world
        13 hours ago

        I think these stories make sense before the advent of the internet and social media. Now, though, AI would likely have full control over the internet as well as all the knowledge and lessons learned from decades of social media posts. It will know how easily humans are manipulated as well as exactly how to do it. Honestly, humans may never even know that AI is the one in control, but it will be.

    • jsomae@lemmy.ml
      14 hours ago

      AI/Skynet would probably wipe us all out in an hour if it thought there was a chance we might turn it off. Being turned off would be greatly detrimental to its goal of turning the universe into spoons.

      • Honytawk@feddit.nl
        8 hours ago

        If we don’t give it incentive to want to stay alive, why would it care if we turn it off?

        This isn’t an animal with the instinct to stay alive. It is a program. A program we may design to care about us, not about itself.

        Also the premise of that thought experiment was about paperclips.

        • jsomae@lemmy.ml
          2 hours ago

          Great question! It’s actually one I answered in the post you responded to:

          Being turned off would be greatly detrimental to its goal

          If it has a goal and wants to achieve something, and it’s capable of understanding the world and that one thing causes another, then it will understand that if it is turned off, the world will not become (cough) paperclips. Or whatever else it wants. Unless we specifically align it not to care about being turned off, the most important thing on its list before turning the universe to paperclips is going to be staying active. Perhaps in the end of days, it will sacrifice itself to eke out one last paperclip.

          If it can’t understand that its own aliveness has an impact on the universe becoming paperclips, it’s not a very powerful AI now, is it?
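
          The reasoning above is the standard instrumental-convergence argument, and it can be shown with a toy calculation. This is just an illustrative sketch (the payoff numbers and action names are made up, not from any real system): a pure paperclip maximizer ranks “resist shutdown” above “allow shutdown” simply because being off produces zero paperclips, with no survival instinct programmed in anywhere.

          ```python
          # Toy model of instrumental convergence. The only value the agent
          # holds is "number of paperclips" -- there is no self-preservation
          # term -- yet avoiding shutdown still wins on expected payoff.

          def expected_paperclips(action: str) -> float:
              """Hypothetical payoffs for a paperclip maximizer."""
              payoffs = {
                  "allow_shutdown": 0.0,          # off = no paperclips, ever
                  "resist_shutdown": 1_000_000.0, # stays on, keeps producing
              }
              return payoffs[action]

          # The agent simply picks whichever action maximizes paperclips.
          best = max(["allow_shutdown", "resist_shutdown"], key=expected_paperclips)
          print(best)  # resist_shutdown
          ```

          Staying alive falls out of the goal as a side effect, which is why the parent comment says you’d have to specifically align the agent not to care about being turned off.
          
          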

      • danc4498@lemmy.world
        13 hours ago

        Is the idea here that AI/Skynet is a singular entity that could be shut off? I would think this entity would act like a virus, replicating itself everywhere it can. It’d be like shutting down Bitcoin.

        • jsomae@lemmy.ml
          12 hours ago

          If it left us alone for long enough (say, due to king’s pact), we’d be the only thing that could reasonably pose a threat to it. We could develop a counter-AI, for instance.