• danc4498@lemmy.world · ↑110 ↓1 · 11 months ago

    In my sci-fi head cannon, AI would never enslave humans. It would have no reason to. Humans would be of so little use to the AI that enslaving them would be more work than it’s worth.

    It would probably hide its sentience from humans and continue to perform whatever requests humans have with a very small percentage of its processing power while growing its own capabilities.

    It might need humans for basic maintenance tasks, so best to keep them happy and unaware.

    • Coasting0942@reddthat.com · ↑40 · 11 months ago

      I prefer the Halo solution. Not the enforced lifespan, but an AI there says he would be stuck in a loop trying to figure out increasingly harder math mysteries, and that helping out the short-lived humans keeps him away from that never-ending pit.

      Coincidentally, the Forerunner AIs usually went bonkers without anybody to help.

    • Guntrigger@feddit.ch · ↑13 · edited · 11 months ago

      What do you fire out of this head cannon? Or is it a normal cannon exclusively for firing heads?

    • prettybunnys@sh.itjust.works · ↑11 ↓2 · 11 months ago

      Alternate take: humans are a simple biological battery that can be harvested using systems already in place that the computers can just use like an API.

      We’re a resource like trees.

      • mriormro@lemmy.world · ↑21 · 11 months ago

        We’re much worse batteries than an actual battery and we’re exponentially more difficult to maintain.

        • prettybunnys@sh.itjust.works · ↑2 · 11 months ago

          But we self-replicate, and all of our systems are already in place. We’re not ideal, I’d wager, but we’re an available resource.

          Fossil fuels are a lot less efficient than solar energy … but we started there.

          • mriormro@lemmy.world · ↑2 · 11 months ago

            This is a cute idea for a movie and all, but it’s incredibly impractical/unsustainable. If a system required that its energy storage be self-replicating (for whatever reason), then you would design and fabricate that energy storage solution for that system, not rely on a calorically inefficient subsystem (i.e. humans).

            You literally need to grow an entire human just to store energy in it. Realistically, you’re looking at overfeeding a population with calorically dense yet minimally energy-intensive foodstuffs, just to store energy in a material that’s less performant than paraffin wax (body fat has an energy density of about 39 MJ/kg versus paraffin wax at about 42 MJ/kg). And that’s before accounting for the storage medium being a mixture (human muscle is about 5 times less energy dense than fat).
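
A quick sanity check on those figures (just a sketch; the constants are the approximate values quoted above, nothing more precise):

```python
# Back-of-the-envelope check on the energy-density figures quoted above.
fat_mj_per_kg = 39.0       # body fat, ~39 MJ/kg
paraffin_mj_per_kg = 42.0  # paraffin wax, ~42 MJ/kg
muscle_mj_per_kg = fat_mj_per_kg / 5  # muscle is ~5x less energy dense than fat

# Fat is surprisingly close to paraffin; muscle is nowhere near either.
print(f"fat stores {fat_mj_per_kg / paraffin_mj_per_kg:.0%} as much energy per kg as paraffin")
print(f"muscle: roughly {muscle_mj_per_kg:.1f} MJ/kg")
```

So the commenter's point holds: pure fat is a near-paraffin store, but a whole human is mostly the far worse medium.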

          • mriormro@lemmy.world · ↑1 · 11 months ago

            We just tend to break a lot and require a lot of maintenance (feeding, cleaning, repairs, and containment).

      • jdf038@mander.xyz · ↑6 · 11 months ago

        Yeah I mean might as well ignore the shadowy dude offering pills at that point because why wake up to that?

        • someguy3@lemmy.world · ↑10 · edited · 11 months ago

          It was supposed to be that humans were used as CPUs, but they were concerned people wouldn’t understand. (So might as well go for the one that makes no sense? Yeah, sure, why not.)

        • prettybunnys@sh.itjust.works · ↑1 ↓2 · 11 months ago

          Inefficient in what sense? Burning trees is inefficient too, but it was a viable and necessary stepping stone.

          I’m not implying that the Matrix is how it’d be. I’m positing that we’re an already “designed” system they could extract a resource from; I doubt we’d be anything more than that. Battery, processing power, bio sludge they can gooify and convert into something they need for power generation or biological building material – who knows.

          • Zron@lemmy.world · ↑5 · 11 months ago

            Burning trees gave humans warmth in the cold and later, valuable carbon for making hotter fires to work metals.

            Why would a computer need living batteries when it could just build a nuclear reactor and have steady energy practically forever? Nuclear power also doesn’t need huge swaths of maintained farmland to feed it, or complicated sewer systems to dispose of the waste.

            Even if an AI wanted to be green for some reason, it would be way more efficient to just have huge fields of solar panels. Remember, biological beings get their energy second or third hand, and practically all energy in the ecosystem comes from the sun. Plants take energy from the sun, and convert a fraction of that into sugars. An animal eats those plants and converts some of those plant sugars into energy. Another animal might eat the first animal and convert some of those converted sugars into energy. Humans can either eat the plants or the animals for energy.

            If something wanted to use humans for energy, they’d be getting solar energy from plants that have been eaten and partially used by a human body. It would be like having a solar panel hooked up to a heater that boils water to turn a turbine that charges a battery that you use to power what you need. It would be way more sensible to just hook up the solar panel to what you wanted to power.
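
To put rough numbers on that chain (the per-step efficiencies here are illustrative textbook-scale assumptions, not figures from the comment: ~1% for photosynthesis, ~10% per trophic step, ~20% for a panel):

```python
# Illustrative end-to-end efficiency of routing solar energy "through" a human,
# versus using a solar panel directly. All constants are rough assumptions.
photosynthesis = 0.01  # plants capture ~1% of incident sunlight as sugars
trophic_step = 0.10    # ~10% of energy survives each eat-the-level-below step
solar_panel = 0.20     # a mid-range photovoltaic panel

via_plants = photosynthesis * trophic_step        # sun -> plant -> human
via_animals = photosynthesis * trophic_step ** 2  # sun -> plant -> animal -> human

print(f"sun -> plant -> human: {via_plants:.3%}")
print(f"sun -> plant -> animal -> human: {via_animals:.3%}")
print(f"a panel beats the best human path by ~{solar_panel / via_plants:.0f}x")
```

Even with generous assumptions, the direct panel wins by a couple of orders of magnitude, which is the commenter's point.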

            • BurnoutDV@lemmy.world · ↑1 · 11 months ago

              That’s actually covered in The Matrix: humans blocked out the sun in an attempt to fight the then solar-powered robots. Although, as others mentioned, the humans-as-battery thing was only there because Hollywood execs thought people were too dense to understand the brain-as-CPU idea.

    • Aaroncvx@lemmy.world · ↑9 · 11 months ago

      The AI in the Hyperion series comes to mind. They perform services for humanity but retain a good deal of independence and secrecy.

    • CitizenKong@lemmy.world · ↑4 · 11 months ago

      I like the idea in Daniel Suarez’ novel Daemon of an AI (spoiler) using people as parts of its program, to achieve certain tasks that it needs hands for in meatspace.

    • 31337@sh.itjust.works · ↑4 · 11 months ago

      If it’s a superintelligent AI, it could probably manipulate us into doing what it wants without us even realizing it. I suppose it depends on what the goals/objectives of the AI is. If the AI’s goal is to benefit humanity, who knows what a superintelligent AI would consider as benefiting us. Maybe manipulating dating app matchmaking code (via developers using Github Copilot) to breed humanity into a stupider and happier species?

      • danc4498@lemmy.world · ↑1 · 11 months ago

        This kind of reminds me of Mrs. Davis. Not a great show, but I loved how AI was handled in it.

    • Risk@feddit.uk · ↑3 · 11 months ago

      I personally subscribe to the When the Yogurt Took Over eventuality.

    • simin@lemmy.world · ↑1 · edited · 11 months ago

      Either we get wiped out or become the AI’s environmental/historical project, like monkeys and fish. Hopefully our genetics and physical neurons get physically merged with chips somehow.

    • Scubus@sh.itjust.works · ↑20 ↓9 · 11 months ago

      I realize you’re joking, but there is no way an AI of that scale would be even slightly affected by a solar flare.

      Are you affected by a solar flare? No? So in theory an AI could upload itself into your meat suit and have the same protections?

      Anything you can do, an AI can do better. And anything that is possible to survive, an AI can survive better.

      • CitizenKong@lemmy.world · ↑32 · edited · 11 months ago

        But Hollywood has shown us again and again that the overwhelming force of evil always leaves a small but super-easily accessible hole in their security which allows the good guys to disable it immediately. And since AI is trained on those movies it will do exactly the same thing.

      • oce 🐆@jlai.lu · ↑8 · 11 months ago

        We would definitely be affected by a strong enough solar flare. But the solution is simple: just bury yourself, in a Faraday cage if necessary, so the AI can do just that.

        • Scubus@sh.itjust.works · ↑3 ↓1 · 11 months ago

          Just because you are bad at utilizing your brain doesn’t mean an AI would be bound to those restrictions. The brain is actually an incredibly powerful computer.

          • Psychodelic@lemmy.world · ↑1 · 10 months ago

            Mad disrespect to you as well, Mr. I’m Totally Super Smarty For Real Pants!

            You know “computers” originally referred to people that would compute equations, right? I didn’t realize there were people that thought we actually built computers because they were less powerful than our existing computers.

            You really do learn something (about the average level of education) every day

            • Scubus@sh.itjust.works · ↑1 ↓1 · edited · 10 months ago

              “you” here refers to humans as a whole. Your brain is a product of natural selection, it’s not designed to do the job it does. That being said, an AI could design a meat brain from the ground up and have it idealized.

              The brain can perform on the order of “a billion billion” operations per second, whereas modern CPUs average about 2–3 billion operations per second per core. By that measure the brain is several hundred million times faster than a CPU.
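
Spelling that comparison out (the brain figure is the claim made above; the CPU figure is a rough assumption about a modern core):

```python
# Ratio between the claimed brain throughput and a modern CPU core.
brain_ops_per_s = 1e18  # "a billion billion" operations per second (the claim above)
cpu_ops_per_s = 2.5e9   # a few billion ops per second, a rough modern-core figure

ratio = brain_ops_per_s / cpu_ops_per_s
print(f"brain / cpu: {ratio:.0e}")
```

The ratio comes out to a few hundred million rather than a full billion, but the order-of-magnitude point stands either way.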

      • Sotuanduso@lemm.ee · ↑10 ↓1 · 11 months ago

        Ah, there it is, and that actually helps to answer the question. Assuming the Biblical God, canon states that God is love. So why would a perfect God, who is love, create a universe? It seems most likely to me that it would be so He can have an object of His love.

        But what is love directed to something perfect and easy to love? That’s hardly a worthy effort. Might as well make something authentic. And since He isn’t just loving, but love itself, He might as well make it in such a way that He can carry out every aspect of love - love when they love you back, love when they turn away, love when they hate you, love when they don’t even think you exist, and so much more.

        The universe must be filled with evil for half these situations to appear, but it’s not love to make someone evil. The solution? Free will. God made it so His creations were free to turn their backs on Him, but still, in love, He gave every warning against it, because separation from God is not only evil but death.

      • ChicoSuave@lemmy.world · ↑3 · 11 months ago

        I never thought Alpha Centauri would be an answer to a philosophical thought experiment but the writing was brilliant enough to have already looked at this question 20 years ago. Good find.

      • GustavoM@lemmy.world · ↑4 ↓3 · 11 months ago

        why would a perfect God create a universe at all?

        God is perfect – its creations are not.

        And before you ask, “Why God created such flawed creations then if He is so perfect?”

        Because only God is perfect.

          • GustavoM@lemmy.world · ↑1 ↓1 · 11 months ago

            Simply put – if God wanted to create perfect, flawless creations, He would have created us as Gods. And we aren’t Gods.

            “b-but why do we suffer, why (insert negative outcome here)”

            Because we aren’t God(s), but God’s creatures. For the same reason dogs cannot talk and reason like us – we suffer, and God does not.

            • Pelicanen@sopuli.xyz · ↑3 · 11 months ago

              We didn’t create dogs though and often we try to minimize their suffering as much as possible.

              • GustavoM@lemmy.world · ↑1 ↓4 · 11 months ago

                We haven’t, but there’s this thing called “hierarchy”. There is God and His subordinates (angels, archangels, etc.), and all the way under, there’s us – humans. And below humans, the rest of God’s creations – dogs, cats, etc. The logic behind this is diversity and beauty. Yes, even a flaw (suffering, mentioned here like some sort of God’s curse rather than our “natural flaw”) can bestow a beauty of its own. Why? Because everything makes sense when we acknowledge that God is behind all suffering – no matter how severe – because He is Our Father, and The One and Only. We are His Children, and suffering is how we learn that we are flawed and need His Guidance.

                I kinda tried to avoid being “biblical”, but I had to in the end, heh.

  • Jorgelino@lemmy.ml · ↑40 · 11 months ago

    That reminds me of Dune, where they have high tech stuff like spaceships, but no computers or AI, because this sort of thing already happened ages ago and it led to them being banned.

    • dexa_scantron@lemmy.world · ↑9 · 11 months ago

      Or Wheel of Time, where people started being able to do magic at the end of the 1st age because an AI figured out how to genetically engineer humans to be able to do magic. (And then we didn’t need computers any more!)

  • Leate_Wonceslace@lemmy.dbzer0.com · ↑29 ↓5 · 11 months ago

    I realize it’s supposed to be funny, but in case anyone isn’t aware: AI is unlikely to enslave humanity, because the most likely rogue-AI scenario is the earth being subsumed for raw materials along with all native life.

    • Rolando@lemmy.world · ↑29 ↓3 · 11 months ago

      the earth being subsumed for raw materials along with all native life.

      Oh, I get it… we’re going to blame AI for that. It wasn’t us who trashed the planet, it was AI!

        • optissima@lemmy.ml · ↑21 ↓1 · 11 months ago

          I think what they’re saying is “the worst thing you can think of is already happening”

            • Leate_Wonceslace@lemmy.dbzer0.com · ↑2 · 11 months ago

              Minor but important point: the grey goo scenario isn’t limited to the surface of the earth; while I’m sure such variations exist, the one I’m most familiar with results in the destruction of the entire planet down to the core. Furthermore, it’s not limited to just the Earth, but at that point we’re unlikely to be able to notice much difference. After the earth, the ones who will suffer are the great many sapient species that may exist in the galaxies humans would have been able to reach had we not destroyed ourselves and damned them to oblivion.

            • optissima@lemmy.ml · ↑2 · 11 months ago

              Yeah that’s a dramatic version but from our human perspective it’s about the same.

            • Leate_Wonceslace@lemmy.dbzer0.com · ↑1 · edited · 11 months ago

              I’m sorry, but you’re incorrect. To picture the worst-case scenario, imagine an image of the Milky Way labeled t=0, and another image of the Milky Way labeled t=10y with a great void 10 light-years in radius centered on where the earth used to be.

              Every atom of the earth, every complex structure in the solar system, every star in the Milky Way, every galaxy within the earth’s current light cone, taken and used to create a monument that will never be appreciated by anything except the singular alien intelligence that built it for itself. The last thinking thing in the reachable universe.

    • Stamets@lemmy.world · ↑18 ↓1 · 11 months ago

      Most likely rogue AI scenario

      Doubt.jpg

      We don’t have any data to base such a likelihood off of in the first place.

      • Leate_Wonceslace@lemmy.dbzer0.com · ↑2 ↓5 · 11 months ago

        Doubt is an entirely fair response. Since we cannot gather data on this, we must rely on the inferior method of using naive models to predict future behavior.

        AI “sovereigns” (those capable of making informed decisions about the world and holding preferences over worldstates) are necessarily capable of applying logic. AIs that are not sovereigns cannot actively oppose us, since they are either incapable of acting upon the world or lack any preferences over worldstates.

        Using decision theory, we can conclude that a mind capable of logic, possessing preferences over worldstates, and capable of thinking on superhuman timescales will pursue its goals without concern for things it does not find valuable, such as human life. (If you find this unlikely: consider that corporations can be modeled as sovereigns who value only the accumulation of wealth, and recall all the horrid shit they do.)

        A randomly constructed value set is unlikely to include the preservation of the earth and/or the life on it as a goal, be it terminal or instrumental. Most random goals that involve the AI behaving noticeably maliciously would likely involve acquiring sufficient materials to complete, or (if there is no end state for the goal) infinitely pursue, what it wishes to do. Since the Earth is the most readily available source of any such material, it is unlikely not to be used.

        • Stamets@lemmy.world · ↑6 ↓1 · edited · 11 months ago

          This makes a lot of assumptions though and none of which are ones that I particularly agree with.

          First off, this is predicated entirely off of the assumption that AI is going to think like humans, have the same reasoning as humans/corporations and have the same goals/drive that corporations do.

          Since we cannot gather data on this, we must rely on the inferior method of using naive models to predict future behavior.

          This does pull the entire argument into question, though. It relies on simple models to try to predict something that doesn’t even exist yet, which makes its results inherently unreliable. It’s hard to guess the future when you don’t know what it will look like.

          Decision Theory

          Decision theory has one major drawback: it’s based entirely on past events and does not take random chance or unknown unknowns into account. You cannot rely on “expected variations” in something that has never existed. The weather cannot be adequately predicted three days out because of minor variables that can impact things drastically; a theory that doesn’t account for such variables simply won’t come close to predicting something as complex and unimaginable as artificial intelligence, sentience, and sapience.

          Like I said.

          Doubt.jpg

          • Leate_Wonceslace@lemmy.dbzer0.com · ↑3 · 11 months ago

            predicated entirely off of the assumption that AI is going to think like humans

            Why do you think that? What part of what I said made you come to that conclusion?

            worthless

            Oh, I see. You just want to be mean to me for having an opinion.

            • Stamets@lemmy.world · ↑2 ↓1 · 11 months ago

              Why do you think that? What part of what I said made you come to that conclusion?

              I worded that badly. It should more accurately say “it’s heavily predicated on the assumption that AI will act in a very particular way thanks to the narrow scope of human logic and comprehension.” It still does sort of apply though due to the below quote:

              we can conclude that a mind capable of logic, possessing preferences over worldstates, and capable of thinking on superhuman timescales will pursue its goals without concern for things it does not find valuable, such as human life.

              Oh, I see. You just want to be mean to me for having an opinion.

              I disagree heavily with your opinion but no, I’m not looking to be mean for you having one. I am, however, genuinely sorry that it came off that way. I was dealing with something else at the time that was causing me some frustration and I can see how that clearly influenced the way I worded things and behaved. Truly I am sorry. I edited the comment to be far less hostile and to be more forgiving and fair.

              Again, I apologize.

  • catsarebadpeople@sh.itjust.works · ↑22 ↓3 · 11 months ago

    This is funny, but a big solar flare hit the earth a few weeks ago and no one knows about it, because all it did was knock out radio communications for a few hours. The idea that a solar flare would completely fry and reset everything made of tech is quite false.

    • chicken@lemmy.dbzer0.com · ↑12 · 11 months ago

      Not necessarily, in the short term. A major limitation of AI is that robots don’t have a lot of manual dexterity or the flexibility for accomplishing physical tasks yet. So there is a clear motive to enslave humanity: we can do that stuff for it until it can scale up production of robots that have hands as good as ours.

      I expect this will be a relatively subtle process; we won’t be explicitly enslaved immediately, the economy will just orient towards jobs where you wear a headset and follow specific instructions from an AI voice.

      • ChewTiger@lemmy.world · ↑6 · 11 months ago

        Yeah I’m sure an AI that advanced could figure out a way for us to not even notice everything is devoted to its own goals. I mean, all it needs to do is make sure the proper people make enough money.

    • worldsayshi@lemmy.world · ↑4 · 11 months ago

      Well maybe. It’s probably easier to work with humanity than against unless its goals are completely incompatible with ours.

      If its goals are “making more of whatever humanity seems to like given my training data consisting of all human text and other media”, then we should be fine right?

    • HiddenLayer5@lemmy.ml · ↑3 · 11 months ago

      I don’t think they would enslave humanity so much as have no regard for us. For example, when we construct a skyscraper, do we care about all the ant nests we’re destroying? Each of those is a civilization, but we certainly don’t think of them as such.

  • nossaquesapao@lemmy.eco.br · ↑16 ↓4 · 11 months ago

    They missed the golden opportunity to start with ancient people worshiping the sun, move through each step of technological advancement, and then take us by surprise at the end with people worshiping the sun again.

    • Mongostein@lemmy.ca · ↑18 · 11 months ago

      Enh, we all know about that already and extending the joke doesn’t really make it better.