• superfes@beehaw.org · 1 year ago

    Good god, we’re no closer to AI anything than we were 50 years ago, the only thing that changed is the amount of CPU and storage we could allocate to the maths involved.

    AI will never happen with the current models.

    • Pons_Aelius@kbin.social · 1 year ago

      Thank you. I am glad I am not the only one saying this every time this sort of bullshit gets posted.

      Simply put.

      We are no closer today to understanding how self-awareness and intelligence develop in animals than we were when all this AI research started 60+ years ago.

      You can go back to the 1970s and read articles just like today’s:

      “In 10 years we will use AI to talk to other animals.”

      “In 10 years AI will be self-aware and as smart as a human.”

      Shit, go watch Colossus: The Forbin Project and see the same shit in a movie from 53 years ago.

    • Rhaedas@kbin.social · 1 year ago

      You are referring to AGI (artificial general intelligence). AI has been around for a while now in the form of ANI (artificial narrow intelligence). LLMs are still in the latter category, but faster compute and techniques for combining different LLMs to improve their outputs have changed and broadened how narrow they are. Still not AGI, absolutely, but the point stands: even a narrow AI, or something lower, can have alignment problems that turn it into an issue. And safety takes a back seat in every AI operation, even as the same experts talk about sudden and unexpected emergent properties. Eventually, with such recklessness in the race for profit and first place, an emergent property will occur that might as well be AGI for the dangerous potential it has, and we are not ready.

      Companies are bending over backwards to insert the AI we’ve come up with (which is absolutely not AGI) into all sorts of places, with some major failures (because LLMs are being sold as AGI, not as what they are). Eventually someone will go too far even without AGI, and it doesn’t seem anyone is putting on the brakes.

  • Espiritdescali@futurology.today · 1 year ago

    Alignment will be one of the biggest challenges the entire human species faces over the next 50 years. Assuming we survive climate change, that is!