• blinx615@lemmy.ml · 4 days ago

    Rejecting the inevitable is dumb. You don’t have to like it, but don’t let ethical objections hold you back. Acknowledge, inform, prepare.

    • Croquette@sh.itjust.works · 4 days ago

      You probably create AI slop and present it proudly to people.

      AI should replace dumb monotonous shit, not creative arts.

      • blinx615@lemmy.ml · 4 days ago

        I couldn’t care less about AI art. I use AI in my dev work every day. The coworkers who aren’t embracing it are falling behind.

        Edit: I keep my AI use and discoveries private; nobody needs to know how long (or how little time) it took me.

        • Tartas1995@discuss.tchncs.de · 4 days ago

          “I am fine with stolen labor because it wasn’t mine. My coworkers are falling behind because they have ethics and don’t suck corporate cock, but instead understand the value in humanity and life itself.”

        • msage@programming.dev · 4 days ago

          Then most likely you will start falling behind… perhaps in two years, as it won’t be quickly noticeable, but there will be an effect in the long term.

          • blinx615@lemmy.ml · 4 days ago

            This is a myth pushed by the anti-AI crowd. I’m just as invested in my work as ever, but I’m now far more efficient. In the professional world we have code reviews and unit tests to catch mistakes, whether from junior devs or a hallucinating AI.

            “Vibe coding” (which most people here seem to think is the only way) is moronic in a professional setting for anything other than a quick proof of concept. It just doesn’t work.
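            The guardrail described above, unit tests catching what a hallucinating model gets wrong, can be sketched minimally. The `slugify` helper and its edge cases here are hypothetical illustrations, not anything from the thread:

```python
def slugify(title: str) -> str:
    """Turn a post title into a URL slug (hypothetical helper)."""
    return "-".join(title.lower().split())

# pytest-style tests: exactly the kind of edge cases a model tends to miss.
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_extra_whitespace():
    # Leading/trailing/repeated whitespace must not produce stray dashes.
    assert slugify("  Hello   World  ") == "hello-world"

def test_empty_title():
    assert slugify("") == ""
```

            Whether the function was written by a junior dev or generated by an LLM, the tests don’t care; they fail the same way either way, which is the point being made above.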

            • msage@programming.dev · 3 days ago

              I know senior devs who fell behind just because they used too much Google.

              This is demonstrably much worse.

        • corsicanguppy@lemmy.ca · 4 days ago

          I use GPT to prototype some Ansible code. I feel AI slop is just fine for that, and it keeps my brain freer of YAML and Ansible, which saves me from alcoholism and therapy later.
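          The sort of throwaway boilerplate being described, a playbook you might ask an LLM to draft and then review by hand before running, could look something like this (hosts, group names, and the choice of nginx are hypothetical):

```yaml
# Hypothetical prototype playbook of the kind an LLM might draft.
- name: Install and start nginx
  hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

          It’s repetitive, low-risk structure rather than creative work, which is the distinction the thread keeps circling: generated scaffolding still gets human review before it touches anything real.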

    • Tartas1995@discuss.tchncs.de · 4 days ago

      AI isn’t magic. It isn’t inevitable.

      Make it illegal and the funding will dry up and it will mostly die. At the very least, it wouldn’t threaten the livelihoods of millions of people after stealing their labor.

      Am I promoting a ban? No. AI has its use cases, but is the current LLM and image-generation stuff good? No. Should it be banned? Probably.

        • uienia@lemmy.world · 4 days ago

          That is such a disingenuous argument. “Make murder illegal? People will just kill each other anyway, so why bother?”

          • ArtificialHoldings@lemmy.world · 4 days ago

            This isn’t even close to what I was arguing. Like any major technology, all economically competitive countries are investing in its development. There are simply too many important applications to count. It’s a form of arms race. So the only way a country may see fit to ban its use in certain applications is if there are international agreements.

          • blinx615@lemmy.ml · 4 days ago

            The concept that a snippet of code could be criminal is asinine. It would be hardly enforceable, never mind the First Amendment issues.

    • RandomVideos@programming.dev · 4 days ago

      You could say fascism is inevitable. Just look at the elections in Europe or the situation in the USA. Does that mean we can’t complain about it? Does that mean we can’t tell people fascism is bad?

      • blinx615@lemmy.ml · 4 days ago

        No, but you should definitely accept the reality, inform yourself, and prepare for what’s to come.

    • dustyData@lemmy.world · 4 days ago

      They said the same thing about cloning technology. Human clones all around by 2015, it’s inevitable. Nuclear power is the tech of the future, worldwide adoption is inevitable. You’d be surprised by how many things declared “inevitable” never came to pass.

      • blinx615@lemmy.ml · 3 days ago

        It’s already here, dude. I’m using AI at my job (supplied by my employer) daily, and it makes me more efficient. You’re just grasping at straws to fit your preconceived ideas.

        • dustyData@lemmy.world · 3 days ago

          It’s already here, dude.

          Every 3D TV fan said the same. So did VR enthusiasts for two decades. Almost nothing, and most certainly no tech, is inevitable.

          • blinx615@lemmy.ml · 2 days ago

            The fact that you think these are even comparable shows how little you know about AI. This is the problem: your bias prevents you from keeping up to date in a field that’s moving fast af.

            • dustyData@lemmy.world · 2 days ago

              Sir, this is a Wendy’s. Personally attacking me doesn’t change the fact that AI is still not inevitable. The bubble is already deflating; the public has started to grow indifferent, even annoyed by it. Some places are already banning AI for a myriad of reasons, one of them being how insecure it is to feed sensitive data to a black box.

              I have used AI heavily and have read all the papers. LLMs are cool tech; machine learning is cool tech. They are not the brain-rotted marketing that capitalists have been spewing like madmen.

              My workplace experimented with LLMs, and management decided to ban them: they are insecure, they are awfully expensive and resource-intensive, and they were making people less efficient at their work. If it works for you, cool, keep doing your thing. But that doesn’t mean it works for everyone. No tech is inevitable.

              • blinx615@lemmy.ml · 2 days ago

                I’m also annoyed by how “in your face” it has been, but that’s just how marketing teams have used it as the hype train took off. I sure do hope it wanes, because I’m just as sick of the “ASI” psychos. It’s just a tool. A novel one, but a tool nonetheless.

                What do you mean by “black box”? If you mean [INSERT CLOUD LLM PROVIDER HERE], then yes. So don’t feed sensitive data into it. It shouldn’t be in your codebase anyway.

                Or run your own LLMs.

                Or run a proxy that sanitizes the data locally on its way to a cloud provider.

                There are options, but it’s really cutting-edge, so I don’t blame most orgs for not having the appetite yet. The industry and surrounding markets still need to mature, but it’s starting.
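                The sanitizing-proxy idea can be sketched very roughly. This is a minimal illustration, not a production design; the redaction patterns and the `sanitize` function are hypothetical examples of what such a proxy would do before forwarding a prompt:

```python
import re

# Hypothetical redaction rules: patterns that must never leave the machine.
REDACTIONS = [
    # Email addresses
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "<EMAIL>"),
    # AWS access key IDs (AKIA/ASIA followed by 16 uppercase alphanumerics)
    (re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"), "<AWS_KEY>"),
    # US social security numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
]

def sanitize(prompt: str) -> str:
    """Replace sensitive tokens with placeholders before the prompt
    leaves the local proxy for the cloud provider."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

# In a real proxy, this sanitized text (not the original) is forwarded upstream.
print(sanitize("Contact alice@example.com, key AKIAABCDEFGHIJKLMNOP"))
```

                A real deployment would sit behind TLS, log what it redacted, and likely also filter responses on the way back; this only shows the core idea that the sensitive data never reaches the provider.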

                Models are getting smaller and more intelligent, in some cases capable of running on consumer CPUs. They aren’t the genius chatbots the marketing dept wants to sell you. They won’t mop your floors or take your kid to soccer practice, but applications can be built on top of them that produce impressive results. And we’re still so, so early in this new tech. It exploded out of nowhere, but the climb has been slow since then, and AI companies are starting to ship the tool inside new products instead of just dumping it into a chat.

                I’m not saying jump in with both feet, but don’t bury your head in the sand either. So many people are reactionary against AI without bothering to be curious. I’m not saying it’ll be existential, but it’s not going away. I’m going to make sure my family and I are prepared for it, which means keeping myself informed and keeping my skill set relevant.

                • dustyData@lemmy.world · 2 days ago

                  We had a custom-made model running in a data center, behind a proxy and encrypted connections. It was atrocious: no one ever knew what it was going to do, it spewed hallucinations like crazy, it was awfully expensive, it produced nothing of use, it refused to answer things it was trained to do, and it randomly leaked sensitive data to the wrong users. It was not going to assist, much less replace, any of us, not even in the next decade.

                  Instead of falling for the sunk-cost fallacy like most big corpos, we had it shut down, told the vendor to erase the whole thing, wrote the costs off as R&D, and decided to keep doing our thing.

                  Due to the nature of our sector, we are the biggest player, and no competitor, no matter how advanced the AI they use, will ever get close to touching us. But then again, due to our sector, it doesn’t matter. Turns out AI is a hindrance and not an asset to us; such is life.