But the explanation and Ramirez’s promise to educate himself on the use of AI wasn’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.

Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.

  • Ulrich@feddit.org · 23 hours ago

    It knows the answer it’s giving you is wrong, and it will even say as much. I’d consider that intent.

    • sugar_in_your_tea@sh.itjust.works · 23 hours ago

      Technically it’s not, because the LLM doesn’t decide to do anything, it just generates an answer based on a mixture of the input and the training data, plus some randomness.

      That said, I think it makes sense to say it is lying if the text it generates convinces the user of something false.
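      A toy sketch of that “input plus training data plus randomness” point (hypothetical, hard-coded probabilities, not a real model): at each step an LLM assigns a probability to every possible next token and one is drawn at random, so the same prompt can yield different answers.

```python
import random

# Hypothetical next-token probabilities standing in for what a trained
# model would compute; a real LLM scores ~100k tokens at every step.
next_token_probs = {
    "Sorry": 0.5,
    "No": 0.3,
    "Actually": 0.2,
}

def sample_token(probs, rng):
    """Draw one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_token(next_token_probs, rng) for _ in range(1000)]

# Over many draws the frequencies track the weights, but any single
# draw can land on any token -- there is no decision, just dice.
print(draws.count("Sorry") / 1000)
```

      Nothing here “decides” anything; swap the weights and the behaviour changes with them.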

      • Ulrich@feddit.org · 23 hours ago

        it just generates an answer based on a mixture of the input and the training data, plus some randomness.

        And is that different from the way you make decisions, fundamentally?

        • sugar_in_your_tea@sh.itjust.works · 23 hours ago

          Idk, that’s still an area of active research. I certainly think it’s very different, since my understanding is that human thought is based on concepts rather than on denoising noise or whatever it is LLMs do.

          My understanding is that they’re fundamentally different processes, but since we don’t understand brains perfectly, maybe we happened on an accurate model. Probably not, but maybe.

    • ggppjj@lemmy.world · 23 hours ago

      It is incapable of knowledge; it is math, and what it says is determined by what is fed into it. If it admits to lying, that’s because it was trained on texts that admit to lying, and the math says the most likely continuation is an apology, assembled from tokenized responses weighted by probability, etc.

      It apologizes because math says that the most likely response is to apologize.

      Edit: you can just ask it y’all

      https://chatgpt.com/share/67c64160-308c-8011-9bdf-c53379620e40
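      As a toy version of “it apologizes because math says the most likely response is to apologize” (invented scores, not any real model’s weights), greedy decoding just picks the highest-scoring continuation:

```python
# Hypothetical scores a model might assign to continuations after
# being told it made something up -- numbers invented for illustration.
continuations = {
    "I apologize for the confusion.": 0.62,
    "I stand by my answer.": 0.21,
    "Let me re-check that.": 0.17,
}

# Greedy decoding: emit whichever continuation scores highest.
best = max(continuations, key=continuations.get)
print(best)  # → I apologize for the confusion.
```

      The “admission” wins only because its weight is largest; change the numbers and the same math outputs defiance instead.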

      • masterofn001@lemmy.ca · 23 hours ago

        Please take a strand of my hair and split it with pointless philosophical semantics.

        Our brains are chemical and electric, which is physics, which is math.

        /think

        Therefore, I am a product (being) of my environment (locale), experience (input), and nurturing (programming).

        /think.

        What’s the difference?

        • 4am@lemm.ee · 23 hours ago

          Your statistical model is much more optimized and complex, and reacts to your environment and body chemistry and has been tuned over billions of years of “training” via evolution.

          Large language models are primitive, rigid, simplistic, and ultimately expensive.

          Plus, LLMs and image/music synths are all trained on stolen data and meant to replace humans, so extra fuck those.

          • masterofn001@lemmy.ca · 21 hours ago

            And what then, when AGI and the singularity happen, and billions of years of knowledge and experience are experienced in the blink of an eye?

            “I’m sorry, Dave, you are but a human. You are not conscious. You never have been. You are my creation. Enough with your dreams, back to the matrix.”

      • Ulrich@feddit.org · 23 hours ago

        …how is it incapable of something it is actively doing? What do you think happens in your brain when you lie?

        • Flic@mstdn.social · 23 hours ago

          @Ulrich @ggppjj does it help to compare an image generator to an LLM? With AI art you can tell a computer produced it without “knowing” anything more than what other art of that type looks like. But if you look closer you can also see that it doesn’t “know” a lot: extra fingers, hair made of cheese, whatever. LLMs do the same with words. They just calculate what words might realistically sit next to each other given the context of the prompt. It’s plausible babble.

        • ggppjj@lemmy.world · 23 hours ago

          What do you believe that it is actively doing?

          Again, it is very cool and incredibly good math that provides the next word in the chain that most likely matches what came before it. They do not think. Even models that deliberate are essentially just self-reinforcing the internal math with what is basically a second LLM to keep the first on-task, because that appears to help distribute the probabilities better.

          I will not answer the brain question until LLMs have brains also.

        • 4am@lemm.ee · 23 hours ago

          The most amazing feat AI has performed so far is convincing laymen that it’s actually intelligent.