• @Thorny_Thicket@sopuli.xyz
      13 points · 10 months ago

      This is what I find the most amusing about the criticism of LLMs and many other AI systems as well. People often talk about them as if they’re somehow uniquely flawed, while in reality what they’re doing isn’t that different from what humans do. The biggest difference is that when a human hallucinates it’s often obvious, but when ChatGPT does it, it’s harder to spot.

      • @dr_catman@beehaw.org
        14 points · 10 months ago

        This is… really not true at all.

        LLMs differ from humans in a very very important way when it comes to language: we know the meanings of the words we use. LLMs do not “know” things, are unconcerned with “meanings”, and thus cannot be said to be “using” words in any meaningful way.

        • @Zaktor@sopuli.xyz
          9 points · 10 months ago

          we know the meanings of the words we use.

          Uh, but we don’t? Not really. People use the wrong words all the time, and each person’s definition (i.e., encoding) is slightly different. We mimic phrases and structures we’ve heard to sound smarter, and we forge on with uncertain statements because they frequently go unchallenged or simply aren’t important.

          We’re more structurally complex than an LLM, but we fool ourselves into thinking we’re somehow uniquely thoughtful and reliable.