• Blackmist@feddit.uk · 54 points · 3 months ago

      And the system doesn’t know either.

      For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.

      • xantoxis@lemmy.world · 35 points · 3 months ago

        Accurate.

        No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.

        • Blackmist@feddit.uk · 18 points · 3 months ago

          The worst for me was a fairly simple programming question. The class it used didn’t exist.

          “You are correct, that class was removed in OLD version. Try this updated code instead.”

          It gave another made-up class name.

          Then the cycle repeated, with a newer version number each time.

          It knows what answers smell like, and the same with excuses. Unfortunately there’s no way of knowing whether it’s actually bullshit until you take a whiff of it yourself.
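          One cheap way to take that whiff programmatically, at least for hallucinated imports, is to check that a suggested class or function actually exists before trusting the code. A minimal sketch (the module and symbol names passed in are whatever the LLM suggested; `JSONFixer` below is a deliberately fake example):

```python
import importlib


def symbol_exists(module_name: str, symbol: str) -> bool:
    """Return True if `symbol` is really defined in `module_name`.

    A hallucinated class or function fails this check immediately,
    before you waste time running the suggested code.
    """
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False  # the module itself doesn't exist
    return hasattr(module, symbol)


# Real symbol: json.JSONDecoder exists.
print(symbol_exists("json", "JSONDecoder"))  # True
# Hallucinated symbol: json.JSONFixer does not.
print(symbol_exists("json", "JSONFixer"))    # False
```

          It only catches names that don't exist at all, of course; code that imports fine but does the wrong thing still needs a human sniff test.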

          • nilloc@discuss.tchncs.de · 5 points · 3 months ago

            So instead of Prompt Engineer, the more accurate term should be AI Taste Tester?

            From what I’ve seen you’ll need an iron stomach.

    • treadful@lemmy.zip · 13 points · 3 months ago

      They really aren’t. Go ask about something in your area of expertise. At first glance, everything will look correct and in order, but the more you read the more it turns out to be complete bullshit. It’s good at getting broad strokes but the details are very often wrong.

      Now imagine someone that doesn’t have your expertise reading that answer. They won’t recognize those details are wrong until it’s too late.

      • Quereller@lemmy.one · 6 points · 3 months ago

        That is about the experience I have. I asked it for factual information in the field I work in. It didn't give correct answers, or it gave protocols that looked plausible but were strange and would not have worked.