• kipo@lemm.ee

    ‘Hallucinations’ are not a bug, though; the models are working exactly as designed. There’s no line of code you can go in and change that will ‘fix’ this.

    LLMs are impressive auto-complete, but sometimes the auto-complete doesn’t spit out factual information because LLMs don’t know what factual information is.
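
    A toy sketch of that point (everything below is invented for illustration; the prompts, tokens, and probabilities have nothing to do with how a real LLM is trained): the generator just samples whatever continuation is statistically likely, and nothing in the loop ever checks whether the output is true.

    ```python
    import numpy as np

    # Invented next-token probabilities standing in for a trained model.
    # Note there is no "is this true?" field anywhere -- only likelihoods.
    NEXT_TOKEN_PROBS = {
        "the capital of france is": {"paris": 0.80, "lyon": 0.15, "mars": 0.05},
        "the capital of atlantis is": {"poseidonia": 0.40, "paris": 0.35, "atlantis": 0.25},
    }

    def autocomplete(prompt: str, rng: np.random.Generator) -> str:
        """Pick a continuation purely by probability; there is no fact-checking step."""
        probs = NEXT_TOKEN_PROBS[prompt]
        tokens = list(probs)
        weights = list(probs.values())
        return str(rng.choice(tokens, p=weights))

    rng = np.random.default_rng(0)
    print(autocomplete("the capital of france is", rng))    # usually "paris"
    print(autocomplete("the capital of atlantis is", rng))  # confidently prints *something*
    ```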

    • squaresinger@lemmy.world

      They aren’t a technical bug, but a UX bug. Or would you claim that an LLM that outputs 100% non-factual hallucinations and no factual information at all is just as desirable as one that doesn’t do that?

      Btw, LLMs don’t have any traditional code at all; the behaviour lives in learned weights, not in hand-written logic (see the sketch below).
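
      A minimal sketch of what that means in practice, assuming the Hugging Face transformers library and the small public gpt2 checkpoint purely as an example: the hand-written part is a generic load-and-sample call, and everything that determines what gets said lives in the downloaded weights, so there is no if-statement anyone could patch to stop hallucinations.

      ```python
      # pip install transformers torch   (gpt2 used only as a small public example)
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")  # hundreds of MB of learned weights

      inputs = tokenizer("The capital of France is", return_tensors="pt")
      # Generic decoding loop: repeatedly predict the most likely next token.
      output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
      print(tokenizer.decode(output[0]))
      ```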

    • dragonfly4933@lemmy.dbzer0.com

      I don’t think calling hallucinations a bug is strictly wrong, but they’re also not ‘working as intended’. The intent is defined by the developers or the company, and they don’t want hallucinations, because hallucinations reduce the usefulness of the models.

      I also don’t think we know for a fact that this is a problem that can’t be solved with current technology; we simply haven’t found a useful solution yet.