• @Zaktor@sopuli.xyz

    Yes, it’s been my career for the last two decades and before that was the focus of my education. The idea that “correctness is a coincidence” is absurd and either fails to understand how training works or rejects the entire premise of large data revealing functional relationships in the underlying processes.

    • VeraticusOP

      Or you’ve simply misunderstood what I’ve said despite your two decades of experience and education.

      If you train a model on a bad dataset, will it give you correct data?

      If you ask a model a question it doesn’t have enough data to answer confidently, will it still give you a correct answer?

      And, more importantly, is it trained to produce CORRECT data, or is it trained to return words regardless of whether that data is correct?

      I mean, it’s like you haven’t even thought about this.