• ag10n@lemmy.world · 22 hours ago

    On page 6 the judge writes that the LLM “memorized” the content and could “recite” it.

    Neither is true of how LLMs are trained or used.

    • FaceDeer@fedia.io · 21 hours ago

      The judge writes that the Authors told him that LLMs memorized the content and could recite it. He then said, “for purposes of argument I’ll assume that’s true,” and even so he ruled that LLM training does not violate copyright.

      It was perhaps a bit daring of Anthropic not to contest what the Authors claimed in that case, but as it turns out the result is an even stronger ruling. The judge gave the Authors every benefit of the doubt and still found that they had no case when it came to training.

    • Artisian@lemmy.world · 21 hours ago

      Depends on the content and the method. There are tons of ways to encode data, and under the relevant law the encoded forms may still count as copies. There are certainly weaker NN models from which a lot of the training data can be extracted directly from the parameters, even if it isn’t easy, and even when we can’t find a prompt that gets the model to regurgitate it.
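
      To make that last point concrete, here is a deliberately extreme toy sketch (hypothetical, not anything from the ruling or from any real LLM): a linear “model” with more parameters than training examples fits its training strings exactly, so the data can be read back out of the weight matrix itself, with no prompting involved.

      ```python
      import numpy as np

      # Toy training set: each string becomes a row of raw byte values.
      texts = ["the quick brown fox", "pack my box with five dozen"]
      width = max(len(t) for t in texts)
      Y = np.array([[*t.encode().ljust(width)] for t in texts], dtype=float)

      # One-hot "input" per example; the least-squares fit is then exact,
      # so the learned weights are literally the training data.
      X = np.eye(len(texts))
      W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # "training"

      # Extraction straight from the parameters -- no prompt needed.
      recovered = [bytes(np.rint(row).astype(int).tolist()).decode().rstrip()
                   for row in W]
      print(recovered)
      ```

      Real networks are nowhere near this degenerate, but the sketch shows why “no prompt regurgitates it” doesn’t by itself rule out the data being recoverable from the parameters.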