I think AI is neat.

  • poke@sh.itjust.works · 10 months ago

    Knowing that LLMs are just “parroting” is one of the first steps to implementing them in safe, effective ways where they can actually provide value.

    • KᑌᔕᕼIᗩ@lemmy.ml · 10 months ago

      LLMs definitely provide value; it’s just debatable whether they’re real AI or not. I believe they’re going to be shoved into a round hole regardless.

    • fidodo@lemmy.world · 10 months ago

      I think a better way to view it is as a search engine that works at the word level of granularity. When library indexing systems were invented, they let us look up knowledge at the book level. Search engines allowed lookups at the document level. LLMs allow lookups at the word level, meaning all previously transcribed human knowledge can be synthesized into a response. That’s huge, and where it becomes extra huge is that it can also pull on programming knowledge, allowing it to meta-program and perform complex tasks accurately. You can also hook them up with external APIs so they can do more tasks. What we have is basically a program that can write itself based on the entire corpus of human knowledge, and that will have a tremendous impact.
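
      The “hook them up with external APIs” part is usually called tool calling. A minimal sketch of the loop, assuming a hypothetical `call_llm` stand-in for whatever chat-completion client you use (the weather function and its data are illustrative, not a real service):

      ```python
      import json

      def get_weather(city: str) -> str:
          # Illustrative stub: a real program would call an external weather API here.
          return json.dumps({"city": city, "temp_c": 21})

      # Registry of tools the model is allowed to invoke.
      TOOLS = {"get_weather": get_weather}

      def call_llm(prompt: str) -> str:
          # Hypothetical stand-in: a real client would send `prompt` to a model,
          # which returns either plain text or a JSON tool request it chose to emit.
          return json.dumps({"tool": "get_weather", "args": {"city": "Oslo"}})

      def run(prompt: str) -> str:
          reply = call_llm(prompt)
          try:
              request = json.loads(reply)
          except json.JSONDecodeError:
              return reply  # plain-text answer, no tool needed
          tool = TOOLS[request["tool"]]
          result = tool(**request["args"])
          # A second LLM call would normally turn `result` into prose;
          # here we just return the raw tool output.
          return result

      print(run("What's the weather in Oslo?"))
      ```

      The point of the registry is that the model never executes anything directly; it only names a tool and arguments, and your own code decides whether and how to run it.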

    • KeenFlame@feddit.nu · edited · 10 months ago

      The next step is to understand much more and not get stuck on the most popular semantic trap.

      Then you can begin your journey, man.

      There are so, so many LLM chains that do way more than parrot. It’s just the latest popular catchphrase.

      It’s very tiring to keep explaining this, because even shallow research shows there’s more going on than the “it’s a parrot” comment suggests. We are all parrots. It’s largely irrelevant to the AI safety and usefulness debates.

      Most LLM implementations use frameworks to develop different kinds of understanding, and yeah, it’s rough, but it’s just not true that they only parrot known things. They have internal worlds, especially when you look at agent networks.