A study from Profound of OpenAI’s ChatGPT, Google AI Overviews, and Perplexity shows that while ChatGPT mostly sources its information from Wikipedia, Google AI Overviews and Perplexity mostly source theirs from Reddit.

  • Mniot@programming.dev · 4 days ago

    I think the academic advice about Wikipedia was sadly mistaken. It’s true that Wikipedia contains errors, but so do other sources. The problem was that it was a new thing and the idea that someone could vandalize a page startled people. It turns out, though, that Wikipedia has pretty good controls for this over a reasonable time-window. And there’s a history of edits. And most pages are accurate and free from vandalism.

    Just as you shouldn’t read any of your other sources uncritically, you shouldn’t read Wikipedia uncritically either. But if you are going to read uncritically, Wikipedia is far from the worst thing to blindly trust.

    • Chulk@lemmy.ml · 4 days ago

      I think the academic advice about Wikipedia was sadly mistaken.

      Yeah, a lot of people had your perspective about Wikipedia while I was in college, but they are wrong, according to Wikipedia.

      From the link:

      We advise special caution when using Wikipedia as a source for research projects. Normal academic usage of Wikipedia is for getting the general facts of a problem and to gather keywords, references and bibliographical pointers, but not as a source in itself. Remember that Wikipedia is a wiki. Anyone in the world can edit an article, deleting accurate information or adding false information, which the reader may not recognize. Thus, you probably shouldn’t be citing Wikipedia. This is good advice for all tertiary sources such as encyclopedias, which are designed to introduce readers to a topic, not to be the final point of reference. Wikipedia, like other encyclopedias, provides overviews of a topic and indicates sources of more extensive information.

      I personally use ChatGPT like I would Wikipedia. It’s a great introduction to a subject, especially in my line of work, which is software development. I can get summarized information about new languages and frameworks really quickly, and then I can dive into the official documentation when I have a high level understanding of the topic at hand. Unfortunately, most people do not use LLMs this way.

      • Mniot@programming.dev · 4 days ago

        This is good advice for all tertiary sources such as encyclopedias, which are designed to introduce readers to a topic, not to be the final point of reference. Wikipedia, like other encyclopedias, provides overviews of a topic and indicates sources of more extensive information.

        The whole paragraph is kinda FUD except for this. Normal research practice is to (get ready for a shock) do research and not just copy a high-level summary of what other people have done. If your professors were saying, “don’t cite encyclopedias, and that includes Wikipedia,” then that’s fine. But my experience was that Wikipedia was specifically called out as being especially unreliable and that’s just nonsense.

        I personally use ChatGPT like I would Wikipedia

        Eesh. The value of a tertiary source is that it cites the secondary sources (which cite the primary). If you strip that out, how’s it different from “some guy told me…”? I think your professors did a bad job of teaching you about how to read sources. Maybe because they didn’t know themselves. :-(

        • Chulk@lemmy.ml · 4 days ago

          my experience was that Wikipedia was specifically called out as being especially unreliable and that’s just nonsense.

          Let me clarify, then: it’s unreliable as a cited source in academia. I’m drawing a parallel and criticizing the way people use ChatGPT, i.e. taking it at face value with zero caution and treating it as if it were a primary source of information.

          Eesh. The value of a tertiary source is that it cites the secondary sources (which cite the primary). If you strip that out, how’s it different from “some guy told me…”? I think your professors did a bad job of teaching you about how to read sources. Maybe because they didn’t know themselves. :-(

          Did you read beyond the sentence that you quoted?

          Here:

          I can get summarized information about new languages and frameworks really quickly, and then I can dive into the official documentation when I have a high level understanding of the topic at hand.

          Example: you’re a junior developer trying to figure out what this JavaScript syntax means: const {x} = response?.data. It’s difficult to look up destructuring and optional chaining without knowing what they’re called.

          With ChatGPT, you can copy and paste that code and ask, “tell me what every piece of syntax is in this line of JavaScript.” Then you can check the official docs to learn more.
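
          For illustration, here’s a minimal sketch of the two features in that line, reusing the placeholder names (response, data, x) from the example above:

              // Optional chaining: response?.data evaluates to undefined instead of
              // throwing when response is null or undefined.
              // Destructuring: const { x } = obj pulls the x property out of obj.
              const response = { data: { x: 42 } };

              // The ?? {} fallback keeps the destructuring from throwing
              // in case data is missing.
              const { x } = response?.data ?? {};

              console.log(x); // -> 42, or undefined if data.x is absent

          Whether you actually need the ?? {} guard depends on whether response?.data can come back undefined in your code, which is exactly the kind of detail to confirm in the official docs rather than take ChatGPT’s word for.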

    • antonim@lemmy.dbzer0.com · 4 days ago

      I think the academic advice about Wikipedia was sadly mistaken.

      It wasn’t mistaken 10 or especially 15 years ago, however. Check how some articles looked back then and you’ll see vastly fewer sources and overall less professional-looking text. These days I think most professors will agree that it’s fine as a starting point (depending on the subject, at least; I still come across unsourced, nonsensical crap here and there and slowly correct it myself).

      • Mniot@programming.dev · 4 days ago

        I think it was. When I think of Wikipedia, I’m thinking about how it was in ~2005 (20 years ago) and it was a pretty solid encyclopedia then.

        There were (and still are) some articles that are very thin. And some that have errors. Both of these things are true of non-wiki encyclopedias. When I’ve seen a poorly-written article, it’s usually on a subject that a standard encyclopedia wouldn’t even cover. So I feel like that was still a giant win for Wikipedia.

        • antonim@lemmy.dbzer0.com · 4 days ago

          In 2005 the article on William Shakespeare contained references to a total of 7 different sources, including a page describing how his name is pronounced, Plutarch, and the “Catholic Encyclopedia on CD-ROM”. It contained more text discussing Shakespeare’s supposed Catholicism than his actual plays, which were described only in the most generic terms possible. I’m not noticing any grave mistakes while skimming the text, but it really couldn’t pass for a reliable source or a traditionally solid encyclopedia. And that’s the page on the best-known English writer; slightly less popular topics were obviously much shoddier.

          It already had significant upsides back then, sure, no doubt about that. But the teachers’ skepticism wasn’t all that unwarranted.