If you’re in the US, you might see a new shaded section at the top of your Google Search results with a summary answering your query, along with links to more information. That section, generated by Google’s generative AI technology, previously appeared only if you had opted into the Search Generative Experience (SGE) through the Search Labs platform. Now, according to Search Engine Land, Google has started adding the experience to a “subset of queries, on a small percentage of search traffic in the US.” That’s why you could be seeing Google’s experimental AI-generated section even if you haven’t switched it on.

  • RampantParanoia2365@lemmy.world · 20 points · 8 months ago

    Almost every time I ask a direct question, the two AI answers directly contradict each other. Yesterday I asked whether vinegar cuts grease. I got explanations both for why it’s an excellent grease cutter and for why it doesn’t work because it’s an acid.

    • time_fo_that@lemmy.world · 19 points · 8 months ago

      I think this will be a major issue with AI. Just because it was trained on a huge wealth of knowledge doesn’t mean that it was trained on correct knowledge.

      • kent_eh@lemmy.ca · 8 points · 8 months ago

        Just because it was trained on a huge wealth of knowledge doesn’t mean that it was trained on correct knowledge.

        Which makes its correct answers and its confidently wrong answers look equally plausible. One needs to apply real intelligence to determine which to trust, making the AI tool mostly useless.

      • sc_griffith@awful.systems · 2 points · 8 months ago (edited)

        I don’t see any reason why being trained on writing informed by correct knowledge would cause it to be correct frequently, unless you’re expecting it to just lift sentences verbatim from the training data.

    • Murdoc@sh.itjust.works · 8 points · 8 months ago

      Showing different viewpoints so as not to appear biased. It’s the cornerstone of democracy, after all.

      😛