• Raisin8659@monyet.cc · 1 year ago

    Yeah, you should have checked it before you ruined all those poor students’ lives.

  • hoshikarakitaridia@sh.itjust.works · 1 year ago

    If I were a student whose text was rejected because of this tool, would I have a case against my institution, the professor who threw it out, or OpenAI?

    Defamation is all I can come up with, but I don’t know if this actually qualifies as defamatory in itself; that would depend on whether the professor or school did due diligence on the tool being fit for use, and there were already reports that it was not.

    • experbia@kbin.social · 1 year ago

      do I have a case against either my institution, the professor who threw it out or OpenAI?

      This is all such recent technology that I can’t imagine this question being answerable except the long way: in a courtroom. I suspect someone would have to try, in order to set a precedent.

    • ReallyKinda@kbin.social · 1 year ago

      Turnitin isn’t AI technology, but I assume it has similar legal ramifications, and a lot of schools require teachers to run everything through Turnitin (usually by having students submit online). It just spits out a percentage so that the prof can take a closer look. Real quotes count towards the percentage displayed. Maybe with AI you’d have a bit more of a case against the company, because you might claim you trusted it to be accurate.

      • Saik0@lemmy.saik0.com · 1 year ago

        Real quotes count towards the percentage displayed.

        TII can be configured to ignore properly quoted texts.
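        Conceptually, that similarity percentage is just the share of a submission that overlaps indexed sources, with quoted spans optionally excluded first. A minimal sketch in Python, assuming word-level matching against a single source text (real systems fingerprint against a huge corpus; the function name is made up):

```python
import re

def similarity_percent(submission, source, ignore_quotes=True):
    """Toy illustration of a Turnitin-style match percentage.
    Real systems fingerprint documents against a huge corpus;
    this just counts words shared with one source text."""
    if ignore_quotes:
        # Drop anything inside double quotes before comparing,
        # mimicking the "ignore properly quoted text" setting.
        submission = re.sub(r'"[^"]*"', "", submission)
    sub_words = submission.lower().split()
    src_words = set(source.lower().split())
    if not sub_words:
        return 0.0
    matched = sum(1 for w in sub_words if w in src_words)
    return 100.0 * matched / len(sub_words)

essay = 'Cells are "the basic unit of life" they say'
print(similarity_percent(essay, "the basic unit of life"))  # → 0.0
```

        With `ignore_quotes=False` the same essay scores about 33%, so that one setting matters a lot for quote-heavy writing.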

    • kava@lemmy.world · 1 year ago

      There’s a similar issue in chess with cheating detection. They use statistical analysis to see if someone’s moves are too good. Computers play at a much higher level than humans and you can measure how “accurate” a move is.

      It doesn’t mean much for a few moves or even 1 or 2 games but with more data you get more confidence that someone is cheating or not cheating.
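      That “more data, more confidence” effect is plain statistics: the standard error of a player’s average accuracy shrinks with the square root of the number of games. A toy sketch, where the baseline numbers and the simple z-test are invented for illustration (not any site’s actual algorithm):

```python
import math

def cheat_z_score(game_accuracies, baseline_mean=0.55, baseline_std=0.08):
    """Toy z-score of a player's average engine-agreement against a
    hypothetical population baseline. The baseline figures are made up."""
    n = len(game_accuracies)
    mean = sum(game_accuracies) / n
    # Standard error shrinks as 1/sqrt(n): more games, more confidence.
    se = baseline_std / math.sqrt(n)
    return (mean - baseline_mean) / se

# The same suspicious accuracy is weak evidence over 2 games
# but overwhelming over 50:
print(round(cheat_z_score([0.70, 0.72]), 1))  # → 2.8
print(round(cheat_z_score([0.71] * 50), 1))   # → 14.1
```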

      Chess.com released a rather infamous report last year about a high-profile chess player who was cheating on their site. They never directly said “he is cheating”; they simply stated “his games triggered our anti-cheating algorithms.”

      One is debatable; the other is a simple fact, and truth is an absolute defense to defamation. Hans attempted to sue Chess.com for defamation, and from what I understand, the case was recently dismissed.

      I’d imagine these AI detectors for schools have similar wordings to avoid legal risk. “High probability for AI” instead of saying “AI written”. In that case, you may have very little case for defamation.

      However, I’m not a lawyer. I’m just guessing these companies that offer this analysis to colleges have lawyers and have spent time shielding the company from legal liability.

    • Pons_Aelius@kbin.social · 1 year ago

      And once you release an LLM detector, it becomes a training tool for LLMs, and for the people creating them, to fool the detector.

      It’s like the battle between Google Search and SEO tactics, except at hyperspeed.
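      That feedback loop can be sketched in a few lines; here `generate` and `detect` are stand-in callables for an LLM and a public detector, not real APIs:

```python
def evade_detector(generate, detect, prompt, threshold=0.5, max_tries=5):
    """Toy sketch of the loop above: a released detector's score
    becomes a selection signal the generator optimizes against."""
    text = generate(prompt)
    for _ in range(max_tries):
        if detect(text) < threshold:  # detector no longer flags it
            return text
        # Ask the generator to revise, using the detector as the judge.
        text = generate(f"Rewrite this to sound more human: {text}")
    return text
```

      Every query against a public detector is, in effect, free training signal for whoever wants to beat it.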

  • nxfsi@lemmy.world · 1 year ago

    Because all LLM outputs read like middle-school homework essays, regardless of context.

  • DarkMFG@lemmy.world · 1 year ago

    AI writing detectors are so shit. One of my written assignments was flagged as being written by AI, even though at the time I wrote it, programs like ChatGPT were not even popular or mainstream.

    • sheogorath@lemmy.world · 1 year ago

      Yep, I think there needs to be a revolution in how teachers structure assignments for their students. AI is here to stay, and the education system needs to find a way to coexist with it. To survive, it needs to treat AI usage the way calculator usage is treated in math problems.

      • outrageousmatter@lemmy.world · 1 year ago

        Except when you get teachers who say “memorize all the formulas, it’ll be on the test” and don’t provide the formulas, even though in the real world people just fucking look them up.

        • Hardeehar@lemmy.world · 1 year ago

          There is a reason for doing it that way.

          I remember complaining to my dad about this exact thing saying that people just look it up in real life. He told me that in the end, the grade that I get will only tell future employers that I am more or less “teachable” compared to others.

          What you studied, specifically, didn’t matter. It’s how well you learned the material in a short period of time, then cranked out correct answers in a time-pressure situation.

          If you can do that quickly, you get a better grade, which tells people that you are a better candidate for a position.

    • FaceDeer@kbin.social · 1 year ago

      They shouldn’t; their profs are now going to use whatever random crappy website comes up first when they google “AI detector.”

  • ngdev@lemmy.world · 1 year ago

    If I’d built an AI chatbot that was capable of sounding human, why wouldn’t I make a crappy AI-writing detection tool and then shut it down shortly afterward, saying “my AI chatbot is too good! You can’t detect it!”

    • ipkpjersi@lemmy.one · 1 year ago

      Don’t forget wanting to introduce regulations for AI because it’s “dangerous”, but no, not mine; mine is the good one.

  • kratoz29@lemmy.world · 1 year ago

    I have only heard about these tools but never used one myself… Are there any other tools like this?

      • danielbln@lemmy.world · 1 year ago

        Some work ok-ish on long texts, but none are reliable enough to avoid false positives. It might be worth using one as a single piece of evidence among others where it really matters, like a master’s or PhD thesis, but definitely not for Jimmy’s 8th-grade essay about Lincoln.