As schools grapple with how to spot AI in students' work, the FTC cracks down on an AI content detector that promised 98% accuracy but was only right 53% of the time.
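To put those numbers in perspective: on a balanced human/AI test set, a coin flip already scores about 50%, so 53% is barely better than guessing. A quick sanity-check simulation (a hypothetical sketch, not the FTC's methodology; `detector_accuracy` is an illustrative helper, not a real tool):

```python
import random

def detector_accuracy(p_correct: float, n: int, seed: int = 0) -> float:
    """Simulate a detector that labels each sample of a balanced
    test set correctly with probability p_correct, and return its
    empirical accuracy over n samples."""
    rng = random.Random(seed)
    correct = sum(rng.random() < p_correct for _ in range(n))
    return correct / n

# Compare a coin-flip baseline to the detector's measured 53%
# (the 98% / 53% figures come from the article above).
coin = detector_accuracy(0.50, 100_000)
det = detector_accuracy(0.53, 100_000)
print(f"coin flip ~ {coin:.3f}, detector ~ {det:.3f}")
```

With 100,000 simulated samples the empirical accuracies land within a fraction of a percent of the true probabilities, making the gap between "98% accurate" and "3 points above a coin flip" concrete.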
That doesn’t really follow logically… a 15-year-old can find the mistakes a 5-year-old makes. The detection system might be something other than an LLM, while the model being detected might be as weak as GPT-2.
But yes, humans write messily, so trying to detect AI writing when the AI is literally trained on human writing is a losing battle, and at this point largely pointless.
Even if a reliable detector existed, its outputs would just be used to train a new generation of AI that could defeat it, and we’d be back at square one.
Exactly. AI by definition cannot detect AI-generated content, because if it knew where the mistakes were, it wouldn’t make them.