I know that we’re all still feeling our way around this issue, but how are other profs handling it? What is good evidence of unauthorized AI use? How do you handle a student who refuses to engage in attempts to get their side of the story?

For my classes, we talk once a month or so about acceptable use (treat it like a not-very-bright friend who’s overconfident and prone to hallucinations). It’s okay to brainstorm, bounce ideas, and generally use AI to spark creative problem solving. It’s not okay to have it do your assignments.

  • CaptObviousOP · 1 year ago

    For reference, my working practice this semester is to treat unauthorized AI use (we discuss what is authorized repeatedly) as an academic integrity violation. I’ll open an inquiry if at least two different AI detectors flag a majority of a submission as AI generated (either ≥50% of sentences, or ≥50% probability that the entire paper was written by AI). So far, guilty students have either confessed immediately or tried a variety of stalling tactics. One had me emailing with the AI for a week, offering one excuse after another until the F was recorded and we moved on. Another relayed Helicopter Parent’s instruction that I was to be lenient in grading and to stop talking with Student; that didn’t go as they expected. Here at the end of the semester, others have simply ignored multiple emails, seemingly trying to run out the clock (hey, it works in sportsball).

    I’ll give students a fair chance to explain, and there have been cases where those explanations passed muster. I’m completely happy to base a judgment on a preponderance of the evidence. But they have to actually offer some evidence, and neither my patience, my time, nor the semester is infinite.