I know that we’re all still feeling our way around this issue, but how are other profs handling it? What is good evidence of unauthorized AI use? How do you handle a student who refuses to engage in attempts to get their side of the story?

For my classes, we talk once a month or so about acceptable use (treat it like a not-very-bright friend who’s overconfident and prone to hallucinations). It’s okay to brainstorm, bounce ideas, and generally use AI to spark creative problem solving. It’s not okay to have it do your assignments.

  • FitzNuggly@lemmy.world · 1 year ago

    I read about a teacher who had her students use ChatGPT to write an essay, then review it and highlight the inaccuracies.

    • CaptObviousOP · 1 year ago

      I’ve heard of that one too. It seems like a good idea to try and show the pitfalls.

  • flooppoolf@lemmy.world · 1 year ago (edited)

    Where I’m at, it’s a circus. Some professors encourage AI use, while others ban it entirely. The inconsistent consequences mean some students get referred to student conduct for plagiarism, while other classes simply impose their own informal punishments.

    It’s quite distressing to find out that the restructuring of your own sentences can be considered plagiarism, even though the main idea is still your own.

    I don’t teach, but I can say it helps a lot of students who aren’t great at expressing ideas on paper produce something reasonably coherent. Anecdotally, ChatGPT falls apart on professional-level material such as medical law or chemotherapy regimens and will easily land you in a heap of trouble.

    But if you’re having a hard time coming up with ideas for an essay on “the economic downturn of the early 2000s and the political climate that fueled the depression” or “war in the Middle East,” I wouldn’t call it unfair to feed a few key research points into an app that basically does the “explain” part of the essay you worked hard to set up.

    It’s gross and lazy when the whole thing is obviously crappy AI hallucination talk. It’s nice when it’s actually used as a tool that enhances the work.

    Edit: I also just realized this is a community for professors. Thank you for all the hard work you all put into us students. A lot of our lives are secretly shaped around some of the behaviors we pick up from you, and most importantly the things we learn from you. For some of us you are the shimmering hope of escaping a more traditional way of life.

    The education we take home after the day is done, and carry into the future careers you all help build, is massive. I sometimes think y’all are so humble that hearing it sends you straight to cringe land. Thanks, all!

    At least here it doesn’t sound like I’m begging for a grade.

    • CaptObviousOP · 1 year ago

      Indeed, it does not sound like grade begging in this context. I’m sure that your professors would agree that you’re welcome, and thanks for acknowledging their work as well. I hope you find a way to say this to them.

      Mostly, I agree with you about acceptable AI use, although this is still very fluid and subject to change. There’s no significant difference, to my mind, between working with a human sounding board and working with AI in that role. The problem comes when the AI generates most or all of the final product and the student submits it as their own. That’s not even close to acceptable, particularly in a writing class where the entire point is learning to produce one’s own good writing. However, as you note, other profs have different perspectives depending on their course objectives and professional fields. That is appropriate.

      My issue in the original post was students who either mostly or entirely copy the prompt into an AI generator and then submit the result for a grade. Such essays hardly ever actually address the question or even follow instructions. They would fail in any circumstance, but the nature of their creation also violates the course’s academic integrity policy.

      Even text spinners, while useful for sharpening a few words to better express an idea, can land students in trouble. When spinners are overused, the student’s voice (and sometimes their entire message) is lost. No one should want that. I’ve had students fail assignments because the submission no longer represented their own writing; rather, it reflected long stretches of refining the tool’s input. It isn’t plagiarism according to my class definition, but it’s also not acceptable.

      You do raise an interesting idea: How long until we need to include a lesson on crafting appropriate AI prompts in order to help students use them as tools and not as unpaid ghost writers? That’s probably a very different, deep, and interesting rabbit hole.

      At any rate, even though it technically violates the community rules, I hope the mods leave your message here. To my mind, it’s good to have the student perspective as we wrestle with this new menace, er, “tool” in education settings. Thanks for posting.

  • CaptObviousOP · 1 year ago (edited)

    For reference, my working practice this semester is to treat unauthorized AI use (we discuss what is authorized repeatedly) as an academic integrity violation. I’ll begin an inquiry if at least two different AI detectors indicate that a majority of a submission was AI generated (either ≥50% of sentences or ≥50% probability that the entire paper was written by AI). So far guilty students have either immediately confessed or tried a variety of stalling tactics. One had me emailing with the AI for a week, offering one excuse after another until the F was recorded and we moved on. Another relayed Helicopter Parent’s instruction that I was to be lenient in grading and to stop talking with Student; that didn’t go as they expected. Here at the end of the semester, others have simply ignored multiple emails, seemingly trying to run out the clock (hey, it works in sportsball).
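
    (If it helps to see that threshold concretely, here’s a minimal sketch of the decision rule in Python. The field names and scores are invented purely for illustration; no real detector reports results in exactly this format.)

    ```python
    # Minimal sketch of the "two detectors, majority AI" inquiry rule.
    # All field names and scores below are hypothetical.

    def flags_majority(result: dict) -> bool:
        """A detector 'flags' a paper if it marks >=50% of sentences as AI-generated
        or assigns >=50% probability that the entire paper was AI-written."""
        return (result["ai_sentence_fraction"] >= 0.5
                or result["whole_paper_ai_probability"] >= 0.5)

    def should_open_inquiry(detector_results: list[dict]) -> bool:
        """Begin an academic-integrity inquiry only when at least two
        independent detectors flag a majority of the submission."""
        return sum(flags_majority(r) for r in detector_results) >= 2

    # Hypothetical scores from three detectors: two flag the paper, one does not.
    results = [
        {"ai_sentence_fraction": 0.72, "whole_paper_ai_probability": 0.65},
        {"ai_sentence_fraction": 0.40, "whole_paper_ai_probability": 0.58},
        {"ai_sentence_fraction": 0.10, "whole_paper_ai_probability": 0.20},
    ]
    print(should_open_inquiry(results))  # True -> start a conversation, not an automatic F
    ```

    The point is only that the trigger is deliberately conservative: one noisy detector on its own never starts anything.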

    I’ll give students a fair chance to explain, and there have been cases where those explanations passed muster. I’m completely happy to base a judgement on the preponderance of evidence. But they have to actually offer some evidence, and neither my patience, time, nor the semester is infinite.