Or, because you can’t rely on computers to tell you the truth. Which is exactly the issue with LLMs as well.
You can’t rely on books or people to tell you the truth either.
I was mostly referring to the top comment. If you need to write an essay on Hamlet, the book in fact cannot lie, because the entire exercise is to read the book and write about its contents.
But in general, you are right. (Which is why it is proper journalistic procedure to talk to multiple experts about a topic you write about. A good article also does not present a foregone conclusion, but instead lets readers form their own opinion on a topic by providing the necessary context and facts without the author’s judgement. LLMs as a one-stop shop do not provide this and are less reliable than listening to a single expert would be.)
Which is why bibliographies exist.
Just need to get AI on that.
The only thing AI writing seems to be useful for is wasting real people’s time.
Terence Tao just did a thread on Mathstodon talking about how ChatGPT helped him program an algorithm to search for numbers.
True -
- Write points/summary
- Have AI expand in many words
- Post
- Reader uses AI to summarize the post, preferably in points
- Profit??
I have to hand in a short report.
I wrote parts of it and asked ChatGPT for a conclusion.
So I read that, adjusted a few points, and added another couple of points…
Then I rewrote it all in my own wording. (ChatGPT gave me 10 lines out of 10 pages.)
We are allowed to use ChatGPT, though, because we would always have internet access for our job anyway. (Computer science.)
I know a couple of teachers (college level) who have caught several GPT papers over the summer. It’s a great cheating tool, but as with all cheating in the past, you still have to basically learn the material (at least for narrative papers) to proofread GPT’s output properly. It doesn’t get jargon right, it makes things up, and it makes no attempt to adhere to reason when it’s making an argument.
Using translation tools is extra obvious—have a native speaker proof your paper if you attempt to use an AI translator on a paper for credit!!
> it makes things up, it makes no attempt to adhere to reason when it’s making an argument.
It hardly understands logic. I’m using it to generate content, and it will continually assert information in ways that don’t make sense, relate things that aren’t connected, and forget facts that don’t flow into the response.
As I understand it as a layman who uses GPT-4 quite a lot to generate code and formulas, it doesn’t understand logic at all. Afaik, there is currently no rational process that considers whether what it’s about to say makes sense and is correct.
It just sort of bullshits its way to an answer based on whether words seem likely according to its model.
That’s why you can point it in the right direction and it will sometimes appear to apply reasoning and correct itself. But you can just as easily point it in the wrong direction and it will do that just as confidently too.
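Roughly, a toy sketch of the idea (the probability table below is invented, and real models use neural networks over tokens rather than a lookup table, but the picking step is the point):

```python
# Toy sketch of next-word prediction. The probability table is made up,
# and real models use a neural network over tokens, not a lookup table.
# The point: each word is picked for likelihood, with no step that
# checks whether the result is true or logical.
toy_model = {
    ("the", "answer"): {"is": 0.7, "was": 0.2, "seems": 0.1},
    ("answer", "is"): {"42": 0.5, "unknown": 0.3, "wrong": 0.2},
}

def next_word(context):
    """Return the most likely next word -- fluency, not reasoning."""
    probs = toy_model[context]
    return max(probs, key=probs.get)

text = ["the", "answer"]
for _ in range(2):
    text.append(next_word(tuple(text[-2:])))

print(" ".join(text))  # "the answer is 42" -- equally confident either way
```

Nothing in that loop ever asks whether “42” is correct; it only asks which word scores highest. Nudging the context changes which words score highest, which is why it follows a wrong hint just as confidently as a right one.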
Any teacher still assigning out-of-class homework or assignments is doing a disservice, IMO.
Of course people will just GPT it… you need to get them off the computer and into an exam room.
We need to embrace AI written content fully. Language is just a protocol for communication. If AI can flesh out the “packets” for us nicely in a way that fits what the receiving humans need to understand the communication then that’s a major win. Now I can ask AI to write me a nice letter and prompt it with a short bulleted list of what I want to say. Boom! Done, and time is saved.
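For instance, a minimal sketch using the OpenAI Python SDK (the model name and bullet points are placeholders, not a recommendation):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The points I actually want to say; placeholders for illustration.
bullets = [
    "thank the landlord for the quick repair",
    "confirm we are renewing the lease",
    "ask whether the rent stays the same",
]

prompt = "Write a short, polite letter that covers these points:\n" + \
         "\n".join(f"- {b}" for b in bullets)

response = client.chat.completions.create(
    model="gpt-4",  # any chat model works here
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # the fleshed-out letter
```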
The professional writers who used to slave over a blank Word document are now obsolete, just like the slide rule “computers” of old (the people who could solve complicated mathematics and engineering problems on paper).
Teachers who thought a handwritten report could be used to prove that “education” has happened are now realizing that the idea was a crutch (it was 25 years ago too, when we could copy/paste Microsoft Encarta articles and use them as our research papers).
The technology really just shows us that our language capabilities are a means to an end. If a better means arises, we should figure out how to maximize it.
Huh?
Couldn’t you just ask ChatGPT whether it wrote something specific?
No. The model doesn’t have a record of everything it wrote.