[OpenAI CEO Sam] Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.” These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
Primary source: https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz
Secondary source: https://bookshop.org/a/12476/9780063418561
I recently took some college classes, and they had us run our papers through Grammarly to check for errors and help improve our writing.
I hated it. It stripped the voice out of your writing almost completely, and every sentence got reweighted to read like a standard textbook. Sure, all the same information was there, but by the time the AI said it was good… it sounded like it had been written by AI in the first place. Making it happy was worse than writing the paper itself, since the grammar portion of the grading was simply ‘run it through AI and mark down for any errors it picks up’.