Researchers say that the model behind the chatbot fabricated a seemingly convincing database, but a forensic examination shows it doesn’t pass for authentic.
How many times do we have to play this game before people realize it’s not a researcher, lawyer, doctor, or anything that has to rely on facts and established, valid data?
It’s a next-word generator that’s remarkably good at sounding human. Yes, this can often lead to accurate-sounding information, but it doesn’t actually “know” anything. Not in any sense that could be relied on.
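To make the "next-word generator" point concrete, here is a minimal toy sketch of the idea: a bigram model that only tracks which word tends to follow which, with no concept of truth. The corpus, function names, and probabilities here are all invented for illustration; real LLMs use neural networks over tokens, but the generate-one-word-at-a-time loop is the same shape.

```python
import random

# Made-up toy corpus; the model will "learn" only word-following statistics.
corpus = "the model sounds right because the model predicts the next word".split()

# Count which word follows each word (a bigram table).
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n=6, seed=0):
    """Sample up to n words after `start`, each chosen from observed bigrams."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the"))
```

The output is locally fluent because each word plausibly follows the last, but nothing in the table checks whether the sentence is true, which is the commenter's point about relying on such systems for facts.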
It’s crazy that we’re still treating these chatbots as actually intelligent.
What is this “we” shit???
Nobody who has any grip on technology, let alone this pseudo-AI LLM stuff, thinks they are “actually intelligent.”