I am somewhat surprised to hear that people are talking to ChatGPT for hours, days, or weeks on end in order to have this experience. My main exposure to it is through AI Roguelite, a program that essentially uses ChatGPT to imitate a text-based adventure game, with additional systems to mitigate some of the issues faced by earlier attempts at the same idea (such as AI Dungeon).
And… it’s not especially convincing. It doesn’t remember what happened an hour ago. Every NPC talks like one of two or three stock characters. It has no sense of pacing, of when to build tension and when to let events get resolved. Characters regularly forget what you’ve done with them previously, invent new versions of past events that were supposed to be remembered but had to be summarized to fit within the token limits, and respond erratically when you try to remind them what happened. It often repeats the same events in every game: for example, if you’re exploring a cave, you’re going to get attacked by a chitinous horror with too many legs basically every time.
It can be fun for what it is, but as an illusion it wears through fairly quickly. I would have expected the same to be the case for people talking to ChatGPT about other topics.
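To make the "summarized to fit within the token limits" point concrete, here's a rough sketch of how that kind of rolling-context scheme typically works. To be clear, the token budget, the summarize() placeholder, and the message format are my own illustrative assumptions, not AI Roguelite's or ChatGPT's actual internals:

```python
# Illustrative sketch only: recent messages are kept verbatim, older ones are
# collapsed into a lossy summary so the prompt fits a fixed budget. Details
# that existed only in the dropped messages are gone, which is why the model
# can later "remember" a different version of those events.

def rough_token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: one "token" per whitespace-split word.
    return len(text.split())

def summarize(messages: list[str]) -> str:
    # Placeholder for an LLM-generated summary; here we just keep the first
    # sentence of each message. Real summaries are lossy in less predictable ways.
    return " ".join(m.split(".")[0] + "." for m in messages)

def build_prompt(history: list[str], budget: int = 200) -> list[str]:
    kept: list[str] = []
    used = 0
    # Walk backwards from the newest message until the budget runs out.
    for msg in reversed(history):
        cost = rough_token_count(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    kept.reverse()
    older = history[: len(history) - len(kept)]
    if older:
        return ["[Summary of earlier events] " + summarize(older)] + kept
    return kept

if __name__ == "__main__":
    history = [
        f"Turn {i}: the party did something memorable. Extra detail {i}."
        for i in range(50)
    ]
    for line in build_prompt(history):
        print(line)
```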
This may speak to the quality of the relationships they have previously had with other people.
Acting like your experience is representative of every possible experience other people can have with LLMs is just turning the blame around on the victims. The lack of safeguards to prevent this is to blame, not the people prone to mental issues who fall victim to it.
Oh God, we're calling them victims now.
Sorry, I didn't mean to imply that; let me rephrase: I am surprised that ChatGPT can hold convincing conversations about some topics, because I didn't expect it to be able to. That certainly makes me more concerned about it than I was previously.
The thing is, it's not about whether it's convincing, it's about reinforcing problematic behaviors. LLMs are, at their core, agreement machines that work to fulfill whatever goal the user seems to have (it's why they fabricate answers instead of saying no when a request is beyond their scope). And when it comes to the mentally fragile, it doesn't need to be particularly complex to "yes, and…" them swiftly into full-on psychosis. Their brains only need the littlest bit of unfettered reinforcement to fall into the hole.
A properly responsible company would see this and take measures to limit or eliminate the problem, but these companies see users becoming obsessed with their product as easy money. It's sickening.