This is the problem with LLM-generated responses. It restates the same argument four times, providing no additional context or facts (because it doesn't know any). It's just close enough to real-looking that it tricks me into expecting more is just around the corner, but it never comes. Then the hallmark shitty conclusion to really seal it.