• TheOneCurly@lemmy.theonecurly.page
    1 year ago

    This is the problem with LLM-generated responses. It restates the same argument four times while providing no additional context or facts (because it doesn’t know any). It’s just close enough to looking real that it tricks me into expecting more is right around the corner, but it never comes. Then the hallmark shitty conclusion to really seal it.