phoneymouse@lemmy.world to People Twitter@sh.itjust.works · 1 month ago
Why is no one talking about how unproductive it is to have to verify every "hallucination" ChatGPT gives you?
WalnutLum@lemmy.ml · 1 month ago
All LLMs are text completion engines, no matter what fancy bells they tack on. If your task is some kind of text completion, or repetition of text provided in the prompt context, LLMs perform wonderfully. For everything else, you are wading through territory you could probably cover more easily with other methods.
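The "text completion engine" point can be illustrated with a deliberately tiny toy: a bigram model that just appends whatever word most often followed the previous one in its training text. Real LLMs use neural networks over tokens, not word-count tables, but the core loop — predict the next piece of text, append it, repeat — is the same. This sketch is purely illustrative; the corpus and function names are made up for the example.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count which word follows each word in the (toy) corpus."""
    words = corpus.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def complete(model: dict, prompt: str, max_new: int = 5) -> str:
    """Greedily append the most frequent next word, one word at a time."""
    words = prompt.split()
    for _ in range(max_new):
        candidates = model.get(words[-1])
        if not candidates:
            break  # no known continuation: stop completing
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

model = train_bigram("the cat sat on the mat and the cat sat on the rug")
print(complete(model, "the cat", max_new=3))  # → the cat sat on the
```

Nothing in this loop checks whether the continuation is *true* — it only checks that it is statistically plausible, which is exactly why fluent-but-wrong "hallucinations" fall out of the design rather than being a bug bolted on afterwards.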
burgersc12@mander.xyz · 1 month ago
I love the people who are like, "I tried to replace Wolfram Alpha with ChatGPT — why is none of the math right?" and blame ChatGPT, when the problem is that all they really needed was a fucking calculator.