I’m not involved in LLM work, but apparently the way it works is that the sentence is broken into words, each word is assigned a unique number, and that’s how the information is stored. So the LLM never sees the actual word.
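For illustration, here is a minimal Python sketch of that idea, using a made-up toy vocabulary rather than any real model’s (real LLMs use subword tokens, as noted further down):

```python
# Toy sketch: map each word in a sentence to an arbitrary integer ID.
# The model only ever sees the numbers, never the raw characters.

sentence = "the cat sat on the mat"

# Build a tiny vocabulary: each unique word gets an ID in order of appearance.
vocab = {word: idx for idx, word in enumerate(dict.fromkeys(sentence.split()))}

token_ids = [vocab[word] for word in sentence.split()]

print(vocab)      # {'the': 0, 'cat': 1, 'sat': 2, 'on': 3, 'mat': 4}
print(token_ids)  # [0, 1, 2, 3, 0, 4]
```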
Adding to this, each word and the words around it are given a statistical probability. In other words: what are the odds that word 2 follows word 1? Scale that out across every word in a sentence and you can see that LLMs are just huge math equations that put words together based on their statistical probability (see the sketch below).
This is key because, and I can’t emphasize this enough, AI does not think. We (humans) anthropomorphize them, giving them human characteristics when they are little more than number crunchers.
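A minimal sketch of that idea using simple bigram counts (real LLMs condition on far more context with a neural network, but the “probability of one word following another” principle is the same):

```python
# Estimate P(next word | current word) from counts over a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

# Turn the counts for one word into probabilities.
word = "the"
total = sum(following[word].values())
for nxt, count in following[word].items():
    print(f"P({nxt!r} | {word!r}) = {count / total:.2f}")
# P('cat' | 'the') = 0.67
# P('mat' | 'the') = 0.33
```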
I don’t get it
It used to reply “2” until this new upgrade. But now, after 14 minutes, the new update gives you the right answer.
interesting
Not words but tokens: “strawberry” could be the tokens ‘straw’ and ‘berry’, but it could also be ‘straw’, ‘be’ and ‘rry’.
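For the curious, a short sketch using OpenAI’s tiktoken library (assuming it’s installed, e.g. via pip install tiktoken) to inspect how one real tokenizer splits the word; the exact pieces depend on which encoding a given model uses:

```python
import tiktoken

# cl100k_base is the encoding used by several OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("strawberry")
pieces = [enc.decode([tid]) for tid in token_ids]

print(token_ids)  # the integer IDs the model actually sees
print(pieces)     # the subword pieces those IDs stand for
```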
The joke is that it took 14 mins to give that answer
https://www.youtube.com/shorts/7pQrMAekdn4