At this rate, if people keep cheerfully piling into dead ends like LLMs and pretending they’re AI, we’ll never have AGI. The idea of throwing ever more compute at LLMs to create AGI is “expect nine women to make one baby in a month” levels of stupid.
I wouldn’t say LLMs are going away any time soon. 3 or 4 years ago I did the Sentdex YouTube tutorial to build one from scratch to beat a Flappy Bird game. They are really impressive when you look at the underlying math. But the math isn’t precise enough to be reliable for anything more than entertainment. Claiming it’s AI, much less AGI, is just marketing bullshit, tho.
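For anyone curious what “the underlying math” boils down to in the LLM case, the core operation inside a transformer layer is scaled dot-product attention, which is essentially matrix multiplies and a softmax. A rough toy sketch in Python (illustrative only; the shapes and names here are made up, and this is not the Sentdex tutorial’s code):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the chosen axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each row of Q asks "which other tokens matter to me?";
    # the softmax turns dot-product scores into weights,
    # and the output is a weighted mix of the rows of V.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

# toy example: 4 tokens, 8-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8)
```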
You’re saying you think LLMs are not AI?
I’m not sure what AI is these days, but according to Merriam-Webster it’s the capability of computer systems or algorithms to imitate intelligent human behavior. So it’s debatable.
I don’t think it’s just marketing bullshit to think of LLMs as AI… The research community generally does, too. Like the AI section on arxiv is usually where you find LLM papers, for example.
That’s not a crazy hype claim like the “AGI” thing, either… It doesn’t suggest sentience or consciousness or any particular semblance of life (and I’d disagree with MW that it needs to be “human” in any way)… It’s just a technical term for systems that exhibit behaviors based on training data rather than explicit programming.
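To make that distinction concrete, here’s a toy contrast (a completely made-up example, not any real system) between a hand-written rule and the same decision picked up from labeled training data:

```python
# Explicitly programmed: a human wrote the rule.
def is_positive_rule(text: str) -> bool:
    return "great" in text.lower() or "love" in text.lower()

# "Learned": the same decision now comes from labeled examples.
training_data = [
    ("I love this", True), ("great stuff", True),
    ("this is awful", False), ("terrible experience", False),
]

def train(data):
    # count how often each word shows up in positive vs negative examples
    scores = {}
    for text, label in data:
        for word in text.lower().split():
            scores[word] = scores.get(word, 0) + (1 if label else -1)
    # the resulting behavior is determined by the data, not by hand-written rules
    return lambda text: sum(scores.get(w, 0) for w in text.lower().split()) > 0

is_positive_learned = train(training_data)
print(is_positive_learned("what a great day"))   # True
print(is_positive_learned("awful, just awful"))  # False
```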
Basically, whenever we find that a human ability can be automated, the goalposts of the “AI” buzzword are silently moved to include it.