We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which token (a word or fragment of a word) will come next in a sequence – based on the data it’s been trained on.
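To make that guessing step concrete, here is a rough sketch using the open-source GPT-2 model via the Hugging Face transformers library (assuming both are installed); the prompt and the example continuations in the comment are purely illustrative.

```python
# Ask a small language model for the probability of each possible next token.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The cat sat on the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for every token in the vocabulary
probs = torch.softmax(logits, dim=-1)        # turn scores into probabilities

top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>10}  {float(p):.3f}")  # e.g. " floor", " bed", " couch", ...
```

Everything the model produces is a chain of such guesses, one token at a time.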
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the question of how our physical bodies give rise to conscious experience the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal mental states with representations of bodily states (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotions for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, which is a machine, and consciousness, which is a human phenomenon.
Ahem, the way it works is that a model gets trained. Training means giving it a goal (!) and then adjusting the weights to get closer to that goal. By definition, every AI or machine learning model has a goal. With LLMs, that goal is producing legible text – text that resembles the training dataset, predicted word by word from the words that came before. That’s the LLM’s goal. The humans designing it have goals too, and getting the model’s goal to match what they actually want is called “the alignment problem”.
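Here’s what “give it a goal and modify the weights” looks like in a minimal PyTorch sketch – a toy next-character model, with a made-up corpus and made-up sizes. The goal is the cross-entropy loss on the next character, and training just nudges the weights to lower it.

```python
import torch
import torch.nn as nn

# Toy "training corpus" and character vocabulary (illustrative only).
text = "the cat sat on the mat. the dog sat on the log."
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

# A tiny model that guesses the next character from the current one only.
model = nn.Sequential(nn.Embedding(len(vocab), 32), nn.Linear(32, len(vocab)))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()  # this *is* the goal: predict the next character

for step in range(200):
    xs, ys = data[:-1], data[1:]   # input: current char, target: the char after it
    logits = model(xs)             # scores for every possible next char
    loss = loss_fn(logits, ys)     # how far off the guesses are
    opt.zero_grad()
    loss.backward()                # work out how to change each weight
    opt.step()                     # nudge the weights toward the goal
```

A real LLM is vastly bigger and looks at far more context, but the loop is the same shape: a goal, a loss, and weight updates.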
Simpler models have goals as well: whatever is needed to regulate your thermostat, or scoring high in Mario Kart. And goals aren’t tied to consciousness. Companies, for example, have goals (profit), yet they’re not alive. A simple control loop has a goal, and it’s a super simple piece of tech.
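For comparison, a bare-bones control loop with a goal, in the thermostat spirit. The sensor and heater functions here are hypothetical stand-ins for real hardware.

```python
# A bang-bang thermostat loop: the "goal" is a target temperature, and the
# controller just switches the heater to push the reading toward it.
import random
import time

TARGET = 21.0        # the goal: hold the room at 21 °C
HYSTERESIS = 0.5     # don't flap the heater right at the setpoint

def read_temperature() -> float:
    return 19.0 + random.random() * 4.0   # fake sensor for the sketch

def set_heater(on: bool) -> None:
    print("heater", "ON" if on else "OFF")  # stand-in for a real relay

while True:
    temp = read_temperature()
    if temp < TARGET - HYSTERESIS:
        set_heater(True)
    elif temp > TARGET + HYSTERESIS:
        set_heater(False)
    time.sleep(1)
```

Nobody would call that conscious, yet it has a goal in exactly the same sense.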
Knowledge and reasoning are two different things. Knowledge is being able to store and retrieve information, and it can do that: if I ask it what an alpaca is, it’ll give me an essay or the Wikipedia article, not something else. It can even apply that knowledge. I can tell it to give me an animal like an alpaca for my sci-fi novel set in the outer rim, and it’ll make up an animal with similar attributes, simultaneously knowing how sci-fi works, what its tropes are and how to apply them to the concept of an alpaca. It knows how dogs and cats relate to each other, what attributes they have and what category they belong to. I can ask it about the paws or the tail, and it “knows” how those are connected and will handle the detail question. I can feed it two pieces of example computer code and tell it to combine both projects, even though no one has ever done it that way, and it’ll even know how to use some of the background libraries.
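If you want to poke at that kind of “relatedness” yourself, one way is to compare embedding vectors. This sketch assumes the sentence-transformers library and one of its small off-the-shelf models; the similarity scores are purely statistical, which is exactly the point.

```python
# Compare how "close" a model places different concepts in its vector space.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
words = ["dog", "cat", "alpaca", "spreadsheet"]
emb = model.encode(words)                # one vector per word

sims = util.cos_sim(emb, emb)            # pairwise cosine similarities
for i, w in enumerate(words[1:], start=1):
    print(f"dog vs {w:12s} {float(sims[0, i]):.2f}")   # cat scores well above spreadsheet
```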
It has all of that: knowledge, the ability to apply it, to transfer it to new problems… You just can’t anthropomorphize it. It doesn’t have intelligence or knowledge the same way a human does. It does it differently. But that’s why it’s called Artificial something.
Btw, that’s also why AI in robots works. They form a model of their surroundings and are then able to maneuver in them, or move their arms not just randomly but actually to pick something up. They “understood”, i.e. formed a model. That’s also the main task of our brain, and the main idea of AI.
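A toy version of “form a model, then maneuver in it”: a hypothetical robot keeps an occupancy grid of what it has sensed (the model) and plans a path through the free cells with breadth-first search (the maneuvering). The grid and positions are made up for illustration.

```python
from collections import deque

grid = [                     # 1 = obstacle the robot has "sensed", 0 = free space
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def plan(start, goal):
    """Return a list of grid cells from start to goal, avoiding obstacles."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reached the goal: walk the path back
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                               # no route through the free cells

print(plan((0, 0), (4, 4)))   # e.g. [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), ...]
```

Real robots build far richer models from cameras and lidar, but the principle is the same: model first, act inside the model.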
But yeah, they handle knowledge in a very different way than humans do. The internal processes for applying it are very different, and the goals are entirely different. So if you mean knowledge in the sense of human goals or human reasoning, then no, it definitely doesn’t have that.