I didn’t plan on writing this, but judging by my LinkedIn feed, the tide has turned – and everyone’s kicking AI while it’s down.
“AI can’t do math.”
“AI fails logical problems.”
“AI hallucinates and lies.”
Sure, sometimes it does. But here’s the question: why did we expect anything else?
What we mostly call “AI” today are large language models (LLMs).
They’re powerful, trained on massive amounts of data. Still, they weren’t built to “think” like humans.
They don’t “understand.”
They don’t “decide.”
They predict what’s likely to come next.
That’s why they can ace some math and logic tasks but fail spectacularly on others. It’s not “broken” – it’s just not built for symbolic reasoning.
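To make that concrete, here’s a minimal sketch of what an LLM actually does under the hood. It uses the Hugging Face transformers library and the small gpt2 model purely as stand-ins – any causal language model behaves the same way: it scores candidate next tokens, it doesn’t do arithmetic.

```python
# A minimal sketch: an LLM only scores possible next tokens.
# Assumes the Hugging Face "transformers" library and the small
# "gpt2" model as illustrative stand-ins for any causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Two plus two equals"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Probabilities over the *next* token only -- no calculation happens,
# just a ranking of what text usually follows this prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob:.3f}")
```

If “four” comes out on top, it’s because that continuation is common in the training data, not because anything was computed.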
Of course the tech companies leaned heavily on the “AI” label. It sells.
But calling LLMs “intelligent” created unrealistic expectations. We assumed we were talking to something that thinks – and when it doesn’t behave like a human, we feel cheated.
That’s on us – or, if you want someone else to blame, on the marketing.
There are still plenty of things AI does well. For example:
✅ Draft and polish emails
✅ Summarize documents
✅ Help with translations
✅ Assist with coding
✅ Generate images and visuals
✅ Brainstorm creative ideas
✅ Convert my original draft into this polished version of a post
These are real capabilities, just don’t confuse them with human-like intelligence.
The Real Problem Isn’t AI
The issue isn’t that today’s AI is “useless.”
The issue is that we expected general intelligence from tools that were never designed to have it.
AI tools are exactly that: tools. They’re powerful, sometimes unreliable, occasionally dangerous – but useful when you know their limits, or at least when you don’t trust them blindly.
Maybe it’s time to stop treating today’s LLMs as early AGI and start treating them as what they actually are: advanced pattern recognizers with surprising capabilities.
And, maybe, we should stop calling them “intelligent” until they actually are.
Admittedly, that will be difficult as long as Microsoft, OpenAI, Anthropic, Meta, and the rest keep calling these things “AI” agents or tools. But really, sustaining that hype is what I’d blame them for. They can’t be blamed for the actual shortcomings of today’s “AI” – there is almost nothing they can do about the LLM nature of such tools.