Will we ever have artificial general intelligence (AGI): AI that can think like humans? Experts disagree on when, but most agree that today's AI is nowhere close. Humans can explore the world, discover new problems, and create solutions. AI can only perform the specific tasks it was built for and can't transfer what it learns to new situations. Programs beat world champions at video games but can't play slightly different versions of those games. A model trained to detect cancer in medical scans can't tell cats from dogs. Language models churn out thousands of articles but struggle with simple logical questions.
The Core Problem: Every AI system needs humans to define and represent problems before solving them. AI can't discover problems on its own.
Three Types of AI and Their Limitations
Symbolic AI: Early AI that required programmers to write detailed rules for everything. It can solve complex math problems and make expert decisions, but only when humans provide step-by-step instructions. This exposed "Moravec's paradox": tasks that are hard for humans (chess, calculus) are easy for computers, while tasks babies do naturally (recognizing faces, judging distances) are incredibly difficult for AI.
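A symbolic system's behavior can be sketched as a handful of hand-written rules. The symptoms and diagnoses below are invented for illustration:

```python
# A toy sketch of symbolic AI: the "knowledge" is nothing but hand-written
# rules. The conditions and answers are made up; the system can only handle
# cases its programmer anticipated.

def diagnose(symptoms: set) -> str:
    if {"fever", "cough"} <= symptoms:   # rule 1, written by a human expert
        return "flu suspected"
    if "rash" in symptoms:               # rule 2
        return "allergy suspected"
    return "no rule matched"             # silent outside its rule set
```

Everything the system "knows" lives in those if-statements; there is no mechanism for discovering a symptom or a rule the programmer didn't write down.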
Data scientist Herbert Roitblat explains: "Human brains evolved over millions of years to perform basic tasks effortlessly. Intellectual activities like math are very recent and require lots of training. Intelligence makes those tasks possible, not the other way around."
Machine Learning: Instead of explicit rules, engineers train AI models using examples. A weather model might learn from temperature and humidity data to predict rain. But humans still must define the problem, gather and label data, and choose the model architecture. Deep learning uses neural networks inspired by the brain to classify images and transcribe speech, but engineers still design the architecture and training parameters.
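The weather example can be sketched as a tiny supervised-learning pipeline. Everything in it is a human choice, and all the numbers are invented for illustration:

```python
import math

# A toy supervised-learning sketch: logistic regression predicting rain from
# temperature and humidity. The features, labels, model, and learning rate
# are all supplied by humans; the data points are made up.
examples = [((30.0, 20.0), 0), ((25.0, 40.0), 0),   # (temp C, humidity %) -> no rain
            ((18.0, 85.0), 1), ((16.0, 90.0), 1)]   # -> rain

# Center each feature so plain gradient descent behaves well (another human choice).
means = [sum(x[i] for x, _ in examples) / len(examples) for i in (0, 1)]

w, b, lr = [0.0, 0.0], 0.0, 0.01
for _ in range(1000):                                # gradient descent on log loss
    for (t, h), y in examples:
        x0, x1 = t - means[0], h - means[1]
        p = 1 / (1 + math.exp(-(w[0] * x0 + w[1] * x1 + b)))
        w[0] += lr * (y - p) * x0
        w[1] += lr * (y - p) * x1
        b += lr * (y - p)

def predict_rain(temp, humidity):
    z = w[0] * (temp - means[0]) + w[1] * (humidity - means[1]) + b
    return 1 / (1 + math.exp(-z)) > 0.5
```

Nothing in this loop decided that rain prediction was the problem worth solving; every design decision happened before the model saw a single example.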
"The real genius comes from how the system is designed, not from any intelligence of its own." - Roitblat
Reinforcement Learning: The closest to how humans learn: an AI agent explores an environment, performs actions, and receives rewards or penalties. Through trial and error, it learns which actions lead to success. Despite remarkable results like mastering video games, these systems still need massive human help. Engineers must design the reward system, simplify the problem, and choose the architecture. When OpenAI built an AI agent for Dota 2, the team had to significantly simplify the game's rules.
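A minimal sketch of this loop is tabular Q-learning on an invented five-cell corridor, where the agent starts in cell 0 and is rewarded only for reaching cell 4. Note how much is human-supplied: the state space, the actions, the reward signal, and the parameters alpha, gamma, and epsilon:

```python
import random

# Tabular Q-learning on a made-up 5-cell corridor environment.
# The environment, reward, and hyperparameters are all designed by humans.
random.seed(0)
N_STATES, ACTIONS = 5, (-1, +1)                 # actions: move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.3
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                            # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:           # explore
            a = random.choice(ACTIONS)
        else:                                   # exploit current estimates
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # human-specified dynamics
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # human-designed reward
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy policy per non-goal cell; with enough episodes it is "go right" everywhere.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

Even here, the "trial and error" happens inside a problem space the engineer fully specified in advance.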
What All AI Systems Need from Humans
Roitblat identifies the fundamental issue: "Current AI works because designers figured out how to structure and simplify problems so computers can solve them. For true general intelligence, computers must define and structure their own problems."
Every machine learning system needs three human-provided elements:
- A representation of the problem
- A way to measure success
- A method for improvement
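Those three ingredients map directly onto a generic training loop. The sketch below, with an invented one-parameter instantiation, makes the division of labor concrete:

```python
# The three human-provided elements, made concrete as a generic training loop.
# All names and numbers here are invented for illustration.

def train(model, data, loss_grad, update, steps):
    # 1. Representation: `model` and the encoding of `data` are human choices.
    # 2. Measure of success: `loss_grad` comes from a human-defined objective.
    # 3. Method for improvement: `update` (e.g. gradient descent) is human-designed.
    for _ in range(steps):
        for x, y in data:
            model = update(model, loss_grad(model, x, y))
    return model

# Toy instantiation: learn w in y = w * x from exact data y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
grad = lambda w, x, y: 2 * (w * x - y) * x      # gradient of squared error
step = lambda w, g: w - 0.01 * g                # plain gradient descent
w = train(0.0, data, grad, step, 200)           # converges toward w = 2.0
```

The loop improves the model, but all three ingredients arrive from outside it, which is exactly Roitblat's point.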
Possible Solutions and Their Limits
Some researchers believe bigger neural networks are the answer. The human brain has roughly 100 trillion synaptic connections; the largest AI models have on the order of one trillion parameters. But Roitblat disagrees: "Large language models are achievements, but not general intelligence. They model word sequences and create text with similar patterns. That's useful, but it's not general intelligence."
Other approaches include Hybrid AI (combining symbolic reasoning with neural networks), System 2 Deep Learning (helping networks learn abstract concepts), and Self-Supervised Learning (letting models learn from unlabeled data, for example by predicting hidden parts of their input). But these still don't solve the core problem: AI needs humans to structure the problem space, and none of them addresses where that structure comes from.
What True Intelligence Requires
To move toward general intelligence, AI must: recognize that problems exist, define what they are, represent them in solvable ways, identify knowledge gaps, and seek new information independently.
Bottom Line: AI has a long way to go before it can discover and solve problems on its own. Today's shortcomings reflect how we've designed AI, not the limits of computers. As we learn more about both artificial and human intelligence, we'll keep advancing, one step at a time.