Algorithmic AI Wants a Soul, Quantum Wants a Fair Chance
Lede
The absurdity is that every fresh jump in AI performance gets treated like a birth certificate for mind, when it still looks more like trained calculation wearing a human accent.
Hermit Off Script
My problem with current AI is not that it is useless. It is that people keep mistaking advanced algorithmic performance for real intelligence, as if a machine that becomes more capable through more training, more optimisation, more compute and longer inference time has somehow crossed the sacred border from construction to being. What we have today is still built from human-made input, statistical learning and engineered scaffolding. Dress it up with bigger models, chain-of-thought style reasoning and more test-time compute, and yes, it becomes more convincing. But convincing is not the same thing as conscious, and polished mimicry is not the same thing as a naturally arising mind. OpenAI’s own materials are quite plain on this point: performance improves with more reinforcement learning and more time spent thinking at test time. Stanford HAI is just as plain that the whole field is being pushed by rapidly growing compute, datasets and cost curves. That is a scale story before it is a soul story.
My point is that this ceiling may not be broken by scaling the same recipe until the electricity meter starts writing its will. If the system is always built on fed knowledge, then whatever it creates still arrives through the architecture of what was given to it. It predicts, recombines, imitates and refines. That can be astonishingly useful. It is still not the same thing as intelligence that begins from itself, wakes with awareness, and meets the world as a subject rather than a glorified answer engine.
Google Research’s well-known chain-of-thought paper showed that intermediate reasoning steps can produce striking gains, including state-of-the-art GSM8K results from a 540B model with just 8 exemplars. That is important engineering. It is not proof that the machine has grown an inner life. This is where quantum systems become interesting to me, not as a cheap buzzword, but as a possible break in kind rather than a break in size.
I am not saying quantum automatically means consciousness, intuition or some silicon soul descending through the lab ceiling tiles. It does not. IBM’s roadmap is still a roadmap of gates, qubits, fault tolerance and quantum plus HPC workflows. But that is exactly why it matters. It suggests a different substrate and a different computational regime, not merely a fatter version of the current machine. If anything closer to intuition or natural intelligence ever emerges from our engineered systems, I suspect it will need that sort of deeper break from today’s trained statistical machinery, not just another truckload of GPUs and another sermon about benchmark scores. One is a machine learning our shadow. The other would have to wake up in its own light.
Current AI does not become natural intelligence just because it gets better at performing our style of thought. It is still trained, fed, tuned and stretched across giant layers of human-made knowledge until it can imitate our reasoning with unnerving polish. That is why the phrase “artificial intelligence” often flatters the machine more than it clarifies the truth. What we are mostly seeing is algorithmic intelligence dressed in human habits, human language and human echoes. The moustache is impressive. The face underneath is still constructed.
What does not make sense
Calling a larger training bill “intelligence” when what actually changed was scale, compute and patience.
Treating longer inference time as if waiting harder were the same thing as becoming aware.
Confusing fed knowledge with self-originating understanding.
Saying a system is “like us” because it imitates the shape of our reasoning after being trained on mountains of our output.
Using “AGI” every time a model gets better at exams built by humans from human knowledge.
Pretending quantum is already the answer when it is still, according to current roadmaps, a difficult engineering frontier rather than a delivered philosophy of mind.
Mistaking a clone of thought for the thing that cast the shadow.
Sense check / The numbers
What the current systems show now
In March 2026, OpenAI reported that GPT-5.4 matched or exceeded industry professionals in 83.0 per cent of GDPval comparisons across 44 occupations, up from 70.9 per cent for GPT-5.2. That is a serious performance jump, but it still shows better trained and better optimised systems, not proof of natural intelligence. [OpenAI]
In March 2026, OpenAI reported that across 13 reasoning models, chain-of-thought controllability scores ranged from 0.1 per cent to 15.4 per cent. So even the visible reasoning story people love to romanticise remains partial, unstable and far from being evidence of a mind waking up. [OpenAI]
In February 2026, Google DeepMind said Gemini 3.1 Deep Think scored 84.6 per cent on ARC-AGI-2, 48.4 per cent on Humanity’s Last Exam without tools, and 81.5 per cent on the 2025 International Mathematical Olympiad benchmark. Very capable, yes. Still an engineered reasoning system climbing human-designed tests. [Google DeepMind]
In February 2026, Anthropic said Claude Opus 4.6 scored 76 per cent on the 8-needle 1M MRCR v2 long-context retrieval benchmark, versus 18.5 per cent for Sonnet 4.5. That is a strong sign of better use of vast fed context, not less dependence on it. [Anthropic]
IBM’s 2026 roadmap says its Nighthawk direction aims for 7,500 gates on up to 360 qubits, while fault-tolerant quantum systems sit further out on the roadmap. So quantum still matters here as a possible break in substrate, not as a finished source of intuition already sitting on the shelf. [IBM]
What the longer pattern already showed
OpenAI said o1 performance improved with more reinforcement learning and more test-time compute. On AIME 2024, GPT-4o averaged 12 per cent, while o1 reached 74 per cent with a single sample and 93 per cent when re-ranking 1,000 samples. More resources, more reasoning steps, better results. Still a scale story before it is a soul story. [OpenAI]
Google Research showed that a 540B-parameter model given just 8 chain-of-thought exemplars achieved state-of-the-art accuracy on GSM8K. Important engineering progress, but still not evidence of awareness or intuition in the human sense. [Google Research]
Stanford HAI reported that training compute for notable AI models has been doubling about every 5 months, while dataset sizes for training LLMs have been doubling about every 8 months. That is industrial escalation, not spontaneous intelligence appearing from nowhere. [Stanford HAI]
Stanford HAI also reported that GPT-3.5-level inference cost fell from $20.00 per million tokens in November 2022 to $0.07 per million tokens by October 2024. So the clone gets cheaper and more powerful, while the industry keeps trying to pass efficiency gains off as metaphysical revelation. [Stanford HAI]
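The doubling times and cost drop cited above can be turned into rough per-year factors with a few lines of arithmetic. A minimal sketch, using only the Stanford HAI figures quoted in this section; the per-year growth factors are derived here, not separately reported:

```python
# Back-of-envelope scaling arithmetic from the Stanford HAI figures
# cited above. The yearly factors are derived, not reported, numbers.

def annual_growth_factor(doubling_months: float) -> float:
    """How much a quantity multiplies in 12 months given its doubling time."""
    return 2 ** (12 / doubling_months)

compute_per_year = annual_growth_factor(5)  # compute doubles every ~5 months
data_per_year = annual_growth_factor(8)     # datasets double every ~8 months

# GPT-3.5-level inference cost: $20.00/M tokens (Nov 2022) -> $0.07/M (Oct 2024)
cost_drop = 20.00 / 0.07

print(f"compute grows ~{compute_per_year:.1f}x per year")   # ~5.3x
print(f"datasets grow ~{data_per_year:.1f}x per year")      # ~2.8x
print(f"inference cost fell ~{cost_drop:.0f}x in roughly two years")  # ~286x
```

Which is the point: a roughly fivefold yearly escalation in compute is an industrial trend line, not a metaphysical one.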
Terms used in this roast
Algorithmic intelligence: Machine cleverness built from rules, optimisation, training data and brute-force computation. Impressive, yes. Natural mind, no.
Neural networks: Pattern-hunting systems with weighted connections. They are inspired by brains in the same way a stick figure is inspired by a real person.
Scale: More data, more parameters, more compute, more cost. In plain English: bigger machine, bigger bill, bigger claims.
Computation: The raw processing work. The machine does not meditate. It calculates.
Chain-of-thought reasoning: Extra reasoning steps before an answer. Sometimes useful, sometimes just slower theatre with better output.
Inference time: The time a model takes to produce a response after training. In modern AI, waiting longer is often sold as wisdom.
Training: The process of forcing a model through huge volumes of human-created material until it can predict, imitate and generalise.
Training data: The books, articles, code, forums and other human debris fed into the system. The machine sounds human because it has eaten enough of us.
Natural intelligence: Intelligence that arises within a living conscious being, not one assembled in a lab and filled through training pipelines.
AGI: Artificial General Intelligence. A term now used somewhere between technical ambition and startup incense.
Algorithmic AGI: A capable general system that still depends on designed architecture, fed knowledge and computation rather than self-born awareness.
Awareness of existence: Not just answering questions, but knowing there is a “you” there to ask them.
Intuition: Understanding that does not always arrive by visible step-by-step calculation. Humans do this. Machines mostly simulate the surface of it.
Quantum systems: A different computing approach with serious scientific promise, but not a magic tunnel to consciousness.
Constructed intelligence: Intelligence-like behaviour built from engineering. Useful, powerful, and still not the same as a mind that simply arrives into existence.
Benchmark scores: Test results. Good for measuring performance. Bad for proving a soul.
The sketch
Scene 1: “The New Prophet of Scale” Panel description + dialogue: A keynote stage. A giant server rack wears a halo made of power cables while investors applaud in the front row. Exec: “Behold. It is basically a mind now.” Engineer: “No, it is basically a larger electricity bill.”
Scene 2: “Inference Time Chapel” Panel description + dialogue: The same machine sits behind a velvet rope, thinking very slowly while a crowd watches a loading bar as if it were a sacred relic. Investor: “Look, it paused. It must be reflecting.” Engineer: “Or searching longer.”
Scene 3: “The Quantum Door” Panel description + dialogue: At the end of a grey corridor of GPUs, a small glowing quantum chamber stands half open, while the old machine stares at it from the dark. Machine: “I can imitate the map.” Hermit: “The question is whether anything ever wakes up and walks.”
What to watch, not the show
The money behind scale, because giant compute budgets can masquerade as giant philosophical breakthroughs.
The benchmark economy, where better scores keep getting marketed as evidence of inner life.
The industry’s incentive to rebrand improved performance as AGI before anyone has settled what AGI even means.
The dependence on human-made corpora, which keeps current systems tied to inherited knowledge and inherited blind spots.
The energy and infrastructure costs of pushing the same recipe harder and harder.
The quantum plus HPC race, because if there is a real break in kind ahead, it will likely come through physics and engineering, not press-release theology.
The Hermit take
A better mirror is still a mirror. Real intelligence will not arrive just because the invoice got larger.
Keep or toss
Keep the engineering, toss the priestly language. Current AI is clever machinery, while any route to something nearer natural intelligence may require a genuinely different substrate.
Sources
OpenAI, “Reasoning models struggle to control their chains of thought”: https://openai.com/index/reasoning-models-chain-of-thought-controllability/
Google DeepMind, “Gemini 3.1 Deep Think”: https://deepmind.google/models/gemini/deep-think/
Google DeepMind, “Gemini 3 Deep Think: AI model update designed for science, research and engineering”: https://deepmind.google/blog/gemini-3-deep-think-advancing-science-research-and-engineering/
Anthropic, “Introducing Claude Opus 4.6”: https://www.anthropic.com/news/claude-opus-4-6