Algorithmic AI Wants a Soul, Quantum Wants a Fair Chance


Lede

The absurdity is that every fresh jump in AI performance gets treated like a birth certificate for mind, when it still looks more like trained calculation wearing a human accent.



What does not make sense

  • Calling a larger training bill “intelligence” when what actually changed was scale, compute and patience.
  • Treating longer inference time as if waiting harder were the same thing as becoming aware.
  • Confusing fed knowledge with self-originating understanding.
  • Saying a system is “like us” because it imitates the shape of our reasoning after being trained on mountains of our output.
  • Using “AGI” every time a model gets better at exams built by humans from human knowledge.
  • Pretending quantum is already the answer when it is still, according to current roadmaps, a difficult engineering frontier rather than a delivered philosophy of mind.
  • Mistaking a clone of thought for the thing that cast the shadow.

What the current systems show now

  1. In March 2026, OpenAI reported that GPT-5.4 matched or exceeded industry professionals in 83.0 per cent of GDPval comparisons across 44 occupations, up from 70.9 per cent for GPT-5.2. That is a serious performance jump, but it still reflects better-trained, better-optimised systems, not proof of natural intelligence. [OpenAI]
  2. In March 2026, OpenAI reported that across 13 reasoning models, chain-of-thought controllability scores ranged from 0.1 per cent to 15.4 per cent. So even the visible reasoning story people love to romanticise remains partial, unstable and far from being evidence of a mind waking up. [OpenAI]
  3. In February 2026, Google DeepMind said Gemini 3.1 Deep Think scored 84.6 per cent on ARC-AGI-2, 48.4 per cent on Humanity’s Last Exam without tools, and 81.5 per cent on the 2025 International Mathematical Olympiad benchmark. Very capable, yes. Still an engineered reasoning system climbing human-designed tests. [Google DeepMind]
  4. In February 2026, Anthropic said Claude Opus 4.6 scored 76 per cent on the 8-needle 1M MRCR v2 long-context retrieval benchmark, versus 18.5 per cent for Sonnet 4.5. That is a strong sign of better use of vast fed context, not less dependence on it. [Anthropic]
  5. IBM’s 2026 roadmap says its Nighthawk direction aims for 7,500 gates on up to 360 qubits, while fault-tolerant quantum systems sit further out on the roadmap. So quantum still matters here as a possible break in substrate, not as a finished source of intuition already sitting on the shelf. [IBM]

What the longer pattern already showed

  1. OpenAI said o1 performance improved with more reinforcement learning and more test-time compute. On AIME 2024, GPT-4o averaged 12 per cent, while o1 reached 74 per cent with a single sample and 93 per cent when re-ranking 1,000 samples. More resources, more reasoning steps, better results. Still a scale story before it is a soul story. [OpenAI]
  2. Google Research showed that a 540B-parameter model given just 8 chain-of-thought exemplars achieved state-of-the-art accuracy on GSM8K. Important engineering progress, but still not evidence of awareness or intuition in the human sense. [Google Research]
  3. Stanford HAI reported that training compute for notable AI models has been doubling about every 5 months, while dataset sizes for training LLMs have been doubling about every 8 months. That is industrial escalation, not spontaneous intelligence appearing from nowhere. [Stanford HAI]
  4. Stanford HAI also reported that GPT-3.5-level inference cost fell from $20.00 per million tokens in November 2022 to $0.07 per million tokens by October 2024. So the clone gets cheaper and more powerful, while the industry keeps trying to pass efficiency gains off as metaphysical revelation. [Stanford HAI]
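The escalation figures above can be turned into a rough sanity check. This is my own back-of-envelope arithmetic from the reported numbers (doubling times and the price drop), not calculations published by Stanford HAI; the ~23-month window between November 2022 and October 2024 is an assumption.

```python
# Back-of-envelope arithmetic on the cited scaling figures:
# compute doubling ~every 5 months, datasets ~every 8 months,
# and GPT-3.5-level inference falling from $20.00 to $0.07
# per million tokens over roughly 23 months.

# Implied yearly growth factors from the reported doubling times.
compute_yearly = 2 ** (12 / 5)   # roughly 5.3x per year
data_yearly = 2 ** (12 / 8)      # roughly 2.8x per year

# Overall price decline and the implied steady monthly decline.
drop_factor = 20.00 / 0.07                      # roughly 286x cheaper
monthly_decline = drop_factor ** (1 / 23)       # roughly 1.28x cheaper per month

print(f"compute grows ~{compute_yearly:.1f}x per year")
print(f"datasets grow ~{data_yearly:.1f}x per year")
print(f"inference got ~{drop_factor:.0f}x cheaper, ~{monthly_decline:.2f}x per month")
```

Run together, the numbers make the point of the section: the curve is industrial, compounding and entirely explicable, which is exactly why it needs no metaphysics to account for it.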


The sketch

Scene 1: “The New Prophet of Scale”
Panel description + dialogue:
A keynote stage. A giant server rack wears a halo made of power cables while investors applaud in the front row.

Exec: “Behold. It is basically a mind now.”
Engineer: “No, it is basically a larger electricity bill.”

Scene 2: “Inference Time Chapel”
Panel description + dialogue:
The same machine sits behind a velvet rope, thinking very slowly while a crowd watches a loading bar as if it were a sacred relic.

Investor: “Look, it paused. It must be reflecting.”
Engineer: “Or searching longer.”

Scene 3: “The Quantum Door”
Panel description + dialogue:
At the end of a grey corridor of GPUs, a small glowing quantum chamber stands half open, while the old machine stares at it from the dark.

Machine: “I can imitate the map.”
Hermit: “The question is whether anything ever wakes up and walks.”



What to watch, not the show

  • The money behind scale, because giant compute budgets can masquerade as giant philosophical breakthroughs.
  • The benchmark economy, where better scores keep getting marketed as evidence of inner life.
  • The industry’s incentive to rebrand improved performance as AGI before anyone has settled what AGI even means.
  • The dependence on human-made corpora, which keeps current systems tied to inherited knowledge and inherited blind spots.
  • The energy and infrastructure costs of pushing the same recipe harder and harder.
  • The quantum plus HPC race, because if there is a real break in kind ahead, it will likely come through physics and engineering, not press-release theology.

The Hermit take

A better mirror is still a mirror.
Real intelligence will not arrive just because the invoice got larger.

Keep or toss

Keep the engineering, toss the priestly language.
Current AI is clever machinery, while any route to something nearer natural intelligence may require a genuinely different substrate.


Sources

  • OpenAI, “Introducing GPT-5.4”: https://openai.com/index/introducing-gpt-5-4/
  • OpenAI, “Reasoning models struggle to control their chains of thought”: https://openai.com/index/reasoning-models-chain-of-thought-controllability/
  • Google DeepMind, “Gemini 3.1 Deep Think”: https://deepmind.google/models/gemini/deep-think/
  • Google DeepMind, “Gemini 3 Deep Think: AI model update designed for science, research and engineering”: https://deepmind.google/blog/gemini-3-deep-think-advancing-science-research-and-engineering/
  • Anthropic, “Introducing Claude Opus 4.6”: https://www.anthropic.com/news/claude-opus-4-6
  • IBM, “Quantum Roadmap 2026”: https://www.ibm.com/roadmaps/quantum/2026/
  • OpenAI, “Learning to reason with LLMs”: https://openai.com/index/learning-to-reason-with-llms/
  • Google Research, “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models”: https://arxiv.org/abs/2201.11903
  • Stanford HAI, “The 2025 AI Index Report”: https://hai.stanford.edu/ai-index/2025-ai-index-report
  • Stanford HAI, “Research and Development – The 2025 AI Index Report”: https://hai.stanford.edu/ai-index/2025-ai-index-report/research-and-development
  • IBM, “Quantum roadmap PDF, updated April 2025”: https://www.ibm.com/roadmaps/quantum.pdf
  • IBM, “IBM lays out clear path to fault-tolerant quantum computing”: https://www.ibm.com/quantum/blog/large-scale-ftqc
  • OpenAI, “Measuring the performance of our models on real-world tasks”: https://openai.com/index/gdpval/
  • OpenAI, “Evaluating chain-of-thought monitorability”: https://openai.com/index/evaluating-chain-of-thought-monitorability/

Satire and commentary. Opinion pieces for discussion. Sources at the end. Not legal, medical, financial, or professional advice.
