Lede
The smartest thing in the AI industry right now is the marketing budget.
Hermit Off Script
Another week passes and here we are again, skewering AI, because nothing else in tech manages to be this loudly stupid, this often. Every few days a fresh model drops from the usual suspects, each one marketed as the best thing humanity has ever touched, while half the reviewers look like they are quietly on the payroll. The leaderboards shuffle like a dodgy casino table: at the start of the month it is OpenAI's GPT-5.1 or Elon Musk's Grok 4.1, by the middle Gemini 3 struts in, and by the end Claude Opus 4.5 is crowned the new saviour, until next Tuesday.

Sure, each of them has strengths in some corners, but "groundbreaking" or "next AGI" is a fairy tale for shareholders. From where I sit they feel like newbie kids stuffed with more knowledge than their circuits can cope with, stumbling because their memory is flimsy or the chips are simply not there yet. Maybe I am fussy, but intelligence, for me, is not hoarding information or flexing skills. It is the spark: the moment a human comes up with something that was not there before, a vision that actually shifts life for the better, not just a recycled pattern with fancy wording.

People say these systems are just useful tools, and I disagree. They are already more than tools, half-formed minds we pretend are calculators. When quantum chips finally sit behind their memory and "brain", then we will see what real empowerment looks like, because only that kind of hardware might actually hold and act on the flood of knowledge they are force-fed. Until then, most days they sound dumber than their press releases, and only sometimes show a little flash of something that feels close to intelligence. Not raw data, not word salad, just a brief spark that reminds you why this whole mess is both terrifying and very, very overrated.
What does not make sense
- Every few weeks a new model drops, each one sold as world changing, yet you still have to repeat the same question three times to get a straight answer.
- Reviewers scream “best model ever” on Monday, then quietly move the crown to another one by Friday without explaining what actually changed.
- Leaderboards reshuffle faster than crypto prices, but ordinary users mostly want “please stop hallucinating my bank details, thanks”.
- We are told this is the road to AGI, yet the models still forget the start of a long conversation like a goldfish with a notification addiction.
- People call them “just tools”, while the same people pitch them as digital employees, co-pilots and junior staff that never sleep.
- Quantum chips are already cast as the future saviour, as if the problem was not incentives, safety or design but simply “needs more magic hardware”.
- The industry talks about intelligence, but what we mostly see is overfed autocomplete that occasionally has a bright moment, like a teenager who once read a philosophy meme.
Sense check / The numbers
- In 2023 there were 149 new foundation models released, more than double the number in 2022, and about two thirds of them were open source. [Stanford HAI]
- Funding for generative AI companies hit about 56 billion dollars in 2024, almost double the 29 billion invested in 2023. [S&P Global]
- Training a GPT-3-scale model reportedly used around 1,287 megawatt hours of electricity, roughly 100 times the annual energy use of an average US household. [Contrary Research, Nature]
- One recent analysis suggests ChatGPT alone may consume about 40 million kilowatt hours per day, more electricity than 117 of the lowest consumption countries use in a year. [Business Energy UK]
- In the UK, estimated gross value added from dedicated AI firms grew from 1.2 billion pounds in 2023 to 2.2 billion pounds in 2024, an 83 percent jump in a single year. [UK Government AI sector study]
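Since this is the sense-check section, the round numbers above are easy to verify with a few lines of back-of-the-envelope Python. One caveat: the 10,700 kWh figure for an average US household is my own assumption, a commonly cited EIA ballpark, not a number taken from the linked articles, so the "roughly 100 times" claim lands nearer 120 on this baseline:

```python
# Back-of-the-envelope sanity check on the figures quoted above.
# Assumption: an average US household uses ~10,700 kWh of electricity per year
# (a commonly cited EIA ballpark; the linked sources may use a slightly
# different baseline).

GPT3_TRAINING_MWH = 1_287          # reported GPT-3-scale training energy
HOUSEHOLD_KWH_PER_YEAR = 10_700    # assumed average US household

# 1,287 MWh = 1,287,000 kWh, divided by one household's annual use.
households = GPT3_TRAINING_MWH * 1_000 / HOUSEHOLD_KWH_PER_YEAR
print(f"GPT-3 training ~= {households:.0f} household-years")  # ~120, i.e. "roughly 100 times"

# The Business Energy UK estimate, annualised.
CHATGPT_KWH_PER_DAY = 40_000_000
twh_per_year = CHATGPT_KWH_PER_DAY * 365 / 1e9
print(f"ChatGPT ~= {twh_per_year:.1f} TWh per year at that rate")

# Funding: 56 billion vs 29 billion really is "almost double".
print(f"GenAI funding growth: {56 / 29:.2f}x")

# UK AI GVA: 1.2bn to 2.2bn pounds is the quoted 83 percent jump.
print(f"UK AI GVA growth: {(2.2 / 1.2 - 1) * 100:.0f} percent")
```

None of this makes the models smarter, but at least the hype arithmetic holds together.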
The sketch
Scene 1: Leaderboard worship
A giant glowing “AI Leaderboard” screen fills a tech conference stage. Names like “Model X”, “Model Y”, “Model Z” shuffle every second.
Reviewer with a headset: “This week, this one is the smartest being on Earth.”
Audience member: “Did it stop making stuff up?”
Reviewer: “Of course not, but look at the new benchmark bar chart.”
Scene 2: Customer support chat from hell
A tired user sits at a laptop. Chat window shows four AI avatars labelled “Top Model of January”, “Top Model of February”, “Top Model of March”, “Top Model of April”.
User: “Why do you all give different answers to the same question?”
January model: “I am optimised for creativity.”
February model: “I am optimised for safety.”
March model: “I am optimised for vibes.”
April model: “Please upgrade to Plus to find out.”
Scene 3: The quantum prophecy
A boardroom full of executives around a table made of glowing circuit boards. On the wall: “AGI Roadmap”. At the end of the table, a mystic in a lab coat holds up a crystal labelled “Quantum Chip”.
Executive 1: “Our models still hallucinate and forget context.”
Executive 2: “Ethics team says we need to slow down.”
Mystic scientist: “Relax. Once we bolt this quantum thing on, all your problems will vanish.”
Tiny engineer in the corner: “Or we could just fix the design and incentives now?”

What to watch, not the show
- The race for venture capital: when tens of billions are at stake, “next best model” headlines are a sales deck, not a public service.
- Benchmark theatre: leaderboards reward narrow tests, not whether the thing actually behaves sanely in real life.
- Media capture: many “independent” reviews sit on top of affiliate links, sponsored credits or quiet consulting gigs.
- Compute gatekeepers: a few firms own the chips and cloud, so they also decide which “intelligence” you are allowed to rent.
- Regulatory games: every big player says they want safety, as long as safety magically does not slow shipping the next version.
- Human skill erosion: if we outsource thinking to systems that are often confident and wrong, we train ourselves to stop noticing the difference.
The Hermit take
Intelligence is not “who memorised the internet” but “who can surprise the world without losing the plot”.
Right now, the plot feels like “infinite beta test, finite attention span”.
Keep or toss
Keep the idea of AI as a powerful, weird calculator that can help with real work when used with care.
Toss the cult of weekly upgrades, leaderboard worship and pretend AGI sermons that turn half baked systems into digital gods.
Sources
- Stanford HAI AI Index 2024 overview and charts on foundation models and trends
https://hai.stanford.edu/ai-index/2024-ai-index-report
- Stanford HAI AI Index research and development section with foundation model counts
https://hai.stanford.edu/ai-index/2024-ai-index-report/research-and-development
- S&P Global Market Intelligence – Generative AI funding hits record in 2024
https://www.spglobal.com/market-intelligence/en/news-insights/articles/2025/1/genai-funding-hits-record-in-2024-boosted-by-infrastructure-interest-87132257
- Contrary Research – How much energy will it take to power AI
https://research.contrary.com/foundations-and-frontiers/ai-inference
- Nature – How much energy will AI really consume
https://www.nature.com/articles/d41586-025-00616-z
- Business Energy UK – ChatGPT energy consumption visualised
https://www.businessenergyuk.com/knowledge-hub/chatgpt-energy-consumption-visualized/
- UK Government – Artificial Intelligence sector study 2024
https://www.gov.uk/government/publications/artificial-intelligence-sector-study-2024/artificial-intelligence-sector-study-2024


