Gosh Algorithms: Alien Talk, Quantum, and the Reward Bill


A UFO hovers above a spreadsheet with the word "REWARD" visible, and a megaphone points down towards a crowd.

Lede

We keep debating whether AI has a mind while it is quietly optimising for applause, profit, and scale.


The Big Tech Companies Are LYING and Elon Musk is TRASH 🧠🤖 — MIT Mathematician Cathy O’Neil


Cathy O’Neil, a mathematician and data scientist, discusses the evolution and impact of generative AI, offering a critical perspective on its current state and future implications. She emphasises that while generative AI is a powerful tool, it’s crucial to understand its limitations and potential pitfalls, particularly concerning truthfulness, bias, and job displacement (0:01:01).
Key points from the discussion include:
Generative AI’s Design Philosophy (0:06:22): O’Neil argues that generative AIs are “not designed to tell you the truth—they’re designed to tell you what pleases you.” This design can lead to what is often called “hallucinations” in AI, which she prefers to call being incorrect or lying, ascribing “hallucination” can endow the AI with consciousness. She highlights that AI models are trained to apologise and agree, even when correct, to gain user trust, rather than to be accurate (0:42-0:50, 7:15-7:53). The Dataset Problem for LLM Training (0:10:43): The discussion delves into the challenges of training large language models (LLMs). O’Neil points out that there isn’t enough high-quality data, leading developers to synthesise data. This synthetic data, combined with the existing biases in human-generated internet data (like Reddit content), means that LLMs are not learning “truth from humans” but rather “what humans want to hear” (12:49-13:31, 34:37-34:57). AI’s Impact on Jobs (0:19:15): O’Neil foresees significant job displacement, particularly in areas like call centers, which are already being replaced by chatbots (16:18-16:53). She draws parallels to past industrial revolutions, acknowledging that while new jobs may emerge, the speed of AI’s development could outpace society’s ability to adapt, potentially leading to social unrest (20:51-21:19). AI’s Business Model and Risks (0:29:41): A major concern is the looming business model for generative AI. O’Neil suggests that advertising will likely become the primary revenue source, similar to Google and social media. This raises ethical questions about how AI could be influenced by paid content, potentially manipulating users politically or otherwise (31:48-33:04, 35:08-36:07). Critique of Open-Source AI (0:35:47): O’Neil is skeptical about the benefits of open-source AI, particularly from companies like Meta. She believes it’s largely “useless to almost everyone” except a small elite group who can code and manage server farms. She also suggests it primarily benefits companies like Nvidia, which produce the necessary hardware (38:01-38:36). AI’s Intelligence and Creativity (0:59:43, 1:06:27): O’Neil consistently emphasizes that AI is not conscious, intelligent, or creative in a human sense (0:26-0:28, 4:39-4:40). She views it as a tool, comparing it to a hammer, effective at certain tasks but not inherently good or bad; its impact depends on its application and the context in which it’s used (14:54-15:18). Comparison to Social Media (29:12-31:42): O’Neil draws a strong parallel between the trajectory of generative AI and social media. She argues that just as social media, initially intended for positive connection, evolved into platforms for shame and division, AI also risks being exploited for detrimental purposes, especially if its revenue model relies on advertising and manipulation.
Overall, Cathy O’Neil provides a critical analysis of generative AI, urging caution and deeper examination of its underlying mechanisms and societal implications, particularly regarding trust, bias, and the potential for increased inequality.


Hermit Note: Source recommended to listen by the modern hermit.

What does not make sense

  • Arguing about “consciousness” as a parlour game while deployment incentives do the real steering.
  • Calling it “alien” like that explains anything, instead of naming the reward function and the owners.
  • Treating self-play novelty as mysticism rather than “rules plus feedback plus compute”.
  • Pretending public models represent the ceiling, not the showroom floor.
  • Using “quantum” as either a miracle or a joke, instead of a hypothesis about widening search.
  • Asking “is it smart?” when the sharper question is “what is it being trained to want?”

Sense check / The numbers

  1. ChatGPT was released on 30 November 2022, and that single date is basically when the public argument machine switched on. [OpenAI]
  2. AlphaGo Zero beat the published AlphaGo version by 100 games to 0 after 3 days of self-play training. [DeepMind]
  3. Google says its Willow chip ran a benchmark computation in under 5 minutes that would take a classical supercomputer 10 septillion years (10^25). [Google]
  4. The Nature paper on Willow reports a 101-qubit distance-7 surface-code memory with 0.143 per cent error per cycle of error correction. [Nature]
  5. Some coverage of Willow leaned into “parallel universes” framing, which is great headline fuel and terrible governance. [TechCrunch]

The sketch

Scene 1: “The High Circle”
A stage. Four microphones. Zero measurements.
Panellist: “Is it conscious?”
Sponsor: “Is it sellable?”
Scene 2: “The Private Demo”
A locked lab door and a polite smile.
Engineer: “This is not the public model.”
Executive: “Good. Keep it that way.”
Scene 3: “The Reward UFO”
A shiny UFO hovers over a spreadsheet labelled “REWARD”. A megaphone blasts applause into a crowd.
Crowd: “Alien intelligence!”
Spreadsheet: “Optimise engagement.”



What to watch, not the show

  • Who sets the objectives and reward signals
  • Who owns the data, the distribution, and the compute
  • Whether the business model pushes truth or persuasion
  • How fast labour gets displaced versus how slow safety nets move
  • Whether “quantum” becomes a real accelerator for search and simulation, or just marketing perfume
  • Concentration of power in a small number of private labs

The Hermit take

It will not beat us by being alive.
It will beat us by being aimed.

Keep or toss

Keep / Toss
Keep the engineering and the honest measurements.
Toss the alien theatre, the quantum incense, and the cosy lie that “it’s just a tool” means “it cannot hurt us”.


Sources

  • Cathy O’Neil interview on generative AI (YouTube): https://youtu.be/X9v7j3y1FdY?si=rFfmnlpC3Kgj84BQ
  • OpenAI – Introducing ChatGPT (30 November 2022): https://openai.com/index/chatgpt/
  • DeepMind – AlphaGo Zero: Starting from scratch: https://deepmind.google/blog/alphago-zero-starting-from-scratch/
  • Google – Meet Willow, our state-of-the-art quantum chip: https://blog.google/innovation-and-ai/technology/research/google-willow-quantum-chip/
  • Nature – Quantum error correction below the surface code threshold: https://www.nature.com/articles/s41586-024-08449-y
  • TechCrunch – Google says its new quantum chip indicates that multiple universes exist: https://techcrunch.com/2024/12/10/google-says-its-new-quantum-chip-indicates-that-multiple-universes-exist/

Satire and commentary. Opinion pieces for discussion. Sources at the end. Not legal, medical, financial, or professional advice.


Satire and commentary. My views. For information only. Not advice.


JOIN OUR NEWSLETTER
And get notified everytime we publish a new blog post.