Code Red, Code Green: Altman, Google, AGI and the IQ Myth


A split traffic light shows red "CODE RED" and green "CODE GREEN" above a small person holding a phone toward a large brain-shaped vault with a padlock.

Lede

The industry is sprinting toward “AGI” while simultaneously insisting the public does not want more intelligence, just more buttons.


Sam Altman: How OpenAI Wins, AI Buildout Logic, IPO in 2026?


What does not make sense

  • Calling it a “race to AGI” while reassuring everyone the main consumer desire is “not more IQ” is like selling a sports car by bragging about the cup holders.
  • “Be in code red forever” is not a strategy; it is a personality disorder with a budget line.
  • Trusting Google more because you use Gmail and Android daily is fair – but it is also how distribution quietly wins while everyone argues about benchmarks.
  • Complaining about corporate greed while cheering for trillion-scale infrastructure is like booing the casino because the lights are too bright.
  • Wanting AI to be “free for everyone” while the incentives reward subscriptions, ads, and lock-in is a lovely prayer aimed at a payments terminal.
  • Saying guardrails will not matter if the AI is smart enough is optimistic – guardrails are usually there because humans are not.

Sense check / The numbers

  1. Altman described “code red” as a recurring response to competitive threats, historically lasting “six or eight week” periods, and suggested it might happen “once, maybe twice a year”. He also said ChatGPT grew from about 400 million weekly active users earlier in 2025 to 800 million weekly active users by mid-December 2025. [Big Technology]
  2. In the same conversation, infrastructure commitments were discussed at roughly $1.4 trillion “over a very long period”, alongside a reported trajectory of about $20 billion in revenue in 2025, with Altman arguing compute has been a persistent constraint. [Big Technology]
  3. Reuters reported OpenAI was in early talks to raise up to $100 billion at around a $750 billion valuation, with discussion of a possible IPO filing in the second half of 2026 and an earlier employee share sale reported at about $6.6 billion. [Reuters]
  4. Google began rolling out NotebookLM in July 2023, and later launched standalone Android and iOS apps in May 2025. [Google Blog] [TechCrunch]
  5. Google DeepMind introduced Nano Banana Pro on 20 November 2025, positioning it as an image generation and editing model available across Google products. [Google Blog]
  6. Google updated Veo 3 to support vertical 9:16 video and 1080p (for 16:9) and cut pricing to about $0.40 per second for Veo 3 and $0.15 per second for Veo 3 Fast (as reported). [The Verge] [DeepMind]

The sketch

Scene 1: “Code Red Karaoke”
Panel: A boardroom siren blares. A CEO holds a mic like a fire extinguisher.
Dialogue: “We are in code red!”
Dialogue: “Great. Can the free tier do algebra yet?”
Scene 2: “The IQ Diet Plan”
Panel: A product launch slide reads: “Not More IQ”. Behind it, a locked door labelled “Internal Models”.
Dialogue: “Consumers do not want more IQ.”
Dialogue: “So why is the good brain behind a paywall?”
Scene 3: “Two Churches, One Collection Plate”
Panel: Two altars: one labelled “Nonprofit Future”, one labelled “Subscription Now”. A tired user kneels between them holding a phone.
Dialogue: “I just want a smarter friend.”
Dialogue: “That will be $19.99 a month, amen.”



What to watch, not the show

  • Distribution beats genius: whoever owns the default screen wins more days than the best lab.
  • Compute economics: scaling laws meet electricity bills, and poetry loses to power purchase agreements.
  • Governance theatre: “guardrails” without public accountability become branding, not safety.
  • Two-tier intelligence: the best capability gets rationed to enterprises, while consumers get a demo with manners.
  • The spiritual gap: tools amplify what people are ready to face – and most people are still running from themselves.
  • Nonprofit talk vs capital reality: incentives currently reward extraction, not liberation.

The Hermit take

Chasing AGI without fixing access is just building a palace with no doors.
If you want an intelligent guardian, start by guarding the public interest.

Keep or toss

Keep the demand for truly smarter models and universal access.
Toss the hero myth that permanent panic equals progress, and the cosy lie that people do not want more intelligence.


Landau “law” (usually: Landau symbols / Bachmann-Landau notation)

When someone says “Landau’s law” in a maths + algorithms chat, nine times out of ten they mean Landau symbols – the shorthand for how fast things grow (or how small an error term is) when the input gets large, or a variable gets close to a point. It is the language of “this term matters” vs “this term is noise”.

Sometimes, though, people mean Landau's function g(n) from group theory/number theory: the maximum order of a permutation in the symmetric group S_n, equivalently the largest lcm you can get from a partition of n (example: g(5) = 6, via the partition 5 = 2 + 3 and lcm(2, 3) = 6). Landau proved that g(n) grows roughly like exp((1 + o(1)) * sqrt(n ln n)), i.e. ln g(n) / sqrt(n ln n) -> 1 as n -> infinity.
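
If you want to poke at g(n) yourself, here is a minimal brute-force sketch in Python (my own illustration, not from any of the sources): it enumerates the partitions of n and takes the maximum lcm of the parts. Fine for small n, but the number of partitions grows quickly, so do not feed it large inputs.

from math import lcm  # math.lcm needs Python 3.9+

def partitions(n, max_part=None):
    """Yield every partition of n as a tuple of parts, largest part first."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for part in range(min(n, max_part), 0, -1):
        for rest in partitions(n - part, part):
            yield (part,) + rest

def landau_g(n):
    """Landau's function: the largest lcm of any partition of n,
    equivalently the maximum order of a permutation in S_n."""
    return max(lcm(*p) for p in partitions(n))

print([landau_g(n) for n in range(1, 11)])
# Prints [1, 2, 3, 4, 6, 6, 12, 15, 20, 30]; note g(5) = 6, matching the text.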

The core idea (in human words)

You take two functions, f and g, and ask: as n gets huge (or x approaches some point), is f:

  • never bigger than a constant multiple of g? (big-O)
  • much smaller than g? (little-o)
  • at least a constant multiple of g? (big-Omega)
  • sandwiched between constant multiples of g? (big-Theta, “tight bound”)

This is why it is used for algorithm complexity and for approximation errors in maths.
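
Before the formal definitions, a quick way to build intuition (a heuristic of mine, not the definition itself): watch the ratio f(n)/g(n) as n grows. Roughly, a ratio heading to 0 suggests little-o, a ratio that stays bounded suggests big-O, and a ratio pinned between two positive constants suggests Theta. A throwaway Python sketch:

import math

def ratio_table(f, g, ns):
    """Print f(n)/g(n) at a few growing n. Heuristic evidence, not a proof."""
    for n in ns:
        print(f"n = {n}: f/g = {f(n) / g(n):.3e}")

ns = [10, 10**3, 10**6, 10**9]

# n*log(n) vs n^2: the ratio heads to 0, so n*log(n) = o(n^2).
ratio_table(lambda n: n * math.log(n), lambda n: n**2, ns)

# 3n + 7 vs n: the ratio settles near 3, so 3n + 7 = Theta(n).
ratio_table(lambda n: 3 * n + 7, lambda n: n, ns)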


Definitions you can actually use

Assume g(n) > 0 for all sufficiently large n (so division is safe).

1) Big-O (upper bound)

“f grows no faster than g, up to a constant factor.”

Formal:
f(n) = O(g(n)) as n -> infinity
means there exist constants C > 0 and n0 such that for all n >= n0:
|f(n)| <= C * g(n).
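
A definition-shaped spot check (the helper is my own, for illustration: sampling finitely many n can catch a wrong claim but can never prove a right one):

def witnesses_big_o(f, g, C, n0, n_max=10**5):
    """Spot-check |f(n)| <= C * g(n) for every n in [n0, n_max)."""
    return all(abs(f(n)) <= C * g(n) for n in range(n0, n_max))

# Claim: 10n + 5 = O(n), witnessed by C = 15 and n0 = 1
# (since 10n + 5 <= 15n exactly when n >= 1).
print(witnesses_big_o(lambda n: 10 * n + 5, lambda n: n, C=15, n0=1))  # True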

2) Little-o (strictly smaller order)

“f becomes negligible compared to g.”

Formal (equivalent forms, when g(n) != 0 eventually):
f(n) = o(g(n)) as n -> infinity
means for every C > 0 there exists n0 such that for all n >= n0:
|f(n)| <= C * g(n),
which is equivalent to:
lim_{n->infinity} f(n)/g(n) = 0.
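
The limit form is the easiest one to eyeball numerically. Again an illustration of mine, and a table of ratios is evidence, not a proof:

import math

# Claim: log(n) = o(sqrt(n)): any positive power of n eventually beats the log.
for n in [10, 10**4, 10**8, 10**16]:
    print(n, math.log(n) / math.sqrt(n))
# Ratios: roughly 0.73, 0.092, 0.0018, 3.7e-07, heading to 0 as little-o demands.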

3) Big-Omega (lower bound)

“f grows at least as fast as g, up to a constant factor.”

One standard CS meaning:
f(n) = Omega(g(n)) as n -> infinity
means there exist constants C > 0 and n0 such that for all n >= n0:
f(n) >= C * g(n) (assuming f and g are nonnegative for all large n).
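
Flipping the inequality gives the matching spot check (same caveats as the big-O sketch above; the helper name is mine):

def witnesses_big_omega(f, g, C, n0, n_max=10**5):
    """Spot-check f(n) >= C * g(n) for every n in [n0, n_max)."""
    return all(f(n) >= C * g(n) for n in range(n0, n_max))

# Claim: n^2/2 - 3n = Omega(n^2), witnessed by C = 0.25 and n0 = 12
# (since n^2/2 - 3n >= n^2/4 exactly when n >= 12).
print(witnesses_big_omega(lambda n: n * n / 2 - 3 * n, lambda n: n * n,
                          C=0.25, n0=12))  # True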

4) Big-Theta (tight bound)

“f and g grow at the same rate, up to constant factors.”

Formal:
f(n) = Theta(g(n))
means f(n) = O(g(n)) and f(n) = Omega(g(n)).
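
Under the same assumptions, Theta is just both of the previous spot checks at once; this sketch reuses witnesses_big_o and witnesses_big_omega from above:

def witnesses_big_theta(f, g, C_low, C_high, n0, n_max=10**5):
    """Spot-check C_low * g(n) <= f(n) <= C_high * g(n) on [n0, n_max)."""
    return (witnesses_big_omega(f, g, C_low, n0, n_max)
            and witnesses_big_o(f, g, C_high, n0, n_max))

# Claim: 2n^2 + n = Theta(n^2), sandwiched between 2n^2 and 3n^2 for n >= 1.
print(witnesses_big_theta(lambda n: 2 * n * n + n, lambda n: n * n,
                          2, 3, n0=1))  # True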


Quick examples (the ones people argue about on phones)

Example A: polynomial “dominant term”

Let T(n) = 4n^2 - 2n + 2.

For large n, n^2 dominates n and constants, so:
T(n) = O(n^2)
and more strongly:
T(n) = Theta(n^2).
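
To pin down the constants (a worked check of mine, not from the source): for n >= 1,
3n^2 <= 4n^2 - 2n + 2 <= 5n^2,
because 4n^2 - 2n + 2 <= 5n^2 rearranges to n^2 + 2n - 2 >= 0 (true from n = 1),
and 4n^2 - 2n + 2 >= 3n^2 rearranges to (n - 1)^2 + 1 >= 0 (true for every n).
So C = 5 witnesses the big-O bound, C = 3 witnesses the Omega bound, and together they witness T(n) = Theta(n^2).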

Example B: error term in an approximation (near 0)

People write things like:
e^x = 1 + x + x^2/2 + O(x^3) as x -> 0

Meaning: the leftover error is bounded by a constant times x^3 when x is close enough to 0.
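
A quick numeric illustration (mine, not from the text): the remainder e^x - (1 + x + x^2/2), divided by x^3, should stay bounded as x -> 0. In fact it tends to 1/6, the next Taylor coefficient:

import math

for x in [0.5, 0.1, 0.01, 0.001]:
    remainder = math.exp(x) - (1 + x + x * x / 2)
    print(x, remainder / x**3)
# The ratio settles near 1/6 (about 0.1667), so the error really is O(x^3) near 0.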


Rules of thumb (so you do not lie by accident)

  1. Constants vanish in big-O:
    O(3n) is the same class as O(n).
  2. Log bases vanish in big-O:
    log_2(n) and ln(n) differ by a constant factor, so both are O(log n) (see the sketch after this list).
  3. Dominant term wins for sums (with a fixed number of terms):
    If f(n) = n^3 + 20n + 7, then f(n) = Theta(n^3).
  4. Multiplication behaves nicely:
    If f = O(F) and g = O(G), then fg = O(FG). Similarly, if f = o(F) and g = O(G), then fg = o(FG).
  5. Transitivity (the chain rule of sanity):
    If f = O(g) and g = O(h), then f = O(h). Likewise for little-o.
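
Rule 2 in action (illustration only): the ratio log_2(n) / ln(n) is the same constant at every n, which is exactly why the base never survives a big-O statement:

import math

for n in [10, 1000, 10**6]:
    print(n, math.log2(n) / math.log(n))
# Always about 1.442695 (= 1/ln 2): a fixed constant factor, invisible to big-O.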

The common mistake that makes people think they “won” the argument

People say: “Heapsort is O(n log n)” and use that as if it meant “exactly n log n”.

But big-O is only an upper bound. Heapsort is also O(n^2) and even O(2^n); both statements are technically true and completely uninformative. Conversely, a function can be O(n^2) while staying far below n^2 in practice. If you mean “tight”, use Theta. Knuth wrote about this exact confusion and pushed for clearer use of O, Omega, and Theta in CS.
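
Concretely (my example, not from the source): f(n) = n satisfies the big-O definition with g(n) = n^2, C = 1 and n0 = 1, so “n = O(n^2)” is a true statement. But n is certainly not Theta(n^2), because n / n^2 = 1/n -> 0, so no positive constant C can keep n >= C * n^2 for large n. True upper bounds can still be wildly loose; Theta is what rules that out.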


“Theorems” and famous results where this notation shows up

You do not need these to use Landau symbols, but they explain why mathematicians love them:

  • Stirling’s formula uses asymptotic equivalence (~) and is often paired with O and o to quantify the error term in factorial growth.
  • Prime Number Theorem is commonly written with ~ (pi(x) ~ x/log x) and then sharpened with error terms using O/o notation.
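
For reference, the usual statements (standard results, quoted from memory rather than from the sources below):
Stirling: n! ~ sqrt(2*pi*n) * (n/e)^n, often sharpened to ln n! = n ln n - n + (1/2) * ln(2*pi*n) + O(1/n).
Prime Number Theorem: pi(x) ~ x / ln x, sharpened to pi(x) = Li(x) + O(x * exp(-c * sqrt(ln x))) for some constant c > 0.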

Tiny sanity check: are we sure this is the right “Landau law”?

There are other “Landau laws” in physics and applied fields (for example, “Landau’s law” in nonlinear elasticity). Still, our description – formulas, theorems, back-and-forth – fits the standard maths/CS meaning: Landau symbols / Bachmann-Landau notation.


Sources

  • Big Technology – Sam Altman on OpenAI plan to win (transcript and figures): https://www.bigtechnology.com/p/sam-altman-on-openais-plan-to-win
  • Reuters – OpenAI fundraising and IPO talk (18 Dec 2025): https://www.reuters.com/technology/openai-discussed-raising-tens-billions-valuation-about-750-billion-information-2025-12-18/
  • Google Blog – Introducing NotebookLM (12 Jul 2023): https://blog.google/technology/ai/notebooklm-google-ai/
  • TechCrunch – NotebookLM standalone apps (19 May 2025): https://techcrunch.com/2025/05/19/google-launches-standalone-notebooklm-app-for-android/
  • Google Blog – Introducing Nano Banana Pro (20 Nov 2025): https://blog.google/technology/ai/nano-banana-pro/
  • DeepMind – Veo model page: https://deepmind.google/models/veo/
  • The Verge – Veo 3 vertical video and pricing update: https://www.theverge.com/news/774352/google-veo-3-ai-vertical-video-1080p-support
  • YouTube – Big Technology Podcast episode (video link): https://youtu.be/2P27Ef-LLuQ?si=ZMvCHgpZaijEMQ7d

Sources (for Landau law section)

  • Wikipedia – Big O notation (includes Bachmann-Landau family and formal definitions): https://en.wikipedia.org/wiki/Big_O_notation
  • MIT (lecture note PDF) – Big O notation, also called Landau’s symbol (definitions + examples): https://web.mit.edu/16.070/www/lecture/big_o.pdf
  • Wolfram MathWorld – Landau Symbols (big-O and little-o definitions): https://mathworld.wolfram.com/LandauSymbols.html
  • Donald E. Knuth (1976, PDF) – “Big Omicron and Big Omega and Big Theta” (notation clarity and common misuse): https://danluu.com/knuth-big-o.pdf
  • Internet Archive – Edmund Landau, Handbuch der Lehre von der Verteilung der Primzahlen (historical reference): https://archive.org/details/handbuchderlehre02landuoft
  • Wikipedia – Landau’s function g(n) (definition, example g(5)=6, growth statement): https://en.wikipedia.org/wiki/Landau%27s_function
  • arXiv survey paper on Landau’s function g(n) (more detail): https://arxiv.org/abs/1312.2569

Satire and commentary. Opinion pieces for discussion. Sources at the end. Not legal, medical, financial, or professional advice.

