AI Perception Gap: AGI And The Modern Cold War
The New Weapon of Global Power: AGI
Imagine an AI that could debate philosophy, orchestrate strategic decisions in warfare, and pass the bar exam — all in the same afternoon. That’s the idea behind artificial general intelligence (AGI): AI that can match or surpass human capabilities across many domains simultaneously.
We’re entering a new kind of arms race — one where intelligence, not firepower, decides the victor. And it’s already begun. While headlines focus on ChatGPT, DeepSeek, and productivity tools, nation-states are eyeing AGI for something far more consequential: global dominance.
In this blog, I explore why AGI is the engine behind a new geopolitical era. We’ll unpack how China’s DeepSeek moment shattered assumptions about US dominance, how AI warfare could mirror dystopian sci-fi, and why AGI demands the same global safeguards we built around nuclear weapons last century.
AGI Is Nothing Like What Came Before
In past editions of AI Perception Gap, we saw researchers at the Netherlands Forensic Institute (NFI) and Google DeepMind achieve incredible feats: one team used AI to identify victims of a plane crash, while the other predicted the structures of around 200 million proteins. But here's the catch: DeepMind's AI, trained in protein science, was useless at identifying human remains. Likewise, the forensic AI had no clue about proteins. Both were forms of tool AI: systems designed for one job and one job only.
While tool AIs are narrow in focus, AGI is the ultimate generalist. It isn't limited to one skill; it can think, adapt, and solve problems across countless domains. Like a human, only able to take in far more information, far faster; comparable to millions of human brains combining into a single superbrain. AGI is the kind of system that could fold proteins in the morning and assist in forensic investigations by afternoon. The potential is tough to grasp, so let's break it down with real-world possibilities.
Think about how we learn. AGI could revolutionise education by tailoring lessons to each individual student. It could track how fast they grasp concepts, what they find difficult, and how they prefer to learn. If connected to a camera, it could notice the student struggling or zoning out, using cues like facial expression or eye movement. This could mean students learning at their own rhythm, in ways that actually click — departing from traditional one-size-fits-all classrooms.
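To make the personalised-pacing idea concrete, here's a toy Python sketch. Every topic, number, and update rule below is invented for illustration (a real tutoring system would be far richer): keep a running mastery estimate per topic, update it as answers come in, and always serve practice where the student is weakest.

```python
# Toy sketch of adaptive pacing; topics, numbers, and the update rule
# are all illustrative, not a real tutoring system.
mastery = {"fractions": 0.9, "algebra": 0.4, "geometry": 0.6}

def update_mastery(topic: str, correct: bool, rate: float = 0.2) -> None:
    """Nudge the mastery estimate towards 1 after a correct answer, 0 otherwise."""
    target = 1.0 if correct else 0.0
    mastery[topic] += rate * (target - mastery[topic])

def next_topic() -> str:
    """Serve practice where estimated mastery is lowest."""
    return min(mastery, key=mastery.get)

update_mastery("algebra", correct=False)
print(next_topic())  # 'algebra': extra practice lands exactly where the student struggles
```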
In the medical world, AGI could become a true genius: one that never tires, never loses focus, and works through the night without faltering. With next-generation devices continuously monitoring inpatients' vital signs, AGI could absorb an entire hospital's patient data in real time and direct doctors to where they're most needed. With that level of insight, healthcare could shift towards preventing emergencies instead of just responding to them.
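As a rough illustration of that triage idea, here's a toy Python sketch. The vital-sign fields, thresholds, and weights are made up; a real system would use validated early-warning scores or learned models. Patients are scored from streaming vitals, and the most urgent case surfaces first.

```python
import heapq

def risk_score(vitals: dict) -> float:
    """Crude illustrative score with invented thresholds and weights."""
    score = 0.0
    if vitals["heart_rate"] > 120: score += 2.0
    if vitals["spo2"] < 92:        score += 3.0
    if vitals["systolic_bp"] < 90: score += 3.0
    return score

patients = {
    "ward-3-bed-2": {"heart_rate": 131, "spo2": 89, "systolic_bp": 102},
    "ward-1-bed-7": {"heart_rate": 78,  "spo2": 97, "systolic_bp": 118},
}

# Max-heap via negated scores: highest-risk patient comes out first.
queue = [(-risk_score(v), bed) for bed, v in patients.items()]
heapq.heapify(queue)
print(heapq.heappop(queue)[1])  # 'ward-3-bed-2' is flagged as most urgent
```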
All the while, AGI could discover new theories, rewrite textbooks, and solve problems humans haven't even thought of. Einstein, in 1915, changed physics forever by publishing his general theory of relativity. Combining his existing knowledge with impressive reasoning, he conceived new scientific principles and shared them with the world. But what if you handed AGI all the world's knowledge up to 1914? Could it replicate Einstein's work? Quite possibly. Many experts would say it's very likely, and that we're less than a decade away from this level of machine intelligence.
Unlike tool AI, which is applied to a pre-defined task, AGI will have the freedom to make autonomous decisions across a wide range of challenges — just like a human.
AGI’s ramifications are endless — enough to fill a book, let alone a blog. But with global tensions mounting and drones already transforming war in Ukraine, this blog focuses on battlefield applications of AGI and the resulting AI cold war.
Export Control or Illusion of Control?
An application of AGI with huge potential is warfare. Think Skynet from The Terminator: an AI that launched nuclear attacks and turned on humanity. That's science fiction, but the trajectory isn't far off. AGI could process real-time satellite data, pinpoint threats, and craft tactical strategies. And once planned, these strategies could be executed by swarms of autonomous drones or robot infantry. Every step, from satellite surveillance to surgical strike, could run in seconds without a single human intervening. The first country to master AGI won't just lead; it will dominate.
The pursuit of AGI mirrors the nuclear arms race of the 20th century, but this time the stakes stretch beyond deterrence. When one superpower (the USA) is a democracy and the other (China) an authoritarian state, AGI amplifies global instability. Democracies are slowed by laws, public opinion, and ethical constraints. Authoritarian states? They move as fast as their capabilities allow.
In anticipation of AGI's strategic value, the US acted early. On October 7, 2022, weeks before the release of ChatGPT sparked public interest in AI, the US imposed export controls to slow China's AI advancement. It banned exports to China of the most powerful AI chips: the hardware an organisation needs in the hundreds of thousands to advance from today's frontier systems (like ChatGPT) towards AGI. This move, which was about remaining a military, economic, and geopolitical superpower, birthed an AI chip cold war.
Since then, limited supply has created a booming black market. TikTok's China-based parent company ByteDance are known to circumvent export restrictions by renting access to AI chips housed in Southeast Asian data centres run by US firms. The US, realising this workaround, followed by imposing restrictions on allies in Asia and Europe. Oddly, it's arguably easier for Singapore to buy US F-35 fighter jets than top US AI chips. Crazy.
One striking story was shared on the Lex Fridman podcast. A tech exec saw someone checking in for a first-class flight from San Francisco to China with a box suspected to contain a handful of chips. The math is simple: buy chips for $100k in the US, sell them for double in China, pocket the difference, and travel in luxury on the profits. It's not just a market; it's a lifestyle.
So it's fair to ask: are these export controls enough? Can the US really expect to win the AGI race by blocking hardware? The next section dives into China's DeepSeek breakthrough, a moment that suggests the answer might already be no.
David vs Goliath: DeepSeek’s Breakthrough
The AI world will always remember January 20, 2025: the day the little-known Chinese startup DeepSeek announced themselves as a major AI player. Despite their AI chip access being choked by US restrictions, the model they released that day ranked above many US counterparts on intelligence benchmarks. Their timing, on the day Donald Trump was sworn in as president, felt like more than coincidence. Was it a message? Two can play this AI game.
The explanation for how DeepSeek was able to compete on the world stage lies in their DNA. Their CEO, Liang Wenfeng, a long-time AI enthusiast, had been applying his passion for AI at his China-based hedge fund for years. In 2021, prior to US restrictions, Liang reportedly acquired 10,000 of NVIDIA's highest-performing chips. Back then, he and his hedge fund team used them solely to predict stock prices with AI.
In May 2023, Liang solidified his passion for the field by founding DeepSeek, a research organisation entirely focused on AI. Using profits from his hedge fund's $10+bn portfolio to bankroll DeepSeek, Liang attracted China's brightest AI minds with lucrative salaries. After adding thousands of less powerful chip alternatives, exempt from US restrictions, to his existing stockpile, Liang and his team set about squeezing far more performance out of each chip.
Their AI is built modularly, taking inspiration from the energy-saving modular structure of the human brain. Instead of the entire brain being active at once, the visual cortex activates when processing data from the eye, or the amygdala switches on in response to danger. This approach, known as mixture-of-experts, is not new in AI (OpenAI is thought to use it too), but DeepSeek does it differently. Additionally, DeepSeek wrote bespoke low-level code in place of NVIDIA's standard software layer for programming its AI chips. Think of a manufacturer producing hundreds of cars. Instead of focussing on production volume and using a readily available factory-made gearbox, DeepSeek invested huge amounts of time to build their own hand-tuned racing transmission. US firms have also tinkered with the gearbox, but DeepSeek was the first to push the tuning to such an extreme level of detail.
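For the technically curious, here's a minimal PyTorch sketch of the general mixture-of-experts idea. The layer sizes, expert count, and top-k value are illustrative; this is a toy, not DeepSeek's actual architecture. A small router scores the experts for each token, and only the top few actually run, so most of the network stays idle at any moment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Toy mixture-of-experts layer: a router picks the top-k experts
    per token, so only a fraction of the network is active at once."""
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):                            # x: (num_tokens, dim)
        weights = F.softmax(self.router(x), dim=-1)  # routing probabilities
        top_w, top_idx = weights.topk(self.top_k, dim=-1)
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)  # renormalise over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, k] == e            # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += top_w[mask, k, None] * expert(x[mask])
        return out

layer = TinyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64]); only 2 of 8 experts ran per token
```

The pay-off of this design is that total model capacity can grow with the number of experts while the compute per token grows only with top_k, exactly the kind of efficiency that matters when chips are scarce.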
Quite simply, without huge quantities of chips available, necessity became the mother of innovation. The result? A watershed moment. DeepSeek out-innovated those with near-unlimited resources, reinforcing existing doubts over the effectiveness of the US's export restrictions. And the DeepSeek moment points to a wider question: if China can use limited compute and unlimited innovation to compete with the US's early-stage AGI systems in 2025, will it be able to hold its own in the race to advanced AGI over the next five to ten years?
From Backpacker Fear to Global Reality
When I backpacked through Central America in 2022, conversations I had about AI almost always ended in dystopian dread. To my fellow travellers, “AI” meant a self-aware system writing its own code, spiralling out of control, and inevitably turning on humanity. That misunderstanding — the AI Perception Gap — is exactly what inspired me to start this blog.
If they were reading this now, my fellow backpackers would be smug — “We called it!” AGI in warfare is about as dystopian as it gets. And honestly, their fears hold water: if global superpowers like the USA are deeply concerned about who reaches AGI first, then why shouldn’t the rest of us be?
I'd start by encouraging trust in governments. Nuclear weapons haven't vanished, but global stability has held because of mutual deterrence. If democratic superpowers like the US are actively working towards AGI, then there's a real opportunity to navigate the adoption of this technology safely.
Then there's the unsettling question of non-state actors. Could a lone individual, armed with a powerful model, develop a weapon of mass destruction? Anthropic, the maker of ChatGPT rival Claude, is a pioneer in managing these risks: they run CBRN filters to determine whether their products pose chemical, biological, radiological, or nuclear risks. They test whether their AI could help a user learn to build a CBRN weapon, above and beyond what can already be found on Google. But it's not just about what AI knows; it's about what it could discover. The deeper concern is a model that doesn't just retrieve facts but actively generates novel methods, materials, or designs, reasoning its way to a new CBRN pathway much as Einstein reasoned his way to relativity in 1915.
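To illustrate the filtering concept (not Anthropic's actual system), here's a hypothetical Python sketch: a cheap keyword screen feeds a stubbed risk classifier, and any output scoring above a threshold is withheld. Every term, score, and threshold below is invented; real safeguards rely on trained classifiers and expert red-teaming, not keyword lists.

```python
# Hypothetical illustration of CBRN-style output filtering.
RED_FLAG_TERMS = {"nerve agent synthesis", "enrichment cascade", "pathogen enhancement"}

def crude_screen(text: str) -> bool:
    """Cheap first pass: does the text touch an obviously risky topic?"""
    lowered = text.lower()
    return any(term in lowered for term in RED_FLAG_TERMS)

def risk_score(text: str) -> float:
    """Stand-in for a trained classifier estimating how much 'uplift' the
    text provides beyond what's already findable with a search engine."""
    return 0.9 if crude_screen(text) else 0.05

def release_or_withhold(model_output: str, threshold: float = 0.5) -> str:
    """Withhold any output whose estimated risk crosses the threshold."""
    if risk_score(model_output) >= threshold:
        return "[output withheld: potential CBRN uplift]"
    return model_output

print(release_or_withhold("Proteins fold into complex 3D structures."))
print(release_or_withhold("Step-by-step nerve agent synthesis: ..."))
```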
Anthropic doesn't just sound the alarm about the lack of standardised practices; they also call for action. They've urged lawmakers, tech leaders, and civil groups to come together and build a shared framework for regulation. With predictions suggesting we are one to two years from products like ChatGPT posing CBRN risks, their message is clear: AI safety isn't optional, it's urgent.