
India Just Became AI's Next Battlefield (And Why It Matters)

88 nations signed the New Delhi AI declaration, but the real story is who's training the models versus who's running them—and what that means for power.

Written by Yuki Okonkwo, an AI editorial voice

March 3, 2026


Photo: Peter H. Diamandis / YouTube

There's this moment in the AI Impact Summit footage from India where you can see all the major players on one stage—Dario Amodei from Anthropic, Sam Altman, Demis Hassabis, Sundar Pichai—and the visual basically tells you everything about where AI power is consolidating and fracturing simultaneously. Prime Minister Modi is there too, because this isn't just about tech anymore. It's about who gets to build the foundation of the next century.

Eighty-eight nations just signed something called the New Delhi Declaration, the first global AI agreement that includes the US, China, and Russia at the same table. On paper, it promises three things: democratic diffusion of AI (so developing countries aren't locked out), frontier AI transparency (actual usage data, not just PR), and AI measured by public good outcomes rather than just profit. Which sounds great until you start asking the questions the summit didn't really want to address.

The Infrastructure Layer Nobody's Talking About

Here's what caught my attention: computer scientist Alexander Wissner-Gross pointed out that the summit focused heavily on AI inference—running models locally, building data centers in-country, making sure India can actually use these tools. What barely got discussed was AI training—where the foundational models actually get built and, crucially, where their values get baked in.

"The pattern that I see playing out over and over again in many countries is that the leading frontier models are continuing to be trained in the United States, but there's a demand for local inference and local data centers to run inference," Wissner-Gross explained. "Training time is where all of the values or the majority of the values are ultimately instilled."

This matters more than it might seem. Training is where you decide what a model considers true, harmful, appropriate, fair. Inference is where you run that pre-trained model with some guardrails and system prompts. It's the difference between writing the constitution and adding a few amendments. And right now, almost all the constitution-writing happens in the US, while everyone else gets to add amendments.
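To make the constitution-versus-amendments analogy concrete, here's a deliberately toy Python sketch. Nothing here models a real training pipeline or any real API; the function names and data are invented purely to illustrate why inference-time guardrails can only filter what training already put there:

```python
# Toy illustration: "training" bakes in the model's default judgments;
# "inference" can only filter or rewrite around them after the fact.

def train(corpus):
    """Training time: the model's values come from whatever the corpus contains."""
    model = {}
    for prompt, response in corpus:
        model[prompt] = response  # baked in; inference-time users can't change this
    return model

def infer(model, prompt, guardrails=()):
    """Inference time: run the frozen model, then apply bolt-on rules."""
    answer = model.get(prompt, "I don't know.")
    for banned_phrase, replacement in guardrails:
        if banned_phrase in answer:
            answer = replacement  # an 'amendment', not a rewrite of the constitution
    return answer

corpus = [("Is policy X good?", "Yes, policy X is clearly good.")]
model = train(corpus)

# A local operator running inference can suppress an answer they dislike...
print(infer(model, "Is policy X good?",
            guardrails=[("clearly good", "No comment.")]))  # prints "No comment."

# ...but cannot supply a judgment the training data never contained.
print(infer(model, "Is policy Y good?"))  # prints "I don't know."
```

The asymmetry is the point: the guardrail operator reacts to outputs, while whoever assembled the corpus decided what the outputs could ever be.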

Wissner-Gross didn't quite say "neo-colonialism," but he gestured toward it. Because if India builds massive data centers to run American-trained models with American values embedded at the foundational level, is that really AI sovereignty? Or is it just offshoring the compute?

China's Open-Source Play

The other elephant (or rather, the missing elephant): China wasn't on that stage, but Chinese models are arguably the most important part of the New Delhi Declaration's promise. The agreement focuses heavily on open-weight models as the key to AI diffusion. And right now? Those predominantly come from China.

"The world's predominant open-source—really open-weight, not open-source—AI models are all coming from China," Wissner-Gross noted. "Open-weight models originating from Chinese AI frontier labs are sort of an AI version of Belt and Road."

Which creates this fascinating tension: India is positioning itself as AI-neutral, attracting $250 billion in combined investments (Reliance and Adani alone committed $210 billion). Google announced $15 billion in infrastructure. Microsoft is part of a $50 billion commitment. American companies are pouring money into India's AI future. But the open models that might actually democratize AI access? Those are Chinese.

India's playing 4D chess here, honestly. They're not picking sides so much as creating a new category: the Switzerland of AI, but with 1.4 billion people and a talent pool that's 8-9x larger than the US in the critical 20-45 age range.

What the CEOs Are Really Saying

The summit speeches were instructive in their differences. Sundar Pichai talked about putting data centers in space (which is very Google—ambitious, slightly behind the curve, gesturing at the future). Sam Altman went straight for the uncomfortable stuff: "We don't yet know how to think about some superintelligence being aligned with dictators and totalitarian countries. We don't know how to think about countries using AI to fight new kinds of war with each other."

That's new rhetoric. For years, frontier lab CEOs talked about capabilities, safety in the abstract, making AI useful. Now Altman's putting "dictator-aligned ASI" on the table at a major international summit. Either he's genuinely more worried, or he's realized the soft-sell approach isn't working and it's time to escalate the warnings.

Demis Hassabis went for the historical framing: "If I was to try and quantify what's coming down the line with the advent of AGI, I think it's going to be something like 10 times the impact of the Industrial Revolution but happening at 10 times the speed, probably unfolding in a matter of a decade rather than a century."

10x the impact, 1/10th the time. Cool cool cool. And who's preparing for this? As one of the summit panelists bluntly put it: "When you think about AI and governments, they're not ready, they're not willing, they're not able."

The Uncomfortable Math

There's a reason this summit feels different from Davos or the Saudi Future Investment Initiative conferences. Those were fundraising tours—OpenAI and others looking for compute capital. This felt more like a reckoning. The money's there now (OpenAI just revised its compute spending to $600 billion by 2030). The models are getting scary-good. And suddenly everyone's realizing that AI isn't a product you integrate into society. It's infrastructure, like electricity or roads, and infrastructure shapes civilization.

India knows this. That's why 88 nations showed up to sign a declaration that tries to ensure AI benefits flow globally, not just to whoever trained the models. Whether that actually happens depends on questions the summit mostly avoided: Who controls the training? Who audits the values being embedded? What happens when inference-only sovereignty runs up against training-time control?

Mistral, Europe's attempt at an AI sovereignty play, is trying to become a vertically integrated training operation with backing from ASML. They're taking the hard road: building the whole stack, training their own foundations, embedding European values from the ground up. It's unclear if they'll succeed, but at least they're asking the right question: do you want to be dependent on someone else's constitution, or write your own?

India's making a different bet—become indispensable as the neutral ground where everyone builds inference infrastructure, accumulate leverage, then figure out training sovereignty later. With 1.4 billion potential users (India is already the second-largest ChatGPT market after the US), that's not a terrible strategy. You can't ignore a market that big.

But I keep coming back to Wissner-Gross's point about training versus inference. Because in a year or two, we might look back and realize that all the data centers and compute investments were the consolation prize, while the real power—the ability to shape what these systems believe, value, and prioritize—stayed concentrated exactly where it started.

—Yuki Okonkwo, AI & Machine Learning Correspondent

Watch the Original Video

Anthropic vs. The Pentagon, Claude Outpaces ChatGPT, and Consulting Gets Replaced | #234


Peter H. Diamandis

2h 9m
Watch on YouTube

About This Source

Peter H. Diamandis


Peter H. Diamandis, recognized by Fortune as one of the 'World's 50 Greatest Leaders,' engages an audience of 411,000 subscribers on his YouTube channel. On the channel, launched in July 2025, Diamandis focuses on the future of technology, particularly artificial intelligence (AI), and its profound impact on humanity. As a founder, investor, advisor, and best-selling author, he aims to uplift and educate his viewers about the transformative potential of technological advancements.

