More than a decade ago, when Sam Altman was discussing the launch of a new artificial intelligence venture, he reached for a dramatic analogy: a “Manhattan Project for AI.” That project eventually became OpenAI, the company behind ChatGPT and one of the most powerful AI firms in the world.
Altman was concerned that the future of AI would be dominated by a single profit-seeking mega-corporation like Google. But, over time, the Manhattan Project analogy revealed something deeper: from the beginning, some of the most influential figures in Silicon Valley viewed AI as a strategic capability on the scale of nuclear weapons.
That mindset has only hardened. Eric Schmidt, Google’s former CEO, who was serving as the company’s executive chairman during that same 2015 timeframe, now openly compares the ongoing race for superintelligence to the original Manhattan Project. With global investment in AI training doubling every year for nearly a decade, Schmidt predicts that the immense data centers housing the most capable systems will soon be effectively nationalized. Several modern-day “Manhattan Projects” are already underway across the globe, driven by an urgency reminiscent of the mid-20th century, when the United States poured roughly 0.4% of its GDP into developing nuclear capabilities ahead of its rivals.
The nuclear analogy raises an uncomfortable question: should we ensure that superintelligence never falls into the “wrong hands”? And who gets to define what “wrong hands” means? This is no longer a corporate dispute: it is unmistakably geopolitical.
Anthropic CEO Dario Amodei has openly argued that the most advanced AI systems should not empower China and, more broadly, should not strengthen authoritarian states. He has also supported restrictions on advanced chip exports, precisely to slow the progress of geopolitical rivals. In that worldview, limiting access to AI is part of the strategy, rather than an unfortunate side effect of competition.
And this is no longer a debate about some distant future. This year, Anthropic described a frontier model, Mythos, so capable in cybersecurity tasks that the company decided not to release it publicly. Instead, access was reportedly limited to select partners, including major corporations and, according to media reports, US government agencies. Whether one agrees with that decision or not, it points to the direction of travel: not universal access, but tiered access; not open capability, but controlled distribution.
Where does this leave the rest of the world, including nations like Georgia? At first glance, the rest of the world may seem to have little reason to worry. Consumers can still subscribe to ChatGPT or Claude for $20 to $200 a month. Open-weight models from China, France, and elsewhere can still be downloaded and, in some cases, run on local hardware. Startups can still build on foreign APIs. Researchers can still experiment. But this should not be confused with sovereignty.
Subscription access is not control. API access is not resilience. Open weights without affordable computing power (compute) offer only theoretical freedom.
The likely direction is obvious. Closed models will become more expensive, more restricted, and more politically conditioned. Those hoping to run open-weight models will slam into barriers when attempting to acquire the cutting-edge chips required to power them.
Returning to the Manhattan Project analogy, it is highly likely that we will soon witness the formation of a new “Nuclear Club” for artificial intelligence. We can expect the eventual drafting of a digital Non-Proliferation Treaty, designed explicitly to prevent sovereign states outside the dominant superpowers from possessing their own high-end, unrestricted AI capabilities.

Whether a country earns a seat in this exclusive club depends entirely on how it acts today. Securing a place in the future of AI is not just a matter of capital: it requires forward-thinking legislation, the cultivation of elite tech talent, and the formation of strategic international partnerships.
For Georgia, Sovereign AI does not have to begin with building its own frontier model. It can begin with something more realistic: securing local hosting, trusted cloud capacity, resilient public-sector infrastructure, and a path toward shared national compute.
The path to Sovereign AI is anything but straightforward, and navigating it will require highly unusual alliances. The companies building decentralized AI networks seek to solve the most critical bottleneck in the industry: access to compute. Instead of nations spending billions of dollars competing for limited and soon-to-be outdated Nvidia chips, anyone, from an individual developer to a sovereign government, could tap into a decentralized network.
The promise is guaranteed access to powerful computing resources, beyond the reach of cross-border censorship and geopolitical embargoes.
This is not a magical solution. Decentralized compute does not eliminate the need for policy, legal clarity, talent development, cybersecurity, or long-term institutional planning. It does not instantly create national AI capacity. But for countries that do not have the budgets of great powers, it may become one of the few realistic ways to avoid total dependence on a narrow set of foreign platforms.
The age of AI is being shaped more and more by Manhattan Project logic: concentration, secrecy, strategic competition, and restricted access. In such a world, countries without limitless resources cannot afford passivity. They will not secure their future by waiting politely for a seat at someone else’s table.
They will have to be faster, more pragmatic, and more creative.
Op-ed by Sultan Suleimanov
Sultan Suleimanov is a Research Analyst at DeAI Nation, an organization working to support and promote a decentralized AI ecosystem. Prior to joining DeAI Nation, he worked at Russian independent media outlets for more than a decade.