AI Race: The Promise and Perils of Techno-Utopians

Exploring the accelerating global race for AI dominance

The accelerating global race for artificial intelligence (AI) dominance—driven by a belief in its inevitability, vast commercial promise, and geopolitical urgency, particularly with respect to U.S. competition with China—raises serious concerns. Although AI’s transformative potential is undeniable, development is proceeding with insufficient regard for systemic risks, including mass labor displacement, erosion of human agency, and inadequate governance. The prevailing “Techno-Utopian” vision—championed by leading technologists and venture capitalists as a mix of ideology and business model that minimizes downside risks in favor of unbounded growth—therefore warrants critique.

Drawing historical analogies to the 1930s Technocracy movement and the British East India Company, the authors explore how Big Tech increasingly functions as a sovereign actor, reshaping governance and society with limited accountability. Despite high-profile warnings from AI pioneers and corporate leaders, global coordination remains fragmented, with the United States lagging in regulation and safety investment. Without a course correction toward human-centered innovation and enforceable global guardrails, AI risks being shaped not by ethics and foresight, but by power and profit.

The Red Cell Project

The Red Cell series is published in collaboration with The National Interest. Drawing upon the legacy of the CIA’s Red Cell—established following the September 11 attacks to avoid similar analytic failures in the future—the project works to challenge assumptions, misperceptions, and groupthink with a view to encouraging alternative approaches to America’s foreign and national security policy challenges.

A belief in the inevitability of artificial intelligence (AI), the promise of boundless benefits, and the fear of losing to China, or to each other, are driving American AI industry rivals into a furious, two-tiered race for AI dominance. This race is accelerating with insufficient regard for the risks—many raised by AI creators and corporate leaders themselves. Concerns about safety, human impact, and the inability to prevent catastrophic outcomes have been downplayed in the pursuit of speed and supremacy. The challenge is not to stop the train or even slow it down—rather, it is to shift direction to ensure that innovation remains aligned with human objectives.

Techno-Utopia: “More Everything, Forever”

“We are building a brain for the world,” explains OpenAI CEO Sam Altman in a blog post about artificial superintelligence, defined as AI that exceeds human intelligence. Robots could one day run entire supply chains by extracting raw materials, driving trucks, and operating factories. More remarkable still, some robots could manufacture other robots to construct chip fabrication plants, data centers, and other infrastructure. Machines will not just power the future; they will build it—indefinitely.

Tech leaders and venture capitalists like Elon Musk, Mark Zuckerberg, and Marc Andreessen envision a techno-determined future where AI eradicates disease, reverses environmental collapse, discovers limitless energy, and ushers in an era of abundance. What’s not to like? However, tech leaders also envision that AI will transform society so completely that people will retreat into the metaverse, form AI friendships, transact in cryptocurrencies, and rely on universal basic income funded partly by tech companies. Some take it to an extreme, darker vision in which nation-states gradually dissolve into network states governed by technocratic elites—a modern version of Plato’s philosopher-kings. Realistic or not, this vision is wholly transformative in its ambition, yet it fails to wrestle adequately with the downside consequences. It mirrors Aldous Huxley’s dystopian vision in Brave New World, and its core conviction is that technology, especially AI, will deliver “more everything, forever.”

Technocracy Reborn

Such a world echoes the 1930s Technocracy movement, which counted Elon Musk’s grandfather Joshua N. Haldeman among its leaders and advocated rule by engineers and scientists but failed to gain traction. In Marc Andreessen’s 2023 reboot, the “Techno-Optimist Manifesto,” the venture capitalist argues that all problems, natural and technological, can be solved with more technology. A similar mindset had already ignited a fierce backlash in 2018, sparked by technology companies’ unchecked power over user data and the content on their platforms, which shattered consumer trust and the illusion that more technology inevitably leads to progress.

Andreessen identifies “the enemy” of progress as statism, collectivism, central planning, and monopolies. This, too, is familiar: free-market capitalism with minimal regulation is the same environment that fostered the early growth of the Internet. Andreessen’s manifesto treats society like a broken app: flawed but fixable, so long as developers keep upgrading the technology stack. In reality, the enemies he identifies are complex outcomes of human history, power struggles, and clashing ideologies. They are not malfunctions to be overcome with a software patch; they are part of the human condition.

Conspicuously absent is any account of where humans fit into the picture when machines do all the heavy lifting. In this vision, machines think, build, and decide for people—not with them. It is a future of engineered consensus, where politics and the consent of the governed disappear alongside human agency. No wonder some in the industry, like Google CEO Sundar Pichai, support integrating social scientists, philosophers, and ethicists into the conversation—not least to remind Silicon Valley’s data scientists and venture capitalists that human agency must remain intact. That is the part of the software that needs to be patched.

Social Dislocation and the Rise of the Techno-State

It would be a mistake, however, to see the Techno-Utopians as merely profit-seekers or ideologues. They have good reason to believe that AI can solve previously intractable problems. DeepMind’s AlphaFold has already revolutionized biology, earning its creators a 2024 Nobel Prize for accurately predicting protein structures, an advancement that accelerates drug discovery. Google’s earthquake alerts improve public safety, and AI-driven cooling can cut commercial energy consumption by 9-13 percent. AI unequivocally has enormous potential to improve human life.

However, AI also promises to transform civilization with unprecedented speed, all the while disrupting societies. Historically, even in societies with abundant leisure time, people have had purposeful work. Yet talk of universal basic income suggests otherwise. When the Industrial Revolution displaced pre-industrial and rural physical labor, new jobs were created and change unfolded over a few centuries, giving societies time to adjust. The AI revolution, in contrast, threatens to rapidly replace knowledge workers without offering any immediate fallback jobs. Half of entry-level white-collar jobs might vanish within five years. Today’s business school graduates cannot become tomorrow’s bankers if the path from apprenticeship to expertise collapses without something to replace it. Hollowing out the professional class destroys upward mobility by replacing higher-wage, middle-income jobs with fewer and lower-wage roles that merely support automated systems. Furthermore, when AI automates jobs that rely on reasoning, judgment, accountability, ethics, and morality—human faculties it lacks—performance and safety could suffer.

The relentless push for speed overshadows the essential task of preparing for change. AI is already remaking society, transforming how Americans communicate, work, wage war, and engage in politics. It has become the leading edge of the national securitization of economic decision-making and geo-strategic competition. Society’s transformation accelerates as the technology hooks users and spreads so quickly that they must adapt. China’s rapid deployment of AI—in surveillance, governance, and military applications—is often invoked to justify further acceleration. However, if America leads in developing cutting-edge AI models while China leads in deploying its less advanced ones, the challenge is not just to deploy faster. It is to lead by deploying differently, namely by prioritizing safety, transparency, and preparedness for disruption, thus anchoring American AI deployment in human-centric principles rather than geopolitical urgency.

In a recent Foreign Affairs essay, American political scientist Ian Bremmer describes this new world as “technopolar,” where the technology industry is an omnipresent, non-state actor, shaping geopolitics and public life. Without question, the major U.S. technology companies, or “Big Tech” firms, are performing roles historically the domain of nation-states, and in some respects, have fused with the state. While it is unclear whether they constitute a global “pole” like China, Russia, or the European Union (EU), it is certain that Big Tech represents a new—largely unaccountable—form of power.

Big Tech Channels the British East India Company

A better analogy for Big Tech’s weight is the British East India Company, which, from 1600 to the early 1800s, acted as a sovereign entity with immense autonomy, backed by the British Crown. The East India Company created a large part of the British Empire, acquiring colonies with its own army, managing its own administration, and giving the Crown a cut. It took about two centuries for the nation-state to reassert itself and dissolve the company. Likewise, today’s technology giants have amassed enormous power, influencing state behavior through their control of digital infrastructure and platforms that have become integral to modern society. The main question is: Will nation-states reassert sufficient control?

Probably, but not anytime soon. Big Tech is shaping U.S. government policy (evident in the administration’s recently released AI Action Plan) partly because Congress has only recently begun to act, and partly due to the rapid pace of change. Furthermore, since China debuted its DeepSeek AI model, geopolitics has fostered a symbiosis between government and industry. Unlike in China, where technology companies are firmly under the control of the party-state, in the United States, Big Tech is influencing the government’s approach. The industry’s political footprint in Washington has expanded as public scrutiny has grown: U.S. technology giants spent $61 million on lobbying in 2024. Musk alone donated $288 million in campaign funds last year.

As AI’s full social, economic, and geostrategic impact begins to be felt, AI’s creators are increasingly acknowledging that they do not fully understand how these systems function. Many systems perform well in low-stakes, error-tolerant environments. But engineering control becomes urgent in high-impact domains, like finance, law, health care, and defense, where failures have serious consequences. Hallucinations—false or fabricated responses—are common in complex reasoning tasks, approaching 80 percent in some evaluations.

If the engineers who design these systems cannot satisfactorily explain how they work, can AI truly be controlled? DeepMind cofounder Mustafa Suleyman is unsure. Others say that identifying failure modes is difficult—rendering a “kill switch” (to shut down AI if it fails) unrealistic. Yoshua Bengio, a pioneer in the field, has called for investment in advanced monitoring to control agentic AI, which can act autonomously, before “we build things that can destroy us.”

Not everyone agrees. Meta’s chief AI scientist, Yann LeCun, another foundational voice, rejects “AI doomism.” Current models are far from achieving autonomy or general intelligence, he contends, and safe, controllable AI is fundamentally an engineering problem. Yet his proposed solution—open-source innovation to improve testing and debugging—would give China access. For the moment, proprietary models still dominate the U.S. ecosystem, though interest in open-source models is growing.

No Guardrails, Fragmented Governance

Even as risks have become more visible and concerns more widespread, regulation has not kept pace. International efforts like the United Kingdom (UK)-hosted Bletchley Declaration in 2023 offered hope for coordinated governance with major AI powers. The United States, China, and the EU committed to safe, human-centric development. But at the 2025 Paris Global AI Summit, both the United States and the UK declined to sign a follow-up declaration on ethics and safety. In fact, Vice President JD Vance publicly pushed back against “excessive” safety-oriented regulations.

The reality, however, is that the science for evaluating performance and safety is inadequate. This is particularly concerning as AI advances from being a tool augmenting human capabilities—like generative AI (think ChatGPT)—to being “agentic” and acting autonomously without human supervision (think driverless vehicles). Despite the associated exponential increase in risk, only two percent of global AI R&D is devoted to safety, and only five percent to human alignment. Many leading safety researchers have exited Big Tech firms in frustration. Meanwhile, binding global standards and enforceable oversight mechanisms for agentic AI do not exist. Furthermore, there are no technical protocols to address misalignment (when AI does not pursue goals its designers want) nor trusted mechanisms for fail-safe intervention (e.g., a kill switch).

Although the pace of AI development argues for flexible regulations, the United States appears to be an outlier. Both China and the EU, despite having very different governance systems, are more focused on safety. In this fragmented regulatory landscape, global coordination becomes both more difficult and more urgent.  

Ignored Warnings

Now is the moment to step back and revisit first-order questions: Where is AI taking humanity? Will it replace people? Even Techno-Utopians like Sam Altman and Elon Musk have expressed alarm. Musk estimates there is a 20 percent chance that AI could destroy humanity; Altman believes it could overpower people entirely. In March 2023, Musk, along with technology entrepreneur Steve Wozniak and dozens of AI scientists, signed an open letter calling for a pause in the training of more advanced AI systems, asking bluntly: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?” Altman did not sign the letter, but it quotes OpenAI’s statement that advanced AI projects should limit how quickly computational resources are increased for model training.

Nevertheless, AI development is racing ahead at warp speed. Developers are engaged in a dizzying contest to seize the future, hyped by the belief in AI’s inevitability and fear of missing out on a financial and geopolitical bonanza. Irrational exuberance has overtaken market discipline. Despite delayed returns and unresolved risks, capital is pouring into AI development with near-religious fervor: $471 billion flowed into U.S. AI ventures between 2013 and 2024, with another $300 billion projected for 2025 alone.

Yet, the risks are not hypothetical, and the warnings are clear, issued by those building the systems themselves. Accountability is missing, sacrificed at the altar of acceleration, leaving little room for oversight or correction. Without common global rules to safeguard human-centered innovation, AI governance risks being shaped solely by power and profit. Integrating ethics and foresight into technological development will become even more difficult the longer this dynamic continues, with likely profound consequences for humanity.

The Future, Unsupervised

Rarely has a technology with such transformative power—and such visible, well-understood risks and disruptions—been unleashed with so little preparation. As Anthropic CEO Dario Amodei recently warned, “You can’t just step in front of the train and stop it.” But you can—and must—“steer it 10 degrees in a different direction from where it was going.” That is the central challenge for the United States in the AI race.
