Don’t Tread on Me: AI and the Future of US Democracy

Safeguards must be put in place to prevent AI from eroding accountability and critical reasoning — the core of American democracy

Much of today’s AI debate focuses on geopolitical rivalry, national security, economic displacement, safety, and civil liberties. This paper reframes the debate along a new axis: AI as a force that simultaneously erodes human cognition and democratic accountability, undermining the foundations of democratic self-rule unless federal law restores safeguards. Absent such safeguards, AI renders power illegible to the public and becomes an instrument of cognitive and democratic decline. AI is democratically legitimate only if it satisfies two conditions: First, AI-mediated decisions must remain under identifiable human authority and be transparent enough to be contestable; second, educational institutions nationwide must teach Americans to use AI without surrendering their own judgment.

The Red Cell Project

The Red Cell series is published in collaboration with The National Interest. Drawing upon the legacy of the CIA’s Red Cell—established following the September 11 attacks to avoid similar analytic failures in the future—the project works to challenge assumptions, misperceptions, and groupthink with a view to encouraging alternative approaches to America’s foreign and national security policy challenges.

“Don’t Tread on Me” has long rallied Americans against tyranny. It carries new force today as artificial intelligence (AI) technologies that think for us spread across public life. The premise of American democracy is that power rests with the people: Institutions must answer to them, and they enforce accountability by questioning and contesting authority. AI challenges this framework in two ways. In governance, it obscures accountability, and in daily use, our tendency to offload cognitive tasks to machines becomes habit, dulling skills needed to question power.

Carl Sagan and Henry Kissinger foresaw the risk of democracy hollowed out at both ends. Sagan, a scientist, feared that reliance on technologies we do not understand would cost us the ability to question authority. Kissinger, a historian of power, feared that leaders would replace human reason with inscrutable algorithms, shifting what counts as truth — and who defines it — to those who wield power. Their warnings are no longer theoretical: AI now mediates life-altering decisions in courts, policing, and public administration.

Research shows AI can enhance our cognition when used thoughtfully. But uncritical reliance can weaken it. In governance, the problem is compounded by AI’s logic, which is hard to trace. Either it is proprietary or it relies on complex pattern recognition — or both. Inscrutable public administration and cognitive impact together create a structural governance risk with strategic implications. If people and institutions rely on these technologies uncritically, citizens may lose the ability to hold power accountable — subverting democratic self-rule.

The solution is to keep AI-mediated decisions under human authority and transparent enough to be contestable, and to teach people to use AI without surrendering their judgment. Yet American political trends, particularly democratic drift, make such safeguards unlikely. AI does not cause this drift, but it can institutionalize it.

This paper argues that AI is democratically legitimate only if it maintains both accountable governance and human cognitive capacity. Federal law can do this through regulations affecting institutional procedures, without sacrificing innovation. Absent such safeguards, AI renders power illegible to the public — and becomes an instrument of cognitive and democratic decline.

When Algorithms Govern: Power without Accountability or Consent

Opaque AI technologies function as tools of governance in the U.S., shaping decisions affecting liberty, opportunity, and access to public goods, yet without comprehensive federal oversight. Opacity in governance is not new — think classified decisions and complex bureaucracy. AI, however, combines opacity, behavioral influence, and scale. Its impact on democratic processes and constitutionally protected rights recalls Harvard professor Lawrence Lessig’s revised dictum that “code may be law, but not all law is legal.”

Traditional AI aids decisions that directly impact individual liberty. Tools like COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) forecast recidivism, informing sentencing and parole decisions in some U.S. courts. A life-altering judgment is delegated to copyright-protected software, removing our ability to hold a human to account or to understand the methodology behind the judgment.

Researchers from the Max Planck Institute and Technische Universität Berlin have shown that the COMPAS algorithm is biased in favor of jail over bail for defendants awaiting trial. Its priority is to reduce the risk of re-offense, not minimize unnecessary detention. That normative trade-off, they contend, must be made transparent to restore accountability to judges and lawmakers. If code is law, it must be treated like law. Any exercise of legal authority based on code must be linked to identifiable human decision-makers.

Equally concerning is the courts’ normalization of algorithms that place people in life-altering categories without anyone knowing precisely why. A challenge to COMPAS on due process grounds failed in the Wisconsin Supreme Court, even though the court acknowledged its opacity. Tools like COMPAS continue to spread in U.S. courts, without compensating efforts to mitigate harm.

Predictive policing technologies also disrupt accountability. In one Florida case, an AI determination was used to justify coercive action. Officers randomly visited residents AI flagged as likely to commit crimes. A 2020 investigation by the Pulitzer Prize–winning Tampa Bay Times revealed that the policy was to “make their lives miserable until they move or sue,” according to a former officer. Frivolous charges like “overgrown grass” with disproportionate fines were common. Property crime decreased — though not necessarily because of the police visits — but violent crime increased. A 2024 lawsuit stopped this program, but in the absence of clear federal constraints or guidelines, similar technologies are spreading across the U.S.

No other technology challenges the boundary between governance and control at this scale. Pew Research and other national surveys show that majorities of Americans across party lines, especially educators, are seriously concerned. From jobs to privacy to civil liberties, Americans are anxious about how decisions directly affecting them will be made by technologies they cannot understand or challenge. The capabilities of advanced AI intensify these fears.

Your Brain on Autocomplete: Risks and Rewards

As generative AI becomes a daily tool for students and professionals, scientists are investigating how frequent interaction affects cognition. Imagine relying so heavily on GPS that you can no longer drive to work without it — the routes and turns blur in your mind. If chatbots become the go-to for problem-solving, does the brain, like the driver, forget how to navigate on its own?

Generative AI entered mainstream use in 2022, and empirical research remains limited. Emerging evidence suggests its impact on cognition depends on how — and by whom — the technology is used. Peer-reviewed studies show that unmitigated reliance on AI weakens problem-solving, reduces cognitive engagement, and impairs long-term learning retention. However, simple interventions — prompts encouraging reflection and evaluation — can enhance critical thinking by breaking habitual substitution of our judgment for AI’s. This may explain why research shows older users fare better than younger ones; presumably, life experience breeds enough skepticism to question AI output.

These findings track automation-age studies showing that humans using complex technologies risk “cognitive offloading” — delegating too much judgment to machines and impairing their own skills. The harm comes from using AI passively, without challenging or correcting it. Calculators raised similar, ultimately unfounded, fears about undermining math skills. Unlike calculators, however, AI substitutes for human reasoning and judgment, and is becoming infrastructural.

In an AI-saturated world, humans need to work with AI skeptically to protect — and ideally sharpen — their own cognitive skills.

Why Complacency Rules

The challenge is that oversight tends to follow catastrophe. Three Mile Island and Chernobyl triggered more robust nuclear regulation. AI’s erosion of human cognition and democratic accountability, however, is likely to emerge gradually, like the 2008 financial crisis, which arose from opaque financial instruments and hidden leverage silently creating systemic risk.

The strongest incentive to act is the infeasibility of a “kill switch” to shut down errant AI, highlighted in research from Anthropic. Persuasion may soon be our only option, according to Nobel Laureate Geoffrey Hinton, one of AI’s forefathers, because advanced AI technologies increasingly adapt to our attempts to control them. If the absence of an emergency override marks the loss of human authority over machines, the need to negotiate AI behavior suggests that meaningful human oversight will only get harder.

As AI spreads, our dependence will deepen — because it is effective, efficient, and unavoidable. We risk sleepwalking into relying on systems whose logic we cannot fully inspect.

The weakening of democratic norms in the U.S. amplifies that risk.

American Political Trends: Performance vs. Consent

Decades of expanding presidential power have edged the U.S. closer to governance legitimized by performance rather than consent. A subset of government and technology elites even advocates insulating governance from public challenge altogether through fully centralized power. Citizens trade democratic self-governance for stability and abundance, and dissenters relocate. AI does not create this model, but it can operationalize it at low political cost.

Voter behavior compounds the risk. Anti-democratic actions by political elites are rising, yet voters do not punish them electorally, even when they oppose them, according to researchers from Stanford, Dartmouth, and the University of Pennsylvania. Polling reflects this paradox. While three-quarters of Americans believe democracy is under threat, one in five endorse “burning down” our institutions, and a third of those aged 18–29 believe democracy is no longer viable.

Concurrently, capital, wealth, and now AI expertise are concentrated within a small number of technology companies. Two decades ago, only 20% of AI PhDs chose careers in industry; today, over 70% do, according to an MIT study. The private sector thus exercises disproportionate influence over this century’s most transformative technology. Some industry leaders suggest democracy and freedom are incompatible. If democracy is treated as a hindrance to freedom, not its guarantor, fully centralized power is no longer an abstract theory.

The line dividing authoritarian AI, used for surveillance and control, from democratic AI, used to facilitate governance, is increasingly blurred. Democracies deploy AI to automate decisions impacting personal liberty without accountability, and profile citizens with limited transparency. The critical question is no longer where AI originates, but what kind of politics it enables. Authoritarian AI need not spread from Beijing; it can emerge in Washington through complacency, elite ambition, or democratic decay.

Keeping Humans-in-the-Loop: International Lessons and U.S. Action

Some governments are using AI to deepen democratic accountability. In Taiwan, platforms like vTaiwan facilitate online policy debates, and government agencies respond to outcomes. Estonia provides near-complete access to government documents, which Estonians can use to contest decisions. The European Union (EU) focuses on legal and economic risk, but its General Data Protection Regulation includes the right to information about the logic of automated decisions, and its AI Act emphasizes transparency. Brazil is developing a model grounded in constitutional law and judicial oversight, with guidelines emphasizing human supervision, transparency, and explainability.

None of these approaches maps neatly onto the U.S. The Taiwan and Estonia models are suited to small, relatively homogeneous states confronting existential external threats, and their centralization would conflict with states’ rights and federalism — shared authority between state and federal governments, each a check on the other. The EU prioritizes risk mitigation over civic empowerment. Brazil’s model, once complete, will be worth studying.

Yet Americans are concerned about AI’s impact on their rights, freedoms, and cognition. State and local governments are responding with laws and guidelines governing automated decision-making, algorithmic transparency, and cognitive protection in education. However, only the federal government has the reach, expertise, and resources to address systemic risks.

The main objection to federal safeguards invokes the U.S.-China race for technological supremacy. However, in an era of multipolarity and intelligent machines, winning that race also means safeguarding accountable governance and human cognitive skills.

Federal intervention can address these public concerns without slowing innovation. First, federal law could indirectly mandate teaching AI fluency without cognitive surrender in school curricula if Congress amended existing education statutes to define these terms and link federal funding to implementation. Second, amending relevant administrative laws to define life-altering decisions made by AI and to require review by an identifiable human decision-maker, with documented reasoning for approval or disapproval, would keep humans as the ultimate deciders in governance and keep such decisions contestable. Third, requiring federal agencies to publish inventories of AI used in decision-making — along with decision criteria, known limitations, and appeal pathways — would help surface embedded value judgments and open them to challenge.

The Human Imperative: Preserving Cognition and Democracy

Democracy depends on citizens who can question power — and on institutions that can be questioned. “Don’t Tread on Me” embodies that meaning. Today, it applies to AI technologies that reshape what it means to think, reason, govern, and be free.

The cognitive and democratic risks of AI are escalating but not inevitable. Keeping humans in the loop is decisive in determining whether these technologies empower or control us. Accepting the absence of a “kill switch” to shut down AI shows how far we are drifting from that goal.

If truth is to remain empirical, grounded in facts and skeptical inquiry — and not just simulated or defined by powerful actors who demand that we not believe our lying eyes — then we need a public capable of inquiry and institutions we can hold to account. Otherwise, power will flow to those who control the machines — not because they understand them, but because no one else does.
