DURAN | Policy Proposal | Democratizing Control of Strategic AI Systems through Public Equity Governance

Executive Summary

Artificial intelligence has become the defining strategic infrastructure of the 21st century — one that will determine the future of military power, economic competitiveness, and democratic legitimacy. Yet unlike past national-scale technologies, AI is being developed, deployed, and governed not by the state, but by a small number of private actors whose objectives are aligned with capital, not the Constitution.

This proposal introduces a structural solution: the Sovereign Shareholding Framework, a policy designed to maintain U.S. technological leadership while restoring the American people’s stake in the systems that will increasingly shape their lives. It calls for the establishment of a non-dilutable public equity trust, in which a legally defined share of ownership in strategically significant AI companies is held by the American public. This ownership would grant oversight rights, safety audit access, and deployment transparency, without disrupting private capital markets or innovation velocity.

This approach is not theoretical. It builds on the proven precedent of Section 230, which gave internet platforms legal breathing room to scale, while enabling the U.S. to dominate the digital age. Where Europe overregulated and fell behind, the U.S. gave space — and led. The Sovereign Shareholding model offers the same bandwidth for AI, while correcting for the democratic vacuum that now defines digital power.

To operationalize this framework, the proposal recommends utilizing the newly established American Sovereign Wealth Fund to acquire preferred equity in qualifying AI firms. These investments would be passive in governance but permanent in principle — ensuring that the long-term direction of strategic AI remains anchored to national sovereignty and constitutional values, not market monopolies or unaccountable boards.

In a time of accelerating change, America must choose between two futures: one in which AI governs the public, and another in which the public governs AI. This policy offers a third way — one that secures innovation while securing the republic. If artificial intelligence is to govern the systems that govern us, then the American people must govern it — not as spectators, but as sovereign shareholders.


Policy Vision | A High-Speed Train, Owned by the People

Artificial intelligence is not merely another technological leap; it is a civilization-shaping infrastructure moving with historic velocity. Like a high-speed train, it is accelerating — autonomously, globally, and with extraordinary force. The question is no longer whether we can slow it down, but who controls the track, and who decides the destination.

Today, that track is laid by a handful of unelected private entities. These companies govern the most powerful models in history, yet remain unbound by democratic mandate, public oversight, or constitutional principle. Their decisions — about what gets built, who gets access, and when to deploy — carry societal implications on par with national defense, yet are made in boardrooms, not legislatures.

This proposal asserts a simple but transformative premise: the speed of innovation must be matched by the legitimacy of its direction. The American people do not need to seize the train. They need to own the rail.

Through a structural mechanism of public equity governance, this policy ensures that private innovation continues at full velocity — but along a path aligned with public interest, constitutional accountability, and national strategic coherence. By embedding non-dilutable, governance-entitled public shares in firms developing Strategic AI Systems, we establish a forward-facing model of technological co-governance that avoids both regulatory overreach and democratic abdication.

This is not a call to nationalize. It is a call to constitutionalize — to install the modern equivalent of checks and balances in the infrastructure that will govern everything else.

The United States has faced this question before. With nuclear energy, with financial systems, and with the internet, we chose differently each time — sometimes wisely, sometimes recklessly. But AI is unique in its scale and speed. It demands a governance model that is as fast as the technology, but as stable as the republic.

Let the train move fast — but let the American people choose where it goes.


Core Policy Mechanism | Public Equity Governance

At the heart of this proposal lies a structural innovation designed to reconcile private sector velocity with constitutional accountability: the creation of a permanent public equity governance layer within companies whose AI systems exert strategic influence over society, security, or sovereignty.

This mechanism requires that companies developing or deploying what are classified as Strategic AI Systems—those with population-scale reach, critical infrastructure integration, or alignment with national security functions—allocate a non-dilutable class of public equity to a federally managed trust. These shares would carry no traditional voting power, nor would they interfere with day-to-day operations, capital structure, or fiduciary obligations. Rather, they would embed a legally defined governance channel for public interest protections, institutional access, and strategic oversight.

Specifically, these sovereign public shares would entitle the American public—acting through a federally appointed board or designated public trust—to:

  • Review alignment procedures, risk disclosures, and safety audits prior to public deployment of high-impact systems;

  • Appoint independent observers or advisory directors to corporate AI governance boards;

  • Flag or delay systems that fail minimum safety or alignment benchmarks in cases of public sector use;

  • Receive regular transparency reporting about model training inputs, use cases, and deployment boundaries.

This approach does not presume that government should design or operate foundational AI systems. Instead, it affirms that no technology with this level of public consequence should exist without public representation baked into its structure. In the same way that shareholder rights protect financial capital, sovereign equity would protect democratic capital—the authority of the people to shape the systems that increasingly shape them.

Importantly, this framework does not impose adversarial regulation. It is not designed to constrain innovation, but to legitimize it at scale. By offering structural governance in exchange for legal breathing room, the model mirrors the success of Section 230’s early internet protections—while correcting for its core flaw: the absence of accountability once dominance was achieved.

The result is a constitutional upgrade to 21st-century infrastructure. It replaces dependence with stewardship, opacity with transparency, and risk drift with institutional ballast. It gives the private sector a stable path to scale, while ensuring the public remains sovereign over systems that increasingly govern thought, choice, and power.


Framework | From Principle to Policy, Without Disruption

Transformational policy requires more than vision—it demands a measured, actionable, and institutionally coherent path to execution. The Sovereign Shareholding Framework is designed to integrate seamlessly into the existing structure of federal law, market operations, and public trust. Its implementation does not require dismantling private industry, expanding bureaucratic control, or rewriting the Constitution. It requires only political will and legal design.

Implementation would proceed in three deliberate phases:

Phase I: Classification and Legal Foundation (Year 1)
Congress, in coordination with the Department of Commerce, the Office of Science and Technology Policy (OSTP), and national security agencies, would pass legislation formally defining “Strategic AI Systems.” These systems would be classified based on their scale, influence over public functions, integration with government or defense infrastructure, and capacity to affect national cognition or stability. Alongside this classification, Congress would authorize the creation of a federal public equity trust, structured either within the Treasury or as a new independent entity charged with managing the public’s ownership stake.

Phase II: Share Issuance and Governance Architecture (Years 2–3)
Once classification is established, qualifying firms would be required to issue a fixed, non-dilutable class of sovereign public shares. These would be allocated to the federal trust with specific legal entitlements tied to transparency, audit access, and structural oversight. The trust would appoint a panel of independent public interest trustees, drawn from a mix of AI safety experts, constitutional scholars, national security professionals, and civic governance leaders. Their mandate would not be to interfere with commercial operations, but to monitor systemic alignment and advise on strategic deployment risks.

Phase III: Oversight Activation and Legal Safe Harbors (Years 3–5)
As the equity mechanism matures, participating companies would become eligible for a defined set of legal protections and innovation incentives, including limited liability for safe, pre-approved deployments, and fast-track access to government procurement and infrastructure partnerships. This completes the policy loop: companies are given room to build, as long as the public is given the right to steer. Just as Section 230 shielded the internet during its infancy, this model provides bandwidth for AI to scale—but with a governance layer calibrated to the stakes of the present era.

At every phase, implementation is structured to preserve innovation velocity while institutionalizing democratic legitimacy. This is not a regulatory dragnet. It is a public constitutional scaffold—lightweight, permanent, and designed to keep the rails aligned while the train accelerates.

This framework requires no disruption of existing corporate governance structures. It does not expropriate, centralize, or control. It simply ensures that when artificial intelligence systems begin to shape the world at planetary scale, the voice of the American people is built into the code of the institutions deploying them.


The Role of the American Sovereign Wealth Fund

While the public equity trust provides the governance architecture for democratic oversight of strategic AI systems, it must be reinforced with financial structure. To that end, this proposal calls for a strategic role for the newly established American Sovereign Wealth Fund (ASWF)—not as a tool of nationalization, but as a modern instrument of public investment in foundational infrastructure.

The ASWF was conceived to secure long-term national interests through direct ownership of economically or strategically vital assets. Artificial intelligence now qualifies as both. The scale, speed, and systemic reach of foundation models, inference infrastructure, and autonomous decision systems make them core components of national cognitive and strategic power. Like energy in the 20th century or telecommunications during the Cold War, AI has emerged as the operating substrate of modern sovereignty.

The ASWF should be authorized to acquire preferred, non-voting equity positions in companies that meet the Strategic AI System classification. These stakes would:

  • Be permanent and non-dilutable, ensuring enduring public participation;

  • Come with structured transparency and audit rights defined by statute;

  • Provide companies with liquidity, credibility, and access to federal innovation partnerships, without sacrificing their market orientation or autonomy.

This approach offers three structural advantages.

First, it positions the American public as a long-term stakeholder in the profits and direction of the most consequential technological systems of the 21st century. This corrects the current imbalance in which public risk is assumed—through dependency on private AI infrastructure—without any corresponding public return or influence.

Second, it signals global leadership. At a time when state-capitalist rivals such as China are embedding centralized AI governance into their industrial strategies, the United States can model a third path: democratic co-ownership without authoritarian control. Through the ASWF, America can assert technological sovereignty within a liberal constitutional framework, setting a global precedent for market-aligned, rights-respecting AI governance.

Third, it de-risks the broader policy vision. By grounding public governance in financial architecture, the model aligns with existing economic norms and institutional tools. It reduces political friction by offering a non-regulatory, investment-driven path to democratic legitimacy, allowing AI firms to scale without fear of hostile interference, while still remaining structurally answerable to the people.

This is not a retreat from capitalism. It is a defense of democracy through market-aligned civic architecture.

The United States did not become a global superpower by stepping back from technological revolutions. It led by ensuring that when new infrastructure emerged, it was ultimately accountable to the public it served. The Sovereign Shareholding Framework, reinforced by the ASWF, reaffirms that principle in an age where cognition itself is becoming an asset class.


Legal and Constitutional Foundation

At its core, the Sovereign Shareholding Framework is not a novel ideology—it is a modern constitutional response to the question of who governs when technology exceeds the capacity of traditional institutions. The legal basis for this framework is not invented; it is deeply rooted in the enumerated powers of Congress, longstanding federal practice, and the structural logic of republican governance.

Under Article I, Section 8 of the United States Constitution, Congress is granted expansive authority to regulate interstate commerce, protect national security, and promote the general welfare. The rapid emergence of artificial intelligence as a cross-sectoral infrastructure—with influence over defense systems, financial markets, healthcare, energy, education, and democratic processes—places it firmly within the scope of strategic commercial regulation and national governance.

Moreover, Article IV, Section 4 guarantees to every state a republican form of government. As AI systems increasingly mediate the administration of law, the flow of information, and the shaping of public opinion, allowing them to operate outside of public accountability threatens the very foundation of that guarantee. The Constitution does not merely empower the state to regulate such systems—it obliges it to ensure that sovereign decision-making remains answerable to the people.

Historically, the federal government has exercised similar constitutional authority in response to paradigm-shifting technologies. From the regulation of railroad monopolies and energy utilities in the 19th and 20th centuries, to the creation of federal communications and aviation authorities, Congress has acted to ensure that critical infrastructure serves national and civic interests, not just private profit. More recently, the establishment of public equity positions during the 2008 financial crisis demonstrated that direct public ownership of systemic private assets is both legal and precedented—when national stability is at stake.

The Sovereign Shareholding model builds on this tradition. It does not seek to constrain markets or override innovation. Rather, it introduces a structural legal layer that ensures that systems capable of shaping human cognition, law, and public order remain tethered to constitutional sovereignty.

It is neither regulatory overreach nor speculative idealism. It is a constitutional upgrade for the age of algorithmic governance.

By embedding the American people—through a legally established trust and, where applicable, the American Sovereign Wealth Fund—as non-dilutable co-owners of strategic AI, this policy translates abstract democratic principles into operational governance capacity.

In doing so, it affirms what the Constitution has always required:

That power, wherever it arises, remains accountable to the public it governs.


Co-Governance as the New Constitutional Mandate

Artificial intelligence is not just a market force. It is a force of political structure — one that will either consolidate power in the hands of unelected corporate entities or be reclaimed within a constitutional framework of democratic co-governance.

This proposal asserts that America’s strategic technologies must serve the republic, not replace it. Through public equity governance and sovereign investment, we can ensure that innovation continues at full speed — but that it travels on tracks laid by the people, for the people, and with the consent of the governed.

Let the future be fast. But let it also be ours.

If artificial intelligence is to govern the systems that govern us, the American people must govern it — not as subjects, but as sovereign shareholders.


A Constitutional Future for Artificial Intelligence

Artificial intelligence is not simply a new technology. It is a new system of power—one that increasingly determines who speaks, who decides, who sees what, and who controls the infrastructure beneath it all. The emerging debate frames our options as a binary political choice: either allow these systems to consolidate under the authority of unelected private entities, or design institutional mechanisms to ensure that they remain governed by the people whose lives they shape.

The Sovereign Shareholding Framework offers a third way: constitutional alignment through structural innovation. It preserves America’s unparalleled capacity for private sector innovation, while embedding a permanent public stake in the systems that will define the next century.

By establishing non-dilutable public equity stakes in companies developing Strategic AI Systems, and by activating the American Sovereign Wealth Fund as a financial vehicle of civic co-ownership, this proposal provides both a governance scaffold and an economic strategy for securing the republic in the age of algorithmic infrastructure.

It avoids the false dichotomy between regulation and laissez-faire. Instead, it reintroduces democratic structure at the layer of ownership and oversight—where influence truly resides. It is not a rejection of capitalism, nor a plea for centralized state control. It is a reaffirmation that the legitimacy of power in a constitutional republic requires public representation, even—especially—when that power is embedded in private code.

America’s founding premise was not that the state would control every domain of life. It was that wherever power is exercised, that power must remain accountable to the public. The Founders could not have predicted artificial intelligence—but they provided the blueprint for what to do when unaccountable systems begin to govern us: realign them with the sovereignty of the people.

The time to act is not after these systems are entrenched. It is now—while direction is still being set, and while democratic structure can still be built into the foundations of the AI economy. If we wait, the window will close. The train will be in motion—and the public will be passengers, not owners.

We do not need to fear AI. But we must not allow it to become a substitute for government. It must be subject to it.

Let the models evolve. Let the infrastructure scale. But let the republic endure—not beside artificial intelligence, but within it.

