Open, Until It Isn’t
A critique of “Open Global Investment” for AGI, and a sturdier alternative.
Nick Bostrom’s proposal for “Open Global Investment” (OGI) seeks to govern a civilization-scale technology with institutions we already know how to operate. Public companies with widely held shares, corporate boards with enhanced oversight, and a state that regulates without seizing are familiar instruments. Their very familiarity is the point: OGI promises to be implementable in the world we actually inhabit rather than in a world we wish existed. Yet implementability is not the same as resilience. The crucial test is whether these components, once coupled to frontier-capability AI, hold their shape under real stress or snap the first time incentives, geopolitics, and system tempo turn adversarial.
Begin with ownership. Open share registers do not yield open control when decisions are urgent and contested. In modern markets, dispersed equity consolidates into a handful of voting blocs held by index complexes and large active managers, while dual-class structures and insider control separate cash flow from authority. The result is that “openness” at the cap table does not translate into meaningful leverage for ordinary shareholders, particularly for those outside the host jurisdiction. If a frontier lab must choose between slowing to resolve a safety anomaly or accelerating to protect a lead, the structural bias of public markets points toward speed. The governance core remains narrow precisely when widening the circle of restraint would be most valuable.
Move next to the state. OGI envisages legal and diplomatic assurances against expropriation, even the scattering of infrastructure across borders to raise the cost of seizure. But national-security instruments are designed to pierce private arrangements swiftly when core interests are implicated. Export controls, emergency economic powers, foreign-ownership mitigation regimes, and priority production statutes exist to override commercial commitments on short notice. Democracies cannot bind future executives on matters of security. Rational counterparties price this possibility in advance and hedge by building duplicate capabilities, suppliers, and routes. The hedging reintroduces competitive dynamics that OGI hopes to soften, quietly recreating the race under a different guise.
Nor is openness a constant. Even if foreign investors are invited to buy shares, they cannot rely on continued access to frontier models, chips, or tooling once those are deemed dual-use. Export rules evolve quickly and reach extraterritorially; they widen in crises and loosen in lulls. That pattern may be prudent security policy, but it is corrosive for a governance model that depends on steady international buy-in. “Open” becomes a fair-weather promise.
Industrial structure compounds the tension. One reading of OGI gestures toward consolidation around a single lead developer to gain coherence and speed. The monoculture that follows becomes a single point of catastrophic failure, organizationally, operationally, and politically. Another reading preserves a small field of competitors and with it the familiar incentives to ship and scale rapidly. Markets that reward short-run growth and penalize caution do not suddenly invert because the product is profound. OGI, as framed, offers no stable middle that avoids both monoculture fragility and capability racing.
Tempo aggravates these weaknesses. As capabilities climb, conventional regulation cannot keep pace, which suggests closer coupling between firm and state. But the moment an AGI developer becomes economically systemic, coupling invites conflicts of interest into the heart of safety governance. If the equity sits in public pensions, sovereign funds, or macro-prudential portfolios, decisions that dent valuation acquire fiscal shadows. Under stress, those shadows bias toward permissiveness rather than prudence.
Jurisdiction adds friction of its own. Distributing data centers and compute to deter seizure multiplies legal collisions. One sovereign’s data-access order can contradict another’s localization or privacy law. In an incident measured in hours, the absence of pre-built procedures for resolving those collisions at machine tempo converts “distributed” into “seizable”: if not physically, then legally, as each jurisdiction asserts primacy and compliance teams freeze.
Fairness, finally, must be understood more thickly than a spreadsheet of global upside. In many low- and middle-income countries, household participation in equity markets is limited, foreign allocation channels are narrow, and sovereign investors rarely obtain the governance rights that matter when national priorities collide with corporate calendars. In practice, the governance core remains centered in the host state, amplified by a small number of asset managers, and ultimately supervised by the host’s courts and regulators. Perceived exclusion does not remain rhetorical. It drives the construction of parallel stacks: local training pipelines, alternative hardware routes, and sovereign clouds built to avoid foreign veto. Once those stacks exist, they must justify themselves by moving faster or relaxing constraints. A fairness deficit thus translates directly into capability acceleration under weaker guardrails, the very failure mode OGI seeks to avoid.
These are not reasons to discard the virtues that make OGI attractive. Its chief merit is that it can be stood up within the contours of existing law. It is partially incentive-compatible in that it gives powerful actors something to lose if they break the compact. It is modular; it can coexist with regulation and treaties rather than requiring the wholesale creation of new supranational organs. The right response is to keep those virtues and replace the brittle joints.
The core design move is to separate returns from rights. Capital can retain returns within normal tax and windfall-sharing rules. But the rights that matter for catastrophic-risk control should sit outside the corporate charter and above any one government’s day-to-day politics. What is needed is a narrow public authority, anchored by treaty, capable of acting at machine tempo on a small set of safety questions without owning the companies, dictating research programs, or setting prices. Think of it as a civilizational safety brake: a lever that can be pulled quickly and with reasons, without rewiring the steering or nationalizing the car.
Such a forum would carry a limited mandate. It would define baseline safety standards tied to concrete capability triggers rather than to crude proxies like parameter count. It would have the power to condition, phase, or defer specified classes of high-risk deployments and to direct short, time-boxed emergency measures when objective indicators are met. It would audit compliance with compute accounting, evaluations, and red-teaming requirements that firms accept as a condition of operating across member jurisdictions. It would not own equity, allocate markets, or run research; its authority would be circumscribed to catastrophic-risk governance.
Legitimacy would come from who sits at the table and how decisions are made. Seats should be guaranteed for the host states of participating firms, for major consumer markets and critical-supply nodes, and for rotating regional groups that ensure continuous participation from the Global South. An independent technical secretariat, funded on a stable levy and staffed to maintain test suites, run evaluations, and verify telemetry, would do the detailed work. Strict conflict-of-interest rules would separate members’ roles as investors or customers from their adjudicative role in safety decisions. The forum’s routine business, updating test suites and telemetry schemas, should proceed by qualified majority with strict deadlines. Actions that halt or condition deployment should require a super-majority that includes affirmative votes from three blocs (host states, major-exposure states, and regional representatives), so that no single geopolitical cluster can impose or block action unilaterally. The clocks should be engineered for the domain: routine matters in days, not months; emergency measures on timelines measured in hours where technically feasible and justified by objective triggers.
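The two-track voting rule is specific enough to state as a decision procedure. The sketch below is illustrative only: the essay names the three required blocs, but the two-thirds and three-quarters thresholds, the bloc labels, and the ballot representation are assumptions made for the example, not part of the proposal.

```python
from collections import Counter

# Illustrative bloc labels; the essay requires affirmative votes from
# host states, major-exposure states, and regional representatives.
BLOCS = ("host", "exposure", "regional")

def routine_passes(votes_for: int, votes_total: int) -> bool:
    """Routine business: qualified majority (two-thirds here, an assumption)."""
    return votes_for * 3 >= votes_total * 2

def halt_passes(ballots: dict[str, tuple[str, bool]]) -> bool:
    """Halting or conditioning deployment: a super-majority overall,
    plus at least one affirmative vote from each of the three blocs.

    ballots maps member -> (bloc, voted_yes).
    """
    yes = sum(1 for _, voted_yes in ballots.values() if voted_yes)
    supermajority = yes * 4 >= len(ballots) * 3  # three-quarters, an assumption
    yes_by_bloc = Counter(bloc for bloc, voted_yes in ballots.values() if voted_yes)
    every_bloc_assents = all(yes_by_bloc[b] > 0 for b in BLOCS)
    return supermajority and every_bloc_assents
```

The point of the bloc condition is visible in the code: a numerically overwhelming vote still fails if any one of the three groups withholds assent entirely.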
Authority requires information, but information does not require custody of model weights. Participating firms should implement attested telemetry and compute metering that expose standardized logs, red-team results, and evaluation outputs to the secretariat under strict access controls. The forum should hold split keys that govern the scheduling of covered compute jobs across cooperating clouds and hardware vendors, with key shares held by the secretariat, the host regulator, and a rotating set of regional representatives. Continuous verification would run quietly in the background; intrusive audits would require heightened votes and leave a hardened paper trail. The ideals here are legibility, proportionality, and a minimum viable footprint that is sufficient to bound tail risk without inviting full-spectrum micromanagement.
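The split-key arrangement amounts to a multi-party authorization check on covered compute jobs. A minimal sketch follows; the three holder classes come from the essay, but the 3-of-3 policy and all names are assumptions for illustration (a real deployment would more likely use a t-of-n threshold scheme such as Shamir secret sharing over a signing key).

```python
from dataclasses import dataclass

# Key-share holders named in the essay: the secretariat, the host
# regulator, and a rotating regional representative.
HOLDERS = frozenset({"secretariat", "host_regulator", "regional_rep"})

@dataclass(frozen=True)
class KeyShare:
    holder: str
    share: bytes  # in practice, a share of a threshold cryptographic key

def authorize_job(shares: list[KeyShare]) -> bool:
    """A covered compute job is schedulable only when every holder
    class has contributed a share (3-of-3 here, an assumption)."""
    present = {s.holder for s in shares}
    return HOLDERS <= present
```

The design choice this illustrates is that no single party, including the host government, can unilaterally schedule or block a covered job: scheduling requires the concurrence that the key distribution encodes.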
Emergency powers should be legible and limited. The forum should publish a small menu of actions (compute throttling for specified training runs, temporary suspension of a deployment class, a time-boxed red-team surge), each tied to objective tripwires such as failed alignment benchmarks, credible exploit kits in the wild, or cross-model correlated anomalies. Every action should sunset automatically, every decision should carry a reasoned public justification with narrowly tailored redactions where necessary, and every episode should be reviewed ex post by an independent panel that assesses necessity and proportionality. The aim is to regularize state intervention in advance so that when, not if, it arrives, it does not detonate the cooperation the model is meant to sustain.
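Because the menu, the tripwires, and the sunsets are all pre-published, the emergency regime can be expressed as data rather than discretion. The sketch below pairs the essay's tripwires with its menu actions; the specific pairings and the 72-hour default sunset are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative tripwire -> menu-action pairings; the essay lists both
# sides but does not fix the mapping, so this mapping is an assumption.
TRIPWIRES = {
    "failed_alignment_benchmark": "compute_throttle",
    "exploit_kit_in_wild": "suspend_deployment_class",
    "cross_model_correlated_anomaly": "red_team_surge",
}

@dataclass
class EmergencyMeasure:
    action: str
    justification: str  # reasoned public justification, per the essay
    issued: datetime
    sunset: timedelta   # every action expires automatically

    def active(self, now: datetime) -> bool:
        return now < self.issued + self.sunset

def trigger(tripwire: str, justification: str,
            now: datetime, hours: int = 72) -> EmergencyMeasure:
    """Map an objective tripwire to its pre-published menu action.
    Raises KeyError for anything off the menu: no improvised powers."""
    return EmergencyMeasure(TRIPWIRES[tripwire], justification,
                            now, timedelta(hours=hours))
```

Two properties of the regime fall out directly: an unlisted tripwire cannot produce any action at all, and no measure survives its sunset without a fresh decision.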
Accountability must bite to be real. The forum should publish reasoned decisions and disclose voting records on safety-critical measures. It should issue periodic state-of-safety reports that benchmark firms against agreed metrics. An ombudsperson should accept petitions from civil society and affected communities with authority to place items on the agenda or to initiate reviews. Funding should come from a fixed levy on covered compute spend and from licenses tied to high-risk deployments. That funding model aligns resources with activity and insulates operations from ad hoc appropriations. Every three years, an external review, appointed in part by independent scientific academies, should audit the forum’s performance and recommend reforms.
Inside firms, governance should be made just-in-time. Corporate charters can be amended to create a board-level safety committee with its own budget and a dual fiduciary duty: to the corporation and to treaty compliance. When the forum issues a binding directive within scope, that committee should be obligated to implement it and to report execution on fixed timelines. This creates a legally cognizable point of responsibility and avoids paralyzing full-board politics during emergencies. Over time, dual-class voting can sunset automatically at predefined capability thresholds, pass-through voting can be required for specified safety matters, and compensation can be indexed to safety performance with clawbacks for breaches, so that internal incentives point in the same direction as external obligations.
Disputes are inevitable; the system must metabolize them quickly. Technical disagreements over tests, telemetry, or benchmarks should go first to a standing technical chamber with short, enforceable deadlines. Legal or state-to-state disputes (claims of ultra vires action or discriminatory treatment) should proceed to expedited arbitration modeled on trade law. Interim measures should be available sparingly: a brief, non-renewable pause to prevent irreparable harm while a panel hears the case, or an order for targeted disclosure. Remedies should be automatic and pre-specified. For firms, non-compliance should trigger loss of cross-jurisdictional operating privileges and heightened audit. For states, it should suspend reciprocal market-access benefits under the treaty. Hard edges make the promises credible.
Crucially, leverage should be infrastructural rather than rhetorical. Member jurisdictions should condition export-licensing reciprocity, access to certain cloud interconnects, and eligibility for public procurement on adherence to forum decisions. Cloud providers and hardware vendors should embed compliance checks in orchestration layers so that violating a compute directive becomes technically difficult and commercially expensive. A public compliance registry should allow investors, insurers, and counterparties to price non-compliance as a balance-sheet risk. When the control plane and the business plane align, compliance stops being a public-relations performance and becomes an operational default.
Consider how this would work in practice. An AGI developer is preparing to release a general-purpose agent capable of autonomous tool use across member states. Pre-deployment evaluations reveal anomalous behavior on a subset of tool-chain interdiction tests, correlated with known jailbreak patterns. The firm’s board safety committee reports the results to the secretariat. An extraordinary session of the forum is called. After reviewing the record, the forum issues a conditional approval that allows deployment only in monitored sandbox tiers with obligations to collect additional telemetry and to restrict specified egress channels. The company implements the gating within two weeks and reports compliance to the public registry. A month later, a red-team surge uncovers a reproducible exploit that the sandbox would have contained but a full release would not. The intervention does not micromanage product; it bounds risk during uncertainty, preserves a path to general availability once the exploit is addressed, and leaves a transparent record explaining both the caution and the resumption.
This architecture is compatible with the spirit of OGI while correcting its fragility. It keeps market-facing implementability and avoids a brittle attempt at world government. It places catastrophic-risk authority in a narrow forum that can act quickly with reasons and representation. It regularizes the state’s inevitable emergency role rather than pretending that it can be wished away by distributing shares or data centers. It gives firms a clear internal machinery for obeying safety directives without collapsing into committee theater. And it provides a path for participation by states that would otherwise peel off into parallel stacks, thereby reducing the centrifugal pressures that accelerate capabilities under weaker rules.
Bostrom is right to prize practicality. Immaculate institutions will not arrive on time. But OGI, as framed, is open only until it is not: open until export controls bite, until emergency law is invoked, until legitimacy runs low and competitors race ahead on their own platforms. The remedy is not to abandon OGI’s realism but to give it a spine. Separate returns from rights. Anchor a small set of public safety rights in treaty rather than in corporate bylaws or ad hoc interagency committees. Equip those rights with objective triggers, hard clocks, public reasons, and infrastructural leverage. Wire firms internally so that compliance is a matter of governance, not of press releases. If the aim is to reduce catastrophic risk without pretending away the world as it is, these upgrades turn a clever aspiration into a structure that can carry weight.