Blog

Compute as Power: Reflection on "Sharing AI Compute in the Global South"

By Tamsin Connerly // February 16, 2026

The “Sharing AI Compute in the Global South” summit at Duke University opened with a simple framing that grounded discussions throughout the day: “whoever controls compute has leverage over everyone else.” Though conversations delved into technical infrastructure, compute as a governance question remained the event’s focus. Compute access is a prerequisite for building AI, scaling its real-world use, and shaping the global rules that govern responsible deployment. This lens was particularly helpful for my own research on AI regulation in the Global South through Duke’s Deep Tech initiative. For stakeholders to govern AI effectively, they must have access to resources that give them agency over how systems are built, deployed, priced, and regulated, rather than remaining dependent on decisions made elsewhere.

Speakers defined the “compute divide” as the gap between those who can access the compute power needed to build and deploy powerful new AI systems and those who cannot. This divide matters because compute is now foundational to work that affects public welfare: language technologies, scientific modeling, biomedical research, and climate analysis. When access is uneven, AI is more likely to benefit countries with abundant compute while other nations fall behind. Moreover, there is still no comprehensive, shared long-term vision for what equitable compute access should look like. The discussion identified three interconnected risks associated with the compute divide. Dependency arises when countries rely on third-party infrastructure and lose bargaining power over pricing and access. Exclusion accompanies dependency: researchers and governments without affordable compute cannot build, test, or audit AI systems. Over time, these gaps can also produce fragmentation, as countries develop uneven capabilities and incompatible governance approaches that make shared standards harder to establish.

My research project echoes the concerns raised by this divide. Historically, technological innovation has concentrated in the Global North, leaving Global South stakeholders to “catch up.” In relation to AI, that imbalance is especially troubling because models developed in the Global North can produce harms, such as misinformation, biased decision systems, and surveillance applications, that travel across borders and shape realities elsewhere. If Global South nations remain sidelined from the infrastructure that allows for AI development and oversight, they will be forced to react to technologies and standards set by others, rather than helping shape them from the start. 

Panelists proposed multinational compute pooling as a potential response to the compute divide, arguing that pooling resources across countries or institutions could help participants reach a level of compute that none could achieve alone, while preserving national autonomy in a way that reliance on a foreign hyperscaler often does not. In principle, shared infrastructure could reduce dependency and expand agency by distributing control over access, pricing, and governance among the participating stakeholders.

Nonetheless, the discussion made clear that while shared compute infrastructure may be technically feasible, it raises difficult governance questions. Speakers walked through the operational realities of compute clusters and shared systems, translating the idea of “pooled compute” into how it actually works day to day. They also emphasized the governance challenges that emerge once a shared system exists: how fairness is defined, who sets priorities, and how access is scheduled and managed when demand outstrips supply.

At the international level, those challenges intensify. As several speakers explained, the key issue is not just whether shared compute can be built, but who gets to govern it and how. A multinational compute pool would require ground rules on membership, contributions, allocation formulas, transparency, dispute resolution mechanisms, and explicit prioritization criteria. In the absence of clear rules, pooling may not resolve structural inequalities, especially if the largest contributors can simply dictate how compute is allocated.

Speakers also emphasized the urgency of acting early to prevent the compute divide from hardening, along with the need to design AI investments with safety and security in mind from the outset. India, for example, has recently committed $1.3 billion to AI initiatives, including shared compute resources and capacity building. The broader point: embedding governance priorities directly into infrastructure investment strategies is essential to minimizing AI’s risks as these systems scale.

Overall, the summit made it unambiguous that AI regulation and AI equity cannot be separated from the infrastructure that enables AI in the first place. Shared compute could be a path toward protecting agency and expanding participation, but only if it is paired with governance frameworks that are transparent, legitimate, and aligned with public interests. In other words, meaningful Global South participation cannot rest on “being included in the conversation” alone; it also requires access to the structural prerequisites for building and overseeing AI. In this case, that prerequisite is compute.

This event was organized by the student-led India Forum @ Duke, the Duke AI in Product Innovation Program, and the Duke Deep Tech Program, with support from the Oxford Martin AI Governance Initiative and the Committee on Global Thought at Columbia University. This event reflection is provided for informational purposes only and should not be interpreted as representing the official positions or endorsements of the organizers, speakers, affiliated institutions, or any government bodies, including the Government of India. For any queries or concerns, please contact the India Forum at Duke.