AI-generated image of AI governance structures

The term ‘technopolar world’, coined by Ian Bremmer and Mustafa Suleyman, describes an era in which power is increasingly exercised through control of computing capacity, algorithms, and data. This shift has significant implications for global governance: technology companies, rather than traditional state actors, are beginning to shape international norms and influence geopolitical events. Elon Musk’s Starlink, for example, played a critical role in Ukraine’s resistance following Russia’s 2022 invasion, underscoring the profound impact that technology companies can have on international affairs.

Why you should read this

AI technologies are evolving rapidly, prompting changes in international policy-making and regulatory frameworks. Governments and international bodies are grappling with how to establish norms that keep pace with technological advancement.

AI and the evolution of international norms

The Wadhwani Center for AI and Advanced Technologies highlights a range of national and multilateral AI governance efforts, including U.S., E.U., and G7 initiatives as well as the AI Seoul Summit, co-hosted by the Republic of Korea and the United Kingdom in May 2024. These efforts aim to create a cohesive regulatory environment capable of managing the complexities of AI.

Challenges in AI governance

One of the primary challenges in AI governance is the disparity in resources and infrastructure between the Global North and the Global South. Policymakers in the Global South advocate for more equitable resource allocation to ensure that AI advancements benefit all regions. The AI for Good Global Summit, held in Geneva on May 30-31, 2024, emphasized the need for international cooperation and ethical guidelines to bridge this digital divide. The lack of regulatory safeguards in many countries also poses risks of privacy violations and discrimination, further complicating the global governance landscape.

Efforts towards responsible AI development

Several organizations and governments are working to create frameworks for the responsible development and deployment of AI. The European Union has adopted the EU Artificial Intelligence Act, which categorizes AI uses into four levels of risk: unacceptable, high, limited, and minimal. Similarly, Singapore has advanced its Model AI Governance Framework. Private organizations such as ISACA have introduced an Artificial Intelligence Audit Toolkit to help auditors verify that AI systems meet governance and ethical standards. These measures are crucial in ensuring that AI technologies are developed in a way that prioritizes safety, transparency, and accountability.

The role of tech companies in AI governance

Tech companies significantly influence AI governance, shaping how people interact with technology and affecting labor markets and geopolitics. The UK AI Safety Summit in November 2023, where Prime Minister Rishi Sunak interviewed Elon Musk, highlighted the critical role of tech companies in global governance. The summit discussed co-governance of technology by the state and the private sector, stressing the need for states to learn how to build technology, not just interact with it.

Future directions in AI governance

The future of AI governance will be shaped by the evolving relationship between states and tech companies, especially around emerging technologies such as AI and quantum computing. International bodies including the G7 and the United Nations are actively discussing AI governance, focusing on inclusive and intersectional regulatory frameworks. The UK House of Commons Science, Innovation and Technology Committee has emphasized the need for a principles-based approach to AI regulation centered on safety, transparency, fairness, accountability, and governance. As AI permeates more sectors, the challenge for global governance will be to build robust frameworks that can adapt to the rapid pace of technological change.