The Biden-Harris administration announces new actions to advance responsible AI innovation and protect citizens’ rights and safety. Key measures include a $140 million investment to launch seven new National AI Research Institutes, public assessments of generative AI systems, and draft policy guidance on the use of AI by the US government. Meanwhile, the EU continues to pursue excellence and trust in AI, proposing legal frameworks that address fundamental rights and safety, with a particular focus on liability rules. As generative AI gains prominence, EU lawmakers are updating the draft AI Act to regulate technologies like ChatGPT. Policymakers on both sides of the Atlantic are working towards alignment on AI risk management, and collaboration between the EU and the US is essential for global AI governance.
White House meeting highlights responsible AI innovation
On 4 May 2023, Vice President Harris and senior Administration officials met with the CEOs of Alphabet, Anthropic, Microsoft, and OpenAI to discuss the importance of driving responsible, trustworthy, and ethical AI innovation, with safeguards that mitigate risks and potential harms to individuals and society. The meeting is part of the Biden-Harris Administration’s broader effort to engage advocates, companies, researchers, civil rights organizations, non-profit organizations, communities, international partners, and others on critical AI issues.
The White House’s new actions include investing $140 million to launch seven new National AI Research Institutes, expanding the existing network of organizations engaged in ethical, trustworthy, and responsible AI research and development. Additionally, leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, have committed to participating in a public evaluation of their AI systems, on a platform developed by Scale AI, at the AI Village at DEFCON 31.
EU develops liability rules for AI and addresses generative AI
The European Commission is working on a legal framework for AI that focuses on fundamental rights and safety, ensuring that those harmed by AI systems enjoy the same level of protection as those harmed by other technologies. The European Parliament adopted a legislative own-initiative resolution on civil liability for AI and requested that the Commission propose legislation. In response, the Commission delivered the Proposal for an Artificial Intelligence Liability Directive (AILD) on 28 September 2022, which aims to improve the functioning of the internal market by laying down uniform rules on non-contractual civil liability for damage caused with the involvement of AI systems.
As generative AI technologies like ChatGPT grow rapidly, EU lawmakers are racing to update the draft AI Act to cover such systems. MEPs Dragos Tudorache and Brando Benifei, the Parliament’s co-rapporteurs on the file, proposed changes requiring companies deploying generative AI systems to disclose any copyrighted material used to train their models, a proposal that received cross-party support. The European Parliament’s lead committees plan to vote on the compromise text on 11 May; if it is adopted, the file will move to the trilogue stage, where the European Parliament, the European Commission, and the EU member states negotiate the final text.
Aligning AI regulations: EU and US collaboration
Although the EU and the US differ in how they approach AI risk management, they share common ground on risk-based regulation, principles of trustworthy AI, and international standards. The EU-US Trade and Technology Council has been working on metrics and methodologies for trustworthy AI, collaborating on international AI standards, and studying emerging AI risks and new technologies.
Policy recommendations for both sides include executing federal agency AI regulatory plans, designing strategic AI governance, and expanding knowledge sharing on standards development, AI sandboxes, large public AI research projects, open-source tools, regulator-to-regulator exchanges, and AI assurance ecosystems. Deepening collaboration between the EU and the US is crucial for ensuring that AI risk management policies become pillars of global AI governance and democratic control.