
OpenAI CEO Sam Altman has expressed concerns about the potential impact of upcoming European Union (EU) AI regulations on the company. The EU's proposed AI Act is the first comprehensive attempt anywhere in the world to govern AI, and it would require companies to disclose copyrighted material used to train generative AI systems like ChatGPT. OpenAI intends to comply, but Altman has warned that an inability to do so could force the company to withdraw from Europe. EU parliamentary committees have reached an agreement on stricter AI rules, which classify AI tools by risk level and introduce mandatory transparency and liability obligations for developers.

Stricter AI regulations under discussion

The EU parliamentary committees' agreement covers the AI Act's provisions on facial recognition, biometric surveillance, and other AI applications. The Parliament will still need to negotiate the final details of the law with the European Commission and member states. AI tools are classified by risk level, with governments and companies facing different obligations depending on the risk category. Green Left MEP Kim van Sparrentak stated, "AI should serve people, society, environment, not the other way around".

The AI Act aims to set AI standards before China and the US do, addressing issues such as job losses, disinformation, and copyright infringement. The law categorizes AI systems by risk: it bans the most dangerous practices outright, while subjecting high-risk systems to an approval regime, with flexibility built in so the rules can adapt to rapid technical developments. Banned systems include those assessing human behaviour (social scoring) and real-time biometric identification for law enforcement. The Parliament proposes further bans on emotion recognition in policing, border control, employment, and education, as well as on crime and fraud prediction based on profiling.

Legislation and its impact on AI companies

High-risk AI systems include those used for police tracking, resume scanning, nuclear reactor security, and managing water supplies. These systems must meet requirements including non-discrimination, transparency, explainability to regulators, and mandatory risk analysis. The legislation is the first to address generative AI models, with a plenary vote in the European Parliament scheduled for next month. Negotiations with the Council of the European Union will follow; the law is expected to be enacted early next year and to take effect two years after that.

As the AI market continues to grow, companies like OpenAI and Google must adapt to the proposed European regulations. The future of AI in Europe, particularly for generative AI systems like ChatGPT and Bard, remains uncertain. The stricter rules aim to curb dangerous AI usage, prohibit subliminal and manipulative techniques, and protect fundamental rights, health, safety, the environment, democracy, and the rule of law. Companies deploying AI must ensure compliance, but the impact on innovation and market growth in the EU remains to be seen.

Google’s Bard faces similar challenges

Google's AI assistant, Bard, is also affected by the EU's stringent regulations. Powered by the PaLM 2 model, Bard competes with OpenAI's ChatGPT in the generative AI chatbot market. Google has rolled Bard out to 180 countries but has chosen to exclude the EU and Canada. The EU's General Data Protection Regulation (GDPR) is a likely reason for this exclusion: it guarantees users rights such as access, rectification, erasure, and restriction of processing, and companies face fines if the way they handle AI training data prevents EU users from exercising those rights.

Bard collects user information and may use that data for training, which could create GDPR compliance difficulties. OpenAI's ChatGPT faced a temporary ban in Italy over GDPR violations; the ban was lifted after OpenAI made privacy changes, clarified how users can delete their data, and met the regulator's transparency and data-processing requirements. By withholding Bard from the EU, Google avoids similar regulatory issues.