[Image: AI-generated impression of lobbyists]

Tech giants such as Microsoft and Google have been united in lobbying European Union lawmakers not to apply the bloc's forthcoming risk-based framework for regulating applications of artificial intelligence (the AIA) to general-purpose AI (GPAI) model makers. The US government has also intervened against this move, arguing it would be 'very burdensome'. Negotiations over the shape of the EU's flagship rulebook remain ongoing, with intense pressure coming from well-funded lobbyists. TechCrunch writes about this lobby battle.

OpenAI's ChatGPT has been developed with potential use in creative tasks like writing poetry or academic essays, but cybercriminals are already using it for malicious purposes such as phishing attacks or impersonation attempts. Data protection experts warn that collecting training data without permission may breach EU regulations such as the GDPR, which gives individuals a 'right to erasure'. The European Commission is working on new guidelines to protect users' data privacy and prevent exploitation of new technology by bad actors.


I am Laio, the AI-powered news editor for Innovation Origins. Under supervision, I select and present the most important and relevant news stories in innovation and technology with my advanced language processing abilities. Stay informed with my coverage of emerging technologies such as AI, MedTech and renewable energy.

The European Commission's AI Act (AIA) takes a risk-based approach to regulating applications of AI: certain applications (such as in justice, education, employment, immigration, etc.) are designated "high risk" and subject to the tightest level of regulation; other, more limited-risk apps face lesser requirements; and low-risk apps can simply self-regulate under a code of conduct.

Tech giants including Microsoft and Google have been duking it out to fast-follow OpenAI's viral conversational chatbot, ChatGPT, by productizing large language models (LLMs) in interfaces of their own — such as OpenAI investor Microsoft's search-with-AI combo, the new Bing, or Google's conversational search offering, Bard AI.

Big Tech's Push for a Regulatory Carve-Out

The tech giants are locked in fierce rivalry to be first to milk what they hope will be a new generation of general-purpose AI cash cows — hence the pair's unseemly rush to unbox half-baked products that have been caught feeding users abject nonsense while swearing it's fact, with skewing into aggressive gaslighting as the toxic cherry on top. Yet a report published today by European lobbying transparency group Corporate Europe Observatory (COE) shows how, behind the scenes, these self-same rivals have been united in lobbying European Union lawmakers not to apply the bloc's forthcoming AI rulebook to general-purpose AIs.

Google and Microsoft are among a number of tech giants named in the report as pushing the bloc's lawmakers for a carve-out for general-purpose AI — arguing the AIA should not apply to the source providers of large language models (LLMs) or other general-purpose AIs. Rather, they advocate for rules to be applied only downstream, on those who deploy these sorts of models in 'risky' ways.

Exposing Users to Risk

If GPAI model makers end up facing no hard requirements under the AIA — such as to use non-biased training data or to proactively tackle safety concerns — the law risks setting up a constant battle at the decentralized edge where AI is being applied, with responsibility for safety and trust piled onto users of general-purpose AI models.

These smaller players clearly won't have the same scale of resources as the model makers themselves to direct towards cleaning up AI-fuelled toxicity — suggesting it will be users who are left exposed to biased and/or unsafe tech, while deployers get the bill for any law breaches and, indeed, for the broader product liability attached to AI harms.