From the curvature of cucumbers to the size of fish fingers, no aspect of life is too small or insignificant to escape the watchful gaze of Brussels. So it should come as no surprise that the EU is now turning its attention to the world of artificial intelligence. That’s right, folks – the EU has released a set of draft regulations on AI.
The above sentences were not written by me, but are in fact the work of an artificial intelligence. They were written by ChatGPT, a language model built by the research lab and corporation OpenAI. The AI was fed large amounts of text, instructed to identify patterns within it, and then trained to produce plausible original responses to prompts. In this case, I instructed ChatGPT to “write a funny article about EU plans to regulate AI”.
Daily life
Artificial intelligence has already begun to feature as part of our daily lives. We communicate with customer service chatbots, use automatic translation, and select automatically generated responses in email and messaging apps that guess how we intend to complete our sentences.
AI is at work when Google chooses which results to show us based on its guesses about what we are searching for, and it is already used to filter out unqualified candidates in the initial rounds of job applications.
Farmers can use AI to monitor crops and distribute fertilizer or water where needed, or to autonomously feed livestock. Computer programmers use automatically generated code to speed up their work. AIs are being trained to detect tumors in scan results, with the aim of helping doctors to diagnose cancers.
There are probably many more uses of AI that we cannot yet imagine. The technology is expected to change the way many jobs are done, perhaps taking over the more formulaic kinds of writing, as demonstrated in the first sentences of this article.
The advance of this technology raises interesting ethical questions, and a dilemma about how to regulate it.
There is the question of liability. AI systems can act autonomously, so if one makes an error, who is responsible? The risks are significant, whether a misdiagnosis, unfair discrimination in a hiring process, or a crash caused by a self-driving vehicle.
Prejudiced
AIs can learn to be prejudiced from the information they are fed. The sentences that began this article serve as a good example. They indicate that ChatGPT has learned from the Boris Johnson style of English-language reporting on the EU, which has tended to exaggerate or fabricate supposed plans for the petty regulation of foodstuffs by a nebulous “Brussels”.
Training an AI based on past hiring decisions could teach it to discriminate based on gender or ethnicity. Because of the datasets used in its development, facial recognition AI can work less well for black people, putting them at a disadvantage when such systems are used to verify identity, as they already are in online banking and to access social services.
Insurance companies could use an AI to exclude people with certain lifestyles or family medical histories from coverage, leading to unjust outcomes. There are privacy concerns, too, about the use of data to train the AIs.
Ultimately, artificial intelligence systems may become hugely powerful. Who will they serve?
Science fiction
Currently, they are being developed by the world’s wealthiest tech companies, and so can be expected to advance these already-powerful private interests, to be designed in a way that is culturally slanted towards the United States, and to further entrench inequality.
It might sound like science fiction, but there are deep concerns about whether an artificial intelligence could ultimately work against the interests of humanity. Repressive governments are already using AIs fed on mass data collected about their populations to decide whether to bestow or withhold benefits.
AIs can behave with some autonomy, so an AI with access to the internet could train itself to become the most effective scam network ever, running countless simultaneous phishing, ransom, or romance scams and learning from each to be more effective in the next. An AI could even build a new artificial intelligence system of its own, to serve a further purpose still.
AI Act
The EU’s attempt to grapple with this emerging brave new world is called the AI Act. As an early-mover regulation that would apply to 450 million of the world’s wealthier people, it is likely to be globally influential.
The draft law proposed by the European Commission has a risk-based approach. It bans outright the kind of AIs that are deemed to carry “unacceptable risk”, such as the kind of social scoring system associated with the Chinese government. “High-risk” systems, like those used in transport, education, law enforcement, or recruitment, are obliged to reduce risk and build in human oversight. Systems with “minimal” or “limited” risk, such as chatbots, spam filters, or video games, are subject to less stringent rules.
The legislation has been evolving as it is negotiated, with the 27 EU member states recently agreeing a compromise amongst themselves. The new draft would exempt military, defense, and national security AIs from being covered by the regulation. It would also allow police in exceptional circumstances to use remote biometric surveillance in public spaces, such as using facial scanning to find suspects. Negotiations with the European Parliament are upcoming.
Some civil society groups have warned that the legislation’s safeguards are too weak. It is clear that EU governments do not want efforts to reduce the risks of artificial intelligence to deprive them of the technology’s potential opportunities, or to stifle innovation in an industry that many see as promising economic growth.
This article was previously published in the Irish Times.