The past twelve months could be called the year of realization about AI. The general public grasped what AI can do: boost cancer detection, decipher millennia-old papyri, or recreate John Lennon’s voice. Most importantly, AI’s potential to enter every realm of society became evident to most people. So, too, did its potential risks.
- AI is showing up as a horizontal technology with the potential to disrupt any sector.
- Therefore, educating people about it is fundamental to creating trust and awareness.
- With regulations starting to be implemented, much of the work still lies ahead.
In an effort to protect against AI threats and regulate its development, governments are starting to issue the first regulations. At the end of October, United States President Joe Biden signed an executive order setting new standards for AI safety and security. Among its measures: requiring developers to share their safety test results with the US government, developing new tools to ensure the trustworthiness of AI systems, and protecting Americans’ privacy. The EU is also regulating the AI space with its EU AI Act.
Reggie Townsend is vice president of data ethics at SAS, a global leader in AI and analytics software, and a member of the National Artificial Intelligence Advisory Committee (NAIAC), which advises the US president on AI. He is also a board member at EqualAI, a nonprofit that seeks to reduce bias in AI. An engineer by training and a technologist at heart, Townsend has always enjoyed learning about emerging technologies. As he became more familiar with AI, his curiosity led him to a point of concern, which in turn evolved into a need to act. He firmly believes that educating people about AI is necessary and can help overcome misconceptions.
This is an article from IO Next: The Year Of… For the last magazine of this year, we selected the articles that stuck with us the most, whether an impressive interview, an important story, or just something funny.
Why Mauro selected this story for the magazine:
In many ways, 2023 was the year of AI. From the general public’s realization of its capabilities to the implementation of the first regulations, 2023 will be remembered as a crucial year in the history of AI. What impact can we expect from AI in the years to come? How can we limit its risks? To close the year, we interviewed Reggie Townsend, one of the experts on the committee advising the US president on AI.
What is the biggest misconception about AI?
“I think many people have a Terminator view of AI. They hear a lot of the doomsday talk circulating throughout the media, which is hard to break because it involves emotions. What’s perhaps as troubling to me are the people who don’t engage at all. People should be free not to care, of course. However, AI is a horizontal and ubiquitous technology. It is showing up in newsrooms, clinical settings, and banks. I equate it to electricity in that regard; we all tap into the electricity grid, and so, at some point, we will all tap into AI in some fashion. Just as we all know there is a risk in plugging a device into a socket when it’s wet, we need to build a similar level of knowledge about AI.”
Did your experience change your concerns about AI?
“The more knowledge you get, the more you change, or at least you should. We still have concerns about how AI is showing up in law enforcement, judicial settings, or healthcare, for instance. These are all high-impact matters that concern all of us, technologists and non-technologists alike. We must understand AI as a lifecycle: a long process of accumulating data and building models to support and make decisions from that data, and of deploying that AI in a specific context. Contexts change, and, therefore, potential impacts vary as well.”
In a previous interview, you said that one of your concerns was the spread of misinformation that AI can fuel. Is that still the case?
“This is not just an AI matter. Social media platforms are the primary way of distributing content. They use AI, and what they do is amplify what we find essential or entertaining, algorithmically showing us more of that content. Human nature intersects with all of this: we gravitate towards sensational news that doesn’t always equate to truth. So it’s a platform distribution issue and a human one, and we have to be able to wrestle with all of those simultaneously. That is still a concern of mine.”
How can we prevent the spread of misinformation?
“The first step is awareness and education; this is the topic we need to elevate. It’s not about making everyone a data scientist or a statistics Ph.D., but about teaching people the basics of how data works in our lives, where it shows up, and how it is being used. If you can inform people and put them in the best position to make decisions for themselves, that’s a win-win.
The other thing to consider is the role of us technologists. We need to prove ourselves worthy of our customers’ trust. Our platforms have to be trustworthy. That’s why one of the things we are doing at SAS is trying to be as transparent as possible, providing frameworks, for example, by building explainability capabilities into our platforms so people can understand the chain of custody of how data was used. Then governments have a role as well, and we are seeing a lot of governmental activity at the moment. This all-in moment requires each of us to step up.”
What’s your definition of trustworthy AI?
“When I talk about trustworthy AI, I’m talking about whether or not AI is being used ethically. I’m talking about ensuring adequate levels of transparency and explainability, all of the intangible characteristics we seek when we say trust. Trust is an emotion, not a mathematical equation. In scientific circles, we want to define trust very precisely, but I think that’s a fool’s errand; you know trust when you feel trust. So, we will do some tangible things that we know are attributes of trust, but we can’t force anyone to trust us. All we can do is act in trustworthy ways and then allow another person to feel that feeling or not.”
How do you see governments stepping in to regulate the AI space?
“What’s reassuring is that there is a discussion taking place. What’s also reassuring is that the discussions are leading to a common sense of purpose and values. Declarations are great. The promises are fantastic. The next step is how we actuate them; in other words, how we enact our values. The hard work is ahead of us. The EU chose the hard-law approach, while the UK and the US opted for a soft-law one.”
EU approach vs US approach
The EU AI Act establishes a regulatory framework for AI in the European single market, taking a risk-based approach to AI systems and emphasizing respect for EU values and fundamental rights.
Biden’s executive order underscores American leadership in seizing the opportunities offered by AI while emphasizing safety, equity, and civil rights.
Some believe that these regulations will hinder AI development.
“Any regulation is meant to impede. I think the real question is, do innovators have the ability to innovate? Are they going as fast as they could? Maybe not, but should they? At some point, we have to put some boundaries in place; for me, the question is, how broad are the margins? Some of the best innovations come from constraints, not from the absence of them.”
What would you like to see in the AI space five years from now?
“I would like to see some standards, if not hard standards then at least pseudo-standards, that we all accept and agree upon. I would love to see people become more educated about the AI space. And I would love to see a way for individuals to have greater control over their data so that it’s used for their greatest benefit.”