Image: AI-generated

Navigating the online information environment has become particularly challenging since the advance of artificial intelligence. Social media platforms have been flooded with disinformation, leaving users unsure which content they can trust.

Why this is important

So-called deepfake images and videos have disrupted elections and raised concerns about potential risks to democracies. In response, countries have been tightening their legislation to counter the relentless flow of disinformation. According to the 2024 Digital News Report, 59% of people surveyed are concerned about what is real and what is fake on the internet.

A flood of AI-generated material has been deliberately deployed to disseminate false and malicious content on social media, raising an important question: is AI to blame for the rise of disinformation?

According to Beatriz Farrugia, a research associate at the Atlantic Council’s Digital Forensic Research Lab and a professor, “AI tools make disinformation spread more quickly.”

Farrugia, who is an expert in disinformation and extremism, says no current studies prove that artificial intelligence directly leads to a greater volume of disinformation. “It is a very difficult thing to measure, and if we did that, the results would be limited to a certain society in a specific time frame.”

Not a new technique

Since the Roman Empire, disinformation has been used as a weapon to manipulate narratives. Back then, the intention was to prevail on the battlefield. With the advance of artificial intelligence, the battlefield itself has changed, and the end goal now is disruption:

“Disinformation campaigns have always happened and will continue to happen. What changes now is the tactic, the method, the frequency, and the speed with which they spread, and the impact they have,” explains Farrugia. 

Social media’s response

In recent years, social media companies have adapted their platforms to counter the negative impacts of disinformation. Meta, TikTok, YouTube, and X, the most popular platforms, have implemented content moderation, allowing users to report malicious or false posts. Yet platforms have not been able to fully tackle the issue. During the European Parliament elections earlier this year, a coordinated disinformation campaign on X aimed to disrupt electoral processes in France, Germany, and Italy.

Now there is a new component in the equation: AI-generated content. Amid a wave of conspiracy stories and misinformation spreading on X and TikTok, users worry about how to distinguish trustworthy from untrustworthy content online. Recently, unreliable narratives and deepfakes about the war in Gaza and the US elections have gathered millions of views.

Tightening regulations  

Across the world, countries are imposing stricter regulations on these platforms. In Brazil, the Supreme Court has temporarily banned Elon Musk’s X, as the company did not meet the court’s deadline to name a new legal representative in the country. The clash between the tech billionaire and Brazil’s highest court is part of an ongoing campaign against disinformation in the most populous country in Latin America. 

Other nations have also faced challenges in regulating social media networks, especially amid the surge of generative AI tools. The European Commission has charged X, formerly Twitter, with breaching the new Digital Services Act (DSA), designed to curb disinformation and illegal online content. The Commission accused X of not doing enough to counter the spread of malicious and toxic content on the platform.

Confusion about AI

The term “artificial intelligence” still generates confusion. Not everyone knows what it means, what it makes possible, or how to identify AI-generated content.

For instance, there have been discussions on social media in which users accuse certain content of being “deepfakes” or “AI-generated,” even when it is authentic. This lack of understanding can be just as damaging as the content created by artificial intelligence itself:

“In an election scenario, this confusion about AI can be a risk,” explains Farrugia. “Instead of strengthening democratic discussions, the information ecosystem is destabilized.”

Balancing the equation

AI-powered applications can manipulate and alter existing images, videos, and audio, or create entirely new ones. While AI opens up many possibilities, it can also be misused to spread false information. Tech companies should bear legal responsibility for countering these negative effects. Yet a large part of the problem also lies in how people use the technology, whether with good or malicious intent.

“We can’t just blame the technology companies. We also have to put the responsibility on society,” says Farrugia. She explains that there should be more attention to the way these tools are being deployed and to “the laws we need to prevent online crimes.” 

There is “no overnight solution to the problem,” as finding a middle ground between AI, social media platforms, and society will take time and effort. Still, it is essential to invest in media literacy, educating people on how to navigate this AI-saturated information era. Users should learn to reflect on and make informed decisions about the content they encounter on social media.

Further, platforms should create mechanisms, labels, and categories indicating which content is AI-generated. Raising awareness in this way can help mitigate the harmful impact of disinformation campaigns:

“It’s like learning a new language. No one is going to be fluent in a few classes. But we need to take these steps in education and in the evolution of the platforms to prevent it from happening.”
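To make the labeling idea concrete, here is a minimal sketch in Python of how a platform might attach such a label at upload time. Everything in it (the Post structure, the label_ai_content function, the metadata fields) is a hypothetical illustration, not any platform’s actual system; real mechanisms would combine provenance standards such as C2PA, uploader self-disclosure, and classifier signals.

```python
# Hypothetical sketch: attaching an "ai-generated" label to a post.
# The Post class, field names, and signals below are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    media_metadata: dict                      # provenance info gathered at upload
    labels: list[str] = field(default_factory=list)


def label_ai_content(post: Post) -> Post:
    """Attach an 'ai-generated' label when upload metadata signals it.

    This checks just two simple, assumed flags; a production system would
    weigh several signals and may route uncertain cases to human review.
    """
    meta = post.media_metadata
    if meta.get("c2pa_generator_claim") or meta.get("uploader_disclosed_ai"):
        post.labels.append("ai-generated")
    return post


if __name__ == "__main__":
    post = Post("p1", {"uploader_disclosed_ai": True})
    print(label_ai_content(post).labels)  # ['ai-generated']
```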

Double-edged sword

Farrugia explains that AI tools can also be used as a countermeasure. In her work, artificial intelligence and language models have been used to detect cases of disinformation and patterns among malicious actors running disinformation campaigns.

“We collect data and then use AI to analyze and find patterns more quickly. So we gain speed in the fight against disinformation,” she says.
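As a rough illustration of that workflow, the sketch below trains a basic text classifier (TF-IDF features plus logistic regression, via scikit-learn) to flag posts whose wording resembles previously labeled disinformation. The tiny training set is a synthetic placeholder rather than real collected data, and the simple model stands in for the far richer language models and human review that a lab like the DFRLab would rely on.

```python
# Hypothetical sketch: "collect data, then use AI to find patterns faster."
# The training examples are synthetic placeholders, not real posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-ins for a corpus of human-labeled posts.
texts = [
    "BREAKING: secret lab admits the election was rigged, share before deleted!",
    "Officials certified the election results after a routine audit.",
    "Miracle cure banned by doctors, click to learn the truth they hide!",
    "The health ministry published updated vaccination guidance today.",
]
labels = [1, 0, 1, 0]  # 1 = flagged as likely disinformation, 0 = not flagged

# Fit a lightweight classifier over word unigrams and bigrams.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post; anything flagged would go to a human analyst for review.
new_post = ["Share now: hidden proof the results were faked!"]
print(model.predict_proba(new_post)[0][1])  # probability of the flagged class
```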