Global media and communication experts forecast a democratized information landscape for 2050, shaped by trusted AI narratives. A study based on a four-week expert survey conducted just before the arrival of ChatGPT reveals a tension between this optimism and concerns about misuse and misinformation. In their publication in the journal AI & Society, Katalin Feher, Lilla Vicsek, and Mark Deuze envisage AI as both a tool and an independent actor, advocating for AI-to-AI solutions to combat technological abuse.
From the findings emerges the Glasses Model of AI Trust, which balances hope against uncertainty, while the study underscores the need for responsible AI policies and research. Experts believe future generations can navigate these challenges, preserving social values and trust in AI advancements.
Why you should know
Oftentimes, negative forces like fear, denial, and anger dominate the AI-in-media debate. This worldwide study shows the other side of the coin. The study’s participants place significant faith in future generations to maintain a balance between the potential and risks of AI. As part of Innovation Origins’ trust in the future use of AI in journalism, this article was written by Laio, our AI-supported editor.
Assessing trust in AI within media and info-communication
Trust in artificial intelligence (AI) forms the linchpin of the future media and information-communication landscape. As an emergent field, Information Communication and Media (ICM) must grapple with integrating society, culture, and technology while contending with the trustworthiness of the AI tools it is developing. This trust issue is critical, given the rapid spread of AI-driven phenomena such as conversational media, deepfakes, and bot journalism, which pose new challenges to the accuracy and reliability of information dissemination.
The study, conducted via a survey of more than 300 experts, aimed to elucidate the visions and concerns of those at the forefront of AI in media. The participants, representing diverse regions and sectors, shared their insights over four weeks in 2022, just before the launch of ChatGPT. Their responses provide a detailed look into the future as imagined by those shaping it.
The Glasses Model of AI Trust
The Glasses Model of AI Trust is central to understanding the study’s findings. It symbolizes the delicate balancing act between the optimistic beliefs in AI’s potential to democratize information and the growing concerns over its capacity to misinform and manipulate. The model encapsulates the duality of experts’ perspectives, acknowledging the transformative promise of AI while remaining vigilant to its pitfalls.
Experts foresee a future where AI simplifies vast data into crisp, adaptable information tailored to media, person, and place. This future envisions a seamless collaboration between humans and machines, magnifying the reach and impact of human narratives through AI’s capabilities. However, the model also flags the risk of reliance on potentially biased and unreliable data, which could exacerbate misinformation and trust issues.
Universal access to information and AI’s role
A recurring theme throughout the responses is the aspiration for universal access to information. By 2050, experts predict that AI-driven media will be democratically available, providing personalized and unbiased narratives. This vision is supported by the belief that AI can control and mitigate its adverse effects, ensuring that information remains trustworthy and accessible to all.
In striking contrast, AI is acknowledged as a “black box” technology – sophisticated yet opaque, its inner workings obscured from human understanding. This duality presents a substantial challenge to trust in AI as the technology reshapes media and computer-mediated communication.
Challenges of AI-driven ICM transformation
The transformation AI brings to ICM systems is multifaceted. While experts anticipate cost-effective and productive operations, concerns about the potential for fake media, systemic bias, and misuse persist. These apprehensions highlight the necessity for a nuanced approach to AI deployment that considers socio-cultural values and the impact on trust.
Experts argue that AI-driven ICM has the potential to benefit users by enhancing their experience of their surroundings. Yet the trade-off is the risk of information overload, unfiltered by human judgment, which could lead to machine-dominated communication and news production.
Experts’ faith in future generations
The study’s participants place significant faith in future generations to maintain a balance between the potential and risks of AI. They believe that key social and human values will persist, with new generations continuing to build trust in emerging AI systems. This trust is seen as crucial to preserving democratic values amid the technological revolution.
This future-oriented optimism is, however, not without its caveats. Some experts express concerns about professional near-sightedness, suggesting that overemphasizing current trends may lead to an overly rosy outlook. There is an implicit warning here: without a critical and responsible approach to AI development, we risk underestimating its long-term impacts.
AI’s impact on society, economy, and culture
The experts acknowledge that while AI will inevitably alter societal, economic, and cultural landscapes, fundamental issues like profit maximization and societal asymmetry will likely remain unchanged. They highlight the importance of AI in simplifying complex datasets, potentially leading to more effective media that work hand in hand with human interests.
Despite these challenges, there is a consensus among respondents that AI has a pivotal role in the fight against misinformation, as demonstrated during the COVID-19 pandemic across various social media platforms. This belief underpins the notion that AI, if regulated properly, can serve as a powerful ally in maintaining the integrity of information in the future.
The road to 2050: AI’s evolving landscape
Looking towards 2050, the survey participants envision a world where AI breaks down language barriers, fostering a more integrated global internet. They see AI not just as technology but as a partner in progress, capable of addressing the infodemic and post-truth challenges of our time. This progress is contingent on developing technologies that can effectively verify information sources, mitigating the risks of deepfakes and synthetic media.
Nevertheless, experts are mindful of the potential for AI to be utilized for anti-democratic purposes, such as surveillance and intimidation. The call for cross-cultural AI ethics is clear, stressing the need for an ethical foundation to guide the responsible use of AI technology and ensure societal well-being in the distant future.