“What did OpenAI think of it? How did they rate our work?”
Out of 860 proposals, OpenAI chose ten, each awarded $100,000 to develop a project that would use AI to support democracy. One of them was the Eindhoven-based Common Ground initiative (initially called ‘Deliberation at Scale’). Under intense time pressure, the project was developed, released, tested, and successfully assessed. And now, it’s finished. In a LinkedIn post, CeesJan Mol shared his views on the initiative:
“Thursday night was the third and final test, now with 449 participants. And the application, now called Common Ground, worked! Not everything went well: “Some of the participants left the group in the middle of the discussion“; “I wasn’t sure if they could see my answers because no one responded to me.” But did this form of democracy work? Were people able to agree among themselves? Yep. Opinions were expressed 6,835 times on 602 statements, most of which were submitted by the participants themselves. Ultimately, we had 226 statements on which we could confidently say that people agreed. In short, engaging with each other on Common Ground promotes opinion-building!”
How did OpenAI respond? Mol: “The email we found in our inbox this morning: ‘We’re so impressed by what you’ve built, the new ideas you’ve implemented, and how you’ve inspired us and the other teams.’ They also asked whether we would consider how we’d like to move forward in our collaboration.”
The report by Common Ground
In an era where artificial intelligence (AI) is rapidly transforming every facet of our lives, the question of how to ensure its ethical and democratic deployment has become paramount. The final report by the Common Ground initiative sheds light on the importance of public input in shaping the future of AI and of democracy itself.
The report, titled “Democratic Inputs to AI,” is the culmination of research and public consultations. It emphasizes the need to integrate democratic values into the development and deployment of AI systems. The findings are a testament to the growing realization that for AI to be truly beneficial, it must be aligned with the values and aspirations of the people it serves.
Key Findings
- Public Engagement is Crucial: One of the primary takeaways from the report is the importance of public engagement in AI decision-making. The study found that most participants believe public input is essential in shaping the ethical guidelines and policies surrounding AI. This sentiment was echoed across various demographics, indicating a broadly shared desire for more democratic control over AI’s trajectory.
- Transparency and Accountability: The report highlights the public’s demand for greater transparency in AI systems. Participants expressed concerns about the “black box” nature of many AI algorithms, which often operate without clear explanations of their decision-making processes. There is a strong call for developers and policymakers to ensure that AI systems are transparent and can be held accountable for their actions.
- Addressing Bias and Discrimination: Another significant concern raised by participants was the potential for AI systems to perpetuate or amplify societal biases. The report suggests a pressing need to address these issues head-on, with participants advocating for rigorous testing and auditing of AI systems to ensure fairness.
- Education and Awareness: The study found that while many are enthusiastic about the potential benefits of AI, there is also a lack of understanding about its workings and implications. The report recommends increased efforts in educating the public about AI, ensuring that they are well-informed and can participate meaningfully in discussions about its future.
- Collaboration is Key: The report underscores the importance of collaboration between various stakeholders, including tech companies, governments, academia, and civil society. Such collaboration can ensure that AI is developed in a manner that is both innovative and aligned with democratic values.
Implications for the Future
The findings of the “Democratic Inputs to AI” report have clear implications for the future of AI development. The public wants a say in how AI is used and governed, and ignoring these sentiments could lead to mistrust of and resistance to AI technologies.
Moreover, the report’s recommendations provide a roadmap for policymakers, developers, and other stakeholders to ensure that AI is developed and deployed ethically. By prioritizing transparency, addressing biases, and fostering public engagement, we can pave the way for an AI future that is both innovative and democratic.