Democracy and Disinformation: How can Information Systems and the Media Landscape work for Business, Consumers and Democracy?
Democracy and Disinformation, an event organized by EuroCham and GABA, drew an engaged crowd on April 24 at SAP Labs in San Francisco. Democracy is on the ballot in 40 national elections around the world this year, and this discussion addressed the challenge of maintaining the foundations for democracy and prosperity in this moment of rapid technological advance in our media and information ecosystems.
Business leaders, public officials and voters all require reliable information on which to base their decisions. Artificial Intelligence-driven tools such as generative AI are rapidly developing and being adopted within enterprise and government systems. Venerated news organizations and trusted local outlets have lost subscribers and viewers to a panoply of platforms and influencers peddling disinformation, fringe ideology, and conspiracy.
How can Information Systems and the Media Landscape work for Business, Consumers and Democracy? It is a complex topic, and each panelist contributed a unique perspective to the discussion: traditional news media, technology, digital law, and the intersection of tech enterprise and state government. Alexander Schaefer is Vice President of Engineering and Head of SAP BTP Innovation North America and leads a team with the mission to build relevant, reliable, and responsible Business AI. Sally Lehrman is an award-winning journalist and founder/Chief Executive of The Trust Project, an international collaboration that aims to strengthen public confidence in the news through accountability and transparency. Florence G’Sell is Professor of Private Law at the University of Lorraine, leads the Digital, Governance and Sovereignty Chair at Sciences Po, and is currently a visiting professor in the Stanford Cyber Policy Center’s program on the Governance of Emerging Technologies. Nolwenn Godard is a software executive, former Director of the California Office of Data and Innovation, and former Co-President of the Alliance for Inclusive Artificial Intelligence at UC Berkeley’s Haas School of Business.
The discussion touched on the current threats posed by the generation and spread of misinformation as well as deliberate disinformation, but the focus was on solutions: how to counteract manipulation and elevate reliable, fact-based information. The solutions are more diverse than most people realize.
A free and independent press is essential to any thriving democratic system. The right to press freedom is enshrined in the founding documents of the United Nations as well as in many national constitutions, including that of the US. Why? Because information and knowledge are power, and the intentional manipulation of information is a threat to popular sovereignty and an invitation to tyranny. Reputable news organizations adhere to professional standards such as maintaining a firewall between news and opinion and reporting on their own mistakes. Traditional news and media have been hit hard by the digital revolution, which has undermined both their historical financial models and the public’s trust in their reporting.
Research by The Trust Project indicates that when people feel they cannot discern which information sources to trust, they withdraw from all sources; but when they can trust a news organization, they are more willing to pay for a subscription. Over 300 news organizations around the world have committed to The Trust Project’s principles of socially responsible journalism, display its 8 Trust Indicators, and have passed its compliance review. For more on innovative efforts related to trusted journalism, see the in-depth research by the Center for News, Technology & Innovation (CNTI), an independent global policy research center that seeks to encourage independent, sustainable media, maintain an open internet, and foster informed public policy conversations. Sally stressed,
“Companies have a major stake in trustworthy news. They rely on fair and accurate news coverage, even in difficult situations. Decision-making at every level requires it. They know the value of employee health and welfare (think Covid). This is why we need ESG and T – for Trust. At the Trust Project, we are thinking more and more about ways corporations can get involved in strengthening a safe and honest information ecosystem. To share our 8 Trust Indicators in trainings, civic outreach, and beyond.”
New technology solutions alone cannot protect public officials, voters, companies, and consumers from the impacts of influencers and malign actors. Alexander noted that generating and spreading disinformation is too easy and too profitable today; one simple step for platform providers would be to charge a fee for setting up an account. AI-powered content moderation and automated tools and plugins can cross-reference claims made on digital platforms against reliable sources. As an example of a crowdsourced fact-checking program, Alexander described Community Notes on X (formerly Birdwatch on Twitter), where users can add context to potentially misleading posts. Other users rate the helpfulness of these contributions, and the most highly rated notes are prioritized.
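For readers curious how such crowdsourced ranking can resist one-sided pile-ons, here is a minimal, hypothetical sketch in Python. It is not X’s actual algorithm (which uses a bridging-based matrix factorization); it only illustrates the core idea that a note surfaces when raters from differing viewpoints both find it helpful. All names and thresholds below are invented for illustration.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Rating:
    rater_id: str
    cluster: str   # assumed viewpoint cluster, e.g. "A" or "B"
    helpful: bool

def surface_notes(ratings_by_note, min_ratio=0.7, min_per_cluster=3):
    """Return note IDs rated helpful by raters in every viewpoint cluster."""
    surfaced = []
    for note_id, ratings in ratings_by_note.items():
        votes = defaultdict(list)
        for r in ratings:
            votes[r.cluster].append(r.helpful)
        # A note surfaces only if each cluster supplies enough ratings
        # AND a high share of them are "helpful" -- so a note boosted by
        # one side alone never clears the bar.
        if len(votes) >= 2 and all(
            len(v) >= min_per_cluster and sum(v) / len(v) >= min_ratio
            for v in votes.values()
        ):
            surfaced.append(note_id)
    return surfaced
```

The design choice worth noticing is that agreement across clusters, not raw vote volume, is what promotes a note.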
Sally emphasized that a social media platform is not a public space but a commercial one. Rather than promoting real discussion, platforms increasingly push emotional content and opinion over facts. Social media and other platforms have many options for limiting the spread of harmful information. They can improve transparency by labeling the origin and spread of information. They can analyze user behavior patterns, engagement metrics, and content consumption habits to identify potential disinformation campaigns and coordinated inauthentic behavior. Platforms can also adjust their algorithms to prioritize authoritative and trusted sources, but people must always be involved to monitor the algorithms and content. While humans are naturally subjective and biased, people and organizations can, and need to, develop an ethos of impartiality and professionalism.
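As one illustration of the behavioral analysis Sally mentioned, the hypothetical Python sketch below flags a single simple coordination signal: many distinct accounts posting near-identical text within a short time window. Real detection systems combine many behavioral and network signals; the function, thresholds, and data shape here are assumptions, not any platform’s actual method.

```python
from collections import defaultdict

def flag_coordinated_posts(posts, window_secs=300, min_accounts=5):
    """posts: iterable of (account_id, timestamp, normalized_text) tuples.

    Returns the texts that at least `min_accounts` distinct accounts
    posted within any `window_secs`-long window.
    """
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))
    flagged = []
    for text, items in by_text.items():
        items.sort()  # order each text's posts by timestamp
        lo = 0
        for hi in range(len(items)):
            # Shrink the window until it spans at most window_secs.
            while items[hi][0] - items[lo][0] > window_secs:
                lo += 1
            accounts = {a for _, a in items[lo:hi + 1]}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```

In practice such a flag would be one feature among many, feeding human review rather than automatic takedowns, in line with Sally’s point that people must stay in the loop.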
Companies are developing governance frameworks for AI, their information systems, and platforms. Alexander emphasized this critical need with the example of SAP,
“Reliable and truthful information, in particular from Generative AI, is crucial for both individuals and businesses. Our customers rely on accurate analytics and data to make critical business decisions. At SAP, we are building Business AI that is responsible, reliable, and relevant, governed by SAP’s Global AI Ethics Policy.”
SAP’s own chatbot draws exclusively from high-quality business data and follows the global AI policy of not using customer data unless customers agree to it. To limit the generation of misinformation, Alexander explained the importance of “constitutional AI,” a framework and set of principles that help make advanced AI systems more reliable, robust, and aligned with human values as they become increasingly capable. Such a framework creates legal and ethical boundaries (such as truthfulness and human rights) for the AI system, awareness of its own uncertainty, and human oversight. Alexander stressed that rules will only go so far: “We need individuals who can assess and critically think about information.”
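To make the pattern concrete, here is a minimal, hypothetical sketch of such a guardrail loop in Python. The principles, the callables `generate` and `critique`, and the escalation behavior are assumptions for illustration, not SAP’s actual implementation: a draft answer is checked against written principles, revised on violation, and routed to a human when the system is uncertain.

```python
# Illustrative principles; a real "constitution" would be far richer.
PRINCIPLES = [
    "Be truthful; do not state unverified claims as fact.",
    "Respect human rights and applicable law.",
    "Disclose uncertainty instead of fabricating an answer.",
]

def answer_with_constitution(question: str, generate, critique) -> str:
    """generate(prompt) -> str; critique(draft, principle) -> 'ok' | 'violates' | 'uncertain'."""
    draft = generate(question)
    for principle in PRINCIPLES:
        verdict = critique(draft, principle)
        if verdict == "violates":
            # Self-revision step: ask the model to rewrite the draft
            # so it complies with the violated principle.
            draft = generate(f"Revise to comply with '{principle}':\n{draft}")
        elif verdict == "uncertain":
            # Human oversight: ambiguous cases go to a reviewer
            # rather than being answered automatically.
            return f"Escalated for human review: {question!r}"
    return draft
```

The point of the sketch is the shape of the loop, not the specific principles: boundaries are written down, the system checks itself against them, and uncertainty triggers human judgment instead of a confident guess.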
There are social factors behind the intentional spread of disinformation. Loneliness and disconnection play a role in susceptibility to disinformation, according to research by The Trust Project. This corresponds to research by behavioral economist Dan Ariely on why people knowingly spread disinformation: 1) they feel overwhelmed by complex issues and seek simple explanations; 2) they feel put down by people who have a better grasp of the issues and seek community with like-minded individuals; 3) they can level up in their newfound communities by inventing and spreading ever more salacious untruths. While emotional, personality, and social factors are at play, developing critical-thinking skills throughout the community is an important step, and the 8 Trust Indicators provide helpful guidance on how we consume information.
Transparency is essential. “When you are consciously spreading false information, you are killing democracy,” noted Florence G’Sell. She described current Chinese policy developments regulating disinformation and stressed that while we want to protect people from fake news, we also have to protect freedom of speech. She explained that the EU AI Act regulates providers of AI, deployers of AI, and any provider or deployer established outside the EU if the output is used in the EU. Disclosure is mandatory for AI systems: people must be aware when they are interacting with a chatbot; when they are viewing synthetic content, whether audio, image, video, or text (hence mandatory watermarking); when they are viewing a deepfake; and when a newspaper or other media outlet publishes AI-generated or manipulated text.
In California, Nolwenn developed Governor Newsom’s Executive Order N-12-23 on Generative Artificial Intelligence (GenAI) and the State of California Benefits and Risks of Generative Artificial Intelligence Report. She explained the complexity of applying critical standards to state procurement across many agencies when different AI systems are increasingly embedded in the tools and information systems the state uses. For more on state policy developments, see the California Initiative for Technology and Democracy (CITED), a project of California Common Cause, and its detailed policy proposal for state-level solutions to the threats that disinformation, AI, deepfakes, and other emerging technologies pose to democracy.
In sum, protecting democracy from disinformation requires a combination of technology solutions, smart standards and enforceable regulation, critical thinking skills on the part of consumers, and ethical leadership in corporate governance in the development and deployment of technology tools and digital platforms. It takes all of us to protect and maintain a prosperous economy and thriving democracy.
For more great pictures of the event, please see our event photo album here.