Information Technology
News Type
News
Date
Paragraphs

Introduction


Generative AI has become an incredibly attractive and widespread tool for people across the world. Alongside its rapid growth, AI tools present a host of ethical challenges relating to consent, security, and privacy, among others. As Generative AI has been spearheaded primarily by large technology companies, these ethical challenges — especially as viewed from the vantage point of ordinary people — risk being overlooked for the sake of market competition and profit. What is needed, therefore, is a deeper understanding of and attention to how ordinary people perceive AI, including its costs and benefits.

The Meta Community Forum Results Analysis, authored by Samuel Chang, James S. Fishkin, Ricky Hernandez Marquez, Ayushi Kadakia, Alice Siu, and Robert Taylor, aims to address some of these challenges. A partnership between CDDRL’s Deliberative Democracy Lab and Meta, the forum enables participants to learn about and collectively reflect on AI. The impulse behind deliberative democracy is straightforward: people affected by some policy or program should have the right to communicate about its contents and to understand the reasons for its adoption. As Generative AI and the companies that produce it become increasingly powerful, democratic input becomes even more essential to ensure their accountability. 

Motivation & Takeaways


In October 2024, the third Meta Community Forum took place. Its importance derives from the advancements in Generative AI since October 2023, when the last round of deliberations was held. One such advancement is the move beyond AI chatbots to AI agents, which can solve more complex tasks and adapt in real time to improve responses. A second is that AI has become multimodal, moving beyond the generation of text into images, video, and audio. These advancements raise new questions and challenges. As such, the third forum provided participants with the opportunity to deliberate on a range of policy proposals organized around two key themes: how AI agents should interact with users and how they should provide proactive and personalized experiences for them.

To summarize some of the forum’s core findings: the majority of participants value transparency and consent in their interactions with AI agents, as well as the security and privacy of their data. Conversely, they are less comfortable with agents autonomously completing tasks when this is not transparent to them. Participants have a positive outlook on AI agents but want control over their interactions. Regarding the deliberations themselves, participants rated the forum highly and felt that it exposed them to alternative perspectives. The deliberators also wanted to learn more about AI for themselves, as evidenced by their increased use of these tools after the deliberations. Future reports will explore the reasoning and arguments they used while deliberating.
 


 

Image: Map of where participants hailed from.


The participants of this Community Forum were representative samples of the general population from five countries: Turkey, Saudi Arabia, India, Nigeria, and South Africa. Participants from each country deliberated separately in English, Hindi, Turkish, or Arabic.



Methodology & Data


The deliberations involved around 900 participants from five countries: India, Nigeria, Saudi Arabia, South Africa, and Turkey. Participants varied in terms of age, gender, education, and urbanicity. Because the deliberative groups were recruited independently, the forum can be seen as five independent deliberations. Deliberations alternated between small group discussions and ‘plenary sessions,’ where experts answered questions drawn from the small groups. There were around 1,000 participants in the control group, who completed pre- and post-surveys but did not deliberate. The participant sample was representative with respect to gender, and the treatment and control groups were balanced on demography as well as on their attitudes toward AI. Before deliberating on the proposals, participants were presented with background materials as well as a list of costs and benefits to consider.
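The logic of this pretest/post-test control design can be sketched as a simple difference-in-differences calculation: the opinion change among deliberators, minus the change among the non-deliberating control group, isolates the effect of deliberation itself. The sketch below is illustrative only, with made-up numbers; it is not the Lab's actual analysis code.

```python
# Illustrative sketch of a pretest/post-test control comparison.
# All survey scores below are hypothetical, for demonstration only.

def mean(xs):
    return sum(xs) / len(xs)

def deliberation_effect(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences: the average change in the deliberating
    (treatment) group minus the average change in the control group,
    which absorbs any shift that would have happened without deliberating."""
    treat_change = mean(treat_post) - mean(treat_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treat_change - control_change

# Hypothetical 0-10 agreement scores with a claim about AI agents:
treat_pre = [5, 6, 4, 7, 5]
treat_post = [7, 8, 6, 8, 7]      # deliberators shifted upward
control_pre = [5, 6, 5, 6, 5]
control_post = [5, 6, 6, 6, 5]    # control group barely moved

effect = deliberation_effect(treat_pre, treat_post, control_pre, control_post)
print(round(effect, 2))  # prints 1.6
```

Because the control group completed the same two surveys without deliberating, a nonzero difference like this can be attributed to the deliberative event rather than to outside news or repeated surveying.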

In terms of the survey data, large majorities of participants had previously used AI, and these proportions increased by statistically significant margins after the forum. For example, in Turkey, usage rates increased from nearly 70% to 84%. In several countries, there were large increases in participants’ sense of AI’s benefits after deliberating, as well as a statistically significant increase in their interest in AI. The deliberations changed participants’ opinions about a host of claims; for example, “people will feel less lonely with AI” and “more proactive [agents] are intrusive” lost approval, whereas “AI agents’ capability to increase efficiency…is saving many companies a lot of time and resources” and “AI agents are helping people become more creative” gained approval. After deliberating, participants demonstrated an improved understanding of some factual aspects of AI, although the more technical aspects remain challenging. One example is AI hallucinations, that is, the generation of false or nonsensical outputs, usually because of flawed training data.
 


 

Image: Chart: How should AI agents remember users' past behaviors or preferences? (Percentage in favor)


Proposals


Participants deliberated on nineteen policy proposals. To summarize these briefly: in terms of whether and how AI remembers users’ past behaviors and preferences, participants preferred proposals that allowed users to make active choices, as opposed to this being a default setting or only being asked once. They also preferred being reminded about the ability of AI agents to personalize their experience, as well as agents being transparent with users about the tasks they complete. Participants preferred that users be educated on AI before using it, and that they be informed when AI is picking up on certain emotional cues and responding in “human-like” ways. They also preferred proposals whereby AI would ask clarifying questions before generating output. Finally, agents helping users with real-life relationships was seen as more permissible when the other person was informed. Across the proposals, gender was neither a significant nor a consistent determinant of how the proposals were rated. Ultimately, the Meta Community Forum offers a model for how informed public communication can shape AI and the ethical challenges it raises.

*Research-in-Brief prepared by Adam Fefer.

 
Hero Image: Agentic AI workflow automation concept illustration (iStock / Getty Images)
Subtitle

CDDRL Research-in-Brief [4-minute read]


There is a significant gap between what technology, especially AI technology, is being developed and the public's understanding of such technologies. We must ask: what if the public were not just passive recipients of these technologies, but active participants in guiding their evolution?

A group of technology companies convened by Stanford University’s Deliberative Democracy Lab will gather public feedback about complex questions the AI industry is considering while developing AI agents. This convening includes Cohere, Meta, Oracle, and PayPal, advised by the Collective Intelligence Project.

This Industry-Wide Forum brings together everyday people to weigh in on tech policy and product development decisions where there are difficult tradeoffs with no simple answers. With technology development moving so quickly, there is no better time than now to engage the public in understanding what an informed public would like AI technologies to do for them. The Forum is designed based on Stanford's method of Deliberative Polling, a governance innovation that gives the public’s voices a greater say in decision-making. The Forum will take place in Fall 2025. Findings from the Forum will be made public, and Stanford’s Deliberative Democracy Lab will hold webinars for the public to learn and inquire about them.

"We're proud to be a founding participant in this initiative alongside Stanford and other AI leaders," said Saurabh Baji, CTO of Cohere. "This collaborative approach is central to enhancing trust in agentic AI and paving the way for strengthened cross-industry standards for this technology. We're looking forward to working together to shape the future of how agents serve enterprises and people."

In the near term, AI agents will be expected to conduct a myriad of transactions on behalf of users, opening up considerable opportunities for value as well as significant risks. This Forum will improve product-market fit by giving companies foresight into what users want from AI agents; it will help build trust and legitimacy with users; and it will strengthen cross-industry relations in support of industry standards development over time.

"We support The Forum for its deliberative and collaborative approach to shaping public discourse around AI agents," said Prakhar Mehrotra, SVP of AI at PayPal. "Responsibility and trust are core business principles for PayPal, and through collaborative efforts like these, we seek to encourage valuable perspectives that can help shape the future of agentic commerce."

The Forum will be conducted on the AI-assisted Stanford Online Deliberation Platform, a collaboration between Stanford’s Deliberative Democracy Lab and Crowdsourced Democracy Team, where a cross-section of the public will deliberate in small groups and share their perspectives, their lived experiences, and their expectations for AI products. This deliberation platform has hosted Meta’s Community Forums over the past few years. The Forum will also incorporate insights from CIP's Global Dialogues, conducted on the Remesh platform.

“Community Forums provide us with people’s considered feedback, which helps inform how we innovate,” said Rob Sherman, Meta’s Vice President, AI Policy & Deputy Chief Privacy Officer. “We look forward to the insights from this cross-industry partnership, which will provide a deeper understanding of people’s views on cutting-edge technology.”

This methodology is rooted in deliberation, which provides representative samples of the public with baseline education on a topic, including options with associated tradeoffs, and asks them to reflect on that education as well as their lived experience. Deliberative methods have been found to offer more considered feedback to decision-makers because people have to weigh the complexity of an issue rather than offering a knee-jerk reaction.

"This industry-wide deliberative forum represents a crucial step in democratizing the discourse around AI agents, ensuring that the public's voice is heard in a representative and thoughtful way as we collectively shape the future of this transformative technology," said James Fishkin, Director of Stanford's Deliberative Democracy Lab.

This Industry-Wide Forum represents a pivotal step in responsible AI development, bringing together technology companies and the public to address complex challenges in AI agent creation. By leveraging Stanford's Deliberative Polling methodology and making findings publicly available, the initiative promises to shape the future of AI with enhanced transparency, trust, and user-centric focus. Find out more about Stanford’s Deliberative Democracy Lab at deliberation.stanford.edu.

Media Contact: Alice Siu, Stanford Deliberative Democracy Lab

Hero Image: Futuristic 3D render (Steve Johnson via Unsplash)


In October 2024, Meta, in collaboration with the Stanford Deliberative Democracy Lab, implemented the third Meta Community Forum, expanding on the October 2023 deliberations regarding Generative AI. For this Community Forum, participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’ Since the last Community Forum, the development of Generative AI has moved beyond AI chatbots, and users have begun to explore AI agents — a type of AI that can respond to written or verbal prompts by performing actions on the user’s behalf. Beyond text generation, users have also begun to explore multimodal AI, where tools can generate images, videos, and audio as well. The growing landscape of Generative AI raises more questions about users’ preferences when it comes to interacting with AI agents. This Community Forum focused deliberations on how interactive and proactive AI agents should be when engaging with users. Participants considered a variety of tradeoffs regarding consent, transparency, and the human-like behaviors of AI agents. These deliberations shed light on what users are thinking now amidst the changing technology landscape in Generative AI.

Nearly 900 participants from five countries (India, Nigeria, Saudi Arabia, South Africa, and Turkey) took part in this deliberative event. The sample for each country was recruited independently, so this Community Forum should be seen as five independent deliberations. In addition, 1,033 people participated in the control group, which did not take part in any discussions and only completed the pre- and post-surveys. The main purpose of the control group is to demonstrate that any changes observed after deliberation are attributable to the deliberative event itself.

Publication Type: Reports
Publication Date: April 2025
Authors: James S. Fishkin, Alice Siu

In October 2024, Meta, in collaboration with the Stanford Deliberative Democracy Lab, implemented the third Meta Community Forum. This Community Forum expanded on the October 2023 deliberations regarding Generative AI. For this Community Forum, the participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’

At a high level, Meta used this Forum to:

  • Expand public input into AI development beyond the Global North and into the Global South. This latest Forum involved roughly 1,000 people from India, Turkey, Nigeria, Saudi Arabia, and South Africa.
  • Push the boundaries on the topics that the public will have input into. We moved from the foundational principles people wanted to see in GenAI towards addressing specific value and risk tradeoffs associated with issues like personalization and human-like AI.


The Forum resulted in several key findings on the principles that should underpin AI agents, including:
 

  • Participants supported AI agents remembering their prior conversations to personalize their experience, especially if transparency and user controls are in place.
  • Participants were more supportive of culturally/regionally-tailored AI agents compared to standardized AI agents.
  • Participants were in favor of human-like AI agents that can respond to emotional cues.
  • Across topics, participants consistently favored options for AI to include transparency and user control features.

Maturing our Community Forum Program


Beyond the findings of any one Forum, the Deliberative Democracy Lab and Meta have heard important feedback from stakeholders and have implemented several programmatic changes to mature our program. These include:
 

  • More disclosure around the impact of results: Meta will share more information about how results are being actioned within the company on its Transparency Center page, which will be updated throughout the year.
  • Following up with participants: We heard the importance of going back to participants to explain what we learned from their input and what we are doing with it. The Deliberative Democracy Lab will be hosting calls with participants from each of our past Community Forums, dating back to 2022, to update them on the findings from the Forum and Meta’s response.
  • Supporting AI deliberation: A team of Meta AI experts has begun partnering with the Deliberative Democracy Lab to conduct research on how AI might further scale deliberation and optimize the Community Forum process. This includes, but is not limited to, using AI to aggregate themes that are emerging in discussions in real time and support engagement between participants and experts in plenary sessions.
  • Supporting external research: Meta is supporting a consortium of independent researchers from around the world who will evaluate the data from its Forums and publish research papers on the deliberations and results. This will culminate in an academic conference later this year.

Hero Image: Back view of an anonymous woman talking to a computer chatbot while sitting at home (Getty Images)


Economic growth is uneven within many developing countries as some sectors and industries grow faster than others. India is no exception, where anemic performance in manufacturing has been offset by robust growth in services. Standard scholarly explanations fail to explain this kind of variation. For instance, the factor endowments that are required for services—such as an educated workforce or access to electricity and other infrastructure—should also complement manufacturing. Reciprocally, if a state’s institutions hold back manufacturing, they should also impair growth in services. Why have services in India outperformed manufacturing? We examine India’s performance in the computing industry, where a dynamic software services sector has emerged even as its computer hardware manufacturing sector has flagged. We argue that the uneven outcomes between the software and hardware sectors are due to the variable needs of the respective sectors and the state’s capacity to coordinate agencies. The policies required to promote the software sector needed minimal coordination between state agencies, whereas the computer hardware sector required a more centralized state apparatus for successful state-business engagement. Domestic and transnational political networks were critical for the success of the software sector, but similar networks could not deliver the same benefits to the computer hardware industry, which required more coordination-intensive policies than software. A state’s ability to coordinate industrial policy is thus a critical determinant for effective sectoral political networks, shaping sectoral variations within an economy.

Publication Type: Journal Articles
Journal Publisher: Studies in Comparative International Development
Authors: Dinsha Mistree

The Chinese government is revolutionizing digital surveillance at home and exporting these technologies abroad. Do these technology transfers help recipient governments expand digital surveillance, impose internet shutdowns, filter the internet, and target repression for online content? We focus on Huawei, the world’s largest telecommunications provider, which is partly state-owned and increasingly regarded as an instrument of its foreign policy. Using a global sample and an identification strategy based on generalized synthetic controls, we show that the effect of Huawei transfers depends on preexisting political institutions in recipient countries. In the world’s autocracies, Huawei technology facilitates digital repression. We find no effect in the world’s democracies, which are more likely to have laws that regulate digital privacy, institutions that punish government violations, and vibrant civil societies that step in when institutions come under strain. Most broadly, this article advances a large literature about the geopolitical implications of China’s rise.

Publication Type: Journal Articles
Journal Publisher: Perspectives on Politics
Authors: Erin Baggott Carter, Brett Carter
Published online 2025: 1-20

We are on the verge of a revolution in public sector decision-making processes, where computers will take over many of the governance tasks previously assigned to human bureaucrats. Governance decisions based on algorithmic information processing are increasing in number and scope, contributing to decisions that impact the lives of individual citizens. While significant attention in recent years has been devoted to normative discussions on fairness, accountability, and transparency related to algorithmic decision-making based on artificial intelligence, less is known about citizens’ considered views on this issue. To put society in the loop, a Deliberative Poll was thus carried out on the topic of using artificial intelligence in the public sector, as a form of in-depth public consultation. The three use cases selected for deliberation were refugee reallocation, a welfare-to-work program, and parole. A key finding was that after having acquired more knowledge about the concrete use cases, participants were overall more supportive of using artificial intelligence in the decision processes. The event was set up with a pretest/post-test control group experimental design, and as such, the results offer experimental evidence to complement extant observational studies showing positive associations between knowledge and support for using artificial intelligence.

Publication Type: Journal Articles
Journal Publisher: AI & SOCIETY
Authors: James S. Fishkin, Alice Siu
AI in Education Deliberative Poll for High School Educators

Are you worried about the impact AI can have on your classroom or excited about its potential? Do you wonder how you can utilize AI in your teaching or do you feel like it dehumanizes the learning process? Are you eager to learn about what “Artificial Intelligence” entails and how it can impact your classroom? 

If any of these questions have crossed your mind, we invite you to join Stanford's Deliberative Democracy Lab on Saturday, May 18, from 10:00 am to 2:45 pm (Pacific Time) to discuss with fellow educators how AI should be used and regulated in schools. You will discuss policies regarding the use of AI in schools — whether it should be banned from the Wi-Fi or left up to teachers and students to discern what “appropriate usage” means. You will also get to meet experts in the field and ask them questions.

This will be an online event hosted on Stanford's Online Deliberation Platform. Small group deliberation sessions will alternate with expert panel sessions that include Q&A time. Further details will be emailed to you.

SCHEDULE

10:00 am - 11:15 am: First Small Group Deliberation Session

11:15 am - 12:00 pm: Plenary Session 1

12:00 pm - 12:45 pm: Break

12:45 pm - 2:00 pm: Second Small Group Deliberation Session

2:00 pm - 2:45 pm: Plenary Session 2

This event is being led by students at The Quarry Lane School, Saratoga High School, and Lynbrook High School.

Online.

Open to high school educators only.

Workshops
Authors
Rachel Owens

In a CDDRL seminar series talk, Daniel Chen — Director of Research at the French National Center for Scientific Research and Professor at the Toulouse School of Economics — examined whether data science can improve the functioning of courts and unlock their impact on economic development. Improving courts’ efficiency is paramount to citizens' confidence in legal institutions and proceedings.

In a nationwide experiment in Kenya, Chen and his co-authors employed data science techniques to identify the causes of case backlog in the judicial system. They developed an algorithm to identify major sources of court delays for each of Kenya’s 124 court stations. Based on the algorithm, they compiled a one-page report — specific to the local court and tailored to that month’s proceedings — which provided an analysis of court adjournments, reasons for delay, and tangible action items.

To measure the effect of these one-pagers, Chen established two treatment groups and one control. Those in the first treatment group received a singular one-pager, sent just to the courts. The second received one for the courts and one for a Court User Committee (CUC). The committee, which consists of lawyers, police, and members of civil society, was asked to discuss the one-pagers during their quarterly meetings. 

To measure the relevant effects, the authors examined three primary outcomes, namely: (1) adjournment (or case delay) rates; (2) quality and citizen satisfaction; and (3) measures of economic development, including contracting, investment, and business creation. 

Results showed the intervention was associated with a 22 percent reduction in adjournments, or a 120-day decline in trial length. There was no effect on either the number of cases filed or the proxies for quality. Citizen satisfaction also rose, with fewer complaints about speed and quality, and the intervention was associated with an increase in formal written contracts and higher wages.

Hero Image: Daniel Chen presents in CDDRL's research seminar on November 9, 2023. (Photo: Rachel Cody Owens)
Subtitle

Improving courts’ efficiency is paramount to citizens' confidence in legal institutions and proceedings, explains Daniel Chen, Director of Research at the French National Center for Scientific Research and Professor at the Toulouse School of Economics.
