Technology

On January 22, South Korea introduced its AI Basic Act, which it described as “the world’s first comprehensive body of laws to regulate artificial intelligence.” The government claims the legislation will help propel the country to the forefront of the global race for AI leadership by establishing a “foundation for trust” while also protecting the interests of citizens.

Publication Type: Commentary
Journal Publisher: Tech Policy Press
Authors: Charles Mok

Europe’s non-coercive form of global influence on technology governance faces new challenges and opportunities in the world of artificial intelligence regulations and governance. As the United States and China pursue divergent models of competition and control, Europe must evolve from exporting regulation to exercising genuine governance. The challenge is to transform regulatory strength into strategic capability, while balancing human rights, innovation, and digital sovereignty. By advancing a new Brussels Agenda grounded in values, institutional coherence, and multi-stakeholder collaboration, Europe can reaffirm its global role, demonstrating that ethical governance and technological ambition don’t need to be opposing forces in the age of intelligent systems.

ABOUT THE VOLUME

Designing Europe’s Future: AI as a Force of Good

AI is not just a technological tool; it is a transformative force that can make our societies more prosperous, sustainable, and free – if we dare to embrace it.

Publication Type: Book Chapters
Subtitle: Essay within "Designing Europe’s Future: AI as a Force of Good," published by the European Liberal Forum EUPF (ELF), edited by Francesco Cappelletti, Maartje Schulz, and Eloi Borgne.
Journal Publisher: European Liberal Forum EUPF
Authors: Charles Mok

Motivation & Overview


India’s services sector is internationally renowned and has helped propel the country’s economic growth. Indeed, in recent years, a majority of the value added to India’s GDP has been concentrated in services. Especially noteworthy are India’s software and computing services, which include large multinational firms like Infosys and Tata Consultancy Services.

Yet as Indian software has flourished, the growth of its computer hardware manufacturing has been sluggish. Tellingly, India is still a net importer of hardware and other electronics. At first glance, this divergence is puzzling because both the software and hardware sectors should have benefited from India’s educated labor pool and infrastructure. How can these different sectoral outcomes be explained?

Fig. 1: Electronics production value compared to software and software service revenues.

In “Comparing Advantages in India’s Computer Hardware and Software Sectors,” Dinsha Mistree and Rehana Mohammed offer an explanation in terms of state capacity to meet the different functional needs of each sector. Their account of India’s computing history emphasizes the inability of various state ministries and agencies to agree on policies that would benefit the hardware sector, such as tariffs. Meanwhile, cumbersome rulemaking procedures inherited from British colonialism impeded the state’s flexibility. Although this disadvantaged India’s hardware sector, its software sector needed comparatively less from the state, building instead on international networks and the efforts of individual agencies.

The authors provide a historically and theoretically rich account of the political forces shaping India’s economic rise. The paper not only compares distinct moments in Indian history but also draws parallels with other landmark cases, like South Korea’s 1980s industrial surge. Such a sector-based analysis could be fruitfully applied to understand why different industries succeed or lag in emerging economies. 

Different Sectors, Different Needs


In order to become competitive, both domestically and (especially) internationally, hardware manufacturers often need a great deal from the state, under what the authors call a “produce and protect regime.” This can include the construction of factories and the formation of state-owned enterprises (SOEs), as well as tariffs to reduce competition or labor laws that restrict union strikes. Perhaps most importantly, manufacturers need a state whose legislators and bureaucrats can coordinate with each other in response to market challenges. Such a regime is incompatible with excessive “red tape” or with the “capture” of regulators by narrow interest groups. Because customers tend to view manufactured goods as “substitutable” with one another, firms face intense competition over price and quality.

Fig. 2: Inter-agency coordination required for sectoral success.

The situation is very different for service providers, whose success depends on building strong relationships with customers. States are not essential to this process, even if their promotional efforts can be helpful. Coordination across government agencies is similarly less important, as just one agency could provide tax breaks or host promotional events that benefit service providers. Compared with manufacturing, customers tend to view services as less substitutable — they are more intangible and customizable, which renders competition less fierce. Understanding India’s computing history reveals that the state’s inability to meet hardware manufacturers’ needs severely constrained the sector’s growth. 

The History of Indian Computing


Although India inherited a convoluted bureaucracy from the British Raj, the future of its computing industry in the 1960s seemed promising: political elites in New Delhi supported a produce-and-protect regime, relevant agencies and SOEs were created, and foreign computing firms like IBM successfully operated in the country. 

Yet by the 1970s, some bureaucrats and union leaders feared that automation would threaten the federal government’s functioning and India’s employment levels, respectively. Strict controls in both the public and private sectors were thus adopted, for example, requiring trade unions — which took a strong anti-computer stance — to approve the introduction of computers in specific industries. The authors make special mention of India’s semiconductor industry. It arguably failed to develop due to lackluster government investment, the need for manufacturers to obtain multiple permits across agencies, decision makers ignoring recommendations from specialized panels, and so on.

Meanwhile, implementing protectionist policies proved challenging. For example, decisions to allow the importation of previously banned components required permission from multiple ministries and agencies. After India’s 1970s balance-of-payments crisis, international companies deemed inessential were forced to dilute their equity to 40% and take on an Indian partner. IBM then left the Indian market. At the same time, SOEs faced growing competition over government contracts and workers, owing to the growth of state-level SOEs.

The mid-1980s represented a partial turning point as Rajiv Gandhi became Prime Minister and liberalized the computing industry. Within weeks, Rajiv introduced a host of new policies and shifted the government’s focus from supporting public sector production to promoting private firms, which would no longer face manufacturing limits and would be eligible for duty exemptions. Changes to tariff rates and import limits would not require approval from multiple agencies. Meanwhile, international firms reengaged with Indian markets via the building of satellite links, facilitating cross-continental work, such as between Citibank employees in Mumbai and Santa Cruz.

However, this liberalizing period was undermined and partially reversed after 1989, when Rajiv’s Congress Party (INC) lost its legislative majority and public policy became considerably more fragmented. Anti-computerization forces, especially the powerful Indian trade unions, worked to stymie Rajiv’s reforms. Pro-market reformists were forced out of their positions in Indian bureaucracies. Rajiv was assassinated in 1991, after which Congress formed a minority government with computer advocate P. V. Narasimha Rao as PM. Yet all of this occurred at a delicate time, as India was at risk of defaulting and had almost completely exhausted its foreign exchange.

By the late 1990s, both the hardware and software sectors should have benefited from the rising global demand for computers, yet India’s history of poor state coordination hindered manufacturers. Meanwhile, software firms were able to take advantage of global opportunities given their comparatively limited needs from state actors and political networks, for example by helping European Union banks convert their computer systems to the euro. Ultimately, the Indian state has powerfully shaped the fortunes of these different sectors.

*Research-in-Brief prepared by Adam Fefer.

Subtitle: CDDRL Research-in-Brief [4-minute read]

Recent reporting on Meta’s internal AI guidelines serves as a stark reminder that the rules governing AI behaviors are frequently decided by a small group of the same people, behind closed doors. The sheer scale of work every AI company grapples with, from determining ethics and mapping acceptable behaviors to enforcing content policies, affects millions of people through processes that the public has no visibility into.

The truth is that this kind of siloed decision-making happens constantly across the industry.

Tech policy, particularly AI policy, is often so complex and evolves so rapidly that everyday perspectives are not easily captured. As consumers, we’ve grown accustomed to a system where the most important decisions about technology governance happen in exclusive settings.

But what if we flipped the script? What if users helped create the rules?

Publication Type: Journal Articles
Journal Publisher: Tech Policy Press
Authors: Alice Siu
CDDRL Honors Student, 2025-26

Major: Political Science
Hometown: Naperville, Illinois
Thesis Advisor: Jonathan Rodden

Tentative Thesis Title: Broadband for All: Historical Lessons and International Models for U.S. Internet Policy

Future aspirations post-Stanford: After completing my master's in computer science, I hope to go to law school and work in technology law.

A fun fact about yourself: I started lion dancing when I came to college!


Introduction


Generative AI has become an incredibly attractive and widespread tool for people across the world. Alongside its rapid growth, AI tools present a host of ethical challenges relating to consent, security, and privacy, among others. Because Generative AI has been spearheaded primarily by large technology companies, these ethical challenges — especially as viewed from the vantage point of ordinary people — risk being overlooked for the sake of market competition and profit. What is needed, therefore, is a deeper understanding of and attention to how ordinary people perceive AI, including its costs and benefits.

The Meta Community Forum Results Analysis, authored by Samuel Chang, James S. Fishkin, Ricky Hernandez Marquez, Ayushi Kadakia, Alice Siu, and Robert Taylor, aims to address some of these challenges. A partnership between CDDRL’s Deliberative Democracy Lab and Meta, the forum enables participants to learn about and collectively reflect on AI. The impulse behind deliberative democracy is straightforward: people affected by some policy or program should have the right to communicate about its contents and to understand the reasons for its adoption. As Generative AI and the companies that produce it become increasingly powerful, democratic input becomes even more essential to ensure their accountability. 

Motivation & Takeaways


In October 2024, the third Meta Community Forum took place. Its importance derives from the advancements in Generative AI since October 2023, when the last round of deliberations was held. One such advancement is the move beyond AI chatbots to AI agents, which can solve more complex tasks and adapt in real time to improve their responses. A second advancement is that AI has become multimodal, moving beyond the generation of text and into images, video, and audio. These advancements raise new questions and challenges. As such, the third forum provided participants with the opportunity to deliberate on a range of policy proposals, organized around two key themes: how AI agents should interact with users and how they should provide proactive and personalized experiences for them.

To summarize some of the forum’s core findings: the majority of participants value transparency and consent in their interactions with AI agents as well as the security and privacy of their data. In turn, they are less comfortable with agents autonomously completing tasks if this is not transparent to them. Participants have a positive outlook on AI agents but want to have control over their interactions. Regarding the deliberations themselves, participants rated the forum highly and felt that it exposed them to alternative perspectives. The deliberators wanted to learn more about AI for themselves, which was evidenced by their increased use of these tools after the deliberations. Future reports will explore the reasoning and arguments that they used while deliberating.
 


 

Map of where participants hailed from.


The participants in this Community Forum were representative samples of the general population in five countries: Turkey, Saudi Arabia, India, Nigeria, and South Africa. Participants from each country deliberated separately in English, Hindi, Turkish, or Arabic.



Methodology & Data


The deliberations involved around 900 participants from five countries: India, Nigeria, Saudi Arabia, South Africa, and Turkey. Participants varied in terms of age, gender, education, and urbanicity. Because the deliberative groups were recruited independently, the forum can be seen as five independent deliberations. Deliberations alternated between small group discussions and ‘plenary sessions,’ where experts answered questions drawn from the small groups. There were around 1,000 participants in the control group, who completed pre- and post-surveys but did not deliberate. The participant sample was representative with respect to gender, and the treatment and control groups were balanced on demography as well as on their attitudes toward AI. Before deliberating on the proposals, participants were presented with background materials as well as a list of costs and benefits to consider.

In terms of the survey data, large majorities of participants had previously used AI, and there was a statistically significant increase in these proportions after the forum. For example, in Turkey, usage rates increased from nearly 70% to 84%. In several countries, there were large increases in participants’ sense of AI’s positive benefits after deliberating, as well as a statistically significant increase in their interest in AI. The deliberations changed participants’ opinions about a host of claims; for example, “people will feel less lonely with AI” and “more proactive [agents] are intrusive” lost approval, whereas “AI agents’ capability to increase efficiency…is saving many companies a lot of time and resources” and “AI agents are helping people become more creative” gained approval. After deliberating, participants demonstrated an improved understanding of some factual aspects of AI, although the more technical aspects remain challenging. One example is AI hallucinations, that is, the generation of false or nonsensical outputs, usually because of flawed training data.
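The brief does not state which statistical test lies behind phrases like "statistically significant increase"; purely as an illustration, the sketch below runs an unpaired two-proportion z-test on figures like Turkey's pre/post usage rates. The per-country sample size of 180 is a placeholder (roughly 900 participants split across five countries), not a number reported in the study, and paired pre/post answers from the same respondents would ordinarily call for a paired test instead.

```python
import math


def two_proportion_ztest(p1: float, n1: int, p2: float, n2: int) -> tuple[float, float]:
    """Unpaired two-proportion z-test for the change from p1 to p2.

    Returns (z statistic, two-sided p-value). Illustrative only: paired
    pre/post responses from the same people would normally call for a
    paired test such as McNemar's instead.
    """
    x1, x2 = p1 * n1, p2 * n2                       # implied counts of AI users
    p_pool = (x1 + x2) / (n1 + n2)                  # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal CDF via erf
    return z, p_value


# Hypothetical figures: usage rising from ~70% to 84% (as in Turkey), with a
# placeholder sample of ~180 deliberators per country (not reported values).
z, p = two_proportion_ztest(0.70, 180, 0.84, 180)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```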
 


 

Chart: How should AI agents remember users' past behaviors or preferences? (percentage in favor)


Proposals


Participants deliberated on nineteen policy proposals. To summarize these briefly: In terms of whether and how AI remembers users’ past behaviors and preferences, participants preferred proposals that allowed users to make active choices, as opposed to this being a default setting or only being asked once. They also preferred being reminded about the ability of AI agents to personalize their experience, as well as agents being transparent with users about the tasks they complete. Participants preferred that users be educated on AI before using it, as well as being informed when AI is picking up on certain emotional cues and responding in “human-like” ways. They also preferred proposals whereby AI would ask clarifying questions before generating output. Finally, when it comes to agents helping users with real-life relationships, this was seen as more permissible when the other person was informed. Across the proposals, gender was neither a significant nor consistent determinant of how they were rated. Ultimately, the Meta Community Forum offers a model for how informed, public communication can shape AI and the ethical challenges it raises.

*Research-in-Brief prepared by Adam Fefer.

 
Subtitle: CDDRL Research-in-Brief [4-minute read]

There is a significant gap between the technologies being developed, especially AI technologies, and the public's understanding of them. We must ask: what if the public were not just passive recipients of these technologies, but active participants in guiding their evolution?

A group of technology companies convened by Stanford University’s Deliberative Democracy Lab will gather public feedback about complex questions the AI industry is considering as it develops AI agents. The convening includes Cohere, Meta, Oracle, and PayPal, and is advised by the Collective Intelligence Project.

This Industry-Wide Forum brings together everyday people to weigh in on tech policy and product development decisions that involve difficult tradeoffs with no simple answers. Technology development is moving so quickly that there is no better time than now to engage the public and understand what an informed public would like AI technologies to do for them. This Forum is designed around Stanford's method of Deliberative Polling, a governance innovation that gives the public a greater say in decision-making. This Forum will take place in Fall 2025. Findings from this Forum will be made public, and Stanford’s Deliberative Democracy Lab will hold webinars for the public to learn and inquire about the findings.

"We're proud to be a founding participant in this initiative alongside Stanford and other AI leaders," said Saurabh Baji, CTO of Cohere. "This collaborative approach is central to enhancing trust in agentic AI and paving the way for strengthened cross-industry standards for this technology. We're looking forward to working together to shape the future of how agents serve enterprises and people."

In the near term, AI agents will be expected to conduct a myriad of transactions on behalf of users, creating considerable opportunities to deliver value as well as significant risks. This Forum will improve product-market fit by giving companies foresight into what users want from AI agents; it will help build trust and legitimacy with users; and it will strengthen cross-industry relations in support of industry standards development over time.

"We support The Forum for its deliberative and collaborative approach to shaping public discourse around AI agents," said Prakhar Mehrotra, SVP of AI at PayPal. "Responsibility and trust are core business principles for PayPal, and through collaborative efforts like these, we seek to encourage valuable perspectives that can help shape the future of agentic commerce."

The Forum will be conducted on the AI-assisted Stanford Online Deliberation Platform, a collaboration between Stanford’s Deliberative Democracy Lab and Crowdsourced Democracy Team, where a cross-section of the public will deliberate in small groups and share their perspectives, their lived experiences, and their expectations for AI products. This deliberation platform has hosted Meta’s Community Forums over the past few years. The Forum will also incorporate insights from CIP's Global Dialogues, conducted on the Remesh platform.

“Community Forums provide us with people’s considered feedback, which helps inform how we innovate,” said Rob Sherman, Meta’s Vice President, AI Policy & Deputy Chief Privacy Officer. “We look forward to the insights from this cross-industry partnership, which will provide a deeper understanding of people’s views on cutting-edge technology.”

This methodology is rooted in deliberation, which provides representative samples of the public with baseline education on a topic, including options with associated tradeoffs, and asks them to reflect on that education as well as their lived experience. Deliberative methods have been found to offer more considered feedback to decision-makers because people have to weigh the complexity of an issue rather than offering a knee-jerk reaction.

"This industry-wide deliberative forum represents a crucial step in democratizing the discourse around AI agents, ensuring that the public's voice is heard in a representative and thoughtful way as we collectively shape the future of this transformative technology," said James Fishkin, Director of Stanford's Deliberative Democracy Lab.

This Industry-Wide Forum represents a pivotal step in responsible AI development, bringing together technology companies and the public to address complex challenges in AI agent creation. By leveraging Stanford's Deliberative Polling methodology and making findings publicly available, the initiative promises to shape the future of AI with enhanced transparency, trust, and user-centric focus. Find out more about Stanford’s Deliberative Democracy Lab at deliberation.stanford.edu.

Media Contact: Alice Siu, Stanford Deliberative Democracy Lab

Read More

Meta and Stanford’s Deliberative Democracy Lab Release Results from Second Community Forum on Generative AI
Participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’

Navigating the Future of AI: Insights from the Second Meta Community Forum
A multinational Deliberative Poll unveils the global public's nuanced views on AI chatbots and their integration into society.

Results of First Global Deliberative Poll® Announced by Stanford’s Deliberative Democracy Lab
More than 6,300 deliberators from 32 countries and nine regions around the world participated in the Metaverse Community Forum on Bullying and Harassment.

In an era marked by rapid technological advancements, increasing political polarization, and democratic backsliding, reimagining democracy requires innovative approaches that foster meaningful public engagement. Over the last 30 years, Deliberative Polling has proven to be a successful method of public consultation to enhance civic participation and informed decision-making. In recent years, the implementation of online Deliberative Polling using the AI-assisted Stanford Online Deliberation Platform, a groundbreaking automated platform designed to scale simultaneous and synchronous deliberation efforts to millions, has put deliberative societies within reach. By examining two compelling case studies—Foreign Policy by Canadians and the Metaverse Community Forum—this paper highlights how technology can empower diverse voices, facilitate constructive dialogue, and cultivate a more vibrant democratic process. This paper demonstrates that leveraging technology in deliberation not only enhances public discourse but also paves the way for a more inclusive and participatory democracy.
 

About "Deliberative Approaches to Inclusive Governance: An Essay Series Part of the Democratic Legitimacy for AI Initiative"


Democracy has undergone profound changes over the past decade, shaped by rapid technological, social, and political transformations. Across the globe, citizens are demanding more meaningful and sustained engagement in governance—especially around emerging technologies like artificial intelligence (AI), which increasingly shape the contours of public life.

From world-leading experts in deliberative democracy, civic technology, and AI governance, we introduce a seven-part essay series exploring how deliberative democratic processes like citizens’ assemblies and civic tech can strengthen AI governance. The essays follow from a workshop on “Democratic Legitimacy for AI: Deliberative Approaches to Inclusive Governance” held in Vancouver in March 2025, in partnership with Simon Fraser University’s Morris J. Wosk Centre for Dialogue. The series and workshop were generously supported by funding from the Canadian Institute for Advanced Research (CIFAR), Mila, and Simon Fraser University’s Morris J. Wosk Centre for Dialogue.

Publication Type: Book Chapters
Subtitle: Part of "Deliberative Approaches to Inclusive Governance: An Essay Series Part of the Democratic Legitimacy for AI Initiative," produced by the Centre for Media, Technology and Democracy.
Authors: Alice Siu
Book Publisher: Centre for Media, Technology and Democracy

In October 2024, Meta, in collaboration with the Stanford Deliberative Democracy Lab, implemented the third Meta Community Forum. This Community Forum expanded on the October 2023 deliberations regarding Generative AI. For this Community Forum, participants deliberated on ‘how should AI agents provide proactive, personalized experiences for users?’ and ‘how should AI agents and users interact?’ Since the last Community Forum, the development of Generative AI has moved beyond AI chatbots, and users have begun to explore the use of AI agents — a type of AI that can respond to written or verbal prompts by performing actions for the user or on their behalf. Beyond text-generating AI, users have also begun to explore multimodal AI, where tools are able to generate images, video, and audio as well as text. The growing landscape of Generative AI raises more questions about users’ preferences when it comes to interacting with AI agents. This Community Forum focused its deliberations on how interactive and proactive AI agents should be when engaging with users. Participants considered a variety of tradeoffs regarding consent, transparency, and the human-like behaviors of AI agents. These deliberations shed light on what users are thinking now amidst the changing technology landscape of Generative AI.

Nearly 900 participants from five countries (India, Nigeria, Saudi Arabia, South Africa, and Turkey) took part in this deliberative event. The sample from each country was recruited independently, so this Community Forum should be seen as five independent deliberations. In addition, 1,033 people participated in the control group, which did not take part in any discussions and only completed the two surveys, pre and post. The main purpose of the control group is to demonstrate that any changes observed after deliberation are a result of the deliberative event.
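The report does not describe the exact estimator used with the control group; one simple way such a design supports the claim above, sketched here under that assumption, is a difference-in-differences comparison in which the control group's pre/post shift is subtracted from the deliberators' shift. All numbers in the example are hypothetical placeholders, not figures from the report.

```python
def deliberation_effect(treat_pre: float, treat_post: float,
                        control_pre: float, control_post: float) -> float:
    """Difference-in-differences estimate of a deliberation effect.

    Subtracts the control group's pre/post shift (change that would have
    happened without deliberating) from the deliberators' shift, leaving the
    change attributable to the deliberative event. Inputs are proportions or
    mean agreement scores; a real analysis would also report uncertainty.
    """
    return (treat_post - treat_pre) - (control_post - control_pre)


# Placeholder values: deliberators move from 0.55 to 0.70 agreement with a
# proposal, while the control group moves from 0.55 to 0.58.
effect = deliberation_effect(0.55, 0.70, 0.55, 0.58)
print(f"Estimated deliberation effect: {effect:+.2f}")  # prints +0.12
```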

Publication Type: Reports
Subtitle: April 2025
Authors: James S. Fishkin, Alice Siu