Global Trust in AI Is Declining, and More Regulation Won’t Fix It
A new study from the AI Collective Institute challenges the widely held belief that AI regulation meaningfully boosts public trust in AI-enabled technologies.
WASHINGTON, D.C., UNITED STATES, October 1, 2025 /EINPresswire.com/ -- A pioneering new study from the AI Collective Institute challenges a widely held assumption in global AI policymaking: that national AI regulation meaningfully boosts public trust in AI-enabled technologies.
Drawing on data from 47 countries, the Institute's first policy research report, "Can We Regulate Trust? A Global Analysis of the Correlation Between National AI Regulation and Public Trust," finds no significant correlation between the presence of national AI regulation and high levels of public trust in AI systems. Instead, countries where citizens report higher rates of daily intentional use of AI show greater levels of trust, suggesting that familiarity and experience with AI outweigh regulation in building public confidence.
“Regulation remains essential for accountability and safety, but it doesn’t automatically build trust,” said Liel Zino, the AI Collective Institute’s Policy Director. “If governments want people to trust AI, they must prioritize exposure, literacy, and inclusive participation alongside regulation.”
The study combined the AI Collective Institute’s independent data analysis of each country’s regulatory landscape with public sentiment data from the KPMG 2025 Global Trust in AI Survey.
Countries where citizens reported higher rates of daily AI usage showed positive and significant correlations with higher levels of public trust in AI systems. By contrast, although more than half (57%) of the countries included in the study have national AI frameworks, the data did not indicate a statistically significant link between regulation and trust in those countries.
Key Findings:
1. Regulation ≠ Trust: Countries with AI-specific regulations do not necessarily have higher levels of public trust in AI compared to countries without AI-specific regulations.
2. Experience Matters: Public trust trends higher in countries where people use AI regularly in daily life compared to countries where regular usage is less frequent.
3. Trust in Decline: Despite a flurry of AI regulatory policies enacted across the globe in recent years, trust in AI fell from 61% in 2019 to 46% in 2025.
Policy Recommendations:
Regulation is still vital, but not sufficient to improve public trust. To bridge the trust gap, the AI Collective Institute recommends that local and national governments:
1. Invest in AI Literacy: Embed AI literacy in school curricula and launch nationwide education and awareness programs that empower citizens by enhancing their understanding of AI systems and of their rights and protections when using AI.
2. Lead by Example: When governments deploy AI responsibly and fairly in public sector services, they provide tangible proof of AI’s trustworthiness and collective benefits to society.
3. Foster Collaboration: Public-private partnerships between government, industry, and academia advance responsible AI development that aligns innovative solutions with public values.
4. Advance Regulation Responsibly: Strong legal frameworks for accountability, public safety, and ethical oversight create the necessary conditions for earning trust through transparent and accountable AI adoption.
Without deliberate action, governments risk losing public buy-in for one of the most transformative technologies of the century. Building trust in AI requires a multifaceted strategy, combining safeguards with education, exposure, and meaningful public engagement.
“Government leaders and policymakers need to prioritize proactively increasing public trust,” said Ms. Zino. “The countries that fail to do so will be left behind in the AI-driven world of tomorrow.”
About the AI Collective Institute
The AI Collective Institute is the policy and research arm of the AI Collective, the world’s largest open community devoted to ensuring artificial intelligence serves the public good. The Collective connects more than 70,000 technologists, researchers, policymakers, and creatives through over 60 city-based chapters, championing AI that is open, responsible, and aligned with human values.
Founded in 2025, the Institute channels this global expertise into action. It convenes working groups, publishes studies, and issues governance proposals that bring builder and user perspectives into policymaking. By translating community insights into practical recommendations, the AI Collective Institute strengthens public dialogue and advocates for inclusive and accountable AI policies.
Elizabeth Farrell
The AI Collective & AI Collective Institute
elizabeth@aicollective.com
