
The original version of this article was published in Italian by the same author on 26 May 2025.
Stanford University’s 2025 Artificial Intelligence Index Report, produced by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), presents a clear picture: artificial intelligence is no longer a niche technology but a mass phenomenon, one that is radically transforming how people access information, communicate, and form opinions.
With ChatGPT alone counting over 100 million monthly active users and AI tools now embedded across every sector of society, a quiet yet far-reaching revolution is underway.
AI for all
The year 2024 marked a turning point in the accessibility of artificial intelligence.
According to data collected by Stanford, the use of chatbots and virtual assistants increased by 250% compared to the previous year, with particularly strong growth among young adults (18-34 years) and, more surprisingly, among those over 55.
This widespread adoption suggests that AI is no longer perceived as a complex technology reserved for experts, but as an everyday tool accessible to everyone.
The removal of technical barriers has had profound consequences on the formation of public opinion.
For the first time in history, millions of people have direct access to tools capable of generating content, analysing information, and providing real-time answers on any topic.
This represents a fundamental shift in how people access knowledge and process information.
A new mediator of information
Traditionally, public opinion was shaped through institutional mediators: newspapers, television, experts, and opinion leaders.
Artificial intelligence is introducing a new type of mediation, characterised by extreme personalisation and immediate interactivity.
Users no longer passively consume pre-packaged content, but actively engage in dialogue with systems that adapt to their specific questions and level of understanding.
This change has profound implications.
On one hand, AI can democratise access to complex information, making technical or specialised topics understandable to a broader audience.
On the other hand, it raises crucial questions about the quality and reliability of the information provided.
The AI Index 2025 highlights that 68% of AI system users consider the responses received to be “generally accurate,” but only 34% systematically verify the information through alternative sources.

Personalised isolation
One of the most significant aspects emerging from Stanford’s report is the intensification of the echo chamber effect through AI.
Artificial intelligence systems, designed to maximise user engagement and satisfaction, naturally tend to provide responses that confirm existing expectations and biases.
This mechanism, amplified by the ease of use and the interactive nature of conversation, risks creating information bubbles even more impermeable than those generated by social media.
The impact on the polarisation of public opinion is already visible.
The report documents how frequent use of AI chatbots is linked to greater certainty in one’s political opinions and a reduced willingness to consider alternative viewpoints.
This phenomenon is particularly pronounced on controversial topics such as climate change, health policies, and immigration.
The boundaries of public debate
Artificial intelligence is also changing the very terms of public debate.
The ability to quickly generate structured arguments, statistics, and counterarguments is altering the pace and quality of online discussions.
While this can elevate the level of discourse by providing everyone with tools to participate in complex debates, it also risks homogenising forms of argumentation and reducing the diversity of perspectives.
Particularly concerning is the rise of what Stanford researchers call “artificial astroturfing”: the use of AI to generate content that mimics grassroots public opinion while in fact being carefully orchestrated.
The report estimates that up to 15% of social media content expressing political opinions could be generated or significantly assisted by AI, often without users being aware.
Use without trust
The AI Index 2025 reveals a paradox in the public’s relationship with artificial intelligence: while dependence on these tools for daily information grows, widespread distrust persists regarding their capabilities and intentions.
According to the report, 72% of respondents use AI-based services regularly, yet only 45% trust the information these tools provide more than that from traditional sources.
This ambivalence reflects a broader crisis of epistemic authority in contemporary society.
AI enters an already fragmented media landscape, not replacing traditional media entirely, but often coexisting with them in contentious ways.
The result is a multiplication of available “truths,” each seemingly backed by compelling data and arguments.
Democratic asymmetries
The transformations documented in the AI Index 2025 have far-reaching implications for the functioning of modern democracies.
A public opinion increasingly mediated by AI raises fundamental questions about the quality of public discourse and citizens’ ability to make informed decisions.
But it’s not all negative.
AI has also demonstrated considerable potential for civic education and democratic inclusion.
Real-time translation tools are breaking down language barriers, while language simplification systems are making complex political and legal documents more accessible.
In some contexts, AI is enabling traditionally marginalised groups to participate in politics by equipping them with tools to articulate and share their perspectives.
Structures of responsibility
The Stanford report highlights the urgent need for new governance frameworks to manage AI’s impact on public opinion.
This is not merely about regulating the technology itself, but about rethinking the entire information ecosystem of democratic societies.
Among the recommendations emerging from academia and institutions, special attention is given to developing transparency standards for AI systems used in public communication, promoting digital literacy focused specifically on AI, and creating fact-checking mechanisms tailored to new forms of AI-generated content.

Pivotal years
The impact of AI on public opinion is neither inherently positive nor negative – it will depend on the decisions made in the crucial years ahead.
The challenge is not to halt or limit technological innovation, but to steer it toward forms that strengthen rather than erode democratic discourse.
This will require coordinated efforts from technologists, lawmakers, educators, and citizens to build a future in which artificial intelligence amplifies collective wisdom rather than fragmenting it.
The time to do this work is now.
Stanford HAI’s AI Index 2025 is not just a report on technology – it’s a call to action for everyone who cares about the future of democracy in the age of artificial intelligence.