
The original version of this article was published in Italian by the same author on 7 July 2025.
Media, publishing, journalism, artificial intelligence, and users. Behind these words lie connections that are reshaping not only the web, but our very perception of the world.
At The Big Interview, a series of panels organised by Wired Italia and held on 26 June at Bocconi University in Milan, Mediatrends – invited as a guest – explored this complex terrain with Mariarosaria Taddeo, philosopher of digital ethics and Professor of Digital Ethics and Defence Technologies at the Oxford Internet Institute, University of Oxford.
At the Wired event, Taddeo spoke on a panel titled Regulating AI in Armed Conflicts, which began with a simple but urgent question: what do we deem unacceptable?

Mariarosaria Taddeo speaking at The Big Interview, an event organised by Wired Italia at Bocconi University on 26 June 2025. Photo: Franco Russo.
Between AI and cybersecurity
After earning a Master’s degree and PhD in philosophy, Mariarosaria Taddeo worked as a researcher in cybersecurity and ethics at the University of Warwick and held a Marie Curie Fellowship at the University of Hertfordshire before joining the Oxford Internet Institute.
Her recent work focuses on the ethics and governance of digital technologies, from designing governance frameworks to harness AI, to exploring the ethical implications of defence technologies – including cybersecurity and the governance of cyber conflicts.
She has published more than 150 papers on topics ranging from trustworthy digital technologies to the ethical governance of AI for national defence and cybersecurity ethics. Her work has appeared in leading journals including Nature, Nature Machine Intelligence, Science, and Science Robotics.
Taddeo has led, and currently leads or co-leads, several projects in the field of digital ethics.
She is leading an ongoing initiative on Ethical Principles for the Use of AI in National Security and Defence, funded by the United Kingdom’s Defence Science and Technology Laboratory.
She also led a project funded by the NATO Cooperative Cyber Defence Centre of Excellence to define ethical guidelines for the regulation of cyber conflicts.
In 2020, ORBIT – the Observatory for Responsible Research and Innovation in ICT, linked to the University of Oxford – included her among the 100 most influential women in AI ethics worldwide.
That same year, she was named one of 2020’s twelve exceptional emerging talents by the Women’s Forum for Economy and Society. In 2023, she was awarded the title of Grand Officer of the Order of Merit of the Italian Republic.
She received the World Technology Award for Ethics in 2016 and the Simon Award for Outstanding Research in Computing and Philosophy in 2010.
Taddeo was also part of the expert task force on AI and cybersecurity at the Centre for European Policy Studies — a major Brussels-based think tank whose publications shape EU cybersecurity policy.

Photo: Pexels.
Tailored information and filter bubbles
Generative AI is fast becoming one of the main gateways to information. As a result, digital news outlets must now adapt to a landscape where content is increasingly filtered through tools like ChatGPT and Gemini.
According to Taddeo, “there are several aspects to consider. One is how people consume information, which is increasingly personalised – tailored around individual priorities and opinions. I believe AI will only reinforce this trend.”
AI-powered virtual assistants have taken personalised news consumption a step further.
“We’ll use AI as a medium to access the information we’re looking for – but it will learn to know us, anticipate our needs, and deliver what it thinks we want. This is problematic, because exposure to content outside our expectations – not necessarily in terms of quality, but of subject – is essential to developing critical thinking and maintaining pluralism in democratic societies.”
Taddeo notes that newsrooms, too, will inevitably undergo major shifts.
“We’re already seeing some of these changes – for instance, AI being used to write articles or conduct research. I believe the more journalists are trained on these issues, the fewer problems we’ll face down the road.”
In this evolving landscape, the United States offers some telling examples.
“Rather than being devoured by AI for data extraction, American outlets are trying to reach agreements. This is something we need to monitor closely, because protecting copyright is key – and it must be addressed before individual outlets start going it alone.”
Taddeo argues that keeping a close eye on deals between media organisations and major AI companies is now essential.

Image: Wikimedia Commons.
AI and content indexing
Among the key areas to watch, Taddeo highlights AI’s growing role as an “information mediator.”
“AI is already embedded in search engines, but it will play an increasingly central role in mediating users’ access to information,” she says.
“From this perspective, it will be crucial to understand how AI will index content in the future. The more seamlessly technology integrates into our daily lives, the more invisible it becomes. And this quiet presence – making decisions on our behalf – can be dangerous, as it gradually erodes our ability to choose for ourselves.”
So, will users find it increasingly difficult to act autonomously? “I believe this is the most critical issue at the heart of the entire debate,” Taddeo says.
This scenario also overlaps with the logic of clickbait – the business model that underpinned many early online newsrooms.
But AI is reshaping that logic too – and here again, the outlook is far from optimistic.
“I’ve always believed that clickbait stems from two factors: the technology itself and the way we consume news,” Taddeo says.
“AI will have a major influence in this context as well, and it will be all the more effective the less informed the average user is.”
There’s another side to this coin, one with deep roots, as Taddeo notes: “Clickbait has always worked in much the same way.”

Photo: Unsplash.
An ever-growing bubble
Given these premises, the most immediate consequence for users will be “a faster, more streamlined way of consuming news. Those who know how to ask the right questions of AI will be exposed to content that aligns more closely with what they expect,” Taddeo says.
“This too comes down to education: those who ask the wrong questions will get poorer answers in return.”
Lack of awareness always carries consequences — and perhaps the only way to combat it is by studying it.
“Today, anyone engaging with AI without proper awareness risks losing the very sense of what information is,” she says.
“To give a practical example: if I read an article by a journalist I know, I might disagree with their perspective — but if I believe the content was written by AI, I’m more likely to accept it at face value. That’s what we call tech bias: the tendency to trust what we read simply because it comes from a machine. It’s not a new issue, but how far this trend will go is something we need to examine closely.”
In this vast sea of technology and data, how can we still ensure pluralistic access to information?
As a fundamental principle, Taddeo says, “pluralism must be safeguarded, and we must take a proactive role in preserving it.”
Because “information isn’t being censored; we’re simply no longer exposed to its full diversity. We’re immersed in what are called filter bubbles and end up narrowing the range of content we engage with. Once pluralism is lost, it will be incredibly hard to recover.”