New research shows how people get trapped in cocoons of troublingly similar online content – and how a thumbs-down can help break the cycle.
Recommendation algorithms are making information platforms increasingly tailored to individual users. However, this precision has given rise to the phenomenon of ‘information cocoons’, in which users are cut off from diverse information and eventually become trapped in a single topic or viewpoint.
Through large-scale empirical research and information-dynamics modelling grounded in statistical physics, a Tsinghua team has provided key insights into how human-AI interactions can lead to the emergence of information cocoons in online media.
“Information cocoons not only deprive humans of the diversity of information available for informed decision-making and innovation, but also exacerbate social polarization and reinforce biases,” says Yong Li, an information science and technology researcher at Tsinghua University, who led the new study, which was published in Nature Machine Intelligence.
Even worse, most online users have no idea when they get trapped in such a cocoon and are therefore being exposed to only a fraction of available information, he says.
Deep information cocoons
Helping people to realise they are stuck in information cocoons, and offering ways to help them escape, requires a deeper understanding of how these states form in the first place. The new study addresses that need, investigating information cocoons in the real world using two large datasets comprising more than 500 million records: one drawn from video content and the other from Microsoft News.
The analysis suggests the information cocoon state is different from what people call an ‘echo chamber’, a situation in which like-minded peers gather on social media. Information cocoons, in contrast, form through a multi-layered feedback loop between people and AI-driven recommendation algorithms.
Based on how information cocoons seem to form in the real world, the researchers developed a mechanistic model that recreates the steps involved. The model predicts critical transitions between three states: diversification, partial information cocoons, and deep information cocoons.
Professor Yong Li is an information science and technology researcher at Tsinghua University.
The mechanisms behind information cocoons
Two mechanisms drive the system away from diversification and towards an information cocoon. The first is similarity-based matching: the algorithm tracks what people view and offers them more of the same. The second is positive feedback: each click or like reinforces the match, amplifying the narrowing effect and deepening the cocoon that restricts information diversity.
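This runaway loop can be sketched as a toy simulation. To be clear, this is an illustration only: the topic names, the squared-reinforcement rule, and the step count are assumptions for demonstration, not the study's actual model. Diversity of exposure is measured with Shannon entropy, which falls towards zero as one topic takes over the feed.

```python
import math
import random

random.seed(42)

TOPICS = ["celebrities", "sports", "animals", "food", "tech"]

def shannon_entropy(counts):
    """Diversity of exposure in bits: 0 means one topic dominates entirely."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c > 0)

# Start with one view of each topic: a fully diverse feed.
views = {t: 1 for t in TOPICS}

# Similarity-based matching: recommend topics in proportion to past views.
# Squaring the weights models positive feedback, where every click makes
# the same topic disproportionately more likely to be shown again.
for _ in range(2000):
    topic = random.choices(TOPICS, weights=[views[t] ** 2 for t in TOPICS])[0]
    views[topic] += 1  # the user watches; the algorithm reinforces

print(f"entropy before: {shannon_entropy({t: 1 for t in TOPICS}):.2f}")
print(f"entropy after : {shannon_entropy(views):.2f}")
```

Under this assumed reinforcement rule, the feed collapses onto a single topic within a few dozen interactions, mirroring the narrowing the study describes.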
So, how can someone tell if they have become trapped in an information cocoon? The most observable phenomenon is the increasing homogeneity of the information accessible to them, Li explains.
“For example, initially, users have access to a diverse range of topics such as celebrities, sports, animals, and more. However, as the interactions between users and the recommendation algorithm increase, their access is gradually limited to only a few of these topics.”
To escape from a cocoon, online users must reverse the steps, by giving the algorithm negative feedback on its choices, such as a thumbs-down on content they would normally like, and by randomly exploring other options. Even then, individuals can struggle to break out on their own, says Li. “Government-level regulations are urgently required,” he says.
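The escape route Li describes, negative feedback combined with random exploration, can also be sketched in the same toy style. In this illustrative model (not the study's dynamics), a thumbs-down is assumed to decrement the dominant topic's weight, and a fraction of recommendations ignore the learned profile entirely; all numbers are made up for demonstration.

```python
import math
import random

random.seed(7)

TOPICS = ["celebrities", "sports", "animals", "food", "tech"]

def shannon_entropy(weights):
    """Diversity of the profile in bits; higher means a more varied feed."""
    total = sum(weights.values())
    return -sum((w / total) * math.log2(w / total)
                for w in weights.values() if w > 0)

# Start deep inside a cocoon: one topic dominates the learned profile.
profile = {"celebrities": 100.0, "sports": 1.0, "animals": 1.0,
           "food": 1.0, "tech": 1.0}

EXPLORE = 0.2  # illustrative: fraction of picks that ignore the profile

for _ in range(3000):
    if random.random() < EXPLORE:
        topic = random.choice(TOPICS)  # random exploration
    else:
        topic = random.choices(TOPICS,
                               weights=[profile[t] for t in TOPICS])[0]
    if topic == "celebrities":
        # Thumbs-down on the cocoon topic: weaken its weight.
        profile[topic] = max(1.0, profile[topic] - 1.0)
    else:
        # Engaging with other topics rebuilds their weight.
        profile[topic] += 1.0

print(f"profile entropy after escape attempt: {shannon_entropy(profile):.2f}")
```

In this sketch, the consistent negative feedback drains the dominant topic's weight while exploration re-seeds the others, and the profile's entropy climbs back towards that of a diverse feed.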
Piao, J., Liu, J., Zhang, F., Su, J. & Li, Y. Human–AI adaptive dynamics drives the emergence of information cocoons. Nature Machine Intelligence 5(11), 1–11 (2023). doi: 10.1038/s42256-023-00731-4
Editor: Guo Lili