A new Stanford-led study published in Science shows that a web-based tool can measurably reduce partisan animosity on X feeds without platform cooperation.
By reordering content to downrank anti-democratic posts, the study demonstrated a path toward lowering political polarisation across party lines and giving users more control over the algorithms that shape their feeds.
A groundbreaking study from Stanford University has unveiled a web-based AI tool capable of significantly cooling partisan rhetoric on social media platforms such as X, all without the platform’s direct involvement.
The multidisciplinary research, published in the journal Science, not only offers a concrete way to reduce political polarisation but also paves the way for users to gain more control over the proprietary algorithms that shape their online experience.
Reducing the temperature of the feed
The researchers sought to counter the toxic cycle in which social media algorithms amplify emotionally charged, divisive content to maximise user engagement. Their tool runs as a browser extension and uses a large language model (LLM) to scan a user’s X feed for posts expressing anti-democratic attitudes or extreme partisan animosity.
This harmful content includes posts advocating violence or extreme measures against the opposing party. Instead of removing such content, the tool simply reorders the feed, pushing incendiary posts lower down the timeline.
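To make the mechanism concrete, here is a minimal Python sketch of this kind of downranking. It swaps the study’s LLM classifier for a toy keyword check, and the post structure, function names, and flag terms are illustrative assumptions on our part, not the team’s released code.

```python
# A minimal sketch of feed reranking that demotes, rather than removes,
# flagged posts. The classifier below is a toy keyword stand-in for the
# LLM used in the study; all names and logic here are illustrative.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

# Toy stand-in for an LLM call that labels a post as expressing
# anti-democratic attitudes or extreme partisan animosity.
FLAG_TERMS = ("violence", "destroy the other side")

def is_flagged(post: Post) -> bool:
    lowered = post.text.lower()
    return any(term in lowered for term in FLAG_TERMS)

def rerank(feed: list[Post]) -> list[Post]:
    """Keep every post, but move flagged posts below the rest while
    preserving the original relative order within each group."""
    clean = [p for p in feed if not is_flagged(p)]
    flagged = [p for p in feed if is_flagged(p)]
    return clean + flagged  # nothing is removed, only demoted

feed = [
    Post("1", "Great turnout at the polls today!"),
    Post("2", "The other side deserves violence."),
    Post("3", "New polling numbers just dropped."),
]
print([p.post_id for p in rerank(feed)])  # ['1', '3', '2']
```

The key design point, mirrored in the study’s approach, is that flagged posts are demoted rather than deleted, so the full feed remains accessible to the user.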
In a 10-day experiment conducted during the 2024 U.S. election with roughly 1,200 participants, those whose feeds had this toxic content downranked showed a measurable improvement in their attitudes toward the opposing political party. The effect held across the political spectrum, appearing among both liberal and conservative users.
“Social media algorithms directly impact our lives, but until now, only the platforms had the ability to understand and shape them,” said Michael Bernstein, a professor of computer science and the study’s senior author. “We have demonstrated an approach that lets researchers and end users have that power.”
Small change, major impact on polarisation
The findings demonstrate that a subtle algorithmic change can have a substantial effect on political attitudes. On average, participants with reduced exposure to toxic content reported attitudes toward the opposing party that were 2 points warmer on a 100-point scale.
Researchers noted this shift is equivalent to the estimated change in political attitudes that typically occurs in the general U.S. population over a period of three years. Downranking the content also reduced participants’ reported feelings of anger and sadness, highlighting the emotional toll of highly polarised feeds.
According to author Tiziano Piccardi, now an assistant professor at Johns Hopkins University, the findings clearly illustrate the causal link. “When the participants were exposed to less of this content, they felt warmer toward the people of the opposing party,” he stated.
This success is particularly notable because previous attempts to mitigate polarisation, such as displaying posts chronologically, have shown mixed or negligible results.
By creating an AI tool that works independently of the platform and making its code available, the team has opened the door for researchers and developers to build nuanced, effective interventions that could promote healthier democratic discourse and greater social trust.