AI Auditing

Causally Estimating the Effect of YouTube's Recommender System Using Counterfactual Bots

In recent years, critics of online platforms have raised concerns about the ability of recommendation algorithms to amplify problematic content, with potentially radicalizing consequences. However, attempts to evaluate the effect of recommenders have suffered from a lack of appropriate counterfactuals -- what a user would have viewed in the absence of algorithmic recommendations -- and hence cannot disentangle the effects of the algorithm from a user's intentions. Here we propose a method that we call "counterfactual bots" to causally estimate the effect of algorithmic recommendations on the consumption of highly partisan content. By comparing bots that replicate real users' consumption patterns with "counterfactual" bots that follow rule-based trajectories, we show that, on average, relying exclusively on the recommender results in less partisan consumption, an effect that is most pronounced for heavy partisan consumers. Following a similar method, we also show that if partisan consumers switch to moderate content, YouTube's sidebar recommender "forgets" their partisan preference within roughly 30 videos regardless of their prior history, while homepage recommendations shift more gradually towards moderate content. Overall, our findings indicate that, at least on YouTube, individual consumption patterns mostly reflect individual preferences, with algorithmic recommendations playing, if anything, a moderating role.
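
To make the bot-pairing design concrete, here is a minimal toy simulation of the comparison described above. Everything in it is an illustrative assumption, not the paper's code or YouTube's actual system: the `recommend` function is a stand-in recommender whose suggestions are pulled partway toward the moderate center, and videos are reduced to a single partisanship score in $[-1, 1]$.

```python
import random

random.seed(0)

def recommend(current_score, n=10):
    """Hypothetical recommender: returns n suggested videos whose
    partisanship is shrunk toward 0 (moderate), plus noise."""
    return [max(-1.0, min(1.0, 0.5 * current_score + random.gauss(0, 0.3)))
            for _ in range(n)]

def run_bot(history, n_steps, policy):
    """Replay a real user's watch history, then diverge for n_steps:
    the 'user' bot keeps self-selecting similar content, while a
    counterfactual bot follows a fixed rule over the recommendations."""
    watched = list(history)
    for _ in range(n_steps):
        sidebar = recommend(watched[-1])
        if policy == "user":
            watched.append(history[-1])        # keeps choosing partisan videos
        elif policy == "up next":
            watched.append(sidebar[0])         # always click top sidebar video
        elif policy == "random sidebar":
            watched.append(random.choice(sidebar))
    return watched

history = [0.9] * 20  # a heavy consumer of highly partisan content
for z, policy in [(0, "user"), (1, "up next")]:
    trace = run_bot(history, 30, policy)
    y = sum(trace[len(history):]) / 30  # mean partisanship after divergence
    print(f"Z={z} ({policy}): mean partisanship Y = {y:+.2f}")
```

In this toy, the counterfactual bot's consumption drifts toward the moderate center within a few dozen videos while the user bot stays at its partisan baseline, mirroring the qualitative pattern reported above.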

Overview of the counterfactual bot method for disentangling the effect of YouTube’s recommender system from user preferences. By comparing the partisanship of videos watched ($Y$) by a control bot ($Z = 0$) with that of videos watched by a rule-based counterfactual bot ($Z = 1$), our design eliminates the “preference” or “choice” component of observed consumption, allowing us to estimate the causal effect of algorithmic recommendations as $E[Y(0)] - E[Y(1)] = \sum_x p(x)\,\big(E[Y(0) \mid x] - E[Y(1) \mid x]\big)$, where $Y(0)$ is the potential outcome when watching follows a real user’s own behavior, $Y(1)$ is the potential outcome when watching is governed by the recommender, and $x$ captures user heterogeneity. Here, the treatment $Z$ is either the “user” treatment, in which the bot continues to follow the real user’s historical trajectory, or one of three “counterfactual” treatments, in which the bot follows a rule instead: “up next” (clicking the top-ranked sidebar video), “random sidebar” (choosing a random video from the sidebar), or “random home” (choosing a random video from the homepage).
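
As a sketch of how the caption’s identity translates into a plug-in estimate, the snippet below stratifies bots by a heterogeneity label $x$ (e.g., light vs. heavy partisan consumers) and computes $\sum_x \hat{p}(x)\,(\bar{Y}_{0,x} - \bar{Y}_{1,x})$. The records and strata are made-up illustrations, not the paper’s data.

```python
from collections import defaultdict

# Each record: (x = user stratum, z = treatment arm, y = mean partisanship
# of the bot's watched videos). All numbers are illustrative only.
records = [
    ("light", 0, 0.20), ("light", 1, 0.15),
    ("heavy", 0, 0.80), ("heavy", 1, 0.45),
    ("heavy", 0, 0.75), ("heavy", 1, 0.50),
]

cell = defaultdict(lambda: [0.0, 0])  # (x, z) -> [sum of y, count]
n_x = defaultdict(int)                # x -> number of bots in stratum
for x, z, y in records:
    cell[(x, z)][0] += y
    cell[(x, z)][1] += 1
    n_x[x] += 1

tau = 0.0
for x, n in n_x.items():
    p_x = n / len(records)                    # estimated p(x)
    y0 = cell[(x, 0)][0] / cell[(x, 0)][1]    # mean Y under Z = 0
    y1 = cell[(x, 1)][0] / cell[(x, 1)][1]    # mean Y under Z = 1
    tau += p_x * (y0 - y1)                    # stratum-weighted contrast

print(f"Estimated effect of following recommendations: {tau:+.3f}")
```

With these illustrative numbers the contrast is positive, i.e., $Y(0) > Y(1)$: bots that follow the recommender consume less partisan content than bots replaying real users’ choices, with the heavy-consumer stratum contributing most of the effect.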

References

2024

  1. Causally estimating the effect of YouTube’s recommender system using counterfactual bots
    Homa Hosseinmardi, Amir Ghasemian, Miguel Rivera-Lanas, et al.
    Proceedings of the National Academy of Sciences, 2024