A research study found that collaborative filtering-based multimedia recommender systems exhibit systematic popularity bias, providing significantly worse recommendations to users with niche interests than to users who prefer mainstream content.
Researchers from Know-Center GmbH in Graz and Graz University of Technology conducted a comprehensive study of popularity bias in collaborative filtering-based multimedia recommender systems (MMRS) across four domains: music (Last.fm), movies (MovieLens), digital books (BookCrossing), and anime (MyAnimeList). The study applied four CF-based recommendation algorithms to datasets split into three user groups of 1,000 users each, based on the users' inclination to popular items: LowPop, MedPop, and HighPop. The algorithms tested included two k-nearest-neighbors variants (UserKNN and UserKNNAvg), a non-negative matrix factorization variant (NMF), and a CoClustering algorithm.

The researchers found that the probability of a multimedia item being recommended strongly correlates with the item's popularity, and that users with less inclination to popular items (LowPop) receive statistically significantly worse multimedia recommendations than users with medium or high inclination to popular items. The study also revealed that although users with little interest in popular items tend to have the largest user profiles, they receive the lowest recommendation accuracy. The evaluation measured mean absolute error (MAE) under a five-fold cross-validation protocol. The research highlights how collaborative filtering systems systematically favor mainstream content over niche interests, creating a feedback loop in which popular items become even more popular while long-tail items remain hidden.
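The user-grouping step described above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual code: it assumes a user's "inclination to popularity" is the mean popularity (interaction count) of the items in their profile, and all names (`inclination`, `low_pop`, etc.) and the group size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 300, 50

# Synthetic implicit-feedback matrix (1 = user interacted with item).
interactions = (rng.random((n_users, n_items)) < 0.1).astype(int)

# Item popularity = number of users who interacted with the item.
item_pop = interactions.sum(axis=0)

# User inclination = mean popularity of the items in the user's profile.
profile_sizes = interactions.sum(axis=1)
profile_sizes[profile_sizes == 0] = 1          # avoid division by zero
inclination = (interactions @ item_pop) / profile_sizes

# Rank users by inclination and split into three equal groups
# (the study used 1,000 users per group; 100 here for the toy data).
order = np.argsort(inclination)
group_size = n_users // 3
low_pop = order[:group_size]
med_pop = order[group_size:2 * group_size]
high_pop = order[2 * group_size:3 * group_size]
```

Per-group recommendation quality could then be compared by computing MAE separately for `low_pop`, `med_pop`, and `high_pop` within each fold of the cross-validation, which is the comparison underlying the study's finding that LowPop users fare worst.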
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
The accuracy and effectiveness of AI decisions and actions depend on group membership: design choices in the AI system and biased training data lead to unequal outcomes, reduced benefits, increased effort, and alienation for affected users.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed