Exploration in Recommender Systems
Author: ACM RecSys
Uploaded: 2022-01-30
Views: 1116
Description:
RecSys 2021 Exploration in Recommender Systems
Authors: Minmin Chen, Google
Abstract: In the era of increasing choices, recommender systems are becoming indispensable in helping users navigate the millions or billions of pieces of content available on recommendation platforms. As the focus of these systems shifts from attracting short-term user attention toward optimizing long-term user experience on these platforms, reinforcement learning (and bandits) have emerged as appealing techniques. The exploration-exploitation tradeoff, the foundation of bandits and RL research, has been extensively studied. An agent is incentivized to exploit in order to maximize its return, i.e., by repeating actions it has taken in the past that produced higher rewards. On the other hand, the agent needs to explore previously unseen actions in order to discover potentially better ones. Exploration has been shown to be extremely useful in solving tasks with long horizons or sparse rewards. The value of exploration in recommender systems, on the other hand, is less well understood.
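The exploration-exploitation tradeoff described above can be illustrated with a minimal epsilon-greedy bandit sketch (a standard technique, not the specific method presented in the talk; the arm reward probabilities below are hypothetical):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon explore a random arm; otherwise exploit the best-known arm."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))  # explore: pick an arm uniformly at random
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit: highest estimated reward

# Hypothetical simulation: three "items" with unknown click probabilities.
true_probs = [0.2, 0.5, 0.8]
counts = [0] * len(true_probs)   # times each arm was played
q = [0.0] * len(true_probs)      # running reward estimates
random.seed(0)

for _ in range(2000):
    arm = epsilon_greedy(q)
    reward = 1.0 if random.random() < true_probs[arm] else 0.0
    counts[arm] += 1
    q[arm] += (reward - q[arm]) / counts[arm]  # incremental mean update
```

Without the exploration term, the agent would lock onto whichever arm first returned a reward; the occasional random pulls let it discover that the third arm pays off most often.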
In this talk, we examine the role of exploration in recommender systems in three facets: 1) system exploration to reduce model uncertainty in regions with sparse user feedback; 2) user exploration to introduce users to new interests/tastes; and 3) online exploration to take into account real-time user feedback. We showcase how each aspect of exploration contributes to the long-term user experience through offline and live experiments on industrial recommendation platforms. We hope this talk can inspire more follow-up work in understanding and improving exploration in recommender systems.
DOI: https://doi.org/10.1145/3460231.3474601