Optimizing Recommendations with Multi-Armed & Contextual Bandits for Personalized Next Best Actions
Author: WiDS Worldwide
Uploaded: 2025-01-22
Views: 673
Description:
In this WiDS Upskill Workshop, Keerthi Gopalakrishnan explores how Multi-Armed Bandit (MAB) and Contextual Bandit algorithms can optimize online recommendations for next-best-action scenarios. These techniques help balance the trade-off between exploration (trying new recommendations) and exploitation (leveraging successful actions) to drive better personalization and engagement.
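The recording itself isn't transcribed in this description, but the exploration/exploitation trade-off the talk centers on can be sketched in a few lines of epsilon-greedy Python. Everything below (three arms, their click-through rates, epsilon = 0.1) is an illustrative assumption, not material from the workshop:

    import random

    # Illustrative setup (assumed, not from the talk): three candidate
    # recommendations ("arms") with click-through rates hidden from the learner.
    TRUE_CTR = [0.05, 0.12, 0.08]
    EPSILON = 0.1  # fraction of rounds spent exploring at random

    counts = [0] * len(TRUE_CTR)     # how often each arm has been shown
    rewards = [0.0] * len(TRUE_CTR)  # total clicks observed per arm

    for _ in range(10_000):
        if random.random() < EPSILON:
            # Explore: try a uniformly random recommendation.
            arm = random.randrange(len(TRUE_CTR))
        else:
            # Exploit: show the arm with the best empirical click rate so far.
            arm = max(range(len(TRUE_CTR)),
                      key=lambda i: rewards[i] / counts[i] if counts[i] else 0.0)
        clicked = random.random() < TRUE_CTR[arm]  # simulated user feedback
        counts[arm] += 1
        rewards[arm] += 1.0 if clicked else 0.0

    print("plays per arm:", counts)  # arm 1 (highest true CTR) should dominate

With probability epsilon the learner tries a random arm (exploration); otherwise it repeats the empirically best arm so far (exploitation).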
Keerthi will break down key MAB concepts (sketched in code after this list), including:
Epsilon-greedy
Upper Confidence Bound (UCB) & Contextual UCB
Thompson Sampling
Real-world applications in recommendation systems
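For the UCB, Contextual UCB, and Thompson Sampling rules named above, here is a minimal, non-authoritative sketch assuming Bernoulli (click/no-click) rewards; the function names, the LinUCB class, and the alpha parameter are illustrative assumptions rather than code from the session:

    import math
    import random
    import numpy as np

    def ucb1_arm(counts, rewards, t):
        # UCB1: optimism in the face of uncertainty -- play the arm whose
        # empirical mean plus confidence bonus sqrt(2 ln t / n_i) is largest.
        for i, n in enumerate(counts):
            if n == 0:
                return i  # make sure every arm is tried once
        return max(range(len(counts)),
                   key=lambda i: rewards[i] / counts[i]
                                 + math.sqrt(2 * math.log(t) / counts[i]))

    def thompson_arm(successes, failures):
        # Thompson Sampling for Bernoulli rewards: sample a plausible CTR for
        # each arm from its Beta posterior and play the best sample.
        samples = [random.betavariate(1 + s, 1 + f)
                   for s, f in zip(successes, failures)]
        return max(range(len(samples)), key=samples.__getitem__)

    class LinUCB:
        # Contextual UCB (disjoint LinUCB): a ridge-regression model per arm;
        # score(x) = x . theta_hat + alpha * sqrt(x^T A^-1 x).
        def __init__(self, n_arms, dim, alpha=1.0):
            self.alpha = alpha
            self.A = [np.eye(dim) for _ in range(n_arms)]    # X^T X + I per arm
            self.b = [np.zeros(dim) for _ in range(n_arms)]  # X^T y per arm

        def select(self, x):
            scores = []
            for A, b in zip(self.A, self.b):
                A_inv = np.linalg.inv(A)
                theta = A_inv @ b
                scores.append(float(x @ theta
                                    + self.alpha * math.sqrt(float(x @ A_inv @ x))))
            return int(np.argmax(scores))

        def update(self, arm, x, reward):
            self.A[arm] += np.outer(x, x)
            self.b[arm] += reward * x

    # Usage sketch: model = LinUCB(n_arms=3, dim=5)
    # arm = model.select(user_context); model.update(arm, user_context, reward)

UCB1 adds an explicit confidence bonus to each arm's mean, Thompson Sampling randomizes over plausible means, and LinUCB extends the UCB idea to per-user context vectors, which is what makes the recommendation contextual.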
This session is ideal for:
Data Scientists and Machine Learning Engineers
Product Managers and Data Strategists
Researchers and Academics
Prior knowledge: A background in supervised learning and evaluation metrics is recommended. Familiarity with online learning or decision-making algorithms is helpful but not required.
Learn more about WiDS Upskill Workshops: https://www.widsworldwide.org/learn/u...