
A Multi-Armed Bandit Framework for Recommendations at Netflix | Netflix

Author: AI Council

Uploaded: 2018-05-30

Views: 38334

Description: Get the slides: https://www.datacouncil.ai/talks/a-mu...

ABOUT THE TALK:

In this talk, we will present a general multi-armed bandit framework for recommending titles to our 117M+ members on the Netflix homepage. A key aspect of our framework is closed-loop attribution, which links each recommendation to how our members respond to it. The framework performs frequent policy updates using member feedback collected over a past time window.
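The update-and-recommend loop described above can be sketched as follows. This is a minimal illustration, not Netflix's implementation: it assumes per-title play/no-play feedback, maintains simple Beta-Bernoulli statistics per title, and uses Thompson sampling as one plausible bandit policy. All names (`TitleStats`, the feedback tuples) are hypothetical.

```python
import random
from collections import defaultdict

class TitleStats:
    """Running attribution counts for one title (illustrative only)."""
    def __init__(self):
        self.plays = 0        # attributed plays for this title
        self.impressions = 0  # times this title was recommended

def update_policy(stats, feedback_window):
    """Fold one window of (title, played) feedback into the running stats."""
    for title, played in feedback_window:
        stats[title].impressions += 1
        stats[title].plays += int(played)

def recommend(stats, titles):
    """Thompson sampling: draw from each title's Beta posterior, pick the max."""
    def sample(title):
        s = stats[title]
        # Beta(1 + successes, 1 + failures) posterior with a uniform prior
        return random.betavariate(1 + s.plays, 1 + s.impressions - s.plays)
    return max(titles, key=sample)

stats = defaultdict(TitleStats)
# One past-window batch of closed-loop feedback, then a fresh recommendation.
update_policy(stats, [("title_a", True), ("title_a", False), ("title_b", True)])
choice = recommend(stats, ["title_a", "title_b"])
```

Batching the updates per time window, as the talk describes, lets the policy refresh frequently without retraining from scratch on every individual event.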

We will take a deeper look at the system architecture. We will illustrate the use of the framework by focusing on two example policies: a greedy exploit policy, which maximizes the probability that a member will play a title, and an incrementality-based policy. The latter is a novel online learning approach that takes the causal effect of a recommendation into account. An incrementality-based policy recommends the titles that bring about the maximum increase in a specific quantity of interest, such as engagement. This discounts the effect of a recommendation when the member would have played the title anyway. We describe offline experiments and online A/B test results for both of these example policies.

ABOUT THE SPEAKERS:

Jaya Kawale is a Senior Research Scientist at Netflix working on problems related to targeting and recommendations. She received her PhD in Computer Science from the University of Minnesota and has published research papers at several top-tier conferences. Her main areas of interest are large scale machine learning and data mining.

Elliot is a software engineer at Netflix on the Personalization Infrastructure team. Currently, he builds big data systems for personalizing recommendations for Netflix subscribers, using a variety of technologies including Scala, Spark/Spark Streaming, Kafka, and Cassandra. He graduated from UC Berkeley (B.S.) and Stanford (M.S.) and has previously worked at eBay and Apple.

ABOUT DATA COUNCIL:
Data Council (https://www.datacouncil.ai/) is a community and conference series that provides data professionals with the learning and networking opportunities they need to grow their careers. Make sure to subscribe to our channel for more videos, including DC_THURS, our series of live online interviews with leading data professionals from top open source projects and startups.

FOLLOW DATA COUNCIL:
Twitter: / datacouncilai
LinkedIn: / datacouncil-ai


Related videos

Trends in Recommendation & Personalization at Netflix
Multi-Armed Bandits: A Cartoon Introduction - DCBA #1
How Netflix Uses Data, Surveys, and A/B Testing to Perfect Its Recommendation Algorithm
Multi-Armed Bandit: Data Science Concepts
Artwork Personalization at Netflix | Netflix
What the Heck is an In Memory Data Grid | Pivotal
A/B Testing vs. Multi-Armed Bandits: What You Need To Know (Outperform Podcast)
Tony Jebara, Netflix - Machine Learning for Recommendation and Personalization
RecSys 2020 Tutorial: Introduction to Bandits in Recommender Systems
Thompson Sampling, One-Armed Bandits, and the Beta Distribution
Contextual Bandits : Data Science Concepts
Multi-Armed Bandits: Reinforcement Learning Explained!
The Math Behind Recommender Systems
07 06 Project 2 Multi Armed Bandits Algorithm
Why Do Neural Networks Keep Making Things Up? (And Why It Can't Be Fixed)
CS885 Lecture 8a: Multi-armed bandits
Recommender System and It's Design | Machine Learning | Community Webinar
Delivering High Quality Analytics at Netflix
Critical LLM Knowledge in an HOUR! Everyone Should Know This.
Reinforcement Learning #1: Multi-Armed Bandits, Explore vs Exploit, Epsilon-Greedy, UCB

© 2025 ycliper. All rights reserved.