
Reproducibility of LLM-based Recommender Systems: the Case Study of P5 Paradigm


Author: ACM RecSys

Uploaded: 2025-02-17

Views: 129

Description: by Pasquale Lops (University of Bari Aldo Moro), Antonio Silletti (University of Bari Aldo Moro), Marco Polignano (University of Bari Aldo Moro), Cataldo Musto (University of Bari Aldo Moro) and Giovanni Semeraro (University of Bari Aldo Moro)

Abstract:
Recommender systems can significantly benefit from the availability of pre-trained large language models (LLMs), which can serve as a basic mechanism for generating recommendations based on detailed user and item data, such as text descriptions, user reviews, and metadata. On the one hand, this new generation of LLM-based recommender systems paves the way for dealing with traditional limitations, such as cold-start and data sparsity. On the other hand, it poses fundamental challenges for their accountability. Reproducing experiments in the new context of LLM-based recommender systems is challenging for several reasons. New approaches are published at an unprecedented pace, which makes it difficult to form a clear picture of the main protocols and good practices in experimental evaluation. Moreover, the lack of proper frameworks for LLM-based recommendation development and evaluation makes the process of benchmarking models complex and uncertain.

In this work, we discuss the main issues encountered when trying to reproduce P5 (Pretrain, Personalized Prompt, and Prediction Paradigm), one of the first works unifying different recommendation tasks in a shared language modeling and natural language generation framework. Starting from this study, we have developed LaikaLLM, a framework for training and evaluating LLMs, specifically for the recommendation task. It has been used to perform several experiments to assess the impact of using different LLMs, different personalization strategies, and a novel set of more informative prompts on the overall performance of recommendations in a fully reproducible environment.
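To make the paradigm concrete: P5 casts recommendation tasks as text-to-text problems, where user and item information is verbalized into a personalized prompt and the recommendation is produced as generated text. The sketch below is purely illustrative — the template wording, function name, and item identifiers are hypothetical, not the actual prompt collection defined by P5 or LaikaLLM.

```python
# Illustrative sketch of casting sequential recommendation as a
# text-to-text prompt, in the spirit of the P5 paradigm. The template
# text here is hypothetical; P5 defines its own families of prompt
# templates per recommendation task.

def sequential_rec_prompt(user_id: str, history: list[str]) -> str:
    """Verbalize a user's interaction history into a personalized
    prompt that a language model could complete with the next item."""
    items = ", ".join(history)
    return (
        f"User_{user_id} has interacted with items {items}. "
        f"Predict the next item this user will interact with:"
    )

# Example: build a prompt from a short interaction history.
prompt = sequential_rec_prompt("42", ["item_101", "item_205", "item_309"])
print(prompt)
```

In a full pipeline, prompts like this would be fed to a sequence-to-sequence language model (P5 builds on a T5-style backbone), and the generated token sequence would be mapped back to an item identifier.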

Full Text: https://dl.acm.org/doi/10.1145/364045...

