LLM Hallucinations in RAG QA - Thomas Stadelmann, deepset.ai
Author: deepset, makers of Haystack
Uploaded: 2023-08-16
Views: 7256
Description:
Hallucinations are one of the biggest challenges in running LLM-based applications, as they can significantly undermine the trustworthiness of your application. Retrieval-augmented QA not only enables us to run LLMs on any data but also helps mitigate hallucinations. However, even with retrieval augmentation, we cannot fully avoid them.
In this webinar, Thomas shows current approaches for systematically detecting hallucinations, paving the way toward automating this critical check.
#ai #llm #generativeai #developer #deepset.ai
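The description mentions systematically detecting hallucinations in retrieval-augmented QA. As a rough illustration of one such approach (a sketch, not the method presented in the webinar), the snippet below flags a generated answer as potentially ungrounded when no retrieved passage entails it. It assumes an off-the-shelf NLI cross-encoder (cross-encoder/nli-deberta-v3-base); the model choice, label order, and threshold are illustrative assumptions.

    # Illustrative sketch only: not the method shown in the webinar.
    # Idea: flag a RAG answer as potentially hallucinated when none of the
    # retrieved passages entails it, using an off-the-shelf NLI cross-encoder.
    from sentence_transformers import CrossEncoder

    nli = CrossEncoder("cross-encoder/nli-deberta-v3-base")
    LABELS = ["contradiction", "entailment", "neutral"]  # order is model-specific

    def is_grounded(answer: str, passages: list[str], threshold: float = 0.5) -> bool:
        """True if at least one retrieved passage entails the generated answer."""
        pairs = [(passage, answer) for passage in passages]
        scores = nli.predict(pairs, apply_softmax=True)  # one probability row per pair
        entailment_idx = LABELS.index("entailment")
        return max(row[entailment_idx] for row in scores) >= threshold

    # Example: an answer supported by the retrieved context passes the check.
    passages = ["Haystack is an open source LLM framework built by deepset."]
    print(is_grounded("deepset builds the Haystack framework.", passages))

Entailment-based grounding checks like this catch claims with no support in the retrieved context, but they can miss paraphrases the NLI model scores as neutral, so such a check is usually one signal among several rather than a complete detector.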