How Many Labelled Examples Do You Need for a BERT-sized Model to Beat GPT4 on Predictive Tasks?
Author: Toronto Machine Learning Society (TMLS)
Uploaded: 2023-11-01
Views: 1889
Description:
Speaker: Matthew Honnibal, Founder and CTO, Explosion AI
Large Language Models (LLMs) offer a new machine learning interaction paradigm: in-context learning. For a wide variety of generative tasks (e.g. summarisation, question answering, paraphrasing), this approach clearly outperforms methods that rely on explicit labelled data. In-context learning can also be applied to predictive tasks such as text categorisation and entity recognition, with few or no labelled examples.
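To make the in-context-learning setup concrete, here is a minimal sketch of zero-shot text categorisation with a hosted LLM, assuming the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment. The label set, prompt wording, and `classify` helper are illustrative assumptions, not taken from the talk.

```python
# Zero-shot in-context text categorisation: no labelled training data,
# just a prompt describing the task and the allowed categories.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["billing", "technical-support", "feedback"]  # hypothetical label set

def classify(text: str) -> str:
    """Ask the LLM to assign exactly one label to the input text."""
    prompt = (
        "Classify the following support ticket into exactly one of these "
        f"categories: {', '.join(LABELS)}.\n\n"
        f"Ticket: {text}\n\n"
        "Answer with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output suits a predictive task
    )
    return response.choices[0].message.content.strip()

print(classify("I was charged twice for my subscription this month."))
```

The talk's question is how many labelled examples a fine-tuned BERT-sized model needs before it matches or beats this kind of prompted setup on such predictive tasks.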