LLMs | Parameter Efficient Fine-Tuning (PEFT) | Lec 14.1
Author: LCS2
Uploaded: 2024-09-27
Views: 4844
Description:
tl;dr: This lecture covers Parameter Efficient Fine-Tuning (PEFT) techniques that adapt LLMs to specific applications by updating only a small fraction of their parameters, keeping computational cost and resource usage low.
🎓 Lecturer: Dinesh Raghu [https://research.ibm.com/people/dines...]
🔗 Get the Slides Here: http://lcs2.in/llm2401
📚 Suggested Readings:
- [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/pdf/2104.08691)
- [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://arxiv.org/pdf/2101.00190)
- [Parameter-Efficient Transfer Learning for NLP](https://arxiv.org/pdf/1902.00751)
- [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/pdf/2106.09685)
This lecture delves into Parameter Efficient Fine-Tuning (PEFT) techniques, which adapt large language models (LLMs) without retraining all of their parameters. We explore methods such as prompt tuning, prefix tuning, adapters, and low-rank adaptation (LoRA), which enable targeted, resource-efficient modifications to pre-trained models. These techniques are essential for anyone looking to customize LLMs for specific tasks while maintaining scalability and efficiency.
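To give a concrete feel for one of these methods, here is a minimal LoRA sketch in plain PyTorch, not taken from the lecture or its slides: the class name `LoRALinear` and the `rank`/`alpha` defaults are illustrative assumptions. The idea is to freeze a pre-trained linear layer and train only a low-rank additive update.

```python
# Illustrative LoRA sketch (assumed names: LoRALinear, rank, alpha; not from the lecture).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights; only the low-rank factors are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap e.g. an attention projection and fine-tune only the LoRA factors.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable}")  # ~12K vs ~590K for full fine-tuning of this layer
```

Because `lora_B` starts at zero, the wrapped layer initially behaves exactly like the frozen pre-trained layer; fine-tuning then learns only the small `A` and `B` factors.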