The ALPACA Code explained: Self-instruct fine-tuning of LLMs
Author: Discover AI
Uploaded: 2023-04-10
Views: 7558
Description:
PyTorch code to fine-tune and instruction fine-tune your large language models (like the Alpaca LLM) with instruction fine-tuning data sets: a beautiful, but non-trivial, coding endeavor. Use your own data set (instruction fine-tuned) to adapt your LLM to multiple tasks in parallel!
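To make the training side concrete, here is a minimal sketch (not the code from the video) of a single supervised fine-tuning step on one instruction/response pair, using a Hugging Face causal language model. The "gpt2" checkpoint and the sample text are placeholder assumptions; swap in your own base model and data.

```python
# Minimal sketch of one instruction fine-tuning step (assumption: Hugging Face
# transformers + PyTorch; "gpt2" is only a small placeholder base model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One instruction-tuning sample: the prompt plus the expected response.
text = ("### Instruction:\nTranslate to German: Good morning.\n\n"
        "### Response:\nGuten Morgen.")
batch = tokenizer(text, return_tensors="pt")

model.train()
outputs = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=batch["input_ids"])  # standard causal-LM (next-token) loss
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(float(outputs.loss))
```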
Self-instruct is a method for generating data sets: ChatGPT, GPT-4, or another LLM produces synthetic examples tailored to our needs, which we then use to fine-tune or instruction fine-tune our LLM for specific tasks (such as summarization, translation, Q&A, ...).
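The following sketch illustrates the self-instruct bootstrapping idea under stated assumptions: a few seed tasks are shown to a teacher LLM, which is asked to produce new instruction/input/output triples in the same format. The `ask_teacher_llm` function is a hypothetical stub standing in for a real ChatGPT/GPT-4 API call, and the seed tasks are purely illustrative.

```python
# Minimal sketch of self-instruct style data generation (stubbed teacher LLM).
import json
import random

# Illustrative seed tasks; in practice these come from a hand-written seed set.
seed_tasks = [
    {"instruction": "Summarize the paragraph in one sentence.", "input": "...", "output": "..."},
    {"instruction": "Translate the sentence to French.", "input": "...", "output": "..."},
]

def ask_teacher_llm(prompt: str) -> str:
    # Stub: replace with a real call to ChatGPT/GPT-4 or another LLM that
    # returns one new task as a JSON instruction/input/output triple.
    return json.dumps({"instruction": "Write a haiku about spring.",
                       "input": "", "output": "Blossoms drift softly / ..."})

def generate_examples(n: int) -> list[dict]:
    dataset = []
    for _ in range(n):
        # Show a couple of seed demonstrations, then ask for a new task.
        demos = random.sample(seed_tasks, k=min(2, len(seed_tasks)))
        prompt = ("Here are example tasks:\n"
                  + "\n".join(json.dumps(d) for d in demos)
                  + "\nGenerate one new, different task in the same JSON format.")
        dataset.append(json.loads(ask_teacher_llm(prompt)))
    return dataset

print(generate_examples(3))
```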
SELF-INSTRUCT: Aligning Language Models with Self-Generated Instructions
https://arxiv.org/pdf/2212.10560.pdf
Stanford ALPACA:
https://crfm.stanford.edu/2023/03/13/...
https://github.com/tatsu-lab/stanford...
#ai
#naturallanguageprocessing
#finetuning
#chatgpt
#machinelearning