
2308.08747 - An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning

Author: AI Paper Cast

Uploaded: 2025-12-09

Views: 9

Description: title: An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning
author: Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, Yue Zhang
arXiv:2308.08747 - https://arxiv.org/abs/2308.08747

Catastrophic forgetting (CF) is a phenomenon in machine learning in which a model forgets previously learned information while acquiring new knowledge to achieve satisfactory performance on downstream tasks. As large language models (LLMs) have demonstrated remarkable performance, it is intriguing to investigate whether CF occurs during the continual instruction tuning of LLMs. This study empirically evaluates the forgetting phenomenon in LLMs' knowledge during continual instruction tuning from the perspectives of domain knowledge, reasoning, and reading comprehension. The experiments reveal that catastrophic forgetting is generally observed in LLMs ranging from 1B to 7B parameters. Surprisingly, as the model scale increases, the severity of forgetting intensifies within this range, which may result from the higher initial performance of the larger LLMs. Comparing the decoder-only model BLOOMZ with the encoder-decoder model mT0, BLOOMZ exhibits less forgetting and retains more knowledge. Interestingly, LLMs can also mitigate language biases, such as gender bias, during continual fine-tuning. Furthermore, the findings indicate that general instruction tuning can help alleviate forgetting in LLMs during subsequent fine-tuning.
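To make the evaluation protocol concrete, the following is a minimal sketch (not code from the paper) of the standard forgetting metric used in continual-learning studies: after fine-tuning on tasks sequentially, forgetting on a benchmark is the drop from its best score at any earlier stage to its final score. The function name and the example numbers are illustrative assumptions.

```python
def forgetting(scores):
    """scores[t][b] = accuracy on benchmark b after fine-tuning stage t.
    Returns, per benchmark, the best score over earlier stages minus
    the final score (positive values indicate forgetting)."""
    final = scores[-1]
    best_earlier = [max(stage[b] for stage in scores[:-1])
                    for b in range(len(final))]
    return [round(best - last, 4) for best, last in zip(best_earlier, final)]

# Example: two benchmarks (say, domain knowledge and reading comprehension)
# evaluated after each of three sequential fine-tuning stages.
scores = [
    [0.62, 0.55],   # after stage 1
    [0.58, 0.60],   # after stage 2
    [0.50, 0.52],   # after stage 3 (final)
]
print(forgetting(scores))  # -> [0.12, 0.08]
```

The paper's observation that forgetting intensifies with scale corresponds to this metric growing larger for bigger models, partly because their stage-1 scores start higher.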



