ROME: Locating and Editing Factual Associations in GPT (Paper Explained & Author Interview)

Author: Yannic Kilcher

Uploaded: 2022-11-04

Views: 42491

Description: #ai #language #knowledge

Large Language Models can store vast amounts of facts about the world, but little is known about how these models actually do this. This paper aims to discover the mechanism and location of storage and recall of factual associations in GPT models, and then proposes a method for the targeted editing of such facts, in the form of a simple rank-one update to a single MLP layer. This has wide implications both for how we understand such models' inner workings and for our ability to gain greater control over such models in the future.
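To make the editing idea concrete: a rank-one update adds an outer product u v^T to one weight matrix, so that the input pattern v (the "key" for the subject) now writes a new output direction u (the "value" for the edited fact). Below is a minimal PyTorch sketch of that arithmetic; the module path and layer index in the usage comment are illustrative assumptions for a GPT-2-style model, not the authors' exact implementation (see the linked paper and code for that).

    # Minimal sketch of a rank-one edit, W' = W + u v^T, applied to a single
    # MLP projection matrix. Shapes and module path are assumptions.
    import torch

    def rank_one_update(W: torch.Tensor, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        """Return W + u v^T.

        W: (d_out, d_in) MLP projection weight.
        u: (d_out,) new output direction (the edited "value").
        v: (d_in,)  input pattern that should trigger it (the "key").
        """
        return W + torch.outer(u, v)

    # Hypothetical usage on one mid-layer MLP of a GPT-2-style model:
    # proj = model.transformer.h[18].mlp.c_proj
    # proj.weight.data = rank_one_update(proj.weight.data, u, v)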

OUTLINE:
0:00 - Introduction
1:40 - What are the main questions in this subfield?
6:55 - How causal tracing reveals where facts are stored
18:40 - Clever experiments show the importance of MLPs
24:30 - How do MLPs store information?
29:10 - How to edit language model knowledge with precision?
36:45 - What does it mean to know something?
39:00 - Experimental Evaluation & the CounterFact benchmark
45:40 - How to obtain the required latent representations?
51:15 - Where is the best location in the model to perform edits?
58:00 - What do these models understand about language?
1:02:00 - Questions for the community

Paper: https://arxiv.org/abs/2202.05262
Follow-up paper on Mass-Editing Memory in a Transformer: https://arxiv.org/abs/2210.07229

Abstract:
We analyze the storage and recall of factual associations in autoregressive transformer language models, finding evidence that these associations correspond to localized, directly-editable computations. We first develop a causal intervention for identifying neuron activations that are decisive in a model's factual predictions. This reveals a distinct set of steps in middle-layer feed-forward modules that mediate factual predictions while processing subject tokens. To test our hypothesis that these computations correspond to factual association recall, we modify feed-forward weights to update specific factual associations using Rank-One Model Editing (ROME). We find that ROME is effective on a standard zero-shot relation extraction (zsRE) model-editing task, comparable to existing methods. To perform a more sensitive evaluation, we also evaluate ROME on a new dataset of counterfactual assertions, on which it simultaneously maintains both specificity and generalization, whereas other methods sacrifice one or another. Our results confirm an important role for mid-layer feed-forward modules in storing factual associations and suggest that direct manipulation of computational mechanisms may be a feasible approach for model editing. The code, dataset, visualizations, and an interactive demo notebook are available at this https URL
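To connect the abstract's "causal intervention" to code, here is a rough, self-contained PyTorch sketch of one causal-tracing measurement: corrupt the subject-token embeddings with noise, then restore the clean hidden state at a single (layer, position) and measure how much of the correct answer's probability comes back. The module paths (transformer.wte, transformer.h[layer]) assume a Hugging Face GPT-2-style model, and the noise scale is an arbitrary placeholder; the paper sweeps all layers and positions and averages over many prompts and noise samples.

    import torch

    @torch.no_grad()
    def trace_effect(model, input_ids, subject_slice, answer_id, layer, pos, noise=0.1):
        emb = model.transformer.wte          # token-embedding module (GPT-2 naming)
        block = model.transformer.h[layer]   # transformer block to probe

        # Clean run: record this block's hidden state at the probed position.
        saved = {}
        h_save = block.register_forward_hook(
            lambda m, inp, out: saved.__setitem__("h", out[0][:, pos].clone()))
        p_clean = model(input_ids).logits[0, -1].softmax(-1)[answer_id]
        h_save.remove()

        # Corrupted run: Gaussian noise on the subject-token embeddings.
        def corrupt(m, inp, out):
            out = out.clone()
            out[:, subject_slice] += noise * torch.randn_like(out[:, subject_slice])
            return out
        h_noise = emb.register_forward_hook(corrupt)
        p_corrupt = model(input_ids).logits[0, -1].softmax(-1)[answer_id]

        # Restoration run: same corruption, but patch the clean state back in.
        def restore(m, inp, out):
            out[0][:, pos] = saved["h"]
            return out
        h_fix = block.register_forward_hook(restore)
        p_restored = model(input_ids).logits[0, -1].softmax(-1)[answer_id]
        h_noise.remove(); h_fix.remove()

        # Indirect effect: the share of lost probability that this single
        # (layer, position) state recovers; the paper finds it peaks at
        # mid-layer MLPs over subject tokens.
        return (p_restored - p_corrupt) / (p_clean - p_corrupt)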

Authors: Kevin Meng, David Bau, Alex Andonian, Yonatan Belinkov

Links:
Homepage: https://ykilcher.com
Merch: https://ykilcher.com/merch
YouTube: / yannickilcher
Twitter: / ykilcher
Discord: https://ykilcher.com/discord
LinkedIn: / ykilcher

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: / yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

Related videos

RWKV: Reinventing RNNs for the Transformer Era (Paper Explained)
LLaMA: Open and Efficient Foundation Language Models (Paper Explained)
Alignment faking in large language models
Galactica: A Large Language Model for Science (Drama & Paper Review)
ROME: Locating and Editing Factual Associations in GPT with David Bau
Scalability of Interpretability
LLM and GPT: How Do Large Language Models Work? A Visual Introduction to Transformers
[GRPO Explained] DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
OpenAI is Suddenly in Trouble
Scaling Transformer to 1M tokens and beyond with RMT (Paper Explained)
Marek Meissner: Putin's Madness. Russia's Degradation Is Irreversible.
JEPA - A Path Towards Autonomous Machine Intelligence (Paper Explained)
This Algorithm Could Make a GPT-4 Toaster Possible
How LLMs Can Store Facts | Chapter 7, Deep Learning
Introduction to Generative AI
Retentive Network: A Successor to Transformer for Large Language Models (Paper Explained)
Yann LeCun - Self-Supervised Learning: The Dark Matter of Intelligence (FAIR Blog Post Explained)
David Bau: Editing Facts in GPT, Interpretability
[1hr Talk] Intro to Large Language Models
State of GPT | BRK216HFS
