Build Specialist LLMs Like It’s 2019 (Randall Balestriero)

Author: Machine Learning Street Talk

Uploaded: 2025-04-23

Views: 15801

Description: Randall Balestriero joins the show to discuss some counterintuitive findings in AI. He shares research showing that huge language models, even when trained from scratch (randomly initialized) rather than pre-trained at massive scale, can learn specific tasks like sentiment analysis surprisingly well, train stably, and avoid severe overfitting, sometimes matching the performance of costly pre-trained models. This raises questions about when giant pre-training efforts are truly worth it.
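
A minimal sketch of that comparison, assuming the HuggingFace transformers and datasets libraries, a BERT-sized architecture, and the SST-2 sentiment task (illustrative choices, not the exact setup from the referenced paper): the same architecture is fine-tuned once from random initialization and once from pre-trained weights.

```python
# Sketch: same architecture, trained from scratch vs. from pre-trained weights,
# on a single sentiment-classification task (SST-2). Hyperparameters are illustrative.
from transformers import (AutoConfig, AutoTokenizer,
                          AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "bert-base-uncased"  # architecture reference; weights are optional below
tokenizer = AutoTokenizer.from_pretrained(model_name)

dataset = load_dataset("glue", "sst2")
def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True,
                     padding="max_length", max_length=128)
dataset = dataset.map(tokenize, batched=True)

config = AutoConfig.from_pretrained(model_name, num_labels=2)

# (a) randomly initialized: same architecture, no pre-trained weights
scratch_model = AutoModelForSequenceClassification.from_config(config)
# (b) pre-trained: identical architecture, weights from large-scale pre-training
pretrained_model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=32, learning_rate=2e-5)

for name, model in [("scratch", scratch_model), ("pretrained", pretrained_model)]:
    trainer = Trainer(model=model, args=args,
                      train_dataset=dataset["train"],
                      eval_dataset=dataset["validation"])
    trainer.train()
    print(name, trainer.evaluate())
```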

He also talks about how self-supervised learning (where models learn from data structure itself) and traditional supervised learning (using labeled data) are fundamentally similar, allowing researchers to apply decades of supervised learning theory to improve newer self-supervised methods.
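
As a toy illustration of that similarity (not the specific construction from the referenced paper): an InfoNCE-style contrastive SSL objective can be written as ordinary supervised cross-entropy, where the "label" of each augmented view is simply the batch index of the other view of the same sample.

```python
# Contrastive SSL loss expressed as supervised cross-entropy over implicit labels.
import torch
import torch.nn.functional as F

def contrastive_ssl_as_supervised(z1, z2, temperature=0.1):
    """z1, z2: [batch, dim] embeddings of two augmentations of the same inputs."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # similarity of every view-1 to every view-2
    labels = torch.arange(z1.size(0))    # "correct class" = index of the matching sample
    return F.cross_entropy(logits, labels)  # a standard supervised classification loss

# usage with embeddings from any encoder
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(contrastive_ssl_as_supervised(z1, z2))
```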

Finally, Randall touches on fairness in AI models used for Earth data (like climate prediction), revealing that these models can be biased: they perform poorly in specific locations such as islands or coastlines even when they look accurate overall. This has important implications for policy decisions based on such data.
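
A hedged sketch of the kind of per-location audit this implies, with entirely synthetic data (the "coastline" mask and error magnitudes are hypothetical, not results from the referenced paper): an aggregate error metric can look fine while a small subgroup of locations fails badly.

```python
# Aggregate vs. subgroup error: a small region of localized failure is hidden by the mean.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
is_coastline = rng.random(n) < 0.05                                   # small subgroup
residual = rng.normal(0.0, 1.0, n)                                    # baseline model error
residual[is_coastline] += rng.normal(0.0, 3.0, is_coastline.sum())    # localized failure

def rmse(x):
    return float(np.sqrt(np.mean(x ** 2)))

print("overall RMSE:  ", rmse(residual))                  # dominated by the 95% inland points
print("inland RMSE:   ", rmse(residual[~is_coastline]))
print("coastline RMSE:", rmse(residual[is_coastline]))    # the bias the aggregate metric hides
```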

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier, focused on o-series-style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich.

Go to https://tufalabs.ai/
***

TRANSCRIPT + SHOWNOTES:
https://www.dropbox.com/scl/fi/n7yev7...

TOC:
1. Model Training Efficiency and Scale
[00:00:00] 1.1 Training Stability of Large Models on Small Datasets
[00:04:09] 1.2 Pre-training vs Random Initialization Performance Comparison
[00:07:58] 1.3 Task-Specific Models vs General LLMs Efficiency

2. Learning Paradigms and Data Distribution
[00:10:35] 2.1 Fair Language Model Paradox and Token Frequency Issues
[00:12:02] 2.2 Pre-training vs Single-task Learning Spectrum
[00:16:04] 2.3 Theoretical Equivalence of Supervised and Self-supervised Learning
[00:19:40] 2.4 Self-Supervised Learning and Supervised Learning Relationships
[00:21:25] 2.5 SSL Objectives and Heavy-tailed Data Distribution Challenges

3. Geographic Representation in ML Systems
[00:25:20] 3.1 Geographic Bias in Earth Data Models and Neural Representations
[00:28:10] 3.2 Mathematical Limitations and Model Improvements
[00:30:24] 3.3 Data Quality and Geographic Bias in ML Datasets

REFS:
[00:01:40] Research on training large language models from scratch on small datasets, Randall Balestriero et al.
https://openreview.net/forum?id=wYGBW...
[00:10:35] The Fair Language Model Paradox (2024), Andrea Pinto, Tomer Galanti, Randall Balestriero
https://arxiv.org/abs/2410.11985
[00:12:20] Muppet: Massive Multi-task Representations with Pre-Finetuning (2021), Armen Aghajanyan et al.
https://arxiv.org/abs/2101.11038
[00:14:30] Dissociating language and thought in large language models (2023), Kyle Mahowald et al.
https://arxiv.org/abs/2301.06627
[00:16:05] The Birth of Self-Supervised Learning: A Supervised Theory, Randall Balestriero et al.
https://openreview.net/forum?id=NhYAj...
[00:21:25] VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Adrien Bardes, Jean Ponce, Yann LeCun
https://arxiv.org/abs/2105.04906
[00:25:20] No Location Left Behind: Measuring and Improving the Fairness of Implicit Representations for Earth Data (2025), Daniel Cai, Randall Balestriero, et al.
https://arxiv.org/abs/2502.06831
[00:33:45] Work on geographic bias in computer vision datasets, Mark Ibrahim et al.
https://arxiv.org/pdf/2304.12210
