Sharut Gupta - Redefining Context for Powerful Test-Time Adaptation Using Unlabeled Data

Author: MIT Embodied Intelligence

Uploaded: 2024-12-05

Views: 768

Description: Title: Redefining Context for Powerful Test-Time Adaptation Using Unlabeled Data

Abstract:
Foundation models, while powerful, often struggle under distribution shifts in unfamiliar domains, typically requiring costly data collection and retraining to maintain performance. Test-Time Adaptation (TTA) has emerged as a promising approach to address these limitations, enabling models to adapt dynamically to new target domains at test time. In this talk, I will present TTA approaches by rethinking the notion of “context”—an abstract concept drawn from in-context learning—to address two fundamental challenges: improving out-of-distribution generalization and aligning representations with varying task-specific inductive biases, such as fairness constraints. Specifically, we explore two ways of leveraging unsupervised in-context learning, allowing models to use unlabeled data to adapt their behavior flexibly. First, we will demonstrate how using unlabeled domain data as context can align models with diverse distributions, enhancing their robustness in changing environments. Next, we will extend this idea to further improve this alignment by enforcing task-specific inductive priors. Together, these approaches showcase the potential of unsupervised, context-driven TTA to address key challenges of current-generation foundation models. Finally, we will explore the broader implications of this context-driven perspective for building world models, planning, and robust decision-making.
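
The abstract describes the approach only at a high level; concrete architectures and objectives are left to the talk itself. As a loose, hypothetical sketch of the core idea (a prediction for a test input conditioned, via attention, on a batch of unlabeled target-domain examples, with no gradient updates at test time), consider the following PyTorch fragment. Every name and dimension here is an illustrative assumption, not something taken from the talk.

import torch
import torch.nn as nn

class ContextConditionedClassifier(nn.Module):
    # Hypothetical sketch of unsupervised in-context test-time adaptation:
    # the classifier attends over unlabeled target-domain features, so its
    # prediction shifts with the target distribution without any labels or
    # parameter updates at test time.
    def __init__(self, in_dim=32, feat_dim=128, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, query, context):
        q = self.encoder(query).unsqueeze(1)         # (B, 1, D) test input
        c = self.encoder(context)                    # (B, K, D) unlabeled context
        adapted, _ = self.attn(q, c, c)              # query attends to context
        return self.head((q + adapted).squeeze(1))   # residual, then classify

model = ContextConditionedClassifier()
x = torch.randn(4, 32)         # four test queries
ctx = torch.randn(4, 16, 32)   # 16 unlabeled target-domain samples per query
logits = model(x, ctx)         # (4, 10); adaptation happens in the forward pass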


Biography:
Sharut Gupta is a third-year PhD candidate in Electrical Engineering and Computer Science (EECS) at MIT, advised by Prof. Stefanie Jegelka. Her research focuses on multi-modal representation learning, robustness, and out-of-distribution generalization. She received her Bachelor's and Master's (Dual) degrees from the Indian Institute of Technology Delhi (IIT Delhi), where she completed her thesis research with Prof. Yoshua Bengio on "A Causal Perspective on Efficient Distributed Systems". Sharut is a recipient of the MIT Presidential Fellowship and has completed research internships at FAIR (Meta AI) and Google DeepMind.



Related videos

Felix Yanwei Wang - Inference-Time Policy Customization Through Interactive Task Specification

ICCV 2023 Tutorial: Test-time Adaptation: Formulations, Methods and Benchmarks

CMU Advanced NLP Spring 2026 (7): Scaling Laws and In-Context Learning

Context Rot: When Long Context Fails

Charlie Snell, UC Berkeley. Title: Scaling LLM Test-Time Compute

LLM fine-tuning or TRAINING a small model? We tested it!

Training Dynamics of In-Context Learning in Linear Attention

Steering LLM behavior without fine-tuning

Next-generation long-context LLM inference using LMCache - Junchen Jiang (Univer...

Why are Transformers replacing CNNs?

Stanford CS330 Deep Multi-Task & Meta Learning - Domain Adaptation | 2022 | Lecture 13

[Live] ScaleML Series Day 2 — Efficient & Effective Long-Context Modeling for Large Language Models

Deep Representation Learning - Yoshua Bengio (MILA, Canada)

Context Rot: How Increasing Input Tokens Impacts LLM Performance (Paper Analysis)

EI Seminar - Jason Ma - Recent Progress on Foundation Model Supervision for Robot Learning

How attention became so efficient [GQA/MLA/DSA]

What Is In-Context Learning in Deep Learning?

Can AI become conscious? - Semikhatov, Anokhin

Test-Time Training Done Right - Tianyuan Zhang | ASAP21

The most complex model we actually understand
