I lead a Google DeepMind team at 26. If you want to work at an AI company... | Neel Nanda (Part 2)

Author: 80,000 Hours

Uploaded: 2025-09-15

Views: 92,085

Description: PART 1 — a comprehensive update on mechanistic interpretability: • We Can Monitor AI’s Thoughts… For Now | Go...

At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret? “It’s mostly luck,” he says, but “another part is what I think of as maximising my luck surface area.”

This means creating as many opportunities as possible for surprisingly good things to happen: Write publicly. Reach out to researchers whose work you admire. Say yes to unusual projects that seem a little scary.

Nanda’s own path illustrates this perfectly. He started a challenge to write one blog post per day for a month to overcome perfectionist paralysis. Those posts helped seed the field of mechanistic interpretability and, incidentally, led to meeting his partner of four years.

His YouTube channel (/@neelnanda2469) features unedited three-hour videos of him reading through famous papers and sharing his thoughts. One has 30,000 views. “People were into it,” he shrugs.

Most remarkably, he ended up running DeepMind’s mechanistic interpretability team. He’d joined expecting to be an individual contributor, but when the team lead stepped down, he stepped up despite having no management experience. “I did not know if I was going to be good at this. I think it’s gone reasonably well.”

His core lesson: “You can just do things.” This sounds trite but is a useful reminder all the same. Doing things is a skill that improves with practice. Most people overestimate the risks and underestimate their ability to recover from failures. And as Neel explains, junior researchers today have a superpower previous generations lacked: large language models that can dramatically accelerate learning and research.

In this extended conversation, Neel and host Rob Wiblin discuss all that and some other hot takes from Neel's four years at Google DeepMind.

And be sure to check out part 1 of Rob and Neel’s conversation!
• We Can Monitor AI’s Thoughts… For Now | Go...

Full transcript and links to learn more:
https://80k.info/nn2

What did you think of the episode?
https://forms.gle/6binZivKmjjiHU6dA

Chapters:
• Cold open (00:00:00)
• Who’s Neel Nanda? (00:01:11)
• Luck surface area and making the right opportunities (00:01:47)
• Writing cold emails that aren't insta-deleted (00:03:54)
• How Neel uses LLMs to get much more done (00:09:18)
• “If your safety work doesn't advance capabilities, it's probably bad safety work” (00:23:45)
• Why Neel refuses to share his p(doom) (00:27:54)
• How Neel went from the couch to an alignment rocketship (00:32:00)
• Navigating towards impact at a frontier AI company (00:40:05)
• How does impact differ inside and outside frontier companies? (00:50:45)
• Is a special skill set needed to guide large companies? (00:56:56)
• The benefit of risk frameworks: early preparation (01:01:00)
• Should people work at the safest or most reckless company? (01:06:20)
• Advice for getting hired by a frontier AI company (01:09:40)
• What makes for a good ML researcher? (01:14:05)
• Three stages of the research process (01:20:50)
• How do supervisors actually add value? (01:33:16)
• An AI PhD – with these timelines?! (01:35:37)
• Is career advice generalisable, or does everyone get the advice they don't need? (01:42:24)
• Remember: You can just do things (01:45:25)

This episode was recorded on July 21.

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore
