CSCI 3151 - M37 - Overfitting, capacity, and deep network generalization
Author: Atlantic AI Institute
Uploaded: 2026-02-13
Views: 13
Description:
This module reframes the classic “bias–variance / overfitting” story in the way it actually shows up in modern deep learning: large networks often have far more parameters than data, can drive training error near zero, and yet may still generalize—until they don’t. We connect the textbook U-shaped picture to deep-network realities like effective capacity (architecture + optimizer + regularization + data), interpolation, and why “more parameters” is not the whole story for generalization.
On the practical side, we run controlled PyTorch experiments that vary MLP capacity and track both training and validation performance. We also discuss common deep-learning failure modes that are not captured by training loss alone, including spurious correlations and distribution shift, and how tools like early stopping, weight decay, dropout, and data augmentation act as levers on effective capacity.
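The module's capacity-sweep experiments use PyTorch MLPs; as a lightweight stand-in, the same train-vs-validation pattern can be sketched with polynomial regression, where the polynomial degree plays the role of model capacity. Everything below (the toy sine task, the split sizes, the degrees swept) is an illustrative choice, not taken from the module's actual notebooks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: noisy sine, split into train and validation.
x = rng.uniform(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(0, 0.2, x.shape)
x_tr, y_tr = x[:25], y[:25]
x_va, y_va = x[25:], y[25:]

def errors(degree):
    """Fit a degree-`degree` polynomial (the capacity knob) by least
    squares and return (train MSE, validation MSE)."""
    p = np.poly1d(np.polyfit(x_tr, y_tr, degree))
    return (np.mean((p(x_tr) - y_tr) ** 2),
            np.mean((p(x_va) - y_va) ** 2))

# Sweep capacity and watch the two curves diverge: train error keeps
# falling as capacity grows, while validation error eventually turns up.
for degree in (1, 3, 9, 15):
    tr, va = errors(degree)
    print(f"degree {degree:2d}: train MSE {tr:.4f}  val MSE {va:.4f}")
```

The same logging pattern carries over to the MLP experiments: record both metrics at every capacity setting, since training loss alone cannot distinguish a good fit from interpolation of noise.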
Course module page:
https://web.cs.dal.ca/~rudzicz/Teaching/CS...