Foundations of Machine Learning • Part 2.1: PAC Learning Explained (Slides by Prof. Mohri, NYU)
Author: SodiumMan
Uploaded: 2025-06-03
Views: 352
Description:
In this video, we explore the PAC (Probably Approximately Correct) learning framework, which provides a formal foundation for machine learning. You will learn what it means for a learning algorithm to be “probably” and “approximately” correct, how to distinguish true error from empirical error, and why distribution-free learning is important.
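As a quick reference (following the standard notation in Mohri et al.'s Foundations of Machine Learning; the exact symbols used in the video are assumed here), the two error notions compared in the video are the true (generalization) error and the empirical error of a hypothesis h with respect to a target concept c, a distribution D, and a sample S = (x_1, …, x_m):

R(h) = \Pr_{x \sim D}[h(x) \neq c(x)], \qquad \widehat{R}_S(h) = \frac{1}{m} \sum_{i=1}^{m} \mathbf{1}[h(x_i) \neq c(x_i)]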
We present the formal PAC definition, explain how the accuracy parameter ε and confidence parameter δ determine the required sample size, and work through a detailed example showing that axis-aligned rectangles in ℝ² are PAC-learnable. By the end, you will understand how PAC bounds answer the question “How much data do I need?” and appreciate the roles that ε and δ play in guaranteeing learning performance.
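As a sketch of how ε and δ enter the guarantee (again assuming the video follows the standard treatment from Mohri's slides): a concept class C is PAC-learnable if there is an algorithm that, for every ε, δ ∈ (0, 1), every distribution D, and every target c ∈ C, returns a hypothesis h_S from a sample S of size m polynomial in 1/ε and 1/δ such that

\Pr_{S \sim D^m}[R(h_S) \leq \varepsilon] \geq 1 - \delta

For axis-aligned rectangles in ℝ², the algorithm that returns the tightest rectangle enclosing the positive points achieves this with sample size

m \geq \frac{4}{\varepsilon} \ln \frac{4}{\delta}

so, for example, ε = 0.1 and δ = 0.05 call for roughly 176 labeled points.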