Convergence of Continuous-Time Stochastic Gradient Descent with Applications to Deep Neural Networks
Author: Centre de Recerca Matemàtica
Uploaded: 2026-01-20
Views: 53
Description:
Speaker: Eulàlia Nualart, Universitat Pompeu Fabra - Barcelona School of Economics
Abstract:
This talk studies a continuous-time approximation of the stochastic gradient descent process for minimizing the population expected loss in learning problems. The main results establish general sufficient conditions for convergence, extending the results that Chatterjee (2022) established for (non-stochastic) gradient descent.
Professor Nualart shows how the main result can be applied to the case of overparametrized neural network training. This is joint work with Gábor Lugosi (UPF).
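As informal background (not material from the talk itself): the continuous-time approximation of SGD with step size η is often modeled by a stochastic differential equation of the form dθ_t = −∇L(θ_t) dt + √η Σ(θ_t)^{1/2} dW_t, where L is the population loss and W is a Brownian motion. The Python sketch below simulates such an SDE with the Euler–Maruyama scheme on a toy quadratic loss; the loss, the constant noise scale, and all parameter names are illustrative assumptions, not the setting of the paper.

```python
# A minimal sketch (assumptions, not the talk's method): simulate the SDE
#     d theta_t = -grad L(theta_t) dt + sigma * dW_t
# via Euler--Maruyama, using a toy quadratic loss L(theta) = 0.5 * ||theta||^2.
import numpy as np

ETA = 0.01      # time step, playing the role of the SGD step size (assumed)
SIGMA = 0.1     # assumed constant gradient-noise scale
STEPS = 5_000
rng = np.random.default_rng(0)

def grad_loss(theta: np.ndarray) -> np.ndarray:
    """Gradient of the toy quadratic loss L(theta) = 0.5 * ||theta||^2."""
    return theta

theta = rng.normal(size=2)  # random initialization
for _ in range(STEPS):
    # Euler--Maruyama step: drift = negative gradient, diffusion = Gaussian noise
    noise = rng.normal(size=theta.shape)
    theta = theta - ETA * grad_loss(theta) + SIGMA * np.sqrt(ETA) * noise

print("final iterate:", theta)  # concentrates near the minimizer 0 for small ETA
```

For this convex toy loss the iterate settles in a noise-dominated neighborhood of the minimizer; the talk's results concern sufficient conditions for convergence in far more general (e.g. overparametrized neural network) settings.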
About the workshop:
This talk was presented at "Mathematical Foundations of Machine Learning: PDEs, Probability, and Dynamics," held at the Centre de Recerca Matemàtica (CRM) in Barcelona, January 7-9, 2026.
About the speaker:
Eulàlia Nualart is a researcher at Universitat Pompeu Fabra and Barcelona School of Economics, working on probability theory and stochastic analysis with applications to machine learning.
More information: https://www.crm.cat/mathematical-foun...
#MachineLearning #Mathematics #AI #DeepLearning #NeuralNetworks #TheoreticalML #DataScience #AppliedMathematics #Research #AcademicTalk #CRM #Barcelona #MathematicalFoundations #ArtificialIntelligence