Easy Steps to Understand Maximum Likelihood Estimation (MLE) for Deriving Mean Squared Error Loss
Author: Dr. Data Science
Uploaded: 2022-10-02
Views: 2808
Description:
In this video, we show how to derive the Mean Squared Error (MSE) loss, or cost function, using Maximum Likelihood Estimation (MLE), a popular statistical inference technique. We set up the notation for supervised learning and introduce empirical risk minimization (ERM), the framework underlying model fitting and training in machine learning and deep learning. We then introduce the likelihood function, the log likelihood, and the negative log likelihood (NLL). We also discuss the difference between min and argmin, and work through an example applying MLE to estimate the parameter of a Bernoulli distribution, both by hand and in Python/NumPy. Finally, we state the assumptions needed to derive the MSE loss via the likelihood approach: the key assumption is that the output follows a normal distribution whose mean is given by the machine learning model (the learned mapping) and whose variance is fixed. Under this assumption, the negative log likelihood is shown to be proportional to the MSE loss.
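The Bernoulli example mentioned in the description can be sketched as follows. This is an illustrative reconstruction, not the video's exact code: the simulated coin with parameter 0.7 and the grid search are assumptions for demonstration. It also illustrates the min/argmin distinction: argmin gives the parameter value that minimizes the NLL, while min gives the NLL value itself.

```python
import numpy as np

# Hypothetical coin-flip data; the true parameter (0.7) is chosen
# for illustration only.
rng = np.random.default_rng(0)
data = rng.binomial(1, 0.7, size=1000)

# Closed-form MLE for a Bernoulli parameter: the sample mean.
theta_hat = data.mean()

# Numerical check via the negative log likelihood:
#   NLL(theta) = -sum(x*log(theta) + (1-x)*log(1-theta))
thetas = np.linspace(0.01, 0.99, 99)
k = data.sum()  # number of ones
nll = -(k * np.log(thetas) + (len(data) - k) * np.log(1 - thetas))

theta_argmin = thetas[np.argmin(nll)]  # argmin: the best parameter
nll_min = nll.min()                    # min: the NLL value attained there
```

The grid argmin agrees with the closed-form sample mean up to the grid spacing of 0.01.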
#mle #mse #empiricalrisk
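The final claim in the description, that the Gaussian NLL is proportional to the MSE (up to an additive constant) when the variance is fixed, can be verified numerically. The function names and sample values below are illustrative, not from the video:

```python
import numpy as np

def gaussian_nll(y, y_pred, sigma=1.0):
    # NLL of y under N(y_pred, sigma^2), summed over samples:
    #   NLL = (n/2)*log(2*pi*sigma^2) + sum((y - y_pred)^2) / (2*sigma^2)
    n = len(y)
    return 0.5 * n * np.log(2 * np.pi * sigma**2) \
        + ((y - y_pred) ** 2).sum() / (2 * sigma**2)

def mse(y, y_pred):
    return ((y - y_pred) ** 2).mean()

# With sigma fixed, NLL = const + (n / (2*sigma^2)) * MSE, so minimizing
# the NLL over the model's predictions is equivalent to minimizing MSE.
y = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
n = len(y)
const = 0.5 * n * np.log(2 * np.pi)
```

Since the constant does not depend on the predictions, the argmin over predictions is the same for both objectives.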