Rethinking Pre-training and Self-Training
Author: Connor Shorten
Uploaded: 2020-06-18
Views: 8186
Description:
*ERRATA* At 9:31 I called large-scale jittering "color jittering"; it isn't an operation specifically on colors.
This video explores an interesting paper from researchers at Google AI. They show that self-training outperforms both supervised and self-supervised (SimCLR) pre-training. The video explains what self-training is and how each of these methods tries to use extra data (labeled or unlabeled) for better performance on downstream tasks.
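As a rough illustration of the self-training (pseudo-labeling) loop discussed in the video, here is a minimal sketch using scikit-learn on synthetic data. The dataset, classifier, and confidence threshold are illustrative assumptions and stand in for the teacher/student detection setup used in the paper.

```python
# Minimal self-training sketch: teacher pseudo-labels unlabeled data,
# student trains on labeled + confidently pseudo-labeled data combined.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "labeled" and "unlabeled" splits (stand-ins for the real datasets).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_labeled, X_unlabeled, y_labeled, _ = train_test_split(
    X, y, train_size=0.1, random_state=0)

# 1. Train a teacher model on the small labeled set only.
teacher = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

# 2. Pseudo-label the unlabeled data, keeping only confident predictions
#    (the 0.9 threshold is an arbitrary choice for this sketch).
probs = teacher.predict_proba(X_unlabeled)
confident = probs.max(axis=1) > 0.9
pseudo_labels = probs.argmax(axis=1)[confident]

# 3. Train a student model on labeled + pseudo-labeled data combined.
X_combined = np.vstack([X_labeled, X_unlabeled[confident]])
y_combined = np.concatenate([y_labeled, pseudo_labels])
student = LogisticRegression(max_iter=1000).fit(X_combined, y_combined)

print(f"Kept {confident.sum()} pseudo-labeled examples out of {len(X_unlabeled)}")
```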
Thanks for watching! Please Subscribe!
Paper Links:
Rethinking Pre-training and Self-training: https://arxiv.org/pdf/2006.06882.pdf
OpenImages Dataset: https://storage.googleapis.com/openim...
RetinaNet: https://arxiv.org/pdf/1708.02002.pdf
Rethinking ImageNet Pre-training: https://arxiv.org/pdf/1811.08883.pdf
Image Classification State-of-the-Art: https://paperswithcode.com/sota/image...
Self-Training with Noisy Student: https://arxiv.org/pdf/1911.04252.pdf
Rotation Self-Supervised Learning: https://arxiv.org/pdf/1803.07728.pdf
POET: https://arxiv.org/pdf/1901.01753.pdf
ImageGPT: https://openai.com/blog/image-gpt/