Lab Walkthrough: Load Models & Inference with Hugging Face pipeline() — IBM Coursera | Sep2025
Author: Johnny Hung
Uploaded: 2025-09-03
Views: 50
Description:
In this hands-on lab recap, I show how to use PyTorch + Hugging Face Transformers to load pretrained models and run inference with the high-level pipeline() API. We compress ~20 lines of boilerplate into ≤5 lines per task, so you can prototype fast and verify results instantly.
What’s inside
Sentiment analysis: input any paragraph → classify as POSITIVE / NEGATIVE
Text generation: seed with “Once upon a time…” → continue the story
Language identification: feed “comment ça va” → detect the language
Masked-word fill: “The capital of France is [MASK].” → predict the missing token
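The four tasks above can be sketched in a few lines each. This is a minimal illustration, not the lab notebook itself: the sentiment pipeline falls back to its default checkpoint, and the language-identification model name is an assumption (a common community checkpoint), not one confirmed by the video.

```python
from transformers import pipeline

# Sentiment analysis: returns a label (POSITIVE/NEGATIVE) and a confidence score
sentiment = pipeline("sentiment-analysis")
print(sentiment("I really enjoyed this lab!"))

# Text generation: seed with a prompt and let the model continue the story
generator = pipeline("text-generation", model="gpt2")
print(generator("Once upon a time", max_new_tokens=20)[0]["generated_text"])

# Language identification: checkpoint name is an assumption for illustration
lang_id = pipeline("text-classification",
                   model="papluca/xlm-roberta-base-language-detection")
print(lang_id("comment ça va"))

# Masked-word fill: BERT-style models expect the [MASK] token in the input
fill = pipeline("fill-mask", model="bert-base-uncased")
print(fill("The capital of France is [MASK].")[0]["token_str"])
```

Each call returns plain Python lists/dicts, so you can inspect results immediately without any tensor handling.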
#HuggingFace #Transformers #PyTorch #NLP #MachineLearning #GenerativeAI #IBM #courserafreecourses
See the notebook collection here and try it yourself:
https://github.com/wolfmib/ja_ai_engi...
Why pipeline()?
It bundles tokenization ➜ model inference ➜ post-processing, so you can focus on ideas, not plumbing.
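Those three stages can be spelled out by hand to see what pipeline() saves you. A rough sketch of what `pipeline("sentiment-analysis")` does internally, using its default checkpoint (the exact internals differ slightly):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # default sentiment checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("I love this lab!", return_tensors="pt")  # 1. tokenization
with torch.no_grad():
    logits = model(**inputs).logits                          # 2. model inference
probs = torch.softmax(logits, dim=-1)[0]                     # 3. post-processing
label = model.config.id2label[int(probs.argmax())]
print(label, float(probs.max()))
```

All of this collapses to two lines with `pipeline("sentiment-analysis")`, which is the point of the lab.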