
Microservices with FastAPI and Docker (Python based) - Hands on Tutorial

Tags: python microservices, microservice api, fast api, docker image, docker microservice, python microservice, restful api, Dockerfile, uvicorn, onnx python, onnxruntime, post request, test api, machine learning artifacts, mlops

Author: Data Science Garage

Uploaded: 2021-09-20

Views: 8592

Description: In this tutorial you will learn about APIs and microservice management. You will learn the principles of API and microservice design for Machine Learning (ML) inference so that you can design your own ML solution.

FastAPI is a modern, fast, web framework for building APIs with Python 3.6+ based on standard Python type hints.

The following parts will be covered with this tutorial:
Introduction to APIs and microservices
REST API-based microservices
Hands-on implementation of serving an ML model as an API
Developing a microservice using Docker (you will learn how to set up your Dockerfile for your container image)
Testing the API service.

The artifacts we will use in the tutorial are:
a data scaler - it scales input data into the form the ML model (classifier) expects.
an SVC classifier - the ML model, a classifier which makes binary predictions (weather prediction).
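To make the scaler's role concrete, here is a pure-Python illustration of standard scaling; the real artifact is a fitted scaler loaded from disk, and the statistics below are made-up numbers:

```python
def standard_scale(values, mean, std):
    """Scale raw inputs to zero mean / unit variance, as a fitted scaler would."""
    return [(v - m) / s for v, m, s in zip(values, mean, std)]

# Hypothetical per-feature training statistics and one raw input sample.
train_mean = [15.0, 60.0]
train_std = [5.0, 10.0]
sample = [20.0, 80.0]

scaled = standard_scale(sample, train_mean, train_std)
# scaled == [1.0, 2.0]
```

The classifier is then applied to `scaled` rather than to the raw sample, so serving code must run the same scaling the model saw during training.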

API is the gateway that enables developers to communicate with an application. APIs enable two things:
Access to an application's data
The use of an application's functionality.

Microservices, on the other hand, are a modern way of designing and deploying applications as small, independently running services.

So in this video example we will apply the principles of APIs and microservices to develop a RESTful API service that serves the ML model. The business problem the model addresses is weather prediction. We will use the FastAPI framework to serve the model as an API and Docker to containerize the API service into a microservice.
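The containerization step might look like the following Dockerfile sketch, based on the uvicorn-gunicorn-fastapi base image mentioned in the video; the `requirements.txt` and `./app` paths are assumed file names, not the tutorial's exact layout:

```dockerfile
# Base image bundling uvicorn, gunicorn and FastAPI.
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.9

# Install Python dependencies (requirements.txt is an assumed file name).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the API code and model artifacts into the image's app directory,
# where the base image expects to find them.
COPY ./app /app
```

The image is then built and run with `docker build -t weather-api .` and `docker run -p 8000:80 weather-api` (names are illustrative).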

Some key points you should keep in mind:
A POST request is often used to create new resources; in ML applications, it is typically used to send input data to a model and receive predictions back.
To define the input features in a Python file we will use the pydantic module: https://pydantic-docs.helpmanual.io/
uvicorn is an ASGI (Asynchronous Server Gateway Interface) server implementation package (https://pypi.org/project/uvicorn/).
onnxruntime - used to deserialize and run inference on ONNX models.
uvicorn-gunicorn-fastapi image on Docker Hub: https://hub.docker.com/r/tiangolo/uvi...
Docker documentation for RUN command: https://docs.docker.com/engine/refere...
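Once the container is running, the API can be tested with a POST request. A sketch using only the Python standard library; the endpoint path, port, and payload fields are assumptions matching nothing more than the weather-prediction example:

```python
import json
from urllib import request

def post_json(url: str, payload: dict) -> dict:
    """Send a JSON POST request and decode the JSON response."""
    body = json.dumps(payload).encode("utf-8")
    req = request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example call against a locally running container (not executed here):
# post_json("http://localhost:8000/predict",
#           {"temperature": 20.0, "humidity": 80.0})
```

Tools like curl or Postman, or FastAPI's auto-generated /docs page, serve the same purpose interactively.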

Github repo for the tutorial: https://github.com/PacktPublishing/En...
Engineering MLOps book on Amazon: https://www.amazon.com/Engineering-ML...

#fastapi #microservices #api
