How to allocate LLM costs & track usage of models on Amazon Bedrock and Sagemaker Studio
Author: DoiT
Uploaded: 2024-07-01
Views: 381
Description:
AI/ML expert Eduardo Mota talks cost allocation and LLMs on Amazon Bedrock and SageMaker Studio.
He covers the following:
👉 Model invocation logging to collect metadata, requests, and responses for all model invocations on your AWS account.
👉 How model invocations are stored in S3 buckets and CloudWatch Logs.
👉 Exploring the metrics available in CloudWatch Logs, such as token counts and latency, and connecting requests to specific models and users.
👉 Tagging resources in SageMaker Studio.
👉 Demonstrating how to examine model metrics using the Chat playground in Bedrock.
👉 Getting granular model usage information using Custom models in Bedrock or SageMaker JumpStart.
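The first step Eduardo describes, turning on model invocation logging so metadata, requests, and responses land in S3 and CloudWatch Logs, can be sketched with boto3. This is a hedged sketch, not the exact setup from the video: the bucket name, key prefix, log group, and IAM role ARN below are hypothetical placeholders, and the actual call (commented out) requires valid AWS credentials and a role that Bedrock can assume.

```python
# Sketch: enabling Bedrock model invocation logging.
# All names and ARNs below are hypothetical placeholders.

def build_logging_config(bucket, prefix, log_group, role_arn):
    """Build the loggingConfig payload for
    bedrock.put_model_invocation_logging_configuration()."""
    return {
        "cloudWatchConfig": {
            "logGroupName": log_group,
            "roleArn": role_arn,
        },
        "s3Config": {
            "bucketName": bucket,
            "keyPrefix": prefix,
        },
        # Deliver full prompt/completion text, not just metadata.
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }

config = build_logging_config(
    bucket="my-bedrock-logs",                # hypothetical bucket
    prefix="invocations/",
    log_group="/bedrock/invocation-logs",    # hypothetical log group
    role_arn="arn:aws:iam::123456789012:role/BedrockLoggingRole",
)

# With credentials configured, the call would look like:
# import boto3
# bedrock = boto3.client("bedrock")
# bedrock.put_model_invocation_logging_configuration(loggingConfig=config)
```

Once enabled, every invocation in the account is logged to both destinations, which is what makes the per-request metadata available for cost allocation later.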
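The token and latency metrics mentioned above are what let you attach a dollar figure to usage. A minimal sketch of that arithmetic, assuming per-1K-token pricing (the prices and model ID below are hypothetical, and the CloudWatch query that would supply the token sums is shown only as a commented outline):

```python
# Sketch: turning Bedrock token metrics into a cost estimate.
# Prices and the model ID are hypothetical placeholders.

def estimate_cost(input_tokens, output_tokens,
                  input_price_per_1k, output_price_per_1k):
    """Estimate invocation cost from token counts and per-1K-token prices."""
    return (input_tokens / 1000) * input_price_per_1k \
         + (output_tokens / 1000) * output_price_per_1k

# The token sums would come from CloudWatch, roughly along these lines:
# import boto3
# cw = boto3.client("cloudwatch")
# cw.get_metric_statistics(
#     Namespace="AWS/Bedrock",
#     MetricName="InputTokenCount",   # OutputTokenCount for completions
#     Dimensions=[{"Name": "ModelId", "Value": "<your-model-id>"}],
#     StartTime=..., EndTime=..., Period=86400, Statistics=["Sum"],
# )

# 12K input tokens at $0.003/1K plus 4K output tokens at $0.015/1K:
cost = estimate_cost(12_000, 4_000, 0.003, 0.015)
print(f"${cost:.4f}")  # prints $0.0960
```

Grouping the same calculation by tag or by user (recovered from the invocation logs) is what turns raw usage into an allocation report.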
This clip comes from an AMA we did on implementing LLMs on AWS. Check out the other answers to questions asked in the playlist: Ask DoiT Anything: Implementing LLMs on AWS
Need personalized help with implementing LLMs on AWS, or with your cloud infrastructure more generally? Let's talk about how DoiT can help your team optimize performance, reduce costs, and accelerate your cloud journey. Reach out here: https://www.doit.com/contact/
#aws #llm #generativeai #cloudcomputing #finops