Deploy large language model locally | Private LLMs with Langchain and HuggingFace API
Author: Nechu_ENG
Uploaded: 2023-06-14
Views: 376
Description:
Notebook: https://colab.research.google.com/dri...
🔴 Privacy has become a big problem when we work with models like ChatGPT.
Having to hand over personal data holds back many users who would rather protect their information than take full advantage of language models.
🟢 What if we could protect our data and still get the full value of AI?
🤖 In my last video we deployed a local LLM (large language model), ensuring that our information never leaves our computer.
Are we limited by hardware?
Not necessarily!
In this video we explore a second option: HuggingFace and its Inference API service.
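As a taste of that second option, here is a minimal sketch (not the notebook's exact code) of calling the HuggingFace Inference API over HTTP: the model runs on HuggingFace's servers, so your own hardware is no longer the bottleneck. It assumes a HuggingFace access token is stored in an HF_TOKEN environment variable; the model id and prompt are only examples.

```python
import os
import requests

# Any text-generation-capable model hosted on the Hub can go here.
API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-xxl"
headers = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

# The Inference API takes a JSON payload with an "inputs" field.
payload = {"inputs": "Why is keeping data on your own machine good for privacy?"}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # typically a list like [{"generated_text": "..."}]
```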
📚 Chapters:
00:00 Introduction to the problem
00:40 Run LLM locally with the Transformers library (HuggingFace); see the code sketch after this list
02:56 Run LLM locally with Langchain
04:43 HuggingFace API
07:13 HuggingFace API with Langchain
08:43 Next steps
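The sketch below walks the same path as the chapters: load a model locally with the Transformers pipeline, wrap it for Langchain, and finally swap in the HuggingFace Inference API through Langchain when local hardware is not enough. It assumes the transformers and langchain packages are installed and a token in the HUGGINGFACEHUB_API_TOKEN environment variable; the model names and prompts are illustrative, not necessarily those used in the notebook.

```python
import os
from transformers import pipeline
from langchain.llms import HuggingFacePipeline, HuggingFaceHub
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# 00:40 - run the LLM locally with the Transformers pipeline.
# The weights are downloaded once; after that, nothing leaves your machine.
local_pipe = pipeline(
    "text2text-generation",
    model="google/flan-t5-base",  # small enough to run on a laptop CPU
    max_new_tokens=100,
)
print(local_pipe("Summarize: local models keep your data private.")[0]["generated_text"])

# 02:56 - wrap the same local pipeline so Langchain can use it.
local_llm = HuggingFacePipeline(pipeline=local_pipe)
prompt = PromptTemplate.from_template("Answer briefly: {question}")
print(LLMChain(llm=local_llm, prompt=prompt).run(question="Why run an LLM locally?"))

# 07:13 - use the HuggingFace Inference API through Langchain instead,
# letting HuggingFace's servers host a larger model than your hardware allows.
remote_llm = HuggingFaceHub(
    repo_id="google/flan-t5-xxl",  # example model hosted on the Hub
    huggingfacehub_api_token=os.environ["HUGGINGFACEHUB_API_TOKEN"],
    model_kwargs={"temperature": 0.7},
)
print(LLMChain(llm=remote_llm, prompt=prompt).run(question="Why use the Inference API?"))
```

The trade-off the video describes is visible in the code: the local pipeline keeps data on-device but is bounded by your hardware, while the HuggingFaceHub wrapper sends prompts to HuggingFace's servers in exchange for access to bigger models.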
📖 Other articles: / dbenzaquenm
☕ To chat or have a coffee:
/ daniel-benzaquen-moreno