Spring AI Series 5: Run With a Local LLM With Ollama
Author: DevXplaining
Uploaded: 2024-12-09
Views: 1272
Description:
Hi and welcome back to the DevXplaining channel! This video walks through how to set up Spring AI with Ollama to run local models. Using local LLM models lets you take full ownership of your data, avoid subscriptions to third-party services, and rely only on your local resources. It also lets you optimize what models you run and how, and tie it all together with other Spring AI and Spring Boot features. We'll build a RESTful JSON API controller and have it invoke a local Ollama server running the Mistral model. It's surprisingly easy! (A minimal code sketch follows the links below.)
As always, I appreciate any likes and feedback; subscribe to my channel for more!
Here are the links:
https://ollama.com/
https://start.spring.io/
https://docs.spring.io/spring-ai/refe...
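For reference, here's a minimal sketch of the kind of controller the video builds. It assumes a Spring Boot project generated via start.spring.io with the Spring AI Ollama starter on the classpath, an Ollama server running locally on its default port (11434), and the Mistral model already pulled with `ollama pull mistral`. The endpoint path, the `ChatReply` record, and the property values are illustrative, not taken from the video:

```java
// application.properties (assumed values for a local Ollama setup):
//   spring.ai.ollama.base-url=http://localhost:11434
//   spring.ai.ollama.chat.options.model=mistral

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ChatController {

    private final ChatClient chatClient;

    // Spring AI auto-configures a ChatClient.Builder backed by the
    // Ollama starter, so we only need to build a client from it.
    public ChatController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    // Hypothetical endpoint: GET /chat?message=... returns a small JSON body.
    @GetMapping("/chat")
    public ChatReply chat(@RequestParam String message) {
        String answer = chatClient.prompt()
                .user(message)   // the user's message becomes the prompt
                .call()          // synchronous call to the local Ollama server
                .content();      // extract the model's text response
        return new ChatReply(answer);
    }

    // Simple record so Spring serializes the response as JSON.
    public record ChatReply(String answer) {}
}
```

Once the app is running, something like `curl "http://localhost:8080/chat?message=Hello"` should return a JSON response generated by the locally running Mistral model, with no third-party API keys involved.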