Run LLM AI Locally on 2018 Dell Latitude 7290 w/ Intel Core i5-8350U + Intel UHD 620 - CPU GPU Test
Author: Flyandance
Uploaded: 2025-10-23
Views: 18
Description:
Average 10 tok/s.
When running an LLM locally on your own machine, it helps to understand how an LLM works in order to get the best results, and this video explains some of the concepts. Put simply, the parameter count and quantization level determine how efficiently a model can run on your hardware. Parameter count matters, but newer, smaller models keep improving and can outperform older models with more parameters. Within the same model, the quantization level determines its file size and processing speed. A 1B-parameter model at q4 quantization is generally recommended for less powerful machines.
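As a rough rule of thumb, a model's file size scales with parameter count times bits per weight. The sketch below is a hypothetical helper (not from the video) that applies `size ≈ params × bits / 8`, ignoring metadata and non-quantized layers; the ~4.5 bits/weight figure for q4 is an assumption reflecting common GGUF q4 variants.

```python
# Rough, back-of-the-envelope estimate of a quantized model's file size.
# Assumes size ≈ parameters * bits_per_weight / 8; real files are slightly
# larger due to metadata and layers kept at higher precision.
def approx_model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal gigabytes

# A 1B model at q4 (~4.5 bits/weight in practice) fits easily in 8 GB RAM:
print(round(approx_model_size_gb(1, 4.5), 2))  # ~0.56 GB
# A 7B model at the same quantization is noticeably heavier:
print(round(approx_model_size_gb(7, 4.5), 2))  # ~3.94 GB
```

This is why 1B/q4 is a reasonable fit for a machine like this Latitude 7290: the weights comfortably fit in system RAM, and the smaller footprint also means fewer bytes read per token, which helps CPU-bound generation speed.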