The T5 Text Encoder Shoot-Out (ComfyUI Wrapper Workflow)
Author: Mark DK Berry
Uploaded: 2025-09-18
Views: 312
Description:
(ADDENDUM: 25 September 2025. Since posting this video, a new T5 Text Encode node has been brought to my attention; details can be found at https://markdkberry.com/workflows/res... )
I recently received information that using "umt5-xxl-encoder-Q6_K.gguf" in my ComfyUI workflows might have a heavier memory load than the "umt5-xxl-enc-bf16.safetensors" that most people go with.
Here is the comment that triggered this investigation - "that still reserves more RAM than the cached node would, because it doesn't reserve anything. it (your way) still offloads it to RAM when it's done, the Cached text encoder (bf16) in the wrapper removes the whole model, so it has zero memory impact"
I'd originally swapped the T5 to the GGUF model in an attempt to solve memory issues when Wan 2.2 first came out, but admittedly hadn't looked at it again since.
Because this information came from a highly reputable source, I thought I'd better investigate. So, back to the lab to put them head to head in a shoot-out, and may the best T5 text encoder win.
This video shares the results of that shoot-out.
Follow my YT channel and website to be kept up to date with the latest AI projects and workflow discoveries as I make them.
🌐 More from me:
https://www.markdkberry.com
https://markdkberry.bandcamp.com/
IG: @markdkberry
X: @markdkberry
If you have any questions ask in the comment section, or find me on social media here https://markdkberry.com/contact/
#comfyui #stablediffusion #lowvram #memory #ram #vram #textencoder #t5