Multiple gpu's #162


Open · SuperComboGamer opened this issue Mar 19, 2023 · 15 comments
Labels: Feature (A new feature to add to ComfyUI.)

@SuperComboGamer

Is there any way to use multiple GPUs for the same image, or to spread the load of large batches across multiple GPUs?

@78Alpha commented Mar 19, 2023

It looks like it uses accelerate, so you could try

accelerate config

in the venv or environment and set up multi-GPU from there.
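
For context, the generic accelerate flow looks like this. A minimal sketch, assuming ComfyUI's standard `main.py` entry point; whether accelerate actually parallelizes ComfyUI this way is untested, per the comment above:

```bash
# Run inside ComfyUI's venv; pick "multi-GPU" at the interactive prompts.
accelerate config

# Launch through accelerate instead of plain python.
# (Speculative: accelerate can only distribute work the app itself
# hands to it, so this may change nothing for ComfyUI.)
accelerate launch main.py
```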

@comfyanonymous
Owner

Right now accelerate is only enabled in --lowvram mode.

The plan is to add an option to set the GPU comfyui will run on.

This is going to be further in the future but I'm planning on eventually adding support for connecting the UI to multiple comfyui backends at the same time so you can queue prompts on multiple GPUs/machines over the network.
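
Until a built-in option exists, one stock workaround is to restrict a process to a single GPU with CUDA's standard environment variable (plain PyTorch/CUDA behavior, not a ComfyUI feature):

```bash
# Expose only the second physical GPU to this process;
# PyTorch then sees it as cuda:0.
CUDA_VISIBLE_DEVICES=1 python main.py
```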

@SuperComboGamer
Author

I figured out how to use multiple GPUs for separate images on a different UI, but I want to be able to use two GPUs for one image at a time.
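
The "separate images" setup can be approximated by running one ComfyUI instance per GPU. A sketch, assuming the `--port` launch flag; each instance still renders any given image on a single GPU:

```bash
# One independent ComfyUI instance per GPU, each on its own port.
CUDA_VISIBLE_DEVICES=0 python main.py --port 8188 &
CUDA_VISIBLE_DEVICES=1 python main.py --port 8189 &
```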

@WASasquatch
Contributor

> Right now accelerate is only enabled in --lowvram mode.
>
> The plan is to add an option to set the GPU comfyui will run on.
>
> This is going to be further in the future but I'm planning on eventually adding support for connecting the UI to multiple comfyui backends at the same time so you can queue prompts on multiple GPUs/machines over the network.

Would be amazing for running Comfy on farms, and remoting it in for jobs.

@s-marcelle

> Right now accelerate is only enabled in --lowvram mode.
>
> The plan is to add an option to set the GPU comfyui will run on.
>
> This is going to be further in the future but I'm planning on eventually adding support for connecting the UI to multiple comfyui backends at the same time so you can queue prompts on multiple GPUs/machines over the network.

Curious how far in the future the ability to choose a GPU is, because I'm trying my best to get it running on my system's built-in GPU. My inner noob crashed while doing so...

I honestly can't wait. And again, THANK YOU FOR THIS GREAT PIECE OF WORK.

@SuperComboGamer
Author

> Right now accelerate is only enabled in --lowvram mode.
>
> The plan is to add an option to set the GPU comfyui will run on.
>
> This is going to be further in the future but I'm planning on eventually adding support for connecting the UI to multiple comfyui backends at the same time so you can queue prompts on multiple GPUs/machines over the network.

> Curious how far in the future the ability to choose a GPU is, because I'm trying my best to get it running on my system's built-in GPU. My inner noob crashed while doing so...
>
> I honestly can't wait. And again, THANK YOU FOR THIS GREAT PIECE OF WORK.

If you use Easy Diffusion, it will let you use more than one GPU for different images at a time, but not two GPUs for one image at the same time. I have gone through around 100 UIs for Stable Diffusion and found that ComfyUI is the fastest one, so you could use Easy Diffusion to create a huge batch, then go to ComfyUI to run a lot of steps on a single image.

@WASasquatch
Contributor

Did I add multi-GPU support for Easy Diffusion? I can't even remember anymore.


@unphased
Contributor

Can someone clarify whether it's possible to "send" workflows defined by ComfyUI into Easy Diffusion to leverage its multi-GPU capability?

@kxbin commented Nov 4, 2023

I hope someone can provide guidance on how to develop this feature.

@dnalbach

Being able to use multiple GPUs would really help in the future with Stable Video Diffusion and whatever comes later. SVD uses dramatically more memory.

@rrfaria commented Apr 10, 2024

robinjhuang added the Feature label on Jul 3, 2024
@robinjhuang
Collaborator

Have you guys tried using Swarm to achieve this?
https://github.com/mcmonkeyprojects/SwarmUI

@bedovyy commented Jul 20, 2024

HF diffusers can run multiple GPUs in parallel using distrifuser or PipeFusion:
https://github.com/mit-han-lab/distrifuser
https://github.com/PipeFusion/PipeFusion

I have tested distrifuser, and the results were quite good.
(I used run_sdxl.py --mode benchmark, which I believe generates one image with 50 steps.)

| 1x 3090 | 2x 3090 (PCIe x8/x8) | 1x 4090 | 4090 + 3090 (PCIe x16/x4) |
| --- | --- | --- | --- |
| 13.88824 s | 7.93942 s | 6.82159 s | 8.04754 s |

Is there a plan to support something like this?
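
For anyone trying to reproduce the numbers above: distrifuser splits a single image across GPUs ("displaced patch parallelism") and is launched through torchrun. A sketch from memory of the distrifuser repo; the exact script path and flags may differ:

```bash
# Two GPUs cooperating on a single SDXL image via distrifuser's
# benchmark script (one process per GPU).
torchrun --nproc_per_node=2 scripts/run_sdxl.py --mode benchmark
```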

@yggdrasil75

Now, with Flux being massive, I fear that larger models will become more common. My 3090 can't handle Flux alone; it has to offload into system RAM or to disk. It would be nice to be able to split the workflow onto my P40 so the model isn't constantly loaded and unloaded on the main 3090.
The P40 will slow the 3090 down, but not nearly as much as system RAM or swap space does, and it can at least process something while swap would just sit there waiting.

@yincangshiwei

> HF diffusers can run multiple GPUs in parallel using distrifuser or PipeFusion. […] I have tested distrifuser, and the results were quite good.

These results look great. I hope ComfyUI can integrate something like this as well.

comfyanonymous pushed a commit that referenced this issue May 6, 2025