Checklist
• The issue exists on a clean installation of Fooocus
• The issue exists in the current version of Fooocus
• The issue has not been reported before recently
• The issue has been reported before but has not been fixed yet
What happened?
I am experiencing significant performance issues while running Fooocus on my Mac Mini M2. The application seems to rely heavily on RAM instead of utilizing the GPU (MPS), resulting in very slow image generation compared to other tools like AUTOMATIC1111’s Web UI.
Steps to reproduce the problem
1. Set up Fooocus environment on a Mac Mini M2.
2. Run the following command:
   python entry_with_update.py --disable-offload-from-vram --always-gpu --attention-split --all-in-fp16
3. Observe the resource usage and performance during image generation.
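Not part of the original report, but a quick diagnostic worth noting: if the installed wheel is a CPU-only build (as the "2.0.0+cpu" version string in the environment below suggests), the MPS backend can never be used, no matter which flags are passed. One way to confirm what the installed PyTorch can actually see:

```shell
# Prints the installed PyTorch version and whether the MPS backend is usable
python -c "import torch; print(torch.__version__, torch.backends.mps.is_available())"
```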
Expected Behavior:
Fooocus should efficiently utilize the GPU (MPS) for image generation, leading to faster performance and reduced RAM usage.
Actual Behavior:
Fooocus primarily uses RAM, causing slow performance and high memory consumption. The GPU (MPS) is not being utilized effectively.
Environment:
• Device: Mac Mini M2
• OS: macOS 13.4
• Python: 3.10.12
• Fooocus Version: 2.3.1
• PyTorch Version: 2.0.0+cpu
What should have happened?
• Improve MPS support for better GPU utilization.
What browsers do you use to access Fooocus?
Apple Safari
Where are you running Fooocus?
Locally
What operating system are you using?
macOS 13.4
Console logs
Unfortunately, I do not have specific console logs. However, here are the steps I followed:
1. Activated the Fooocus environment using conda activate fooocus.
2. Ran the command:
python entry_with_update.py --disable-offload-from-vram --always-gpu --attention-split --all-in-fp16
3. Observed high RAM usage and slow performance.
Additional information
Potential Solutions:
• Improve MPS support for better GPU utilization.
• Optimize memory management to reduce RAM usage.
Thank you for your attention to this issue. Please let me know if any further information is needed.
PyTorch is still missing support for quite a few functions in the MPS backend; see tracking issue pytorch/pytorch#77764.
This is also why Fooocus does not offer official support for macOS. Please discuss this issue in #129, as described in the readme. You might find a solution there.
Side note: running python entry_with_update.py --disable-offload-from-vram works flawlessly for me on a MacBook M1 Max 32GB (Sonoma 14.4.1 and 14.5), but this might not be the case for you, especially with less RAM.
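To illustrate the point about backend support, here is a minimal sketch (not code from Fooocus itself; `pick_device` is a hypothetical helper) of how a compute device is typically selected in PyTorch, falling back to CPU when MPS is unavailable or not compiled into the wheel:

```python
import torch

def pick_device() -> torch.device:
    """Pick the best available compute device, preferring Apple's MPS backend."""
    # mps attribute may be absent on older PyTorch builds, so probe defensively
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and torch.backends.mps.is_available():
        return torch.device("mps")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
print(f"PyTorch {torch.__version__} will run on: {device}")
```

On a CPU-only wheel this prints a cpu device, which would match the behavior described in the report.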