mps not supported now on 2.1.66 #690
Is that showing if you do not use FreeU, or do you always see this?
If I use FreeU I get this:
macos 14.0 [Fooocus Model Management] Moving model(s) has taken 61.51 seconds
App started successful. Use the app with http://127.0.0.1:7860/ or 127.0.0.1:7860
Same here :( I want to use Fooocus on my MacBook M1 so bad :)
yes - this is so sad :( @lllyasviel heeeeeeeeeelp
bump
Same problem here, please pay attention to it.
bump
same.
Same here, Fooocus v2.1.8241
bump
bump
[Fooocus] Encoding positive #1 ...
The same error
The same speed. Bump.
If my understanding is correct, it looks like a compatibility issue between PyTorch and Apple Silicon. Does anyone know if there is a way to adjust the parameters in Fooocus to avoid using 'aten::std_mean.correction' and use some other calculation that is supported instead?
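For what it's worth, a workaround of the kind being asked about would look roughly like the sketch below: build the mean and standard deviation from element-wise ops that the MPS backend does implement, instead of calling torch.std_mean directly. The function name and structure are hypothetical, not Fooocus code.

```python
import torch

def std_mean_mps_safe(x: torch.Tensor, dim: int, correction: int = 1, keepdim: bool = False):
    # Compute mean and std from ops that run natively on MPS (mean, sub, pow, sum, sqrt),
    # avoiding the aten::std_mean.correction CPU fallback. Hypothetical helper, not Fooocus code.
    mean = x.mean(dim=dim, keepdim=True)
    n = x.shape[dim]
    var = (x - mean).pow(2).sum(dim=dim, keepdim=keepdim) / max(n - correction, 1)
    if not keepdim:
        mean = mean.squeeze(dim)
    return var.sqrt(), mean

# Quick numerical check against the reference op (on CPU, where std_mean is supported).
x = torch.randn(4, 8)
ref_std, ref_mean = torch.std_mean(x, dim=1, correction=1)
std, mean = std_mean_mps_safe(x, dim=1, correction=1)
assert torch.allclose(std, ref_std) and torch.allclose(mean, ref_mean)
```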
Bump, I'd love to get an update on that.
any update?
any update?
bumpedy bump bump - can we haz some mac arm love plz
UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1701416305940/work/aten/src/ATen/mps/MPSFallback.mm:13.)
same here
Same on Intel iMac 2020 with AMD Graphics?
MacBook M3 Pro suffers from the same problem, so I guess the whole lineup of Apple Silicon CPUs is affected.
Same on MacBook Pro Apple M2 Pro, macOS Sonoma Version 14.1.
Same issue! Bump
Same |
same |
same |
bump for visibility!
The most interesting thing is that PyTorch works correctly. The test script from the Apple site produces the correct result: tensor([1.], device='mps:0')
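For reference, the test script mentioned here is essentially the MPS verification snippet from Apple's PyTorch page; the version below is reproduced from memory, so treat it as a sketch rather than the exact script.

```python
import torch

# Minimal check that the Metal (MPS) backend is available and usable.
if torch.backends.mps.is_available():
    x = torch.ones(1, device="mps")
    print(x)  # expected: tensor([1.], device='mps:0')
else:
    print("MPS device not found.")
```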
up
Experiencing the same "Device mps:0 does not support the torch.fft functions used in the FreeU node, switching to CPU" even though a test script output indicates Metal is installed with GPU support (tensor([1.], device='mps:0')). I'm running ComfyUI on an M2 MBP with 96GB.
Unfortunately, implementing fft on the MPS backend is pretty difficult. I think the reason no one has taken up adding fft support to the MPS backend is a lack of support from Apple. I could be wrong about this, but my understanding is that CUDA has FFT built in or available via a library, whereas the Fourier-transform functionality the relevant Apple libraries have for the M-family GPUs is relatively limited as of now, so an implementation of this in PyTorch might require implementing a full or near-full custom FFT algorithm. I don't see why it couldn't be done, but it'll take a hero.
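For readers wondering what "switching to CPU" means in practice, the fallback amounts to something like the sketch below: do the FFT round-trip on the CPU and move only the real-valued result back to the original device. This is illustrative, with made-up names, not the actual FreeU code in ComfyUI or Fooocus.

```python
import torch

def fourier_filter_cpu_fallback(x: torch.Tensor, scale: float, threshold: int = 1) -> torch.Tensor:
    # torch.fft is not implemented for the MPS backend, so run the transform on the CPU.
    device = x.device
    x_cpu = x.float().cpu() if device.type == "mps" else x
    freq = torch.fft.fftshift(torch.fft.fftn(x_cpu, dim=(-2, -1)), dim=(-2, -1))
    h, w = freq.shape[-2:]
    # Scale the low-frequency band around the centre of the spectrum (FreeU-style).
    mask = torch.ones_like(freq.real)
    mask[..., h // 2 - threshold:h // 2 + threshold, w // 2 - threshold:w // 2 + threshold] = scale
    freq = freq * mask
    out = torch.fft.ifftn(torch.fft.ifftshift(freq, dim=(-2, -1)), dim=(-2, -1)).real
    return out.to(device)
```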
@igm503 I wonder if VkFFT would fit the bill?
On the latest version of Fooocus performance improved a lot. I'm running with:
as I have an M1 with 64GB. My iterations with LCM are ~1.25 per second for a 768x1344 image. Non-LCM renders take roughly double the time.
my result is - UserWarning: The operator 'aten::std_mean.correction' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:13.)
That's correct, hence the fallback variable. I'm running an M1 Max on Sonoma (macOS 14). If you have an older OS or a slower CPU it might take longer. If you're on a MacBook, make sure you're not in energy-saving mode, and check whether you overwrite the sampler in 'Developer Debug Mode': samplers are slower or faster depending on which one you're using.
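The "fallback variable" mentioned above presumably refers to PYTORCH_ENABLE_MPS_FALLBACK. A minimal sketch of how it is used (Fooocus's own launch code may already set this for you):

```python
import os

# Must be set before torch is imported: ops that are not implemented for MPS
# (such as aten::std_mean.correction) then fall back to the CPU with a warning
# instead of raising an error.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

print(torch.backends.mps.is_available())
```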
M2 Pro on Sonoma )
I'm getting this exact error on my M3 Pro on Sonoma 14.2.1
Would need to look into it, but adding aten::std_mean to the MPS backend doesn't seem too difficult, if it isn't already there in a nightly.
I'd be happy to see someone do it. It's not difficult for those who know how.
up
Same here on Mac mini M2 / Sonoma 14.2.1
FYI, the warning about CPU fallback for aten::std_mean isn't an error; if that's all you're seeing, then the program is running fine, just slower than it would be if that op were implemented for the PyTorch MPS backend. I'm going to check it out tomorrow and see if I have time to add support for it, or whether it's already supported in a nightly. Also FYI, if y'all are still encountering problems with torch.fft support on the MPS device, there's a pull request in the pytorch repo right now that will add more support for fft functions on the MacBooks, so the next pytorch nightly releases might be worth trying.
aten::std_mean is indeed in the latest nightly version of pytorch. Not sure when it'll make it into a release version.
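A quick way to check whether the PyTorch build you have installed already runs this op natively on MPS (a sketch; run it with the fallback variable unset so an unsupported op raises instead of silently using the CPU):

```python
import torch

x = torch.randn(16, device="mps")
try:
    std, mean = torch.std_mean(x, correction=1)
    print("aten::std_mean ran natively on", std.device)
except NotImplementedError:
    print("std_mean is not implemented for MPS in this build; try a newer nightly")
```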
You're absolutely right, but 30+ minutes per image on an M2/16GB is not really usable.
same
any news?!
@Gitterman69 in general the MPS backend is supported (beta), but not all functions are optimized. Please find the instructions on how to install Fooocus for macOS / MPS backend in https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#mac and discuss problems in #129
The operator is not yet supported by Apple, that's all. You can tinker with it as much as you want.
Device mps:0 does not support the torch.fft functions used in the FreeU node, switching to CPU.
How can I get full MPS support on my Apple Silicon Mac?