
Error: You need to compile with MKL in order to use the CPU version #653

Closed
aije-eg opened this issue May 13, 2020 · 11 comments

aije-eg commented May 13, 2020

Hello, I get an error when running marian-decoder on a pretrained model on macOS. The same command runs successfully on Ubuntu 18.04, but I would like to run it on my own machine. Does Marian support macOS? I could not find any documentation on running the CPU version on macOS... Thank you very much.

[screenshot: "Error: You need to compile with MKL in order to use the CPU version"]

aije-eg commented May 13, 2020

Installing OpenBLAS (`brew install openblas`) and disabling MKL with the corresponding `-D` flag when running CMake solves this.
I will close this now.
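For reference, a minimal sketch of the full fix on macOS. The CMake flag names (`COMPILE_CUDA`, `USE_MKL`) are assumptions based on marian-dev's build options and may differ in your version; verify against the CMakeLists.txt of your checkout:

```shell
# install OpenBLAS (and CMake) via Homebrew
brew install openblas cmake

# configure Marian without CUDA and without MKL, pointing CMake at the
# Homebrew OpenBLAS prefix (flag names are assumptions; check CMakeLists.txt)
cd marian && mkdir -p build && cd build
cmake .. -DCOMPILE_CUDA=off -DUSE_MKL=off \
      -DCMAKE_PREFIX_PATH="$(brew --prefix openblas)"
make -j4
```

`CMAKE_PREFIX_PATH` is the standard CMake mechanism for finding keg-only Homebrew libraries such as OpenBLAS, which are not linked into the default search paths.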

aije-eg closed this as completed May 13, 2020
kpu commented May 13, 2020

Your performance will be bad though...

aije-eg commented May 13, 2020

Yes, it is. Is the performance bad because of OpenBLAS, or because it is running in CPU mode on a Mac? MKL does not seem to work well for me.

kpu commented May 13, 2020

OpenBLAS performance is bad. If you're looking for an open-source alternative for fp32 GEMM that's only 1-3x slower than MKL, consider https://github.com/oneapi-src/oneDNN.

emjotde commented May 13, 2020

The other thing is that all the BLAS implementations want to run multi-threaded on all available cores while apparently being terrible at multi-threading. It is usually worth testing whether a single thread or a few threads work better:

OMP_NUM_THREADS=1 ./marian-decoder ...
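The single command above can be extended into a small sweep to find the best thread count on a given machine (a sketch: the decoder invocation is commented out, and `decoder.yml`/`input.txt` are placeholder paths, not files from this thread):

```shell
# sweep a few thread counts; uncomment the decoder line to actually measure.
# decoder.yml and input.txt are placeholders for your own config and test set.
for t in 1 2 4 8; do
  echo "testing OMP_NUM_THREADS=$t"
  # time env OMP_NUM_THREADS=$t ./marian-decoder -c decoder.yml < input.txt > /dev/null
done
```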

kpu commented May 13, 2020

The main issue is that BLAS developers like to benchmark on either made-up matrix sizes or examples from vision, which typically have much larger matrices. See also: apache/mxnet#17980
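To make the size gap concrete, here is a rough FLOP count for a single GEMM under hypothetical shapes (the dimensions below are illustrative assumptions, not measurements from Marian or MXNet):

```shell
# FLOPs for C = A*B with A (MxK) and B (KxN): roughly 2*M*N*K
gemm_flops() { echo $(( 2 * $1 * $2 * $3 )); }

# hypothetical NMT decode-step GEMM: M = batch*beam = 8, K = N = 512
gemm_flops 8 512 512      # → 4194304: tiny M, little work per call

# hypothetical vision GEMM: M = 56*56 = 3136 positions, K = 576, N = 128
gemm_flops 3136 576 128   # → 462422016: ~100x more work, easier to parallelize
```

A library tuned on the large, square-ish second case can easily look fast in benchmarks yet lose badly on the skinny per-step matrices of beam-search decoding.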

aije-eg commented May 13, 2020

I will look into the link kpu sent. I tried single threading (as per emjotde's comment) and it does not do well either; maybe I will play around with the thread counts. Also, does this mean I can't compile with MKL on my Mac?

aije-eg commented May 13, 2020

I may have to watch a video; I don't really understand the documentation there. I installed MKL and followed a tutorial, and the build still would not find it, so I might have broken something.

kpu commented May 13, 2020

Your goal is just running a pre-trained model, right? You might want to wait for #595; then the vast majority of compute on CPU can be done in integers.

aije-eg commented May 13, 2020

Yes, just to run pre-trained models. It runs properly with marian-server on Ubuntu, but I want to take my already-trained model and run it on the Mac using marian-decoder... Okay, I will wait for that release!
