Greentea LibDNN benchmarks #106

Closed
naibaf7 wants to merge 2 commits
Conversation


naibaf7 commented on May 25, 2016

This PR enables LibDNN as the default convolution engine in Greentea benchmarks.

Note that LibDNN is available for both CUDA and OpenCL, and I also enabled it as the default in the Makefile.
It is important to select the correct GPU: in the benchmark scripts I have set GPU 1 for OpenCL and GPU 0 for CUDA. If the system has more than one CUDA GPU, the OpenCL GPU ID has to be changed to the first ID after the CUDA devices, i.e. the number of CUDA GPUs (e.g. 2 if 2 CUDA GPUs are present). A minimal sketch of this selection is shown below.
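For illustration only, here is a minimal pycaffe sketch of how the device ID could be derived, assuming the enumeration order described above (CUDA devices first, then OpenCL devices). The constant names are placeholders and not part of this PR; the actual benchmark scripts pass the ID on the command line.

```python
# Hypothetical device selection for the Greentea/LibDNN benchmarks.
# Assumption: devices are enumerated with all CUDA GPUs first,
# followed by the OpenCL devices.
import caffe

NUM_CUDA_GPUS = 1                   # assumption: one CUDA-capable GPU in the system

CUDA_DEVICE_ID = 0                  # first CUDA device
OPENCL_DEVICE_ID = NUM_CUDA_GPUS    # first OpenCL device comes right after the CUDA devices

caffe.set_mode_gpu()
caffe.set_device(OPENCL_DEVICE_ID)  # pass CUDA_DEVICE_ID instead to benchmark the CUDA path
```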

@soumith
Could you do me a favor and benchmark this on both CUDA and OpenCL? It would help a lot in the further development of the convolution engine.

It is not yet autotuned for the Titan X, but it should still give a hint of the performance. If the OpenCL score differs greatly from the CUDA score, that is most likely due to the slow ViennaCL operations used for weight updates and auxiliary math; CLBlast and clBLAS would do better, but they take more time to compile.

Thanks :)

@naibaf7 naibaf7 mentioned this pull request May 26, 2016
@naibaf7 naibaf7 closed this Jan 10, 2023