MXNET R, GPU speed-up much less for regression example than classification #5052
My results with
@matt32106 @khalida Run1: Run2: This is the result obtained on a Windows Server EC2 instance of type p2.xlarge (Nvidia Tesla K80). It doesn't seem like we have a benchmark to compare performance against, but the numbers can depend heavily on GPU/CPU/OS, system load, and other factors such as cold start. I am trying to get results on a few more configurations to analyze the numbers.
@khalida Please try to run the examples again and check the speed-up. In my opinion, you are seeing a cold-start effect, which is why the classification CPU time is significantly higher than the other numbers. I faced a similar issue. When I compared again, I didn't find much difference in speed-up between the classification and regression examples when running on a Windows EC2 server.
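One way to rule out cold start is to time a second run after a throwaway warm-up run, so that one-time GPU and context initialization is not counted. A minimal sketch, assuming a hypothetical helper `train_once(ctx)` that wraps whatever training call the example uses:

```r
library(mxnet)

# train_once(ctx) is a hypothetical stand-in for the example's training call
# (e.g. a wrapper around mx.model.FeedForward.create) on the given device.
invisible(train_once(mx.gpu()))                 # throwaway warm-up: absorbs one-time GPU start-up cost
gpu.time <- system.time(train_once(mx.gpu()))   # time a second, "warm" GPU run
cpu.time <- system.time(train_once(mx.cpu()))   # time the same training on the CPU

cat("GPU speed-up:", cpu.time["elapsed"] / gpu.time["elapsed"], "x\n")
```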
@sandeep-krishnamurthy Could you please close this issue, as the query has been answered. @khalida Please feel free to reopen if you have more questions or if it was closed in error.
In the example code below I run simple classification and regression examples with MXNet from R. In both examples I first train on the CPU and then on the GPU.
In the classification example I get roughly a 32x speed-up when using the GPU, but the speed-up for the regression example is much smaller (about 3x).
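For reference, here is a minimal sketch of this kind of CPU-versus-GPU timing comparison for a regression network in the MXNet R package. It is not the script used for the numbers above; the BostonHousing data from the mlbench package, the layer sizes, and the hyperparameters are all assumptions made for illustration.

```r
library(mxnet)
library(mlbench)

data(BostonHousing)
x <- scale(data.matrix(BostonHousing[, -14]))  # 13 predictors, standardized
y <- BostonHousing[, 14]                       # target: median home value (medv)

# One-hidden-layer network with a linear regression output
data.sym <- mx.symbol.Variable("data")
fc1  <- mx.symbol.FullyConnected(data.sym, num_hidden = 64)
act1 <- mx.symbol.Activation(fc1, act_type = "relu")
fc2  <- mx.symbol.FullyConnected(act1, num_hidden = 1)
net  <- mx.symbol.LinearRegressionOutput(fc2)

train_once <- function(ctx) {
  mx.set.seed(0)
  mx.model.FeedForward.create(net, X = x, y = y, ctx = ctx,
                              num.round = 50, array.batch.size = 32,
                              learning.rate = 0.01, momentum = 0.9,
                              eval.metric = mx.metric.rmse)
}

cpu.time <- system.time(train_once(mx.cpu()))   # CPU baseline
invisible(train_once(mx.gpu()))                 # warm-up GPU run, excluded from timing
gpu.time <- system.time(train_once(mx.gpu()))   # timed GPU run

print(cpu.time)
print(gpu.time)
cat("GPU speed-up:", cpu.time["elapsed"] / gpu.time["elapsed"], "x\n")
```

Note that with a network this small the per-batch compute is tiny, so the measured speed-up is sensitive to overheads such as data transfer and start-up cost.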
Details are given below; my main questions would be:
The output I get:
Example code:
Details of my set-up
Environment info
Operating System: Ubuntu 16.04
R details: