Add symbol.SwapAxis operator; currently it can only do Forward(). #502
Conversation
I am sorry that I did not see your comments.
Thanks for your suggestions.
I have finished the new code based on your suggestions.
@@ -38,15 +38,15 @@ ADD_CFLAGS =
 #---------------------------------------------

 # whether use CUDA during compile
-USE_CUDA = 0
+USE_CUDA = 1
Please change this back to the default, as most users don't have CUDA.
Thanks for the contribution. I have made a few comments on the code; in general…
Thanks very much for your suggestions.
…swapaxis files. Some code style changes. The function is ready to go.
The function is ready to go; please check it.
I have come across a problem:
std::accumulate cannot be recognized by nvcc.
Oh, yes. Maybe creating our own version of the prod function is easier. Then I think it is OK. Thanks.
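A minimal sketch, not taken from the PR, of the kind of hand-written product helper the comment above suggests as a replacement for std::accumulate; the name shape_prod, the macro name, and the raw-pointer interface are assumptions chosen so nvcc can compile it for both host and device code:

```cpp
#include <cstddef>

// Mark the helper as host+device when compiled by nvcc, plain inline otherwise.
#ifdef __CUDACC__
#define SWAPAXIS_XINLINE __host__ __device__ inline
#else
#define SWAPAXIS_XINLINE inline
#endif

// Hypothetical replacement for std::accumulate(dims, dims + ndim, 1, std::multiplies<size_t>()):
// multiply all entries of a dimension array, e.g. to get the total element count of a shape.
SWAPAXIS_XINLINE size_t shape_prod(const size_t *dims, size_t ndim) {
  size_t res = 1;
  for (size_t i = 0; i < ndim; ++i) {
    res *= dims[i];
  }
  return res;
}
```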
For the error, as it indicates, the shape's size does not match the size of the TBlob. This is likely due to a shape initialization error, in either InferShape or Shape2Five.
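Purely as an illustration, assuming that Shape2Five pads a lower-rank shape up to five dimensions by prepending 1s (the PR's actual helper may place the padding differently), a standalone sketch of that idea:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of a Shape2Five-style helper: pad a shape with leading 1s
// so that, e.g., a 4-D shape (n, c, h, w) becomes the 5-D shape (1, n, c, h, w).
// A bug here (dropping or overwriting a dimension) would make the resulting size
// disagree with the TBlob, which is the kind of mismatch described above.
std::vector<size_t> ShapeToFive(const std::vector<size_t> &shape) {
  std::vector<size_t> out(5, 1);
  size_t offset = 5 - shape.size();  // assumes shape.size() <= 5
  for (size_t i = 0; i < shape.size(); ++i) {
    out[offset + i] = shape[i];
  }
  return out;
}
```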
The platform dependency issue was likely due to some uninitialized memory (variable) that causes nondeterminism, but that is just my guess.
Please resolve all the comments from cpplint; there are detailed messages.
For output, we need to call asnumpy on the result. The error was due to the fact that we did not call asnumpy to wait for the result, and the system started to shut down before the computation even started.
I cannot see any comments from cpplint on the swapaxis files. Where are they?
There is some error…
Hmm, the error message occurs before these when it scans through the files. You…
The CPU works right now, but not the GPU:

import mxnet as mx
import numpy as np

def test2():
    data_in = mx.symbol.Variable('data')
    conv = mx.symbol.Convolution(data=data_in, kernel=(3, 3), num_filter=16)
    datatmp = np.ones((1, 1, 32, 64))
    mxdata = mx.nd.array(datatmp)
    weightmp = np.ones((16, 1, 3, 3))
    mxweight = mx.nd.array(weightmp)
    biastmp = np.zeros(16)
    mxbias = mx.nd.array(biastmp)
    # bind on the GPU context; this is where the failure shows up
    exe_c = conv.bind(ctx=mx.gpu(0), args=[mxdata, mxweight, mxbias])
    exe_c.forward()
    # asnumpy() waits for the asynchronous computation to finish
    out = exe_c.outputs[0].asnumpy()
    print(out)

test2()

Error:
…support. Change the test function name to test_swapaxes.
                std::vector<TShape> *out_shape,
                std::vector<TShape> *aux_shape) const override {
  int input_num = in_shape->size();
  if (input_num == 0) {
CHECK_EQ(in_shape->size(), 1);
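A sketch, under the assumption of the usual MXNet operator-property layout (TShape and CHECK_EQ come from the framework headers, and param_.dim1/param_.dim2 are hypothetical names for the two axes to swap), of how the suggested check could replace the if (input_num == 0) guard; this is not the exact code merged in the PR:

```cpp
bool InferShape(std::vector<TShape> *in_shape,
                std::vector<TShape> *out_shape,
                std::vector<TShape> *aux_shape) const override {
  // Fail fast with a clear message instead of silently branching on an empty vector.
  CHECK_EQ(in_shape->size(), 1) << "Input: [data]";
  const TShape &dshape = in_shape->at(0);
  if (dshape.ndim() == 0) return false;  // input shape not known yet
  TShape oshape = dshape;
  std::swap(oshape[param_.dim1], oshape[param_.dim2]);  // hypothetical parameter fields
  out_shape->clear();
  out_shape->push_back(oshape);
  return true;
}
```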
Thanks for the good job! I have a last few comments; please address them and rebase to resolve the conflict with the current master: http://mxnet.readthedocs.org/en/latest/contribute.html#how-to-resolve-conflict-with-master The conflict might have something to do with the commits you have in mshadow; if that is the case, the best way might be to reset mshadow's version, or to keep a copy of your files and do a clean fork.
Done, please take a look!
Oh, sorry, I had not updated to your newest comments.
I think you forgot that I cannot push to your repository.
Is there another way to add your newest commits without doing a second fork?
Something goes wrong when I rebase onto your mxnet.
I did fetch your mxnet, but my repository gets changed back to my older version when I do the rebase.
I have pushed the newest code to my forked mxnet; can you check it?
I need a little time to figure out the strange rebase problem.
Can I do a merge?
The general instruction is here: http://mxnet.readthedocs.org/en/latest/contribute.html#how-to-resolve-conflict-with-master If you find files with conflicts, edit the files to merge the conflicts, and do a git add as indicated in the instruction.
The situation is: I have committed a lot in my local repository, and when I rebase onto your newest master, my current working directory content is reset to the first commit. What can I do?
OK, I think I found the way…
OK, now please check.
Closed due to #519.
This is an early version; it can run now but has not been tested yet.
This is just a preview version, meaning it is only meant to let you have a look and give me some ideas.