This repository has been archived by the owner on Oct 15, 2019. It is now read-only.

Zero gradient for concatenate #154

Open
wddabc opened this issue Mar 10, 2017 · 2 comments

wddabc commented Mar 10, 2017

It looks like the computation graph breaks at the concatenation operation: grad returns zero for a function that concatenates its input. Minimal working example (MWE):

import minpy.numpy as np
from minpy.core import grad

def foo_nocat(x):
    return 3 * x

def foo_cat(x):
    # Mathematically identical to foo_nocat: [x, x] . [[1], [2]] = 3x
    catx = np.concatenate([x, x], axis=1)
    return np.dot(catx, np.array([[1], [2]]))

test_x = np.array([[3]])
print(grad(foo_nocat)(test_x))  # correct output: 3
print(grad(foo_cat)(test_x))    # should be the same, but returns zero
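For reference, the two functions are the same: concatenating `[x, x]` and dotting with `[[1], [2]]` gives `x*1 + x*2 = 3x`, so both gradients should be 3. A quick central finite-difference check in plain NumPy (deliberately avoiding minpy's `grad`, so it is independent of the reported bug) confirms the expected value:

```python
import numpy as np

def foo_cat(x):
    # Concatenate [x, x] along axis 1, then dot with [[1], [2]]:
    # the result is x*1 + x*2 = 3x, the same function as foo_nocat.
    catx = np.concatenate([x, x], axis=1)
    return np.dot(catx, np.array([[1.0], [2.0]]))

x = np.array([[3.0]])
eps = 1e-6
# Central finite difference approximates d foo_cat / dx at x = 3.
g = (foo_cat(x + eps) - foo_cat(x - eps)) / (2 * eps)
print(g[0, 0])  # ~3.0
```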
jermainewang (Member) commented

@ZihengJiang Could you have a look? Also, please add this case to the unit tests.
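A minimal regression-test sketch for this case might look like the following. It uses plain NumPy finite differences to state the expected gradient independently of minpy's `grad` (the class and function names here are hypothetical, not from the minpy test suite; an actual test would additionally compare against `minpy.core.grad`):

```python
import unittest
import numpy as np

def numeric_grad(f, x, eps=1e-6):
    # Central finite difference for a scalar-output function of a 1x1 input.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

class TestConcatGrad(unittest.TestCase):
    def test_concat_matches_nocat(self):
        def foo_nocat(x):
            return 3 * x

        def foo_cat(x):
            # Same function as foo_nocat, expressed via concatenate + dot.
            catx = np.concatenate([x, x], axis=1)
            return np.dot(catx, np.array([[1.0], [2.0]]))

        x = np.array([[3.0]])
        g_nocat = numeric_grad(foo_nocat, x)
        g_cat = numeric_grad(foo_cat, x)
        # Both gradients should equal 3; the bug makes the concat path zero.
        self.assertAlmostEqual(g_nocat[0, 0], g_cat[0, 0], places=4)
```

Run with `python -m unittest` to execute the test case.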

@ZihengJiang ZihengJiang self-assigned this Mar 10, 2017
Taco-W (Member) commented May 31, 2017

@ZihengJiang Any follow-up on this?


4 participants