Improve linear algebra functions #14962
Comments
@mseeger @asmushetzel I noticed you both had insightful comments on a related PR: #15007. One thing I haven't seen validated yet for this issue (#14962) is whether MXNet contributors feel SVD and the Moore-Penrose inverse should be exposed via MXNet. At a glance, libraries like NumPy, SciPy, and TensorFlow all expose these operations, so from an adoptability standpoint alone it seems like a good idea to support them.
Hello, we are in fact starting work on SVD; it will come soon. With that, you can derive the others.
IMHO, one should not implement all sorts of derived computations as operators (inverse, logdet, det, ...), just as we implement conv2d as an op but a ResNet cell as a HybridBlock. It is seriously inefficient to recompute a decomposition several times just because the user cannot see what is really happening inside.
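To illustrate the point about building derived quantities on top of a single decomposition, here is a NumPy sketch (not MXNet code; `logdet_spd` is a hypothetical helper) of computing a log-determinant from one Cholesky factorization rather than exposing `logdet` as a standalone operator:

```python
import numpy as np

def logdet_spd(a):
    # log|A| for a symmetric positive definite A via a single Cholesky
    # factorization: A = L L^T, so |A| = prod(diag(L))^2 and therefore
    # log|A| = 2 * sum(log(diag(L))).
    l = np.linalg.cholesky(a)
    return 2.0 * np.sum(np.log(np.diag(l)))
```

The same factor `L` can be reused for solves, inverses, or sampling, which is exactly the efficiency argument for composing higher-level blocks from a shared decomposition.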
@mseeger I agree with your sentiment overall. Users shouldn't wander into using these multi-step ops without a warning, and the likelihood that these multi-step ops are actually needed is tiny. Personally I don't have a use for them; I'm just following up on loose ends from other user-reported issues in #14360. The most valid argument I've been reading is that every other numerical computation library supports these functions, so just for the sake of porting code across libraries it would make sense to have an equivalent in MXNet. If there is no one-to-one translation, or at least information on how to implement the op with MXNet primitives, anyone without some fluency in linear algebra may get stuck. That said, I'm not necessarily arguing that these multi-step ops need to be implemented. Even a one- or two-sentence blurb in the documentation describing why an op isn't supported and how to use the lower-level ops to get the desired result would, I think, satisfy anyone porting code.
Regarding svd and pinv: if you implement svd, a short stub in the documentation describing why pinv isn't provided and how to use svd to get pinv might be useful. I can add that if you like. This same paradigm could also be used for the more complex ops supported by other frameworks. I'm not really taking a strong stand on this one, just suggesting this as another option! |
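As a concrete version of the kind of documentation stub proposed above, here is a NumPy sketch (`pinv_from_svd` is a hypothetical name, not an MXNet API) of deriving the Moore-Penrose pseudoinverse from an SVD:

```python
import numpy as np

def pinv_from_svd(a, rcond=1e-15):
    # Moore-Penrose pseudoinverse via SVD: for A = U diag(s) V^T,
    # A+ = V diag(1/s) U^T, where reciprocals of singular values
    # below a relative cutoff are replaced by zero for stability.
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    cutoff = rcond * s.max()
    s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
    return vt.T @ (s_inv[:, None] * u.T)
```

Once an `svd` operator exists, a few lines like these in the docs would let users build `pinv` themselves without a dedicated operator.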
One of my projects needs to do matrix inversion, which is neither accurate nor convenient to do via Cholesky factorization, so I decided to implement it myself and improve the linalg package at the same time. New operators to add:
PyTorch uses MAGMA in its implementation; I'll try to use only cuBLAS and cuSolver so as not to add dependencies.
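For reference, the Cholesky-based inversion route mentioned above can be sketched in NumPy (`spd_inverse` is a hypothetical helper; it only applies to symmetric positive definite matrices, which is one reason a general-purpose inverse operator is being proposed):

```python
import numpy as np

def spd_inverse(a):
    # Inverse of a symmetric positive definite matrix via Cholesky:
    # A = L L^T, so A^{-1} = solve(L^T, solve(L, I)).
    # np.linalg.solve is used for clarity; a dedicated triangular
    # solver would exploit the structure of L.
    n = a.shape[0]
    l = np.linalg.cholesky(a)
    y = np.linalg.solve(l, np.eye(n))
    return np.linalg.solve(l.T, y)
```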