Changes for a Lapack module #1281
Conversation
The error is my bug, and it will be fixed on master soon. Then there is a problem that in …
Thanks for fixing the problem and spotting the other one. I will fix the second and (fingers crossed) rebase after your commit. Should I then create a new pull request and close this one?
You can force-push (i.e., …)
Add `base/Lapack.jl` containing a module with the definitions of the direct `Lapack` interface functions. Code from `base/{factorizations,linalg_lapack,linalg_specialized}.jl` was moved to this file. The order of files in `sysimg.jl` was changed to accommodate these changes. New functions were added to the list of exports in `export.jl`.
I think this should be good to go now.
This is a great reorganization and I look forward to merging very soon!
This is awesome. I am not a fan of the names, though; I suggest a little more work before merging:
The code has actually become a lot more compact since we started the work on lapack, and we have a whole lot more functionality in there now.
BTW, @StefanKarpinski has said that we should follow the convention of using only lower case characters for filenames. I have not yet reverted DSP.jl and DSP_fftw.jl, because once you introduce upper case names, it is a real pain on platforms where the filesystem ignores case (OS X, for example, and perhaps Windows also). It would be best not to introduce Lapack.jl, and instead introduce lapack.jl in the pull request. I am not sure what is the best way to do this. For my RNG stuff, I just found it easier to junk my pull request and create a new one.
I would prefer to separate the blas and lapack interfaces. They are separate libraries after all. |
They really are joined at the hip, and we do ship them as one library (libopenblas). This only combines the interfaces in the Lapack module, while still allowing the flexibility to ship different libraries for BLAS and LAPACK. I recollect discussion in some other issue where it seemed like merging these was the right thing to do. I do not have a strong opinion on this one way or another.
Yes, in our default setup they're linked together, but lapack calls blas and not the other way around, so there is an abstraction layer there.
Change instances of $string(foo) to $(string(foo)) for new semantics of $ operator.
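The interpolation change can be illustrated with a minimal sketch (the variable names here are hypothetical; the actual call sites are inside the Lapack wrapper code):

```julia
# Under the new semantics of the $ operator, the whole call
# expression must be wrapped in parentheses to be interpolated:
fname = :dgesvd
msg = "in $(string(fname))"   # interpolates the result of string(fname)

# The old form "$string(foo)" would now interpolate only the
# identifier `string` itself, followed by the literal text "(foo)".
```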
I agree that Lapack subroutines call BLAS subroutines, but I don't think that is relevant here. In Julia you just see a … I have committed changes to the … I agree with moving all the … My LapackMod branch is becoming a mess. If there are no objections I will close this pull request, start from a clean version of master, and create a new pull request. My relationship with git is improving but still has a long way to go before we can be considered friends.
This is just great. I really like how you centralized some of the checks, and creating the SymTridiagonal type is a great idea. I also liked the one-line comments in some places on what the different Lapack routines do; once this gets merged I may contribute more of those over time. @ViralBShah, on the … Do we want to create an SVDDense type? Again, I'd say that is something to consider after merging. This is too good to let languish!
Let us keep BLAS separate for now, but in a Blas module. Feel free to create a new pull request. -viral On 16-Sep-2012, at 7:51 PM, dmbates [email protected] wrote:
I will think of alternate names. -viral
Looking at … Right now I can't test changes on a fresh pull of the master branch because of the segfault issue I mentioned on julia-dev.
Some linalg comments! I have translated some code from Matlab and have some suggestions for the linear algebra module in Julia. My understanding is that a major revision of the module is underway by Douglas Bates, and hence I guess it is easier to write my comments here. Please correct me if I am wrong, as I am new to Julia and to contributing in general.

I found a bug in _jl_lapack_gels and I have made a pull request with a fix, which pao was kind enough to redirect to this place.

When multiplying zero-dimension matrices, e.g. randn(4, 0) * randn(0, 4), a 4x4 zero matrix is returned, which is very convenient. However, _jl_gemm at line 430 in linalg_blas.jl passes stride(A, 2) and stride(B, 2), which may be zero. The BLAS subroutine requires a value of at least one, so I guess the easiest fix would be just to write max(stride(B, 2), 1). It works on my machine. The error only shows up when Julia is built with MKL, but the requirement applies to BLAS in general.

I often use thin SVDs, and hence I have made the method function svd{T<:Union(Float64,Float32,Complex128,Complex64)}(A::StridedMatrix{T}, vecs::Int64) to avoid the computation of the large U matrix. I believe such a method would be a good addition to Julia.

A side note: I read some comments about the two SVD implementations. I vote for just one SVD function at the user level. I have no opinions on which of the Lapack routines it should call. There is a svdvals but no eigvals; I think it would be nice to have a way to avoid the computation of the vectors.
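The proposed fix can be sketched as follows (a minimal illustration only; `_jl_gemm` is internal to the 2012-era Julia codebase, so this just mirrors the relevant lines rather than the actual patch):

```julia
A = randn(4, 0)
B = randn(0, 4)

# stride(B, 2) is 0 for a matrix with zero rows, but the BLAS gemm
# routine requires each leading dimension argument to be at least 1.
# Clamping avoids the error seen when Julia is built against MKL:
lda = max(stride(A, 2), 1)
ldb = max(stride(B, 2), 1)

A * B   # with the fix, this returns a 4x4 zero matrix as expected
```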
@ViralBShah Have you considered alternative names for lud, chold, etc?
I actually prefer making lu return the factorization object, even though it breaks matlab compatibility. Once we have lud for dense, would we use lus for sparse? -viral
Actually the proposed convention used the 'd' for 'decomposition', not 'dense'. So lu stays Matlab compatible: lu(A::StridedMatrix) = factors(lud(A)). I don't have strong opinions on whether or not there should be both an … By the way, I haven't been able to write a …
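The layering described above might look like this (a sketch; `lud` and `factors` are the proposed names under discussion, not a settled API):

```julia
# Decomposition-object version: returns a factorization object
# that keeps the compact LAPACK representation.
F = lud(rand(5, 5))

# Matlab-compatible version, defined in terms of the object form:
lu(A::StridedMatrix) = factors(lud(A))

# factors() expands the compact form into explicit components.
L, U, p = lu(rand(5, 5))
```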
@ViralBShah, aside from the general problems from incompatibility, we've realized that the same logic applies to … Would …
@dmbates, presumably the first person who needs factors of the BunchKaufman decomposition will figure this out :-). Thanks for adding it, I'd never heard about it before.
Ok, I recollect the discussion now. How about lufact? Should we standardize on using factorizations as the terminology in names, and avoid the name decomposition? Basically, a consistent naming scheme one way or another. -viral
I feel kind of guilty about the term factorizations rather than decompositions because I was the one who introduced the term. In many ways I think that ludecomp, choldecomp, ... sound better than lufact, cholfact, ... but I am happy to go with whatever decision is made. I am going to close this pull request now and open up another on a clean branch.
Would it be nuts to use the uppercase version to indicate that you want a decomposition object? So … An alternative idea would be to have a decomposition package that includes the decomposition versions of all the Matlab-style calls. I don't really care for that, however. I'd be more inclined to just use the Matlab names and have people call them differently. The one that's problematic is …
We have reserved the uppercase for things like type and module names. I'd be ok with either name, and we can still switch from factorizations to … If anything, the decomposition versions should be the default, since they … I would think that the simplest and cleanest thing to do right now is to … -viral
@timholy Even in SVD, IIRC, you can do the decomposition, but choose not to … -viral
@ViralBShah I don't think it is possible to create an SVD in a compact form and later use that form to generate U and/or V'. Neither …
@dmbates, thanks for pointing that out. It is late, and I got lazy to look up … -viral
For SVD, I would argue (tentatively) that providing left and right SVD variants is probably the best way to go. Maybe

U, S = lsvd(X)
S, V = rsvd(X)

or

U, S = svdl(X)
S, V = svdr(X)
Why not use multiple dispatch and create an svd method taking two additional Char arguments that get passed to Lapack.gesvd?
Passing …
How about …
I'm not quite clear on what we're talking about anymore. Just whether to compute the left and/or right factors? I guess there are three flags you could potentially pass in: a boolean for each of the three components U, S and V. Giving false for any of them would result in not returning that one. This is problematic for type inference, though, where writing something in the spirit of …
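The type-inference concern can be illustrated with a hypothetical signature (none of these names are actual API in this PR; this is just a sketch of the design trade-off):

```julia
# If the shape of the result depends on runtime Bool values,
# the return type cannot be determined from the argument types:
#
#     svd(X, wantU::Bool, wantV::Bool)
#     # may return (U, S, V), (U, S), (S, V), or just S
#
# Encoding the choice in the type domain instead keeps each
# method's return type fixed, which is what dispatch buys you:
abstract SVDValsOnly    # hypothetical marker types
abstract SVDThin

# svd(X, SVDValsOnly)  always returns S
# svd(X, SVDThin)      always returns (U, S, V)
```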
Okay, I can see why making type inference easier is a valid reason. In my comment on different versions of …
Just asked Alan about terminology and he says "singular value decomposition" but "LU factorization". History being what it is. My big question is: would the performance vs. compatibility problem be totally solved if the syntax …
For what it's worth, Mathematica decided to call everything decompositions: LUDecomposition, QRDecomposition, etc.
wikipedia.org uses "decomposition" for Cholesky, LU, QR and SVD, and they don't have anything to say about Bunch-Kaufman. Not that I would hold wikipedia.org to be the definitive authority on usage. As far as I can recall, the SVD is the only decomposition that has so many options. If you look at the comments in … plus some version of the last two for combinations like 'A', 'N', … I would suggest … To me it seems that we are getting too detailed. If someone really wants one of these arcane combinations they should call Lapack.gesvd! or Lapack.gesdd! directly.
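A direct call for one of the arcane combinations might look like this (hypothetical argument order; this assumes the wrapper mirrors LAPACK's dgesvd job flags, which is not confirmed in this PR):

```julia
# jobu = 'S': compute the thin U; jobvt = 'N': skip V' entirely.
# The trailing ! signals that the routine overwrites its input,
# hence the copy to preserve A.
U, S, Vt = Lapack.gesvd!('S', 'N', copy(A))
```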
I agree. A big reason to have easily-accessible direct bindings to lapack is to get those extra behaviors in case you want them.
Add base/Lapack.jl containing a module with the definitions of the direct Lapack interface functions. Code from base/{factorizations,linalg_lapack,linalg_specialized}.jl was moved to this file. The order of files in sysimg.jl was changed to accommodate these changes. New functions were added to the list of exports in export.jl.

Currently this code fails the test/lapack.jl script because the first call to eig() fails. For example …