# Loop over the blocks of the TensorMap t, one per symmetry sector c
for (c, b) in blocks(t)
    # full SVD of the dense block, computed regardless of any truncation request
    U, Σ, V = _svd!(b, alg)
    Udata[c] = U
    Vdata[c] = V
    if @isdefined Σdata # cannot easily infer the type of Σ, so use this construction
        Σdata[c] = Σ
    else
        Σdata = SectorDict(c => Σ)
    end
    dims[c] = length(Σ)
end
This code gets executed unconditionally, independently of the truncation parameter, so the full SVD is always computed. Is there a way to calculate only the largest singular values, up to `truncdim`, without computing the rest? Similar to what the LowRankApprox.jl package does.
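For reference, the kind of partial computation being asked about can be sketched with a randomized range finder (the Halko–Martinsson–Tropp approach, which is also what LowRankApprox.jl builds on). This is a minimal illustration using only LinearAlgebra, not TensorKit code; the function name `rsvd` and the oversampling parameter `p` are made up for the example:

```julia
using LinearAlgebra, Random

# Sketch: approximate only the k largest singular triplets of A without a
# full SVD, via a randomized range finder. `rsvd` and `p` are illustrative
# names, not TensorKit or LowRankApprox API.
function rsvd(A::AbstractMatrix, k::Integer; p::Integer=5)
    n = size(A, 2)
    Ω = randn(n, min(k + p, n))   # random test matrix with slight oversampling
    Q = Matrix(qr(A * Ω).Q)       # orthonormal basis for (approximately) range(A)
    B = Q' * A                    # small (k+p) × n matrix; cheap to SVD
    F = svd(B)
    return Q * F.U[:, 1:k], F.S[1:k], F.V[:, 1:k]
end

# Quick check on an exactly rank-8 matrix, where the approximation is exact
Random.seed!(1234)
A = randn(100, 8) * randn(8, 80)
U, S, V = rsvd(A, 8)
@assert norm(A - U * Diagonal(S) * V') < 1e-8 * norm(A)
```

The cost is dominated by the products `A * Ω` and `Q' * A`, so for `k` much smaller than the matrix dimensions this is far cheaper than a full SVD, at the price of a controlled approximation error.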
Not with the current functionality. tsvd is really meant to be the counterpart of the full svd, but with some more options and an interface that I find better suited for tensors.
Under the hood, any TensorMap is a block-diagonal matrix (depending on how you want to partition the indices into left and right / domain and codomain). Any of the methods in LowRankApprox.jl could be applied to those dense matrices on the diagonal. But for many of our use cases, the truncation really throws away only a fraction of the singular values, and first computing all of them seems like the most efficient strategy.
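The block-diagonal observation can be made concrete with plain LinearAlgebra: the SVD of a block-diagonal matrix decomposes blockwise, which is exactly why the loop above runs one `_svd!` per sector. The variable names below are illustrative; this is a standalone sketch, not TensorKit code:

```julia
using LinearAlgebra

# Sketch: the singular values of a block-diagonal matrix are the union of the
# singular values of its diagonal blocks, mirroring the per-sector loop above.
b1 = randn(4, 3)
b2 = randn(5, 5)
M = [b1 zeros(4, 5); zeros(5, 3) b2]   # assemble the block-diagonal matrix

F1, F2 = svd(b1), svd(b2)              # independent per-block SVDs

# union of per-block singular values, sorted to match svdvals' ordering
blockvals = sort(vcat(F1.S, F2.S); rev=true)
@assert isapprox(blockvals, svdvals(M))
```

So any partial-SVD method, applied block by block, would slot naturally into the existing structure; the open question is only whether the per-block savings outweigh the overhead when the truncation keeps most singular values anyway.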
Nonetheless, I agree that there are other use cases where methods from LowRankApprox.jl could be useful.
See tensors/factorizations.jl, lines 415–425.