v0.7 upgrade, comments, bugfixes and cleanup #43
Conversation
REQUIRE (Outdated)
```
@@ -1,2 +1,3 @@
julia 0.6
IterativeSolvers 0.4.1
julia 0.7
```
Could we still maintain v0.6 compatibility?
Maybe through Compat
It was mostly supported thanks to Compat, just needed a few @static if statements.
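For readers unfamiliar with the pattern, a minimal sketch of that kind of branching is shown below; the mul!/A_mul_B! rename is just a common 0.6/0.7 difference used for illustration, not necessarily one of the spots patched in this PR.

```julia
# Illustration only: the sort of version branch Compat-based code needs.
using Compat

@static if VERSION >= v"0.7-"
    using LinearAlgebra
    matvec!(y, A, x) = mul!(y, A, x)        # 0.7 spelling
else
    matvec!(y, A, x) = A_mul_B!(y, A, x)    # 0.6 spelling
end
```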
REQUIRE (Outdated)
```
@@ -1,2 +1,3 @@
julia 0.6
IterativeSolvers 0.4.1
IterativeSolvers 0.7.0
```
We should probably use v0.6.0 here, because IterativeSolvers 0.7.0 needs Julia 0.7.
Actually, more stuff needs to be changed; I was running the tests wrongly. This too, yes.
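For reference, a REQUIRE file that keeps v0.6 as the floor would presumably look something like the sketch below; the exact bounds are an assumption based on the comments above, not the file from this PR.

```
julia 0.6
Compat
IterativeSolvers 0.6.0
```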
Cool, ping me when you're done making all those changes. We should add v0.7 to .travis.yml as well. Thanks a lot for this PR! There was a lot of dead code in the classical AMG which I didn't have time to get to.
Not all deps are fully upgraded to v0.7, so they are failing the v0.7 and nightly tests. For instance, I had to use the JLD PR JuliaIO/JLD.jl#215 to make the tests pass on my machine. Not sure what the best fix for this is; maybe allow failures and wait?
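One way to implement the "allow failures and wait" option is an allow_failures entry in .travis.yml. The snippet below is only a sketch of that idea; the repository's actual Travis configuration may differ.

```yaml
language: julia
julia:
  - 0.6
  - 0.7
  - nightly
matrix:
  allow_failures:
    - julia: 0.7      # deps not fully upgraded yet
    - julia: nightly
```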
Codecov Report

```
@@            Coverage Diff             @@
##           master      #43      +/-   ##
==========================================
- Coverage   90.85%   87.58%    -3.28%
==========================================
  Files          10       10
  Lines         536      604       +68
==========================================
+ Hits          487      529       +42
- Misses         49       75       +26
```

Continue to review full report at Codecov.
src/AMG.jl (Outdated)
```
@@ -1,8 +1,12 @@
module AMG
```
Could you also modify this module name to AlgebraicMultigrid? This was AMG from the time it was ranjanan/AMG.jl.
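A minimal sketch of the rename is below; the module body is elided, and the alias shown at the end is the same const AMG = AlgebraicMultigrid trick used in the benchmark script later in this thread.

```julia
# Sketch of the rename; the actual module contents are omitted.
module AlgebraicMultigrid

# ... solvers, smoothers, preconditioners ...

end # module

# Downstream code can keep the old short name as an alias:
const AMG = AlgebraicMultigrid
```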
Would it help to convert all the JLD files to use JLD2 instead?
I think JLD2 is itself not upgraded yet, but moving to JLD2 seems like the right thing anyway, since JLD2 seems better supported than JLD.
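A one-off conversion could be a small script like the sketch below; the file name and the "G" key are assumptions borrowed from the benchmark snippet later in this thread.

```julia
# Hypothetical conversion: read the old JLD file, rewrite it with JLD2.
using JLD
using JLD2

G = JLD.load("graph.jld", "G")    # assumed file name and key
JLD2.@save "graph.jld2" G         # same data, JLD2 container

# Tests would then read it back with:
JLD2.@load "graph.jld2" G
```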
JLD2 also only gives warnings but the tests still pass, so we are good now. The only problem now is METADATA: the URL there links to this package even though it uses the old name.
v0.7 tests fail on Windows with JLD2, same error as in JuliaIO/JLD2.jl#79 (comment).
Hey, sorry about the late reply: JuliaCon and all. 😄 Okay, so JLD and JLD2 both fail on Windows master, but the package is not directly dependent on them. If you can find any alternative method of saving this data, I'd be open to it.
I implemented a type-stable parallel smoothing using KissThreading. This package is not registered yet, so we cannot officially depend on it; I have commented out the code for now. I will do one last test of this package, then it should be ready to review and hopefully merge.
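Since KissThreading is unregistered, here is only a generic illustration of a type-stable threaded smoothing sweep built on Base.Threads; it is not the PR's KissThreading-based code, and it assumes a symmetric matrix so that a CSC column can stand in for a row.

```julia
# Generic illustration, not the PR's KissThreading-based smoother.
using SparseArrays, Base.Threads

# Weighted Jacobi sweep; assumes A is symmetric so that column i of the
# CSC storage can be used as row i.
function threaded_jacobi!(x::Vector{T}, A::SparseMatrixCSC{T}, b::Vector{T};
                          ω::T = T(2) / T(3)) where T
    rows, vals = rowvals(A), nonzeros(A)
    xold = copy(x)
    @threads for i in 1:size(A, 1)
        s = zero(T)              # typed accumulator keeps the loop type stable
        d = one(T)
        for k in nzrange(A, i)
            j, v = rows[k], vals[k]
            j == i ? (d = v) : (s += v * xold[j])
        end
        x[i] = (one(T) - ω) * xold[i] + ω * (b[i] - s) / d
    end
    return x
end
```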
I believe this PR is ready for a review @ranjanan.
Wow, this has turned out to be quite a great pull request! Thanks! 😄 I am travelling at the moment, so please give me a couple of days to review this.
src/multilevel.jl (Outdated)
```
levels::Vector{Level{Ti,Tv}}
final_A::SparseMatrixCSC{Ti,Tv}
coarse_solver::S
presmoother::Pre
postsmoother::Post
workspace::TW
```
I like the fact that we now store this debugging information. Is there a way we can not store it by default and give the user the option to store it?
This is not for debugging, it is for pre-allocation. This is how we can do a fully in-place solve, so the ldiv! now has 0 allocations.
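To make that concrete, here is a usage sketch, assuming the three-argument ldiv! method from this PR: the workspace is built once with the hierarchy, so repeated preconditioner applications inside CG reuse it instead of allocating.

```julia
# Usage sketch; assumes the ldiv!(x, p, b) method discussed above.
using AlgebraicMultigrid, LinearAlgebra, SparseArrays

n = 1000      # small 1-D Poisson problem built by hand for illustration
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))
b = rand(n)
x = zeros(n)

ml = smoothed_aggregation(A)                    # hierarchy + preallocated workspace
p  = AlgebraicMultigrid.aspreconditioner(ml)

ldiv!(x, p, b)                                  # one AMG cycle, reusing the workspace
@allocated ldiv!(x, p, b)                       # should be (close to) zero after warm-up
```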
```
else
    x .= b
end
solve!(x, p.ml, b, maxiter = 1, calculate_residual = false)
```
maintain tol = 1e-12
Because calculate_residual is false, tol will be ignored, and since tol is way too small it would never really lead to the termination of the loop anyway. Computing the tolerance is expensive, so blindly running one iteration of the AMG solver is more efficient. Again, this is what is going to happen anyway unless the residual norm is less than tol.
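Spelled out, the control flow being described is roughly the schematic below; it is not the package's actual solve! implementation, just an illustration of why tol is irrelevant when calculate_residual is false.

```julia
using LinearAlgebra

# Schematic only: with calculate_residual = false the norm below is never
# computed, so tol plays no role and exactly maxiter cycles run.
function schematic_solve!(x, A, b, cycle!; maxiter = 1, tol = 1e-12,
                          calculate_residual = true)
    for _ in 1:maxiter
        cycle!(x, A, b)
        calculate_residual && norm(b - A * x) < tol && break
    end
    return x
end
```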
```
else
    x .= b
end
solve!(x, p.ml, b, maxiter = 1, calculate_residual = false)
```
Same here
src/preconditioner.jl (Outdated)
```
else
    x = copy(b)
end
solve!(x, p.ml, b)
```
And here
@mohamed82008 Unfortunately, there seems to be a performance regression on Julia v0.6. Your branch:
I will look into it.
Can you post the exact code you used?
```julia
using JLD, IterativeSolvers, AlgebraicMultigrid
using BenchmarkTools
const AMG = AlgebraicMultigrid

function bench()
    a = load("/Users/ranjan/graphs/1m.jld")["G"]
    @show summary(a)
    b = zeros(size(a, 1))
    b[98531] = -1
    b[332615] = 1
    ml2 = smoothed_aggregation(a)
    t2 = @benchmark smoothed_aggregation($a)
    println("Time for constructing AMG - Julia")
    display(t2)
    p2 = AMG.aspreconditioner(ml2)
    c2 = cg(a, b, Pl = p2, maxiter = 100_000, tol = 1e-6, verbose = true)
    t2 = @benchmark x = cg($a, $b, Pl = $p2, maxiter = 100_000, tol = 1e-6)
    println("Time for running CG - Julia")
    display(t2)
    @show sum(abs2, a*c2 - b)
    ml2    # return the AMG hierarchy
end

bench()
```
The performance regression in the operator building comes from making the code identical to the algorithm for unsymmetric matrices. This leads to unnecessary copying and transposing when the matrix is symmetric. Some of these can be avoided for both symmetric and unsymmetric matrices, but others are necessary for unsymmetric matrices, so my solution to this is to dispatch on the matrix type.
In other words, some extra allocations are not avoidable without being wrong for unsymmetric matrices, or at least not following the known algorithm. One can argue that the direction of being strongly coupled to some other variable is arbitrary, and so we can freely swap the semantics of rows and columns, but I am not sure we should do that. I will make a correction that should make the operator building allocate less when wrapping the matrix with the symmetric wrapper type.
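The dispatch idea could look roughly like the sketch below; build_op and its placeholder body are hypothetical, and the only point is that the symmetric method can skip the transpose copy.

```julia
using LinearAlgebra, SparseArrays

# Hypothetical stand-in for the operator-building step: in general it needs
# both A and its transpose, which is where the extra copy comes from.
build_op(A::SparseMatrixCSC, At::SparseMatrixCSC) = A + At   # placeholder computation

# General (possibly unsymmetric) path: materialize the transpose.
build_op(A::SparseMatrixCSC) = build_op(A, copy(transpose(A)))

# Symmetric path: A is its own transpose, so no copy is needed. Assumes the
# wrapped matrix is stored in full and really is symmetric.
build_op(A::Symmetric{<:Any,<:SparseMatrixCSC}) = build_op(parent(A), parent(A))

# Usage:
S = sprand(100, 100, 0.05); S = S + copy(S')
build_op(S)              # general path, allocates a transposed copy
build_op(Symmetric(S))   # symmetric path, no extra copy
```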
Oh also in v0.6 doing
Interestingly, the real culprit was failed inference in https://github.com/mohamed82008/AlgebraicMultigrid.jl/blob/76ac8f15e8314174bfdba559095d12a23ea5e2fe/src/smoother.jl#L29. Changing 0 to
Oh I see.
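For context, this is the usual literal-zero inference trap; the snippet below is a generic illustration of the pattern, not the actual line from smoother.jl.

```julia
# Generic illustration of the inference problem a literal 0 can cause.
function rowsum_unstable(A::AbstractMatrix, i)
    s = 0                      # starts as Int, later promoted to eltype(A),
    for j in axes(A, 2)        # so the accumulator becomes a Union and inference fails
        s += A[i, j]
    end
    return s
end

function rowsum_stable(A::AbstractMatrix, i)
    s = zero(eltype(A))        # typed zero keeps `s` concrete throughout
    for j in axes(A, 2)
        s += A[i, j]
    end
    return s
end

# @code_warntype rowsum_unstable(rand(4, 4), 1)   # shows Union{Int64,Float64}
```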
```
summary(a) = "868150×868150 SparseMatrixCSC{Float64,Int64}"
Time for constructing AMG - Julia
BenchmarkTools.Trial:
  memory estimate:  1.15 GiB
  allocs estimate:  13412264
  --------------
  minimum time:     1.484 s (34.11% GC)
  median time:      1.538 s (36.10% GC)
  mean time:        1.536 s (35.58% GC)
  maximum time:     1.584 s (35.96% GC)
  --------------
  samples:          4
  evals/sample:     1
Time for running CG - Julia
BenchmarkTools.Trial:
  memory estimate:  33.13 MiB
  allocs estimate:  295
  --------------
  minimum time:     3.011 s (0.01% GC)
  median time:      3.019 s (0.04% GC)
  mean time:        3.019 s (0.04% GC)
  maximum time:     3.027 s (0.07% GC)
  --------------
  samples:          2
  evals/sample:     1
sum(abs2, a * c2 - b) = 1.7681244199514989e-12
```
Sorry I was a little lax on this; I was moving cities. Thanks again for the great PR!
This PR: