Convert to voigt and mandel as SVector/SMatrix #194
KnutAM wants to merge 10 commits into Ferrite-FEM:master
Conversation
Makes sense. Is there any reason not to just change
The static version doesn't support custom ordering. Unless doing something bad of course :) (Or implementing an ordering type)

using Tensors
v64=Tensor{2,2}((11.0, 21, 12, 22));
v32=Tensor{2,2,Float32}(v64);
julia> tosvoigt(v64)
4-element StaticArraysCore.SVector{4, Float64} with indices SOneTo(4):
11.0
22.0
12.0
21.0
julia> Tensors.DEFAULT_VOIGT_ORDER[2]
2×2 Matrix{Int64}:
1 3
4 2
julia> Tensors.DEFAULT_VOIGT_ORDER[2][1]=2
2
julia> Tensors.DEFAULT_VOIGT_ORDER[2][4]=1
1
julia> Tensors.DEFAULT_VOIGT_ORDER[2]
2×2 Matrix{Int64}:
2 3
4 1
julia> tosvoigt(v64)
4-element StaticArraysCore.SVector{4, Float64} with indices SOneTo(4):
11.0
22.0
12.0
21.0
julia> tosvoigt(v32)
4-element StaticArraysCore.SVector{4, Float32} with indices SOneTo(4):
22.0
11.0
12.0
21.0

(Although it should never happen unless users do something very strange, would it make sense to change
Question: you only need the Voigt format when inserting into the residual and Jacobian (which I guess tend to be normal arrays), and you can do that without allocations. So where does the speedup come from?

Using static arrays for the local equation system can be ~2x faster for reasonable equation-system sizes, see some benchmarks: https://knutam.github.io/Newton.jl/dev/#Benchmarks (And these are with RecursiveFactorization for the mutating version, so it should be as good as it gets; if not, let me know and I can update it :))
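A rough illustration of where that speedup comes from (not the PR code; the 8×8 system size is an arbitrary assumption): a small static solve stays on the stack, while the regular solve goes through heap-allocated arrays.

```julia
using StaticArrays, LinearAlgebra

# Hypothetical small "local" equation system, as in a local material solve.
A = @SMatrix rand(8, 8)
b = @SVector rand(8)

x_static  = A \ b                  # static solve, no heap allocation
x_regular = Matrix(A) \ Vector(b)  # ordinary heap-allocated LU solve

x_static ≈ x_regular               # same solution
```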
Codecov Report

@@ Coverage Diff @@
## master #194 +/- ##
==========================================
- Coverage 98.23% 96.78% -1.45%
==========================================
Files 17 17
Lines 1244 1338 +94
==========================================
+ Hits 1222 1295 +73
- Misses 22 43 +21
I wanted to have this as well! After a bit of discussion with @KnutAM and @fredrikekre, the API now looks a bit different:

julia> A = rand(SymmetricTensor{2,3})
3×3 SymmetricTensor{2, 3, Float64, 6}:
0.761033 0.89233 0.280383
0.89233 0.022293 0.366955
0.280383 0.366955 0.597853
julia> v = tovoigt(A) # old syntax, PR is not breaking
6-element Vector{Float64}:
0.761033056378681
0.02229297696504351
0.5978529160509549
0.3669551182206563
0.28038330882003726
0.8923299861624863
julia> v = tovoigt(SVector, A) # tovoigt(SArray, A) also allowed so that we can have dimension agnostic code
6-element SVector{6, Float64} with indices SOneTo(6):
0.761033056378681
0.02229297696504351
0.5978529160509549
0.3669551182206563
0.28038330882003726
0.8923299861624863
julia> v = tovoigt(SMatrix, A) # not allowed, 2nd order Tensor gives a vector after all
ERROR: MethodError: no method matching tovoigt(::Type{SMatrix}, ::SymmetricTensor{2, 3, Float64, 6})
Closest candidates are:
tovoigt(::Type{<:SMatrix}, ::Tensor{4, dim, T, M}; order) where {dim, T, M}
@ Tensors ~/.julia/dev/Tensors/src/voigt.jl:308
tovoigt(::Type{<:SVector}, ::SymmetricTensor{2, dim, T, M}; offdiagscale, order) where {dim, T, M}
@ Tensors ~/.julia/dev/Tensors/src/voigt.jl:311
tovoigt(::Type{<:SMatrix}, ::SymmetricTensor{4, dim, T}; offdiagscale, order) where {dim, T}
@ Tensors ~/.julia/dev/Tensors/src/voigt.jl:314
...
Stacktrace:
[1] top-level scope
@ REPL[117]:1

Custom ordering is also supported:

julia> tovoigt(SVector, A; order = Tensors.DEFAULT_VOIGT_ORDER[3])
6-element SVector{6, Float64} with indices SOneTo(6):
0.761033056378681
0.02229297696504351
0.5978529160509549
0.3669551182206563
0.28038330882003726
0.8923299861624863
It turned out that @KnutAM's generated functions are blazingly fast, so I figured we should reuse that. Here are a couple of benchmarking results for converting to static arrays.

Benchmarking code:

using Tensors, BenchmarkTools, StaticArrays, DataFrames
to_static_voigt(A; order=nothing) = tovoigt(SArray, A; order)
number_entries = Int[]
tensors = Any[]
mutate_default_order = Float64[]
mutate_custom_order = Float64[]
static_default_order = Float64[]
static_custom_order = Float64[]
for dim in 1:3, TT in (Tensor, SymmetricTensor), order in (2,4)
println(TT, "{", order, ", ", dim, "}")
A = rand(TT{order,dim})
v = tovoigt(A)
push!(tensors, A)
push!(number_entries, reduce(*, size(v)))
push!(mutate_default_order, @belapsed @inbounds tovoigt!($v, $A))
push!(mutate_custom_order, @belapsed @inbounds tovoigt!($v, $A; order=$(Tensors.DEFAULT_VOIGT_ORDER[dim])))
push!(static_default_order, @belapsed to_static_voigt($A))
push!(static_custom_order, @belapsed to_static_voigt($A; order=$(Tensors.DEFAULT_VOIGT_ORDER[dim])))
end
df = DataFrame("tensor type"=>typeof.(tensors),
"number of entries" => number_entries,
"tovoigt!(v, A)" => mutate_default_order .* 1e9,
"tovoigt!(v, A; order=custom)" => mutate_custom_order .* 1e9,
"tovoigt(SArray, A)" => static_default_order .* 1e9,
"tovoigt(SArray, A; order=custom)" => static_custom_order .*1e9)
sort!(df, "number of entries")

Could squeeze a bit more performance (PR vs. master).
Oopsie?
Merged directly via commit
Yeah, it was easier like that. It is a shame GitHub doesn't understand the link between the commit and the branch/PR.
For local residuals (in e.g. plasticity models) defined as custom types this gives very good speedup when converting to static vectors for iterations:
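A minimal sketch of that pattern, assuming this PR's `tovoigt(SVector, A)` method (the tensor values here are arbitrary): pack the state into a static vector for the local iterations, then unpack the converged result.

```julia
using Tensors, StaticArrays

σ = rand(SymmetricTensor{2, 3})

# Pack the tensor into a static vector for the local Newton iterations ...
x = tovoigt(SVector, σ)               # SVector{6, Float64}, no allocation

# ... and unpack the (here unchanged) result back into a tensor.
σ_back = fromvoigt(SymmetricTensor{2, 3}, x)
σ_back ≈ σ                            # round trip recovers the tensor
```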
and we can also use this to speed up internal functions, such as the inverse, where conversions to regular arrays are currently used (not included now, but can be in this or a later PR):
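For the inverse, the idea could look like the following sketch, assuming this PR's `tomandel(SMatrix, C)` method (the Mandel scaling makes the 6×6 matrix inverse correspond to the 4th-order tensor inverse):

```julia
using Tensors, StaticArrays, LinearAlgebra

C = rand(SymmetricTensor{4, 3})

# Invert the 6×6 Mandel representation on the stack, then map back.
Cinv = frommandel(SymmetricTensor{4, 3}, inv(tomandel(SMatrix, C)))

Cinv ⊡ C ≈ one(C)   # double contraction with the inverse gives the identity
```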
Also xref: #182 for speeding up eigenvalue calculation of 4th order (2-dim) tensors.