Cofactor Matrix #218

Open
wants to merge 2 commits into master

Conversation

termi-official
Member

@termi-official termi-official commented May 23, 2024

This is commonly used to compute the push-forward of surface area elements from the reference to the current configuration in continuum mechanics (Nanson's formula).

using Tensors, BenchmarkTools

F = Tensor{2,3,Float64}((1.1,-0.1,0.1, -0.1,0.9,0.0 ,0.25,0.0,1.2))

# Master version
cof2(F) = det(F)*transpose(inv(F))
@btime cof2($F); # 16.583 ns
@btime gradient(cof2, $F); # 233.892 ns
@btime gradient(cof2, $F, :all); # 236.603 ns

# This PR
@btime cof($F); # 5.890 ns
@btime gradient(cof, $F); # 78.131 ns
@btime gradient(cof, $F, :all); # 92.096 ns
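For context, Nanson's formula maps a reference area element dA with unit normal N to the current area vector n da via n da = cof(F) ⋅ N dA. A minimal sketch of that use, with a local `cof_ref` defined as on master so it runs without this PR's branch (`cof_ref`, `N`, `dA`, and `nda` are illustration-only names):

```julia
using Tensors

# Reference implementation of the cofactor, as on master (this PR adds
# an optimized `cof`); `cof_ref` is a local name for illustration only.
cof_ref(F) = det(F) * transpose(inv(F))

F = Tensor{2,3,Float64}((1.1,-0.1,0.1, -0.1,0.9,0.0, 0.25,0.0,1.2))

# Nanson's formula: n da = cof(F) ⋅ N dA
N   = Vec{3}((0.0, 0.0, 1.0))  # unit normal of the reference area element
dA  = 1.0                      # reference area
nda = cof_ref(F) ⋅ N * dA      # area vector in the current configuration
```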

@termi-official
Member Author

The nightly failure is unrelated to this PR. The behavior of the built-in eigenvalue reconstruction seems to have changed.

@fredrikekre
Member

Can cof not be re-used in inv? Also, can you add a note in the changelog for 1.17.0 (and update the version number in the project file).

@@ -7,7 +7,7 @@ using Statistics: mean
using LinearAlgebra
using StaticArrays
# re-exports from LinearAlgebra
-export ⋅, ×, dot, diagm, tr, det, norm, eigvals, eigvecs, eigen
+export ⋅, ×, dot, diagm, tr, det, cof, norm, eigvals, eigvecs, eigen
Member


I don't think this comes from LinearAlgebra, so maybe list it separately. Is `cof` a standard name for this operation, btw?

Member Author


Oh, right. In the continuum mechanics books I read, this operation is usually abbreviated cof (Holzapfel, Wriggers). The German Wikipedia also uses it.

https://de.wikipedia.org/wiki/Minor_(Lineare_Algebra)#Eigenschaften

But it is possible that this abbreviation is a German thing. If you want something different, we can use that instead.

Member


I don't mind cof, just wondering if it was called that, or something else, in other software. I can't find anything from a quick search though.

Member Author


My only worry here is that some standard library might include cofactors in the future. :D

@termi-official
Member Author

Can cof not be re-used in inv? Also, can you add a note in the changelog for 1.17.0 (and update the version number in the project file).

We could, but I am not sure about the performance implications. I will add a benchmark.

@termi-official
Member Author

If I understood you correctly, then you propose something along this line:

@generated function inv_new(t::Tensor{2, dim}) where {dim}
    ex = quote
        transpose(cof(t)) * dinv
    end
    return quote
        $(Expr(:meta, :inline))
        dinv = 1 / det(t)
        @inbounds return $ex
    end
end

Seems like this is slightly faster on my machine in 3D.

julia> F3 = Tensor{2,3,Float64}((1.1,-0.1,0.1, -0.1,0.9,0.0 ,0.25,0.0,1.2))
3×3 Tensor{2, 3, Float64, 9}:
  1.1  -0.1  0.25
 -0.1   0.9  0.0
  0.1   0.0  1.2

julia> F2 = Tensor{2,2,Float64}((1.1,-0.1, -0.1,0.9))
2×2 Tensor{2, 2, Float64, 4}:
  1.1  -0.1
 -0.1   0.9

julia> @btime Tensors.inv_new($F2)
  6.240 ns (0 allocations: 0 bytes)
2×2 Tensor{2, 2, Float64, 4}:
 0.918367  0.102041
 0.102041  1.12245

julia> @btime inv_new($F3)
  11.331 ns (0 allocations: 0 bytes)
3×3 Tensor{2, 3, Float64, 9}:
  0.936281    0.104031    -0.195059
  0.104031    1.12267     -0.0216732
 -0.0780234  -0.00866927   0.849588

julia> @btime Tensors.inv($F2)
  6.240 ns (0 allocations: 0 bytes)
2×2 Tensor{2, 2, Float64, 4}:
 0.918367  0.102041
 0.102041  1.12245

julia> @btime Tensors.inv($F3)
  11.612 ns (0 allocations: 0 bytes)
3×3 Tensor{2, 3, Float64, 9}:
  0.936281    0.104031    -0.195059
  0.104031    1.12267     -0.0216732
 -0.0780234  -0.00866927   0.849588

Reproducer

using Tensors, BenchmarkTools

F2 = Tensor{2,2,Float64}((1.1,-0.1, -0.1,0.9))
F3 = Tensor{2,3,Float64}((1.1,-0.1,0.1, -0.1,0.9,0.0 ,0.25,0.0,1.2))

@btime Tensors.inv_new($F2)
@btime Tensors.inv_new($F3)
@btime Tensors.inv($F2)
@btime Tensors.inv($F3)

Should I include the change?

@KnutAM
Member

KnutAM commented May 23, 2024

I'm getting

julia> @btime inv_new($F2);
  2.100 ns (0 allocations: 0 bytes)

julia> @btime inv_new($F3);
  6.800 ns (0 allocations: 0 bytes)

julia> @btime Tensors.inv($F2);
  1.900 ns (0 allocations: 0 bytes)

julia> @btime Tensors.inv($F3);
  4.300 ns (0 allocations: 0 bytes)

julia> @btime myinv($F2);
  1.900 ns (0 allocations: 0 bytes)

julia> @btime myinv($F3);
  7.307 ns (0 allocations: 0 bytes)

with

@generated function inv_new(t::Tensor{2, dim}) where {dim}
    ex = quote
        transpose(cof(t)) * dinv
    end
    return quote
        $(Expr(:meta, :inline))
        dinv = 1 / det(t)
        @inbounds return $ex
    end
end

myinv(F) = cof(F)'/det(F);

@termi-official
Member Author

I guess the optimizer has some trouble with the transpose statement here. It is also questionable how reliable the measurements are at this timescale.

@KnutAM
Member

KnutAM commented May 23, 2024

Could do

cof(t::SecondOrderTensor) = _cof(t, Val(false))
inv(t::SecondOrderTensor) = _cof(t, Val(true)) / det(t)
@generated function _cof(t::Tensor{2, dim, T}, ::Val{Transpose}) where {dim, T, Transpose}
    Tt = get_base(t)
    idx(i,j) = Transpose ? compute_index(Tt, j, i) : compute_index(Tt, i, j)
    ...

but I'm not sure whether that is worth the added complexity compared to just duplicating the code...
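A standalone sketch of that `Val`-toggle pattern with plain matrices (hypothetical `cof_demo`/`inv_demo` names, not the actual `@generated` Tensors code); the point is that the transpose choice is resolved by dispatch at compile time, so no runtime branch or extra transpose remains:

```julia
using LinearAlgebra

# Dispatching on Val{Transpose} selects the plain or transposed cofactor
# at compile time; in the real @generated version this would swap i/j in
# the index computation instead of materializing a transpose at runtime.
_cof_demo(A, ::Val{false}) = det(A) * transpose(inv(A))  # cof(A)
_cof_demo(A, ::Val{true})  = det(A) * inv(A)             # transpose(cof(A))

cof_demo(A) = _cof_demo(A, Val(false))
inv_demo(A) = _cof_demo(A, Val(true)) / det(A)
```

Whether this beats two duplicated `@generated` bodies is mostly a readability question; the compiled code for each `Val` variant should be identical to a hand-written version.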

3 participants