Commit c2f07fb: update for Julia v1.6
chrisvwx committed Apr 4, 2021
1 parent 7df3561 commit c2f07fb
Showing 13 changed files with 58 additions and 52 deletions.
2 changes: 1 addition & 1 deletion Project.toml
@@ -2,7 +2,7 @@ name = "LLLplus"
uuid = "142c1900-a1c3-58ae-a66d-b187f9ca6423"
keywords = ["lattice reduction", "lattice basis reduction", "SVP", "shortest vector problem", "CVP", "closest vector problem", "LLL", "Lenstra-Lenstra-Lovász", "Seysen", "Brun", "VBLAST", "subset-sum problem", "Lagarias-Odlyzko", "Bailey–Borwein–Plouffe formula"]
license = "MIT"
version = "1.3.1"
version = "1.3.2"

[deps]
DelimitedFiles = "8bb1440f-4735-579b-a4ab-409b98df4dab"
52 changes: 29 additions & 23 deletions README.md
@@ -5,12 +5,13 @@

LLLplus provides lattice tools such as
[Lenstra-Lenstra-Lovász](https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%93Lov%C3%A1sz_lattice_basis_reduction_algorithm)
-(LLL) lattice reduction. This class of tools are of practical and
-theoretical use in cryptography, digital communication, and integer programming.
+(LLL) lattice reduction which are of practical and
+theoretical use in cryptography, digital communication, integer
+programming, and more.
This package is experimental and not a robust tool; use at your own
risk :-)

-LLLplus provides functions for LLL,
+LLLplus has functions for LLL,
[Seysen](http://link.springer.com/article/10.1007%2FBF01202355), and
[Hermite-Korkine-Zolotarev](http://www.cas.mcmaster.ca/~qiao/publications/ZQW11.pdf)
lattice reduction
@@ -30,7 +31,7 @@ functions are also included; see the `subsetsum`,
<p>

Each function contains documentation and examples available via Julia's
-built-in documentation system, for example with `?lll`. Documentation
+built-in documentation system (try `?lll` or `@doc(lll)`). Documentation
for all functions is [available](https://christianpeel.github.io/LLLplus.jl/dev). A tutorial notebook is
found in the [`docs`](docs/LLLplusTutorial.ipynb) directory or on
[nbviewer](https://nbviewer.jupyter.org/github/christianpeel/LLLplus.jl/blob/master/docs/LLLplusTutorial.ipynb).
@@ -43,11 +44,12 @@ Pkg.add("LLLplus")
using LLLplus

# do lattice reduction on a matrix with randn entries
-N = 100;
+N = 40;
H = randn(N,N);
B,T = brun(H);
B,T = lll(H);
B,T = seysen(H);
+B,T = hkz(H);

# check out the CVP solver
Q,R=qr(H);
@@ -63,28 +65,32 @@ sum(abs.(u-uhat))
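Several lines of the getting-started snippet are collapsed by the hunk above. As a rough illustration only (the hidden lines are not reproduced here, so the message `u`, the noise level, and the exact calls are assumptions), a complete CVP round-trip in the same spirit might look like this:

```julia
using LLLplus, LinearAlgebra

# minimal CVP round-trip (a sketch, not the exact lines hidden in the diff)
N = 40
H = randn(N, N)
u = rand(0:10, N)             # hypothetical integer coordinates of a lattice point
y = H * u + 0.001 * randn(N)  # noisy observation of the lattice point H*u
Q, R = qr(H)
uhat = cvp(Q' * y, R)         # closest-vector estimate of u
sum(abs.(u - uhat))           # 0.0 when the estimate recovers u exactly
```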
<summary><b>Execution Time results</b> (click for details)</summary>
<p>

-In the first test we compare the `lll` function from LLLplus, the
+In the first test we compare several LLL functions: the `lll` function from LLLplus, the
`l2avx` function in the `src\l2.jl` file in LLLplus, the
-`lll_with_transform` function from Nemo (which uses FLINT), and the
-`lll_reduction` function from fplll. Nemo and fplll are written by
-number theorists and are good benchmarks against which to compare. We
-first show how the execution time varies as the basis (matrix) size
-varies over [4 8 16 32 64]. For each matrix size, 20 random bases
-are generated using fplll's `gen_qary` function with depth of 25
-bits, with the average execution time shown; the `eltype` is `Int64`
-except for NEMO, which uses GMP (its own `BigInt`); in all cases the
-`δ=.99`. The vertical axis shows
-execution time on a logarithmic scale; the x-axis is also
-logarithmic. The generally linear nature of the LLL curves supports
-the polynomial-time nature of the algorithm. The `LLLplus.lll`
-function is slower, while `l2avx` is similar to fplll. Though not
-shown, using bases from `gen_qary` with bit depth of 45 gives fplll
-a larger advantage. This figure was generated using code in
-`test/timeLLLs.jl`.
+`lll_with_transform` function from
+[Nemo.jl](https://github.com/Nemocas/Nemo.jl) (which uses FLINT), and
+the `lll_reduction` function from
+[fplll](https://github.com/fplll/fplll). Nemo is written by number
+theorists, while fplll is written
+by lattice cryptanalysis academics; they are good benchmarks against which to compare.
+We first show how the execution time varies as the basis (matrix) size
+varies over [4 8 16 32 64]. For each matrix size, 20 random bases are
+generated using fplll's `gen_qary` function with depth of 25 bits,
+with the average execution time shown; the `eltype` is `Int64` except
+for NEMO, which can only use GMP (its own `BigInt`); in all cases the
+`δ=.99`. The vertical axis shows execution time on a logarithmic
+scale; the x-axis is also logarithmic. The generally linear nature of
+the LLL curves supports the polynomial-time nature of the
+algorithm. The `lll` function is slower, while `l2avx` is similar to
+fplll. Though not shown, using bases from `gen_qary` with bit depth of
+45 gives fplll a larger advantage. Though the LLLplus functions are
+not always the fastest, they are in the same ballpark as the C and
+C++ tools; if this package gets more users, we'll spend more time on
+speed :-) This figure was generated using code in `test/timeLLLs.jl`.

![Time vs basis size](docs/src/assets/timeVdim_25bitsInt64.png)
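For readers who want a rough version of this comparison without the full harness, a sketch follows; it assumes BenchmarkTools is installed, the `gen_qary_b` arguments are illustrative guesses, and passing `δ` is left to the reader (the real script is `test/timeLLLs.jl`):

```julia
using LLLplus, BenchmarkTools   # BenchmarkTools assumed to be installed

# rough sketch of the timing setup described above; not the actual benchmark code
for n in [4, 8, 16, 32, 64]
    B = gen_qary_b(Int64, n, n ÷ 2, 25)   # n×n q-ary basis, 25-bit q (arguments are a guess)
    t = @belapsed lll($B)                 # the plot fixed δ = .99; see ?lll for passing δ
    println("n = $n: $(round(t * 1e3, digits = 2)) ms")
end
```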

-One question that could arise when looking at the plot above is what
+One additional question that could arise when looking at the plot above is what
the quality of the basis is. In the next plot we show execution time
vs the norm of the first vector in the reduced basis; this first
vector is typically the smallest, and its norm is a rough indication of
6 changes: 3 additions & 3 deletions src/applications.jl
@@ -21,15 +21,15 @@ Combinatorial Optimization, 6th International IPCO Conference, vol 1412, pp
julia> A=[10 1 -9; 1 8 8]; xtrue=[0; 2; 9]; d=A*xtrue;
julia> integerfeasibility(A,d)
-3-element Array{Int64,1}:
+3-element Vector{Int64}:
0
2
9
julia> A=[10 1.1 -9.1; 1 8 8]; d=A*xtrue;
julia> integerfeasibility(A,d)
-3-element Array{Float64,1}:
+3-element Vector{Float64}:
0.0
2.0
9.0
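As a quick sanity check of the docstring example above (my addition, not part of the diff), one can verify that the returned vector really solves the system:

```julia
using LLLplus

A = [10 1 -9; 1 8 8]; xtrue = [0; 2; 9]; d = A * xtrue
x = integerfeasibility(A, d)
A * x == d    # true: the recovered integer vector reproduces d exactly
```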
@@ -337,7 +337,7 @@ Introduction to the LLL Algorithm and Its Applications" CRC Press, 2012.
julia> x = [0.3912641745333527; 0.5455179974014548; 0.1908698210882469];
julia> rationalapprox(x,1e4,Int64)
-3-element Array{Rational{Int64},1}:
+3-element Vector{Rational{Int64}}:
43//110
6//11
21//110
2 changes: 1 addition & 1 deletion src/brun.jl
@@ -17,7 +17,7 @@ Signal Processing Magazine, March 2011
# Examples
```jldoctest
julia> H=[1 2; 3 4]; B,T=brun(H); T
-2×2 Array{Int64,2}:
+2×2 Matrix{Int64}:
3 -1
-2 1
12 changes: 6 additions & 6 deletions src/cvp.jl
@@ -19,12 +19,12 @@ Agrell, IEEE Transactions on Information Theory, vol 57, issue 6 , June 2011.
# Examples
```jldoctest
julia> H=[1 2; 3 4]; Q,R=qr(H); uhat = cvp(Q'*[0,2],R)
-2-element Array{Float64,1}:
+2-element Vector{Float64}:
2.0
-1.0
julia> uhat = cvp(Q'*[0,2],R,Val(false),0,100)
-2-element Array{Float64,1}:
+2-element Vector{Float64}:
1.0
0.0
@@ -162,14 +162,14 @@ August 2002.
# Examples
```jldoctest
julia> H=[1 2; 3 4]; svp(H)
-2-element Array{Int64,1}:
+2-element Vector{Int64}:
-1
-1
julia> H = [1 0 0 0; 0 1 0 0; 208 175 663 0; 651 479 0 663];
julia> svp(H)
-4-element Array{Int64,1}:
+4-element Vector{Int64}:
16
-19
3
@@ -201,7 +201,7 @@ in the lattice with basis B.
# Examples
```jldoctest
julia> H=[1 2; 3 4]; s=svp(H); u=LLLplus.svpu(H); [u s H*u]
-2×3 Array{Int64,2}:
+2×3 Matrix{Int64}:
1 -1 -1
-1 -1 -1
@@ -234,7 +234,7 @@ Theory, vol. 48, no. 8, August 2002.
julia> H = inv([1.0 2; 0 4]);
julia> uhat = LLLplus.decodeSVPAgrell(H)
-2-element Array{Int64,1}:
+2-element Vector{Int64}:
-1
0
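A small cross-check (my addition) of how `svp` relates to `lll`, reusing the basis from the `svp` docstring above: since `svp` returns a true shortest nonzero lattice vector, its norm can be no larger than that of the first column of any reduced basis.

```julia
using LLLplus, LinearAlgebra

H = [1 0 0 0; 0 1 0 0; 208 175 663 0; 651 479 0 663]  # basis from the svp docstring
s = svp(H)                  # a shortest nonzero lattice vector
B, _ = lll(H)               # LLL-reduced basis; its columns are lattice vectors
norm(s) <= norm(B[:, 1])    # true: no nonzero lattice vector is shorter than s
```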
4 changes: 2 additions & 2 deletions src/hard_sphere.jl
@@ -25,7 +25,7 @@ X=hardsphere(Y,H,Nc)
# Examples:
```jldoctest
julia> X = hardsphere([1. 1]', [1. 2; 3 4],2)
-2×1 Array{Int64,2}:
+2×1 Matrix{Int64}:
-1
1
@@ -80,7 +80,7 @@ disappear in the future.
# Examples:
```jldoctest
julia> X = LLLplus.algIIsmart([5. 8]', [1. 2; 0 4],[4,4])
-2×1 Array{Float64,2}:
+2×1 Matrix{Float64}:
1.0
2.0
4 changes: 2 additions & 2 deletions src/hkz.jl
@@ -15,7 +15,7 @@ http://www.cas.mcmaster.ca/~qiao/publications/ZQW11.pdf
julia> H=[1 2; 3 4];
julia> B,T= hkz(H); B
-2×2 Array{Int64,2}:
+2×2 Matrix{Int64}:
1 -1
-1 -1
@@ -60,7 +60,7 @@ julia> B=[1 2; 3 4];
julia> Z,_= LLLplus.hkz_red(B);
julia> B*Z
-2×2 Array{Int64,2}:
+2×2 Matrix{Int64}:
1 -1
-1 -1
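Since `hkz` is the newcomer in the updated README example, a sketch comparing HKZ with LLL via the orthogonality defect may help; `gen_qary_b` and `orthogonalitydefect` appear elsewhere in this diff, but the specific arguments here are illustrative only, and HKZ is merely expected (not guaranteed) to do at least as well as LLL on a given basis.

```julia
using LLLplus

H = gen_qary_b(Int64, 8, 4, 6)   # small q-ary basis; arguments chosen for illustration
Blll, _ = lll(H)
Bhkz, _ = hkz(H)
# orthogonality defect: 1.0 for a perfectly orthogonal basis, larger otherwise
[orthogonalitydefect(H) orthogonalitydefect(Blll) orthogonalitydefect(Bhkz)]
```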
6 changes: 3 additions & 3 deletions src/l2.jl
@@ -22,14 +22,14 @@ generally faster than `lll` on small bases, say of dimensions less than 80.
julia> H= [1 2; 3 4];B,_ = LLLplus.l2(H); B
┌ Warning: l2 is in a raw (alpha) state and may change. See the help text.
└ @ LLLplus src/l2.jl:45
-2×2 Array{Int64,2}:
+2×2 Matrix{Int64}:
1 -1
1 1
julia> H= [.5 2; 3 4]; B,_= LLLplus.l2(H); B
┌ Warning: l2 is in a raw (alpha) state and may change. See the help text.
└ @ LLLplus src/l2.jl:45
-2×2 Array{Float64,2}:
+2×2 Matrix{Float64}:
1.5 -1.0
1.0 2.0
@@ -193,7 +193,7 @@ julia> using LoopVectorization
julia> H= [1 2; 3 4];B = l2avx(H)
┌ Warning: l2avx is in a raw (alpha) state and may change. See the help text.
└ @ LLLplus ~/shared/LLLplus/src/l2.jl:42
-2×2 Array{Int64,2}:
+2×2 Matrix{Int64}:
1 -1
1 1
```
2 changes: 1 addition & 1 deletion src/latticegen.jl
@@ -29,7 +29,7 @@ O. Regev. Post-Quantum Cryptography. Chapter of Lattice-based Cryptography,
# Examples
```
julia> b=gen_qary_b(Int64,2,1,6)
-2×2 Array{Int64,2}:
+2×2 Matrix{Int64}:
1 0
7 32
10 changes: 5 additions & 5 deletions src/lll.jl
@@ -18,12 +18,12 @@ Annalen 261, 1982. http://ftp.cs.elte.hu/~lovasz/scans/lll.pdf
# Examples
```jldoctest
julia> H= [1 2; 3 4];B,_ = lll(H); B
-2×2 Array{Int64,2}:
+2×2 Matrix{Int64}:
1 -1
1 1
julia> H= BigFloat.([1.5 2; 3 4]) .+ 2im; B,_= lll(H); B
-2×2 Array{Complex{BigFloat},2}:
+2×2 Matrix{Complex{BigFloat}}:
0.50+0.0im 0.0+1.0im
1.0+0.0im 0.0+0.0im
@@ -114,7 +114,7 @@ Algorithm and Its Applications" by Murray R. Bremner, CRC Press, 2012.
# Examples
```jldoctest
julia> H = [1 2; 3 3]; B = gauss(H)
-2×2 Array{Float64,2}:
+2×2 Matrix{Float64}:
1.0 0.0
0.0 3.0
@@ -196,12 +196,12 @@ Size reduction is the first part of LLL lattice reduction.
# Examples
```jldoctest
julia> H= [1 2; 3 4];B,_ = sizereduction(H); B
-2×2 Array{Int64,2}:
+2×2 Matrix{Int64}:
1 1
3 1
julia> H= BigFloat.([1.5 2; 3 4]) .+ 2im; B,_= sizereduction(H); B
-2×2 Array{Complex{BigFloat},2}:
+2×2 Matrix{Complex{BigFloat}}:
1.5+2.0im 0.50+0.0im
3.0+2.0im 1.0+0.0im
4 changes: 2 additions & 2 deletions src/seysen.jl
@@ -17,12 +17,12 @@ http://link.springer.com/article/10.1007%2FBF01202355
# Examples
```jldoctest
julia> H= [1 2; 3 4];B,T = seysen(H); B
-2×2 Array{Int64,2}:
+2×2 Matrix{Int64}:
-1 1
1 1
julia> H= BigFloat.([1.5 2; 3 4]) .+ 2im; B,_= seysen(H); B
-2×2 Array{Complex{BigFloat},2}:
+2×2 Matrix{Complex{BigFloat}}:
0.0+1.0im 0.50+0.0im
0.0+0.0im 1.0+0.0im
2 changes: 1 addition & 1 deletion src/utilities.jl
@@ -12,7 +12,7 @@ on page 2 of Bennet:
julia> H= [1 2; 3 4];B,T=lll(H);
julia> [orthogonalitydefect(H) orthogonalitydefect(B)]
-1×2 Array{Float64,2}:
+1×2 Matrix{Float64}:
7.07107 1.0
```
4 changes: 2 additions & 2 deletions src/vblast.jl
@@ -15,12 +15,12 @@ is done, with the result that `W` will no longer have orthogonal rows and
# Examples
```jldoctest
julia> H= [1. 2; 3 4];W,_ = vblast(H); W
-2×2 Array{Float64,2}:
+2×2 Matrix{Float64}:
1.5 -0.5
0.1 0.3
julia> H= BigFloat.([1.5 2; 3 4]) .+ 2im; W,_= vblast(H); W
-2×2 Array{Complex{BigFloat},2}:
+2×2 Matrix{Complex{BigFloat}}:
-2.0+3.0im 2.0-1.5im
0.0779221-0.103896im 0.155844-0.103896im

2 comments on commit c2f07fb

@chrisvwx
Owner Author


@JuliaRegistrator

Registration pull request created: JuliaRegistries/General/33543

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:

git tag -a v1.3.2 -m "<description of version>" c2f07fba3cc136c30c5dcf23e9da5630512261dc
git push origin v1.3.2
