diff --git a/Project.toml b/Project.toml index 491535f..496827d 100644 --- a/Project.toml +++ b/Project.toml @@ -2,7 +2,7 @@ name = "LLLplus" uuid = "142c1900-a1c3-58ae-a66d-b187f9ca6423" keywords = ["lattice reduction", "lattice basis reduction", "SVP", "shortest vector problem", "CVP", "closest vector problem", "LLL", "Lenstra-Lenstra-Lovász", "Seysen", "Brun", "VBLAST", "subset-sum problem", "Lagarias-Odlyzko", "Bailey–Borwein–Plouffe formula"] license = "MIT" -version = "1.3.1" +version = "1.3.2" [deps] DelimitedFiles = "8bb1440f-4735-579b-a4ab-409b98df4dab" diff --git a/README.md b/README.md index e2e583f..dfd3edd 100644 --- a/README.md +++ b/README.md @@ -5,12 +5,13 @@ LLLplus provides lattice tools such as [Lenstra-Lenstra-Lovász](https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%93Lov%C3%A1sz_lattice_basis_reduction_algorithm) -(LLL) lattice reduction. This class of tools are of practical and -theoretical use in cryptography, digital communication, and integer programming. +(LLL) lattice reduction; these tools are of practical and +theoretical use in cryptography, digital communication, integer +programming, and more. This package is experimental and not a robust tool; use at your own risk :-) -LLLplus provides functions for LLL, +LLLplus has functions for LLL, [Seysen](http://link.springer.com/article/10.1007%2FBF01202355), and [Hermite-Korkine-Zolotarev](http://www.cas.mcmaster.ca/~qiao/publications/ZQW11.pdf) lattice reduction @@ -30,7 +31,7 @@ functions are also included; see the `subsetsum`,

Each function contains documentation and examples available via Julia's -built-in documentation system, for example with `?lll`. Documentation +built-in documentation system (try `?lll` or `@doc(lll)`). Documentation for all functions is [available](https://christianpeel.github.io/LLLplus.jl/dev). A tutorial notebook is found in the [`docs`](docs/LLLplusTutorial.ipynb) directory or on [nbviewer](https://nbviewer.jupyter.org/github/christianpeel/LLLplus.jl/blob/master/docs/LLLplusTutorial.ipynb). @@ -43,11 +44,12 @@ Pkg.add("LLLplus") using LLLplus # do lattice reduction on a matrix with randn entries -N = 100; +N = 40; H = randn(N,N); B,T = brun(H); B,T = lll(H); B,T = seysen(H); +B,T = hkz(H); # check out the CVP solver Q,R=qr(H); @@ -63,28 +65,32 @@ sum(abs.(u-uhat))

Execution Time results (click for details)

-In the first test we compare the `lll` function from LLLplus, the +In the first test we compare several LLL functions: the `lll` function from LLLplus, the `l2avx` function in the `src\l2.jl` file in LLLplus, the -`lll_with_transform` function from Nemo (which uses FLINT), and the -`lll_reduction` function from fplll. Nemo and fplll are written by -number theorists and are good benchmarks against which to compare. We -first show how the execution time varies as the basis (matrix) size -varies over [4 8 16 32 64]. For each matrix size, 20 random bases -are generated using fplll's `gen_qary` function with depth of 25 -bits, with the average execution time shown; the `eltype` is `Int64` -except for NEMO, which uses GMP (its own `BigInt`); in all cases the -`δ=.99`. The vertical axis shows -execution time on a logarithmic scale; the x-axis is also -logarithmic. The generally linear nature of the LLL curves supports -the polynomial-time nature of the algorithm. The `LLLplus.lll` -function is slower, while `l2avx` is similar to fplll. Though not -shown, using bases from `gen_qary` with bit depth of 45 gives fplll -a larger advantage. This figure was generated using code in -`test/timeLLLs.jl`. +`lll_with_transform` function from +[Nemo.jl](https://github.com/Nemocas/Nemo.jl) (which uses FLINT), and +the `lll_reduction` function from +[fplll](https://github.com/fplll/fplll). Nemo is written by number +theorists, while fplll is written +by lattice cryptanalysis academics; they are good benchmarks against which to compare. +We first show how the execution time varies as the basis (matrix) size +varies over [4 8 16 32 64]. For each matrix size, 20 random bases are +generated using fplll's `gen_qary` function with a depth of 25 bits, +with the average execution time shown; the `eltype` is `Int64` except +for Nemo, which can only use GMP (its own `BigInt`); in all cases +`δ=.99`. The vertical axis shows execution time on a logarithmic +scale; the x-axis is also logarithmic. 
The generally linear nature of +the LLL curves supports the polynomial-time nature of the +algorithm. The `lll` function is slower, while `l2avx` is similar to +fplll. Though not shown, using bases from `gen_qary` with a bit depth of +45 gives fplll a larger advantage. While the LLLplus functions are +not always the fastest, they are in the same ballpark as the C and +C++ tools; if this package gets more users, we'll spend more time on +speed :-) This figure was generated using code in `test/timeLLLs.jl`. ![Time vs basis size](docs/src/assets/timeVdim_25bitsInt64.png) -One question that could arise when looking at the plot above is what +One additional question that could arise when looking at the plot above is what the quality of the basis is. In the next plot we show execution time vs the norm of the first vector in the reduced basis; this first vector is typically the smallest, and its norm is a rough indication of diff --git a/src/applications.jl b/src/applications.jl index 50dec74..0f01035 100644 --- a/src/applications.jl +++ b/src/applications.jl @@ -21,7 +21,7 @@ Combinatorial Optimization, 6th International IPCO Conference, vol 1412, pp julia> A=[10 1 -9; 1 8 8]; xtrue=[0; 2; 9]; d=A*xtrue; julia> integerfeasibility(A,d) -3-element Array{Int64,1}: +3-element Vector{Int64}: 0 2 9 @@ -29,7 +29,7 @@ julia> integerfeasibility(A,d) julia> A=[10 1.1 -9.1; 1 8 8]; d=A*xtrue; julia> integerfeasibility(A,d) -3-element Array{Float64,1}: +3-element Vector{Float64}: 0.0 2.0 9.0 @@ -337,7 +337,7 @@ Introduction to the LLL Algorithm and Its Applications" CRC Press, 2012. 
julia> x = [0.3912641745333527; 0.5455179974014548; 0.1908698210882469]; julia> rationalapprox(x,1e4,Int64) -3-element Array{Rational{Int64},1}: +3-element Vector{Rational{Int64}}: 43//110 6//11 21//110 diff --git a/src/brun.jl b/src/brun.jl index ff31800..0a264fa 100644 --- a/src/brun.jl +++ b/src/brun.jl @@ -17,7 +17,7 @@ Signal Processing Magazine, March 2011 # Examples ```jldoctest julia> H=[1 2; 3 4]; B,T=brun(H); T -2×2 Array{Int64,2}: +2×2 Matrix{Int64}: 3 -1 -2 1 diff --git a/src/cvp.jl b/src/cvp.jl index bab1ce8..a4af931 100644 --- a/src/cvp.jl +++ b/src/cvp.jl @@ -19,12 +19,12 @@ Agrell, IEEE Transactions on Information Theory, vol 57, issue 6 , June 2011. # Examples ```jldoctest julia> H=[1 2; 3 4]; Q,R=qr(H); uhat = cvp(Q'*[0,2],R) -2-element Array{Float64,1}: +2-element Vector{Float64}: 2.0 -1.0 julia> uhat = cvp(Q'*[0,2],R,Val(false),0,100) -2-element Array{Float64,1}: +2-element Vector{Float64}: 1.0 0.0 @@ -162,14 +162,14 @@ August 2002. # Examples ```jldoctest julia> H=[1 2; 3 4]; svp(H) -2-element Array{Int64,1}: +2-element Vector{Int64}: -1 -1 julia> H = [1 0 0 0; 0 1 0 0; 208 175 663 0; 651 479 0 663]; julia> svp(H) -4-element Array{Int64,1}: +4-element Vector{Int64}: 16 -19 3 @@ -201,7 +201,7 @@ in the lattice with basis B. # Examples ```jldoctest julia> H=[1 2; 3 4]; s=svp(H); u=LLLplus.svpu(H); [u s H*u] -2×3 Array{Int64,2}: +2×3 Matrix{Int64}: 1 -1 -1 -1 -1 -1 @@ -234,7 +234,7 @@ Theory, vol. 48, no. 8, August 2002. julia> H = inv([1.0 2; 0 4]); julia> uhat = LLLplus.decodeSVPAgrell(H) -2-element Array{Int64,1}: +2-element Vector{Int64}: -1 0 diff --git a/src/hard_sphere.jl b/src/hard_sphere.jl index 796e024..88b0c37 100644 --- a/src/hard_sphere.jl +++ b/src/hard_sphere.jl @@ -25,7 +25,7 @@ X=hardsphere(Y,H,Nc) # Examples: ```jldoctest julia> X = hardsphere([1. 1]', [1. 2; 3 4],2) -2×1 Array{Int64,2}: +2×1 Matrix{Int64}: -1 1 @@ -80,7 +80,7 @@ disappear in the future. # Examples: ```jldoctest julia> X = LLLplus.algIIsmart([5. 8]', [1. 
2; 0 4],[4,4]) -2×1 Array{Float64,2}: +2×1 Matrix{Float64}: 1.0 2.0 diff --git a/src/hkz.jl b/src/hkz.jl index 44a009d..da6b1db 100644 --- a/src/hkz.jl +++ b/src/hkz.jl @@ -15,7 +15,7 @@ http://www.cas.mcmaster.ca/~qiao/publications/ZQW11.pdf julia> H=[1 2; 3 4]; julia> B,T= hkz(H); B -2×2 Array{Int64,2}: +2×2 Matrix{Int64}: 1 -1 -1 -1 @@ -60,7 +60,7 @@ julia> B=[1 2; 3 4]; julia> Z,_= LLLplus.hkz_red(B); julia> B*Z -2×2 Array{Int64,2}: +2×2 Matrix{Int64}: 1 -1 -1 -1 diff --git a/src/l2.jl b/src/l2.jl index 0e48fd6..394aca9 100644 --- a/src/l2.jl +++ b/src/l2.jl @@ -22,14 +22,14 @@ generally faster than `lll` on small bases, say of dimensions less than 80. julia> H= [1 2; 3 4];B,_ = LLLplus.l2(H); B ┌ Warning: l2 is in a raw (alpha) state and may change. See the help text. └ @ LLLplus src/l2.jl:45 -2×2 Array{Int64,2}: +2×2 Matrix{Int64}: 1 -1 1 1 julia> H= [.5 2; 3 4]; B,_= LLLplus.l2(H); B ┌ Warning: l2 is in a raw (alpha) state and may change. See the help text. └ @ LLLplus src/l2.jl:45 -2×2 Array{Float64,2}: +2×2 Matrix{Float64}: 1.5 -1.0 1.0 2.0 @@ -193,7 +193,7 @@ julia> using LoopVectorization julia> H= [1 2; 3 4];B = l2avx(H) ┌ Warning: l2avx is in a raw (alpha) state and may change. See the help text. └ @ LLLplus ~/shared/LLLplus/src/l2.jl:42 -2×2 Array{Int64,2}: +2×2 Matrix{Int64}: 1 -1 1 1 ``` diff --git a/src/latticegen.jl b/src/latticegen.jl index 50a5a22..faada61 100644 --- a/src/latticegen.jl +++ b/src/latticegen.jl @@ -29,7 +29,7 @@ O. Regev. Post-Quantum Cryptography. Chapter of Lattice-based Cryptography, # Examples ``` julia> b=gen_qary_b(Int64,2,1,6) -2×2 Array{Int64,2}: +2×2 Matrix{Int64}: 1 0 7 32 diff --git a/src/lll.jl b/src/lll.jl index 6333ce4..046cec7 100644 --- a/src/lll.jl +++ b/src/lll.jl @@ -18,12 +18,12 @@ Annalen 261, 1982. 
http://ftp.cs.elte.hu/~lovasz/scans/lll.pdf # Examples ```jldoctest julia> H= [1 2; 3 4];B,_ = lll(H); B -2×2 Array{Int64,2}: +2×2 Matrix{Int64}: 1 -1 1 1 julia> H= BigFloat.([1.5 2; 3 4]) .+ 2im; B,_= lll(H); B -2×2 Array{Complex{BigFloat},2}: +2×2 Matrix{Complex{BigFloat}}: 0.50+0.0im 0.0+1.0im 1.0+0.0im 0.0+0.0im @@ -114,7 +114,7 @@ Algorithm and Its Applications" by Murray R. Bremner, CRC Press, 2012. # Examples ```jldoctest julia> H = [1 2; 3 3]; B = gauss(H) -2×2 Array{Float64,2}: +2×2 Matrix{Float64}: 1.0 0.0 0.0 3.0 @@ -196,12 +196,12 @@ Size reduction is the first part of LLL lattice reduction. # Examples ```jldoctest julia> H= [1 2; 3 4];B,_ = sizereduction(H); B -2×2 Array{Int64,2}: +2×2 Matrix{Int64}: 1 1 3 1 julia> H= BigFloat.([1.5 2; 3 4]) .+ 2im; B,_= sizereduction(H); B -2×2 Array{Complex{BigFloat},2}: +2×2 Matrix{Complex{BigFloat}}: 1.5+2.0im 0.50+0.0im 3.0+2.0im 1.0+0.0im diff --git a/src/seysen.jl b/src/seysen.jl index e1983e1..18c5e8b 100644 --- a/src/seysen.jl +++ b/src/seysen.jl @@ -17,12 +17,12 @@ http://link.springer.com/article/10.1007%2FBF01202355 # Examples ```jldoctest julia> H= [1 2; 3 4];B,T = seysen(H); B -2×2 Array{Int64,2}: +2×2 Matrix{Int64}: -1 1 1 1 julia> H= BigFloat.([1.5 2; 3 4]) .+ 2im; B,_= seysen(H); B -2×2 Array{Complex{BigFloat},2}: +2×2 Matrix{Complex{BigFloat}}: 0.0+1.0im 0.50+0.0im 0.0+0.0im 1.0+0.0im diff --git a/src/utilities.jl b/src/utilities.jl index a858986..42efae4 100644 --- a/src/utilities.jl +++ b/src/utilities.jl @@ -12,7 +12,7 @@ on page 2 of Bennet: julia> H= [1 2; 3 4];B,T=lll(H); julia> [orthogonalitydefect(H) orthogonalitydefect(B)] -1×2 Array{Float64,2}: +1×2 Matrix{Float64}: 7.07107 1.0 ``` diff --git a/src/vblast.jl b/src/vblast.jl index 664561c..6b04213 100644 --- a/src/vblast.jl +++ b/src/vblast.jl @@ -15,12 +15,12 @@ is done, with the result that `W` will no longer have orthogonal rows and # Examples ```jldoctest julia> H= [1. 
2; 3 4];W,_ = vblast(H); W -2×2 Array{Float64,2}: +2×2 Matrix{Float64}: 1.5 -0.5 0.1 0.3 julia> H= BigFloat.([1.5 2; 3 4]) .+ 2im; W,_= vblast(H); W -2×2 Array{Complex{BigFloat},2}: +2×2 Matrix{Complex{BigFloat}}: -2.0+3.0im 2.0-1.5im 0.0779221-0.103896im 0.155844-0.103896im