Added brun, cvp, svp, subsetsum, integerFeasibility (#10)
* Added "cvp" for the closest vector problem
* Added "svp" for the shortest vector problem
* Added example LLL applications: a "subsetsum" function which solves the subset sum problem, and "integerFeasibility" for integer-programming feasibility
* Added "gauss" decomposition as a learning tool
* Updated README and docs for all functions (usage sketch of the new functions below)
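
The sketch below shows how the new functions might be called; the names match the bullets above, but the signatures, argument order, and return values are assumptions rather than documentation of the code in this commit.

```julia
# Hypothetical usage of the functions added in this commit. All signatures and
# return values here are assumptions inferred from the commit description.
using LLLplus, LinearAlgebra

B = [1 2; 3 4]                 # a small integer lattice basis

Bg = gauss(B)                  # assumed: Gauss/Lagrange reduction of a 2-d basis
b  = svp(B)                    # assumed: a shortest nonzero vector of the lattice

# assumed CVP convention: pass Q'y and the upper-triangular R from a QR factorization
y = [1.1, 2.9]
Q, R = qr(float(B))
u = cvp(Q' * y, R)             # integer coefficients of the closest lattice point

a = [1, 5, 9, 21]              # assumed: find a binary x with a'x == s
s = 22
x = subsetsum(a, s)

A = [1 2 3; 4 5 6]             # assumed: find an integer z with A*z == d
d = [6, 15]
z = integerFeasibility(A, d)
```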
chrisvwx authored Apr 1, 2019
1 parent 536ebb2 commit b767593
Showing 13 changed files with 626 additions and 226 deletions.
146 changes: 81 additions & 65 deletions README.md
@@ -1,43 +1,41 @@
# LLLplus
# LLLplus.jl

[![Build Status](https://travis-ci.org/christianpeel/LLLplus.jl.svg?branch=master)](https://travis-ci.org/christianpeel/LLLplus.jl)
<!---
[![LLLplus](http://pkg.julialang.org/badges/LLLplus_release.svg)](http://pkg.julialang.org/?pkg=LLLplus&ver=0.7)

Lattice reduction and related lattice tools are used in wireless
communication, cryptography, and mathematics. This package provides
the following tools: Lenstra-Lenstra-Lovász (LLL) lattice reduction,
Seysen lattice reduction, a sphere decoder, and VBLAST matrix
decomposition. This package was created as a way to explore the Julia
language, and is not intended to be a canonical tool for lattice
reduction; *use at your own risk!* :-)

[LLL](https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%93Lov%C3%A1sz_lattice_basis_reduction_algorithm) [1]
lattice reduction is a powerful tool in computer science that is used
for cryptanalysis of public-key systems, to solve quadratic equations,
and to solve other linear problems such as those in multi-terminal
wireless communications.
The LLL is often used as a bounded-complexity approximate solution to
the
-->

Lattice reduction and related lattice tools are used in
cryptography, digital communication, and integer programming. LLLplus
includes Lenstra-Lenstra-Lovász (LLL) lattice reduction, Seysen
lattice reduction, VBLAST matrix decomposition, and a closest vector
problem (CVP) solver.

[LLL](https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%93Lov%C3%A1sz_lattice_basis_reduction_algorithm)
[1] lattice reduction is a powerful tool that is widely used in
cryptanalysis, in cryptographic system design, in digital
communications, and to solve other integer problems. LLL reduction is
often used as an approximate solution to the
[shortest vector problem](https://en.wikipedia.org/wiki/Lattice_problem#Shortest_vector_problem_.28SVP.29)
(SVP).
Seysen [2] introduced a lattice reduction which focuses on global
optimization rather than local optimization as in LLL.
The
[closest vector problem](https://en.wikipedia.org/wiki/Lattice_problem#Closest_vector_problem_.28CVP.29)
(CVP) is related to the SVP; in the context of multi-antenna decoding
it is referred to as
[sphere-decoding](https://en.wikipedia.org/wiki/Lattice_problem#Sphere_decoding).

Finally, we include code to do a
[V-BLAST](https://en.wikipedia.org/wiki/Bell_Laboratories_Layered_Space-Time)
(Vertical-Bell Laboratories Layered Space-Time) matrix
decomposition. This decomposition is used in a detection algorithm [3]
for decoding spatially-multiplexed streams of data on multiple
antennas or other multi-terminal systems. V-BLAST is not as widely
used outside of the wireless communication community as lattice
reduction and CVP techniques such as the sphere decoder. In the Julia
package [MUMIMO.jl](https://github.com/christianpeel/MUMIMO.jl) we use
the LLL, sphere-decoder, and V-BLAST functions in this package to
decode multi-user, multi-antenna signals.
(SVP). We also include Gauss/Lagrange 2-dimensional lattice reduction
as a learning tool, and Seysen [2] lattice reduction which
simultaneously reduces a basis `B` and its dual `inv(B)'`. The LLL and
Seysen algorithms are based on [3]. The
[CVP](https://en.wikipedia.org/wiki/Lattice_problem#Closest_vector_problem_.28CVP.29)
solver is based on [4] and can handle lattices and bounded integer
constellations.
<!-- Many GPS researchers refer to the CVP as the "integer least squares" -->
<!-- problem, while digital communication researchers often call it the -->
<!-- [sphere-decoder](https://en.wikipedia.org/wiki/Lattice_problem#Sphere_decoding). -->

We also include code to do a
[Vertical-Bell Laboratories Layered Space-Time](https://en.wikipedia.org/wiki/Bell_Laboratories_Layered_Space-Time)
(V-BLAST) [5] matrix decomposition which is used in digital
communications. In digital communications (see
[MUMIMO.jl](https://github.com/christianpeel/MUMIMO.jl)) the LLL,
Seysen, V-BLAST, and CVP functions are used to solve (exactly or
approximately) CVP problems in encoding and decoding multi-terminal
signals.

### Examples

@@ -52,19 +50,19 @@ using LLLplus
N = 1000;
H = randn(N,N) + im*randn(N,N);
println("Testing LLL on $(N)x$(N) complex matrix...")
@time (B,T,Q,R) = lll(H);
@time B,T,Q,R = lll(H);
M = 200;
println("Testing VBLAST on $(M)x$(M) chunk of same matrix...")
@time (W,P,B) = vblast(H[1:M,1:M]);
@time W,P,B = vblast(H[1:M,1:M]);

# Time LLL, Seysen decompositions of a 100x100 Int64 matrix with
# rand entries distributed uniformly between -100:100
N = 100;
H = rand(-100:100,N,N);
println("Testing LLL on $(N)x$(N) real matrix...")
@time (B,T,Q,R) = lll(H);
@time B,T,Q,R = lll(H);
println("Testing Seysen on same $(N)x$(N) matrix...")
@time (B,T) = seysen(H);
@time B,T = seysen(H);
```
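
Beyond the timing examples above, the CVP solver added in this commit can be exercised on a small noisy-decoding problem; the sketch below assumes a `cvp(Q'y, R)` calling convention and is illustrative only.

```julia
# Sketch: recover an integer vector from a noisy linear observation with the
# CVP solver. The cvp(Q'y, R) convention used here is an assumption.
using LLLplus, LinearAlgebra, Random

Random.seed!(0)
n = 8
H = randn(n, n)                  # lattice basis (or channel matrix)
x = rand(-10:10, n)              # true integer vector
y = H * x + 0.001 * randn(n)     # noisy observation of H*x

Q, R = qr(H)
xhat = cvp(Q' * y, R)            # integer vector closest to y in the lattice of H
println("recovered exactly: ", round.(Int, xhat) == x)
```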

### Execution Time results
@@ -76,31 +74,42 @@ In the tests we time execution of the lattice-reduction functions,
average the results over multiple random matrices, and show results as
a function of the size of the matrix and of the data type.

We first show how the time varies with matrix size (1,2,4,...64); the
We first show how the time varies with matrix size (1,2,4,...256); the
vertical axis shows execution time on a logarithmic scale; the x-axis
is also logarithmic. The generally linear nature of the LLL curve supports
the polynomial-time nature of the algorithm. Each data point
is the average execution time over 10 runs of a lattice-reduction
is the average execution time over 40 runs of a lattice-reduction
technique, where the matrices used were generated using 'randn' to
emulate unit-variance Gaussian-distributed values.
![Time vs matrix size](benchmark/perfVsNfloat32.png)

![Time vs matrix size](benchmark/perfVsNfloat64.png)

Though the focus of the package is on floating-point,
all the modules can handle a variety of data types. In the next figure
we show execution time for several datatypes (Int32, Int64,
Int128, Float64, BigInt, and BigFloat) which are used to
generate 100 4x4 matrices, over which execution time for the lattice
Int128, Float32, Float64, DoubleFloat, BigInt, and BigFloat) which are used to
generate 40 128x128 matrices, over which execution time for the lattice
reduction techniques is averaged. The vertical axis is a logarithmic
representation of execution time as in the previous
figure. ![Time vs data type](benchmark/perfVsDataTypeN16.png)
figure.

![Time vs data type](benchmark/perfVsDataType.png)
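
Both figures can be regenerated with the benchmark script changed later in this commit; the condensed sketch below assumes the Plots, BenchmarkTools, and DoubleFloats packages are installed and that it is run from the `benchmark/` directory so `lrtest.jl` is found.

```julia
# Condensed from benchmark/perftest.jl as updated in this commit.
using Plots, BenchmarkTools, LLLplus
using Printf, Random, LinearAlgebra, DoubleFloats

include("lrtest.jl")

# execution time vs data type, 128x128 matrices, 40 samples per point
lrtest(40, 2 .^[7], [100],
       [Int32,Int64,Int128,Float32,Float64,Double64,BigInt,BigFloat], "rand")
savefig("perfVsDataType.png")

# execution time vs matrix size (1,2,4,...,256), Float64, 40 samples per point
lrtest(40, 2 .^[0:8;], [1], [Float64], "randn")
savefig("perfVsNfloat64.png")
```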

The algorithm pseudocode in the monograph [6] and the survey paper [3]
were very helpful in writing the lattice reduction tools in LLLplus
and are a good resource for further study.

LLLplus does not pretend to be a tool for number-theory work. For
that, see the [Nemo.jl](https://github.com/wbhart/Nemo.jl) package
which uses the [FLINT](http://flintlib.org/) C library to do LLL
reduction on Nemo-specific data types.

### Future

Possible improvements include:
* Change from "LLLplus" to "LLLtoy" or some such to emphasize
the nature of this package.
* Add Block-Korkin-Zolotarev lattice reduction, with improvements
as in [4], and Brun lattice reduction
as in [7], code for Babai's simple CVP approximation [8] (roughly
sketched after this list), and Brun's integer relation decomposition.
* The [SVP](http://www.latticechallenge.org/svp-challenge/) Challenge
and the
[Ideal](http://www.latticechallenge.org/ideallattice-challenge/)
@@ -109,26 +118,33 @@ Possible improvements include:
performance tests. The main
[Lattice](http://www.latticechallenge.org/) Challenge also lists
references which could be used to replicate tests.
* Compare with the [Number Theory Library](http://www.shoup.net/ntl/).
* Compare with the [fplll](https://github.com/fplll/fplll) library,
the [Number Theory Library](http://www.shoup.net/ntl/), and
NEMO/FLINT.
<!-- * Make additions to -->
<!-- [ArbFloats](https://github.com/JuliaArbTypes/ArbFloats.jl) and -->
<!-- [IntervalArithmetic](https://github.com/eeshan9815/IntervalArithmetic.jl) -->
<!-- to enable them to work. -->
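
Not part of this commit, but to illustrate the Babai approximation mentioned in the list above (see [8]): the simple rounding variant can be sketched on top of `lll` in a few lines; the nearest-plane variant refines this by rounding against the Gram-Schmidt vectors.

```julia
# A sketch of Babai's simple rounding approximation to the CVP, reducing the
# basis with lll first. Illustrative only; not an LLLplus API.
using LLLplus

function babai_round(B::AbstractMatrix, y::AbstractVector)
    Br, _ = lll(B)              # reduced basis (lll also returns T, Q, R)
    c = round.(Int, Br \ y)     # round the coordinates of y in the reduced basis
    return Br * c               # an approximate closest lattice point to y
end

B = [3.0 1.0; 1.0 2.0]
y = [2.2, 4.9]
println(babai_round(B, y))
```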

### References

[1] Lenstra, A. K.; Lenstra, H. W., Jr.; Lovász, L. (1982). "Factoring
polynomials with rational coefficients". Mathematische Annalen 261
(4): 515–534.
[1] A. K. Lenstra; H. W. Lenstra Jr.; L. Lovász, ["Factoring polynomials with rational coefficients"](http://ftp.cs.elte.hu/~lovasz/scans/lll.pdf). Mathematische Annalen 261, 1982

[2] M. Seysen, ["Simultaneous reduction of a lattice basis and its reciprocal basis"](http://link.springer.com/article/10.1007%2FBF01202355) Combinatorica, 1993.

[3] D. Wuebben, D. Seethaler, J. Jalden, and G. Matz, ["Lattice Reduction - A Survey with Applications in Wireless Communications"](http://www.ant.uni-bremen.de/sixcms/media.php/102/10740/SPM_2011_Wuebben.pdf). IEEE Signal Processing Magazine, 2011.

[4] A. Ghasemmehdi, E. Agrell, ["Faster Recursions in Sphere Decoding"](https://publications.lib.chalmers.se/records/fulltext/local_141586.pdf) IEEE
Transactions on Information Theory, vol. 57, no. 6, June 2011.

[5] P. W. Wolniansky, G. J. Foschini, G. D. Golden, R. A. Valenzuela, ["V-BLAST: An Architecture for Realizing Very High Data Rates Over the Rich-Scattering Wireless Channel"](http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=738086). Proc. URSI
ISSSE: 295–300, 1998.

[2] M. Seysen, ["Simultaneous reduction of a lattice basis and its reciprocal basis"](http://link.springer.com/article/10.1007%2FBF01202355) Combinatorica, Vol 13, no 3, pp 363-376, 1993.
[6] M. R. Bremner, ["Lattice Basis Reduction: An Introduction to the LLL
Algorithm and Its Applications"](https://www.amazon.com/Lattice-Basis-Reduction-Introduction-Applications/dp/1439807027) CRC Press, 2012.

[3] P. W. Wolniansky, G. J. Foschini, G. D. Golden, R. A. Valenzuela (September 1998). ["V-BLAST: An Architecture for Realizing Very High Data Rates Over the Rich-Scattering Wireless Channel"](http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=738086). Proc. URSI ISSSE: 295–300.
[7] Y. Chen, P. Q. Nguyen, ["BKZ 2.0: Better Lattice Security Estimates"](http://www.iacr.org/archive/asiacrypt2011/70730001/70730001.pdf). Proc. ASIACRYPT 2011.

[4] Y. Chen, P. Q. Nguyen (2011) ["BKZ 2.0: Better Lattice Security Estimates"](http://www.iacr.org/archive/asiacrypt2011/70730001/70730001.pdf). Proc. ASIACRYPT 2011.
[8] L. Babai, ["On Lovász’ lattice reduction and the nearest lattice point problem"](https://link.springer.com/article/10.1007/BF02579403),
Combinatorica, 1986.

75 changes: 39 additions & 36 deletions benchmark/lrtest.jl
@@ -19,7 +19,7 @@ function lrtest(Ns::Int,N::Array{Int,1},L::Array{Int,1},

# Packages that need to be loaded for lrtest to work include PyPlot and
# BenchmarkTools

#lrAlgs = [lll, lllrecursive,seysen]
lrAlgs = [lll, seysen]

@@ -35,16 +35,18 @@ if length(N)>1
for s = 1:length(N)
push!(out,lrsim(Ns,N[s],L[1],dataType[1], distType, lrAlgs));
end
plotfun = PyPlot.loglog;
xscale=:log10;
yscale=:log10;
xval = N;
xlab = "Matrix Size";
tstr = @sprintf("Ns=%d,L=%d,Type=%s,dist=%s",
Ns,L[1],string(dataType[1]),distType);
tstr = @sprintf("Ns=%d,Type=%s,dist=%s",
Ns,string(dataType[1]),distType);
elseif length(L)>1
for s = 1:length(L)
push!(out,lrsim(Ns,N[1],L[s],dataType[1], distType, lrAlgs));
end
plotfun = semilogy;
xscale = :identity;
yscale = :log10;
xval = L;
xlab = "L";
tstr = @sprintf("Ns=%d,N=%d,Type=%s,dist=%s",
@@ -53,11 +55,17 @@ elseif length(dataType)>1
for s = 1:length(dataType)
push!(out,lrsim(Ns,N[1],L[1],dataType[s], distType, lrAlgs));
end
plotfun = semilogy;
xscale = :identity;
yscale = :log10;
xval = 1:length(dataType)
xtickStrs = map(string,dataType)
xlab = "dataType";
tstr = @sprintf("Ns=%d,N=%d,L=%d,dist=%s",Ns,N[1],L[1],distType);
for n=xval
if dataType[n]== DoubleFloat{Float64}
xtickStrs[n] = "Double64"
end
end
end

pColor = ["r-","b.-","k-","g-","c-","m-",
@@ -67,31 +75,33 @@ pIdx = 1;

Nout = size(out,1);

f=figure(num=1,figsize=(6.5,4.5))
clf()
#plt.style[:use]("ggplot")

times = zeros(Nout);
orthf = zeros(Nout);
pltT = plot(legend=:topleft);
#pltO = plot(legend=:topleft);

for a=1:length(out[1][2])
for k=1:Nout
times[k] = out[k][1][a];
times[k] = out[k][2][a];
orthf[k] = out[k][3][a];
end
plotfun(xval,times,pColor[pIdx],label=out[1][2][a]);
plot!(pltT,xval,times,xscale=xscale,yscale=yscale,label=out[1][1][a],linewidth=3);
# plot!(pltO,xval,orthf,xscale=xscale,yscale=yscale,label=out[1][1][a],linewidth=3);
pIdx = pIdx==length(pColor) ? 1 : pIdx + 1;
end

xlabel(xlab);
ylabel("execution time (sec)");
legend(loc=2);
plot!(pltT,xlabel=xlab,ylabel="execution time (sec)")
#plot!(pltO,xlabel=xlab,ylabel="orthogonalization factor")

if ~isempty(xtickStrs)
xticks(xval,xtickStrs)
plot!(pltT,xticks=(xval,xtickStrs))
# plot!(pltO,xticks=(xval,xtickStrs))
end
grid(figure=f,which="both",axis="y")
grid(figure=f,which="major",axis="x")

display(pltT)
#display(pltO)

return
end

#######################################################################
@@ -130,12 +140,12 @@ for ix = 1:Ns
# CPUtic();
# (B,T) = lrAlgs[ax](data);
# times[ax,ix] = CPUtoq();
times[ax,ix] = @belapsed (B,T) = $lrAlgs[$ax]($data) samples=2 seconds=1
# detB = abs(det(B))
# # Hermite factor
# hermitef[ax,ix] = norm(B[:,1])/detB^(1/N)
# # Orthogonality defect
# orthf[ax,ix] = prodBi(B,N)/detB
times[ax,ix] = @elapsed (B,T) = lrAlgs[ax](data)
detB = abs(det(B))
# Hermite factor
hermitef[ax,ix] = norm(B[:,1])/detB^(1/N)
# Orthogonality defect
orthf[ax,ix] = prodBi(B,N)/detB

# if abs(abs(det(T))-1.0)>1e-6
# println("For $(lrAlgs[ax]), det(T) is $(abs(det(T)))")
@@ -152,7 +162,7 @@ end
for ax = 1:length(lrAlgs)
algNames[ax] = string(lrAlgs[ax]);
end
@printf("%8d %6d %6d %10s", Ns, N, L, string(dataType))
@printf("%8d %6d %6d %10s", Ns, N, L, string(dataType)[1:min(end,10)])

#outtimes = mean(times,2);
outtimes = zeros(Nalgs,1);
@@ -162,25 +172,18 @@ mean(x) = sum(x)/length(x)
for ax = 1:Nalgs
# stimes = sort(vec(times[ax,:]));
# outtimes[ax] = mean(stimes[1:floor(Ns*2/3)]);
# outtimes[ax] = minimum(times[ax,:]);
outtimes[ax] = mean(times[ax,:]);
outHF[ax] = mean(hermitef[ax,:]);
outOF[ax] = mean(orthf[ax,:]);
end

# BOLD= "\x1B[1;30m"
# RESET="\x1B[0;0m"
# (mn,mx)=findmin(outtimes)
for ax = 1:min(Nalgs,7)
# if ax==mx
# @printf(" %s%10.4f%s",BOLD,outtimes[ax]*1000,RESET)
# else
@printf(" %10.4f",outtimes[ax]*1000)
# @printf(" %10.4f",outHF[ax])
# end
@printf(" %10.4f",outtimes[ax]*1000)
end
@printf("\n")

return outtimes, algNames
return algNames, outtimes, outOF, outHF
end

#######################################################################
Expand All @@ -191,7 +194,7 @@ function prodBi(B,N)
for m=1:N
pn+=conj(B[m,n])*B[m,n]
end
println("pn=$(pn), typeof(pn)=$(typeof(pn))")
# println("pn=$(pn), typeof(pn)=$(typeof(pn))")
prod*=sqrt(pn)
end
return prod
Binary file added benchmark/perfVsDataType.png
Binary file removed benchmark/perfVsNfloat32.png
Binary file added benchmark/perfVsNfloat64.png
11 changes: 6 additions & 5 deletions benchmark/perftest.jl
@@ -1,14 +1,15 @@
using PyPlot
using Plots
using BenchmarkTools
using LLLplus
using Printf
using Random
using LinearAlgebra
using DoubleFloats

include("lrtest.jl")

lrtest(5,2 .^[2],[100],[Int32,Int64,Int128,Float64,BigInt,BigFloat],"rand")
savefig("benchmark/perfVsDataTypeN16.png") # run from root directory
lrtest(40,2 .^[7],[100],[Int32,Int64,Int128,Float32,Float64,Double64,BigInt,BigFloat],"rand")
savefig("perfVsDataType.png")

lrtest(5,2 .^[0:5;],[1],[Float64],"randn")
savefig("benchmark/perfVsNfloat32.png") # run from root directory
lrtest(40,2 .^[0:8;],[1],[Float64],"randn")
savefig("perfVsNfloat64.png")
