LLLplus includes Lenstra-Lenstra-Lovász (LLL), Brun, and Seysen lattice reduction, as well as shortest vector problem (SVP) and closest vector problem (CVP) solvers. These lattice reduction and related lattice tools are used in cryptography, digital communication, and integer programming. The historical and practical prominence of the LLL technique in lattice tools is the reason for its use in the name "LLLplus". This package is experimental; see fplll for a robust tool.
LLL [1] lattice reduction is a powerful tool that is widely used in cryptanalysis, in cryptographic system design, in digital communications, and to solve other integer problems. LLL reduction is often used as an approximate solution to the SVP. We also include Gauss/Lagrange, Brun [2], and Seysen [3] lattice reduction techniques. The LLL, Brun, and Seysen implementations are based on the survey [4]. The CVP solver is based on [5] and can handle lattices and bounded integer constellations. A slow SVP solver based on the CVP tool is included as well.
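Of these techniques, Gauss/Lagrange reduction of a two-dimensional lattice is simple enough to sketch in a few lines. The following is a self-contained Python illustration of the idea, not the package's Julia code; gauss_reduce is a name chosen here for the sketch:

```python
# Gauss/Lagrange reduction of a 2-D lattice basis: repeatedly subtract the
# best integer multiple of the shorter vector from the longer one, swapping
# as needed, until no further shortening is possible.

def gauss_reduce(b1, b2):
    """Reduce a rank-2 integer basis; the first returned vector solves SVP."""
    norm2 = lambda v: v[0]**2 + v[1]**2
    dot = lambda u, v: u[0]*v[0] + u[1]*v[1]
    if norm2(b1) > norm2(b2):
        b1, b2 = b2, b1
    while True:
        # size-reduce b2 against b1
        mu = round(dot(b1, b2) / norm2(b1))
        b2 = (b2[0] - mu*b1[0], b2[1] - mu*b1[1])
        if norm2(b2) >= norm2(b1):
            return b1, b2
        b1, b2 = b2, b1

print(gauss_reduce((1, 0), (99, 1)))   # ((1, 0), (0, 1))
```

The same shorten-and-swap idea, generalized to higher dimensions with Gram-Schmidt bookkeeping, is the core of the LLL algorithm.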
We also include code to do a Vertical-Bell Laboratories Layered Space-Time (V-BLAST) [6] matrix decomposition, which is used in digital communications. The CVP, LLL, Brun, Seysen, and V-BLAST functions can be used to solve (exactly or approximately) CVP problems; the MUMIMO.jl package demonstrates how these functions can be used in decoding multi-antenna signals.
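To give a flavor of the decoding problem these tools address, here is a hypothetical Python sketch of the simplest approach to recovering integer symbols u from y = H*u + noise: zero-forcing (Babai rounding), i.e. rounding inv(H)*y entrywise. This is not the package's cvp or vblast code, and unlike a sphere-decoder CVP solver it can fail on ill-conditioned channels:

```python
# Zero-forcing (Babai rounding) detection for y = H*u + noise with a 2x2 real
# channel: invert H by Cramer's rule, then round to the nearest integers.

def zf_decode(H, y):
    """Hypothetical sketch: round inv(H)*y entrywise (no search, no reduction)."""
    (a, b), (c, d) = H
    det = a*d - b*c
    z1 = (d*y[0] - b*y[1]) / det
    z2 = (a*y[1] - c*y[0]) / det
    return (round(z1), round(z2))

H = ((3.0, 1.0), (1.0, 2.0))
u = (2, -1)                                  # transmitted integer symbols
y = (3.0*2 + 1.0*(-1) + 0.01,                # received signal with
     1.0*2 + 2.0*(-1) - 0.02)                # a little noise added
print(zf_decode(H, y))                       # (2, -1): symbols recovered
```

Lattice reduction improves on this by first conditioning H so that the rounding step makes far fewer errors at low signal-to-noise ratios.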
Another important application is in cryptanalysis; as a demo of a
cryptanalytic attack, see the
subsetsum function. The LLL algorithm has
been shown to solve many integer programming feasibility problems; see
integerfeasibility for a demo. Lattice tools are often used to study and solve
Diophantine problems; for example, in "simultaneous diophantine
approximation" a vector of real numbers is approximated by rationals
with a common denominator, and the package includes a demo function for
this. Finally, the package also includes a demo of using the LLL to find
spigot formulas for numerical constants.
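As a taste of Diophantine approximation, the one-dimensional case can be illustrated with continued fractions, whose convergents p/q satisfy |x - p/q| < 1/q^2. This is a self-contained Python sketch, not the package's demo (which handles the simultaneous, lattice-based case):

```python
# Continued-fraction convergents p/q of x: each satisfies |x - p/q| < 1/q^2,
# the one-dimensional version of Diophantine approximation.
from fractions import Fraction
from math import floor, isqrt

def convergents(x, n):
    """First n continued-fraction convergents of the Fraction x."""
    out = []
    p0, q0, p1, q1 = 0, 1, 1, 0
    a = x
    for _ in range(n):
        ai = floor(a)
        p0, p1 = p1, ai*p1 + p0        # standard convergent recurrences
        q0, q1 = q1, ai*q1 + q0
        out.append(Fraction(p1, q1))
        if a == ai:
            break                      # x was rational; expansion ended
        a = 1 / (a - ai)
    return out

# a 20-digit rational stand-in for sqrt(2)
x = Fraction(isqrt(2 * 10**40), 10**20)
print(", ".join(str(c) for c in convergents(x, 6)))  # 1, 3/2, 7/5, 17/12, 41/29, 99/70
```

For example, 99/70 approximates sqrt(2) to about 7e-5, far better than a generic fraction with a two-digit denominator could.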
Each function contains documentation and examples available via Julia's
built-in documentation system, for example with ?lll. Documentation
for all functions is available on
pkg.julialang.org. A tutorial notebook is
found in the
docs directory of the package.
Here are a few examples of using the functions in the package on random lattices.
```julia
using Pkg
Pkg.add("LLLplus")
using LLLplus
using LinearAlgebra

# do lattice reduction on a matrix with randn entries
N = 100;
H = randn(N, N);
B, T = brun(H);
B, T = lll(H);
B, T = seysen(H);

# check out the CVP solver
Q, R = qr(H);
u = Int.(rand(0:1e10, N));
y = H*u + rand(N)/100;
uhat = cvp(Q'*y, R);
sum(abs.(u - uhat))
```
In the first test we compare the
lll function from LLLplus, the
l2avx function in the
src/l2.jl file in LLLplus, the
lll_with_transform function from Nemo (which uses FLINT), and the
lll_reduction function from fplll. Nemo and fplll are written by
number theorists and are good benchmarks against which to compare. We
first show how the execution time varies as the basis (matrix) size
varies over [4 8 16 32 64]. For each matrix size, 20 random bases
are generated using fplll's
gen_qary function with a depth of 25
bits, and the average execution time is shown; NEMO uses GMP (its own
BigInt), and in all cases
δ=.99. The vertical axis shows
execution time on a logarithmic scale; the x-axis is also
logarithmic. The generally linear nature of the LLL curves supports
the polynomial-time nature of the algorithm. The
lll function is slower, while
l2avx is similar to fplll. Though not
shown, using bases from
gen_qary with a bit depth of 45 gives fplll
a larger advantage. This figure was generated using code in the
package repository.
One question that could arise when looking at the plot above is what
the quality of the resulting basis is. In the next plot we show execution
time vs the norm of the first vector in the reduced basis; this first
vector is typically the smallest, and its norm is a rough indication of
the quality of the reduced basis. We show results averaged over 20
random bases from
gen_qary with a depth of
25 bits, this time with the
dimension fixed at
32. The curve is created by varying the parameter δ up to
.99 in steps of
.2; the larger times and
smaller norms correspond to the largest
δ values. Though LLLplus is competitive with fplll in this case, in
many other cases the fplll code is significantly faster.
Finally, we show execution time for several built-in
datatypes (Int32, Int64, Int128, Float32, Float64, BigInt, and
BigFloat) as well as types from external packages (Float128 from
Quadmath.jl and Double64 from DoubleFloats.jl). These types are used to
generate 40 128x128 matrices, over which execution time for the
lattice reduction techniques is averaged. The vertical axis is a
logarithmic representation of execution time, as in the previous
figure. This figure was generated using code in the package repository.
There are certainly many improvements and additions that could be made to LLLplus, such as adding Block-Korkin-Zolotarev (BKZ) lattice reduction with improvements as in [8]. Even so, it would be hard to compete with fplll on features. In fact, a Julia wrapper around fplll would be the most useful addition to lattice tools in Julia; it would provide functionality not in LLLplus, such as BKZ reduction.
The algorithm pseudocode in the monograph [7] and the survey paper [4] was very helpful in writing the lattice reduction tools in LLLplus, and both are a good resource for further study. If you are trying to break one of the Lattice Challenge records or are looking for robust, well-proven lattice tools, look at fplll. Also, for many number-theoretic problems the Nemo.jl package is appropriate; it uses the FLINT C library to do LLL reduction on Nemo-specific data types. Finally, no number theorists have worked on LLLplus; please treat the package as experimental.
[1] A. K. Lenstra, H. W. Lenstra Jr., L. Lovász, "Factoring polynomials with rational coefficients," Mathematische Annalen, vol. 261, 1982.
[2] V. Brun, "En generalisation av kjedebrøken I," Skr. Vidensk. Selsk. Kristiana, Mat. Nat. Klasse, 1919.
[3] M. Seysen, "Simultaneous reduction of a lattice basis and its reciprocal basis," Combinatorica, 1993.
[4] D. Wuebben, D. Seethaler, J. Jalden, G. Matz, "Lattice Reduction - A Survey with Applications in Wireless Communications," IEEE Signal Processing Magazine, 2011.
[5] A. Ghasemmehdi, E. Agrell, "Faster Recursions in Sphere Decoding," IEEE Transactions on Information Theory, vol. 57, no. 6, June 2011.
[6] P. W. Wolniansky, G. J. Foschini, G. D. Golden, R. A. Valenzuela, "V-BLAST: An Architecture for Realizing Very High Data Rates Over the Rich-Scattering Wireless Channel," Proc. URSI ISSSE, pp. 295-300, 1998.
[7] M. R. Bremner, "Lattice Basis Reduction: An Introduction to the LLL Algorithm and Its Applications," CRC Press, 2012.
[8] Y. Chen, P. Q. Nguyen, "BKZ 2.0: Better Lattice Security Estimates," Proc. ASIACRYPT, 2011.