Fast tensor operations using a convenient Einstein index notation.
The default cache size for intermediate results is now the minimum of 4GB or one
quarter of your total memory (obtained via `Sys.total_memory()`). Furthermore, the
`eltype` of the temporaries is now also used as a lookup key in the LRU cache, so that
you can run the same code on objects with different sizes or element types without
constantly having to reallocate the temporaries. Finally, the task rather than the
`threadid` is used as part of the key, making the cache compatible with concurrency at
any level.
As a consequence, different objects for the same temporary location can now be cached,
so the cache can fill up quickly. Once the cache is no longer able to hold all the
temporary objects needed for your simulation, it might actually deteriorate
performance, and you might be better off disabling the cache altogether with
`TensorOperations.disable_cache()`.
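For example, a minimal sketch of controlling the cache, assuming the
`enable_cache` / `disable_cache` functions and the `maxsize` keyword provided by
TensorOperations 3.x:

```julia
using TensorOperations

# Disable caching of temporaries entirely (assumed v3 API):
TensorOperations.disable_cache()

# Re-enable it with an explicit size cap; `maxsize` (in bytes) is an
# assumed keyword name for the upper bound on the cache size:
TensorOperations.enable_cache(maxsize = 10^9)
```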
WARNING: TensorOperations 3.0 contains breaking changes if you implemented support for custom array / tensor types by overloading the package's internal implementation functions.
TensorOperations.jl is mostly used through the `@tensor` macro, which allows one to
express a given operation in index notation format, a.k.a. Einstein notation (using
Einstein's summation convention).
```julia
using TensorOperations

α = randn()
A = randn(5, 5, 5, 5, 5, 5)
B = randn(5, 5, 5)
C = randn(5, 5, 5)
D = zeros(5, 5, 5)
@tensor begin
    # f is summed over within A (partial trace); e and g are contracted with B
    D[a, b, c] = A[a, e, f, c, f, g] * B[g, b, e] + α * C[c, a, b]
    E[a, b, c] := A[a, e, f, c, f, g] * B[g, b, e] + α * C[c, a, b]
end
```
In the second-to-last line, the result of the operation is stored in the preallocated
array `D`, whereas the last line uses a different assignment operator `:=` in order to
define and allocate a new array `E` of the correct size. The contents of `D` and `E`
will be equal.
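As a minimal illustration of the difference between the two assignment forms (the
arrays `P`, `Q`, and `R` below are hypothetical):

```julia
using TensorOperations

P = randn(3, 4)
Q = zeros(4, 3)
@tensor R[j, i] := P[i, j]  # := allocates a new 4×3 array holding the transpose of P
@tensor Q[j, i] = P[i, j]   # = writes the same result into the preallocated Q
R ≈ Q                       # true
```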
For more information, please see the documentation.