Tensors

Functions and classes for manipulating tensors in full, canonical, and Tucker format, and for tensor approximation.

A full tensor is simply represented as a numpy.ndarray. Additional tensor formats are implemented in the following classes:

  • CanonicalTensor
  • TuckerTensor

In addition, arbitrary tensors can be composed into sums or tensor products using the following classes:

  • TensorSum
  • TensorProd

Below, whenever we refer generically to “a tensor”, we mean either an ndarray or an instance of any of these tensor classes.

All tensor classes have members ndim, shape, and ravel which have the same meaning as for an ndarray. Any tensor can be expanded to a full ndarray using asarray(). In addition, most tensor classes have overloaded operators for adding and subtracting tensors in their native format.

All tensors can be sliced using the standard numpy [] indexing syntax. The result is a tensor in the same format, except for the case where all axes have a single scalar index, in which case the entry at the corresponding index is returned as a scalar value.
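A minimal sketch of this indexing behaviour, using a CanonicalTensor (the same syntax works for the other formats):

    import numpy as np
    from pyiga import tensor

    # random canonical tensor of shape (4, 5, 6) with rank 2
    T = tensor.CanonicalTensor(tuple(np.random.rand(n, 2) for n in (4, 5, 6)))

    S = T[1:3, :, 2:4]     # slicing preserves the canonical format
    print(S.shape)         # (2, 5, 2), following numpy slicing semantics
    print(T[0, 1, 2])      # scalar indices on all axes return a single scalar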

Linear operators on tensors, themselves represented in suitable low-rank formats, are described by the CanonicalOperator class.

Module members

class pyiga.tensor.CanonicalOperator(terms)

Represents a linear operator on tensors which is described as a sum of rank one operators (Kronecker products), i.e.,

\[\mathcal A = \sum_{r=1}^{R} A^1_r \otimes\cdots\otimes A^d_r.\]

The argument terms is a list of length R of d-tuples containing the matrices \(A^k_r\).
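For instance, a minimal sketch of setting up a Kronecker rank 2 operator of the form \(K \otimes M + M \otimes K\), using the apply() and asmatrix() methods described below (the matrices here are simple placeholders):

    import numpy as np
    import scipy.sparse
    from pyiga import tensor

    n = 10
    K = scipy.sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
    M = scipy.sparse.identity(n, format='csr')

    # Kronecker rank 2 operator: A = K (x) M + M (x) K
    A = tensor.CanonicalOperator([(K, M), (M, K)])
    print(A.R, A.ndim)          # 2, 2
    print(A.shape)              # (output shape, input shape), here ((10, 10), (10, 10))

    X = np.random.rand(n, n)
    Y = A.apply(X)              # apply the operator to a full tensor
    A_mat = A.asmatrix()        # raveled form as a sparse CSR matrix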

R

Kronecker rank of the operator

Type: int

shape

a pair where shape[1] is the shape of input tensors accepted by this operator and shape[0] is the shape of output tensors produced

Type: tuple

ndim

the number of dimensions, i.e., d in the formula above

Type: int

T

Return the transpose of this operator as a CanonicalOperator.

apply(X)

Return the result of applying this operator to a tensor X.

asmatrix(format='csr')

Return the raveled form of this operator as a sparse matrix in the given format.

static eye(ns, format='dia')

Represent the identity as a tensor product of identity matrices with sizes given by the tuple of integers ns.

kron(other)

Construct a new CanonicalOperator as the Kronecker product of this and other.

class pyiga.tensor.CanonicalTensor(Xs)

A tensor in CP (canonical/PARAFAC) format, i.e., a sum of rank 1 tensors.

For a tensor of order d, Xs should be a tuple of d matrices. Their number of columns should be identical and determines the rank R of the tensor. The number of rows of the j-th matrix determines the size of the tensor along the j-th axis.

The tensor is given by the sum, for r = 1, …, R, of the outer products of the r-th columns of the matrices in Xs.
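A minimal sketch of constructing such a tensor and converting it back to a full array:

    import numpy as np
    from pyiga import tensor

    # three factor matrices with 3 columns each -> a rank 3 tensor of shape (4, 5, 6)
    Xs = (np.random.rand(4, 3), np.random.rand(5, 3), np.random.rand(6, 3))
    T = tensor.CanonicalTensor(Xs)

    A = T.asarray()                               # full ndarray of shape (4, 5, 6)
    print(abs(T.norm() - np.linalg.norm(A)))      # Frobenius norms agree up to round-off

    # the same tensor rebuilt from its rank one terms (tuples of vectors)
    T2 = tensor.CanonicalTensor.from_terms(T.terms())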

asarray()

Convert canonical tensor to a full ndarray.

copy()

Create a deep copy of this tensor.

static from_tensor(A)

Convert A from other tensor formats to canonical format.

static from_terms(terms)

Construct a canonical tensor from a list of rank 1 terms, represented as tuples of vectors.

norm()

Compute the Frobenius norm of the tensor.

nway_prod(Bs)

Implements apply_tprod() for canonical tensors.

Returns: CanonicalTensor – the result in canonical format

static ones(shape)

Construct a constant canonical tensor with all entries one and the given shape.

ravel()

Return the vectorization of this tensor.

squeeze(axis=None)

Eliminate singleton axes. Equivalent to numpy.squeeze().

terms()

Return the rank one components as a list of tuples.

static zeros(shape)

Construct a zero canonical tensor with the given shape.

class pyiga.tensor.TensorProd(*Xs)

Represents the abstract tensor product of an arbitrary number of tensors.

asarray()

Convert sum of tensors to a full ndarray.

nway_prod(Bs)

Implements apply_tprod() for tensor products.

Returns: TensorProd – the result as a tensor product

ravel()

Return the vectorization of this tensor.

class pyiga.tensor.TensorSum(*Xs)

Represents the abstract sum of an arbitrary number of tensors with identical shapes.

asarray()

Convert sum of tensors to a full ndarray.

nway_prod(Bs)

Implements apply_tprod() for sums of tensors.

Returns: TensorSum – the result as a sum of tensors

ravel()

Return the vectorization of this tensor.

class pyiga.tensor.TuckerTensor(Us, X)

A d-dimensional tensor in Tucker format is given as a list of d basis matrices

\[U_k \in \mathbb R^{n_k \times m_k}, \qquad k=1,\ldots,d\]

and a (typically small) core coefficient tensor

\[X \in \mathbb R^{m_1 \times \ldots \times m_d}.\]

When expanded (using TuckerTensor.asarray()), a Tucker tensor turns into a full tensor

\[A \in \mathbb R^{n_1 \times \ldots \times n_d}.\]

One way to compute a Tucker tensor approximation from a full tensor is to first compute the HOSVD using hosvd() and then truncate it using TuckerTensor.truncate() to the rank estimated by find_truncation_rank(). Such a rank compression is implemented in TuckerTensor.compress().
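A minimal sketch of this workflow (the sample tensor and tolerances are arbitrary choices):

    import numpy as np
    from pyiga import tensor

    # a smooth full tensor which admits a good low-rank Tucker approximation
    x = np.linspace(0.0, 1.0, 40)
    A = 1.0 / (1.0 + x[:, None, None] + x[None, :, None] + x[None, None, :])

    T = tensor.hosvd(A)                      # exact Tucker representation via HOSVD
    Tc = T.compress(tol=1e-8, rtol=1e-8)     # truncate to a smaller multilinear rank

    print(tensor.fro_norm(Tc.asarray() - A)) # should be small, of the order of the tolerances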

asarray()

Convert Tucker tensor to a full ndarray.

compress(tol=1e-15, rtol=1e-15)

Approximate this Tucker tensor by another one of smaller rank, up to an absolute error tolerance tol or a relative error tolerance rtol.

Returns: the approximation as a TuckerTensor

copy()

Create a deep copy of this tensor.

static from_tensor(A)

Convert A from other tensor formats to Tucker format.

norm()

Compute the Frobenius norm of the tensor.

nway_prod(Bs)

Implements apply_tprod() for Tucker tensors.

Returns: TuckerTensor – the result in Tucker format

static ones(shape)

Construct a constant Tucker tensor with all entries one and the given shape.

orthogonalize()

Compute an equivalent Tucker representation of the current tensor where the matrices U have orthonormal columns.

Returns: TuckerTensor – the orthonormalized Tucker tensor

ravel()

Return the vectorization of this tensor.

squeeze(axis=None)

Eliminate singleton axes. Equivalent to numpy.squeeze().

truncate(k)

Truncate this Tucker tensor to the given rank k.

static zeros(shape)

Construct a zero Tucker tensor with the given shape.

pyiga.tensor.als(A, R, tol=1e-10, maxiter=10000, startval=None)

Compute best rank R approximation to tensor A using Alternating Least Squares.

Parameters:
  • A (tensor) – the tensor to be approximated
  • R (int) – the desired rank
  • tol (float) – tolerance for the stopping criterion
  • maxiter (int) – maximum number of iterations
  • startval – starting tensor for iteration. By default, a random rank R tensor is used. A CanonicalTensor with rank R may be supplied for startval instead.
Returns:

CanonicalTensor – a rank R approximation to A; generally close to the best rank R approximation if the algorithm converged to a small enough tolerance.
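A minimal usage sketch (the sample tensor here has exact rank 2, so the error should be close to zero if the iteration converges):

    import numpy as np
    from pyiga import tensor

    x = np.linspace(0.0, 1.0, 20)
    # sin(x + y) = sin(x) cos(y) + cos(x) sin(y) has exact rank 2
    A = np.sin(x[:, None] + x[None, :])

    T = tensor.als(A, 2)                        # CanonicalTensor of rank 2
    print(tensor.fro_norm(T.asarray() - A))     # small if the iteration converged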

pyiga.tensor.als1(A, tol=1e-15)

Compute best rank 1 approximation to tensor A using Alternating Least Squares.

Parameters:
  • A (tensor) – the tensor to be approximated
  • tol (float) – tolerance for the stopping criterion
Returns:

A tuple of vectors (x1, …, xd) such that outer(x1, ..., xd) is approximately the best rank 1 approximation to A.

pyiga.tensor.als1_ls(A, B, tol=1e-15, maxiter=10000, spd=False)

Compute rank 1 approximation to the solution of a linear system by Alternating Least Squares.

pyiga.tensor.als1_ls_structured(A, B, tol=1e-15, maxiter=10000)

Compute rank 1 approximation to the solution of a linear system by Alternating Least Squares.

Faster version of als1_ls(), but works only if all the matrices in the operator A have identical sparsity structure.

pyiga.tensor.apply_tprod(ops, A)

Apply multi-way tensor product of operators to tensor A.

Parameters:
  • ops (seq) – a list of matrices, sparse matrices, or LinearOperators
  • A (tensor) – the tensor to apply the multi-way tensor product to
Returns:

a new tensor with the same number of axes as A that is the result of applying the tensor product operator ops[0] x ... x ops[-1] to A. The return type is typically the same type as A.

The initial dimensions of A must match the sizes of the operators, but A is allowed to have an arbitrary number of trailing dimensions. None is a valid operator and is treated like the identity.

An interpretation of this operation is that the Kronecker product of the matrices ops is applied to the vectorization of the tensor A.
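A minimal sketch, assuming the row-major vectorization used by ravel():

    import numpy as np
    from pyiga import tensor

    A  = np.random.rand(4, 5, 6)
    B1 = np.random.rand(3, 4)      # acts along axis 0
    B2 = np.random.rand(2, 5)      # acts along axis 1

    C = tensor.apply_tprod([B1, B2, None], A)   # None leaves axis 2 untouched
    print(C.shape)                              # (3, 2, 6)

    # equivalent vectorized form: (B1 (x) B2 (x) I) applied to vec(A)
    C2 = np.kron(np.kron(B1, B2), np.eye(6)) @ A.ravel()
    print(np.allclose(C.ravel(), C2))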

pyiga.tensor.array_outer(*xs)

Outer product of an arbitrary number of ndarrays.

Parameters: xs – an arbitrary number of input ndarrays
Returns: ndarray – the outer product of the inputs. Its shape is the concatenation of the shapes of the inputs.

pyiga.tensor.asarray(X)

Return the tensor X as a full ndarray.

pyiga.tensor.find_truncation_rank(X, tol=1e-12)

A greedy algorithm for finding a good truncation rank for a HOSVD core tensor.

pyiga.tensor.fro_norm(X)

Compute the Frobenius norm of the tensor X.

pyiga.tensor.grou(B, R, tol=1e-12, return_errors=False)

Canonical tensor approximation by Greedy Rank One Updates.

References

https://doi.org/10.1016/j.cam.2019.03.002

Parameters:
  • B (tensor) – the tensor to be approximated
  • R (int) – the desired canonical rank for the approximation
  • tol (double) – the desired absolute error tolerance
  • return_errors (bool) – whether to return the error history as a second return value
Returns:

The computed approximation as a CanonicalTensor. If return_errors is true, instead returns a tuple containing the tensor and a list of the error history over the iterations.
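A minimal usage sketch (the sample tensor and rank are arbitrary):

    import numpy as np
    from pyiga import tensor

    x = np.linspace(0.0, 1.0, 30)
    B = 1.0 / (1.0 + x[:, None, None] + x[None, :, None] + x[None, None, :])

    T, errors = tensor.grou(B, 5, return_errors=True)   # up to 5 greedy rank one updates
    print(errors)                            # error history; typically decreasing for smooth tensors
    print(tensor.fro_norm(T.asarray() - B))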

pyiga.tensor.gta(A, R, tol=1e-12, rtol=1e-12, return_errors=False)

Greedy Tucker approximation of the tensor A.

References

https://doi.org/10.1016/j.cam.2019.03.002

Parameters:
  • A (tensor) – the tensor to be approximated
  • R (int) – the desired multilinear rank of the approximation
  • tol (double) – target absolute error tolerance
  • rtol (double) – target relative error tolerance
  • return_errors (bool) – whether to return the error history as a second return value
Returns:

The computed approximation as a TuckerTensor. If return_errors is true, instead returns a tuple containing the tensor and a list of the error history over the iterations.

pyiga.tensor.gta_ls(A, F, R, tol=1e-12, verbose=0, gs=None, spd=False)

Greedy Tucker approximation of the solution of a linear system A U = F.

References

https://doi.org/10.1016/j.cam.2019.03.002

Parameters:
  • A (list) – the linear operator in low Kronecker rank format given as a list of tuples. Each tuple represents a Kronecker product operator and contains d matrices or linear operators; the operator is considered as the Kronecker product of these operators
  • F (tensor) – the right-hand side of the linear system as a (possibly low-rank) tensor
  • R (int) – the desired multilinear rank of the approximation (number of iterations)
  • tol (double) – desired reduction of the initial residual
  • verbose (int) – 0 = no printed output, 1 = moderate detail, 2 = full detail
  • gs (int) – if this is not None, then this many Gauss-Seidel iterations are used on the core linear system instead of direct solution; see the paper for details
  • spd (bool) – pass True if A is a symmetric positive definite operator; uses a more efficient and accurate rank 1 approximation algorithm (see the corresponding parameter of als1_ls())
Returns:

the computed approximation as a TuckerTensor
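A minimal sketch of setting up such a system for a two-dimensional Laplace-like problem (the matrices are simple placeholders; in practice they would come from a discretization, and a low-rank tensor could be used for the right-hand side instead of a full array):

    import numpy as np
    import scipy.sparse
    from pyiga import tensor

    n = 16
    K = scipy.sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()  # 1D stiffness-like
    M = scipy.sparse.identity(n, format='csr')                                    # stand-in mass matrix

    # Kronecker rank 2 operator A = K (x) M + M (x) K, given as a list of tuples
    A = [(K, M), (M, K)]
    F = np.ones((n, n))       # right-hand side tensor

    # A is symmetric positive definite here, so spd=True can be passed
    U = tensor.gta_ls(A, F, 10, tol=1e-6, spd=True)   # TuckerTensor approximating the solution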

pyiga.tensor.hosvd(X)

Compute higher-order SVD (Tucker decomposition).

Parameters: X (ndarray) – a full tensor of arbitrary size
Returns: TuckerTensor – a Tucker tensor which represents X, with the core tensor having the same shape as X and the factor matrices \(U_k\) being square and orthogonal.

pyiga.tensor.join_tucker_bases(T1, T2)

Represent the two Tucker tensors T1 and T2 in a joint basis.

Returns: a tuple (U, X1, X2) such that T1 == TuckerTensor(U, X1) and T2 == TuckerTensor(U, X2). The basis U is the concatenation of the bases of T1 and T2.

pyiga.tensor.matricize(X, k)

Return the mode-k matricization of the ndarray X.

pyiga.tensor.modek_tprod(B, k, X)

Compute the mode-k tensor product of the ndarray X with the matrix or operator B.

Parameters:
  • B – an ndarray, sparse matrix, or LinearOperator of size m x nk
  • k (int) – the mode along which to multiply X
  • X (ndarray) – tensor with X.shape[k] == nk
Returns:

ndarray – the mode-k tensor product, of shape \((n_1, \ldots, n_{k-1}, m, n_{k+1}, \ldots, n_N)\)
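A minimal sketch; the einsum line is an independent check of the same product:

    import numpy as np
    from pyiga import tensor

    X = np.random.rand(4, 5, 6)
    B = np.random.rand(7, 5)          # maps axis 1 (size 5) to size 7

    Y = tensor.modek_tprod(B, 1, X)
    print(Y.shape)                    # (4, 7, 6)

    # the same mode-1 product written out with einsum
    Y2 = np.einsum('mj,ijk->imk', B, X)
    print(np.allclose(Y, Y2))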

pyiga.tensor.outer(*xs)

Outer product of an arbitrary number of vectors.

Parameters: xs – d input vectors (x1, …, xd) with lengths n1, …, nd
Returns: ndarray – the outer product as an ndarray with d dimensions
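For example (a short sketch; the broadcasting expression checks the result):

    import numpy as np
    from pyiga import tensor

    x = np.array([1.0, 2.0])
    y = np.array([1.0, 10.0, 100.0])
    z = np.array([0.5, 0.25])

    T = tensor.outer(x, y, z)
    print(T.shape)       # (2, 3, 2)
    print(np.allclose(T, x[:, None, None] * y[None, :, None] * z[None, None, :]))
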
pyiga.tensor.pad(X, pad_width)

Pad a tensor with zero rows in each direction.

Parameters: pad_width (list) – a list of (before, after) tuples, with the same length as the number of dimensions of X, which specifies how many zeros to prepend/append along each axis. None is admissible and is equivalent to (0, 0).
Returns: the padded tensor
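A minimal usage sketch:

    import numpy as np
    from pyiga import tensor

    X = np.ones((2, 3))

    Y = tensor.pad(X, [(1, 0), (0, 2)])   # one leading zero row, two trailing zero columns
    print(Y.shape)                        # (3, 5)

    Z = tensor.pad(X, [None, (1, 1)])     # None is equivalent to (0, 0)
    print(Z.shape)                        # (2, 5)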