einsum

Description

Evaluate the Einstein summation convention on the operands.

  • In implicit mode, einsum uses the Einstein summation convention to represent many common multi-dimensional, linear algebraic array operations in a simple fashion.
  • In explicit mode, einsum provides further flexibility to compute other array operations that might not be considered classical Einstein summation operations, by disabling or forcing summation over specified subscript labels.
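The two modes can be sketched as follows. This is a minimal illustration assuming standard NumPy semantics; the array shapes are chosen only for demonstration:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

# Implicit mode: the repeated label j is summed over, and the output
# axes are the remaining labels in alphabetical order -> matrix product.
implicit = np.einsum('ij,jk', a, b)

# Explicit mode: the '->' indicator states the output form exactly,
# which also lets you force or disable summation.
explicit = np.einsum('ij,jk->ik', a, b)   # same matrix product
row_sums = np.einsum('ij->i', a)          # force summation over j
no_sum = np.einsum('ij->ij', a)           # disable summation entirely
```

In implicit mode the output form is inferred; in explicit mode you control it, which is why `'ij->i'` and `'ij->ij'` give different results from the same input.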

Mandatory Input Parameters

Parameter

Type

Description

subscripts

str

Specifies the subscripts for summation as a comma-separated list of subscript labels. An implicit (classical Einstein summation) calculation is performed unless the explicit indicator -> is included, along with subscript labels of the precise output form.

operands

list

Operand list: the arrays for the operation. A scalar operand is matched by an empty subscript label, as in the broadcasting example below.

Optional Input Parameters

Parameter

Type

Default Value

Description

out

ndarray

None

If provided, the calculation result is stored into this array.

dtype

{data-type, None}

None

If provided, forces the calculation to use the specified data type. Note that you may also need to pass a more liberal casting parameter to allow the conversions.

order

{'C', 'F', 'A', 'K'}

'K'

Controls the memory layout of the output.

  • "C" means it should be C contiguous.
  • "F" means it should be Fortran contiguous.
  • "A" means it should be "F" if the inputs are all "F", "C" otherwise.
  • "K" means it should be as close to the layout of the inputs as possible, including arbitrarily permuted axes.

casting

{'no', 'equiv', 'safe', 'same_kind', 'unsafe'}

'safe'

Controls what kind of data casting may occur. Setting this parameter to "unsafe" is not recommended, because undefined results may occur.

  • "no" means the data types should not be cast at all.
  • "equiv" means only byte-order changes are allowed.
  • "safe" means only casts which can preserve values are allowed.
  • "same_kind" means only safe casts or casts within a kind, like float64 to float32, are allowed.
  • "unsafe" means any data conversions may be done.

optimize

{False, True, 'greedy', 'optimal'}

False

Controls whether intermediate optimization occurs. If False, no optimization is performed; if True, the greedy algorithm is used. An explicit contraction list from the np.einsum_path function is also accepted.
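The out, dtype, and casting parameters interact as in the following sketch (standard NumPy semantics assumed; the buffer name buf is illustrative). Forcing a narrower dtype requires a more liberal casting rule, exactly as the dtype description notes:

```python
import numpy as np

a = np.arange(25).reshape(5, 5)

# out: store the result in a preallocated array. int64 -> float64
# is a value-preserving cast, so the default casting='safe' suffices.
buf = np.empty(5)
np.einsum('ij->i', a, out=buf)

# dtype: force the result dtype. int64 -> float64 is 'safe', so no
# casting change is needed here.
tr = np.einsum('ii', a, dtype=np.float64)

# A narrower target such as float32 is not a 'safe' cast from int64;
# it needs a more liberal rule, e.g. casting='same_kind'.
tr32 = np.einsum('ii', a, dtype=np.float32, casting='same_kind')
```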

Return Value

Type

Description

ndarray

Calculation result based on the Einstein summation convention.

Examples

>>> import numpy as np
>>> a = np.arange(25).reshape((5,5))
>>> a
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24]])
>>> b = np.arange(5)
>>> b
array([0, 1, 2, 3, 4])
>>> c = np.arange(6).reshape(2,3)
>>> c
array([[0, 1, 2],
       [3, 4, 5]])
>>> 
>>> # Trace of a matrix (sum of diagonal elements)
>>> np.einsum('ii', a)
60
>>> np.einsum(a, [0,0])
60
>>> np.trace(a)
60
>>>
>>> # Diagonal elements of the matrix
>>> np.einsum('ii->i', a)
array([ 0,  6, 12, 18, 24])
>>> np.einsum(a, [0,0], [0])
array([ 0,  6, 12, 18, 24])
>>> np.diag(a)
array([ 0,  6, 12, 18, 24])
>>>
>>> # Sum over an axis
>>> np.einsum('ij->i', a)
array([ 10,  35,  60,  85, 110])
>>> np.einsum(a, [0,1], [0])
array([ 10,  35,  60,  85, 110])
>>> np.sum(a, axis = 1)
array([ 10,  35,  60,  85, 110])
>>> np.einsum(a, [Ellipsis,1], [Ellipsis])
array([ 10,  35,  60,  85, 110])
>>> np.einsum('...j->...', a)
array([ 10,  35,  60,  85, 110])
>>>
>>> # Matrix transpose
>>> c
array([[0, 1, 2],
       [3, 4, 5]])
>>> np.einsum('ji', c)
array([[0, 3],
       [1, 4],
       [2, 5]])
>>> np.einsum('ij->ji', c)
array([[0, 3],
       [1, 4],
       [2, 5]])
>>> np.einsum(c, [1,0])
array([[0, 3],
       [1, 4],
       [2, 5]])
>>>
>>> # Vector inner product
>>> b
array([0, 1, 2, 3, 4])
>>> np.einsum('i,i', b, b)
30
>>> np.einsum(b, [0], b, [0])
30
>>> np.inner(b, b)
30
>>>
>>> # Matrix vector multiplication
>>> np.einsum('ij,j', a, b)
array([ 30,  80, 130, 180, 230])
>>> np.einsum(a, [0,1], b, [1])
array([ 30,  80, 130, 180, 230])
>>> np.dot(a, b)
array([ 30,  80, 130, 180, 230])
>>> np.einsum('...j,j', a, b)
array([ 30,  80, 130, 180, 230])
>>>
>>> # Broadcasting and scalar multiplication
>>> c
array([[0, 1, 2],
       [3, 4, 5]])
>>> np.einsum('...,...', 3, c)
array([[ 0,  3,  6],
       [ 9, 12, 15]])
>>> np.einsum(',ij', 3, c)
array([[ 0,  3,  6],
       [ 9, 12, 15]])
>>> np.einsum(3, [Ellipsis], c, [Ellipsis])
array([[ 0,  3,  6],
       [ 9, 12, 15]])
>>> np.multiply(3, c)
array([[ 0,  3,  6],
       [ 9, 12, 15]])
>>>
>>> # Vector outer product
>>> t = np.arange(2) + 1
>>> t
array([1, 2])
>>> b
array([0, 1, 2, 3, 4])
>>> 
>>> np.einsum('i,j', t, b)
array([[0, 1, 2, 3, 4],
       [0, 2, 4, 6, 8]])
>>> np.einsum(t, [0], b, [1])
array([[0, 1, 2, 3, 4],
       [0, 2, 4, 6, 8]])
>>> np.outer(t, b)
array([[0, 1, 2, 3, 4],
       [0, 2, 4, 6, 8]])
>>>
>>> # tensordot
>>> a = np.arange(60).reshape(3,4,5)
>>> b = np.arange(24).reshape(4,3,2)
>>> np.einsum('ijk,jil->kl', a, b)
array([[4400, 4730],
       [4532, 4874],
       [4664, 5018],
       [4796, 5162],
       [4928, 5306]])
>>> np.einsum(a, [0,1,2], b, [1,0,3], [2,3])
array([[4400, 4730],
       [4532, 4874],
       [4664, 5018],
       [4796, 5162],
       [4928, 5306]])
>>> np.tensordot(a, b, axes=([1,0],[0,1]))
array([[4400, 4730],
       [4532, 4874],
       [4664, 5018],
       [4796, 5162],
       [4928, 5306]])
>>>
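The optimize parameter and np.einsum_path, which the parameter table mentions but the transcript above does not cover, can be sketched as follows (standard NumPy assumed; shapes and seed are arbitrary):

```python
import numpy as np

# For chained contractions, an optimized contraction order can avoid
# building large intermediate arrays.
rng = np.random.default_rng(0)
x = rng.random((10, 20))
y = rng.random((20, 30))
z = rng.random((30, 5))

naive = np.einsum('ij,jk,kl->il', x, y, z)                 # optimize=False
fast = np.einsum('ij,jk,kl->il', x, y, z, optimize=True)   # greedy order

# np.einsum_path precomputes a contraction order that can be reused
# across repeated calls with the same shapes.
path, info = np.einsum_path('ij,jk,kl->il', x, y, z, optimize='optimal')
reused = np.einsum('ij,jk,kl->il', x, y, z, optimize=path)
```

All three calls produce the same values; only the order in which the pairwise contractions are performed differs.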