+ Tensor decision diagrams (TDDs) are a data structure that combines the characteristics of decision diagrams and tensor networks
+ They can be used to represent tensors and quantum circuits
+ The representation is compact and canonical, and the TDD of a circuit can be calculated through contraction
+ The tensor value at a leaf node is obtained by multiplying the weights of the edges along the path from the TDD root to that leaf
+ Very useful for the simulation and equivalence checking of quantum circuits
See our paper
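How a tensor entry is read off a TDD can be sketched as follows. This is a toy illustration under assumed conventions (tuple layout, names like `entry` are mine, not the paper's code): a node is either the terminal `"T"` or a tuple `(index, low_edge, high_edge)`, an edge is `(weight, node)`, and an entry is the product of the weights along the root-to-leaf path.

```python
import math

T = "T"  # the (unique) terminal node

def entry(edge, bits):
    """Read one tensor entry: multiply the edge weights along the path
    selected by the index values in `bits`, from the root edge down."""
    weight, node = edge
    value = weight
    for b in bits:
        if node == T:
            break
        _, low, high = node
        w, node = high if b else low
        value *= w
    return value

# Example: the Hadamard matrix H = [[1, 1], [1, -1]] / sqrt(2) as a TDD
n_row0 = (1, (1, T), (1, T))      # sub-tensor for index value x0 = 0
n_row1 = (1, (1, T), (-1, T))     # sub-tensor for index value x0 = 1
root = (1 / math.sqrt(2), (0, (1, n_row0), (1, n_row1)))

# entry(root, [1, 1]) multiplies 1/sqrt(2) * 1 * (-1) = -1/sqrt(2)
```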
The TDD of a tensor can be obtained by first constructing a complete binary tree whose terminal (leaf) nodes bear the corresponding tensor values, and then applying two steps:
Normalisation: the tensor value c[n] of each terminal node n is moved upwards along the path from the leaf, step by step, such that
c[n] remains the product of the new value (0 or 1) of the terminal node and all the weights along the path to the leaf
Each (internal) node then represents either the 0 tensor or a *normal* tensor, i.e., a tensor whose entries have maximal norm 1 and whose first entry of norm 1 is exactly 1
Reduction:
Merge all terminal nodes with value 1 into a single terminal node
Delete all terminal nodes with value 0, redirect their incoming edges to the (unique) remaining terminal node, and reset the weights of these edges to 0
Redirect all weight-0 edges to the terminal
Merge all nodes that represent the same tensor
The structure thus obtained is the canonical (reduced) TDD
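The normalisation and reduction rules above can be sketched together in a few lines. This is a simplified illustration under assumed conventions, not the paper's implementation: a node is a tuple `(index, low_edge, high_edge)` or the terminal `"T"`, an edge is `(weight, node)`, and a dict serves as the "unique table" that merges structurally equal nodes.

```python
T = "T"  # the (unique) terminal node

def normalise(w0, w1):
    """Pull a common factor out of a node's two edge weights so the node
    represents a normal tensor: maximal magnitude 1, and the first
    weight of maximal magnitude becomes exactly 1."""
    if w0 == 0 and w1 == 0:
        return 0, (0, 0)                        # the 0 tensor
    factor = w0 if abs(w0) >= abs(w1) else w1   # first weight of max magnitude
    return factor, (w0 / factor, w1 / factor)

def make_node(index, low, high, table):
    """Build a normalised, reduced node; return it as a weighted edge."""
    (w0, n0), (w1, n1) = low, high
    factor, (w0, w1) = normalise(w0, w1)
    if factor == 0:
        return 0, T                             # 0 tensor collapses to the terminal
    if w0 == 0:
        n0 = T                                  # weight-0 edges go to the terminal
    if w1 == 0:
        n1 = T
    node = (index, (w0, n0), (w1, n1))
    # merge nodes that represent the same tensor (structural equality)
    return factor, table.setdefault(node, node)

table = {}
e1 = make_node(1, (0.5, T), (-0.5, T), table)
e2 = make_node(1, (0.5, T), (-0.5, T), table)
# both calls pull out the factor 0.5 and return the very same node object
```

Using a hash table keyed on the node's structure is what makes the representation canonical: two sub-TDDs representing the same tensor end up as one shared node.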
+ The addition of two TDDs is calculated by adding their respective sub-TDDs
+ Their contraction is calculated in one of two ways, depending on the index to be processed (normally the index of the root):
If the index is not to be contracted, the two sub-TDDs are contracted and attached to a root node carrying the same index
If the index is to be contracted, the contractions of the two sub-TDDs are added, summing the index away
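The two operations can be sketched recursively on the same toy representation (an edge is `(weight, node)`, a node is the terminal `"T"` or `(index, low_edge, high_edge)`; normalisation and node merging are omitted for brevity, and the names are illustrative assumptions):

```python
T = "T"  # the (unique) terminal node

def scale(c, edge):
    """Multiply an edge weight by a scalar."""
    w, n = edge
    return (c * w, n)

def add(e1, e2):
    """Add two TDD edges (assumed to have the same index structure)
    by recursively adding their sub-TDDs."""
    (w1, n1), (w2, n2) = e1, e2
    if n1 == T and n2 == T:
        return (w1 + w2, T)
    i, lo1, hi1 = n1
    _, lo2, hi2 = n2
    return (1, (i, add(scale(w1, lo1), scale(w2, lo2)),
                   add(scale(w1, hi1), scale(w2, hi2))))

def contract_root(edge):
    """Sum away the root's index: the two sub-TDDs are added."""
    w, (i, lo, hi) = edge
    return add(scale(w, lo), scale(w, hi))

# Example: the vector [1, 2] as a TDD over index 0
v = (1, (0, (1, T), (2, T)))
# contract_root(v) sums the index away: 1 + 2 = 3
```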