Permutation Entropy

Introduction

Permutation Entropy (PE) is a robust time series tool that quantifies the complexity of a dynamic system by capturing the order relations between values of a time series and extracting the probability distribution of the ordinal patterns (see Henry and Judge, 2019).

Among its main features, the PE approach:

  • Is non-parametric and free of restrictive model assumptions.
  • Is robust with respect to noise, computationally efficient, flexible, and invariant with respect to non-linear monotonic transformations of the data.
  • Relies on the notions of entropy and symbolic dynamics.
  • Accounts for the temporal ordering structure (time causality) of a given time series of real values.
  • Allows the user to unlock the complex dynamic content of nonlinear time series.

Today, we will learn about the PE methodology and will demonstrate its use through a toy example. In a future blog, we will demonstrate the application of this technique to real-world data and show how to estimate time-varying PE estimates.

Methodology

The starting point of PE analysis is a one-dimensional time series. For illustration purposes, we will use the example given by Bandt and Pompe (2002).

$$\begin{equation}S(t) = \{\ 4, 7, 9, 10, 6, 11, 3\ \}\label{example_ts}\end{equation}$$

Partitioning the State Space

The first step is to partition the one-dimensional time series into a matrix of overlapping column vectors. This partitioning uses two hyperparameters:

  • $\tau$ — the embedding time delay, which controls the number of time periods between elements of each of the new column vectors. Valid range: any positive integer. Recommended value: 1*.
  • $D$ — the embedding dimension, which controls the length of each of the new column vectors. Valid range: any integer greater than 1. Recommended values: $3 \le D \le 7$*.

*For practical purposes, Bandt and Pompe (2002) suggest using $3\le D\le7$ with $\tau = 1$. However, other values of $\tau$ can be chosen depending on the application and the time series under study.


Using $D=3$ and $\tau = 1$, our sample data in $(\ref{example_ts})$ is partitioned as follows

$$ \begin{equation}\begin{bmatrix} 4 &7 &9 &10 &6 \\ 7 &9 &10 &6 &11 \\ 9 &10 &6 &11 &3 \end{bmatrix}\label{SSPM}\end{equation} $$

Each column vector has 3 elements because the embedding dimension, $D$, is set to 3. In addition, there is a single time period between the elements in each vector because the embedding time delay, $\tau$, is set to 1.

The number of column vectors created from a time series of length $T$ is $T - (D - 1)\tau$. The matrix shown in $(\ref{SSPM})$ contains $7 - (3 - 1)\cdot 1 = 5$ column vectors.
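The partitioning step can be sketched in Python (illustrative only; the blog's own implementation is in GAUSS, and the function name `embed` is our own):

```python
# Build the overlapping D-dimensional vectors from a 1-D series,
# using embedding dimension D and embedding time delay tau.
def embed(series, D, tau):
    n_vectors = len(series) - (D - 1) * tau  # T - (D - 1)*tau vectors
    # each vector is series[j], series[j + tau], ..., series[j + (D-1)*tau]
    return [[series[j + i * tau] for i in range(D)] for j in range(n_vectors)]

S = [4, 7, 9, 10, 6, 11, 3]
cols = embed(S, D=3, tau=1)
print(cols)  # [[4, 7, 9], [7, 9, 10], [9, 10, 6], [10, 6, 11], [6, 11, 3]]
```

Each inner list corresponds to one column of the matrix above.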

Finding the Ordinal Patterns

After partitioning the one-dimensional time series, the D-dimensional vectors in $(\ref{SSPM})$ are mapped into unique permutations that capture the ordinal rankings of the data:

$$ \pi = \{\ r_0, r_1, \ldots, r_{D-1} \}\ = \{\ 0, 1, \ldots, D-1 \}$$

Continuing with the example above, there are $3! = 6$ possible permutations (ordinal patterns) in total:

$$ \pi_1 = \{ 0, 1, 2 \} \\ \pi_2 = \{ 0, 2, 1 \} \\ \pi_3 = \{ 1, 0 ,2 \} \\ \pi_4 = \{ 1, 2, 0 \} \\ \pi_5= \{ 2, 0 ,1 \} \\ \pi_6 = \{ 2, 1, 0 \} $$

These permutations assign values to each partitioned vector based on the ordinal position of the values within the vector. As an example, let’s consider the first 3-dimensional vector in $(\ref{SSPM})$

$$ \begin{bmatrix} 4 \\ 7 \\ 9 \end{bmatrix} $$

The permutation for this vector is $\pi_1 = \{ 0, 1, 2 \}$ because $4 < 7 < 9$. Therefore, for our example data the permutation matrix is given by

$$ \begin{equation}\begin{bmatrix} 0 &0 &1 &1 &1 \\ 1 &1 &2 &0 &2 \\ 2 &2 &0 &2 &0 \end{bmatrix}\end{equation} $$

If an input vector contains two or more elements with the same value, the rank is determined by their order in the sequence.

Alternatively, ties may be broken by adding white noise with the strength of the random term being smaller than the smallest distance between values.
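The mapping from a vector to its ordinal pattern, including the order-of-appearance tie rule, can be sketched in Python (illustrative only; the blog's own code is GAUSS):

```python
def ordinal_pattern(vec):
    # Rank each element by value; Python's sort is stable, so ties are
    # broken by order of appearance, matching the rule described above.
    order = sorted(range(len(vec)), key=lambda i: vec[i])
    ranks = [0] * len(vec)
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    return tuple(ranks)

print(ordinal_pattern([4, 7, 9]))    # (0, 1, 2)
print(ordinal_pattern([9, 10, 6]))   # (1, 2, 0)
print(ordinal_pattern([10, 6, 11]))  # (1, 0, 2)
```

With ties, e.g. `[7, 7, 3]`, the first 7 receives the lower rank, giving `(1, 2, 0)`.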

Calculating the Relative Frequencies

The relative frequency of each permutation is calculated by counting the number of times the permutation occurs among the column vectors and dividing by the total number of vectors.

Permutation   Occurrences   $p_i$
$\pi_1$       2             $2/5$
$\pi_2$       0             $0/5$
$\pi_3$       1             $1/5$
$\pi_4$       2             $2/5$
$\pi_5$       0             $0/5$
$\pi_6$       0             $0/5$
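The counting step can be verified with a short Python sketch (illustrative only; the pattern list below is the permutation matrix's columns written out by hand):

```python
from collections import Counter
from itertools import permutations

# The five ordinal patterns of the embedded vectors, from the example above.
patterns = [(0, 1, 2), (0, 1, 2), (1, 2, 0), (1, 0, 2), (1, 2, 0)]
counts = Counter(patterns)
total = len(patterns)

# Loop over all 3! = 6 possible patterns; Counter returns 0 for unseen keys.
for pi in permutations(range(3)):
    print(pi, counts[pi], counts[pi] / total)
```

This reproduces the table: two of the six patterns never occur, so their relative frequency is 0.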

Computing the PE

Finally, the previous probabilities are used to compute the PE of order $D$ of the time series, which is given by

$$\begin{equation}PE_D = - \sum_{i=1}^{D!} p_i \log_2 p_i\label{compute_PE}\end{equation}$$

Continuing with the toy example, the PE of order 3 is

$$PE_3 = -\left(\tfrac{2}{5} \log_2 \tfrac{2}{5} + \tfrac{1}{5} \log_2 \tfrac{1}{5} + \tfrac{2}{5} \log_2 \tfrac{2}{5}\right) \approx 1.5219$$

The PE measure in $(\ref{compute_PE})$ can also be normalized such that

$$PE_{D, norm} = -\frac{1}{\log_2 D!} \sum_{i=1}^{D!} p_i \log_2 p_i, $$

which is restricted between 0 and 1. Using the toy example,

$$PE_{3, norm} = \frac{1.5219}{\log_2 3!} \approx 0.5888$$
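The arithmetic for both the raw and normalized PE can be checked with a few lines of Python (illustrative only):

```python
from math import log2

# Nonzero pattern probabilities from the toy example.
p = [2/5, 1/5, 2/5]

pe = -sum(pi * log2(pi) for pi in p)  # PE of order D = 3
pe_norm = pe / log2(6)                # divide by log2(D!) with 3! = 6

print(round(pe, 4))       # 1.5219
print(round(pe_norm, 4))  # 0.5888
```

Zero-probability patterns are simply omitted, since $p \log_2 p \to 0$ as $p \to 0$.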

Interpreting the PE value

The smaller $PE_{D, norm}$ is, the more regular and deterministic the time series is. Conversely, the closer $PE_{D, norm}$ is to 1, the noisier and more random the time series is.

For example, suppose our 7 data observations were very noisy and each partition fell into a different permutation group. In that case the normalized PE measure would be:

$$PE_{3, norm} = -\frac{1}{\log_2 3!} \left(\tfrac{1}{5} \log_2 \tfrac{1}{5} + \tfrac{1}{5} \log_2 \tfrac{1}{5} + \tfrac{1}{5} \log_2 \tfrac{1}{5} +\\ \tfrac{1}{5} \log_2 \tfrac{1}{5} + \tfrac{1}{5} \log_2 \tfrac{1}{5}\right) \approx 0.898 .$$

Conversely, if observations were completely deterministic, each partition would fall into the same permutation group. In that case the normalized PE measure would be:

$$PE_{3, norm} = -\frac{1}{\log_2 3!} \left(\tfrac{5}{5} \log_2 \tfrac{5}{5}\right) = 0.000 .$$
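Both extreme cases can be checked numerically in Python (illustrative only):

```python
from math import log2

norm = log2(6)  # log2(D!) for D = 3

# Maximally irregular: each of the 5 vectors has a distinct pattern (p_i = 1/5).
pe_noisy = -5 * (1/5) * log2(1/5)
print(round(pe_noisy / norm, 3))  # 0.898

# Fully deterministic: all 5 vectors share one pattern (p = 5/5 = 1).
pe_det_norm = -(5/5) * log2(5/5) / norm
print(pe_det_norm == 0.0)  # True
```

Note the noisy case does not reach 1.0: with only 5 vectors, at most 5 of the 6 possible patterns can occur, so the sample maximum is $\log_2 5 / \log_2 6 \approx 0.898$.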

Computing PE in GAUSS

Our GAUSS code can be used to compute the permutation entropy of a time series. Using the function pentropy, we can compute the permutation entropy of our toy example.

The function pentropy has three required inputs:


x
Vector, the one-dimensional time series.
D
Scalar, the embedding dimension.
tau
Scalar, the embedding time delay that determines the time separation between $x_t$ values.

It has one return, an instance of the peOut structure with the following structure members:


peOut.h
Scalar, the PE measure.
peOut.h_norm
Scalar, the normalized PE measure.
peOut.relfreq
Vector, relative frequencies of the ordinal patterns.

Using GAUSS to find our permutation entropy measures:

library pe;

x = {4, 7, 9, 10, 6, 11, 3};

// Define output structure
struct peOut pOut;

// Define embedding dimension
D = 3;

// Define embedded time delay
tau = 1;

// Call pentropy
pOut = pentropy(x, D, tau);

print "The permutation entropy is:";
print pOut.h;

print "The normalized permutation entropy is:";
print pOut.h_norm;

print "The relative frequencies of the ordinal patterns:";
print pOut.relfreq;

This yields the following output:

The permutation entropy is:
1.5219281
The normalized permutation entropy is:
0.58876216
The relative frequencies of the ordinal patterns:

0.40000000
0.20000000
0.40000000
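For readers without GAUSS, the whole pipeline can be reproduced in a short Python sketch. The function name `pentropy` here simply mirrors the GAUSS function for comparison; this is our own illustration, not the library implementation:

```python
from collections import Counter
from math import factorial, log2

def pentropy(x, D, tau):
    """Return (h, h_norm, relfreq) for series x: a Python sketch of the
    embedding, ordinal-pattern, and entropy steps described above."""
    n = len(x) - (D - 1) * tau
    patterns = []
    for j in range(n):
        vec = [x[j + i * tau] for i in range(D)]
        order = sorted(range(D), key=lambda i: vec[i])  # stable: ties by order
        patterns.append(tuple(order.index(i) for i in range(D)))
    counts = Counter(patterns)
    relfreq = [c / n for c in counts.values()]  # observed patterns only
    h = -sum(p * log2(p) for p in relfreq)
    h_norm = h / log2(factorial(D))
    return h, h_norm, relfreq

x = [4, 7, 9, 10, 6, 11, 3]
h, h_norm, relfreq = pentropy(x, D=3, tau=1)
print(round(h, 7))       # 1.5219281
print(round(h_norm, 8))  # 0.58876216
print(sorted(relfreq))   # [0.2, 0.4, 0.4]
```

The values match the GAUSS output above (the relative frequencies are sorted here because the ordering of observed patterns may differ between implementations).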

Conclusions

Today we've learned the basics of permutation entropy using a toy example. After today you should have a better understanding of:

  1. The features of permutation entropy.
  2. The permutation entropy methodology.
  3. How to use GAUSS to find permutation entropy measures.

Code and data from this blog can be found here.

References

Bandt, C. and B. Pompe, 2002, “Permutation Entropy: A Natural Complexity Measure for Time Series,” Physical Review Letters, 88, 174102:1-174102:4.

Henry, M. and G. Judge, 2019, “Permutation Entropy and Information Recovery in Nonlinear Dynamic Economic Time Series,” Econometrics, 7(1), 10.
