Nonlinear dynamical systems with symmetries exhibit a rich variety of behaviors, often described by complex attractor-basin portraits and enhanced or suppressed bifurcations. Symmetry arguments provide a way to study these collective behaviors and to simplify their analysis. The Koopman operator is an infinite-dimensional linear operator that fully captures a system's nonlinear dynamics through the linear evolution of functions of the state space. Importantly, in contrast with local linearization, it preserves a system's global nonlinear features. We demonstrate how the presence of symmetries affects the structure of the Koopman operator and its spectral properties. We then show that symmetry considerations also simplify finding Koopman operator approximations using the extended and kernel dynamic mode decomposition methods (EDMD and kernel DMD). Specifically, representation theory allows us to demonstrate that an isotypic component basis induces a block-diagonal structure in operator approximations, revealing hidden organization. Practically, if the symmetries are known, the EDMD and kernel DMD methods can be modified to compute the Koopman operator approximation and its eigenvalues, eigenfunctions, and eigenmodes more efficiently. Rounding out the development, we discuss the effect of measurement noise.
Many natural and engineered dynamical systems (power grid networks and biological regulatory networks, to mention two) exhibit symmetries in their connectivity structure and in their internal dynamics. Some have time-reversal symmetry, others rotational or spatial translation invariance, and others still, combinations of these. These symmetries are often key to understanding the behavior of such systems. For instance, the interplay between system behavior and structural symmetries arises in locomotion, where observed symmetries in animal gaits impose constraints on the structure of the neural circuits that generate them. For network systems, in particular, symmetries in the connectivity structure are of fundamental importance. For instance, the structural symmetries of a network of identical oscillators can determine the admissible patterns of symmetry breaking. That said, additional information beyond knowledge of the network structure is often required to address more detailed questions about a system's dynamics, such as whether a particular configuration is stable in a given parameter regime. In these cases, the system's linearization near the steady state can be combined with interconnection symmetry to provide the answer. However, linearization methods are only valid on a local subset of the state space and are, therefore, insufficient to capture global characteristics of nonlinear dynamical systems, such as their attractors, basins, and transients. The Koopman operator, in contrast, is a linear infinite-dimensional operator that evolves functions on the state space and is valid on the entire state space. We show how to combine symmetry considerations with Koopman analysis to study nonlinear dynamical systems with symmetries. We use representation theory to determine the effect of symmetries on the Koopman operator and its approximations, drawing out how local dynamical symmetries interact with symmetries arising from the connectivity of system variables.
This, in turn, allows us to modify data-driven Koopman approximation algorithms to make them more efficient when applied to dynamical systems with symmetries. We illustrate our findings in a simple network of coupled Duffing oscillators with symmetries in the individual oscillator dynamics and in their physical couplings.
I. INTRODUCTION
Symmetries of dynamical systems manifest themselves in asymptotic dynamics, bifurcations, and attractor-basin structures. Symmetries play a crucial role in guiding the emergence of synchronization and pattern formation, which are behaviors broadly observed in natural and engineered systems. Methods from group theory, representation theory, and equivariant bifurcation theory provide useful tools to study the common features of systems with symmetries.^{15–17}
Dynamical elements organized into a network are an important class of dynamical systems that often exhibit these behaviors, especially when symmetries appear in both the network structure and the dynamics of the individual nodes. Studying the effect of symmetries in the network topology of synthetic and real-life systems using computational group theory is an active area of research.^{29,33,34} These symmetries lead to phenomena such as full synchronization, cluster synchronization, and the formation of exotic steady states such as chimeras.^{10,14,30,41} Moreover, topological symmetries underlying cluster synchronization of coupled identical elements assist in analyzing the stability of these fully synchronous cluster states.^{18,33} For networks of identical coupled oscillators, the form of their limit cycle solutions and the form of their bifurcations can be derived from symmetry considerations.^{1} Symmetries are also key in determining network controllability and observability. For example, Refs. 13 and 37 explored the effect of explicit network symmetries for linear time-independent and time-dependent networks. Similarly, Refs. 26 and 47 considered nonlinear network motifs with symmetries and studied how the presence of different types of structural symmetries affects the observability and controllability of the system. Reference 32, similar to our approach, uses the Koopman operator formalism (discussed below). They provide analytic results that link the presence of permutational symmetries in dynamical systems to their observability properties.
Many dynamical systems of current interest are high dimensional and nonlinear. For instance, this is the case for many complex networks, such as power grids and biological networks. Complexity there arises from the interaction between the network interconnectivity structure and the nonlinearities in the node and edge dynamics. This often leads to multistability. Linearization methods can provide insight near the system's attractors, but they poorly approximate the dynamics on the rest of the state space. In contrast, operator-based methods give access to the global characteristics of nonlinear systems. They do so in a linear setting and are, therefore, better suited, for instance, to characterizing the attractor-basin structure of multistable dynamical systems or to designing control interventions. The Perron–Frobenius and Koopman operators are adjoint linear infinite-dimensional operators whose spectra can provide global information about the system. Their approximations using data-driven approaches make operator methods potentially applicable when there is no prior knowledge of the system.
The Perron–Frobenius operator evolves densities on the state space. It has been extensively used to assess the global behavior of nonlinear dynamical systems.^{25,45} There are several well-developed approaches for obtaining its numerical approximations, such as Ulam's method, which relies on a discretization of the state space to obtain an approximation of the Perron–Frobenius operator.^{46} Since the Koopman operator is adjoint to the Perron–Frobenius operator, numerical approximations of the Koopman operator can be obtained using these methods as well.^{20}
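A minimal sketch of Ulam's method for a one-dimensional map illustrates the discretization idea: partition the state space into cells and estimate transition probabilities between cells from sample points. The map (the logistic map) and all parameters here are illustrative choices, not taken from the text.

```python
import numpy as np

# Ulam's method for a 1D map on [0, 1]: partition the interval into
# n_boxes cells and estimate P[i, j], the fraction of sample points from
# cell i whose image under the map g lands in cell j.
def ulam_matrix(g, n_boxes=50, samples_per_box=200, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, n_boxes + 1)
    P = np.zeros((n_boxes, n_boxes))
    for i in range(n_boxes):
        x = rng.uniform(edges[i], edges[i + 1], samples_per_box)
        y = np.clip(g(x), 0.0, 1.0 - 1e-12)      # keep images inside [0, 1)
        for j in np.searchsorted(edges, y, side="right") - 1:
            P[i, j] += 1.0
    return P / samples_per_box                    # row-stochastic matrix

# Illustrative example: the logistic map x -> 4x(1 - x).
P = ulam_matrix(lambda x: 4.0 * x * (1.0 - x))
```

The resulting row-stochastic matrix is a finite-dimensional approximation of the transfer operator on the chosen partition.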
The Koopman operator is an infinite-dimensional linear operator that describes the evolution of "observables" (functions of the state space).^{20,22,25} Its definition and properties in the context of dynamical systems are provided, for instance, in Ref. 9, which also summarizes its applicability to model reduction, coherency analysis, and ergodic theory. Methods based on the Koopman operator decomposition have proved useful for applications such as model reduction and control of fluid flows,^{3} power system analysis,^{44} and extracting spatiotemporal patterns from neural data.^{7}
Data-driven methods to approximate the Koopman operator rely upon snapshot pairs of measurements of the system state at consecutive time steps. Reconstructing the operator from these snapshot pairs requires that a set of functions (called a dictionary of observables) be chosen. The first data-driven method introduced, dynamic mode decomposition (DMD), implicitly uses linear monomials as a dictionary and thus is most applicable to systems where the Koopman eigenfunctions are well represented by this basis set.^{38} A more recent method called extended DMD (EDMD), introduced in Ref. 48, can be more powerful than standard DMD when applied to nonlinear systems, as it allows the choice of more complicated sets of dictionary functions. Applying EDMD is most computationally feasible if the number of dictionary functions does not exceed the total number of snapshot pairs used. That is not necessarily the case if a rich function dictionary (e.g., a dictionary of high-order polynomials) is chosen. A modification of EDMD called kernel DMD, introduced in Ref. 49, addresses this issue by providing a way to efficiently calculate the Koopman operator approximation when the number of dictionary functions exceeds the number of measurements. Yet, the principled choice of an underlying dictionary that leads to an accurate approximation of the eigenspectrum corresponding to the leading Koopman modes using EDMD or kernel DMD remains an outstanding challenge. That problem is confronted, for instance, in Ref. 27, where an iterative EDMD dictionary learning method is presented. Although the optimal choice of dictionary functions is often unknown, there are some common choices that are known to produce accurate results for certain classes of systems.^{48}
Here, we study nonlinear dynamical systems with discrete symmetries, combining operator-based approaches and linear representation theory. Recently, related methods have been applied to dynamical systems with symmetries. On the one hand, Ref. 31 addresses symmetries of the Perron–Frobenius operator in relation to the admissible symmetry properties of attractors. On the other, Ref. 39 links the spatiotemporal symmetries of the Navier–Stokes equation to the spatial and temporal Koopman operators. Additionally, Ref. 8 noted that symmetry considerations play an important role in discovering governing equations. Reference 19 shows how conservation laws can be detected with Koopman operator approximations and then used to control Hamiltonian systems.
In contrast, our focus is on dynamical systems with symmetries described by a finite group. We show how the properties of the associated Koopman operator spectrum can be linked to the properties of the spectrum of finite-dimensional approximations of the Koopman operator obtained from finite data. We further show how the analytic properties of the Koopman operator decomposition can inform the choice of dictionary functions used in the Koopman operator approximations. This gives a practical way to reduce the dimensionality of the approximation problem.
Our development builds as follows. Section II defines the Koopman operator, introduces approximation methods (EDMD and kernel DMD), and defines equivariant dynamical systems as well as useful concepts from group theory and representation theory. Section III draws out the implications of dynamical system symmetries for the structure of the Koopman operator and its eigendecomposition. Section IV connects the properties of the Koopman operator and the structure of its EDMD approximation for symmetric systems. This then allows modifying the EDMD method to exploit the underlying symmetries, resulting in a block-diagonal Koopman operator approximation matrix. We also provide numerical examples, showing how using particular dictionary structures speeds up the algorithm. Finally, Sec. V summarizes our findings and outlines directions for future work.
II. PRELIMINARIES
A. Koopman operator
In this section, we provide some background on operator-theoretic approaches to dynamical systems, in particular, the Koopman operator and its adjoint, the Perron–Frobenius operator. Since we address approximations of the Koopman operator whose input is discrete-time data, we focus on the definitions in the discrete-time setting; the continuous-time definitions are similar.^{9} Our results regarding the degeneracy of the Koopman operator eigenvalues and the properties of the corresponding eigenfunctions, presented in Sec. III, hold in both the discrete- and continuous-time settings.
Suppose that we are given a continuous-time autonomous dynamical system defined as

$$\dot{x} = g_c(x). \qquad (1)$$
Here, $x \in \mathbb{R}^n$ and $g_c : \mathbb{R}^n \to \mathbb{R}^n$. Let $\Phi(x(t), \Delta t)$ be the flow map taking the initial condition $x(t)$ to the solution at time $t + \Delta t$. It is defined in the following way:

$$\Phi(x(t), \Delta t) = x(t) + \int_{t}^{t + \Delta t} g_c(x(\tau))\, d\tau. \qquad (2)$$
The system can be discretized with a finite time step $\Delta t_{\mathrm{step}}$, so that $x_{i+1} = \Phi(x_i, \Delta t_{\mathrm{step}})$. We denote the function evolving the dynamics of this discretized system by $g$,

$$x_{i+1} = g(x_i) = \Phi(x_i, \Delta t_{\mathrm{step}}). \qquad (3)$$
The Koopman operator is a linear infinite-dimensional operator that evolves functions (referred to as observables) of the state-space variables, $f : \mathbb{R}^n \to \mathbb{C}$. The action of the Koopman operator $\mathcal{K}$ on an observable $f$ for discrete-time systems is defined as

$$(\mathcal{K} f)(x) = (f \circ g)(x) = f(g(x)).$$
Since we consider data-driven Koopman operator approximation methods in this paper, the discrete-time version of the definition is most applicable.
Pairs of eigenvalues $\lambda$ and eigenfunctions $\varphi_\lambda$ of the Koopman operator $\mathcal{K}$ are defined by

$$\mathcal{K} \varphi_\lambda = \lambda \varphi_\lambda.$$
Of particular interest are the Koopman modes that can be used in model reduction and coherency estimation.^{36,43} The Koopman modes $v_i^f$ of the observable $f(x)$ are defined by the expansion

$$f(x) = \sum_i v_i^f \varphi_i(x)$$
and are the projections of the observable onto the span of the eigenfunctions of the Koopman operator $\mathcal{K}$. A particularly useful set of modes is that of the full state observable $f(x) = x$, defined by

$$x = \sum_i v_i \varphi_i(x).$$
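For intuition, consider the scalar linear map $g(x) = \mu x$: the monomials $x^k$ are Koopman eigenfunctions with eigenvalues $\mu^k$, since composing $x^k$ with $g$ gives $(\mu x)^k = \mu^k x^k$. A short numerical check (the value of $\mu$ is an arbitrary choice):

```python
import numpy as np

# For the scalar linear map g(x) = mu * x, the monomial phi_k(x) = x**k
# satisfies phi_k(g(x)) = (mu * x)**k = mu**k * phi_k(x), i.e., it is a
# Koopman eigenfunction with eigenvalue mu**k.
mu = 0.7
g = lambda x: mu * x

x = np.linspace(-2.0, 2.0, 9)
for k in range(1, 4):
    lhs = g(x) ** k            # Koopman action on the observable x**k
    rhs = mu**k * x**k         # eigenvalue relation: eigenvalue mu**k
    assert np.allclose(lhs, rhs)
```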
In general, parts of the Koopman operator spectrum can be continuous.^{9,24} For instance, this can be the case for chaotic systems. However, we focus on the case of a discrete spectrum since the methods we refer to in Secs. II B, IV, and Appendix E (EDMD and kernel DMD) are only applicable for that case. Our results regarding the symmetry properties of the discrete parts of the Koopman operator spectrum are analogous to those related to the continuous part of the spectrum. Numerical methods related to continuous Koopman operator spectra are considered, for instance, in Ref. 28.
The other candidate for studying dynamical systems using an operator-based approach is the Perron–Frobenius operator $P$, defined as follows for deterministic dynamical systems:

$$\int_A (P\rho)(x)\, dx = \int_{g^{-1}(A)} \rho(x)\, dx.$$
Here, $\rho(x)$ is a density on the state space, $A \subseteq \mathbb{R}^n$ is a subset of the state space, and $g$, defined in Eq. (3), evolves the state of the system. The Perron–Frobenius operator is the adjoint of the Koopman operator,^{25} so an approximation of one of them provides an approximation of the other.^{20}
B. Koopman operator approximation methods
Extended dynamic mode decomposition (EDMD), introduced in Ref. 48, is a data-driven method of approximating the Koopman operator for discretized systems that requires an explicit choice of a dictionary of functions referred to as "observables." How to optimally choose these functions remains an open problem for many systems, especially if the form of the differential equations describing the governing dynamical system is not known in advance and only finite data on the behavior of the system are available. The method can be very accurate in capturing the dynamics of the system, but its accuracy depends strongly on the choice of an appropriate dictionary of observables. The method's convergence properties are studied in Ref. 23, and its relation to Perron–Frobenius operator approximation methods is discussed in Ref. 20. Here, we summarize EDMD and its relation to the Koopman operator.
The first requirement for the method is a set of pairs of consecutive snapshots $x = [x_1, x_2, \ldots, x_M]$ and $y = [y_1, y_2, \ldots, y_M]$, where the measurements $x_i$ and $y_i$ are separated by a small constant time interval $\Delta t$: $y_i = \Phi(x_i, \Delta t)$, where $\Phi$ is the flow map defined in Eq. (2). Typically, the set of snapshots contains measurements from different trajectories in the state space. We define a dictionary of linearly independent observables $\mathcal{D} = \{\psi_1, \ldots, \psi_N\}$ and form the matrices of observations $\Psi_x$ and $\Psi_y$. Here, $\Psi_x \in \mathbb{R}^{M \times N}$, where $N$ is the number of dictionary functions used in the approximation and $M$ is the number of data snapshots. The elements of $\Psi_x$ are obtained from $(\Psi_x)_{ij} = \psi_j(x_i)$. We also use the notation $\Psi(x_m) = (\psi_1(x_m), \ldots, \psi_N(x_m))$ for the dictionary functions evaluated at a particular point on a trajectory.
A finite-dimensional approximation of the Koopman operator $\mathcal{K}$, which we denote by $K$, can be obtained using

$$K = \Psi_x^{+} \Psi_y.$$
Here, $\Psi_x^{+}$ denotes the pseudoinverse of $\Psi_x$. We focus on the case of the Moore–Penrose pseudoinverse for the rest of the paper.^{35}
If the number of snapshots is much larger than the dimensionality of the function dictionary ($M \gg N$), it is more practical instead to define the square matrices $G = \Psi_x^{*} \Psi_x$ and $A = \Psi_x^{*} \Psi_y$ and obtain the approximation in the following way:

$$K = G^{+} A.$$
Here, $*$ represents the complex conjugate transpose. If the only observables are the states of the system $x_1, x_2, \ldots, x_n$, EDMD reduces to DMD.^{20,48}
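The two formulations above can be compared directly in a short Python sketch; the one-dimensional map, the monomial dictionary, and the data are illustrative choices, not taken from the text:

```python
import numpy as np

# Minimal EDMD sketch: rows of Psi_x and Psi_y hold the dictionary
# evaluated on snapshot pairs (x_i, y_i).  The Koopman approximation is
# K = Psi_x^+ Psi_y, or equivalently K = G^+ A with G = Psi_x^* Psi_x
# and A = Psi_x^* Psi_y.
rng = np.random.default_rng(1)
g = lambda x: 0.9 * x - 0.1 * x**3            # a simple 1D nonlinear map
xs = rng.uniform(-1.0, 1.0, 200)              # snapshot states x_i
ys = g(xs)                                    # images y_i one step later

dictionary = lambda x: np.column_stack([np.ones_like(x), x, x**2, x**3])
Psi_x, Psi_y = dictionary(xs), dictionary(ys)

K_pinv = np.linalg.pinv(Psi_x) @ Psi_y        # K = Psi_x^+ Psi_y
G = Psi_x.conj().T @ Psi_x
A = Psi_x.conj().T @ Psi_y
K_ga = np.linalg.pinv(G) @ A                  # K = G^+ A

assert np.allclose(K_pinv, K_ga)              # the two formulations agree
eigvals, eigvecs = np.linalg.eig(K_pinv)      # approximate Koopman spectrum
# Approximate eigenfunctions: phi_j(x) = dictionary(x) @ eigvecs[:, j]
```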
The eigendecomposition of $K$ provides the Koopman eigenvalues, eigenfunctions, and modes that allow an approximate linear representation of the underlying system dynamics. Let $\lambda_j$ and $u_j$ be the $j$th eigenvalue and eigenvector of $K$. Then, the corresponding Koopman eigenfunction can be approximated by

$$\varphi_j(x) \approx \Psi(x)\, u_j.$$
Let $b_i$ be the vectors defined by $g(x)_i = \Psi b_i$, where $g(x)_i = e_i^{*} x$ denotes the elements of the full state observable discussed in Ref. 48, and let $B = (b_1 \cdots b_n)$. The Koopman eigenmodes can then be obtained as

$$v_i = (w_i^{*} B)^{T}.$$
Here, $w_i$ denotes the $i$th left eigenvector of $K$.
A modification of EDMD named kernel DMD^{49} is better suited for systems with a low number of measurements and a high number of observables, i.e., $M \ll N$ (e.g., the full state observable for fluid dynamical systems is very high dimensional, so defining a polynomial dictionary of the full state observable is very computationally expensive). The method relies on evaluating a kernel function

$$k(x, \hat{x}) = \Psi(x) \Psi(\hat{x})^{*}.$$
That allows efficient computation of the $M \times M$ matrices $\hat{G}$, $\hat{A}$, and $\hat{K}$, where $M$ is the number of trajectory time steps. The eigendecomposition of $\hat{K}$ can then be used to obtain approximations of the Koopman eigenvalues, eigenfunctions, and modes.
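The kernel idea can be sketched as follows. This is a simplified illustration: it stops at forming the $M \times M$ matrices and uses a plain pseudoinverse rather than the truncated eigendecomposition of the full method, and the polynomial kernel, map, and data are illustrative choices:

```python
import numpy as np

# Kernel trick sketch: the degree-3 polynomial kernel
# k(a, b) = (1 + a*b)**3 implicitly corresponds to a monomial dictionary
# up to degree 3, so the M x M Gram matrices G_hat[i, j] = k(x_i, x_j)
# and A_hat[i, j] = k(y_i, x_j) can be formed without ever evaluating
# the dictionary explicitly.
rng = np.random.default_rng(2)
g = lambda x: 0.9 * x - 0.1 * x**3
xs = rng.uniform(-1.0, 1.0, 30)               # M = 30 snapshots
ys = g(xs)

kernel = lambda a, b: (1.0 + np.outer(a, b)) ** 3
G_hat = kernel(xs, xs)                        # M x M, in place of Psi_x Psi_x^*
A_hat = kernel(ys, xs)                        # M x M, in place of Psi_y Psi_x^*
K_hat = np.linalg.pinv(G_hat) @ A_hat         # M x M operator approximation
```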
In this paper, we focus on the case where the number of measurements is relatively high for each degree of freedom ($M \gg N$) and obtain a way to reduce the dimensionality of the EDMD approximation of the Koopman operator for systems with symmetries in Sec. IV. A similar modification of the kernel DMD is discussed in Appendix E.
C. Discrete symmetries
In this section, we define the concepts useful for studying the structure of the Koopman operator $\mathcal{K}$ and its approximation $K$ for systems with symmetries. Throughout this section and the rest of the paper, we use an example of a small network of Duffing oscillators to illustrate the definitions and algorithms.
In this paper, we consider dynamical systems [as defined in Eqs. (1) and (3)] that respect discrete symmetries. Such systems are called equivariant with respect to the symmetry group $\Gamma$. We define groups by their "presentations," written $\langle S \mid R \rangle$, where $S$ is a set of generators of the group and $R$ is a set of relations among these generators that define the group. Every element of the group can be written as a product of powers of some of these generators.
For instance, the cyclic group $\mathbb{Z}_n$ is presented by $\langle r \mid r^n = 1 \rangle$. An example of a realization of this group is the set of rotational symmetries of a regular $n$-gon.
To study dynamical systems with symmetries, we need to define the specific actions of the group on a vector space in addition to an abstract presentation of a group $\Gamma$. Let $X \subset \mathbb{R}^n$ be a vector space with elements $x \in X$. We denote the action of $\gamma_\rho$ on the vector space $X$ by $\gamma_\rho x$ if the set of these actions $\Gamma_\rho$ is isomorphic to $\Gamma$. The shorthand $\gamma_\rho x = \gamma x$ is sometimes used in the literature when the action corresponding to the subscript $\rho$ is clear from context (for instance, when it is defined by a permutation matrix of the same degree as the state space of the system); however, we use the $\gamma_\rho$ notation to avoid ambiguity, since the precise definition of the group action in particular cases matters in this paper, as shown, for instance, in Examples II.1 and II.2.
Finally, we define what it means for a dynamical system to be symmetric. Let $\dot{x} = g_c(x)$ be a continuous-time system of differential equations, with $x \in \mathbb{R}^n$ and $g_c : \mathbb{R}^n \to \mathbb{R}^n$. The system is $\Gamma$-equivariant with respect to the actions of $\Gamma_\rho$ if, for all $x \in X$ and $\gamma_\rho \in \Gamma_\rho$,

$$g_c(\gamma_\rho x) = \gamma_\rho\, g_c(x).$$
As discussed in Sec. II, data come in discretized form, so a discrete version of that definition is useful. For discrete-time systems defined by $x_{i+1} = g(x_i)$, equivariance is defined in a similar manner,

$$g(\gamma_\rho x) = \gamma_\rho\, g(x).$$
We note that if a continuous-time system is $\Gamma$-equivariant, so is its discretization. Moreover, the set of trajectories of a $\Gamma$-equivariant system in the state space also respects the symmetries of the system. For discretized systems, this means that if $\{x_0, x_1, \ldots, x_n\}$ is a trajectory in the state space, then $\{\gamma_\rho x_0, \gamma_\rho x_1, \ldots, \gamma_\rho x_n\}$ is a trajectory as well.
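This trajectory property is easy to verify numerically. The sketch below uses an Euler discretization of a single unforced Duffing-type oscillator, $\dot{x} = y$, $\dot{y} = x - x^3 - 0.2y$ (the coefficients and step size are illustrative choices), which is $\mathbb{Z}_2$-equivariant under $(x, y) \mapsto (-x, -y)$:

```python
import numpy as np

# Euler discretization of x' = y, y' = x - x**3 - 0.2*y.  The vector
# field is odd, so the discrete map is Z2-equivariant under s -> -s,
# and the sign-flipped image of any trajectory is itself a trajectory.
def step(state, dt=0.01):
    x, y = state
    return np.array([x + dt * y, y + dt * (x - x**3 - 0.2 * y)])

def trajectory(state0, n=100):
    traj = [np.asarray(state0, dtype=float)]
    for _ in range(n):
        traj.append(step(traj[-1]))
    return np.array(traj)

gamma = lambda s: -s                          # the Z2 action (x, y) -> (-x, -y)
traj = trajectory([0.5, 0.1])
traj_flipped = trajectory(gamma(np.array([0.5, 0.1])))
# The image of the trajectory under gamma is itself a trajectory:
assert np.allclose(gamma(traj), traj_flipped)
```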
An important example of equivariant dynamical systems that many recent works have focused on (such as Refs. 18, 30, 33, and 41) is a system of coupled identical oscillators. In that case, the set (or a subset) of actions under which the system is equivariant is defined by the set of permutation matrices $P$ that commute with the adjacency matrix (the matrix that describes connectivity between the nodes of the network) of that oscillator network. In this case, the action of the group is linear; however, that need not always be the case.
We also need to define the action of the group on the function space, where $f \in \mathcal{F}$ are functions $f : X \to \mathbb{C}$, as

$$\gamma_\rho \circ f(x) = f(\gamma_\rho^{-1} x).$$
Note that the group action is inverted to satisfy the group action axioms (so that the actions on functions form the same group structure as the actions on states). This definition will be useful since the Koopman operator acts on functions (i.e., observables).
Another concept useful for our work is a linear group "representation" $T$, which is a mapping from group elements $\gamma \in \Gamma$ to elements of the general linear group $GL(n, V)$ [the group of invertible matrices of degree $n$ on a vector space $V$, with matrix multiplication as the group operation]; in our case, we are interested in $V = \mathbb{C}^n$. The characters $\chi_i(\gamma)$ of a group representation $T_i(\gamma)$ are defined as $\chi_i(\gamma) = \mathrm{Tr}(T_i(\gamma))$.
A representation is called irreducible if it has no nontrivial invariant subspaces (meaning that the representation matrices corresponding to the group elements cannot be simultaneously block diagonalized into a common nontrivial block form). For each $\Gamma$, we can obtain all of its irreducible matrix representations. We denote the matrices mapping $\gamma \in \Gamma$ to $p \times p$ matrices by $R_i(\gamma)$, where the index $i$ labels the $i$th irreducible representation. Irreducible representations are defined up to an isomorphism. For the purposes of this paper, it suffices to use either the unitary irreducible representations or their characters.
A vector space, e.g., the space of square-integrable functions $\mathcal{F}$, can be uniquely decomposed into components that transform like the $i$th irreducible representation of $\Gamma$ under the actions of $\Gamma_\rho$. These components are called "isotypic components."^{15} We denote them by $\mathcal{F}_i$. An "isotypic decomposition" of the square-integrable function space with respect to $\Gamma_\rho$ is then defined as $\mathcal{F} = \bigoplus_i \mathcal{F}_i$, where $\bigoplus$ denotes the direct sum here and hereafter. We illustrate the construction of an isotypic decomposition using an example of a $\mathbb{Z}_2$-equivariant system.
Symmetries of a single Duffing oscillator dynamics and isotypic components in the function space corresponding to the actions of its symmetry group.
We now illustrate the isotypic component decomposition of $\mathbb{Z}_2$ in the function space. $\mathbb{Z}_2$ has two one-dimensional irreducible representations: the trivial representation defined by $R_{\mathrm{tr}}(r) = 1$ and the sign representation defined by $R_{\mathrm{sign}}(r) = -1$. Then, the space of square-integrable functions $\mathcal{F}$ can be decomposed into $\mathcal{F} = \mathcal{F}_{\mathrm{tr}} \oplus \mathcal{F}_{\mathrm{sign}}$, where $\mathcal{F}_{\mathrm{tr}} = \{f : r_s \circ f = f(-x, -y) = f(x, y)\}$ and $\mathcal{F}_{\mathrm{sign}} = \{f : r_s \circ f = f(-x, -y) = -f(x, y)\}$. In this case, the sets of functions $\mathcal{F}_{\mathrm{tr}}$ and $\mathcal{F}_{\mathrm{sign}}$, consisting of even and odd functions, respectively, transform like the trivial and sign irreducible representations with respect to the sign flip as the group generator action.
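The two projections can be written explicitly as $P_{\mathrm{tr}} f = \tfrac{1}{2}[f(x, y) + f(-x, -y)]$ and $P_{\mathrm{sign}} f = \tfrac{1}{2}[f(x, y) - f(-x, -y)]$. A short sketch (the observable $f$ is an arbitrary illustrative choice):

```python
import numpy as np

# Isotypic projections for Z2 acting by the sign flip (x, y) -> (-x, -y):
# the even part transforms like the trivial representation and the odd
# part like the sign representation, and f is their sum.
f = lambda x, y: 1.0 + x + x * y + x**3 + y**2   # an arbitrary observable

f_even = lambda x, y: 0.5 * (f(x, y) + f(-x, -y))
f_odd  = lambda x, y: 0.5 * (f(x, y) - f(-x, -y))

pts = np.random.default_rng(3).normal(size=(20, 2))
x, y = pts[:, 0], pts[:, 1]
assert np.allclose(f(x, y), f_even(x, y) + f_odd(x, y))   # decomposition
assert np.allclose(f_even(-x, -y), f_even(x, y))          # trivial rep
assert np.allclose(f_odd(-x, -y), -f_odd(x, y))           # sign rep
```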
We now extend the example to a network of Duffing oscillators and explore additional permutation symmetries.
This general coupling scheme is used to model many systems in the literature.^{33,41}
We now consider the case of a 3-node network. Depending on the coupling terms, the system may be $\Gamma$-equivariant with respect to different symmetry groups that act by permuting node indices. Some examples are as follows.
If all coupling strengths $\eta_{ij}$ are equal, the network has $D_3$ symmetry. This case is shown in Fig. 1(a). Let the state of the system be defined by $x = (x_1\ y_1\ x_2\ y_2\ x_3\ y_3)^T$. Then, the symmetry group is presented by $D_3 = \langle r, \kappa \mid r^3 = \kappa^2 = e,\ \kappa r \kappa = r^{-1} \rangle$ and generated by the actions

$$r_p = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix} \otimes I_{2 \times 2}, \qquad \kappa_p = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \otimes I_{2 \times 2}.$$
If the coupling strengths obey the conditions $\eta_{ij} \neq \eta_{ji}$ and $\eta_{ij} = \eta_{jk}$ for $i \neq k$, the network has $\mathbb{Z}_3$ symmetry. This case is shown in Fig. 1(b). The symmetry group is presented by $\mathbb{Z}_3 = \langle r \mid r^3 = e \rangle$ and generated by the action $r_p$ defined above.
If the coupling strengths obey the conditions $\eta_{12} = \eta_{21} = \eta_{13} = \eta_{31}$, as well as $\eta_{23} = \eta_{32}$, and no other equalities hold, the network has $\mathbb{Z}_2$ symmetry. This case is shown in Fig. 1(c). The symmetry group is presented by $\mathbb{Z}_2 = \langle \kappa \mid \kappa^2 = e \rangle$ and generated by the action $\kappa_p$ defined above.
Additionally, each node still has $\mathbb{Z}_2$ symmetry with respect to the action $r_s$, which is not broken since the coupling function is odd. That symmetry is also depicted in Fig. 1. The isotypic components of the entire symmetry group are then intersections of the isotypic components of $\mathbb{Z}_2$ (acting by a sign flip) and of the symmetry group of the network geometry [acting by a permutation, e.g., $D_3$ for case (a) of this example, also illustrated in Fig. 3(a)].
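The permutation actions for case (a) and the defining relations of $D_3$ can be verified directly; a small sketch using the generator matrices $r_p$ and $\kappa_p$ given above:

```python
import numpy as np

# D3 generators acting on the stacked state (x1, y1, x2, y2, x3, y3):
# permutation matrices on the node indices, Kronecker-multiplied by I_2
# so that each node's (x_i, y_i) pair moves together.
P_r = np.array([[0, 1, 0],
                [0, 0, 1],
                [1, 0, 0]])
P_k = np.array([[1, 0, 0],
                [0, 0, 1],
                [0, 1, 0]])
r_p = np.kron(P_r, np.eye(2))
k_p = np.kron(P_k, np.eye(2))

# Verify the defining relations of D3 = <r, k | r^3 = k^2 = e, k r k = r^-1>:
I = np.eye(6)
assert np.allclose(np.linalg.matrix_power(r_p, 3), I)
assert np.allclose(k_p @ k_p, I)
assert np.allclose(k_p @ r_p @ k_p, np.linalg.inv(r_p))
```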
Any function can be rewritten as a sum of its projections onto the different isotypic components. The procedure is outlined in Sec. III.
III. PROPERTIES OF THE KOOPMAN OPERATOR FOR SYSTEMS WITH SYMMETRIES
In this section, we consider the structure of the eigenspace of the Koopman operator of $\Gamma$-equivariant systems. We show how to obtain a particular eigenbasis of the system corresponding to the isotypic decomposition of the function space and demonstrate that the isotypic decomposition induces a block-diagonal structure on the matrix representation of $\mathcal{K}$.
The space of eigenfunctions of the Koopman operator $\mathcal{K}$ with eigenvalue $\lambda$ for a $\Gamma$-equivariant system is $\Gamma$-invariant.
We now consider a particular form of the eigenbasis of the Koopman operator that induces a block-diagonal structure of the matrix representation of the action of the Koopman operator $\mathcal{K}$. The result quoted below is useful for that purpose.
(Theorem 3.5 in Chapter XII of Ref. 17)
Let $\Gamma$ be a compact Lie group acting on the vector space $V$ decomposed into isotypic components $V = W_1 \oplus \cdots \oplus W_t$. Let $A : V \to V$ be a linear mapping commuting with $\Gamma$. Then, $A(W_k) \subset W_k$.
This result is applicable to finite symmetry groups. Isotypic components of $\mathcal{F}$ with respect to $\Gamma$ induce a block-diagonal structure of the matrix representation of the Koopman operator: since $\mathcal{K}$ and $\Gamma$ commute, $\mathcal{K}(\mathcal{F}_k) \subset \mathcal{F}_k$. This block structure can be exploited in finding Koopman operator approximations, as we show in Sec. IV. Thus, we need to be able to obtain an isotypic component basis from an arbitrary function dictionary. This is a well-defined procedure,^{11} outlined below. Functions obtained via isotypic decomposition are useful for calculations in many areas of physics; for instance, they can simplify finding approximate solutions to the Schrödinger equation, or the study of crystallographic point groups.^{11,42} The construction is also widely applied to dynamical systems, for instance, to study steady states and their stability using equivariant bifurcation theory.
Suppose we start from an arbitrary basis function dictionary $\mathcal{D}_\psi = \{\psi_i\}$. Each of these functions can be expanded in the isotypic component basis with at least one nonzero coefficient $\alpha_{mn}^{p}$,

$$\psi_i = \sum_{p} \sum_{m, n = 1}^{d_p} \alpha_{mn}^{p}\, \xi_{mn}^{p}.$$
Here, $\xi_{mn}^{p}$ is a basis function in the $p$th isotypic component of $\mathcal{F}$ with respect to the actions of the symmetry group $\Gamma$, and $d_p$ is the dimension of the corresponding irreducible representation. Alternatively, the expansion can be thought of as a sum over all inequivalent (nonisomorphic) irreducible representations of $\Gamma$, where $\xi_{mn}^{p}$ transforms as the $(m, n)$th element of the $p$th irreducible representation of $\Gamma$.^{11} We define a projection operator and form a new function basis consisting of the functions $\{\xi_{mn}^{p}\}$, as outlined below.
The projection operator is defined as

$$P_{mn}^{p} = \frac{d_p}{|\Gamma|} \sum_{\gamma \in \Gamma} [R_p(\gamma)]_{mn}^{*}\, \gamma_\rho. \qquad (27)$$
Here, $[R_p(\gamma)]_{mn}$ denotes the element in the $n$th row and the $m$th column of the $p$th unitary irreducible representation evaluated at $\gamma \in \Gamma$, and $\gamma_\rho$ is the group action. We can form an orthonormal basis $\mathcal{D}_\xi = \{\xi_i\}$ using the projection operator as follows:

$$\xi_{mn}^{p} = \frac{1}{c_n^{p}}\, P_{mn}^{p}\, \psi.$$
Here, $c_n^{p} = \langle P_{nn}^{p} \psi, P_{nn}^{p} \psi \rangle^{1/2}$, where $\langle \cdot, \cdot \rangle$ denotes the inner product. This normalization can be omitted for our purposes, since the scaling of the basis functions does not affect the EDMD results (namely, the approximation matrix $K$, along with its eigenvalues and eigenvectors). Similarly, the overall factor $d_p / |\Gamma|$ of the projection operator in Eq. (27) only affects the scaling of the basis functions and can therefore be eliminated.
Equivalently, due to the orthogonality relations for the characters of irreducible representations, the projection operator onto the $p$th isotypic component can be obtained using the following expression:

$$P^{p} = \frac{d_p}{|\Gamma|} \sum_{\gamma \in \Gamma} \chi_p(\gamma)^{*}\, \gamma_\rho.$$
Here, $\chi_p(\gamma)$ is the character of the $p$th irreducible representation of $\Gamma$. If this formula is used, each irreducible representation of degree $d_p$ provides one basis function, and the $d_p^2 - 1$ other basis functions can be formed using the Gram–Schmidt orthogonalization process.^{11,33,42}
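As a sketch of the character-based projection, consider $D_3 \cong S_3$ acting on scalar functions of $(x_1, x_2, x_3)$ by permuting the arguments (the per-node $(x, y)$ pairing is dropped to keep the example small, and the dictionary function is an arbitrary choice). Summing the projections onto all isotypic components recovers the original function:

```python
import numpy as np
from itertools import permutations

# Character projection (P^p f)(x) = (d_p/|G|) * sum_g chi_p(g)* f(g x)
# for S3.  Characters of the three irreducible representations
# (trivial, sign, 2D), read off from the fixed-point count of each
# permutation: identity | transpositions | 3-cycles.
perms = list(permutations(range(3)))          # all 6 group elements
def char(p):
    fixed = sum(p[i] == i for i in range(3))
    if fixed == 3:  return (1.0, 1.0, 2.0)    # identity
    if fixed == 1:  return (1.0, -1.0, 0.0)   # transposition (reflection)
    return (1.0, 1.0, -1.0)                   # 3-cycle (rotation)
dims = (1, 1, 2)

def project(f, p_idx):
    return lambda x: (dims[p_idx] / 6.0) * sum(
        char(p)[p_idx] * f(x[list(p)]) for p in perms)

f = lambda x: x[0] + x[0] * x[1] ** 2         # an arbitrary dictionary function
parts = [project(f, i) for i in range(3)]     # one projection per irrep

x = np.random.default_rng(4).normal(size=3)
assert np.isclose(f(x), sum(part(x) for part in parts))  # f = sum of projections
```

The projection onto the trivial representation is the symmetrization of $f$, i.e., it is invariant under any permutation of the arguments.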
Once an isotypic component basis is obtained, the action of the Koopman operator on the function space can be represented by a block-diagonal matrix. Each irreducible unitary representation of dimension $d_p$ corresponds to $d_p$ blocks of size $d_p \times d_p$ in that matrix. A similar analysis applies to the Koopman operator approximation $K$. The reason why this additional decomposition works can be found in Appendix A.
IV. IMPLICATIONS FOR EDMD
A. Constructing a basis for systems with known symmetries
In this section, we show that the approximation of $\mathcal{K}$ obtained using EDMD can be reduced to a block-diagonal structure similar to that of $\mathcal{K}$ under certain assumptions on the data. We provide some examples of constructing an isotypic component basis from a given function dictionary. We highlight that the basis depends on both the structure of $\Gamma$ and the definition of its actions $\Gamma_\rho$.
First, we establish that the Koopman operator approximation $K$ commutes with the actions $\gamma_\rho$ of $\Gamma$ if the data used in the calculation respect the symmetry, meaning that the set of pairs of data points satisfies the condition,
In other words, the set of trajectories is closed under the action of the symmetry group of the underlying dynamical system. This condition on trajectories can be achieved by averaging over the symmetry group, a technique that has been used in the literature on other data-driven methods, for instance, proper orthogonal decomposition.^{2} We note that this requirement can sometimes be relaxed, as discussed in Appendix C.
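Closing a data set under the group action can be sketched as follows; the snapshot matrices and the $Z_2$ action used here are hypothetical toy inputs, not data from the paper.

```python
import numpy as np

def symmetrize_data(X, Y, group_actions):
    """Close snapshot pairs (x_i, y_i) under a symmetry group.

    X, Y are (m, d) arrays of paired snapshots; group_actions is a list of
    (d, d) matrices, one per group element. The stacked output contains
    (gamma x, gamma y) for every pair and every gamma, so the trajectory
    set is closed under the group action, as required by Eq. (28).
    """
    Xs = np.vstack([X @ g.T for g in group_actions])
    Ys = np.vstack([Y @ g.T for g in group_actions])
    return Xs, Ys

# Z2 acting on R^2 by a global sign flip (toy data).
actions = [np.eye(2), -np.eye(2)]
X = np.array([[1.0, 2.0]])
Y = np.array([[0.5, -1.0]])
Xs, Ys = symmetrize_data(X, Y, actions)
# The flipped pair (-x, -y) is now present in the data set.
assert np.allclose(Xs[1], -X[0]) and np.allclose(Ys[1], -Y[0])
```

The cost of this construction is a factor of $|\Gamma|$ more snapshot pairs, which is what the block-wise computations described below recoup.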
In order to perform further simplifications, we pick a particular order of group elements $\{\gamma_1, \dots, \gamma_{|\Gamma|}\}$ and create the vectors $\Psi_x$ (and analogously $\Psi_y$) according to that ordering,
Given the ordering of the group elements, we can also construct the permutation representation of the group such that
By Cayley’s theorem, such permutations form a group isomorphic to $\Gamma$. Determining the actions $P_{\gamma_k}$ of the group generators is sufficient to find the actions of all group elements. Let $\mathbf{P}_{\gamma_k} = P_{\gamma_k} \otimes I_{n\times n}$. We note that $(\mathbf{P}_{\gamma_k})^* = (\mathbf{P}_{\gamma_k})^{-1}$. It can be shown that
By definition, $A = \Psi_x^* \Psi_y$. We note that for symmetric trajectories satisfying Eq. (28),
Therefore,
Thus, $A$ and $G$ (for analogous reasons) commute with the action of the symmetry group.
If $G$ is invertible and $G$ commutes with $\gamma_\rho$, then $G^{-1}$ commutes with $\gamma_\rho$ as well. Then,
If $G$ is not invertible, the commutativity result still holds for $G^+$. $G$ is a normal matrix since it satisfies $GG^* = G^*G$. In Appendix B, we show that if $G$ is normal, $GG^+ = G^+G$, so $G$ commutes with its Moore–Penrose pseudoinverse, and, therefore, the actions of $K$ and $\Gamma$ commute.
Since $K$ commutes with the actions of $\Gamma$, $KF^i \subset F^i$. This shows that $K$ can be block-diagonalized in the same way as $\mathcal{K}$.
Suppose we start from a dictionary of observables. Since that dictionary is not necessarily an isotypic component dictionary corresponding to $\Gamma$ and its action, the dictionary needs to be modified using the procedure outlined in Sec. III in order to obtain a block diagonal matrix $K$. In the example below, we show explicitly how to perform this transformation into the isotypic component basis.
In order for the basis to faithfully represent the symmetries of the system, we require that
 The dictionary is closed under the action of the symmetries of the system: if $\psi \in D$, then $\gamma_\rho \psi \in \operatorname{span}(D)$. (35)
 Each isotypic component is present in the isotypic component decomposition of the original function basis: $\forall m, p \;\exists\, \psi \in D$ such that $P^p_{mn}\psi \neq 0$. (36)
For instance, using a monomial basis for a $D_3$-equivariant system does not satisfy the second requirement.
If these requirements are satisfied, the change of basis does not affect the result obtained by applying the EDMD algorithm, as shown in Appendix D. Additionally, we note that the eigenvalues of $K$ do not typically have the same degeneracy properties as the eigenvalues of $\mathcal{K}$, but the symmetries of the underlying dynamical system are preserved in trajectory reconstructions.
Example IV.1: Constructing an isotypic component basis for a single Duffing oscillator.
We start from the system with $Z_2$ symmetry described in Example II.1. Suppose a polynomial basis is chosen to form the basis functions, for instance, $D_{\mathrm{poly}} = \{1, x_1, x_2, x_1^2, x_2^2, x_1 x_2, \dots\}$. Each of the dictionary items can be written as $p_{mn}(x_1, x_2) \equiv x_1^m x_2^n$. For even $m+n$, $p_{mn} \in F_{\mathrm{tr}}$, and for odd $m+n$, $p_{mn} \in F_{\mathrm{st}}$, where $F_{\mathrm{tr}}$ and $F_{\mathrm{st}}$ are the isotypic components corresponding to the trivial and standard irreducible representations of $Z_2$, as discussed in Example II.1. Thus, using $D_{\mathrm{poly}}$ results in a sparse matrix $K$, and $K$ is block diagonal after reordering the basis functions.
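The parity-based reordering in Example IV.1 can be written as a short sketch (illustrative code, not from the paper; the degree cutoff is arbitrary):

```python
# Sort the monomial dictionary p_mn(x1, x2) = x1^m x2^n into the two Z2
# isotypic components: even total degree -> trivial component F_tr,
# odd total degree -> F_st.
max_deg = 3
monomials = [(m, n) for m in range(max_deg + 1)
             for n in range(max_deg + 1) if m + n <= max_deg]

F_tr = [(m, n) for (m, n) in monomials if (m + n) % 2 == 0]
F_st = [(m, n) for (m, n) in monomials if (m + n) % 2 == 1]

# The action (x1, x2) -> (-x1, -x2) multiplies x1^m x2^n by (-1)^(m+n),
# so F_tr functions are invariant and F_st functions change sign;
# listing the dictionary as F_tr then F_st makes K block diagonal.
assert (0, 0) in F_tr and (1, 0) in F_st
assert len(F_tr) + len(F_st) == len(monomials)
```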
Another possible choice for the set of dictionary functions is a set of radial basis functions. This type of function was used to find the EDMD approximation of the Koopman operator in Ref. 48. We use an initial dictionary $D_\psi$ of $n$ meshfree radial basis functions. The radial basis function centers can be obtained either by $k$-means clustering of the data or by sampling from a predetermined distribution. As an example, we chose the specific form $\psi(c, x) = r_c \log(r_c)$, where $c$ is a $2$-dimensional radial basis function center, and $r_{c,x} \equiv \|x - c\|^{1/2}$.
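A radial basis dictionary can be made to respect the $Z_2$ symmetry by closing the center set under $c \to -c$ and forming symmetric/antisymmetric combinations, which lie in the trivial and sign isotypic components, respectively. The following sketch assumes the functional form $\psi(c, x) = r_c \log(r_c)$ with $r_{c,x} = \|x - c\|^{1/2}$ from the text; the combination step is an illustrative construction.

```python
import numpy as np

def psi(c, x):
    # Radial basis function of the form used in the text:
    # psi(c, x) = r_c log(r_c) with r_c = ||x - c||^(1/2).
    r = np.linalg.norm(np.asarray(x) - np.asarray(c)) ** 0.5
    return 0.0 if r == 0.0 else r * np.log(r)

# Symmetric and antisymmetric combinations over the Z2 orbit {c, -c};
# these lie in the trivial and sign isotypic components, respectively.
def xi_tr(c, x):
    return psi(c, x) + psi(-np.asarray(c), x)

def xi_st(c, x):
    return psi(c, x) - psi(-np.asarray(c), x)

c = np.array([0.3, -0.7])
x = np.array([1.1, 0.4])
gx = -x  # action of the nontrivial element of Z2
assert np.isclose(xi_tr(c, gx), xi_tr(c, x))    # invariant
assert np.isclose(xi_st(c, gx), -xi_st(c, x))   # changes sign
```

The check relies on $\psi(c, -x) = \psi(-c, x)$, which holds because $\psi$ depends only on $\|x - c\|$.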
Example IV.2: Constructing an isotypic component basis for a network of Duffing oscillators from a given basis.
We also consider the more complicated case of a system of Duffing oscillators with identical coupling, as depicted in Fig. 1(a). In that case, the system has $Z_2 \times D_3$ symmetry. Suppose that we want to construct an isotypic component basis from a given function dictionary $D$. As an example, we use an initial dictionary $D_\psi$ of $n$ meshfree radial basis functions. Analogously to Example IV.1, each function can be presented in the form $\psi(c, x) = r_c \log(r_c)$, where $c$ is a $6$-dimensional radial basis function center, and $r_{c,x} = \|x - c\|^{1/2}$. In order to preserve the symmetries of the system, we need to have dictionary elements corresponding to acting on the basis functions by each $\gamma_\rho \in \Gamma_\rho$. Due to the form of these functions, $\gamma_\rho \circ \psi(c, x) = \psi(\gamma_\rho^{-1} c, x)$.
The symmetry group $Z_2$ has two degree-1 irreducible representations, the trivial and the sign representation, discussed in Example II.1. The symmetry group $D_3$, with generators $r$ (rotation) and $\kappa$ (reflection), has three irreducible representations:
Trivial representation $R_{\mathrm{tr}}$: $R_{\mathrm{tr}}(r) = 1$, $R_{\mathrm{tr}}(\kappa) = 1$;
Sign representation $R_{\mathrm{sign}}$: $R_{\mathrm{sign}}(r) = 1$, $R_{\mathrm{sign}}(\kappa) = -1$;
Standard representation $R_{\mathrm{st}}$: $R_{\mathrm{st}}(r) = \begin{pmatrix} \omega & 0 \\ 0 & \omega^2 \end{pmatrix}$, $R_{\mathrm{st}}(\kappa) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. Here, $\omega = e^{2\pi i/3}$.
If we use $\Xi$ as a basis, we obtain $K$ decomposed into $8$ blocks, where each irreducible representation of $Z_2 \times D_3$ of dimension $d_p$ contributes $d_p$ blocks.
As shown in the examples above, we can construct a basis that block diagonalizes the Koopman operator matrix approximation $K$ from the elements of an arbitrary basis. Since the off-block-diagonal elements of the matrix are known a priori to be zero, we do not need to compute these elements explicitly. This suggests that, for systems with symmetries, it is more efficient to perform the EDMD algorithm for the isotypic decomposition blocks. We denote the number of conjugacy classes, or irreducible representations, of $\Gamma$ by $r_\Gamma$. In that case, instead of performing $O((m r_\Gamma)^\alpha)$ operations of matrix inversion, multiplication, and eigendecomposition, it is sufficient to perform these operations for each of the $r_\Gamma$ blocks, each requiring $O(m^\alpha)$ operations. Here, $2 < \alpha < 3$, e.g., as seen in Ref. 21. Even though the algorithmic complexity only differs by a factor that scales with the size of the group, which is fixed for any given system, in practice, the computation is more efficient when EDMD specific to $\Gamma$-equivariant systems is used. We also note that each $d_p$-dimensional irreducible representation contributes $d_p$ “equal” blocks, each of size $d_p \times d_p$, to $K_{\mathrm{symm}}$, which further simplifies the calculation. Moreover, in the case of networks of high dimensionality, it allows parallel eigendecomposition computation for the isotypic component blocks. Table I summarizes the modified EDMD algorithm for $\Gamma$-equivariant systems and highlights that the order of computations can be lowered.
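The computational pattern can be sketched on synthetic block-diagonal matrices standing in for $G$ and $A$ as they would appear in an isotypic component basis with symmetric data (an illustrative sketch; the block sizes are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [3, 2, 2]   # stand-ins for isotypic component dimensions

def block_diag(blocks):
    # Assemble a block diagonal matrix from a list of square blocks.
    n = sum(b.shape[0] for b in blocks)
    M = np.zeros((n, n))
    i = 0
    for b in blocks:
        s = b.shape[0]
        M[i:i + s, i:i + s] = b
        i += s
    return M

G_blocks = []
for s in sizes:
    B = rng.normal(size=(s, s))
    G_blocks.append(B @ B.T + np.eye(s))   # Hermitian positive definite blocks
A_blocks = [rng.normal(size=(s, s)) for s in sizes]
G, A = block_diag(G_blocks), block_diag(A_blocks)

# The full EDMD solve K = G^+ A agrees with the cheaper block-wise solve,
# which also allows the blocks to be processed in parallel.
K_full = np.linalg.pinv(G) @ A
K_blockwise = block_diag([np.linalg.pinv(Gb) @ Ab
                          for Gb, Ab in zip(G_blocks, A_blocks)])
assert np.allclose(K_full, K_blockwise)
```

The pseudoinverse of a block-diagonal matrix is the block-diagonal matrix of pseudoinverses, which is what makes the per-block solve exact rather than approximate.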
Table I. Standard EDMD vs EDMD for $\Gamma$-equivariant systems (the algorithm steps in the table body were not recoverable from the source).
Koopman eigenfunctions and eigenmodes have many applications: dimensionality reduction, finding basins of attraction, characterizing coherence between oscillatory systems, etc. Block diagonalizing $K$ allows efficient computation of the Koopman eigenvalues, eigenfunctions, and modes.
Kernel DMD is closely related to the EDMD algorithm. It relies on calculating the eigentriples associated with $K$ from a dual matrix $\hat{K}$ evaluated using the kernel trick commonly applied in machine learning.^{49} This method can be computationally advantageous when the number of basis functions exceeds the number of available measurements of the state of the system. We find that kernel DMD can also be modified to include symmetry considerations in order to optimize the calculations. The method is provided in Appendix E.
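A minimal sketch of the kernel-DMD computation pattern follows (illustrative code; the matrix definitions follow Appendix E, while the factorization details and the linear-map sanity check are assumptions, not taken from the paper):

```python
import numpy as np

def poly_kernel(x, y, alpha=2):
    # Polynomial kernel k(x, y) = (1 + x . y)^alpha; its implicit feature
    # space is spanned by monomials up to degree alpha.
    return (1.0 + np.dot(x, y)) ** alpha

def kernel_dmd(X, Y, kernel):
    """Minimal kernel-DMD sketch: form G_hat and A_hat from the kernel and
    build the dual matrix K_hat = (Sigma^+ Q^T) A_hat (Q Sigma^+)."""
    m = X.shape[0]
    G_hat = np.array([[kernel(X[i], X[j]) for j in range(m)] for i in range(m)])
    A_hat = np.array([[kernel(X[i], Y[j]) for j in range(m)] for i in range(m)])
    evals, Q = np.linalg.eigh(G_hat)          # G_hat = Q Sigma^2 Q^T
    evals = np.clip(evals, 0.0, None)
    keep = evals > 1e-10 * evals.max()        # discard numerically zero modes
    s_pinv = np.where(keep, 1.0 / np.sqrt(np.where(keep, evals, 1.0)), 0.0)
    S_pinv = np.diag(s_pinv)
    return S_pinv @ Q.T @ A_hat @ Q @ S_pinv

# Sanity check on a linear map y = L x: the nonzero kernel-DMD eigenvalues
# should contain the eigenvalues of L (here 0.9 and 0.5).
rng = np.random.default_rng(1)
L = np.diag([0.9, 0.5])
X = rng.normal(size=(20, 2))
Y = X @ L.T
ev = np.linalg.eigvals(kernel_dmd(X, Y, poly_kernel))
assert any(np.isclose(ev, 0.9, atol=1e-5)) and any(np.isclose(ev, 0.5, atol=1e-5))
```

Because the degree-2 polynomial feature space is invariant under linear dynamics, the nonzero eigenvalues of $\hat{K}$ coincide with those of the EDMD matrix in that feature space.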
B. Consequences of symmetry assumptions in the basis
Assume that the data are symmetric, as defined by Eq. (28), with respect to the symmetry group $\Gamma$. A “perfect” basis is one respecting the isotypic decomposition of $\Gamma$. Suppose instead that the basis functions belong to isotypic components of $\Sigma \neq \Gamma$. That choice will affect the structure of $K$. We study that structure by evaluating the elements of $A$, since $K$ and $G^+$ have the same structure as $A$.
If the system is $\Gamma$-equivariant, $\Sigma \subset \Gamma$, and the set of actions of $\Sigma$ is a subset of the actions of $\Gamma$, then the system is also $\Sigma$-equivariant. Thus, a basis respecting the isotypic decomposition of $\Sigma$ will produce the block diagonal structure corresponding to $\Sigma$. This means that the choice of basis results in a block diagonal $K$, but its structure does not provide any additional information about the symmetries of the system.
If the system is $\Gamma$-equivariant and $\Gamma \subset \Sigma$, functions belonging to particular isotypic components of $\Sigma$ are not preserved by the action of $K$. In the case of symmetric trajectories, that can provide information about what the true symmetries of the system are.
A simple case corresponds to $\Sigma = \Sigma_0 \times \Gamma$. In this case, every action of $\Sigma_0$ commutes with every action of $\Gamma$. Each isotypic component of $F$ with respect to $\Sigma$ can be expressed as $F^{pq} = F^p_{\Sigma_0} \cap F^q_{\Gamma}$, where $F^p_{\Sigma_0}$ denotes the $p$th isotypic component of $\Sigma_0$. In this case, the off-diagonal blocks corresponding to interactions between isotypic components $F^{p_1 q_1}$ and $F^{p_2 q_2}$ are zero if $q_1 \neq q_2$, and generally nonzero otherwise. For instance, consider a network of three Duffing oscillators similar to that discussed in Example IV.2, but with no permutation symmetry, analyzed with $\Sigma = Z_2 \times D_3$, where the action of $Z_2$ is a sign flip in the nodal dynamics. The isotypic components corresponding to the $Z_2$ symmetry will not interact with each other, resulting in two blocks in $K$.
Next, we consider a more general case. We denote the $p$th isotypic component of $F$ with respect to the symmetry group $\Sigma$ by $F^p_\Sigma$. We note that if the following conditions hold:
$F^p_\Sigma \cap F^{q_1}_\Gamma \neq \emptyset$, (44)
$F^p_\Sigma \cap F^{q_2}_\Gamma \neq \emptyset$, (45)
where $q_1$ and $q_2$ index different isotypic components of $\Gamma$, then the off-diagonal blocks of $K$ corresponding to interactions between those components are generally nonzero.
The condition $F^p_\Sigma \cap F^q_\Gamma \neq \emptyset$ is equivalent to
where $f$ is an arbitrary function and $P^p_\Sigma$ denotes the projection operator onto the $p$th isotypic component with respect to the symmetry group $\Sigma$,
Let $H$ be the set of left cosets of $\Gamma$ in $\Sigma$, defined as $H = \Sigma/\Gamma = \{\sigma\Gamma : \sigma \in \Sigma\}$, where $\sigma\Gamma = \{\sigma\gamma : \gamma \in \Gamma\}$.^{15} Thus, the condition of Eq. (46) holds if, for all $h \in H$,
Using Eq. (48), the structure of $\Gamma$ can be determined given the structure of $K$ and the group $\Sigma$ used in the calculation. Characters of irreducible representations are tabulated for symmetry groups of small order, and scaling up to larger orders is possible using computational group theory software. Below is an example for the subgroups of the dihedral group $D_3$.
Coupled Duffing oscillators: $(Z_2 \times Z_2)$- or $(Z_2 \times Z_3)$-equivariant systems with $Z_2 \times D_3$ basis functions.
We consider the different coupling schemes of networks of $3$ Duffing oscillators shown in Figs. 1(b) and 1(c). We first note that the $Z_2$ symmetry generated by a sign flip is still present in both cases, so two noninteracting blocks corresponding to the irreducible representations of that group with respect to that action are still present. Now, we focus on the structure of $K$ within each of these noninteracting blocks.
$F_{\mathrm{tr},Z_3} \cap F_{\mathrm{tr},D_3} \neq \emptyset$
Functions belonging to $F_{\mathrm{tr},Z_3}$ satisfy the following condition:
$F_{\mathrm{tr},Z_3} = \{f : f(x_3,x_1,x_2) = f(x_1,x_2,x_3)\}$.
 Functions belonging to $F_{\mathrm{tr},D_3}$ satisfy the following conditions: $F_{\mathrm{tr},D_3} = \{f : f(x_3,x_1,x_2) = f(x_1,x_2,x_3),\; f(x_1,x_3,x_2) = f(x_1,x_2,x_3)\}$.
Thus, $F_{\mathrm{tr},Z_3} \cap F_{\mathrm{tr},D_3} = F_{\mathrm{tr},D_3}$.
$F_{\mathrm{tr},Z_3} \cap F_{\mathrm{sign},D_3} \neq \emptyset$
This can be shown in a similar fashion.
$(F_{\omega,Z_3} \cup F_{\omega^2,Z_3}) \cap F_{\mathrm{st},D_3} = F_{\mathrm{st},D_3}$
Functions belonging to $F_{\omega,Z_3}$ and $F_{\omega^2,Z_3}$ satisfy the following conditions:
$F_{\omega,Z_3} = \{f : f(x_3,x_1,x_2) = \omega f(x_1,x_2,x_3)\}$,
$F_{\omega^2,Z_3} = \{f : f(x_3,x_1,x_2) = \omega^2 f(x_1,x_2,x_3)\}$.
 Functions belonging to $F_{\mathrm{st},D_3}$ satisfy the following conditions: $F_{\mathrm{st},D_3} = \{f_1, f_2 : f_1(x_3,x_1,x_2) = \omega f_1(x_1,x_2,x_3),\; f_2(x_3,x_1,x_2) = \omega^2 f_2(x_1,x_2,x_3),\; f_1(x_1,x_3,x_2) = f_2(x_1,x_2,x_3),\; f_2(x_1,x_3,x_2) = f_1(x_1,x_2,x_3)\}$.
Thus, $(F_{\omega,Z_3} \cup F_{\omega^2,Z_3}) \cap F_{\mathrm{st},D_3} = F_{\mathrm{st},D_3}$.
$K F_{\mathrm{tr},D_3} \cap F_{\mathrm{sign},D_3} \neq \emptyset$
To see that this is the case, we refer back to the conditions in Eqs. (44) and (45). Since the intersections of both $F_{\mathrm{tr},D_3}$ and $F_{\mathrm{sign},D_3}$ with $F_{\mathrm{tr},Z_3}$ are nonempty, the interaction between these components produces nonzero elements in $K$.
$K F_{\mathrm{sign},D_3} \cap F_{\mathrm{tr},D_3} \neq \emptyset$
This can be shown in a similar fashion.
Other off-diagonal blocks are zero.
For instance, since there is no isotypic component of $Z_3$ whose intersections with both $F_{\mathrm{tr},D_3}$ and $F_{\mathrm{st},D_3}$ are simultaneously nonempty, $K F_{\mathrm{tr},D_3} \cap F_{\mathrm{st},D_3} = \emptyset$ and $K F_{\mathrm{st},D_3} \cap F_{\mathrm{tr},D_3} = \emptyset$, corresponding to blocks of zeros in $K$.
Now, let $\Sigma = D_3$ and $\Gamma = Z_2$. Here, $Z_2 = \{e, \kappa \mid \kappa^2 = e\}$, and $\kappa_{D_3}$ and $\kappa_{Z_2}$ have the same action. The isotypic component decomposition with respect to $Z_2$ is $F = F_{\mathrm{tr},Z_2} \oplus F_{\mathrm{sign},Z_2}$.
We note that
$F_{\mathrm{tr},Z_2} \cap F_{\mathrm{tr},D_3} \neq \emptyset$,
$F_{\mathrm{tr},Z_2} \cap F_{\mathrm{st},D_3} \neq \emptyset$,
$F_{\mathrm{sign},Z_2} \cap F_{\mathrm{sign},D_3} \neq \emptyset$,
$F_{\mathrm{sign},Z_2} \cap F_{\mathrm{st},D_3} \neq \emptyset$,
$F_{\mathrm{tr},Z_2} \cap F_{\mathrm{sign},D_3} = \emptyset$,
$F_{\mathrm{sign},Z_2} \cap F_{\mathrm{tr},D_3} = \emptyset$.
Therefore, $K F_{\mathrm{tr},D_3} \cap F_{\mathrm{sign},D_3} = \emptyset$.
Other off-diagonal blocks corresponding to interactions between node-permutation isotypic components are generally nonzero.
This example shows that the structure of the approximation $K$, with maximal symmetries assumed, provides information about the actual underlying symmetries of the system.
In this specific example, we can see that any off-diagonal block can be used as an indicator of whether a symmetry subgroup $Z_2$ or $Z_3$ is present, as seen in Figs. 3(b) and 3(c).
In summary, the symmetries of the system can be detected from the structure of the Koopman operator approximation matrix. This allows using the same method both to detect the symmetries of dynamical systems from data and to obtain their Koopman operator approximation. However, we also note that there are multiple other methods to detect the symmetries of dynamical systems; for instance, this work can be related to symmetry detectives.^{5} Additionally, in many cases, we do not expect perfect symmetries to be present in data, as discussed in Sec. IV C. Thus, it would be useful to see how these imperfections affect the results in order to be able to apply the symmetry considerations in a more practical setting.
C. Toward realistic systems
In this paper, we provide a general approach for dimensionality reduction in the calculation of Koopman operator approximations by exploiting the underlying symmetries present in both the system’s dynamics and system’s structure. The exact scaling achieved by the reduction depends on the structure of the symmetry group of the dynamical system, specifically, the number of irreducible representations of the symmetry group and the dimensionality of these irreducible representations.
The results outlined in this paper, like most of the literature on dynamical systems with symmetries, are immediately applicable when exact symmetries are present in the nonlinear dynamics. That is the case when the system is completely deterministic and the initial conditions respect the symmetries of the system. If the symmetries of the system are known and the available trajectories are deterministic, it is always possible to reconstruct the trajectories related via the symmetry group of the system. Then, a full set of trajectories respecting the symmetries of the system can be used to approximate the Koopman operator and its eigendecomposition.
However, in many systems, that information is not necessarily available ahead of time, and the symmetries are not present in the data, even if the initial conditions are symmetric, because of the presence of noise in the system. Examples of data that are not fully symmetric include the following cases and their combinations:
Deterministic systems with measurement noise. DMD for systems with measurement noise, and possible ways to correct for it, is presented in Ref. 12. It is shown in Appendix F that in this case the expected values of the off-block-diagonal elements of $K$ computed using EDMD are zero, so the block decomposition may still be applicable.
Stochastic systems with symmetric initial conditions and process noise. DMD applied to systems with process noise is studied, for instance, in Ref. 4.
Systems with imperfect symmetries due to sampling and unknown underlying symmetries.
Systems with imperfect symmetries in dynamics (e.g., slight parameter mismatch).
All these cases require separate treatment, and whether the isotypic component decomposition is still useful in computing the Koopman operator approximation will vary depending on specific characteristics of the data available from the system, such as the strength of the noise or the trajectory sampling characteristics.
V. CONCLUSION
In this paper, we apply tools from group theory and representation theory to study the structure of the Koopman operator for equivariant dynamical systems. This approach can be applied to systems with permutation symmetries (e.g., networks symmetric under node permutations where the information about the symmetries is contained in the adjacency matrix), systems with intrinsic dynamical symmetries, and systems with both types of symmetries present. We find that the operator itself and its approximations can be block diagonalized using a symmetry basis that respects the isotypic component structure related to the underlying symmetry group and the actions of its elements. For the approximation matrix to be exactly block diagonal, the data must respect the symmetries of the system. That can be readily accomplished if the underlying symmetry is known ahead of time (e.g., the topology of the network is known). Symmetry considerations are applicable to both EDMD and kernel DMD, which means that they become useful both in the regime when the number of observables is much larger than that of measurements and vice versa.
Moving forward, it would be possible to extend these results. For instance, a natural next step would be to investigate the effect of noise and imperfect symmetries on the Koopman operator approximations for equivariant or nearly equivariant dynamical systems in more detail. It would also be useful to apply the symmetry considerations beyond the range of applicability of EDMD. In that case, symmetry considerations can be used to study, for instance, systems with continuous Koopman spectra. Other future directions include relating our results to the existing literature on equivariant bifurcation theory, stability analysis, and continuous symmetries.
ACKNOWLEDGMENTS
This work was supported by the U.S. Army Research Laboratory and the U.S. Army Research Office under MURI Award No. W911NF-13-1-0340. The authors thank Jordan Snyder, Mehran Mesbahi, Afshin Mesbahi, and the entire MURI team for useful discussions.
APPENDIX A: BLOCK DIAGONALIZATION OF ISOTYPIC COMPONENTS OBTAINED FROM $d_p$-DIMENSIONAL IRREDUCIBLE REPRESENTATIONS
We show that $d_p$-dimensional irreducible representations of $\Gamma$ yield identical blocks of $K$ in the isotypic component basis obtained using the unitary irreducible representations of $\Gamma$.
Let the function space be decomposed into isotypic components according to the actions $\gamma_\rho \in \Gamma_\rho$ of the symmetry group $\Gamma$ of order $|\Gamma|$: $F = F^1 \oplus \cdots \oplus F^N$, where $N$ is the number of irreducible representations of $\Gamma$. Let $F^p$ be one of these isotypic components with a corresponding unitary irreducible representation with elements $R^p(\gamma)$ for $\gamma \in \Gamma$, and let $d_p$ be the dimensionality of that representation.
The projection operator is defined as
It acts on $f\u2208F$ to produce sets of projected functions according to
We already know that $\mathcal{K}\xi^p_{mn} = h^p$, where $h^p \in F^p$. The subspace $F^p$ can be decomposed into $d_p$ components, $F^p = F^{p,1} \oplus \cdots \oplus F^{p,d_p}$, where $F^{p,m} = \{g \mid g = P^p_{mn} \circ f,\ f \in F,\ n = 1,\dots,d_p\}$. This is a well-defined decomposition since $\langle P^p_{mn} f, P^p_{kl} h \rangle = \langle f, P^p_{nm} P^p_{kl} h \rangle = \langle f, \delta_{mk} P^p_{nl} h \rangle$^{11} can be nonzero only when $m = k$.
We want to show that $\mathcal{K}\xi^p_{mn} \in F^{p,m}$ (this holds for any linear operator that commutes with the action of the symmetry group). Since the operator commutes with the actions of the group,
Here, $\mathcal{K} \circ f \equiv h$.
Let $f_\Gamma = \{\gamma \circ f \mid \gamma \in \Gamma\}$. Any set of linearly independent functions that span $f_\Gamma$ can be transformed into a symmetry-respecting basis obtained by calculating all the projections $P^p_{mn} \circ f_\gamma$, where $f_\gamma \in f_\Gamma$. That corresponds to a block diagonal form of the Koopman operator.
We have already shown that $K$, the approximation of $\mathcal{K}$, also commutes with the actions of the elements of $\Gamma$ for $\Gamma$-equivariant dynamical systems with $\Gamma$-equivariant data. Thus, we can obtain an observable dictionary that block diagonalizes $K$, where each $d_p$-dimensional irreducible representation results in $d_p$ blocks of size $d_p \times d_p$.
Additionally, suppose $\mathcal{K}P^p_{mn} \circ f = h$; then $\mathcal{K}P^p_{kn} \circ f = P^p_{km}\mathcal{K}P^p_{mn} \circ f = P^p_{km} \circ h$. This gives the relation between blocks corresponding to the same irreducible representation $p$. In the context of the approximation $K$, it means that the blocks $K^{p,i}$ (corresponding to $\psi \in F^{p,i}$) are equal for all $i$ (for data respecting the symmetries of the system and a proper ordering of basis functions).
APPENDIX B: COMMUTATIVITY OF $K$ AND $\gamma_\rho$ ACTING IN FUNCTION SPACE
We show that $G$ and $G^+$ (where $+$ denotes the Moore–Penrose pseudoinverse) commute. We note that $G$ is a Hermitian matrix since
Thus, $G$ is also normal, i.e., $GG^* = G^*G$. We show that if $G$ is normal, then $GG^+ = G^+G$.
Two of the criteria that define the Moore–Penrose pseudoinverse^{35} state that $G^+ = G^+ G G^+$ and $(G G^+)^* = G G^+$. It follows that $G^+ = G^+ (G G^+)^* = G^+ (G^+)^* G^*$. Using that relation^{6} and the commutativity of the $+$ and $*$ operations, we obtain
Since the action of $\gamma$ commutes with $A$ and $G$, and since $G$ commutes with $G^+$, the action of $\gamma$ commutes with the Koopman operator approximation $K = G^+ A$.
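The commutation of a normal matrix with its pseudoinverse can be checked numerically (an illustrative sketch with a random rank-deficient Hermitian $G$):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random rank-deficient Hermitian (hence normal) matrix G = B B*.
B = rng.normal(size=(5, 3)) + 1j * rng.normal(size=(5, 3))
G = B @ B.conj().T           # 5x5, rank <= 3, G = G*
assert np.allclose(G, G.conj().T)
assert np.allclose(G @ G.conj().T, G.conj().T @ G)   # normality

G_pinv = np.linalg.pinv(G)   # Moore-Penrose pseudoinverse
# For normal G, G commutes with its pseudoinverse.
assert np.allclose(G @ G_pinv, G_pinv @ G)
```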
APPENDIX C: REQUIREMENTS OF SYMMETRIZING THE DATA SET
We note that the off-block-diagonal elements of the approximation matrix $K$ are only zero if the symmetries are present in the data. If the symmetries are known, these elements can instead be set to zero explicitly, making computations more efficient.
Moreover, if the symmetries are known a priori, a fully symmetrized data set is not necessary to obtain an approximation of the diagonal block elements of $K$. Suppose that we have a dictionary of basis functions belonging to a particular isotypic component with respect to the action of the full symmetry group $\Gamma$. We label that component by $p$ and the corresponding unitary irreducible representation by $R^p(\gamma)$. We then define $R^{p\prime}(\gamma) \equiv R^p(\gamma) \otimes I_{n\times n}$, where $n$ is the number of functions in the $p$th isotypic component. By the definition of isotypic components, even for unsymmetrized data,
If $R(\gamma )$ is a diagonal matrix,
Thus, in the case of one-dimensional irreducible representations, it is not necessary to use reflected data to produce the blocks of $K$. This is not always the case: for instance, in Example IV.2, one of the generators of $D_3$, $\kappa$, corresponds to a non-diagonal matrix $R(\kappa)$. In that case,
This demonstrates that the method is data-efficient and sets up requirements on the symmetry properties of the data.
APPENDIX D: CHANGE OF BASIS AND THE EDMD APPROXIMATION
We show that rotating the observable dictionary preserves the symmetries of the reconstructed trajectories.
Suppose that we have a basis consisting of dictionary functions $D_\psi$ and a dictionary $D_\xi$ obtained via $\Xi = T\Psi$. Let $\Psi(t) \equiv (\psi_1(x(t)), \dots, \psi_N(x(t)))^T$ and $\Xi(t) \equiv (\xi_1(x(t)), \dots, \xi_N(x(t)))^T$. We show that rotating the dictionary function vector by the projection matrix $T$ does not affect the trajectory reconstruction,
Next, we show that the state reconstruction preserves the symmetries of the system. Let $P$ be the action of the symmetry group on the basis functions $\Psi$. We aim to show that if $\Psi_{t+1} = K\Psi_t$, then $P\Psi_{t+1} = KP\Psi_t$. This follows directly from the fact that $K$ and $P$ commute,
Thus, the trajectories of basis functions reconstructed using the EDMD approximation are $\Gamma$-equivariant, just like the original system. In particular, this is true for the evolution of the full state observable.
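The basis-change invariance of the reconstruction can be sketched numerically. This is an illustrative check under the simplifying assumption of an invertible transformation $T$, for which the EDMD matrix transforms by conjugation; the matrices here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
K_psi = rng.normal(size=(N, N))
K_psi /= 2.0 * np.linalg.norm(K_psi)     # keep the trajectory bounded
T = rng.normal(size=(N, N))              # invertible change of basis, Xi = T Psi

# In the rotated dictionary the EDMD matrix transforms by conjugation.
K_xi = T @ K_psi @ np.linalg.inv(T)

psi = rng.normal(size=N)                 # initial dictionary evaluations
xi = T @ psi
for _ in range(10):                      # evolve 10 steps in each basis
    psi = K_psi @ psi
    xi = K_xi @ xi

# Mapping the Xi trajectory back reproduces the Psi trajectory.
assert np.allclose(np.linalg.inv(T) @ xi, psi)
```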
APPENDIX E: APPLICABILITY TO KERNEL METHODS
Kernel DMD, introduced in Ref. 49, is a variant of approximating the Koopman operator matrix that is most efficient when the number of measurement points is much smaller than the number of basis functions. Kernel DMD relies on evaluating $\hat{G}$ and $\hat{A}$ using the kernel method. Their elements can be found by indirectly evaluating the inner products in the basis function space: $k(x_m, y_n) = \Psi(x_m)\Psi(y_n)^*$ [e.g., if $k$ is a polynomial kernel, $k(x, y) = (1 + x y^T)^\alpha$]. We note that $k(\gamma x, \gamma y) = k(x, y)$ due to the properties of inner products.
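The invariance $k(\gamma x, \gamma y) = k(x, y)$ can be verified directly for an inner-product kernel and an orthogonal (here, permutation) action; this is a quick illustrative check, not part of the original derivation.

```python
import numpy as np

def k(x, y, alpha=3):
    # Polynomial kernel: a function of the inner product x . y only.
    return (1.0 + np.dot(x, y)) ** alpha

rng = np.random.default_rng(4)
x, y = rng.normal(size=3), rng.normal(size=3)

# Any orthogonal action (e.g., a permutation gamma) preserves x . y,
# hence k(gamma x, gamma y) = k(x, y).
gamma = np.eye(3)[[2, 0, 1]]             # cyclic permutation matrix
assert np.allclose(gamma @ gamma.T, np.eye(3))
assert np.isclose(k(gamma @ x, gamma @ y), k(x, y))
```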
In kernel DMD, $\hat{G}_{ij} = k(x_i, x_j)$ and $\hat{A}_{ij} = k(x_i, y_j)$. The eigendecomposition $\hat{G} = Q\Sigma^2 Q^T$ is then used to find the matrix $\hat{K}$ and to compute the eigendecomposition of the Koopman operator approximation matrix $K$,
Again, we pick a particular order of group elements similar to Eq. (29),
We also construct a permutation representation of the group with elements denoted by $P\gamma i$ as defined in Eq. (30).
By Cayley’s theorem, such permutations form a group isomorphic to $\Gamma$. Determining the actions $P_{\gamma_i}$ of the group generators is sufficient to find the actions of all the group elements. Let $\mathbf{P}_{\gamma_k} = P_{\gamma_k} \otimes I_{n\times n}$. We note that $(\mathbf{P}_{\gamma_k})^* = (\mathbf{P}_{\gamma_k})^{-1}$. It can be shown that
We show this for $\hat{A}$; the proof for $\hat{G}$ is equivalent. We find that
Finally, $k(x_p, y_l) = k(\gamma_i x_k, \gamma_i y_q) = k(x_k, y_q)$.
Since relation (31) holds, the same reasoning can be applied to block diagonalize the matrix $K^$. It is sufficient to apply the projection operator^{42}
This projection operator is analogous to the one introduced in Eq. (25), except the symmetry group in this case acts by permuting the group elements.
We can apply the singular value decomposition (SVD) of $P$ to obtain the basis for the projection subspaces of the irreducible representations (isotypic components). We form the transformation matrix $\mathbf{T}$ by computing the SVD and stacking its singular vectors as rows of $T$, such that $\mathbf{T} = T \otimes I_{n\times n}$.
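The SVD step can be sketched for the projection onto the trivial isotypic component of $Z_3$ acting on $\mathbb{R}^3$ by cyclic permutation (an illustrative example; the tolerance for selecting nonzero singular values is an assumed numerical choice):

```python
import numpy as np

# Projection onto the trivial isotypic component of Z3 acting on R^3
# by cyclic permutation: P = (1/3) * (I + C + C^2).
C = np.roll(np.eye(3), 1, axis=0)        # cyclic permutation matrix
P = (np.eye(3) + C + C @ C) / 3.0
assert np.allclose(P @ P, P)             # P is a projection

# SVD of P; singular vectors with nonzero singular value span the
# component, and stacking them as rows gives the transformation T.
U, s, Vt = np.linalg.svd(P)
T = Vt[s > 1e-10]                        # here: a single row ~ (1, 1, 1)/sqrt(3)
assert T.shape == (1, 3)
assert np.allclose(np.abs(T), 1.0 / np.sqrt(3))
```

For a larger group, the same procedure is repeated for each isotypic projector, and the resulting rows are stacked to form the full $T$.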
Similar to EDMD, the isotypic component basis simplifies calculating the approximations of $\hat{K}$,
The modification is summarized in Table II.
Table II. Standard kernel DMD vs kernel DMD for $\Gamma$-equivariant systems (the algorithm steps in the table body were not recoverable from the source).
Finally, the approximations of the Koopman eigenvalues, eigenfunctions, and eigenmodes can be calculated using $\hat{K}$, as shown in Ref. 49.
APPENDIX F: DETERMINISTIC SYSTEMS WITH SENSOR NOISE
Transfer operators with process and measurement noise were also studied in Ref. 40. Characterizing and correcting for the effect of sensor noise in DMD is discussed in Ref. 12. Here, we extend those results to EDMD to quantify the effect of sensor noise on the structure of the matrix $K$. The main modification that needs to be made is accounting for the effect of noise in measuring $X$ and $Y$ on $\Psi_x$ and $\Psi_y$.
Let $X$ and $Y$ be matrices analogous to $\Psi_x$ and $\Psi_y$ corresponding to the full-state observable evaluated at discrete time steps. We denote the sensor noise matrices by $N_x$ and $N_y$, so that the measured $X_n$ and $Y_n$ can be found from $X_n = X + N_x$ and $Y_n = Y + N_y$. We assume that the noise distributions respect the symmetries of the system, which might be the case, for instance, for symmetric networks. Moreover, we assume that the noise is state-independent.
We can form vectors $\Psi_x^n$ and $\Psi_y^n$ that can be used to find the approximation $K$ using EDMD,
Here, $(\Psi_x^n)_{ij} = \psi_j((X_n)_i)$, $(\Psi_y^n)_{ij} = \psi_j((Y_n)_i)$, and $N_{\Psi,x}$ and $N_{\Psi,y}$ are the noise matrices obtained as
We aim to show that $E(PK_n) = E(K_n P)$, meaning that the expected value of the Koopman operator approximation $K_n$ commutes with the permutation matrix corresponding to an element of the symmetry group. If that is the case, then the expected values of the off-block-diagonal elements of $K_n$ in a symmetry-adapted basis, as defined in Eq. (27), are zero. To do that, we express $K_n$ as
If the inverse of the first term exists, it can be expanded in a Taylor series with terms of the form below in the weak noise limit. We need to show that
Here, the matrices $M_i$ are selected from $N_{\Psi,x/y}$ and $\Psi_{x/y}$. That follows directly from
Thus, the expected values of the off-block-diagonal elements of $K_n$ are zero in the isotypic component basis.