Nonlinear dynamical systems with symmetries exhibit a rich variety of behaviors, often described by complex attractor-basin portraits and enhanced and suppressed bifurcations. Symmetry arguments provide a way to study these collective behaviors and to simplify their analysis. The Koopman operator is an infinite-dimensional linear operator that fully captures a system's nonlinear dynamics through the linear evolution of functions of the state space. Importantly, in contrast with local linearization, it preserves a system's global nonlinear features. We demonstrate how the presence of symmetries affects the Koopman operator structure and its spectral properties. Moreover, we show that symmetry considerations can also simplify finding Koopman operator approximations using the extended and kernel dynamic mode decomposition methods (EDMD and kernel DMD). Specifically, representation theory allows us to demonstrate that an isotypic component basis induces a block diagonal structure in operator approximations, revealing hidden organization. Practically, if the symmetries are known, the EDMD and kernel DMD methods can be modified to compute the Koopman operator approximation and its eigenvalues, eigenfunctions, and eigenmodes more efficiently. Rounding out the development, we discuss the effect of measurement noise.

Many natural and engineered dynamical systems—power grid networks and biological regulatory networks, to mention two—exhibit symmetries in their connectivity structure and in their internal dynamics. Some have time-reversal symmetry, others rotational and spatial translation invariance, and others still, combinations of these. These symmetries are often key to understanding the behavior of such systems. For instance, the interplay between system behavior and structural symmetries arises in locomotion, where observed symmetries in animal gaits impose certain constraints on the structure of the neural circuits that generate them. For network systems, in particular, symmetries in the connectivity structure are of fundamental importance. For instance, the structural symmetries of a network of identical oscillators can determine their admissible patterns of symmetry-breaking. That said, additional information beyond knowledge of the network structure is often required to address more detailed questions about a system's dynamics, such as whether a particular configuration is stable in a given parameter regime. In these cases, the system's linearization near the steady state can be combined with interconnection symmetry to provide the answer. However, these linearization methods are only valid on a local subset of the state space and, therefore, are not sufficient to capture global characteristics of nonlinear dynamical systems, such as their attractors, basins, and transients. The Koopman operator, in contrast, is a linear infinite-dimensional operator that evolves functions on the state space and is valid on the entire state space. We show how to combine symmetry considerations with Koopman analysis to study nonlinear dynamical systems with symmetries. We use representation theory to determine the effect of symmetries on the Koopman operator and its approximations, drawing out how local dynamical symmetries interact with symmetries arising from the connectivity of system variables. This, in turn, allows us to modify data-driven Koopman approximation algorithms to make them more efficient when applied to dynamical systems with symmetries. We illustrate our findings in a simple network of coupled Duffing oscillators with symmetries in individual oscillator dynamics and in their physical couplings.

Symmetries of dynamical systems manifest themselves in asymptotic dynamics, bifurcations, and attractor basin structures. Symmetries play a crucial role in guiding the emergence of synchronization and pattern formation, which are behaviors broadly observed in natural and engineered systems. Methods from group theory, representation theory, and equivariant bifurcation theory provide useful tools to study the common features of systems with symmetries.15–17 

Dynamical elements organized into a network are an important class of dynamical systems that often exhibit these behaviors, especially when symmetries appear in both network structure and the dynamics of the individual nodes. Studying the effect of symmetries in network topology of synthetic and real-life systems using computational group theory is an active area of research.29,33,34 These symmetries lead to phenomena such as full synchronization, cluster synchronization, and formation of exotic steady states such as chimeras.10,14,30,41 Moreover, topological symmetries underlying cluster synchronization of coupled identical elements assist in analyzing the stability of these fully synchronous cluster states.18,33 For networks of identical coupled oscillators, the form of their limit cycle solutions and the form of their bifurcations can be derived from symmetry considerations.1 Symmetries are also key in determining network controllability and observability. For example, Refs. 13 and 37 explored the effect of explicit network symmetries for linear time-independent and time-dependent networks. Similarly, Refs. 26 and 47 considered nonlinear network motifs with symmetries and studied how the presence of different types of structural symmetries affects the observability and controllability of the system. Reference 32, similar to our approach, uses the Koopman operator formalism (discussed below). They provide analytic results that link the presence of permutational symmetries in dynamical systems to their observability properties.

Many dynamical systems of current interest are high dimensional and nonlinear. For instance, this is the case for many complex networks, such as power grids and biological networks. Complexity there arises from the interaction between the network interconnectivity structure and the nonlinearities in the node and edge dynamics. This often leads to multistability. Linearization methods can provide insight near the system's attractors, but they poorly approximate the dynamics on the rest of the state space. In contrast, operator-based methods give access to the global characteristics of nonlinear systems. They do so in a linear setting and are, therefore, better suited, for instance, to characterizing the attractor basin structure of multistable dynamical systems or to designing control interventions. The Perron-Frobenius and Koopman operators are adjoint linear infinite-dimensional operators whose spectra can provide global information about the system. Their approximations using data-driven approaches make operator methods potentially applicable when there is no prior knowledge of the system.

The Perron-Frobenius operator evolves densities on the state space. It has been extensively used to assess global behavior of nonlinear dynamical systems.25,45 There are several well developed approaches for obtaining its numerical approximations, such as Ulam’s method that relies on the discretization of the state space to obtain an approximation of the Perron-Frobenius operator.46 Since the Koopman operator is adjoint to the Perron-Frobenius operator, numerical approximations of the Koopman operators can be obtained using these methods as well.20 

The Koopman operator is an infinite dimensional linear operator that describes the evolution of “observables” (functions of the state space).20,22,25 Its definition and properties in the context of dynamical systems are provided, for instance, in Ref. 9, which also summarizes its applicability to model reduction, coherency analysis, and ergodic theory. Methods based on the Koopman operator decomposition have proved useful for applications such as model reduction and control of fluid flows,3 power system analysis,44 and extracting spatio-temporal patterns of neural data.7 

Data-driven methods to approximate the Koopman operator rely upon snapshot pairs of measurements of the system state at consecutive time steps. Reconstructing the operator from these snapshot pairs requires that a set of functions (called a dictionary of observables) be chosen. The first data-driven method introduced, dynamic mode decomposition (DMD), implicitly uses linear monomials as a dictionary and thus is most applicable to systems where the Koopman eigenfunctions are well represented by this basis set.38 A more recent method called extended DMD (EDMD) introduced in Ref. 48 can be more powerful than the standard DMD when applied to nonlinear systems as it allows the choice of more complicated sets of dictionary functions. Applying the EDMD is most computationally feasible if the number of dictionary functions does not exceed the total number of snapshot pairs used. That is not necessarily the case if a rich function dictionary (e.g., a dictionary of high order polynomials) is chosen. A modification of EDMD called kernel DMD introduced in Ref. 49 addresses this issue by providing a way to efficiently calculate the Koopman operator approximation in a case when the number of dictionary functions exceeds the number of measurements. Yet, the principled choice of an underlying dictionary that leads to an accurate approximation of the eigenspectrum corresponding to the leading Koopman modes using EDMD or kernel DMD remains an outstanding challenge. That problem is confronted, for instance, in Ref. 27, where an iterative EDMD dictionary learning method is presented. Although the optimal choice of dictionary functions is often unknown, there are some common choices that are known to produce accurate results for certain classes of systems.48 

Here, we study nonlinear dynamical systems with discrete symmetries combining operator-based approaches and linear representation theory. Recently, related methods have been applied to dynamical systems with symmetries. On the one hand, Ref. 31 addresses symmetries of the Perron-Frobenius operator in relation to the admissible symmetry properties of attractors. On the other, Ref. 39 links the spatiotemporal symmetries of the Navier-Stokes equation to the spatial and temporal Koopman operators. Additionally, Ref. 8 noted that symmetry considerations play an important role in discovering governing equations. Reference 19 shows how conservation laws can be detected with Koopman operator approximations and then used to control Hamiltonian systems.

In contrast, our focus is on dynamical systems with symmetries described by a finite group. We show how the properties of the associated Koopman operator spectrum can be linked to the properties of the spectrum of the finite dimensional approximations of the Koopman operator obtained from finite data. We further show how the analytic properties of the Koopman operator decomposition can inform the choice of dictionary functions that can be used in the Koopman operator approximations. This gives a practical way to reduce the dimensionality of the approximation problem.

Our development builds as follows. Section II defines the Koopman operator, introduces approximation methods (EDMD and kernel DMD), and defines equivariant dynamical systems as well as useful concepts from group theory and representation theory. Section III draws out the implications of dynamical system symmetries for the structure of the Koopman operator and its eigendecomposition. Section IV connects the properties of the Koopman operator and the structure of its EDMD approximation for symmetric systems. This then allows modifying the EDMD method to exploit the underlying symmetries, resulting in a block-diagonal Koopman operator approximation matrix. We also provide numerical examples, showing how using particular dictionary structures speeds up the algorithm. Finally, Sec. V summarizes our findings and outlines directions for future work.

In this section, we provide some background on operator theoretic approaches to dynamical systems, in particular, the Koopman operator and its adjoint, the Perron-Frobenius operator. Since in this paper we address approximations of the Koopman operator constructed from discrete time data, we focus on their definition in the discretized setting. The continuous time definitions are similar.9 Our results regarding the degeneracy of Koopman operator eigenvalues and the properties of its corresponding eigenfunctions presented in Sec. III hold in both discrete and continuous time settings.

Suppose that we are given a continuous time autonomous dynamical system defined as

ẋ = g_c(x).
(1)

Here, x ∈ ℝ^n and g_c: ℝ^n → ℝ^n. Let Φ(x(t), Δt) be the flow map taking the initial condition x(t) to the solution at time t + Δt. It is defined in the following way:

Φ(x(t), Δt) = x(t) + ∫_t^{t+Δt} g_c(x(τ)) dτ.
(2)

The system can be discretized with a finite time step Δt_step, so that x_{i+1} = Φ(x_i, Δt_step). We denote the function evolving the dynamics of this discretized system by g,

x_{i+1} = g(x_i).
(3)

The Koopman operator is a "linear infinite dimensional" operator that evolves functions (referred to as observables) of the state space variables, f: ℝ^n → ℂ. The action of the Koopman operator K on an observable function f for discrete time systems is defined as

(Kf)(x) = f(g(x)).
(4)

Since we consider data-driven Koopman operator approximation methods in this paper, the discrete time version of the definition is most applicable.

Pairs of eigenvalues λ and eigenfunctions ϕ of the Koopman operator K are defined as

(Kϕ)(x) = λ ϕ(x).
(5)

Of particular interest are the Koopman modes that can be used in model reduction and coherency estimation.36,43 The Koopman modes v_i^f of the observable f(x) are defined by

f(x) = Σ_i v_i^f ϕ_i(x)
(6)

and are projections of the observable onto the span of the eigenfunctions of the Koopman operator K. A particularly useful set of modes is that of the full state observable f(x)=x, defined as

x = Σ_i v_i ϕ_i(x).
(7)

In general, parts of the Koopman operator spectrum can be continuous.9,24 For instance, this can be the case for chaotic systems. However, we focus on the case of a discrete spectrum since the methods we refer to in Secs. II B, IV, and  Appendix E (EDMD and kernel DMD) are only applicable for that case. Our results regarding the symmetry properties of the discrete parts of the Koopman operator spectrum are analogous to those related to the continuous part of the spectrum. Numerical methods related to continuous Koopman operator spectra are considered, for instance, in Ref. 28.

The other candidate for studying dynamical systems using an operator based approach is the Perron-Frobenius operator P defined as follows for deterministic dynamical systems:

∫_A (Pρ)(x) dx = ∫_{g^{-1}(A)} ρ(x) dx.
(8)

Here, ρ(x) is a density on the state space, A ⊆ ℝ^n is a subset of the state space, and g, defined in Eq. (3), evolves the state of the system. The Perron-Frobenius operator is the adjoint of the Koopman operator,25 so an approximation of one of them provides an approximation of the other.20

Extended dynamic mode decomposition (EDMD) introduced in Ref. 48 is a data-driven method of approximating the Koopman operator for discretized systems that requires an explicit choice of a dictionary of functions referred to as “observables.” How to optimally choose these functions remains an open problem for many systems, especially if the form of differential equations describing the governing dynamical system is not known in advance and only finite data on the behavior of the system are available. The method can be very accurate in capturing the dynamics of the system, but its accuracy depends strongly on the choice of an appropriate dictionary of observables. The method’s convergence properties are studied in Ref. 23, and its relation to the Perron-Frobenius operator approximation methods is discussed in Ref. 20. Here, we summarize the EDMD and its relation to the Koopman operator.

The first requirement for the method is a set of pairs of consecutive snapshots x = [x_1, x_2, …, x_M] and y = [y_1, y_2, …, y_M], where the measurements x_i and y_i are separated by a small constant time interval Δt: y_i = Φ(x_i, Δt), where Φ is the flow map defined in Eq. (2). Typically, the set of snapshots contains measurements from different trajectories in the state space. We define a dictionary of linearly independent observables D = {ψ_1, …, ψ_N} and form matrices of observations Ψ_x and Ψ_y. Here, Ψ_x ∈ ℝ^{M×N}, where N is the number of dictionary functions used in the approximation and M is the number of data snapshots. The elements of Ψ_x are obtained from (Ψ_x)_{ij} = ψ_j(x_i). We also use the notation Ψ(x_m) = (ψ_1(x_m), …, ψ_N(x_m)) for the dictionary functions evaluated at a particular point on the trajectory.

A finite-dimensional approximation of the Koopman operator, which we denote by K, can be obtained using

K = Ψ_x^+ Ψ_y.
(9)

Here, Ψ_x^+ denotes the pseudoinverse of Ψ_x. We focus on the case of the Moore-Penrose pseudoinverse for the rest of the paper.35

If the number of snapshots is much higher than the dimensionality of the function dictionary (M ≫ N), it is more practical instead to define the square matrices G and A as shown below and obtain the approximation in the following way:

K = G^+ A,  where  G = Σ_m Ψ(x_m)^† Ψ(x_m),  A = Σ_m Ψ(x_m)^† Ψ(y_m).
(10)

Here, † represents the complex conjugate transpose. If the only observables are the states of the system x_1, x_2, …, x_n, EDMD reduces to DMD.20,48
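To make the construction concrete, the following minimal sketch assembles Ψ_x, Ψ_y, G, and A for a small monomial dictionary and returns K = G^+A as in Eq. (10). The toy map g, the dictionary choice, and the snapshot counts are illustrative assumptions rather than part of the original examples.

```python
# Minimal EDMD sketch (assumptions: a hypothetical 2D system, a small monomial
# dictionary, and snapshot pairs already collected in arrays X, Y of shape (M, 2)).
import numpy as np

def dictionary(state):
    """Evaluate the monomial dictionary Psi(x) = (1, x1, x2, x1^2, x1*x2, x2^2)."""
    x1, x2 = state
    return np.array([1.0, x1, x2, x1**2, x1*x2, x2**2])

def edmd(X, Y):
    """Return the EDMD approximation K = G^+ A built from snapshot pairs (x_m, y_m)."""
    Psi_x = np.array([dictionary(x) for x in X])   # M x N
    Psi_y = np.array([dictionary(y) for y in Y])   # M x N
    G = Psi_x.conj().T @ Psi_x                     # Eq. (10), up to an overall 1/M factor
    A = Psi_x.conj().T @ Psi_y
    return np.linalg.pinv(G) @ A                   # Moore-Penrose pseudoinverse

# Toy discrete-time map standing in for the flow map Phi:
g = lambda x: np.array([0.9 * x[0], 0.5 * x[1] + x[0]**2])
X = np.random.randn(200, 2)
Y = np.array([g(x) for x in X])
K = edmd(X, Y)
```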

The eigendecomposition of K provides the Koopman eigenvalues, eigenfunctions, and modes that allow an approximate linear representation of the underlying system dynamics. Let λj and uj be the jth eigenvalue and eigenvector of K. Then, the corresponding Koopman eigenfunction can be approximated by

ϕ_j(x) = Ψ(x) u_j.
(11)

Let b_i be the vectors defined by g_i(x) = Ψ(x) b_i, where g_i(x) = e_i^T x denotes the ith element of the full state observable discussed in Ref. 48, and let B = (b_1 ⋯ b_n). The Koopman eigenmodes can then be obtained as

v_i = (w_i^∗ B)^T.
(12)

Here, wi denotes the ith left eigenvector of K.
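Continuing the sketch above (and reusing K and dictionary from it), the eigendecomposition of K yields approximate Koopman eigenvalues, eigenfunctions [Eq. (11)], and modes [Eq. (12)]. The matrix B below simply encodes that x_1 and x_2 appear directly in the assumed monomial dictionary.

```python
# Approximate Koopman eigenvalues, eigenfunctions, and modes from K (sketch).
import numpy as np

def koopman_spectrum(K, dictionary, B):
    """B maps dictionary coefficients to the full state observable, g_i(x) = Psi(x) b_i."""
    lam, U = np.linalg.eig(K)                 # right eigenvectors u_j as columns of U
    W = np.linalg.inv(U).conj().T             # left eigenvectors w_i as columns of W
    eigfun = lambda x: dictionary(x) @ U      # phi_j(x) = Psi(x) u_j, Eq. (11)
    modes = (W.conj().T @ B).T                # column i is v_i = (w_i^* B)^T, Eq. (12)
    return lam, eigfun, modes

# For the monomial dictionary above, x1 and x2 are the 2nd and 3rd dictionary entries.
B = np.zeros((6, 2)); B[1, 0] = 1.0; B[2, 1] = 1.0
lam, eigfun, modes = koopman_spectrum(K, dictionary, B)
```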

A modification of EDMD named kernel DMD49 is better suited for systems with a low number of measurements and a high number of observables (e.g., the full state observable for fluid dynamical systems is very high dimensional, so defining a polynomial dictionary of the full state observable is very computationally expensive), i.e., M ≪ N. The method relies on evaluating the kernel function:

k(x_i, y_i) = Ψ(x_i) Ψ(y_i)^†.
(13)

That allows efficient computation of M×M matrices G^, A^, and K^, where M is the number of trajectory time steps. The eigendecomposition of K^ then can be used to obtain the approximations of the Koopman eigenvalues, eigenfunctions, and modes.
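The following small check illustrates the kernel trick that underlies this approach: a polynomial kernel evaluates the inner product of an implicitly defined, weighted monomial dictionary without ever forming it. The particular feature weighting is a property of this kernel choice and is an assumption introduced here, not something prescribed in the text or in Ref. 49.

```python
# Kernel-trick illustration: the degree-2 polynomial kernel equals the inner product
# of an explicit (weighted) monomial feature map in 2 dimensions.
import numpy as np

def features(z):
    """Explicit feature map matching the kernel (1 + a.b)^2 in 2 dimensions."""
    z1, z2 = z
    return np.array([1.0, np.sqrt(2) * z1, np.sqrt(2) * z2,
                     z1**2, z2**2, np.sqrt(2) * z1 * z2])

kernel = lambda a, b: (1.0 + a @ b) ** 2

a, b = np.random.randn(2), np.random.randn(2)
assert np.isclose(kernel(a, b), features(a) @ features(b))
```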

In this paper, we focus on the case where the number of measurements is relatively high for each degree of freedom (M ≫ N) and obtain a way to reduce the dimensionality of the EDMD approximation of the Koopman operator for systems with symmetries in Sec. IV. A similar modification of the kernel DMD is discussed in  Appendix E.

In this section, we define the concepts useful to study the structure of the Koopman operator and its finite-dimensional approximations K for systems with symmetries. Throughout this section and the rest of the paper, we use an example of a small network of Duffing oscillators to illustrate the definitions and algorithms.

In this paper, we consider dynamical systems [as defined in Eqs. (1) and (3)] that respect discrete symmetries. These systems are called equivariant with respect to the symmetry group Γ. We define groups by their "presentations" in the form ⟨S | R⟩, where S is a set of generators of the group and R is a set of relations among these generators that define that group. Every element of the group can be written as a product of powers of some of these generators.

For instance, the cyclic group Z_n is presented by ⟨r | r^n = 1⟩. An example of a realization of that group is the set of rotational symmetries of a regular n-gon.

To study dynamical systems with symmetries, we need to define the specific actions of the group on a vector space in addition to an abstract presentation of a group Γ. Let X ⊆ ℝ^n be a vector space with elements x ∈ X. We denote the actions γ_ρ on a vector space X by γ_ρ x if the set of these actions Γ_ρ is isomorphic to Γ. A shorthand γ_ρ x = γx is sometimes used in the literature when the action corresponding to the subscript ρ is clear from the context (for instance, it is defined by a permutation matrix of the same degree as the state space of the system); however, we use the γ_ρ notation to avoid ambiguity since the precise definition of group action in particular cases is important in this paper, as shown, for instance, in Examples II.1 and II.2.

Finally, we define what it means for a dynamical system to be symmetric. Let ẋ = g_c(x) be a continuous time system of differential equations. Here, x ∈ ℝ^n and g_c: ℝ^n → ℝ^n. The system is Γ-equivariant with respect to the actions of Γ_ρ if for all x ∈ X and γ_ρ ∈ Γ_ρ,

gc(γρx(t))=γρgc(x(t)).
(14)

As discussed in Sec. II, data come in a discretized form, so a discrete form of that definition is useful. For discrete time systems defined by xi+1=g(xi), equivariance is defined in a similar manner,

g(γρxi)=γρg(xi).
(15)

We note that if a continuous time system is Γ-equivariant, so is its discretization. Moreover, the set of trajectories of a Γ-equivariant system in the state space also respects the symmetries of the system. For discretized systems, this means that if {x_0, x_1, …, x_n} form a trajectory in the state space, then {γ_ρ x_0, γ_ρ x_1, …, γ_ρ x_n} form a trajectory as well.

An important example of equivariant dynamical systems that many of the recent works have focused on (such as Refs. 18,30,33, and 41) is a system of coupled identical oscillators. In that case, the set (or a subset) of actions under which the system is equivariant is defined by the set of permutational matrices P that commute with the adjacency matrix (a matrix that describes connectivity between the nodes of the network) of that oscillator network. In this case, the action of the group is linear, however, that does not always have to be the case.

We also need to define the action of the group in the function space, where f ∈ F are functions f: X → ℂ, as

(γ_ρ ∘ f)(x) ≡ f(γ_ρ^{-1} x).
(16)

Note that the group action is inverted to satisfy the group action axioms (so that actions on functions form the same group structure as the actions on states). This definition will be useful since the Koopman operator acts on functions (i.e., observables).

Another concept useful for our work is a linear group "representation" T, which is a mapping from group elements γ ∈ Γ to the elements of the general linear group [a group of matrices of degree n, with the operation of matrix multiplication, denoted by GL(n, V)] on a vector space V (in this case, we are interested in V = ℂ^n). The characters χ_i(γ) of a group representation T_i(γ) are defined as χ_i(γ) = Tr(T_i(γ)).

A representation is called irreducible if it has no nontrivial invariant subspaces (meaning that the representation matrices corresponding to the group elements cannot be simultaneously nontrivially block diagonalized into the same block form). For each Γ, we can obtain all of its irreducible matrix representations. We denote their elements mapping γ ∈ Γ to p×p-dimensional matrices as R_i(γ), where the index i corresponds to the ith irreducible representation. Irreducible representations are defined up to an isomorphism. For the purposes of this paper, it is useful to make use of either the unitary irreducible representations or their characters.

A vector space, e.g., the space of square integrable functions F, can be uniquely decomposed into components that transform like the ith irreducible representation of Γ under the actions of Γ_ρ. These components are called "isotypic components."15 We denote these components by F_i. An "isotypic decomposition" of the square integrable function space with respect to Γ_ρ is then defined as F = ⊕_i F_i, where the symbol ⊕ denotes the direct sum here and thereafter. We illustrate the construction of an isotypic decomposition using an example of a Z2-equivariant system.

Example II.1

Symmetries of a single Duffing oscillator dynamics and isotypic components in the function space corresponding to the actions of its symmetry group.

The unforced Duffing oscillator equation has the form
ẍ = −σẋ − x(β + α²x²).
(17)
We can rewrite the above equation as a system of differential equations to obtain
ẋ = y,  ẏ = −σy − x(β + α²x²).
(18)
Let x = (x, y)^T, and let the dynamics be denoted by ẋ = g_c(x). Let r_s = [[−1, 0], [0, −1]] = −I_{2×2} be the action on the state space that flips the signs of both variables. The actions r_s and e_s = I_{2×2} form a group Γ_s isomorphic to Γ = Z2 = ⟨r | r² = e⟩. Let γ_s ∈ Γ_s; then
γsgc(x)=gc(γsx).
(19)
Thus, the Duffing oscillator system is Z2-equivariant with respect to the actions γs.

We now illustrate the isotypic component decomposition of Z2 in the function space. Z2 has two one-dimensional irreducible representations: the trivial representation defined by R_tr(r) = 1 and the sign representation defined by R_sign(r) = −1. Then, the space of square integrable functions F can be decomposed into F = F_tr ⊕ F_sign, where F_tr = {f : r_s ∘ f = f(−x, −y) = f(x, y)} and F_sign = {f : r_s ∘ f = f(−x, −y) = −f(x, y)}. In this case, the sets of functions F_tr and F_sign, consisting of even and odd functions, respectively, transform like the trivial and sign irreducible representations with respect to the sign flip as a group generator action.
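A minimal sketch of this Z2 isotypic decomposition, splitting an arbitrary observable into its even (trivial) and odd (sign) parts; the sample observable is an arbitrary choice made for illustration.

```python
# Z2 isotypic decomposition of an observable under the sign-flip action.
def project_trivial(f):
    """Even (trivial-representation) part: (f(x, y) + f(-x, -y)) / 2."""
    return lambda x, y: 0.5 * (f(x, y) + f(-x, -y))

def project_sign(f):
    """Odd (sign-representation) part: (f(x, y) - f(-x, -y)) / 2."""
    return lambda x, y: 0.5 * (f(x, y) - f(-x, -y))

f = lambda x, y: x**3 + x * y + 2.0            # arbitrary observable
f_tr, f_sign = project_trivial(f), project_sign(f)
assert abs(f(0.3, -1.2) - (f_tr(0.3, -1.2) + f_sign(0.3, -1.2))) < 1e-12
```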

We now extend the example to a network of Duffing oscillators and explore additional permutation symmetries.

Example II.2
We now consider the dynamics of a network of Duffing oscillators, as shown in Fig. 1. Suppose that the coupling is linear in x with a coupling coefficient η_ij assigned to every edge. Then, for each node i in the network, we have the following dynamics:
ẋ_i = y_i,  ẏ_i = −σy_i − x_i(β + α²x_i²) + Σ_{j≠i} η_ij(x_i − x_j).
(20)
FIG. 1. Possible symmetries of a network of three identical Duffing oscillators depending on the coupling strength between the oscillators. Different coupling strengths are shown in blue and red. Green arrows correspond to permutational symmetries arising from physical coupling, and black arrows correspond to the symmetries of nodal dynamics. Scenarios (a)–(c) are considered in Examples II.1 and IV.1–IV.3. (a) Duffing oscillator network with Z2×D3 symmetry. (b) Duffing oscillator network with Z2×Z3 symmetry. (c) Duffing oscillator network with Z2×Z2 symmetry.

This general coupling scheme is used to model many systems in the literature.33,41

We now consider the case of a 3-node network. Depending on what the coupling terms are, the system may be Γ-equivariant with respect to different symmetry groups that act by permuting node indices. Some examples are

  • If all coupling strengths η_ij are equal, the network has D3 symmetry. This case is shown in Fig. 1(a). Let the state of the system be defined by x = (x_1, y_1, x_2, y_2, x_3, y_3)^T. Then, the symmetry group is presented by D3 = ⟨r, κ | r³ = κ² = e, κrκ = r^{-1}⟩ and generated by the actions r_p = [[0,1,0],[0,0,1],[1,0,0]] ⊗ I_{2×2} and κ_p = [[1,0,0],[0,0,1],[0,1,0]] ⊗ I_{2×2}.

  • If the coupling strengths obey the conditions η_ij ≠ η_ji and η_ij = η_jk for i ≠ k, the network has Z3 symmetry. This case is shown in Fig. 1(b). The symmetry group is presented by Z3 = ⟨r | r³ = e⟩ and generated by the action r_p defined above.

  • If the coupling strengths obey the conditions η_12 = η_21 = η_13 = η_31, as well as η_23 = η_32, and no other equalities hold, the network has Z2 symmetry. This case is shown in Fig. 1(c). The symmetry group is presented by Z2 = ⟨κ | κ² = e⟩ and generated by the action κ_p defined above.

Even though in case (c) the permutation symmetry is isomorphic to the same group as the sign flip symmetry in Example II.1, the isotypic components in function space F induced by the group actions are different. Z2 has two one-dimensional irreducible representations: the trivial representation R_1(κ) = R_tr(κ) = 1 and the sign representation R_2(κ) = R_sign(κ) = −1. Let x_i = (x_i, y_i)^T. The isotypic components are defined by the permutation relations F_tr = {f : κ_p ∘ f = f(x_1, x_3, x_2) = f(x_1, x_2, x_3)} and F_sign = {f : κ_p ∘ f = f(x_1, x_3, x_2) = −f(x_1, x_2, x_3)}.

Additionally, each node still has Z2 symmetry with respect to the action r_s, which is not broken since the coupling function is odd. That symmetry is also depicted in Fig. 1. The isotypic components of the entire symmetry group are then intersections of the isotypic components of Z2 (acting by a sign flip) and the symmetry group of the network geometry [acting by a permutation, e.g., D3 for case (a) of this example, also illustrated in Fig. 3(a)].

Any function can be rewritten as a sum of projections into different isotypic components. The procedure is outlined in Sec. III.
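As an aside, here is a rough sketch of how trajectory snapshots for the coupled network of Eq. (20) might be generated in practice; the integrator settings, the parameter values, and the equal all-to-all coupling [the D3-symmetric case of Fig. 1(a)] are illustrative assumptions.

```python
# Generating snapshot pairs (x_i, y_i = Phi(x_i, dt)) for the coupled Duffing network.
import numpy as np
from scipy.integrate import solve_ivp

sigma, beta, alpha, eta = 0.5, 1.0, 1.0, 1.0        # illustrative parameter values
A_coupling = eta * (np.ones((3, 3)) - np.eye(3))    # equal all-to-all coupling, Fig. 1(a)

def duffing_network(t, state):
    x, y = state[0::2], state[1::2]
    coupling = (A_coupling * (x[:, None] - x[None, :])).sum(axis=1)
    dx = y
    dy = -sigma * y - x * (beta + alpha**2 * x**2) + coupling
    out = np.empty(6)
    out[0::2], out[1::2] = dx, dy
    return out

def snapshot_pairs(x0, dt=0.05, steps=200):
    t_eval = np.arange(steps + 1) * dt
    sol = solve_ivp(duffing_network, (0.0, t_eval[-1]), x0, t_eval=t_eval, rtol=1e-8)
    traj = sol.y.T
    return traj[:-1], traj[1:]                      # consecutive snapshots separated by dt

X, Y = snapshot_pairs(np.random.randn(6))
```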

In this section, we consider the structure of the eigenspace of the Koopman operator of Γ-equivariant systems. We show how to obtain a particular eigenbasis of the system corresponding to the isotypic decomposition in the function space and demonstrate that the isotypic decomposition induces a block diagonal structure on the matrix representation of K.

Theorem III.1
For a Γ-equivariant dynamical system xi+1=g(xi) and an arbitrary function f, the Koopman operator commutes with the actions of the elements of Γ,
γρ°(Kf)(x)=K(γρ°f)(x).
(21)
Proof.
The commutativity follows from the definitions of the Koopman operator and the definition of the action of the group in the state space and function space
γ_ρ∘(Kf)(x) = (Kf)(γ_ρ^{-1}x) = f(g(γ_ρ^{-1}x)) = f(γ_ρ^{-1}g(x)) = (γ_ρ∘f)(g(x)) = K(γ_ρ∘f)(x).
(22)
This result is similar to Theorem 3.1 in Ref. 31, where it is shown that the action of the Perron-Frobenius operator commutes with the action of the symmetry group Γ for Γ-equivariant systems.
Corollary III.1.1

The space of eigenfunctions of the Koopman operator K with eigenvalue λ for a Γ-equivariant system is Γ-invariant.

Proof.
Let S_λ be the set of eigenfunctions of K with eigenvalue λ. Let ϕ ∈ S_λ. Then, using the commutativity of K and Γ_ρ, we can show that for all γ_ρ ∈ Γ_ρ,
K(γρ°ϕ(x))=γρ°(Kϕ)(x)=λγρ°ϕ(x).
(23)
Thus, ϕ_{γ,ρ} ∈ S_λ, where ϕ_{γ,ρ} is also an eigenfunction with eigenvalue λ, defined as ϕ_{γ,ρ}(x) = γ_ρ∘ϕ(x).

We now consider a particular form of the eigenbasis of the Koopman operator that induces a block diagonal structure of the matrix representation of the action of the Koopman operator K. The result quoted below is useful for that purpose.

Theorem III.2
(Theorem 3.5 in Chapter XII of Ref.17)

Let Γ be a compact Lie group acting on the vector space V decomposed into isotypic components V = W_1 ⊕ ⋯ ⊕ W_t. Let A: V → V be a linear mapping commuting with Γ. Then, A(W_k) ⊆ W_k.

This result is applicable to finite symmetry groups. Isotypic components of F with respect to Γ induce a block diagonal structure of the matrix representation of the Koopman operator. Since K and Γ commute, K(F_k) ⊆ F_k. This block structure can be exploited in finding the Koopman operator approximations, as we show in Sec. IV. Thus, we need to be able to obtain an isotypic component basis from an arbitrary function dictionary. This is a well defined procedure,11 outlined below. Functions obtained via isotypic decomposition are useful to perform calculations in many areas of physics; for instance, they can simplify finding approximate solutions to the Schrödinger equation, or appear in studying crystallographic point groups.11,42 The construction is also widely applied to dynamical systems, for instance, to study states and their stability using equivariant bifurcation theory.

Suppose we start from an arbitrary basis function dictionary D_ψ = {ψ_i}. Each of these functions can be expanded in the isotypic component basis with at least one nonzero coefficient α_{mn}^p,

ψ = Σ_p Σ_{m,n=1}^{d_p} α_{mn}^p ξ_{mn}^p.
(24)

Here, ξ_{mn}^p is a basis function in the pth isotypic component of F with respect to the actions of the symmetry group Γ, and d_p is the dimension of the corresponding irreducible representation. Alternatively, it can be thought of as a sum over all inequivalent (nonisomorphic) irreducible representations of Γ, where ξ_{mn}^p transforms as the (m, n)th element of the pth irreducible representation of Γ.11 We define a projection operator and form a new function basis consisting of functions {ξ_{mn}^p} as outlined below.

The projection operator is defined as

P_{mn}^p = (d_p/|Γ|) Σ_{γ∈Γ} [R^p(γ)]_{mn} γ_ρ.
(25)

Here, [R^p(γ)]_{mn} denotes the element in the nth row and the mth column of the pth unitary irreducible representation evaluated at γ ∈ Γ, and γ_ρ is the group action. We can form an orthonormal basis D_ξ = {ξ_i} using the projection operator as follows:

ξ_{mn}^p(x) = (1/c_n^p) P_{mn}^p ∘ ψ(x).
(26)

Here, c_n^p = ⟨P_{nn}^p ψ, P_{nn}^p ψ⟩^{1/2}, where ⟨·,·⟩ denotes the inner product; this normalization can be omitted for our purposes since the scaling of basis functions does not affect the EDMD results (namely, the approximation matrix K, along with its eigenvalues and eigenvectors). Similarly, the overall factor d_p/|Γ| of the projection operator in Eq. (27) only affects the scaling of the basis functions and therefore can be eliminated.

Equivalently, due to orthogonality relations of characters of irreducible representations, the projection operator can be obtained using the following expression:

P^p = (d_p/|Γ|) Σ_{γ∈Γ} χ_p(γ) γ_ρ.
(27)

Here, χ_p(γ) is the character of the pth irreducible representation of Γ. If this formula is used, each irreducible representation of degree d_p provides a basis function, and d_p² − 1 other basis functions can be formed using the Gram-Schmidt orthogonalization process.11,33,42
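A small sketch of the character-based projection of Eq. (27), here applied to a cyclic Z3 node permutation acting on observables of three variables. The helper apply_group_element, the element ordering, and the test observable are assumptions introduced for illustration.

```python
# Character projection P^p f = (d_p / |G|) * sum_g chi_p(g) (g o f), Eq. (27).
import numpy as np

def project(f, group, characters, apply_group_element, dim):
    """Return the projected observable as a new callable."""
    def projected(x):
        total = 0.0 + 0.0j
        for g, chi in zip(group, characters):
            total += chi * apply_group_element(g, f)(x)
        return dim / len(group) * total
    return projected

# Z3 acting by cyclic node permutation; chi(r^k) = omega^k for one nontrivial irrep.
omega = np.exp(2j * np.pi / 3)
group = [0, 1, 2]                                                   # powers of the generator r
chars_omega = [1, omega, omega**2]
apply_group_element = lambda k, f: (lambda x: f(np.roll(x, -k)))    # (r^k o f)(x) = f(r^{-k} x)
f = lambda x: x[0] ** 2
f_omega = project(f, group, chars_omega, apply_group_element, dim=1)

# The projected function satisfies f(x3, x1, x2) = omega * f(x1, x2, x3).
x = np.array([0.2, -1.0, 0.7])
assert abs(f_omega(np.roll(x, 1)) - omega * f_omega(x)) < 1e-12
```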

Once an isotypic component basis is obtained, the action of the Koopman operator on the function space can be presented in the form of a block diagonal matrix. Each irreducible unitary representation of dimension dp in this case corresponds to a number dp of dp×dp sized blocks in that matrix K. Similar analysis applies to the Koopman operator approximation K. The reason why this additional decomposition works can be found in  Appendix A.

In this section, we show that the approximation K obtained using EDMD can be reduced to a block diagonal structure similar to that of the Koopman operator under certain assumptions on the data. We provide some examples of constructing an isotypic component basis from a given function dictionary. We highlight that the basis depends on both the structure of Γ and the definition of its actions Γ_ρ.

First, we establish that the Koopman operator approximation K commutes with the actions γρ of Γ if the data used in the calculation respect the symmetry, meaning that the set of pairs of data points satisfies the condition,

{(γρxi,γρyi)}={(xi,yi)}.
(28)

In other words, the set of trajectories is closed under the action of the symmetry group of the underlying dynamical system. This condition on trajectories can be achieved by averaging over a symmetry group, which has been used in the literature related to other data-driven methods, for instance, the proper orthogonal decomposition.2 We note that this requirement can sometimes be relaxed, as discussed in  Appendix C.
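One simple way to enforce the condition of Eq. (28) in practice is to augment the snapshot set with its images under every group action, as sketched below; the matrix form of the group actions and the placeholder snapshots are assumptions.

```python
# Closing a data set under a finite symmetry group acting linearly on the state.
import numpy as np

def symmetrize_snapshots(X, Y, group_actions):
    """Return snapshot arrays closed under the group: {(g x_i, g y_i)} for all g."""
    X_sym = np.vstack([X @ g.T for g in group_actions])
    Y_sym = np.vstack([Y @ g.T for g in group_actions])
    return X_sym, Y_sym

# Z2 sign-flip symmetry of a single Duffing oscillator: gamma in {I, -I}.
group_actions = [np.eye(2), -np.eye(2)]
X = np.random.randn(100, 2)
Y = 0.9 * X                      # placeholder snapshots; any equivariant map would do
X_sym, Y_sym = symmetrize_snapshots(X, Y, group_actions)
```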

In order to perform further simplifications, we pick a particular order of group elements {γ1,,γ|Γ|} and create the vectors Ψx (and analogously Ψy) according to that ordering,

Ψ(x) = (γ_1 ∘ Ψ(x)  ⋯  γ_{|Γ|} ∘ Ψ(x)),   Ψ_x = [Ψ_1(x_1) ⋯ Ψ_{N/|Γ|}(x_1); ⋯ ; Ψ_1(x_M) ⋯ Ψ_{N/|Γ|}(x_M)].
(29)

Given the ordering of the group elements, we can also construct the permutation representation of the group such that

P_{γ_k}(γ_1, …, γ_{|Γ|})^T = (γ_kγ_1, …, γ_kγ_{|Γ|})^T.
(30)

By Cayley's theorem, such permutations form a group isomorphic to Γ. Determining the actions P_{γ_k} of the group generators is sufficient to find the actions of all group elements. Let 𝒫_{γ_k} = P_{γ_k} ⊗ I_{n×n}. We note that (𝒫_{γ_k})^† = (𝒫_{γ_k})^{-1}. It can be shown that

𝒫_{γ_k} G = G 𝒫_{γ_k},  𝒫_{γ_k} A = A 𝒫_{γ_k}.
(31)

By definition, A = Ψ_x^† Ψ_y. We note that for symmetric trajectories satisfying Eq. (28),

[(Ψ_x 𝒫_{γ_k})^† Ψ_y 𝒫_{γ_k}]_{ij} = Σ_m ψ_i^∗(γ_k^{-1} x_m) ψ_j(γ_k^{-1} y_m) = Σ_m ψ_i^∗(x_m) ψ_j(y_m) = [Ψ_x^† Ψ_y]_{ij}.
(32)

Therefore,

(Ψ_x 𝒫_{γ_k})^† Ψ_y 𝒫_{γ_k} = Ψ_x^† Ψ_y.
(33)

Thus, A and G (for analogous reasons) commute with the action of the symmetry group.

If G is invertible and G commutes with γ_ρ, G^{-1} commutes with γ_ρ as well. Then,

𝒫_{γ_i} K = 𝒫_{γ_i} G^{-1} A = G^{-1} A 𝒫_{γ_i} = K 𝒫_{γ_i}.
(34)

If G is not invertible, the commutativity result still holds for G^+. G is a normal matrix since it satisfies GG^† = G^†G. In  Appendix B, we show that if G is normal, GG^+ = G^+G, so G commutes with its Moore-Penrose pseudoinverse, and, therefore, the actions of K and Γ commute.

Since K commutes with the actions of Γ, K F_i ⊆ F_i. This shows that the approximation K can be block-diagonalized in the same way as the Koopman operator itself.

Suppose we start from a dictionary of observables. Since that dictionary is not necessarily an isotypic component dictionary corresponding to Γ and its action, in order to obtain a block diagonal matrix K, the dictionary needs to be modified using the procedure outlined in Sec. III. In the example below, we show explicitly how to perform this transformation into the isotypic component basis.

In order for the basis to faithfully represent the symmetries of the system, we require that

  • The dictionary is closed under the action of the symmetries of the system,
    if ψ ∈ D, then γ_ρ ∘ ψ ∈ span(D).
    (35)
  • Each isotypic component is present after the isotypic component decomposition of the original function basis,
    ∀m, p ∃ψ ∈ D s.t. P_{mn}^p ψ ≠ 0.
    (36)

For instance, using a monomial basis for a D3 equivariant system does not satisfy the second requirement.

If these requirements are satisfied, the change of basis does not affect the result obtained by applying the EDMD algorithm, as shown in  Appendix D. Additionally, we note that the eigenvalues of the approximation K do not typically have the same degeneracy properties as the eigenvalues of the Koopman operator, but the symmetries of the underlying dynamical system are preserved in trajectory reconstructions.

Example IV.1

Constructing an isotypic component basis for a single Duffing oscillator.

We start from a system with Z2 symmetry described in Example II.1. Suppose a polynomial basis is chosen to form basis functions, for instance, D_poly = {1, x_1, x_2, x_1², x_2², x_1x_2, …}. Each of the dictionary items can be written as p_{mn}(x_1, x_2) ≡ x_1^m x_2^n. For even m + n, p_{mn} ∈ F_tr, and for odd m + n, p_{mn} ∈ F_sign, where F_tr and F_sign are the isotypic components corresponding to the trivial and sign irreducible representations of Z2, as discussed in Example II.1. Thus, using D_poly results in a sparse matrix K, and K is block diagonal after reordering the basis functions.

Another possible choice for a set of dictionary functions is a radial basis function set. This type of function was used to find the EDMD approximation of the Koopman operator in Ref. 48. We use an initial dictionary D_ψ of n mesh-free radial basis functions. The radial basis function centers can be obtained by either k-means clustering of the data or sampling from a predetermined distribution. As an example, we chose the specific form ψ(c, x) = r_c log(r_c), where c is a 2-dimensional radial basis function center and r_c = r_{c,x} = ‖x − c‖^{1/2}.

In this case, the individual basis functions do not generally belong to a single isotypic component. We use Eq. (25) to construct an isotypic component basis by obtaining the projections of the dictionary onto the isotypic components corresponding to the irreducible representations of Z2, which are the trivial and sign representations defined in Example II.1. Projecting onto the trivial isotypic component leads to
P^tr ∘ ψ(c, x) = (1/2)([R_tr(e)]ψ(c, x) + [R_tr(r)]ψ(c, −x)) = (1/2)(ψ(c, x) + ψ(c, −x)).
(37)
Analogously, projecting onto the sign isotypic component results in
P^sign ∘ ψ(c, x) = (1/2)(ψ(c, x) − ψ(c, −x)).
(38)
We simplify the notation by denoting ψ_i^+ ≡ ψ(c_i, x). In order to satisfy Eq. (35), for each ψ_i^+, the basis should also contain ψ_i^− ≡ ψ(−c_i, x). Ignoring the irrelevant multiplicative factor,
(ξ_1^tr, …, ξ_n^tr, ξ_1^sign, …, ξ_n^sign)^T = (T_{Z2} ⊗ I_{n×n})(ψ_1^+, …, ψ_n^+, ψ_1^−, …, ψ_n^−)^T = (ψ_1^+ + ψ_1^−, …, ψ_n^+ + ψ_n^−, ψ_1^+ − ψ_1^−, …, ψ_n^+ − ψ_n^−)^T.
(39)
For functions in Dξ ordered like in Eq. (39), the approximation matrix K is block diagonal.
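A sketch of this symmetry-adapted radial basis dictionary, following Eqs. (37)–(39); the random centers and the handling of the r = 0 case are assumptions.

```python
# Z2-adapted radial basis dictionary: even and odd combinations of psi(c, x).
import numpy as np

def rbf(c, x):
    r = np.sqrt(np.linalg.norm(x - c))          # r_{c,x} = ||x - c||^(1/2)
    return 0.0 if r == 0.0 else r * np.log(r)

def z2_adapted_dictionary(centers):
    """For each center c, return psi(c, x) + psi(c, -x) and psi(c, x) - psi(c, -x)."""
    xi_tr = [lambda x, c=c: rbf(c, x) + rbf(c, -x) for c in centers]
    xi_sign = [lambda x, c=c: rbf(c, x) - rbf(c, -x) for c in centers]
    return xi_tr + xi_sign                      # ordering as in Eq. (39): all even, then all odd

centers = np.random.randn(10, 2)
dictionary_sym = z2_adapted_dictionary(centers)
```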
Example IV.2

Constructing an isotypic component basis for a network of Duffing oscillators from a given basis.

We also consider a more complicated case of a system of Duffing oscillators with identical coupling as depicted in Fig. 1(a). In that case, the system has Z2×D3 symmetry. Suppose that we want to construct an isotypic component basis from a given function dictionary D. As an example, we use an initial dictionary D_ψ of n mesh-free radial basis functions. Analogously to Example IV.1, each function can be presented in the form ψ(c, x) = r_c log(r_c), where c is a 6-dimensional radial basis function center and r_c = r_{c,x} = ‖x − c‖^{1/2}. In order to preserve the symmetries of the system, we need to have dictionary elements corresponding to acting on the basis functions by each γ_ρ ∈ Γ_ρ. Due to the form of these functions, γ_ρ ∘ ψ(c, x) = ψ(γ_ρ^{-1}c, x).

Since Γ=Z2×D3 is a direct product of two groups, we can write the projection operator in the following form:
P_{mn}^{pq} = (d_p/|Z2|)(d_q/|D3|) Σ_{γ_i∈Z2} Σ_{γ_j∈D3} [R^{pq}(γ_i, γ_j)]_{mn} γ_i^{flip} γ_j^{perm},  where  R^{pq}(γ_i, γ_j) = R^p(γ_i) ⊗ R^q(γ_j).
(40)
Here, R^p(γ_i) denotes the pth irreducible representation of γ_i ∈ Z2. Analogously, R^q(γ_j) denotes the qth irreducible representation of γ_j ∈ D3. Specific actions of group elements γ_i^{flip} and γ_j^{perm} are labeled by the superscripts.

The symmetry group Z2 has 2 degree 1 irreducible representations, discussed in Example II.1:

  • Trivial representation

  • Sign representation

The symmetry group D3 has 2 degree 1 and 1 degree 2 irreducible representations, defined by
  • Trivial representation Rtr: Rtr(r)=1,Rtr(κ)=1;

  • Sign representation R_sign: R_sign(r) = 1, R_sign(κ) = −1;

  • Standard representation R_st: R_st(r) = [[ω, 0], [0, ω²]], R_st(κ) = [[0, 1], [1, 0]]. Here, ω = e^{2πi/3}.

Note that the dimensions d_i of the irreducible representations of Γ satisfy Σ_i d_i² = |Γ| (if Γ = D3, |Γ| = 6). Therefore, the number of isotypic component basis functions obtained from any set {γ ∘ ψ_i}_{γ∈D3} is equal to the number of group elements, so the sizes are consistent.

Suppose that we form a vector of basis functions in Dψ,
Ψ = (ψ_{1,1}, ψ_{2,1}, …, ψ_{n,1}, …, ψ_{1,|Γ|}, ψ_{2,|Γ|}, …, ψ_{n,|Γ|})^T,
(41)
where ψ_{i,j} denotes the result of acting on ψ_{i,1} with the jth element of Γ_ρ. Using Eq. (40), we obtain transformation matrices that we can use to get the isotypic component basis,
T_{D3} = [[1, 1, 1, 1, 1, 1], [1, 1, 1, −1, −1, −1], [1, ω, ω², 0, 0, 0], [0, 0, 0, 1, ω², ω], [1, ω², ω, 0, 0, 0], [0, 0, 0, 1, ω, ω²]],  T_{Z2} = [[1, 1], [1, −1]].
(42)
The isotypic component basis then can be obtained by modifying a set of functions in Dψ,
Ξ = TΨ,  T ≡ T_{Z2} ⊗ T_{D3} ⊗ I_{n×n}.
(43)
The matrix T_{Z2} ⊗ T_{D3} is a 12×12 matrix, and its dimensions are equal to the size of the underlying symmetry group Z2×D3. The matrix I_{n×n} ensures that every element of the original dictionary gets mapped to an element of the new isotypic component dictionary.

If we use Ξ as a basis, we obtain K decomposed into 8 blocks, each corresponding to an irreducible representation of Z2×D3.
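A short sketch assembling the basis-change matrix of Eqs. (42) and (43); the seed-function count n and the dictionary ordering of Eq. (41) are assumed.

```python
# Build T = T_Z2 (x) T_D3 (x) I_n for a dictionary of n seed functions repeated
# over the 12 elements of Z2 x D3.
import numpy as np

w = np.exp(2j * np.pi / 3)
T_D3 = np.array([[1, 1, 1, 1, 1, 1],
                 [1, 1, 1, -1, -1, -1],
                 [1, w, w**2, 0, 0, 0],
                 [0, 0, 0, 1, w**2, w],
                 [1, w**2, w, 0, 0, 0],
                 [0, 0, 0, 1, w, w**2]], dtype=complex)
T_Z2 = np.array([[1, 1], [1, -1]], dtype=complex)

n = 10                                           # number of seed radial basis functions
T = np.kron(np.kron(T_Z2, T_D3), np.eye(n))      # Eq. (43): Xi = T Psi
print(T.shape)                                   # (120, 120), matching the 120-function dictionary
```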

We implement the EDMD algorithm to obtain the approximation of K. Here, the data come from 500 initial trajectories of length 10 that were then reflected and rotated so that the data respect the symmetries. The parameter values of α=1,β=1,δ=0.5, and η=1 were used. We plot the approximation matrix K in Fig. 2. In this case, a dictionary of 120 radial basis functions was used. Figure 2(a) illustrates the Koopman operator approximation matrix K calculated using an initial dictionary Dψ and requires performing matrix operations on the full 120×120 matrix. Figure 2(b) shows K obtained from the symmetry adapted basis functions. The order of calculations can be reduced significantly since it is only necessary to perform matrix operations on blocks. K calculated in the symmetry adapted basis has four 10×10 and two 20×20 unique blocks.
FIG. 2. Structure of K for different choices of dictionary functions: (a) K for a standard dictionary of observables. (b) K for a symmetry adapted dictionary of observables.

As shown in the examples above, we can construct a basis that block diagonalizes the Koopman operator matrix approximation K from the elements of any arbitrary basis. Since the off block-diagonal elements of the matrix are a priori known to be zero, we do not need to compute these elements explicitly. This suggests that for systems with symmetries, it is more efficient to perform the EDMD algorithm for isotypic decomposition blocks. We denote the number of conjugacy classes or irreducible representations of Γ by r_Γ. In that case, instead of performing O((m r_Γ)^α) operations of matrix inversion, multiplication, and eigendecomposition, it is sufficient to perform these operations for each of the r_Γ blocks, with operations being O(m^α). Here, 2 < α < 3, e.g., as seen in Ref. 21. Even though the algorithmic complexity only differs by a factor that scales with the size of the group and is fixed for any given system, in practice, the computation is more efficient when EDMD specific to Γ-equivariant systems is used. We also note that each d_p-dimensional irreducible representation contributes d_p "equal" blocks, each of dimensions (N_p/d_p) × (N_p/d_p), to K_symm, which further simplifies the calculation. Moreover, in the case of networks of high dimensionality, it allows parallel eigendecomposition computation for isotypic component blocks. Table I summarizes the modified EDMD algorithm for Γ-equivariant systems and highlights that the order of computations can be lowered.

TABLE I.

EDMD vs modified EDMD for Γ-equivariant systems. |Γ| is the order of Γ. The irreducible representations of Γ are indexed by p and are dp-dimensional.

Standard EDMD:

  • Pick a dictionary of N observables

  • Evaluate the observables at data points x_i and y_i

  • Evaluate the entries of G, A: N² elements

  • Obtain G^+: N × N matrix

  • Find K = G^+A: N × N matrices

  • Find the eigendecomposition of K: N × N matrix

EDMD for Γ-equivariant systems:

  • Pick a dictionary of N observables

  • Identify the symmetry Γ of the system; find the irreducible representations of Γ

  • Change the basis to a Γ-symmetric basis using Eqs. (27) and (26): multiplying at most N/|Γ| matrices of size |Γ| × |Γ| by vectors of size |Γ| × 1; let N_p be the number of functions obtained from applying the projection operator P^p corresponding to the pth irreducible representation of Γ (e.g., N_p = N/|Γ| for cyclic groups)

  • Evaluate the observables at data points x_i and y_i

  • To obtain the blocks K_{pq} of K (each isotypic component corresponds to d_p blocks), for each p:

    • Evaluate the entries of G_{p1}, A_{p1}: (N_p/d_p)² elements

    • Obtain G_{p1}^+: (N_p/d_p) × (N_p/d_p) matrix

    • Find K_{p1} = G_{p1}^+ A_{p1}: (N_p/d_p) × (N_p/d_p) matrices

    • Find the eigendecomposition of K_{p1}: (N_p/d_p) × (N_p/d_p) matrix

    • The other K_{pq} blocks are equal to K_{p1}

  • K = ⊕_p ⊕_{q=1}^{d_p} K_{pq}; its eigenvalues are the eigenvalues of the K_p, and its eigenvectors have only N_p nonzero elements; mathematically, the eigenvectors v_{kl} of K are of the form (v_{kl})_i = ⊕_p δ_{pk} v_{pl}

Koopman eigenfunctions and eigenmodes have many applications in dimensionality reduction, finding the basins of attraction, characterizing coherency between oscillatory systems, etc. Block diagonalizing K allows the efficient computation of the Koopman eigenvalues, eigenfunctions, and modes.
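A block-wise EDMD sketch following Table I. It assumes the dictionary has already been transformed into a symmetry-adapted basis and that the index ranges of the blocks are known from the group structure; both are user-supplied assumptions.

```python
# Block-wise EDMD: compute K block by block; off-block entries vanish by symmetry.
import numpy as np

def edmd_blockwise(Psi_x, Psi_y, blocks):
    """Psi_x, Psi_y: M x N arrays of evaluated symmetry-adapted observables."""
    N = Psi_x.shape[1]
    K = np.zeros((N, N), dtype=complex)
    eigenvalues = []
    for idx in blocks:                               # idx: column indices of one block
        G_p = Psi_x[:, idx].conj().T @ Psi_x[:, idx]
        A_p = Psi_x[:, idx].conj().T @ Psi_y[:, idx]
        K_p = np.linalg.pinv(G_p) @ A_p
        K[np.ix_(idx, idx)] = K_p
        eigenvalues.extend(np.linalg.eigvals(K_p))   # eigenvalues of K are those of the blocks
    return K, np.array(eigenvalues)

# For the single-oscillator dictionary of Eq. (39) with n centers: the first n
# functions are even, the last n are odd.
n = 10
blocks = [np.arange(0, n), np.arange(n, 2 * n)]
```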

The kernel DMD is closely related to the EDMD algorithm. It relies on calculating the eigentriples associated with K from a dual matrix K^ evaluated using a kernel trick commonly applied in machine learning.49 This method can be computationally advantageous for cases when the number of basis functions exceeds the number of available measurements of the state of the system. We find that the kernel DMD can also be modified to include symmetry considerations in order to optimize the calculations. The method is provided in  Appendix E.

Assume that the data are symmetric, as defined by Eq. (28), with respect to the symmetry group Γ. A "perfect" basis is one respecting the isotypic decomposition of Γ. Suppose instead that the basis functions belong to isotypic components of a group Σ ≠ Γ. That choice will affect the structure of K. We study that structure by evaluating the elements of A since K and G^+ have the same structure as A.

If the system is Γ-equivariant, Σ ⊂ Γ, and the set of actions of Σ is a subset of the actions of Γ, the system is also Σ-equivariant. Thus, a basis respecting the isotypic decomposition of Σ yields the block diagonal structure corresponding to Σ. This means that the choice of basis results in a block diagonal K, but its structure does not provide any additional information about the symmetries of the system.

If the system is Γ-equivariant and Γ ⊂ Σ, functions belonging to particular isotypic components of Σ are not preserved by the action of K. In the case of symmetric trajectories, that can provide information on what the true symmetries of the system are.

A simple case corresponds to Σ = Σ_0 × Γ. In this case, every action of Σ_0 commutes with every action of Γ. Each isotypic component of F with respect to Σ can be expressed as F^{pq} = F_{Σ_0}^p ∩ (F_Γ)^q, where F_{Σ_0}^p denotes the pth isotypic component of Σ_0. In this case, the off-diagonal blocks corresponding to interactions between isotypic components F^{p_1q_1} and F^{p_2q_2} are zero if q_1 ≠ q_2 and generally nonzero otherwise. For instance, if a network of three Duffing oscillators similar to that discussed in Example IV.2 has no permutation symmetry but Σ = Z2×D3 is assumed, with the action of Z2 being a sign flip in the nodal dynamics, the isotypic components corresponding to the Z2 symmetry will not interact with each other, resulting in two noninteracting blocks in K.

Next, we consider a more general case. We denote the pth isotypic component of F with respect to the symmetry group Σ by F_Σ^p. We note that if the following conditions hold:

F_Σ^p ∩ F_Γ^{q_1} ≠ ∅,
(44)
F_Σ^p ∩ F_Γ^{q_2} ≠ ∅,
(45)

where q_1 and q_2 index different isotypic components of Γ, then the off-diagonal blocks of K corresponding to interactions between those components are generally nonzero.

The condition F_Σ^p ∩ F_Γ^q ≠ ∅ is equivalent to

P_Σ^p ∘ (P_Γ^q ∘ f) ≠ 0,
(46)

where f is an arbitrary function and PΣp denotes the projection operator onto the pth isotypic component with respect to the symmetry group Σ,

P_Σ^p ∘ (P_Γ^q ∘ f) = Σ_{σ∈Σ} χ_p(σ) σ_ρ ∘ Σ_{γ∈Γ} χ_q(γ) γ_ρ ∘ f = Σ_{σ∈Σ, γ∈Γ} χ_p(σ) χ_q(γ) (σ_ρ γ_ρ) ∘ f.
(47)

Let H be the set of "left cosets" of Γ in Σ (defined as H = Σ/Γ = {σΓ : σ ∈ Σ}, where σΓ = {σγ : γ ∈ Γ}15). Thus, the condition of Eq. (46) fails to hold if, for all h ∈ H,

Σ_{γ∈Γ} χ_p(hγ^{-1}) χ_q(γ) = 0.
(48)

Using Eq. (48), the structure of Γ can be determined given the structure of K and Σ used in the calculation. Characters of irreducible representations are available for small order symmetry groups, and scaling up to a larger order is possible using computational group theory software. Below is an example for the subgroups of a dihedral group D3.
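As a computational counterpart to the example that follows, the sketch below uses the equivalent character-based check that two Σ-isotypic components interact only if some Γ-irreducible representation appears in the restriction of both (a restatement of the intersection conditions rather than a literal coset sum as in Eq. (48)); the element orderings and tolerance are assumptions.

```python
# Decide which off-diagonal blocks of K must vanish when the dictionary is adapted
# to Sigma = D3 but the true symmetry is a subgroup Gamma (Z3 or Z2).
import numpy as np

# Characters on the ordered elements {e, r, r^2, k, kr, kr^2} of D3.
w = np.exp(2j * np.pi / 3)
chi_D3 = {"tr":   np.array([1, 1, 1, 1, 1, 1]),
          "sign": np.array([1, 1, 1, -1, -1, -1]),
          "st":   np.array([2, -1, -1, 0, 0, 0])}

def interacting(chi_sigma_p, chi_sigma_q, gamma_idx, gamma_chars):
    """True if Sigma-components p and q share a Gamma-irrep in their restrictions."""
    res_p, res_q = chi_sigma_p[gamma_idx], chi_sigma_q[gamma_idx]
    mult = lambda res, chi: np.abs(np.vdot(chi, res)) / len(gamma_idx) > 1e-12
    return any(mult(res_p, chi) and mult(res_q, chi) for chi in gamma_chars)

# Gamma = Z3 = {e, r, r^2}: indices 0-2 in the D3 ordering.
z3_idx = [0, 1, 2]
z3_chars = [np.array([1, 1, 1]), np.array([1, w, w**2]), np.array([1, w**2, w])]
print(interacting(chi_D3["tr"], chi_D3["sign"], z3_idx, z3_chars))   # True,  cf. Fig. 3(b)
print(interacting(chi_D3["tr"], chi_D3["st"],   z3_idx, z3_chars))   # False

# Gamma = Z2 = {e, k}: indices 0 and 3.
z2_idx = [0, 3]
z2_chars = [np.array([1, 1]), np.array([1, -1])]
print(interacting(chi_D3["tr"], chi_D3["sign"], z2_idx, z2_chars))   # False, cf. Fig. 3(c)
print(interacting(chi_D3["tr"], chi_D3["st"],   z2_idx, z2_chars))   # True
```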

Example IV.3

Coupled Duffing oscillators:(Z2×Z2)- or (Z2×Z3)-equivariant system with Z2×D3 basis functions.

We consider different coupling schemes of networks of 3 Duffing oscillators shown in Figs. 1(b) and 1(c). We first note that the Z2 symmetry generated by a sign flip is still present in the system for both cases, so two noninteracting blocks corresponding to irreducible representations of that group with respect to that action are still present. Now, we focus on the structure of K within each of these noninteracting blocks.

First, let the function dictionary symmetry be Σ = D3 = ⟨r, κ | r³ = κ² = e, κrκ = r^{-1}⟩ and the true symmetry of the system be Γ = Z3 = ⟨r | r³ = e⟩, where r ∈ D3 and r ∈ Z3 have the same action. The isotypic component decomposition of D3 is defined in Example IV.2 and can be written as F = F_{tr,D3} ⊕ F_{sign,D3} ⊕ F_{st,D3}. The isotypic component decomposition of Z3 is defined as F = F_{tr,Z3} ⊕ F_{ω,Z3} ⊕ F_{ω²,Z3} (Z3 has three 1-dimensional irreducible representations with χ_tr(r) = 1, χ_ω(r) = ω, χ_{ω²}(r) = ω²). We note that
  • F_{tr,Z3} ⊃ F_{tr,D3}

    • Functions belonging to F_{tr,Z3} satisfy the following condition:

      F_{tr,Z3} = {f : f(x_3, x_1, x_2) = f(x_1, x_2, x_3)}.

    • Functions belonging to F_{tr,D3} satisfy the following conditions:
      F_{tr,D3} = {f : f(x_3, x_1, x_2) = f(x_1, x_2, x_3), f(x_1, x_3, x_2) = f(x_1, x_2, x_3)}.
    • Thus, F_{tr,Z3} ∩ F_{tr,D3} = F_{tr,D3}.

  • F_{tr,Z3} ⊃ F_{sign,D3}

    • This can be shown in a similar fashion.

  • (F_{ω,Z3} ⊕ F_{ω²,Z3}) ∩ F_{st,D3} = F_{st,D3}

    • Functions belonging to F_{ω,Z3} and F_{ω²,Z3} satisfy the following conditions:

      F_{ω,Z3} = {f : f(x_3, x_1, x_2) = ω f(x_1, x_2, x_3)},

      F_{ω²,Z3} = {f : f(x_3, x_1, x_2) = ω² f(x_1, x_2, x_3)}.

    • Functions belonging to F_{st,D3} satisfy the following conditions:
      F_{st,D3} = {f_1, f_2 : f_1(x_3, x_1, x_2) = ω f_1(x_1, x_2, x_3), f_2(x_3, x_1, x_2) = ω² f_2(x_1, x_2, x_3), f_1(x_1, x_3, x_2) = f_2(x_1, x_2, x_3), f_2(x_1, x_3, x_2) = f_1(x_1, x_2, x_3)}.
    • Thus, (F_{ω,Z3} ⊕ F_{ω²,Z3}) ∩ F_{st,D3} = F_{st,D3}.

Thus, the off block-diagonal structure of K is defined by
  • K_{F_{tr,D3} → F_{sign,D3}} ≠ 0

    • To see that this is the case, we refer back to the conditions in Eqs. (44) and (45). Since the intersections of both F_{tr,D3} and F_{sign,D3} with F_{tr,Z3} are nonzero, the interaction between these components produces nonzero elements in K.

  • K_{F_{sign,D3} → F_{tr,D3}} ≠ 0

    • This can be shown in a similar fashion.

  • Other off-diagonal blocks are zero

    • For instance, since there is no isotypic component of Z3 whose intersections with both F_{tr,D3} and F_{st,D3} are simultaneously nonzero, K_{F_{tr,D3} → F_{st,D3}} = 0 and K_{F_{st,D3} → F_{tr,D3}} = 0, corresponding to blocks of zeros in K.

This structure is illustrated in Fig. 3(b) and differs from that in Fig. 3(a).
FIG. 3. Structure of K with basis functions belonging to the isotypic components of Σ=D3 for different underlying symmetries of the system. The labels above and to the left correspond to the isotypic components interacting in each block. Two labels are needed to index over D3 (bold font) and Z2 (standard font). (a) K if Σ=Γ=D3. (b) K if Σ=D3, Γ=Z3. (c) K if Σ=D3, Γ=Z2.

Now, let Σ = D3 and Γ = Z2. Here, Z2 = ⟨κ | κ² = e⟩ = {e, κ}, and κ ∈ D3 and κ ∈ Z2 have the same action. The isotypic component decomposition of Z2 is defined as F = F_{tr,Z2} ⊕ F_{sign,Z2}.

We note that

  • F_{tr,Z2} ∩ F_{tr,D3} ≠ ∅,

  • F_{tr,Z2} ∩ F_{st,D3} ≠ ∅,

  • F_{sign,Z2} ∩ F_{sign,D3} ≠ ∅,

  • F_{sign,Z2} ∩ F_{st,D3} ≠ ∅.

Additionally,
  • F_{tr,Z2} ∩ F_{sign,D3} = ∅,

  • F_{sign,Z2} ∩ F_{tr,D3} = ∅.

Thus, the off block-diagonal structure of K is defined by:
  • K_{F_{tr,D3} → F_{sign,D3}} = 0.

  • Other off-diagonal blocks corresponding to interactions between node permutation isotypic components are generally nonzero.

This structure is illustrated in Fig. 3(c) and differs from that in Figs. 3(a) and 3(b).

This example shows that the structure of the approximation K, computed with the maximal assumed symmetry, provides information about the actual underlying symmetries of the system.

In this specific example, we can see that any off-diagonal block can be used as an indicator of whether a symmetry subgroup Z2 or Z3 is present, as seen in Figs. 3(b) and 3(c).

In summary, the symmetries of the system can be detected from the structure of the Koopman operator approximation matrix. This allows using the same method both to detect the symmetries of dynamical systems from data and to obtain their Koopman operator approximation. However, we also note that there are multiple other methods to detect the symmetries of dynamical systems; for instance, this work can be related to symmetry detectives.5 Additionally, in many cases, we do not expect perfect symmetries to be present in data, as discussed in Sec. IV C. Thus, it would be useful to see how these imperfections affect the results in order to be able to apply the symmetry considerations in a more practical setting.

In this paper, we provide a general approach for dimensionality reduction in the calculation of Koopman operator approximations by exploiting the underlying symmetries present in both the system’s dynamics and its structure. The exact scaling achieved by the reduction depends on the structure of the symmetry group of the dynamical system, specifically, on the number of irreducible representations of the symmetry group and their dimensionalities.

The results outlined in this paper, like most of the literature on dynamical systems with symmetries, are immediately applicable when exact symmetries are present in the nonlinear dynamics. That is the case when the system is completely deterministic and the initial conditions respect the symmetries of the system. If the symmetries of the system are known and the available trajectories are deterministic, it is always possible to reconstruct the trajectories related to them via the symmetry group of the system. A full set of trajectories respecting the symmetries of the system can then be used to approximate the Koopman operator and its eigendecomposition.

However, in many systems that information is not available ahead of time, and the symmetries are not present in the data, even if the initial conditions are symmetric, because of noise in the system. Examples of data that are not fully symmetric include the following cases and their combinations:

  • Deterministic systems with measurement noise. DMD for systems with measurement noise and possible ways to correct for it are presented in Ref. 12. It is shown in  Appendix F that in this case the expected values of off-diagonal elements of K computed using the EDMD are zero, so the block decomposition may still be applicable.

  • Stochastic systems with symmetric initial conditions and process noise. DMD applied to the systems with process noise is studied, for instance, in Ref. 4.

  • Systems with imperfect symmetries due to sampling and unknown underlying symmetries.

  • Systems with imperfect symmetries in dynamics (e.g., slight parameter mismatch).

All these cases require separate treatment, and whether the isotypic component decomposition is still useful in computing the Koopman operator approximation will vary depending on specific characteristics of the data available from the system, such as the strength of the noise or the trajectory sampling characteristics.

In this paper, we apply tools from group theory and representation theory to study the structure of the Koopman operator for equivariant dynamical systems. This approach can be applied to systems with permutation symmetries (e.g., networks symmetric under node permutations, where the information about the symmetries is contained in the adjacency matrix), systems with intrinsic dynamical symmetries, and systems with both types of symmetries present. We find that the operator itself and its approximations can be block diagonalized using a symmetry basis that respects the isotypic component structure related to the underlying symmetry group and the actions of its elements. For the approximation matrix to be exactly block diagonal, the data must respect the symmetries of the system. That can be readily accomplished if the underlying symmetry is known ahead of time (e.g., the topology of the network is known). Symmetry considerations are applicable to both EDMD and kernel DMD, which means that they are useful both in the regime where the number of observables is much larger than the number of measurements and in the opposite regime.

Moving forward, it would be possible to extend these results. For instance, a natural next step would be to investigate the effect of noise and imperfect symmetries on the Koopman operator approximations for equivariant or nearly equivariant dynamical systems in more detail. It would also be useful to apply the symmetry considerations beyond the range of applicability of EDMD. In that case, symmetry considerations can be used to study, for instance, systems with continuous Koopman spectra. Other future directions include relating our results to the existing literature on equivariant bifurcation theory, stability analysis, and continuous symmetries.

This work was supported by the U.S. Army Research Laboratory and the U.S. Army Research Office under MURI Award No. W911NF-13-1-0340. The authors thank Jordan Snyder, Mehran Mesbahi, Afshin Mesbahi, and the entire MURI team for useful discussions.

We show that d-dimensional irreducible representations of Γ yield identical blocks of K in the isotypic component basis obtained using the unitary irreducible representations of Γ.

Let the function space be decomposed into isotypic components according to the actions $\gamma_\rho \in \Gamma_\rho$ of the symmetry group Γ of order |Γ|: $F = F^1 \oplus \cdots \oplus F^N$, where N is the number of irreducible representations of Γ. Let $F^p$ be one of these isotypic components with a corresponding unitary irreducible representation with elements $R^p(\gamma)$ corresponding to $\gamma \in \Gamma$, and let $d_p$ be the dimensionality of that representation.

The projection operator is defined as

$P^p_{mn} = \frac{d_p}{|\Gamma|} \sum_{\gamma \in \Gamma} [R^p(\gamma)]_{mn}\, \gamma_\rho.$
(A1)

It acts on $f \in F$ to produce sets of projected functions according to

$\xi^p_{mn} = P^p_{mn} \circ f.$
(A2)

We already know that $K \xi^p_{mn} = h^p$, where $h^p \in F^p$. The subspace $F^p$ can be decomposed into $d_p$ components $F^p = F^{p,1} \oplus \cdots \oplus F^{p,d_p}$, where $F^{p,m} = \{g \,|\, g = P^p_{mn} \circ f,\; f \in F,\; n = 1,\ldots,d_p\}$. This is a well-defined decomposition since $\langle P^p_{mn} f, P^p_{kl} h \rangle = \langle f, P^p_{nm} P^p_{kl} h \rangle = \langle f, \delta_{mk} P^p_{nl} h \rangle$ (Ref. 11) can be nonzero only when m = k.

We want to show that $K \xi^p_{mn} \in F^{p,m}$ (this is also true for any linear operator that commutes with the action of the symmetry group). Since the operator commutes with the actions of the group,

$K P^p_{mn} \circ f = P^p_{mn} K \circ f = P^p_{mn} \circ h \in F^{p,m}.$
(A3)

Here, $K \circ f \equiv h$.

Let $f_\Gamma = \{\gamma \circ f \,|\, \gamma \in \Gamma\}$. Any set of linearly independent functions that span $f_\Gamma$ can be transformed into a symmetry-respecting basis obtained by calculating all the projections $P^p_{mn} \circ f_\gamma$, where $f_\gamma \in f_\Gamma$. That corresponds to a block diagonal form of the Koopman operator.

We have already shown that $K$, the approximation of $\mathcal{K}$, also commutes with the actions of the elements of Γ for Γ-equivariant dynamical systems with Γ-equivariant data. Thus, we can obtain an observable dictionary that block diagonalizes K into |Γ| blocks, where each $d_p$-dimensional irreducible representation results in $d_p$ blocks of dimension $d_p \times d_p$.

Additionally, suppose $K P^p_{mn} \circ f = h$; then $K P^p_{kn} \circ f = P^p_{km} K P^p_{mn} \circ f = P^p_{km} \circ h$. This gives us the relation between blocks of K corresponding to the same irreducible representation p. In the context of the approximation K, it means that the blocks $K^{p,i}$ (the blocks corresponding to $\psi \in F^{p,i}$) are equal for all i (for data respecting the symmetries of the system and a proper ordering of basis functions).
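
The construction above can be checked numerically. The sketch below builds the projection operators of Eq. (A1) for D3 acting on $\mathbb{R}^3$ by node permutations and verifies that the diagonal projectors resolve the identity and are idempotent. It is an illustrative sketch: the choice of generators, the node ordering, and the explicit 2D irreducible representation matrices are assumptions, and the complex conjugate used in the code (the standard convention) is immaterial for these real representations.

```python
import numpy as np

def perm_matrix(p):
    """Permutation matrix sending basis vector e_i to e_{p[i]}."""
    M = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

# Generators of D3 acting on 3 nodes: rotation r (i -> i+1 mod 3), reflection k (swap nodes 1, 2).
r_perm, k_perm = perm_matrix([1, 2, 0]), perm_matrix([0, 2, 1])

# Matching generators of the 2D "standard" irrep (triangle with node 0 on the y axis).
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r_std = np.array([[c, -s], [s, c]])
k_std = np.array([[-1.0, 0.0], [0.0, 1.0]])

def elements(r, k):
    """All six group elements, built the same way from the two generators."""
    e = np.eye(r.shape[0])
    return [e, r, r @ r, k, k @ r, k @ r @ r]

gamma_rho = elements(r_perm, k_perm)                 # action on node-indexed vectors
irreps = {
    "tr":   [np.eye(1)] * 6,                         # trivial representation
    "sign": [np.eye(1)] * 3 + [-np.eye(1)] * 3,      # sign representation
    "st":   elements(r_std, k_std),                  # 2D standard representation
}

def proj(p, m, n):
    """Projection operator P^p_{mn} of Eq. (A1) (with the conventional conjugate)."""
    d = irreps[p][0].shape[0]
    return d / 6 * sum(np.conj(R[m, n]) * g for R, g in zip(irreps[p], gamma_rho))

# The diagonal projectors resolve the identity and are idempotent.
total = sum(proj(p, m, m) for p in irreps for m in range(irreps[p][0].shape[0]))
assert np.allclose(total, np.eye(3))
assert np.allclose(proj("st", 0, 0) @ proj("st", 0, 0), proj("st", 0, 0))
# The sign component does not appear in the permutation action on R^3.
assert np.allclose(proj("sign", 0, 0), 0.0)
```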

We show that G and G+ (+ denotes the Moore-Penrose pseudoinverse) commute. We note that G is a Hermitian matrix since

$G^\dagger = \sum_m \left( \Psi(x_m)^\dagger \Psi(x_m) \right)^\dagger = \sum_m \Psi(x_m)^\dagger \Psi(x_m) = G.$
(B1)

Thus, G is also normal, i.e., $G G^\dagger = G^\dagger G$. We show that if G is normal, $G G^+ = G^+ G$.

Two of the criteria that define the Moore-Penrose pseudoinverse (Ref. 35) state that $G^+ = G^+ G G^+$ and $(G G^+)^\dagger = G G^+$. It follows that the following relation holds: $G^+ = G^+ (G G^+)^\dagger = G^+ (G^+)^\dagger G^\dagger$. Using that relation (Ref. 6) and the commutativity of the $+$ and $\dagger$ operations, we obtain

$G^+ G = G^+ (G^+)^\dagger G^\dagger G = (G^+)^\dagger G^+ G G^\dagger = (G^+)^\dagger G^\dagger = \left( G^+ (G^+)^\dagger G^\dagger \right)^\dagger G^\dagger = G G^+ (G^+)^\dagger G^\dagger = G G^+.$
(B2)

Since the action of γ commutes with A and G, and since G commutes with G+, the action of γ commutes with K=G+A, which is a Koopman operator approximation.
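
As an illustrative numerical check of this commutation (not part of the derivation), one can build a Gram-type matrix from random feature rows and compare $G G^+$ with $G^+ G$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex "feature" rows Psi(x_m); stacking them gives Psi_x (M x N).
M, N = 20, 8
Psi_x = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))

# Gram-type matrix G = sum_m Psi(x_m)^dagger Psi(x_m) = Psi_x^dagger Psi_x (Hermitian).
G = Psi_x.conj().T @ Psi_x
assert np.allclose(G, G.conj().T)

# G is therefore normal, and its Moore-Penrose pseudoinverse commutes with it.
G_pinv = np.linalg.pinv(G)
assert np.allclose(G @ G_pinv, G_pinv @ G)
```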

We note that the off-diagonal blocks of the approximation matrix K are only zero if the symmetries are present in the data. The nondiagonal elements can be set to zero explicitly, making computations more efficient.

Moreover, if the symmetries are known a priori, a fully symmetrized data set is not necessary to obtain an approximation of the diagonal block elements of K. Suppose that we have a dictionary of basis functions belonging to a particular isotypic component with respect to the action of the full symmetry group Γ. We label that component by p and the corresponding unitary irreducible representation by $R^p(\gamma)$. We then define $\mathbf{R}^p(\gamma) \equiv R^p(\gamma) \otimes I_{n \times n}$, where n is the number of functions in the pth isotypic component. By the definition of isotypic components, even for unsymmetrized data it is the case that

$\Xi(\gamma x_m)^\dagger\, \Xi(\gamma y_m) = \left( \Xi(x_m) \mathbf{R}^p(\gamma) \right)^\dagger \left( \Xi(y_m) \mathbf{R}^p(\gamma) \right) = \left( \mathbf{R}^p(\gamma) \right)^\dagger \Xi(x_m)^\dagger \Xi(y_m)\, \mathbf{R}^p(\gamma).$
(C1)

If R(γ) is a diagonal matrix,

$\Xi(\gamma x_m)^\dagger\, \Xi(\gamma y_m) = \Xi(x_m)^\dagger\, \Xi(y_m).$
(C2)

In the case of one-dimensional irreducible representations, it is therefore not necessary to use reflected data to produce the corresponding blocks of K. That is not the case for higher-dimensional representations; for instance, in Example IV.2, one of the generators of D3, κ, corresponds to a nondiagonal matrix R(κ). In that case,

$\sum_\gamma \sum_m \Xi(\gamma x_m)^\dagger\, \Xi(\gamma y_m) = 3 \sum_m \left( \Xi(x_m)^\dagger \Xi(y_m) + \Xi(\kappa x_m)^\dagger \Xi(\kappa y_m) \right).$
(C3)

This demonstrates that the method is data-efficient and establishes the requirements on the symmetry properties of the data.

We show that rotating the observable dictionary preserves the symmetries of the reconstructed trajectories.

Suppose that we have a basis consisting of dictionary functions $D_\psi$ and a dictionary $D_\xi$ obtained by $\Xi = T \Psi$. Let $\Psi(t) \equiv (\psi_1(x(t)), \ldots, \psi_N(x(t)))^T$ and $\Xi(t) \equiv (\xi_1(x(t)), \ldots, \xi_N(x(t)))^T$. We show that rotating the dictionary function vector by the projection matrix T does not affect the trajectory reconstruction,

$\Psi_{t+1} = K_\psi \Psi_t, \qquad \Xi_t = T \Psi_t, \qquad \Xi_{t+1} = K_\xi T \Psi_t = T K_\psi \Psi_t = T \Psi_{t+1}.$
(D1)

Next, we show that the state reconstruction preserves the symmetries of the system. Let P be the action of the symmetry group on the basis functions Ψ. We aim to show that if $\Psi_{t+1} = K \Psi_t$, then $P \Psi_{t+1} = K P \Psi_t$. This follows directly from the fact that K and P commute,

$P \Psi_{t+1} = P K \Psi_t = K P \Psi_t.$
(D2)

Thus, the trajectories of basis functions reconstructed using the EDMD approximation are Γ-equivariant, just like the original system. In particular, this is true in the case of the evolution of the full state observable.
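
A small numerical check of Eq. (D1) is sketched below (the random $K_\psi$ and the unitary choice of T are illustrative assumptions): rotating the dictionary by T and evolving with $K_\xi = T K_\psi T^{-1}$ reproduces the rotated reconstruction $T \Psi_{t+1}$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5

K_psi = rng.standard_normal((N, N))        # EDMD matrix in the original dictionary
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
T = Q                                      # a unitary (orthogonal) change of dictionary
K_xi = T @ K_psi @ T.T                     # corresponding matrix for Xi = T Psi

Psi_t = rng.standard_normal(N)
Psi_next = K_psi @ Psi_t                   # Psi_{t+1} = K_psi Psi_t
Xi_next = K_xi @ (T @ Psi_t)               # Xi_{t+1} = K_xi Xi_t

# The rotated reconstruction is the rotation of the original one: Xi_{t+1} = T Psi_{t+1}.
assert np.allclose(Xi_next, T @ Psi_next)
```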

Kernel DMD, introduced in Ref. 49, is a variant of Koopman operator matrix approximation that is most efficient when the number of measurement points is much smaller than the number of basis functions. Kernel DMD relies on evaluating $\hat G$ and $\hat A$ using the kernel method. Their elements can be found by indirectly evaluating the inner products in the basis function space, $k(x_m, y_n) = \Psi(x_m) \Psi(y_n)^\dagger$ [e.g., if k is a polynomial kernel, $k(x, y) = (1 + x y^T)^\alpha$]. We note that $k(\gamma x, \gamma y) = k(x, y)$ due to the properties of inner products.

In kernel DMD, $\hat G_{ij} = k(x_i, x_j)$ and $\hat A_{ij} = k(x_i, y_j)$. The eigendecomposition $\hat G = Q \Sigma^2 Q^T$ is then used to find the matrix $\hat K$ and to compute the eigendecomposition of the Koopman operator approximation matrix K,

$\hat K = (\Sigma^+ Q^T)\, \hat A\, (Q \Sigma^+).$
(E1)
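
A minimal kernel DMD sketch following Eq. (E1) is shown below; the toy snapshot pairs and the polynomial kernel degree are illustrative assumptions, and Ref. 49 should be consulted for the full method, including the recovery of eigenfunctions and modes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy snapshot pairs: column m of Y is the image of column m of X one step later.
X = rng.standard_normal((2, 40))
Y = 0.9 * X - 0.05 * X ** 3

def kernel(A, B, alpha=4):
    """Polynomial kernel k(x, y) = (1 + x . y)^alpha evaluated for all column pairs."""
    return (1.0 + A.T @ B) ** alpha

G_hat = kernel(X, X)                 # G_hat_ij = k(x_i, x_j)
A_hat = kernel(X, Y)                 # A_hat_ij = k(x_i, y_j)

# Eigendecomposition G_hat = Q Sigma^2 Q^T (G_hat is symmetric positive semidefinite).
evals, Q = np.linalg.eigh(G_hat)
sigma = np.sqrt(np.clip(evals, 0.0, None))
sigma_pinv = np.zeros_like(sigma)
keep = sigma > 1e-10 * sigma.max()
sigma_pinv[keep] = 1.0 / sigma[keep]

# K_hat = (Sigma^+ Q^T) A_hat (Q Sigma^+), as in Eq. (E1).
K_hat = (sigma_pinv[:, None] * Q.T) @ A_hat @ (Q * sigma_pinv[None, :])
koopman_eigs = np.linalg.eigvals(K_hat)   # approximate Koopman eigenvalues
```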

Again, we pick a particular order of group elements similar to Eq. (29),

$\Psi_x = \begin{pmatrix} \Psi(\gamma_1 x) \\ \vdots \\ \Psi(\gamma_{|\Gamma|} x) \end{pmatrix}, \quad \text{where} \quad \Psi(x) = \begin{pmatrix} \Psi_1(x_1) & \cdots & \Psi_N(x_1) \\ \vdots & & \vdots \\ \Psi_1(x_{M/|\Gamma|}) & \cdots & \Psi_N(x_{M/|\Gamma|}) \end{pmatrix}.$
(E2)

We also construct a permutation representation of the group with elements denoted by Pγi as defined in Eq. (30).

By Cayley’s theorem, such permutations form a group isomorphic to Γ. Determining the actions $P_{\gamma_i}$ of the group generators is sufficient to find the actions of all the group elements. Let $\mathbf{P}_{\gamma_k} = P_{\gamma_k} \otimes I_{n \times n}$. We note that $(\mathbf{P}_{\gamma_k})^\dagger = (\mathbf{P}_{\gamma_k})^{-1}$. It can be shown that

$\mathbf{P}_{\gamma_i} \hat G = \hat G\, \mathbf{P}_{\gamma_i}, \qquad \mathbf{P}_{\gamma_i} \hat A = \hat A\, \mathbf{P}_{\gamma_i}.$
(E3)

We do so for A^, and the proof for G^ is equivalent. We find that

$(\mathbf{P}_{\gamma_i} \hat A)_{kl} = \hat A_{pl} = k(x_p, y_l), \quad \gamma_p = \gamma_i \gamma_k; \qquad (\hat A\, \mathbf{P}_{\gamma_i})_{kl} = \hat A_{kq} = k(x_k, y_q), \quad \gamma_q = \gamma_i^{-1} \gamma_l.$
(E4)

Finally, $k(x_p, y_l) = k(\gamma_i x_k, \gamma_i y_q) = k(x_k, y_q)$.

Since relation (31) holds, the same reasoning can be applied to block diagonalize the matrix $\hat K$. It is sufficient to apply the projection operator (Ref. 42)

$P^p_{mn} = \frac{d_p}{|\Gamma|} \sum_{\gamma_i \in \Gamma} [R^p(\gamma_i)]_{mn}\, \mathbf{P}_{\gamma_i}.$
(E5)

This projection operator is analogous to the one introduced in Eq. (25), except the symmetry group in this case acts by permuting the group elements.

We can apply the singular value decomposition (SVD) of this projection operator to obtain a basis for the projection subspaces of the irreducible representations (the isotypic components). We form the transformation matrix T by computing the SVD and stacking the resulting singular vectors as rows of T, so that $\mathbf{T} = T \otimes I_{n \times n}$.

Similar to EDMD, the isotypic component basis simplifies calculating the approximations of K^,

$\hat A_D = \bigoplus_p \bigoplus_q \hat A_{pq}, \qquad \hat G_D = \bigoplus_p \bigoplus_q \hat G_{pq} = \bigoplus_p \bigoplus_q Q_{pq} \Sigma_{pq}^2 Q_{pq}^T, \qquad \hat K_D = \bigoplus_p \bigoplus_q (\Sigma_{pq}^+ Q_{pq}^T)\, \hat A_{pq}\, (Q_{pq} \Sigma_{pq}^+).$
(E6)
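
The block-wise computation in Eq. (E6) can be organized as in the following sketch. It assumes that $\hat G$ and $\hat A$ have already been transformed to the symmetry-adapted ordering and that the index ranges of the blocks are known; for $d_p > 1$, only one representative block per irreducible representation needs to be processed, since the remaining blocks are equal to it.

```python
import numpy as np

def blockwise_kernel_dmd(G_hat, A_hat, block_slices):
    """Compute K_hat block by block from a symmetry-adapted G_hat and A_hat.
    `block_slices` holds one slice per block; for d_p > 1 it suffices to pass a
    single representative slice per irreducible representation, since the
    remaining blocks are equal to it. Returns the blocks and the pooled eigenvalues."""
    K_blocks, eigs = [], []
    for sl in block_slices:
        G_b, A_b = G_hat[sl, sl], A_hat[sl, sl]
        evals, Q = np.linalg.eigh(G_b)
        sigma = np.sqrt(np.clip(evals, 0.0, None))
        sigma_pinv = np.zeros_like(sigma)
        keep = sigma > 1e-10 * max(sigma.max(), 1e-300)
        sigma_pinv[keep] = 1.0 / sigma[keep]
        K_b = (sigma_pinv[:, None] * Q.T) @ A_b @ (Q * sigma_pinv[None, :])
        K_blocks.append(K_b)
        eigs.extend(np.linalg.eigvals(K_b))
    return K_blocks, np.array(eigs)

# Hypothetical usage with three blocks of a 12 x 12 symmetry-adapted pair (G_hat, A_hat):
# K_blocks, eigs = blockwise_kernel_dmd(G_hat, A_hat, [slice(0, 4), slice(4, 8), slice(8, 12)])
```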

The modification is summarized in Table II.

TABLE II.

Kernel DMD vs modified kernel DMD for Γ-equivariant systems. |Γ| is the order of Γ. The irreducible representations of Γ are indexed by p and are dp-dimensional. Here, M is the number of data points used by the algorithm and {(xm, ym)} respects the symmetries of the system.

Standard kernel DMD:

  • Pick a dictionary of N observables.

  • Evaluate the kernel functions at data points $x_i$ and $y_i$.

  • Evaluate the entries of $\hat G$, $\hat A$: $M^2$ elements.

  • Obtain $\hat G^+$: $M \times M$ matrix.

  • Find $\hat K = (\Sigma^+ Q^T) \hat A (Q \Sigma^+)$: $M \times M$ matrices.

  • Find the eigendecomposition of $\hat K$: $M \times M$ matrix.

Kernel DMD for Γ-equivariant systems:

  • Pick a dictionary of N observables.

  • Identify the symmetry Γ of the system and find the irreducible representations of Γ.

  • Change the basis to a Γ-symmetric basis using Eqs. (30) and (E5).

  • Evaluate the observables at data points $x_i$ and $y_i$, adding trajectories to reflect the symmetries if necessary.

  • To obtain the blocks $\hat K_{pq}$ of $\hat K$ (each isotypic component corresponds to $d_p$ blocks), for each p:

    • Evaluate the entries of $\hat G_{p1}$, $\hat A_{p1}$: $(M_p/d_p)^2$ elements.

    • Obtain $\hat G_{p1}^+$: $(M_p/d_p) \times (M_p/d_p)$ matrix.

    • Find $\hat K_{p1} = \hat G_{p1}^+ \hat A_{p1}$: $(M_p/d_p) \times (M_p/d_p)$ matrices.

    • Find the eigendecomposition of $\hat K_{p1}$: $(M_p/d_p) \times (M_p/d_p)$ matrix.

    • The other $\hat K_{pq}$ blocks are equal to $\hat K_{p1}$.

  • $\hat K = \bigoplus_p \bigoplus_{q=1}^{d_p} \hat K_{pq}$; its eigenvalues are the eigenvalues of the $\hat K_p$, and its eigenvectors have only $M_p$ nonzero elements; mathematically, the eigenvectors $v_{kl}$ of $\hat K$ are of the form $(v_{kl})_i = \oplus_p \delta_{pk} v_{pl}$.

Finally, the approximations of the Koopman eigenvalues, eigenfunctions, and eigenmodes can be calculated using $\hat K_D$, as shown in Ref. 49.

Transfer operators with process and measurement noise were also studied in Ref. 40. Characterizing and correcting for the effect of sensor noise in DMD is discussed in Ref. 12. To quantify the effect of sensor noise on the structure of the matrix K, we extend those results to EDMD. The main modification is accounting for the effect of the noise in measuring X and Y on $\Psi_x$ and $\Psi_y$.

Let X and Y be matrices analogous to Ψx and Ψy corresponding to the full-state observable evaluated at discrete time steps. We denote the sensor noise matrices by Nx and Ny, so that the measured Xn and Yn can be found from Xn=X+Nx and Yn=Y+Ny. We assume that the noise distributions respect the symmetries of the system, which might be the case, for instance, for symmetric networks. Moreover, we assume that the noise is state-independent.

We can form vectors Ψxn and Ψyn that can be used to find the approximation K using EDMD,

$K_n = \Psi_{x_n}^{+} \Psi_{y_n}.$
(F1)

Here, $(\Psi_{x_n})_{ij} = \psi_j((X_n)_i)$ and $(\Psi_{y_n})_{ij} = \psi_j((Y_n)_i)$, and $N_{\Psi,x}$ and $N_{\Psi,y}$ correspond to noise matrices obtained as

$(N_{\Psi,x})_{ij} = \psi_j(X_i + N_{x,i}) - \psi_j(X_i).$
(F2)

We aim to show that E(PKn)=E(KnP), meaning that the expected value of the Koopman operator Kn commutes with the permutation matrix corresponding to an element of the symmetry group. If that is the case, then the expected values of the off-block-diagonal elements of Kn in a symmetry adapted basis as defined in Eq. (27) are zero. To do that, we can express Kn as

$K_n = \Psi_{x_n}^{+} \Psi_{y_n} = (\Psi_x + N_{\Psi,x})^{+} (\Psi_y + N_{\Psi,y}) = \left( (\Psi_x + N_{\Psi,x})^\dagger (\Psi_x + N_{\Psi,x}) \right)^{+} (\Psi_x + N_{\Psi,x})^\dagger (\Psi_y + N_{\Psi,y}) = \left( \Psi_x^\dagger \Psi_x + \Psi_x^\dagger N_{\Psi,x} + N_{\Psi,x}^\dagger \Psi_x + N_{\Psi,x}^\dagger N_{\Psi,x} \right)^{+} \left( \Psi_x^\dagger \Psi_y + \Psi_x^\dagger N_{\Psi,y} + N_{\Psi,x}^\dagger \Psi_y + N_{\Psi,x}^\dagger N_{\Psi,y} \right).$
(F3)

If the inverse of the first term exists, in the weak noise limit it can be expanded into a Taylor series with terms of the form below. We need to show that

$P_{\gamma_k}\, E(M_1 M_2 \cdots M_{n-1} M_n)$
(F4)
$= E(M_1 M_2 \cdots M_{n-1} M_n)\, P_{\gamma_k}.$
(F5)

Here, the matrices Mi are selected from NΨ,x/y and Ψx/y. That follows directly from

$E\!\left( M_1^{P_{\gamma_k}} M_2^{P_{\gamma_k}} \cdots M_{n-1}^{P_{\gamma_k}} M_n^{P_{\gamma_k}} \right) = P_{\gamma_k}^{-1}\, E(M_1 M_2 \cdots M_{n-1} M_n)\, P_{\gamma_k} = E(M_1 M_2 \cdots M_{n-1} M_n),$ where $M^{P_{\gamma_k}} \equiv P_{\gamma_k}^{-1} M P_{\gamma_k}$ denotes conjugation by $P_{\gamma_k}$.
(F6)

Thus, the expected values of the off-block-diagonal elements of Kn are zero in the isotypic component basis.
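
A Monte Carlo illustration of this statement is sketched below for a toy one-dimensional map with the symmetry $x \to -x$; the map, the dictionary (ordered as odd functions followed by even functions), and the noise level are illustrative assumptions and not the system studied in this paper. With symmetrized data and symmetric measurement noise, the parity-mixing blocks of the averaged EDMD matrix should be close to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: 0.9 * x - 0.1 * x ** 3        # Z2-equivariant toy map: f(-x) = -f(x)
# Dictionary ordered as (odd | even) parity blocks: [x, x^3, 1, x^2].
psi = lambda x: np.column_stack([x, x ** 3, np.ones_like(x), x ** 2])

x = rng.uniform(-1.5, 1.5, 200)
x = np.concatenate([x, -x])                 # symmetrize the initial conditions
y = f(x)

trials, noise_level = 500, 0.05
K_avg = np.zeros((4, 4))
for _ in range(trials):
    xn = x + noise_level * rng.standard_normal(x.shape)   # measurement noise on X
    yn = y + noise_level * rng.standard_normal(y.shape)   # measurement noise on Y
    K_avg += np.linalg.pinv(psi(xn)) @ psi(yn) / trials   # EDMD: K_n = Psi_xn^+ Psi_yn

odd, even = slice(0, 2), slice(2, 4)
print("parity-mixing blocks:", np.linalg.norm(K_avg[odd, even]), np.linalg.norm(K_avg[even, odd]))
print("diagonal blocks:     ", np.linalg.norm(K_avg[odd, odd]), np.linalg.norm(K_avg[even, even]))
```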

1. P. Ashwin and J. W. Swift, “The dynamics of n weakly coupled identical oscillators,” J. Nonlinear Sci. 2(1), 69–108 (1992).
2. N. Aubry, W.-Y. Lian, and E. S. Titi, “Preserving symmetries in the proper orthogonal decomposition,” SIAM J. Sci. Comput. 14(2), 483–505 (1993).
3. S. Bagheri, “Koopman-mode decomposition of the cylinder wake,” J. Fluid Mech. 726, 596–623 (2013).
4. S. Bagheri, “Effects of weak noise on oscillating flows: Linking quality factor, Floquet modes, and Koopman spectrum,” Phys. Fluids 26(9), 094104 (2014).
5. E. Barany, M. Dellnitz, and M. Golubitsky, “Detecting the symmetry of attractors,” Physica D 67(1–3), 66–87 (1993).
6. A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications (Springer Science & Business Media, 2003), Vol. 15.
7. B. W. Brunton, L. A. Johnson, J. G. Ojemann, and J. N. Kutz, “Extracting spatial–temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition,” J. Neurosci. Methods 258, 1–15 (2016).
8. S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Discovering governing equations from data by sparse identification of nonlinear dynamical systems,” Proc. Natl. Acad. Sci. U.S.A. 113(15), 3932–3937 (2016).
9. M. Budišić, R. Mohr, and I. Mezić, “Applied Koopmanism,” Chaos 22(4), 047510 (2012).
10. Y. S. Cho, T. Nishikawa, and A. E. Motter, “Stable chimeras and independently synchronizable clusters,” Phys. Rev. Lett. 119(8), 084101 (2017).
11. J. F. Cornwell, Group Theory in Physics: An Introduction (Academic Press, 1997), Vol. 1.
12. S. T. M. Dawson, M. S. Hemati, M. O. Williams, and C. W. Rowley, “Characterizing and correcting for the effect of sensor noise in the dynamic mode decomposition,” Exp. Fluids 57(3), 42 (2016).
13. M. Dellnitz and S. Klus, “Sensing and control in symmetric networks,” Dyn. Syst. 32(1), 61–79 (2017).
14. J. Emenheiser, A. Chapman, M. Pósfai, J. P. Crutchfield, M. Mesbahi, and R. M. D’Souza, “Patterns of patterns of synchronization: Noise induced attractor switching in rings of coupled nonlinear oscillators,” Chaos 26(9), 094816 (2016).
15. M. Golubitsky and I. Stewart, The Symmetry Perspective: From Equilibrium to Chaos in Phase Space and Physical Space (Springer Science & Business Media, 2003), Vol. 200.
16. M. Golubitsky and I. Stewart, “Recent advances in symmetric and network dynamics,” Chaos 25(9), 097612 (2015).
17. M. Golubitsky, I. Stewart, and D. G. Schaeffer, Singularities and Groups in Bifurcation Theory (Springer Science & Business Media, 2012), Vol. 2.
18. J. D. Hart, Y. Zhang, R. Roy, and A. E. Motter, “Topological control of synchronization patterns: Trading symmetry for stability,” Phys. Rev. Lett. 122, 058301 (2019).
19. E. Kaiser, J. N. Kutz, and S. L. Brunton, “Discovering conservation laws from data for control,” in 2018 IEEE Conference on Decision and Control (CDC) (IEEE, 2018), pp. 6415–6421.
20. S. Klus, P. Koltai, and C. Schütte, “On the numerical approximation of the Perron–Frobenius and Koopman operator,” J. Comput. Dynam. 3(1), 51–79 (2016).
21. D. E. Knuth, The Art of Computer Programming (Pearson Education, 1997), Vol. 3.
22. B. O. Koopman, “Hamiltonian systems and transformation in Hilbert space,” Proc. Natl. Acad. Sci. U.S.A. 17(5), 315–318 (1931).
23. M. Korda and I. Mezić, “On convergence of extended dynamic mode decomposition to the Koopman operator,” J. Nonlinear Sci. 28(2), 687–710 (2018).
24. M. Korda, M. Putinar, and I. Mezić, “Data-driven spectral analysis of the Koopman operator,” Appl. Comput. Harmonic Anal. (published online).
25. A. Lasota and M. C. Mackey, Probabilistic Properties of Deterministic Systems (Cambridge University Press, 1985).
26. C. Letellier and L. A. Aguirre, “Investigating nonlinear dynamics from time series: The influence of symmetries and the choice of observables,” Chaos 12(3), 549–558 (2002).
27. Q. Li, F. Dietrich, E. M. Bollt, and I. G. Kevrekidis, “Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator,” Chaos 27(10), 103111 (2017).
28. B. Lusch, J. N. Kutz, and S. L. Brunton, “Deep learning for universal linear embeddings of nonlinear dynamics,” Nat. Commun. 9(1), 4950 (2018).
29. B. D. MacArthur, R. J. Sánchez-García, and J. W. Anderson, “Symmetry in complex networks,” Discrete Appl. Math. 156(18), 3525–3531 (2008).
30. M. H. Matheny, J. Emenheiser, W. Fon, A. Chapman, A. Salova, M. Rohden, J. Li, M. H. de Badyn, M. Pósfai, L. Duenas-Osorio et al., “Exotic states in a simple network of nanoelectromechanical oscillators,” Science 363(6431), eaav7932 (2019).
31. P. G. Mehta, M. Hessel-von Molo, and M. Dellnitz, “Symmetry of attractors and the Perron–Frobenius operator,” J. Difference Equ. Appl. 12(11), 1147–1178 (2006).
32. A. Mesbahi, J. Bu, and M. Mesbahi, “On modal properties of the Koopman operator for nonlinear systems with symmetry,” in 2019 American Control Conference (ACC) (IEEE, 2019), pp. 1918–1923.
33. L. M. Pecora, F. Sorrentino, A. M. Hagerstrom, T. E. Murphy, and R. Roy, “Cluster synchronization and isolated desynchronization in complex networks with symmetries,” Nat. Commun. 5, 4079 (2014).
34. L. M. Pecora, F. Sorrentino, A. M. Hagerstrom, T. E. Murphy, and R. Roy, “Discovering, constructing, and analyzing synchronous clusters of oscillators in a complex network using symmetries,” in Advances in Dynamics, Patterns, Cognition (Springer, 2017), pp. 145–160.
35. R. Penrose, “A generalized inverse for matrices,” in Mathematical Proceedings of the Cambridge Philosophical Society (Cambridge University Press, 1955), Vol. 51, pp. 406–413.
36. C. W. Rowley and S. T. M. Dawson, “Model reduction for flow analysis and control,” Annu. Rev. Fluid Mech. 49, 387–417 (2017).
37. H. Rubin and H. E. Meadows, “Controllability and observability in linear time-variable networks with arbitrary symmetry groups,” Bell Syst. Tech. J. 51(2), 507–542 (1972).
38. P. J. Schmid, “Dynamic mode decomposition of numerical and experimental data,” J. Fluid Mech. 656, 5–28 (2010).
39. A. S. Sharma, I. Mezić, and B. J. McKeon, “Correspondence between Koopman mode decomposition, resolvent mode decomposition, and invariant solutions of the Navier–Stokes equations,” Phys. Rev. Fluids 1(3), 032402 (2016).
40. S. Sinha, B. Huang, and U. Vaidya, “Robust approximation of Koopman operator and prediction in random dynamical systems,” in 2018 Annual American Control Conference (ACC) (IEEE, 2018), pp. 5491–5496.
41. F. Sorrentino, L. M. Pecora, A. M. Hagerstrom, T. E. Murphy, and R. Roy, “Complete characterization of the stability of cluster synchronization in complex dynamical networks,” Sci. Adv. 2(4), e1501737 (2016).
42. E. Stiefel and A. Fässler, Group Theoretical Methods and Their Applications (Springer Science & Business Media, 2012).
43. Y. Susuki and I. Mezic, “Nonlinear Koopman modes and coherency identification of coupled swing dynamics,” IEEE Trans. Power Syst. 26(4), 1894–1904 (2011).
44. Y. Susuki, I. Mezic, F. Raak, and T. Hikihara, “Applied Koopman operator theory for power systems technology,” Nonlinear Theory Appl. IEICE 7(4), 430–459 (2016).
45. A. Tantet, V. Lucarini, F. Lunkeit, and H. A. Dijkstra, “Crisis of the chaotic attractor of a climate model: A transfer operator approach,” Nonlinearity 31(5), 2221 (2018).
46. S. M. Ulam, A Collection of Mathematical Problems (Interscience Publishers, 1960), Vol. 8.
47. A. J. Whalen, S. N. Brennan, T. D. Sauer, and S. J. Schiff, “Observability and controllability of nonlinear networks: The role of symmetry,” Phys. Rev. X 5(1), 011005 (2015).
48. M. O. Williams, I. G. Kevrekidis, and C. W. Rowley, “A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition,” J. Nonlinear Sci. 25(6), 1307–1346 (2015).
49. M. O. Williams, C. W. Rowley, and I. G. Kevrekidis, “A kernel-based method for data-driven Koopman spectral analysis,” J. Comput. Dynam. 2(2), 247–265 (2015).