Thermal energy storage is being actively investigated for grid, industrial, and building applications as a route to an all-renewable energy world. Phase change materials (PCMs), which are commonly used in thermal energy storage applications, are difficult to design because they require excellent energy density and thermal transport, both of which are difficult to predict from simple physics-based models. In this Perspective, we describe recent advances in the understanding of the equilibrium and transport properties of PCMs that can help accelerate technology development. We emphasize how the microscopic phonon picture of both liquids and solids enables a better understanding of novel PCM systems and improves predictive power. Finally, we show how this microscopic picture can be used to understand kinetic processes, such as supercooling, and how they impact the thermal power output of thermal energy storage systems.

## INTRODUCTION

One of the central challenges of this century is mitigating and reversing anthropogenic climate change. Reducing and ultimately eliminating greenhouse gas emissions to solve this problem will necessitate decarbonizing our entire energy infrastructure. This means that regardless of the diversity of downstream forms of energy and end uses for that energy, all primary energy sources will need to be carbon-free. It is expected that a significant fraction of this carbon-free energy will come from renewable sources such as solar and wind energy. Unfortunately, these sources are inherently intermittent, while our energy infrastructure and economy are predicated on reliable, dispatchable, and 24/7 consistent sources of energy. This creates a considerable timing mismatch between energy supply and demand for a carbon-free future. As a consequence, it will be impossible to switch to all-renewable primary energy sources without large scale (>100 kW), long duration (10–100 h), and inexpensive [levelized cost of storage (LCOS) < $0.05/kWh-cycle] energy storage technologies to bridge the gap between supply and demand.^{1}

Among the many energy storage technology options, thermal energy storage (TES) is very promising as more than 90% of the world's primary energy generation is consumed or wasted as heat.^{2} TES entails storing energy as sensible heat through heating of a suitable material, as latent heat in a phase change material (PCM), or as the heat of a reversible chemical reaction in a thermochemical material (TCM), as shown in Fig. 1. The stored energy can then be supplied directly as process heat to industrial applications and to buildings for thermal comfort^{3} as needed, providing a steady energy output while receiving intermittent energy inputs.^{4,5} The development of TES materials and systems was highlighted as one of the top five grand challenges for decarbonization,^{2} and it is particularly well suited for large-scale and long duration storage: TES technology is not constrained by specific geographic requirements and can be made modular and deployable to most regions. For example, TES does not require terrain with large height differentials like pumped hydro does. Furthermore, heat and non-gaseous phase change are among the safest forms of stored energy. TES does not have a catastrophic failure mode that can suddenly and destructively release all of its stored energy at once, unlike many other forms of energy storage such as high-speed flywheels, electrochemical batteries, gravitational potential energy storage, or compressed air. Finally, the additional capital cost to increase storage capacity of TES can be very low due to the abundance of inexpensive materials such as molten silicon for high temperatures^{6} or polymeric phase change materials for low temperatures.^{7} Additionally, in TES, most atoms comprising the storage material play a direct role in storing energy, so there is very little inactive material adding to the weight and cost.
However, this can also create challenges in modeling the physics of TES systems because all constituent parts of the (frequently messy and complex) material are participatory and, therefore, must be considered along with their full degrees of freedom, rather than being able to restrict analysis to smaller subcomponents that are doing all the work. Improving these TES material modeling capabilities would help provide insight that accelerates material design.

Realizing large scales of integrated TES will require solving technological challenges associated with material design and thermal transport, which, in turn, vary with the application. For instance, industrial process heat for manufacturing requires TES at temperatures ranging from 200 to 1500 °C, solar-thermal energy harvesting typically uses molten salts at temperatures ∼400 °C, while several applications such as residential and commercial building HVAC, water desalination, typical sorbent regeneration for the direct air capture of CO_{2}, thermal management of batteries, and personal thermoregulation all require low temperatures <150 °C. Thermal energy storage materials and associated properties that govern thermal transport need to be tailored to these specific applications, which may include controlling transition temperatures, energy density (i.e., heat capacity or latent heat of fusion), thermal conductivity, nucleation dynamics, and overall enthalpies and entropies of reactions (Fig. 1). Efficient control of these properties first requires an understanding of their fundamental mechanisms and associated governing physics, but this is often still lacking.

While the sensible heating of solids is described with a high degree of confidence by the Debye theory of solids, the physics of phase change, and of the solid-to-liquid transition in particular, remains an incompletely solved problem. One often needs to resort to computational methods such as molecular dynamics (MD), combined with density functional theory, to understand the solid–liquid phase transition.

Phase change materials (PCMs) based on the solid-to-liquid phase transition are among the most promising TES materials for both low and high temperature applications.^{8} Considering this promise, in this Perspective, we describe recent advances in the understanding of the thermodynamic and kinetic properties of PCMs that can help accelerate technology development. Despite their potential, many fundamental and applied questions remain unanswered, such as: (i) how do we model and predict thermodynamic properties such as the latent heat of fusion to design new phase change materials? (ii) how do kinetic processes such as supercooling impact the metastability of PCMs at different length scales? and (iii) how do transport properties such as thermal conductivity impact the thermal power output from such a storage system?

## DESIGN OF THERMAL ENERGY STORAGE MATERIALS AND SYSTEMS

For a first order phase transition in PCMs, solid–liquid equilibria exist when the Gibbs free energy of the liquid phase equals that of the solid phase, $\Delta G_{L \rightarrow S} = 0$, yielding the relation $\Delta H_{fus} = T_m \Delta S_{fus}$, where $\Delta S_{fus}$ is the change in entropy between the solid and the liquid phases. The melting temperature, $T_m$, dictates the range of temperatures over which the PCM can operate effectively, while the enthalpy of phase change (latent heat of fusion, $\Delta H_{fus}$) is a measure of the energy storage density of the PCM, as shown in Fig. 2. Selecting the right material requires knowing two of these three terms; the entropy change is challenging to predict accurately owing to the multiple entropy contributions associated with the melting process (e.g., vibrational, configurational, mixing, intra-molecular dynamics, etc.). As a result, the thermodynamic properties of storage materials are experimentally determined using various thermophysical characterization techniques: $\Delta H_{fus}$ is typically measured using differential scanning calorimetry, and the choice of $T_m$ is based on the application. The full suite of experimental techniques required to sufficiently characterize the equilibrium properties for TES applications is time-consuming and often expensive. Thus, the development of simple analytical models to down-select materials with promising thermophysical properties is needed.
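As a concrete numerical illustration of the relation $\Delta H_{fus} = T_m \Delta S_{fus}$, the minimal sketch below uses handbook values for water/ice; the helper function is ours, not from any PCM library.

```python
# Illustration of dH_fus = T_m * dS_fus using handbook values for ice.
T_M = 273.15            # melting point of ice, K
DS_FUS = 22.0           # entropy of fusion of ice, J/(mol K)
M_MOLAR = 0.018         # molar mass of water, kg/mol

def latent_heat(T_m, dS_fus):
    """Enthalpy of fusion, J/mol, from the equilibrium condition dG_{L->S} = 0."""
    return T_m * dS_fus

dH = latent_heat(T_M, DS_FUS)        # ~6.0 kJ/mol
energy_density = dH / M_MOLAR / 1e3  # ~334 kJ/kg, the familiar latent heat of ice
```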

While latent heat of fusion, heat capacity, density, and other equilibrium thermophysical properties are critical for material selection, it is the non-equilibrium properties that drive the system-level design and determine the overall TES performance. Among the various non-equilibrium properties relevant to phase change materials, thermal conductivity and supercooling are the most important. Thermal conductivity, together with the storage system architecture and boundary conditions, determines the thermal energy charge/discharge rate, or power output. Most high-energy density PCMs have correspondingly low thermal conductivities and—by design—high heat capacities, resulting in exceptionally low thermal diffusivities. For thermal reservoir type applications, moving the heat in and out can, therefore, be particularly challenging while keeping capital costs of heat exchangers low. This trade-off has recently been analyzed using thermal Ragone plots to optimize the design.^{12} Low thermal conductivity can be addressed by techniques such as the addition of nanoparticles and the impregnation of PCM into graphite matrices or other composite architectures. However, approaches to reduce supercooling by using either nucleating or thickening agents have met with limited and unreliable success. If unanticipated supercooling persists in PCM applications, the system may never nucleate the solid state, and the system utilization factor decreases to zero. Conversely, if the TES system is overdesigned, an unnecessary temperature bias may be introduced, which reduces the storage round-trip efficiency.
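To make the thermal-diffusivity penalty concrete, the sketch below compares a paraffin-like PCM with aluminum using order-of-magnitude property values (illustrative round numbers, not measured data):

```python
def thermal_diffusivity(k, rho, cp):
    """alpha = k / (rho * cp), in m^2/s."""
    return k / (rho * cp)

# Representative round numbers; real values vary by material and temperature.
alpha_pcm = thermal_diffusivity(k=0.2, rho=800.0, cp=2000.0)   # paraffin-like PCM
alpha_al = thermal_diffusivity(k=237.0, rho=2700.0, cp=900.0)  # aluminum

# The diffusion time over a 10 cm slab scales as L^2 / alpha:
# ~1 day for the PCM vs ~2 min for aluminum, illustrating the
# charge/discharge-rate bottleneck discussed above.
t_pcm = 0.1**2 / alpha_pcm
t_al = 0.1**2 / alpha_al
```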

In the following sections, we highlight different physics-based models that describe equilibrium thermodynamic properties of the liquid state and their implications for thermal energy storage and identify future directions for research. We then discuss physics and statistics-based frameworks to explain metastable behavior and show that thermal transport strongly affects nonequilibrium TES performance.

### Material-level microscopic description

#### Equilibrium thermodynamic properties

For solid–liquid phase change materials (e.g., ice and paraffin wax) or pumpable sensible storage (e.g., hot water and molten salts), the thermodynamic properties of liquids are paramount in the modeling of these TES systems. Valuable insights into these properties can be gained from simple physical models describing the equilibrium solid and liquid states that are important for thermal storage. These models offer selection rules and help reduce the number of experiments needed for full thermodynamic characterization.

To predict the melting point of a material, the enthalpy and entropy of both the solid and liquid phases must be known. Historically, quantifying the absolute value of the enthalpy and entropy of the liquid phase independent of the solid phase has been challenging. Consequently, simple melting “rules” formulated from experimental observation have been popular. The Lindemann melting criterion^{13–15} is perhaps the most widely used melting rule, and it states that melting occurs when the root-mean-square amplitude of vibration exceeds a threshold fraction of the nearest neighbor distance (originally stated to be 10%). Assuming all atoms vibrate at the Einstein frequency $\nu_E$ at melt, the equipartition theorem can be used to relate the amplitude of vibration to the temperature, yielding $T_m = \frac{4\pi^2 m \nu_E^2 c_l^2 a^2}{k_b}$, where *m* is the atomic mass, $c_l$ is the Lindemann constant, and *a* is the nearest neighbor distance. The Lindemann constant changes with the crystal structure, and the Lindemann melting rule provides only modest agreement with experimentally recorded melting points. There have been many sophisticated attempts to improve upon the Lindemann rule, but to date, there is no universally successful model for melting. Moreover, the exact mechanism responsible for the lattice instability that drives the transition to the liquid state remains a mystery, and no self-consistent solid-state model predicts it.
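The Lindemann estimate is easy to evaluate numerically. The sketch below applies it to copper with an assumed Einstein frequency of ~5 THz and $c_l = 0.05$ (both values are illustrative choices, and the "modest agreement" caveat applies):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def lindemann_tm(m, nu_e, c_l, a):
    """T_m = 4 pi^2 m nu_E^2 c_l^2 a^2 / k_B (classical equipartition)."""
    return 4.0 * math.pi**2 * m * nu_e**2 * c_l**2 * a**2 / KB

m_cu = 63.546 * 1.6605e-27  # atomic mass of Cu, kg
t_m = lindemann_tm(m_cu, nu_e=5.0e12, c_l=0.05, a=2.56e-10)
# ~1.2e3 K vs the measured 1358 K: the right order of magnitude,
# but not quantitative without tuning c_l per material
```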

Although solid-state models do not exhibit explicit solid–liquid equilibria the way the van der Waals equation of state does for the liquid–gas transition, they do describe the solid-state thermodynamic properties reasonably well. Specifically, the Einstein and Debye models provide an excellent balance between accuracy and ease of use. However, a major failure of crystal thermodynamic theory is that it cannot predict melting. This is attributed to its inability to handle anharmonicity near melting, which is believed to cause the instability that drives the discontinuity in the free energy curve. Without a model/equation of state that explicitly exhibits first-order phase transitions, it is often easier to employ two separate mechanical models for the high temperature and low temperature phases and equate the free energies predicted by the two to determine the phase transition. As mentioned, the Debye model and/or its variants are adequate to describe the solid phase, but a robust mechanical model to describe the liquid phase has been elusive. In fact, a general approach to calculate the thermodynamic properties of liquids has been a long-standing problem in condensed matter physics.^{16} On the fundamental side, the Vibration-Transit (VT) model^{17–22} elucidates much of the fundamental statistical mechanics underlying liquid phenomena, but its application requires sophisticated molecular dynamics (MD) simulations with high-fidelity interatomic potentials of the system. Wallace formulated the VT theory by building upon Stillinger and Weber's work^{23–26} on the multi-atomic potential energy surface.
By building upon their concept of inherent structures, Wallace re-casts the configurational partition function into the product of partition functions of independent liquid “structures.”^{20,27} Each “structure” has a distinct contribution to the partition function and, therefore, the system thermodynamics, and the degeneracy of that structure provides the appropriate weighting function. The degeneracy, or the number of random structures per given energy level, is an unknown parameter. Only detailed MD simulations—whereby the system is repeatedly quenched, and the energy levels calculated and “binned”—can determine the number of random structures existing at each energy level. However, Wallace did find that for simple monatomic liquids, the average degeneracy was well-defined, and the standard deviation of that value is relatively small.^{20}

Less fundamental but more easily applied and quantitatively fairly accurate, Eyring developed a purely analytical model^{28–31} that provides a quantitative description of the liquid state. Qualitatively, Eyring assumes that a liquid has a mixture of solid-like and gas-like characteristics such that it supports both conventional solid-like phonon transport and gas-like ballistic transport. The model assumes that any atoms next to a vacancy behave as gas-like and then uses the vacancy concentration along with atom-coordination number to determine the fraction of gas-like atoms in the liquid. Once this fraction is established, Eyring employs what is essentially an interpolation between solid and liquid partition functions, weighted by the “gas-like” fraction; the percentage of gas-like particles is a fitting parameter, and it is phenomenological. Although this model provides an excellent fit for many liquids, it has little predictive power for TES applications because there is no way to know the fraction of gas-like molecules that will appear a priori. Eyring's work has, however, been extended by Henry Frank^{32–34} in his formulation of a free volume theory to describe entropy changes upon isothermal expansions (e.g., phase transitions). Although its focus is mainly on the liquid to vapor transition, for which free volume theory has many merits and has seen much success, it can also be applied to the solid–liquid phase transition. To use it in the condensed state, details of crystal structure and bonding are necessary, along with the density change upon melting. If these are known, the free volume theory can be used to estimate the entropy of fusion, and in addition, it can be used to evaluate Eyring's fraction of “gas-like” molecules, enabling quantitative prediction of the rest of the liquid's thermodynamic functions vs temperature and volume.

Both Eyring's and Wallace's models lean on phonon theory to describe microscopic dynamics. Modern molecular dynamics approaches, such as instantaneous normal mode theory^{35} and the two-phase thermodynamic model,^{36} are consonant with the phonon picture. Instantaneous normal mode theory employs lattice dynamics at short timescales to resolve the eigenfrequencies governing the lattice dynamics. Real frequencies correspond to normal, solid-like modes, while imaginary frequencies signal instability, whereby the lattice will re-arrange. This picture is closely related to Wallace's vibration–transit view of liquid dynamics. The two-phase model hews much more closely to Eyring's microscopic view: a Fourier transform of a particle's velocity autocorrelation function yields the liquid's density of states, which is decomposed into a solid-like part and a gas-like part, just as Eyring did in his analytical model. Both approaches are useful for extracting thermodynamic properties from molecular dynamics simulations but are quite complex to implement.

Lacking a simple model without fitting parameters, scientists and engineers have traditionally looked toward empirical rules to estimate thermodynamic properties in the liquid phase. Empirical rules, such as Richard's rule of melting,^{37} Lindemann's melting criterion, and Trouton's rule of vaporization,^{38} are practical rules of thumb, but they lack the precision to offer design or selection rules for TES. In our recent work,^{39} we proposed a simple model for the entropy of melting of monatomic liquids, which—when combined with the Debye model for the solid phase—can be used to predict the entropy and enthalpy of fusion without fitting parameters. Furthermore, this model—under the appropriate limits—recovers Richard's rule of melting. The simple, physics-based model considers a particle's liquid phase dynamics in a rough potential energy surface. The dynamics of a particle in the liquid state include (i) lattice vibrations, which are solid-like, except they generally exhibit anharmonicity due to large displacements from meta-stable equilibrium; (ii) large-scale diffusion, which is gas-like and describes the hopping motion of the atom from one lattice cage to another as described by Frenkel;^{16} and (iii) small-scale diffusion corresponding to movement within a local lattice cage and without significant or lasting change to neighbor atoms. The lattice vibrations are well-described by the conventional Debye model, and large-scale diffusion can generally be ignored at the melting point for elements with high viscosity upon melting.^{40,41} Due to small-scale diffusion, the center of oscillation of the traditional lattice-like vibrations changes,^{42,43} as shown in Fig. 3.
This changing center of oscillation is a consequence of the complex, time-dependent nature of the atom's multi-atomic potential energy surface, so employing traditional lattice dynamics methods to resolve these effects on particle motion is impractical.^{44} However, molecular dynamics simulations have shown that the changing center of oscillation occurs at a frequency similar to the lattice vibrations and that they are often oscillatory in nature.

A subsequent analysis^{45} of root-mean-square vibrational amplitudes in simple metals, fit to Eyring's liquid model, also concluded empirically that the effective liquid Debye frequency at the melting point is $\omega_{s,T_m}/\sqrt{2}$, i.e., $\omega_l = \omega_s/\sqrt{2}$, confirming that Eq. (2) also works with Eyring's model. Using the effective Debye frequency for the liquid state, the entropy and enthalpy of the liquid at melt can be written in closed form, where *R* is the ideal gas constant, $k_b$ is Boltzmann's constant, and $\hbar$ is the reduced Planck constant. Combining this with the Debye model for the solid phase, the entropy and enthalpy of fusion for a PCM can be calculated.

The entropy as a function of temperature, valid until the boiling point, can also be expressed^{46} in terms of the molar volume *M*, the fluid's bulk modulus *B*, the fluid's volumetric thermal expansion coefficient $\alpha_V$, and the liquid's Frenkel frequency $\omega_F$.

Equation (4) gives an RMSE of 1.64% when compared with the experimental values for the elements whose liquid state Debye frequencies have been calculated^{47}—Ar, Cs, Rb, K, and Li. For elements with unknown liquid state Debye temperatures, the Grüneisen approximation can be used to calculate the liquid state Debye temperature from that of the solid state using $\theta_D(\rho) \approx \theta_D(\rho_o)\left(\frac{\rho}{\rho_o}\right)^{\gamma}$, where $\rho_o$ is the initial density, $\rho$ is the new density, and $\gamma$ is the Grüneisen parameter, which can be assumed to be 2 if it is not tabulated. With this approximation, Eq. (4) gives an RMSE of 6.35% for 16 elements and less than 10% error compared with experimentally measured data across all temperatures for nine different elements, as shown in Fig. 4. Using Eq. (5) and the ideal gas law for the vapor state, this model also provides excellent corrections to the well-known empirical Trouton's rule.
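The Grüneisen rescaling step can be sketched in a few lines (the densities below are illustrative placeholders, not data for any particular element):

```python
def debye_temp_at_density(theta_0, rho_0, rho, gamma=2.0):
    """Grueneisen scaling: theta_D(rho) = theta_D(rho_0) * (rho/rho_0)**gamma."""
    return theta_0 * (rho / rho_0) ** gamma

# A solid with theta_D = 100 K that expands by 5% upon melting:
theta_liq = debye_temp_at_density(theta_0=100.0, rho_0=1000.0, rho=950.0)
# -> 90.25 K with the default gamma = 2
```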

The above-mentioned equations apply to monatomic systems to isolate the thermodynamic contributions from inter-molecular interactions. However, TES applications usually involve multi-atomic materials; here, intra-molecular interactions in the liquid state are well-described using Einstein oscillators for vibrational modes and the Debye density of states for librational modes, which are the hindered rotational modes along the molecules’ three principal rotational axes. Intra-molecular vibrations are straightforward when the characteristic frequency, determined by the intra-molecular potential, is known. Librational modes are more nuanced—in addition to the intra-molecular potential, the nature of the librational modes depends on the molecular geometry and the lattice coordination in the liquid state at small time scales. Thus, making predictions without a detailed knowledge of the molecule and lattice structure becomes quite complex. Future work must be done to simplify the inclusion of intra-molecular contributions in the calculation of enthalpy/entropy in the liquid state to predict the enthalpy of fusion of TES materials.
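For the intra-molecular vibrational contribution, each Einstein oscillator adds a closed-form term to the heat capacity; a minimal sketch (the mode frequencies below are illustrative):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
KB = 1.380649e-23       # Boltzmann constant, J/K

def einstein_cv(omega, T):
    """Heat capacity contribution (J/K) of a single Einstein oscillator
    of angular frequency omega at temperature T."""
    x = HBAR * omega / (KB * T)
    return KB * x**2 * math.exp(x) / (math.exp(x) - 1.0) ** 2

# A stiff intra-molecular mode (~1000 cm^-1, omega ~ 1.9e14 rad/s) is largely
# frozen out at 300 K, while a soft mode (~100 cm^-1) is nearly classical.
cv_stiff = einstein_cv(1.9e14, 300.0)
cv_soft = einstein_cv(1.9e13, 300.0)
```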

#### Thermal conductivity

In 1964, Mclaughlin^{48,49} reviewed 18 physics-based models that describe liquid thermal conductivity, yet none accurately capture the temperature and pressure dependent behavior over a wide range of liquids governed by different interatomic potentials. Since his review, molecular dynamics simulations (e.g., modal contributions via Green–Kubo modal analysis^{50}) have dominated the field, and a few new analytical models have been proposed. In the solid phase, phonon gas models have been successfully applied to various types of solids.^{51} Most phonon gas models begin with the Boltzmann transport equation or kinetic theory and perform a modal analysis of the relevant energy carriers. In the case of non-metallic solids, the energy carriers are predominantly phonons, or quantized vibrations propagating through the lattice. In liquids, there is some debate over the appropriate quantization. Many have championed the phonon picture at short timescales,^{52} while recent work^{53} has suggested local atomic re-arrangements, referred to as anankeons,^{54} as the more fundamental quantization in liquids. It is not yet clear how anankeons can be incorporated into traditional statistical mechanics schemes for thermodynamic predictions, which is needed before they can be evaluated as energy carriers for analyzing thermal conductivity. In the interim, insight can perhaps be gleaned from the work of Allen and Feldman^{55} on the concepts of locons, propagons, and diffusons, which are responsible for thermal transport in amorphous solids. More work is needed in this direction to see if it will prove fruitful, and until then, we believe the phonon picture has the most merit. For each polarization, the typical modal analysis takes the standard kinetic theory form,^{51,56} $k = \frac{1}{3}\int C_v(\omega, T)\, v_g(\omega)\, \Lambda(\omega)\, D(\omega)\, d\omega$, where $C_v$ is the modal heat capacity, $v_g$ is the group velocity, $\Lambda$ is the mean free path, and $D$ is the density of states.

Eyring accounted for the decrease in constant-volume heat capacity with his solid–gas interpolation of the liquid state.^{28,45} As the liquid increases in temperature, the fraction of gas-like molecules increases. The gas-like molecules are immune to the intermolecular potential, so when a molecule converts from solid-like to gas-like via vacancy formation, it loses its potential energy, i.e., $\frac{1}{2}k_b T$ per degree of freedom via the equipartition theorem. At the melting point, when vacancy formation is low, Eyring's equation predicts the Dulong–Petit heat capacity, $3k_b$ per atom. At the gas transition, when vacancies dominate, it predicts that of an ideal gas, $\frac{3}{2}k_b$. At intermediate temperatures, the vacancy concentration determines the fraction of gas-like molecules, which modulates the constant-volume heat capacity between the solid and gas limits.
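Eyring's interpolation can be captured in a few lines; here, the gas-like fraction is left as an input, standing in for the vacancy-concentration model described above:

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def eyring_cv_per_atom(gas_fraction):
    """Interpolate c_v between the Dulong-Petit solid limit (3 k_B) and the
    ideal-gas limit (3/2 k_B), weighted by the gas-like fraction."""
    return (1.0 - gas_fraction) * 3.0 * KB + gas_fraction * 1.5 * KB

# Near melt (few vacancies): c_v ~ 3 k_B; near the gas transition: -> 1.5 k_B.
cv_melt = eyring_cv_per_atom(0.05)
cv_hot = eyring_cv_per_atom(0.9)
```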

Wallace also accounted for the decrease in constant-volume heat capacity with his vibration-transit theory.^{40} By re-casting the configurational partition function into the partition function of independent structures, he argued that the integral of the partition function should no longer be from $\u2212\u221e$ to $\u221e$, as is customary. Instead, the structures have distinct boundaries, after which they begin to impinge on each other. As the temperature increases and atoms vibrate with larger amplitudes, the boundaries get closer and closer so that the atom's configurational phase space becomes increasingly more restricted. Thus, in his integral of the partition function for each structure, he bounds the configurational space and, thus, the integrand. That bound gets smaller with temperature, and the heat capacity concomitantly decreases.

More recently, Trachenko and colleagues^{46,56,57} have re-championed Frenkel's picture of liquid dynamics, in which transverse phonons, previously believed to disappear in the liquid state, persist so long as their frequency is above the “Frenkel frequency,” the inverse of the Maxwell relaxation time of a molecule subjected to a simple spring and damper in parallel. The spring is given by the shear modulus of the liquid, and the damper by the viscosity. Using this postulate, Trachenko *et al.* re-counted the number of phonons existing at each temperature. If each phonon contributes $k_b T$ of energy to the system, then the total energy will be $N k_b T$, where *N* is the number of phonons. By assuming a Debye density of states, the number of transverse phonons with frequencies below the Frenkel frequency can be evaluated and subtracted from $N k_b T$; the kinetic energy of those atoms is then added back, and the result is the total energy of the system, with its temperature derivative giving the constant volume heat capacity. The result of this model agrees fairly well with experimental data for many simple liquids.
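In its simplest harmonic form, this phonon-counting argument yields a one-line heat capacity. The sketch below treats $\omega_F$ as locally constant in temperature and ignores anharmonicity, so it is only a caricature of the full model:

```python
KB = 1.380649e-23  # Boltzmann constant, J/K

def liquid_cv_per_atom(omega_f, omega_d):
    """Phonon-counting estimate: transverse modes below the Frenkel frequency
    omega_f retain only their kinetic energy, so with a Debye density of
    states E = N kB T (3 - (omega_f/omega_d)**3) and, treating omega_f as
    locally constant in T, c_v ~ kB (3 - (omega_f/omega_d)**3)."""
    x = min(omega_f / omega_d, 1.0)
    return KB * (3.0 - x**3)

# At melt omega_f << omega_d, so c_v ~ 3 kB (solid-like); as the liquid heats
# and viscosity drops, omega_f -> omega_d and c_v falls toward 2 kB.
```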

It is difficult to reconcile Trachenko's and Wallace's models under a single physical picture. Trachenko *et al.* relied on a phonon gas model, in which the thermodynamic functions of each phonon are assumed independent of one another, and used traditional lattice-dynamical approaches, including the Frenkel modification, to prescribe the appropriate thermodynamic weighting to each phonon in the gas comprising the liquid. Wallace also assumes a phonon gas, but instead of employing a mechanical modification to the dynamics, as Trachenko did with the vanishing transverse phonons, he prescribes a limit on the available amplitude (and, therefore, volume) occupied by each phonon, imposing a restriction on the system's configurational phase space. It is unclear how the configurational restriction relates to the vanishing of transverse phonons; however, both certainly predict a constant-volume heat capacity that decreases with temperature in the liquid state. We note that for PCMs, Trachenko *et al.*'s model seems more promising because it can be directly evaluated from material properties, whereas Wallace's model requires experimental fitting parameters.

A recent extension^{58} employed Trachenko's model for $C_v(\omega, T)$, which re-formats the modal analysis into a sum of integrals. The Debye model was used for the group velocity, which equates it to the speed of sound. Finally, instead of using the Cahill–Pohl model for the mean free path, the authors posit that the dominant energy carriers are short-wavelength, high-energy modes that scatter frequently, so the mean free path can be taken as the intermolecular distance. This is perhaps their biggest assumption, and more experimental evidence is needed to resolve the mean free path of high energy phonons in liquids. Nevertheless, the predictions match experimental data fairly well, so the assumption is likely valid for the class of liquids against which they were compared.
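With the mean free path pinned at the intermolecular distance, the estimate reduces to the kinetic theory expression. A sketch for water at room temperature (the inputs are rough literature numbers) shows how close this simple picture gets:

```python
def kinetic_theory_k(c_vol, v_sound, mfp):
    """Phonon-gas estimate k = (1/3) * C * v * Lambda, with the mean free
    path Lambda taken as the intermolecular distance."""
    return c_vol * v_sound * mfp / 3.0

# Water near room temperature (rough values): volumetric heat capacity
# ~4.2e6 J/(m^3 K), speed of sound ~1480 m/s, intermolecular distance ~3 A.
k_water = kinetic_theory_k(4.2e6, 1480.0, 3.0e-10)
# ~0.6 W/(m K), close to the measured value of ~0.6 W/(m K)
```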

### System-level macroscopic description

#### Impact of non-equilibrium behavior

As discussed earlier, among the various non-equilibrium properties relevant to phase change materials, thermal conductivity and supercooling are the most important^{59–61} as these dominate thermal transport. Figure 5 depicts the cycling of erythritol,^{62} a commonly used sugar alcohol PCM, in a DSC. The downward sloping curves are endothermic and indicate melting, while the upward sloping curves are exothermic and indicate crystallization. The onset of nucleation is the first deviation from the baseline upon cooling, which is scattered around 20 °C. The melting point is about 120 °C, indicating that erythritol exhibits over 100 °C of supercooling. From these experiments alone, one might rule it out as a candidate for TES systems, but it is important to quantify the volumetric and rate effects on the supercooling behavior before dismissing its use in larger systems. Traditionally, supercooling is described using the well-known classical nucleation theory.^{63} While it provides excellent physical insight into the nucleation process, the inputs to such models (e.g., surface energies, surface shape, and free energy barriers) are difficult to know *a priori*. For TES applications, engineers are not concerned with the surface energies or nucleus shapes; instead, they seek predictive power. To properly model and design a TES system, then, it is imperative to know the temperature at which the PCM is expected to nucleate as this defines the operating range of the system. Unfortunately, predicting the nucleation temperature of a PCM in an arbitrary system has been very difficult because supercooling changes with geometry, volume, material, microstructure, purity, discharge rate, etc.
Thus, predicting the performance of any large-scale practical system based on lab data from mg-scale samples is a very difficult task.^{64} As a result, literature reports on supercooling temperature from lab-scale experiments are meaningless beyond that specific experimental system, size, material, and environment.

A statistical framework was recently developed^{65} that bridges lab-scale characterization with large-scale performance. The analysis can be used in conjunction with existing numerical methods to accurately incorporate supercooling into phase change models, thus combining material modeling with system modeling. The framework describes nucleation as a non-homogeneous Poisson process because the rate parameter, i.e., the nucleation rate, may not be constant over time (e.g., under non-isothermal conditions). Note that this use of “non-homogeneous” is different from the nucleation itself being homogeneous or heterogeneous. The properties of the distribution of supercooling temperatures can be described by the survivor function.

*V*is the material volume, $\beta $ is the cooling rate, $ T m$ is the equilibrium melting point temperature, and $ J V$ is the volume-normalized nucleation rate. We note that Eq. (6) is for applications with volume-dominated nucleation, but in principle, the modeling results obtained could be easily translated to a system with surface area-dominated nucleation physics $[ A J A ( t ) \u226b V J V ( t )]$ by suitable exchanges: $ J V(t)\u2194 J A(t)$ and $V\u2194A$. To compute the volume-specific nucleation rate, $ J V(T)$, from experimental data, we invert the survivor function, $\chi (T)$,

^{66}a fitting function to the normalized nucleation rate,

*n*or some other description of $ J V(T)$, when characterizing new PCMs. As will be shown, once $\gamma $ and

*n*are known, the supercooling behavior of the material in an arbitrary thermal and geometric system can be predicted.
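As a minimal illustration of this inversion (a Python sketch; the sample data, volume, and cooling rate are hypothetical, and a simple finite-difference derivative stands in for a proper fit):

```python
import numpy as np

def empirical_survivor(nucleation_temps):
    """Empirical survivor function chi(T): fraction of cooling runs
    still liquid (not yet nucleated) when the sample reaches T."""
    T = np.sort(np.asarray(nucleation_temps))[::-1]  # descending
    # After passing the k-th highest nucleation temperature,
    # k of the N runs have nucleated.
    chi = 1.0 - np.arange(1, len(T) + 1) / len(T)
    return T, chi

def nucleation_rate(T, chi, V, beta):
    """Invert chi(T): J_V(T) = (beta / V) * d ln(chi) / dT."""
    mask = chi > 0                 # avoid log(0) at the last point
    lnchi = np.log(chi[mask])
    return T[mask], (beta / V) * np.gradient(lnchi, T[mask])

# Hypothetical data: nucleation temperatures (deg C) recorded in 8
# identical cooling runs of a 10 uL sample cooled at 5 K/min.
temps = [41.2, 38.5, 44.0, 39.9, 42.7, 37.1, 40.3, 43.1]
T, chi = empirical_survivor(temps)
T_j, J_V = nucleation_rate(T, chi, V=10e-9, beta=5 / 60)  # m^3, K/s
```

With many repeated runs, the noisy finite-difference estimate of $J_V(T)$ would then be replaced by fitting the chosen $(\gamma, n)$ parameterization.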

Thus, if $\gamma$ and $n$ are determined experimentally by running simple cooling experiments and recording a series of temperatures at which nucleation occurred, the nucleation probability can then be coupled with thermal transport to accurately predict when supercooling will occur in the system. Analytical solutions for $T(x,y,z,t)$ are available for only the simplest geometries and boundary conditions, so this procedure must generally be carried out numerically. Equations (9)–(11) are naturally discretized in space (index *i*) and time and can be easily incorporated into existing numerical schemes for PCMs. In general, $T(x,y,z)$ can be determined at each time step by solving the heat conduction equation, and the integral of the nucleation rate as a function of $T(x,y,z)$ over the volume in Eq. (10) can be calculated to determine the effective global nucleation rate at time *t*. Stepping through time, $\lambda_{\mathrm{eff}}(t)$ can be calculated, and from $\lambda_{\mathrm{eff}}(t)$, the CDF, PDF, and then the average time to nucleation can be determined.
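A minimal numerical sketch of this procedure is shown below (Python; the slab geometry, property values, and the power-law form assumed for $J_V(T)=\gamma(T_m-T)^n$ are illustrative placeholders, not values from this work):

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not from the text)
T_m = 120.0                      # equilibrium melting point (deg C)
gamma, n = 1e-2, 3.0             # assumed power-law fit parameters
alpha = 1e-7                     # thermal diffusivity (m^2/s)
L, N = 0.01, 50                  # slab thickness (m), spatial nodes
dx = L / N
dV = dx * 1e-4                   # node volume, 1 cm^2 cross section (m^3)
dt = 0.4 * dx**2 / alpha         # stable explicit time step (s)

def J_V(T):
    """Assumed power-law fit to the volumetric nucleation rate."""
    return gamma * np.maximum(T_m - T, 0.0)**n

T = np.full(N, T_m)              # PCM starts at the melting point
T_wall = 20.0                    # cold boundary temperature (deg C)
Lam = 0.0                        # cumulative hazard, integral of lambda_eff
t, mean_t, S_prev = 0.0, 0.0, 1.0
for _ in range(20000):
    # Explicit conduction update; x = 0 is cooled, x = L is insulated.
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    Tn[0], Tn[-1] = T_wall, Tn[-2]
    T, t = Tn, t + dt
    lam_eff = np.sum(J_V(T)) * dV   # effective global nucleation rate
    Lam += lam_eff * dt
    S = np.exp(-Lam)                # survivor function, 1 - CDF
    mean_t += t * (S_prev - S)      # accumulate E[t] from the PDF
    S_prev = S
```

Here $\lambda_{\mathrm{eff}}(t)$ is accumulated into a cumulative hazard, from which the survivor function (and hence the CDF, PDF, and mean time to nucleation) follows directly.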

Approximate closed-form expressions also exist^{66–68} for the temperature at which nucleation will occur.^{12}

When thermal gradients develop, the nucleation probability at each material element becomes different. In the middle of the material, where the temperature is the highest (lowest supercooling), the nucleation probability is far lower than at the system boundaries, where supercooling is the highest. Thermal transport, therefore, strongly influences the nucleation behavior of a system. Larger systems imply more nucleation sites, but if the system operates in a transport-limited regime, those nucleation sites may never become active, stifling nucleation. Conversely, if perfect thermal transport exists (i.e., the lumped or uniform approximation), the temperature at the boundaries is pulled upward relative to a non-lumped system, and although more of the nucleation sites become active, the most active nucleation sites at the boundaries become less potent.

Figure 6 compares Eqs. (11)–(13) for neopentyl glycol at constant volume with a varied aspect ratio and, therefore, varied thermal transport. The equations predict experimental results as a function of aspect ratio quite well. The plot also highlights the significant effect thermal transport has on the supercooling behavior. As shown, the lumped/uniform approximation with a constant cooling rate [Eq. (14)] gives average nucleation times that are ∼1.5× lower than those given by the more detailed treatment of the non-uniform temperature field $[T(x,y,z,t)]$ with Eqs. (10)–(12). In addition, we show the predictions for a uniform, convective cooling process. This ignores the thermal transport through the material, but it matches the boundary conditions to what is seen in the experiment, which is convective cooling. In real applications, convective cooling is more appropriate than the constant cooling rate used in DSC. This prediction is shown as the gray dashed line in Fig. 6 and overpredicts the experimentally observed nucleation time by about a factor of 2 for each aspect ratio. It overpredicts because the uniform assumption ignores the large temperature gradients that generally arise near the material surface. As mentioned, the large temperature gradients lead to a much lower temperature near the surface, which catalyzes nucleation. Without including a detailed analysis of the temperature profile as a function of time, this catalyzing effect on nucleation is missed, leading to a large overprediction. Thus, thermal transport must be included in any detailed analysis of PCM supercooling behavior.

#### Energy transfer rate

The rate of energy transfer in a PCM is classically described by the Stefan problem,^{69} in which the position of the solid–liquid interface is tracked as $X(t)$. Because the interface is defined by the region of solid–liquid equilibria, it is always at the equilibrium melting point temperature so that $T(X(t),t) = T_m$. Semi-analytical solutions exist for the 1D problem,^{69} but, in general, numerical methods are needed in higher dimensions and to include more realistic temperature-dependent thermophysical properties. Finite difference schemes employing the effective heat capacity method are common for this type of analysis. Heat flux at the boundary is extremely important in TES applications because it determines how quickly the system can be charged and discharged, along with the effective thermal capacity for a given discharge rate (e.g., the thermal Ragone plot^{12}). By solving the Stefan problem, or some numerical variant, the heat flux as a function of time at the heat exchange boundary can be calculated. In general, the heat flux at large times decreases with increasing heat capacity and latent heat of fusion and increases with thermal conductivity; that is, $q \propto (1/\Delta H,\ 1/C_p,\ k)$. Therefore, high power output thermal storage systems may need to sacrifice energy density and vice versa.
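The effective heat capacity method mentioned above can be sketched as follows (Python; all property values and the geometry are placeholders): the latent heat is smeared over a narrow "mushy" temperature interval around $T_m$, and a standard explicit conduction update then yields the boundary heat flux as a function of time.

```python
import numpy as np

# Placeholder properties for a generic PCM (not values from this work)
T_m, dT_mushy = 120.0, 1.0        # melting point (deg C), mushy half-width (K)
rho, k, c_p, dH = 1300.0, 0.7, 2000.0, 3.3e5   # SI units
L, N = 0.02, 100                  # domain length (m), spatial nodes
dx = L / N

def c_eff(T):
    """Effective heat capacity: latent heat smeared over 2*dT_mushy."""
    c = np.full_like(T, c_p)
    mushy = np.abs(T - T_m) < dT_mushy
    c[mushy] += dH / (2 * dT_mushy)
    return c

T = np.full(N, T_m + 5.0)         # slightly superheated liquid
T_wall = 20.0                     # cold collection boundary at x = 0
dt = 0.2 * rho * c_p * dx**2 / k  # conservative explicit time step
flux = []
for _ in range(5000):
    q = k * (T[1] - T_wall) / dx  # heat flux extracted at the boundary
    flux.append(q)
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * k * lap[1:-1] / (rho * c_eff(T[1:-1]))
    T[0] = T_wall
    T[-1] = T[-2]                 # insulated far boundary
```

As the solidification front recedes from the cooled wall, the extracted flux decays with time, illustrating the transport-limited discharge behavior discussed above.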

At large times, the flux is especially dependent on the thermal conductivity and heat capacity of the liquid. To extract the heat from the phase change front, it must be propagated through the liquid phase to the boundary at which the heat is collected. As the PCM melts and the phase change front recedes further into the material, more and more heat must be transported through longer distances of the liquid phase. Thus, if the thermal diffusivity of the liquid is low (i.e., low thermal conductivity and high heat capacity), extracting heat becomes increasingly difficult with time, and a larger bias temperature must be applied at the boundary, ultimately reducing the effective energy density. The thermal conductivity of the liquid is, therefore, essential in predicting the power output of thermal energy storage systems; as discussed in the previous section, a fundamental understanding of liquid-phase thermal conductivity is central to understanding the rate of thermal energy transfer in TES.

## SUMMARY AND OUTLOOK

There is much work to be done in describing the physics of the liquid state. No model can thoroughly predict the solid–liquid transition using a single equation of state, as the van der Waals equation does for liquid–gas transitions. Without such a model, an accurate prediction of the melting point without fitting parameters has yet to be achieved, and such a model would significantly improve our ability to screen materials for thermal energy storage applications. In addition, no model exists that accurately predicts the temperature-dependent thermal conductivity of diverse liquids, and without such a model, predicting the power output and charging time of PCM-based thermal energy storage systems will require experimental data that are time-consuming to acquire. It is the authors’ opinion that the microscopic/molecular picture is best suited to inform equilibrium thermodynamic modeling, whereby the traditional partition function formulation of statistical mechanics and its many results can be employed. For transport properties, such as thermal conductivity, the microscopic picture quickly grows complex, and traditional lattice dynamics approaches are confounded by strongly anharmonic potentials with no fixed point for a Taylor expansion. It seems more fruitful, therefore, to look toward kinetic theory approaches, whereby the existence of a fundamental energy carrier is assumed and results are derived from spectral/modal considerations of this carrier. In non-metallic solids, this carrier is the phonon. In liquids, there is strong evidence that high energy, short-range phonons exist in the longitudinal direction and, below certain timescales, in the transverse direction as well.
It will be interesting to see if new, more fundamental quasi-particles can be discerned that encompass both the phonon and the transverse-shear relaxations, leading to local atomic re-arrangement, and whether they can be used to describe thermal transport more thoroughly. The more satisfactorily these questions can be answered, the more predictive power we gain for screening and designing thermal energy storage.

**ACKNOWLEDGMENTS**

This work was supported by the Energy Efficiency and Renewable Energy, Building Technologies Program of the U.S. Department of Energy under Contract No. DEAC02-05CH11231.

## DATA AVAILABILITY

The data that support the findings of this study are available within this article.

## REFERENCES

*Energy Landscapes, Inherent Structures, and Condensed Matter Phenomena*

*Statistical Mechanics and Dynamics*

*Chemistry: Principles and Practice*

*Nanoscale Energy Transport and Conversion: A Parallel Treatment of Electrons, Molecules, Phonons, and Photons*

*Nucleation Theory*