Functional soft materials, comprising colloidal and molecular building blocks that self-organize into complex structures as a result of their tunable interactions, enable a wide array of technological applications. Inverse methods provide a systematic means for navigating their inherently high-dimensional design spaces to create materials with targeted properties. While multiple physically motivated inverse strategies have been successfully implemented in silico, their translation to guiding experimental materials discovery has thus far been limited to a handful of proof-of-concept studies. In this perspective, we discuss recent advances in inverse methods for design of soft materials that address two challenges: (1) methodological limitations that prevent such approaches from satisfying design constraints and (2) computational challenges that limit the size and complexity of systems that can be addressed. Strategies that leverage machine learning have proven particularly effective, including methods to discover order parameters that characterize complex structural motifs and schemes to efficiently compute macroscopic properties from the underlying structure. We also highlight promising opportunities to improve the experimental realizability of materials designed computationally, including discovery of materials with functionality at multiple thermodynamic states, design of externally directed assembly protocols that are simple to implement in experiments, and strategies to improve the accuracy and computational efficiency of experimentally relevant models.

Soft materials with tailored properties have found application in a variety of technologies including waveguides in photonic circuits,1 collectors for energy harvesting devices,2 membranes for energy storage cells,3,4 and tunable-rheology fluids in brake lines, artificial joints, and vibrational dampeners.5 The use-inspired behaviors of these materials derive from the physicochemical properties of their constituent components as well as their internal spatial organization (i.e., structure). Because fabrication methods that enable top-down control of structure at the nanoscale can be prohibitively expensive and slow for industrial-scale manufacturing processes,6 bottom-up strategies based on self-assembling materials have been explored as promising alternatives.7 Colloidal nanoparticles, polymers, and proteins can serve as powerful material building blocks for self-assembly because their mutual interactions, which help determine the favored equilibrium state of the system, can be systematically varied through, e.g., their size, shape, charge, composition/sequence, and surface functionalization,8 providing a rich design space. A key challenge is to determine which building blocks reliably self-assemble a material with a targeted structure or desired macroscopic properties.

Forward strategies for discovering new self-assembling materials are commonly adopted. In such approaches, an initial set of material building blocks is synthesized and protocols are chosen to promote their self-assembly in an experiment or a computer simulation. The structure and properties of the resulting material are subsequently characterized. These steps are then repeated (typically many times) with different choices for the building blocks or protocols to screen for materials with superior attributes. To make materials discovery more systematic and amenable to meeting specified design constraints, it can be helpful to instead formulate this process as an inverse problem (Fig. 1). For example, one can define a figure of merit (FOM) based on a desired structure or macroscopic property and then apply methods of constrained optimization to help navigate the multidimensional design space and determine which available building blocks, interactions, or protocols are most suitable for realizing a material.

FIG. 1.

Forward and inverse approaches in discovery and design of soft materials. In an example forward approach, materials with new properties may be discovered by repeatedly carrying out the following sequence of steps. Chemical synthesis (a) is used to create material building blocks with effective, coarse-grained interactions (b) that drive their assembly into structures (c) that impart emergent properties on the macroscopic scale (d). Inverse methods work backwards to systematically discover which material components will spontaneously form targeted structures or materials with desired macroscopic properties. (For each panel from left to right and top down) (a) Adapted with permission from Agrawal et al., Chem. Rev. 118, 3121 (2018). Copyright 2018 American Chemical Society;9 (b) Adapted with permission from Loget et al., Adv. Mater. 24, 5111 (2012). Copyright 2012 Wiley;10 Adapted with permission from Y. Sun and Y. Xia, Science 298, 2176 (2002). Copyright 2002 AAAS;11 Adapted with permission from Kalsin et al., Science 312, 420 (2006). Copyright 2006 AAAS;12 (c) Adapted with permission from Stoykovich et al., ACS Nano 1, 168 (2007). Copyright 2007 American Chemical Society;13 Adapted with permission from Jiang et al., Chem. Mater. 11, 2132 (1999). Copyright 1999 American Chemical Society;14 Adapted with permission from W. F. Reinhart and A. Z. Panagiotopoulos, J. Chem. Phys. 145, 094505 (2016). Copyright 2016 AIP Publishing LLC;15 (d) Used with permission from J. Ge, Y. Hu, and Y. Yin, Angew. Chem., Int. Ed. 46, 7428 (2007). Copyright 2007 Wiley;16 Adapted with permission from Cheng et al., Langmuir 35, 9464 (2019). Copyright 2019 American Chemical Society;17 Adapted with permission from Hughes et al., ACS Photonics 5, 4781 (2018). Copyright 2018 American Chemical Society.

Progress on statistical mechanical approaches to inverse problems for designing soft matter has been chronicled in recent reviews and perspective articles. Topics covered include the design of colloidal interactions to stabilize self-assembled target structures19 [Fig. 1(c) → Fig. 1(b)] and the discovery of structures optimal for realizing desired macroscopic properties across a range of soft materials20 [Fig. 1(d) → Fig. 1(c)]. Inverse methods have proven powerful for designing granular materials,21–23 block copolymer assemblies,22–24 and bio-inspired materials.23 Recent reviews25,26 have highlighted inverse techniques that leverage machine learning (ML) to effectively process the high-dimensional data obtained from computer simulations of materials to analyze and design their novel structural and dynamic properties. Inverse methods have also been widely applied in related fields including the design of molecules and chemical reactions.27–30

In this perspective, we explore recent advances in the use of inverse methods for computational soft-material design. We split the discussion into methods related to structure design in Sec. II and macroscopic property design in Sec. III. For structure design, a major challenge is discovering a FOM that can (1) discriminate between the target structure and its competitors and (2) encourage spontaneous assembly of the target. For property design, the FOM is often known in advance and related to the property of interest. The challenge is to find an efficient way to compute and optimize it. The latter can be carried out either directly by varying the possible building block interactions [Fig. 1(d) → Fig. 1(b)] or in two stages by first discovering the optimal structure [Fig. 1(d) → Fig. 1(c)] and then using that information to determine optimal interactions [Fig. 1(c) → Fig. 1(b)]. Within each section, we compare inverse methods recently demonstrated to be successful for addressing these problems. Promising methods that utilize ML algorithms have been proposed for both structure and property design strategies, and we discuss how they may be effectively incorporated into inverse schemes. Despite the successes of in silico structure and property design, inverse techniques are not routinely used to design materials in experiments,31–41 and improving the experimental realizability of computational design remains an outstanding challenge. In Sec. IV, we outline some promising future areas for which inverse strategies may be particularly effective and useful for directing experimental materials design.

To design interactions that promote self-assembly into a specific structure [Fig. 1(c) → Fig. 1(b)], a target ensemble comprising the configuration data of the building blocks in the desired phase must be considered. Ideally, a single FOM can be constructed, which is descriptive of the material’s high-dimensional configurations and can be used to favor the target structure over those of competing phases. FOMs can include thermodynamic quantities, statistical distances from information theory, and structural order parameters. In this section, we discuss strategies that have used these types of FOMs to successfully design interactions for self-assembly of model materials into various target structures. Some of these methods are depicted schematically in Fig. 2.

FIG. 2.

Schematics illustrating the steps involved in several inverse methods along with snapshots of model materials designed and assembled using these approaches. (a) Machine learning is used to discover structural order parameters ψi from unbiased molecular dynamics (MD) or Monte Carlo (MC) simulations. The free energy landscape is generated in the low-dimensional order parameter (OP) space using biased sampling methods, and the difference in free energy between the target and competitors is maximized. This technique has been used to design patchy particles that assemble clusters,42 which, in turn, assemble into open crystals.43 Adapted with permission from A. W. Long and A. L. Ferguson, Mol. Syst. Des. Eng. 3, 49 (2018). Copyright 2018 The Royal Society of Chemistry; Adapted with permission from Y. Ma and A. L. Ferguson, Soft Matter 15, 8808 (2019). Copyright 2019 The Royal Society of Chemistry. (b) Alchemical Monte Carlo (AMC) simulations find a particle shape that minimizes the alchemical free energy of a target lattice to favor its spontaneous assembly. Symmetrizing the shape can improve the target’s thermodynamic stability. This method has been used to assemble a rich variety of crystal structures from hard colloidal particles.44,45 Modified from Ref. 45. Copyright The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. Distributed under a Creative Commons Attribution NonCommercial License 4.0 (CC BY-NC). (c) The radial distribution function g(r) from MD or MC simulations (blue) is compared to that for a target structure (red) to minimize the relative entropy between the two ensembles. This method has been used to design isotropic pair potentials for assembling phases with complex morphologies.46,47 Adapted with permission from Jadrich et al., J. Chem. Phys. 146, 184103 (2017). Copyright 2017 AIP Publishing LLC; Adapted with permission from Lindquist et al., J. Phys. Chem. B 122, 5547 (2018). Copyright 2018 American Chemical Society. (d) Forward simulations generate an ensemble of data that is used to skew the probability distribution toward configurations that contribute more toward a targeted structure or property than average. This approach has been used to design block copolymers for templated self-assembly, folding polymers, and time-dependent processing conditions to shuttle a particle across an energy landscape. Adapted from Ref. 48.

One of the pioneering approaches in computational design of self-assembly was to determine interactions that maximize the potential energy difference between a target structure and its competitors while ensuring mechanical stability of the target, effectively sculpting the ground-state potential energy landscape.19,49–54 In this strategy, the target structure and a pool of possible competitors are selected and then the optimization is performed over a set of design parameters λ characterizing the interaction potential between assembling components. While there are few inherent constraints on the functional form of the interactions, most research to date using this approach has focused on isotropic, pairwise potentials. A forward calculation of the ground-state phase diagram of the model with the optimized interactions can reveal if the list of the competitors considered needs to be expanded and further optimizations performed.

Using this method, interactions that stabilize several two-dimensional (e.g., honeycomb and square49,50,52,53) and three-dimensional (e.g., diamond51,54) crystalline phases have been discovered. Modifications to this approach have enabled the optimization of interactions to ensure target stability over a wide range of particle concentrations.55–59 Finite-temperature effects can also be treated approximately by minimizing the Lindemann criterion quantifying fluctuations from the target structure.19,49–51 While these methods are straightforward to implement for the design of interactions that stabilize crystalline targets, it is not clear how to extend them to target specific types of local structuring in disordered states of matter. Ensuring the kinetic accessibility of the target structure via self-assembly (e.g., from a disordered fluid) is also an outstanding challenge for ground-state-based strategies.

Long and Ferguson recently developed a free-energy “landscape engineering” method [Fig. 2(a)] that goes beyond potential-energy minimization. In this approach, the free energy of a desired structural motif is directly minimized relative to other possible emergent structures in a low-dimensional space of collective coordinates. Importantly, the collective coordinates or “order parameters” (OPs) are machine-learned from the high-dimensional space of raw particle coordinates to maximize information retention. Diffusion maps42,60–62 and autoencoders63–66 have been shown to be particularly useful for this reduction, but, in principle, other ML techniques or physically informed OPs (Sec. II C) could be used. The free-energy landscape is computed in the low-dimensional OP space using enhanced sampling techniques.42,62–64 The free-energy differences between the target and its competitors are extracted from the landscape and used to update the design parameters in an iterative loop, depicted in Fig. 2(a). In contrast to potential-energy minimization methods, landscape engineering naturally incorporates temperature effects and automatically enumerates competitors from the free-energy landscape. Landscape engineering has been used to construct patchy colloids that self-assemble into targeted clusters42 that, in turn, assemble into open crystals,43 as seen in the snapshots in the right-hand side of Fig. 2(a).
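To make the landscape-based FOM concrete, the minimal sketch below (our own construction, not code from Refs. 42 and 43) estimates the free-energy difference between a target basin and a competitor basin from sampled values of a one-dimensional machine-learned OP, assuming the samples are drawn from, or have been reweighted to, the equilibrium distribution:

```python
import numpy as np

def basin_free_energy_gap(psi_samples, target_window, competitor_window,
                          kT=1.0, bins=100):
    """Target-vs-competitor free-energy difference from sampled values of
    a 1D order parameter psi. Since F(psi) = -kT ln P(psi) up to a
    constant, each basin's free energy follows from its total probability.
    """
    hist, edges = np.histogram(psi_samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    def basin_F(window):
        lo, hi = window
        mask = (centers >= lo) & (centers < hi)
        p = np.trapz(np.where(mask, hist, 0.0), centers)  # basin probability
        return -kT * np.log(max(p, 1e-12))

    # Positive FOM means the target basin lies lower in free energy.
    return basin_F(competitor_window) - basin_F(target_window)

# Example with synthetic bimodal OP samples (target basin near psi = -1).
rng = np.random.default_rng(0)
psi = np.concatenate([rng.normal(-1, 0.2, 8000), rng.normal(1, 0.2, 2000)])
print(basin_free_energy_gap(psi, (-2.0, 0.0), (0.0, 2.0)))  # ~ kT ln 4
```

In an actual design loop, this difference would be recomputed after each update of the interaction parameters, with enhanced sampling supplying converged statistics in both basins.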

Constructing an entire free-energy landscape is difficult due to its computational costs, so landscape engineering has not yet been applied to more general target structures. However, even if the landscape cannot be computed, it can still be navigated and updated in simulations. Van Anders, Glotzer, and co-workers proposed a design approach using simulations in an “alchemical” (or “expanded”) ensemble [Fig. 2(b)].44,45 Their work focused on a family of hard particles, the shapes of which are described by a set of variables, λ. For athermal particles, the free energy for a given shape is purely entropic with contributions from positions, x, and orientations, q. The particles were initially constrained to a target lattice using a tethering potential E(x). Alchemical Monte Carlo simulations were performed in an ensemble where particles not only rotated and translated but also fluctuated in shape. The partition function Z_AMC of an ensemble of N identical particles can be expressed as

$$Z_{\mathrm{AMC}} = \int dx\, dq\, d\lambda\; e^{-\beta\left[U(x,q|\lambda) - N\mu\cdot\lambda + E(x)\right]},$$
(1)

where μ are alchemical potentials (conjugate to the shape parameters) and U(x, q|λ) is the interparticle potential.

Because μ cannot be controlled in real systems, μ was set to zero to avoid biasing the particle shape, and the external tethering potential was slowly reduced. If the crystal structure is stable in the limit E → 0, the free energy (entropy) for the target lattice has been minimized (maximized) with respect to the particle shape. As shown in Fig. 2(b), the particle shape can be additionally symmetrized to improve the thermodynamic stability of the target.45 Because the free energy, rather than a free energy difference, is the FOM, alchemical Monte Carlo does not explicitly consider competitors. This is computationally efficient because it avoids having to fully sample the free-energy landscape but does not ensure that the competitors are disfavored, and thus, the target may only be metastable. Nonetheless, this method has been used successfully to optimize hard-particle shapes that assemble into many complex crystals [see, e.g., right-hand side of Fig. 2(b)].45 Alchemical methods are not limited to hard particles, and λ may include parameters characterizing other particle interactions.67 
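The sketch below illustrates the mechanics of such an alchemical simulation in a deliberately simplified setting: a single soft-interaction parameter (a Lennard-Jones diameter, standing in for the hard-particle shape variables of Refs. 44 and 45) fluctuates alongside the particle positions with μ = 0, while a harmonic tether to the target lattice is gradually released. All model choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 4                                            # 4x4 square target lattice
sites = np.array([[i, j] for i in range(L) for j in range(L)], float)
x, lam, beta = sites.copy(), 1.0, 2.0

def energy(x, lam, k):
    """Truncated Lennard-Jones pair energy plus the tether E(x)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    d = d[np.triu_indices(len(x), k=1)]
    d = d[d < 2.5]
    u_pair = np.sum(4.0 * ((lam / d) ** 12 - (lam / d) ** 6))
    return u_pair + 0.5 * k * np.sum((x - sites) ** 2)

k = 20.0
for sweep in range(4000):
    k *= 0.999                                   # slowly release the tether
    # Trial displacement of one particle.
    i = rng.integers(len(x))
    x_old, e_old = x[i].copy(), energy(x, lam, k)
    x[i] += rng.normal(0.0, 0.03, size=2)
    if rng.random() > np.exp(-beta * (energy(x, lam, k) - e_old)):
        x[i] = x_old                             # reject
    # Alchemical trial move on the design parameter (mu = 0: no bias).
    lam_new = lam + rng.normal(0.0, 0.01)
    if 0.5 < lam_new < 1.2:
        dE = energy(x, lam_new, k) - energy(x, lam, k)
        if rng.random() < np.exp(-beta * dE):
            lam = lam_new

print(f"selected diameter lam = {lam:.3f}")      # roughly 2**(-1/6) ~ 0.89
```

The sampled λ concentrates near the value for which the target lattice is most favorable, which is the essential mechanism behind the hard-shape optimization.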

Information theory provides quantities, so-called “statistical distances,” which characterize differences between data samples. The Kullback–Leibler (KL) divergence is one such quantity.68,69 The KL divergence of a target distribution, P_t(x), with respect to a model probability distribution, P(x|λ), is defined as

$$D_{\mathrm{KL}}\left(P_t(x)\,\middle\|\,P(x|\lambda)\right) = \int dx\, P_t(x)\, \ln\!\left[\frac{P_t(x)}{P(x|\lambda)}\right].$$
(2)

In the case of equilibrium self-assembly, P(x|λ) is the Boltzmann weight of a configuration x, and λ are the parameters that characterize the potential energy function. By optimizing λ so that D_KL is minimized, the structures self-assembled from the model system are made to resemble the target configurations. Minimization of D_KL is conceptually appealing and intuitive as a design objective because it is equivalent to maximizing the likelihood that the probabilistic model P(x|λ) will sample the configurations contained in the target ensemble.46
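This equivalence follows directly from expanding Eq. (2):

$$D_{\mathrm{KL}} = \int dx\, P_t(x)\ln P_t(x) \;-\; \int dx\, P_t(x)\ln P(x|\lambda),$$

where the first term is independent of λ, so that

$$\operatorname*{arg\,min}_{\lambda} D_{\mathrm{KL}} = \operatorname*{arg\,max}_{\lambda}\; \left\langle \ln P(x|\lambda) \right\rangle_{P_t};$$

that is, minimizing D_KL is the same as maximizing the expected log-likelihood of the target configurations under the model.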

While various computational approaches have been applied to minimize D_KL,70–72 updates to λ consistent with a steepest-descent optimization are particularly simple to compute in the canonical ensemble when the interaction potential is pairwise and isotropic [denoted here as u(r|λ)]. In particular, the potential parameters used in a molecular simulation during the (i + 1)th iteration of the minimization, λ_{i+1}, are determined from those used in the ith iteration, λ_i, as

$$\lambda_{i+1} = \lambda_i + \alpha \int dr\, \left[g(r|\lambda_i) - g_t(r)\right] \left.\nabla_{\lambda}\, u(r|\lambda)\right|_{\lambda_i},$$
(3)

where g(r|λ_i) and g_t(r) are the radial distribution functions of the simulated model in the ith iteration and that of the target ensemble, respectively, and α is a tunable parameter that controls the magnitude of the update. This iterative update process is depicted schematically in Fig. 2(c). Unlike the thermodynamic-descriptor-based methods discussed in Sec. II A, which are not guaranteed to result in self-assembly of the target in a forward simulation, Eq. (3) uses the structures measured from the self-assembly process as input to the parameter update at each optimization step. In this way, spontaneous assembly of the target structure is strongly promoted over its competitors by the interactions optimized using such “on-the-fly” methods.73
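To illustrate how Eq. (3) translates into an optimization loop, the minimal sketch below updates the parameters of an illustrative Gaussian pair potential, u(r|λ) = λ₀ exp[−(r/λ₁)²], from the mismatch between simulated and target radial distribution functions. The forward simulation, simulate_g_of_r, is a hypothetical stand-in for the MD/MC step, and the potential form and quadrature are our own choices:

```python
import numpy as np

r = np.linspace(0.5, 5.0, 200)   # radial grid (reduced units)

def grad_u(r, lam):
    """Analytic gradient of u(r|lam) = lam[0]*exp(-(r/lam[1])**2)
    with respect to lam = (amplitude, range)."""
    e = np.exp(-((r / lam[1]) ** 2))
    return np.stack([e, lam[0] * e * 2.0 * r**2 / lam[1] ** 3])

def relative_entropy_step(lam, g_sim, g_target, alpha=0.05):
    """One steepest-descent update of Eq. (3), using trapezoidal
    quadrature for the radial integral."""
    integrand = (g_sim - g_target)[None, :] * grad_u(r, lam)
    return lam + alpha * np.trapz(integrand, r, axis=1)

# Design loop (simulate_g_of_r is the expensive forward simulation):
# lam = np.array([1.0, 1.0])
# for it in range(100):
#     g_sim = simulate_g_of_r(lam)
#     lam = relative_entropy_step(lam, g_sim, g_target)
```

Consistent with the footnote to Eq. (3), each iteration should start its forward simulation from a disordered state rather than from the previous step's configurations.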

The strategy described above has been used to design model isotropic pair potentials that self-assemble exotic structures [Fig. 2(c)] including open lattices,46,74 Frank–Kasper phases,47 multi-component crystals,75 and colloidal strings.76 Some desirable features of this approach include the following: (1) by manipulating the form of u(r|λ), physically motivated interaction potentials can be discovered;77,78 (2) by varying the ensemble in which the iterative simulations are performed, simultaneous control of structure and thermodynamic quantities, such as the pressure of the self-assembled system, can be achieved;78 and (3) the minimization can also be performed in Fourier space, which may be computationally convenient for some design problems.79 

The Kullback–Leibler divergence is also termed the relative entropy. Its minimization has been used to parameterize molecular coarse-grained models (where many atoms might be represented as a single bead) that are intended to stand in for more computationally expensive all-atom target simulations.70,71,80 In both designs for self-assembly and coarse-graining, the goal is to discover the parameters for a probabilistic model that are most likely to reproduce a target dataset, whether that dataset comes from an all-atom simulation or a contrived set of configurations that display a desired structural motif. Given these similarities, it is perhaps not surprising that other techniques from the coarse-graining literature have found success in design for self-assembly applications as well. For example, iterative Boltzmann inversion, which utilizes a heuristic update scheme with the same stationary point as Eq. (3) for a pair potential that is infinitely flexible, has been used to discover isotropic pair interactions that self-assemble cluster fluids81 as well as mesoporous materials.77,82
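For reference, the iterative Boltzmann inversion update has a particularly compact form when the potential is tabulated on a radial grid. The sketch below is a direct transcription of the standard heuristic; the damping factor is a common stabilization that we include as an assumption:

```python
import numpy as np

def ibi_update(u, g_sim, g_target, kT=1.0, damp=0.5, floor=1e-6):
    """One iterative Boltzmann inversion step on a tabulated pair
    potential: u_(i+1)(r) = u_i(r) + damp * kT * ln[g_i(r)/g_t(r)].
    At the stationary point g_i = g_t, the potential stops changing."""
    ratio = np.clip(g_sim, floor, None) / np.clip(g_target, floor, None)
    return u + damp * kT * np.log(ratio)
```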

The relationship between coarse-graining and design for self-assembly problems suggests multiple avenues for future work on the latter. For example, coarse-graining has been performed with multi-body83,84 and anisotropic85 interactions. These more complex models are compatible with the relative entropy framework described above. Because many interactions commonly used to assemble structures in experiments are many-body and/or anisotropic in nature, including those mediated by electric charges,12 electric and magnetic fields,86,87 surface tension,88,89 nematic liquid crystals,90 and heterogeneous surfaces,91,92 embedding these features into the design space may allow for stronger coupling between computational and experimental materials assembly. Additionally, certain target structures may require very complex interactions (or may even be impossible) to assemble if the potential is restricted to isotropic and pairwise forms. Nonetheless, there may be a “simpler” many-body and/or anisotropic potential that will readily assemble the structure. For example, the formation of capsid-like structures would undoubtedly be difficult for particles with isotropic interactions, but patchy particles with relatively simple short-ranged interactions are known to self-assemble into them with high fidelity.42 

Finally, other statistical distances can also serve as FOMs. For example, one drawback of relative entropy minimization is that D_KL is not readily amenable to optimizing singular interactions such as a hard-core potential. A hard core produces regions in configuration space of zero weight [i.e., P(x|λ) = 0], which leads to a divergent D_KL if P_t(x) ≠ 0 for the same configurations. In such cases, the relevant gradients cannot be computed to minimize D_KL. In contrast, the Bhattacharyya distance69

$$D_{B}\left(P_t(x),\, P(x|\lambda)\right) = -\ln \int dx\, \sqrt{P(x|\lambda)\, P_t(x)}$$
(4)

does not share this limitation and might be used as an alternative metric for inverse design of hard-particle systems.
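A small discrete example makes the contrast concrete (the probability vectors are arbitrary): when the model assigns zero weight to a state the target visits, as a hard core does, D_KL diverges while D_B remains finite.

```python
import numpy as np

P_target = np.array([0.25, 0.25, 0.25, 0.25])
P_model = np.array([0.0, 0.5, 0.3, 0.2])    # zero weight on state 0

with np.errstate(divide="ignore"):
    D_KL = np.sum(P_target * np.log(P_target / P_model))
D_B = -np.log(np.sum(np.sqrt(P_model * P_target)))

print(D_KL, D_B)   # inf, ~0.16: only D_B gives a usable optimization signal
```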

Many of the preceding descriptors are statistical mechanical quantities that must be computed on the basis of an ensemble of configurations. This limits their usefulness for systems where the relative statistical weights of configurations are not readily known (e.g., non-equilibrium systems). A more generically applicable strategy is to instead use a structural OP that serves as a low-resolution description of a high-dimensional configuration. When such OPs reliably distinguish between a target structure and its competitors,93 they can be used to steer an iterative scheme using the OPs as the FOM, as in Figs. 3(a)–3(c). For example, Kumar et al. recently used the Steinhardt bond-order parameters based on spherical harmonics of local neighbor orientations94,95 to design pair interactions for assembly of body-centered-cubic colloidal crystals.96 However, OPs that are sufficiently discriminatory between target structures and competitors can be challenging to construct, particularly for complex structures such as open lattices, crystals with large unit cells, and quasicrystals as well as cases where potential competitor structures are not known in advance. Machine learning offers possible solutions to automatically discover OPs from structural data for design.
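As a concrete example of such a structural OP, the sketch below computes the per-particle Steinhardt q_l from the bond vectors to a particle's neighbors; neighbor-list construction and any averaging over particles are omitted for brevity:

```python
import numpy as np
from scipy.special import sph_harm

def steinhardt_q(l, center, neighbors):
    """Steinhardt bond-order parameter q_l for a single particle,
    built from spherical harmonics of its bond vectors."""
    bonds = neighbors - center
    r = np.linalg.norm(bonds, axis=1)
    theta = np.arctan2(bonds[:, 1], bonds[:, 0]) % (2 * np.pi)  # azimuthal
    phi = np.arccos(np.clip(bonds[:, 2] / r, -1.0, 1.0))        # polar
    q2 = 0.0
    for m in range(-l, l + 1):
        qlm = np.mean(sph_harm(m, l, theta, phi))  # average over bonds
        q2 += np.abs(qlm) ** 2
    return np.sqrt(4.0 * np.pi / (2 * l + 1) * q2)

# Example: the six nearest neighbors of an ideal simple-cubic site.
nbrs = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)
print(steinhardt_q(4, np.zeros(3), nbrs))  # ~0.764 for simple cubic
```

Different crystalline environments (e.g., FCC, BCC, liquid-like) map to characteristically different (q_4, q_6) values, which is what allows q_l to serve as a FOM in an iterative design loop such as that of Kumar et al.96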

FIG. 3.

Strategies for leveraging machine learning (ML) architectures for enhanced materials design. (a) Unsupervised training on a large training set of configurations discovers functions (represented with trapezoids) that map between high-dimensional configurations and a low-dimensional set of order parameters (OPs) ψi. (b) The ML OPs can be used as the design objective for assembling complex structures. Adapted with permission from Lindquist et al., J. Phys. Chem. B 122, 5547 (2018). Copyright 2018 American Chemical Society. (c) ML reduces the cost of computing objective functions in inverse design. (d) The structure-OP function can be transferred to accelerate supervised learning of structure–property relations from a small training set. (e) Inverse design can be performed directly in low-dimensional OP space using the ML configuration-generating function in lieu of forward simulations.

Several supervised ML methodologies using neural networks have successfully classified input configurations according to a library of known structures.97–99 The ML classifiers outperform classifications using traditional OPs based on local orientations,94,95 angles,100,101 and neighbor-graph topology102 in discriminating complex crystal phases97,99 and can be trained to identify interfacial structures.98 If relevant structures are not known ahead of time or are difficult to produce, unsupervised ML methods leveraging clustering algorithms can categorize similar structures together.103–105 

These methodologies primarily classify configurations into discrete categories, but continuous OPs are desirable for optimization. These OPs can be local, for example, computing a descriptor for each particle, or global, computing one value for an entire configuration. The spatial resolution of local descriptors makes them useful for characterizing interfaces; however, many conventional local order parameters (e.g., the Steinhardt parameters) are not constructed to accurately identify interfaces. Therefore, Reinhart and co-workers developed an unsupervised ML method, called neighborhood graph analysis,106–108 which uses diffusion maps to discern a few continuous OPs characterizing local structural motifs; their method efficiently discriminated between not only a variety of colloidal crystals but also their surfaces and defects. Neighborhood graph analysis has also been useful for understanding the properties of grain boundaries.109 

Global OPs are useful to compute a single FOM to update parameters in an iterative design loop. One fruitful strategy for generating global OPs is to perform dimensionality reduction on a large dataset of configurations and use the low-dimensional representation as an OP. While discovery of the OP requires multiple configurations, once defined, the OP can be computed on a per-configuration basis. Dimensionality reduction methods such as diffusion maps,42,60–62 autoencoders,63–65 and variational dynamics encoders66 have been used to construct global OPs that are continuous and differentiable. The underlying principle of such dimensionality reduction approaches is to find an intermediate compressed representation that when uncompressed is as close to the input data as possible, as depicted in Fig. 3(a). Such OPs have been leveraged for enhanced sampling of molecular dynamics trajectories directly in the low-dimensional OP space.42,61–66,110 Similarly, Jadrich et al. used sorted arrays of pairwise distances and orientations to obtain global OPs through principal component analysis (PCA).111,112 The resulting OPs were able to detect a variety of phase transitions including freezing in hard disks/spheres, liquid–gas and compositional phase separation, nematic ordering in ellipses, and a non-equilibrium phase transition. A similar method utilized non-negative matrix factorization to compute global OPs in a ternary lipid mixture.113 
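The sketch below illustrates this pipeline in the spirit of the PCA approach of Refs. 111 and 112: each configuration is featurized as its sorted list of pairwise distances, and the leading principal component serves as a continuous, per-configuration OP. The synthetic lattice and gas snapshots stand in for simulation data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

def features(config):
    """Sorted pairwise distances: a permutation-invariant descriptor."""
    d = np.linalg.norm(config[:, None, :] - config[None, :, :], axis=-1)
    return np.sort(d[np.triu_indices(len(config), k=1)])

# "Ordered" snapshots: jittered square lattice; "disordered": uniform gas.
lattice = np.array([[i, j] for i in range(4) for j in range(4)], float)
ordered = [lattice + rng.normal(0, 0.05, lattice.shape) for _ in range(50)]
disordered = [rng.uniform(0, 4, lattice.shape) for _ in range(50)]

X = np.array([features(c) for c in ordered + disordered])
pca = PCA(n_components=1).fit(X)
psi = pca.transform(X).ravel()   # one global OP value per configuration

print("ordered   <psi>:", psi[:50].mean())
print("disordered <psi>:", psi[50:].mean())  # well separated (sign arbitrary)
```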

For structural design, the machine-learned OPs of a target can serve as a convenient, numerical design objective, as shown in Fig. 3(b). The output OPs can be employed in an iterative scheme [Fig. 3(c)] whereby configurations are converted to OPs and parameters can be tuned to push the OPs toward the desired value. This may be especially useful to design complex structures for which good descriptors are lacking. However, it is not necessarily straightforward to perform such an optimization. Learning the OPs requires generating a large amount of data. These data must be representative of the structures likely to be sampled during design so that the OPs can discriminate between the target structure and competitors. For machine-learned OPs to be reliably ported into inverse schemes, it would be beneficial to have systematic methods for determining the minimum amount of data required to learn sufficiently accurate OPs, where in the design space these data should be collected, and if the data can be acquired on-the-fly and/or recycled between designs using transfer learning. Similarly, automated ways to select the best FOM, ML strategy, and iterative scheme for a certain design problem would be useful for non-experts. Such methods have not been fully explored for structural design problems and remain an important area for future research.

Unlike the design of interactions for self-assembling a target structure, where determining a suitable FOM was challenging, there is an obvious choice for a FOM in property design—the property itself. Each iteration of the optimization involves computing the property from the underlying structure, i.e., evaluating the “structure–property relationship.” If the structure–property relationship is known or readily computed, material properties can be designed using a variety of optimization routines. For example, Miskin and Jaeger designed an unusual strain-stiffening granular material using an evolutionary optimization algorithm,35,36 while elastic networks with maximally negative Poisson ratios38 and targeted allosteric response114 have been designed using gradient-descent and simulated-annealing algorithms, respectively. Dynamic properties such as diffusivity and viscosity can be optimized by similar techniques.115–117 

More often though, material properties are complex functions of the structure that can depend on dynamic or nonequilibrium behavior, and the structure–property relationship is prohibitively expensive to evaluate frequently. In this case, either (1) the iterative optimization algorithm must be significantly improved to reduce the total number of structure–property evaluations required for convergence or (2) the cost of computing the property must be reduced by evaluating the structure–property relationship approximately.

Many techniques have been developed to improve optimization routines relevant for design of soft matter, and it is beyond the scope of this perspective to comprehensively cover them. Inverse methods leveraging Bayesian optimization appear particularly promising and have been recently adapted for design of material properties.110,118,119 In addition to navigating design spaces efficiently, these methods also provide estimates for uncertainties and sensitivities of solutions, which may be useful for finding and prioritizing degenerate solutions. For example, the solution that is least sensitive to perturbations in the design parameters of a model might be the best to fabricate in experiments, where deviations from the model are bound to occur.
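As a schematic of what such a loop can look like, the sketch below fits a Gaussian-process surrogate to a handful of expensive FOM evaluations and selects each new design by expected improvement. The one-dimensional design space, the mock property_fom (a stand-in for a full simulation), and all hyperparameters are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def property_fom(lam):
    """Hypothetical expensive structure-property evaluation."""
    return -(lam - 0.7) ** 2 + 0.05 * np.sin(20 * lam)

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (4, 1))                    # initial designs
y = np.array([property_fom(x[0]) for x in X])
grid = np.linspace(0, 1, 500).reshape(-1, 1)

for it in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(0.1), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    # Expected improvement: trades off predicted mean and uncertainty.
    z = (mu - y.max()) / np.clip(sigma, 1e-9, None)
    ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, property_fom(x_next[0]))

print("best design:", X[np.argmax(y)][0], "with FOM:", y.max())
```

The same posterior variance used by the acquisition function supplies the uncertainty and sensitivity estimates discussed above.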

We highlight one particular approach to improve the convergence of optimizations that is physically motivated and has been applied to property design of self-assembled materials. Jaeger, de Pablo, and co-workers proposed a statistical physics “design engine,”48 depicted in Fig. 2(d),

$$\frac{dP(x|\lambda)}{dt} = P(x|\lambda)\left[f(x) - \langle f(x)\rangle_{P(x|\lambda)}\right],$$
(5)

which prescribes dynamics to the optimization with an artificial time t. Here, f is an objective function of the configuration x and sets the design goal, and ⟨f(x)⟩_{P(x|λ)} is an ensemble average over the probability distribution P(x|λ). The design engine leverages information about the entire probability distribution; configurations that contribute more to f(x) than average become more likely, while those that contribute less than average become more unfavorable. The form of Eq. (5) enforces conservation of probability and ensures that the probability distribution is normalized. The design engine converges more quickly than standard optimizers (such as steepest descent or simulated annealing) for certain classes of problems. There is considerable flexibility in choosing f so that various OPs or material properties can be incorporated for either structure or property design. The design engine has been successfully applied to a sampling of inverse problems shown in the right-hand side of Fig. 2(d), including colloidal crystallization,96 polymer folding, self-assembly of block copolymers, and even nonequilibrium systems.48 Other types of physically motivated iterative schemes may also be useful; for example, alchemical-ensemble methods have been suggested for property design.67
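One way to realize the Eq. (5) dynamics for a Boltzmann model P(x|λ) ∝ exp[−βU(x|λ)] is a sampled covariance update between the objective and the score ∂ ln P/∂λ, which raises the probability of configurations with above-average f. The sketch below reflects our simplified reading of this idea, not the authors' implementation:

```python
import numpy as np

def design_engine_step(lam, f_vals, dU_dlam, beta=1.0, eta=0.1):
    """One parameter update projecting the Eq. (5) dynamics onto lam for
    P(x|lam) ~ exp(-beta*U(x|lam)). f_vals[k] is the objective on sample k
    (drawn from P at the current lam); dU_dlam[k, j] is dU/dlam_j there."""
    df = f_vals - f_vals.mean()
    score = -beta * (dU_dlam - dU_dlam.mean(axis=0))  # d ln P / d lam
    return lam + eta * (df[:, None] * score).mean(axis=0)

# Synthetic check: samples with larger f here have larger dU/dlam, so the
# update lowers lam, making the high-f configurations more probable.
rng = np.random.default_rng(4)
f = rng.normal(size=1000)
dU = (f + 0.1 * rng.normal(size=1000))[:, None]
print(design_engine_step(np.array([1.0]), f, dU))   # < 1.0
```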

ML has proven effective for reducing the cost of determining material properties [Figs. 3(c) and 3(d)]. One ML strategy is to discover an easier-to-compute OP that serves as a proxy for difficult-to-compute properties. Support vector machines have been used to analyze the “softness” of glassy systems from their structural features.120–122 PCA of particle configurations has been used to find OPs for the mechanical properties of polycrystalline materials123 and the effective diffusivity through membranes,124 and the OPs were then regressed to simulated material properties. More accurate predictions can be obtained by using supervised ML methods to learn the structure–property relationship directly. Neural networks were trained to predict the elastic modulus of a lattice model of a binary elastic composite from its configuration, outperforming linear regression of OPs from PCA.125 Neural networks similarly outperformed regression to predict the activity of antifreeze proteins from the structure and hydrogen-bonding dynamics of nearby water.126 

Learning these relationships can require large training sets, which are impractical to generate if the material property is difficult to compute in the first place. Transfer-learning strategies can be used to accelerate training. Yang, Agrawal, and co-workers trained a generative adversarial network (GAN) for heterogeneous, disordered two-dimensional optical materials.110 The GAN was initially trained on a large dataset of configurations that were easy to produce; this learning was leveraged to initialize a new network for computing optical absorption using a smaller, more expensive-to-produce training set of configuration–absorption pairs [Fig. 3(d)]. The transfer-learned structure–property network is more accurate than a network trained from scratch for a fixed number of training iterations and training set size or, equivalently, requires smaller training sets and fewer iterations to achieve the same prediction accuracy. Transfer learning may also be useful in cases where a design optimization pushes the target property outside the bounds of the training set so that retraining of the ML structure–property relationship is required, but such methods have yet to be fleshed out for materials design.

During training, generative ML methods learn a small set of OPs from which they are able to generate new configurations statistically indistinguishable from those in the training set. The generator from a GAN was recently used to perform inverse design directly in the OP space to find structures with high optical absorption [Fig. 3(e)].110 Because OP space is much lower-dimensional than configuration space, it is easier to explore. Other ML approaches for dimensionality reduction can be used similarly for inverse design; for example, Guo, Ren, and co-workers designed density (spatial) distributions of heat-transfer materials with optimal thermal properties using the decoder from a pretrained autoencoder network to perform the optimization in OP space.127 Similarly, structural OPs discovered using the ML methods of Sec. II C can serve as the design space. Combining both ML OP design spaces and ML structure–property relationships in the same inverse cycle could provide even more efficient design schemes.
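A minimal version of this latent-space design strategy needs only a pretrained generator and a property surrogate. In the sketch below, decoder and predict_property are hypothetical stand-ins for, e.g., a GAN generator and an ML structure–property model; a simple hill climb suffices because the OP space is low-dimensional:

```python
import numpy as np

def optimize_in_op_space(decoder, predict_property, dim=8, n_iter=500,
                         step=0.1, seed=0):
    """Gradient-free search over a low-dimensional OP (latent) space:
    propose local moves in z, decode to a configuration, keep the move
    if the predicted property improves."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=dim)
    best = predict_property(decoder(z))
    for _ in range(n_iter):
        z_new = z + step * rng.normal(size=dim)
        val = predict_property(decoder(z_new))
        if val > best:
            z, best = z_new, val
    return decoder(z), best

# Trivial demo: identity "decoder" and a quadratic surrogate property.
config, best = optimize_in_op_space(lambda z: z,
                                    lambda c: -np.sum((c - 2.0) ** 2))
print(best)   # climbs toward 0 as z approaches (2, ..., 2)
```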

The highlighted inverse methods are primarily intended for design of a single material property, but many applications require materials with multiple functionalities. For example, bulletproof vests should be lightweight and flexible yet highly energy-dissipative,21 while membranes used in flow batteries must be both mechanically strong and electrically conductive.3 Methods that can efficiently address inverse problems with several design objectives will be useful for such state-of-the-art materials.

One possible strategy is to reduce these “multiobjective” design problems into a single objective that depends on multiple material properties. This idea was used to engineer several mechanical properties of a gallium–iron alloy128 and to design multifunctional optical ports.18 However, there are many ways to incorporate several criteria into a single objective function, and the arbitrary choice has a large effect on the final solution.18,128 Such design problems may be better approached using algorithms developed for multiobjective optimization,129,130 but this application has not been thoroughly explored for soft-materials design.

The methods summarized in this article have been remarkably successful for designing soft materials in silico. Inverse approaches have similarly been used to design materials in experiments,31–41 but these strategies have not taken advantage of the methods developed for in silico design. As a result, there are compelling future opportunities to address the translation of effective computational strategies for the discovery of new materials to the laboratory. These opportunities include the application of inverse approaches to find robust solutions subject to experimentally realistic design constraints (Sec. IV A), the design of externally directed assembly protocols that are simple to implement in experiments (Sec. IV B), and the development of strategies to improve the accuracy and computational efficiency of experimentally relevant models (Sec. IV C).

Most inverse methods for structure design are intended for a single target structure at one thermodynamic state point (e.g., temperature and pressure). This is problematic in practice because processing and operating conditions are rarely constant over a material’s lifetime. Materials designed only for one state may have different structures and properties that are suboptimal or even unusable at other conditions. Alternatively, a material with a structural transition may be the design objective. For example, reconfigurable materials that change their structure in response to their conditions are useful for sensing applications131,132 and as responsive materials capable of controlled, on-the-fly modulation of properties.5,133 Methods allowing for design of multiple target structures and multiple state points can efficiently address these inverse problems.

The coarse-graining community has addressed a problem closely related to “multistate” design: developing an optimal coarse-grained representation from atomistic data sampled at different thermodynamic states.134–137 Such approaches could be leveraged for inverse schemes to find a single interaction potential that assembles different structures under different conditions. In principle, entire phase diagrams could be designed by tessellating state points with target structures and simultaneously designing for them. This approach could systematically find materials with exotic phase behavior such as those that “inverse melt” upon cooling.138 However, computational demands for this procedure may be intense, and a feasible solution that assembles all target structures may not exist. Investigation is needed to demonstrate the possibilities for and limitations of multistate design.
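One natural formulation of the multistate objective, which we offer only as a hypothetical illustration, weights the relative entropy of Eq. (2) over the set of targeted state points s:

$$D_{\mathrm{multi}}(\lambda) = \sum_{s} w_s\, D_{\mathrm{KL}}\!\left(P_t^{(s)}(x)\,\middle\|\,P(x|\lambda; T_s, \rho_s)\right),$$

where P_t^{(s)} is the target ensemble at state s, the model distribution is evaluated at that state's temperature T_s and density ρ_s, and the weights w_s set the relative importance of each target. Whether a single λ can drive all terms low simultaneously is precisely the feasibility question raised above.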

In addition to equilibrium thermodynamic considerations, reconfigurable materials require kinetic transitions from one structure to another. Particularly challenging are fluid–solid and solid–solid transitions that have many kinetic barriers but are essential for controlled manipulation of material properties.139,140 Objective functions that vary during the optimization may help find solutions with transitions specifically embedded. For example, an objective function that periodically switched between two states of reconfigurable circuits141 and of allosteric networks142 found solutions prioritizing transitions between states. Murugan and Jaeger recently suggested applying this same “switching” strategy to self-assembling materials,23 and we agree that this is a potentially fruitful area for further study.

If interactions among building blocks can be controlled with an external stimulus, the stimulus may be used to facilitate self-assembly. Many such systems have been studied experimentally, including materials that respond to light, temperature, electric/magnetic field, and flow.139 This approach for assembly is attractive because it is often easier to control and modulate the external processing conditions than it is to change the physicochemical properties of the building blocks.

If the interactions induced by the external stimuli can be represented with simple expressions, for example, in terms of an equilibrium interparticle pair potential, they are amenable to design using inverse methods described in Secs. II and III. Often though, these interactions are many-body (e.g., electrostatic), anisotropic (e.g., dipolar), and out-of-equilibrium (e.g., flow-induced). This complicates the design of the parameters of the externally induced interactions, but if robust methods for this inverse problem could be developed, the solutions may be easier to realize experimentally.

A particularly promising feature of induced interactions is the ability to vary them over time. For example, these approaches show promise for enhancing crystallization rates while reducing defects87,143–145 as well as assembling structures not stable at equilibrium.140 Processing conditions may offer greater design flexibility when complicated interactions discovered from inverse methods are challenging to realize experimentally.46,74,79 However, a systematic inverse approach is likely required to find these optimal protocols, given the complexity of the design space.

Although inverse design has been successful in silico, connecting these techniques with experiments to realize new materials remains a key challenge. Inverse methods can be applied directly to experiments,31 but this approach is ineffective when the experiments are slow, sensitive, or not amenable to automation. In these cases, computation can be leveraged to rapidly screen materials using inverse techniques, and computational predictions can be verified with experiments. This approach has been applied to find bottlebrush polymers with targeted morphologies,32 an optimal director field for a liquid crystal,33 and disordered materials with targeted acoustic and photonic properties.34,146,147 The success of the combined computational–experimental approach hinges on (1) the availability of fast and accurate computational models and (2) the ability to constrain the design to experimentally controllable, feasible parameters. The design of block copolymers has given particularly successful demonstration of this,22,39–41,148–150 where well-established techniques such as self-consistent field theory can be coupled to inverse schemes.

For other classes of soft materials, reliable models that can access the appropriate length and time scale for assembly may not exist or may be challenging to connect to experiments. Detailed models, where the designable parameters are usually clearer, are often too computationally demanding to use in inverse schemes, while models with effective interactions may not be readily mapped onto experiments. Implementing coarse-graining techniques within inverse frameworks may help bridge this disconnect. One such strategy is to define the design space in terms of experimentally controllable parameters of a detailed model, including parameter constraints. Systematic coarse-graining can then be used to map these parameters to a simpler model for simulating assembly. For example, for the current value of the design parameters, a fully atomic simulation may be used to compute the effective pair potential between two colloidal particles. The coarse-grained potential can then be used to simulate assembly of a much larger system of many particles. The design parameters would then be updated directly and the process iterated. Such an integrated coarse-graining–inverse scheme has not yet been demonstrated but is potentially powerful for connecting computation and experiments to design new materials.

Inverse approaches suggest systematic means for designing soft materials with complex target structures and desired macroscopic properties. In this perspective, we reviewed the methodological and computational challenges associated with various design problems in soft matter and the strategies developed to address them. Methods for assembling target structures focus mainly on determining an optimal FOM that is descriptive of the target and preferentially encourages its assembly. Metrics based on thermodynamic energies, statistical distance measures, and structural OPs have all been implemented as FOMs to design interactions that successfully self-assemble a variety of phases with complex structures. These methods may further benefit from ML strategies to automatically discover structural FOMs. For design problems of materials with target properties, the FOM is typically more obvious (i.e., the property itself), so efforts instead focus on efficient strategies for determining the relevant structure–property relationships. Since such relations are often computationally demanding to evaluate, they may benefit from ML strategies to accelerate property evaluation. The advances presented here expand the scope of application for computational design of soft materials and open up promising new opportunities, including the synthesis of reconfigurable materials with multiple functionalities, the engineering of nonequilibrium assembly protocols, and the strengthening of connections between computational and experimental approaches to materials discovery.

M.P.H. acknowledges support from the Center for Materials for Water and Energy Systems, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0019272. B.A.L. acknowledges support from the Darleane Christian Hoffman Distinguished Postdoctoral Fellowship at the Los Alamos National Laboratory. T.M.T. acknowledges support from the Robert A. Welch Foundation (Grant No. F-1696).

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

1.
W.
Bogaerts
,
V.
Wiaux
,
D.
Taillaert
,
S.
Beckx
,
B.
Luyssaert
,
P.
Bienstman
, and
R.
Baets
,
IEEE J. Sel. Top. Quantum Electron.
8
,
928
(
2002
).
2.
K.
Nakayama
,
K.
Tanabe
, and
H. A.
Atwater
,
Appl. Phys. Lett.
93
,
121904
(
2008
).
3.
K.
Kanamura
,
N.
Akutagawa
, and
K.
Dokko
,
J. Power Sources
146
,
86
(
2005
).
4.
R. M.
Darling
,
K. G.
Gallagher
,
J. A.
Kowalski
,
S.
Ha
, and
F. R.
Brushett
,
Energy Environ. Sci.
7
,
3459
(
2014
).
5.
D. J.
Klingenberg
,
AIChE J.
47
,
246
(
2001
).
6.
C.
Girard
and
E.
Dujardin
,
J. Opt. A: Pure Appl. Opt.
8
,
S73
(
2006
).
7.
G. M.
Whitesides
and
B.
Grzybowski
,
Science
295
,
2418
(
2002
).
8.
S. C.
Glotzer
and
M. J.
Solomon
,
Nat. Mater.
6
,
557
(
2007
).
9.
A.
Agrawal
,
S. H.
Cho
,
O.
Zandi
,
S.
Ghosh
,
R. W.
Johns
, and
D. J.
Milliron
,
Chem. Rev.
118
,
3121
(
2018
).
10.
G.
Loget
,
J.
Roche
, and
A.
Kuhn
,
Adv. Mater.
24
,
5111
(
2012
).
11.
Y.
Sun
and
Y.
Xia
,
Science
298
,
2176
(
2002
).
12.
A. M.
Kalsin
,
M.
Fialkowski
,
M.
Paszewski
,
S. K.
Smoukov
,
K. J. M.
Bishop
, and
B. A.
Grzybowski
,
Science
312
,
420
(
2006
).
13.
M. P.
Stoykovich
,
H.
Kang
,
K. C.
Daoulas
,
G.
Liu
,
C.-C.
Liu
,
J. J.
de Pablo
,
M.
Müller
, and
P. F.
Nealey
,
ACS Nano
1
,
168
(
2007
).
14.
P.
Jiang
,
J. F.
Bertone
,
K. S.
Hwang
, and
V. L.
Colvin
,
Chem. Mater.
11
,
2132
(
1999
).
15.
W. F.
Reinhart
and
A. Z.
Panagiotopoulos
,
J. Chem. Phys.
145
,
094505
(
2016
).
16.
J.
Ge
,
Y.
Hu
, and
Y.
Yin
,
Angew. Chem., Int. Ed.
46
,
7428
(
2007
).
17.
L.-C.
Cheng
,
Z. M.
Sherman
,
J. W.
Swan
, and
P. S.
Doyle
,
Langmuir
35
,
9464
(
2019
).
18.
T. W.
Hughes
,
M.
Minkov
,
I. A. D.
Williamson
, and
S.
Fan
,
ACS Photonics
5
,
4781
(
2018
).
19.
S.
Torquato
,
Soft Matter
5
,
1157
(
2009
).
20.
A.
Jain
,
J. A.
Bollinger
, and
T. M.
Truskett
,
AIChE J.
60
,
2732
(
2014
).
21.
H. M.
Jaeger
,
Soft Matter
11
,
12
(
2015
).
22.
H. M.
Jaeger
and
J. J.
de Pablo
,
APL Mater.
4
,
053209
(
2016
).
23.
A.
Murugan
and
H. M.
Jaeger
,
MRS Bull.
44
,
96
(
2019
).
24.
K. R.
Gadelrab
,
A. F.
Hannon
,
C. A.
Ross
, and
A.
Alexander-Katz
,
Mol. Syst. Des. Eng.
2
,
539
(
2017
).
25.
A. L.
Ferguson
,
J. Phys.: Condens. Matter
30
,
043002
(
2018
).
26.
N. E.
Jackson
,
M. A.
Webb
, and
J. J.
de Pablo
,
Curr. Opin. Chem. Eng.
23
,
106
(
2019
).
27.
B.
Sanchez-Lengeling
and
A.
Aspuru-Guzik
,
Science
361
,
360
(
2018
).
28.
R.
Gómez-Bombarelli
,
J. N.
Wei
,
D.
Duvenaud
,
J. M.
Hernández-Lobato
,
B.
Sánchez-Lengeling
,
D.
Sheberla
,
J.
Aguilera-Iparraguirre
,
T. D.
Hirzel
,
R. P.
Adams
, and
A.
Aspuru-Guzik
,
ACS Cent. Sci.
4
,
268
(
2018
).
29.
C. W.
Coley
,
N. S.
Eyke
, and
K. F.
Jensen
,
Angew. Chem., Int. Ed.
(published online
2019
).
30.
C. W.
Coley
,
N. S.
Eyke
, and
K. F.
Jensen
,
Angew. Chem., Int. Ed.
(published online
2019
).
31. S. Wilken, M. Z. Miskin, and H. M. Jaeger, Phys. Rev. E 91, 022212 (2015).
32. D. J. Walsh, S. Dutta, C. E. Sing, and D. Guironnet, Macromolecules 52, 4847 (2019).
33. H. Aharoni, Y. Xia, X. Zhang, R. D. Kamien, and S. Yang, Proc. Natl. Acad. Sci. U. S. A. 115, 7206 (2018).
34. W. Man, M. Florescu, E. P. Williamson, Y. He, S. R. Hashemizad, B. Y. C. Leung, D. R. Liner, S. Torquato, P. M. Chaikin, and P. J. Steinhardt, Proc. Natl. Acad. Sci. U. S. A. 110, 15886 (2013).
35. M. Z. Miskin and H. M. Jaeger, Nat. Mater. 12, 326 (2013).
36. M. Z. Miskin and H. M. Jaeger, Soft Matter 10, 3708 (2014).
37. T. Auzinger, W. Heidrich, and B. Bickel, ACM Trans. Graphics 37, 159 (2018).
38. D. R. Reid, N. Pashine, A. S. Bowen, S. R. Nagel, and J. J. de Pablo, Soft Matter 15, 8084 (2019).
39. A. F. Hannon, K. W. Gotrik, C. A. Ross, and A. Alexander-Katz, ACS Macro Lett. 2, 251 (2013).
40. A. F. Hannon, Y. Ding, W. Bai, C. A. Ross, and A. Alexander-Katz, Nano Lett. 14, 318 (2013).
41. J. Qin, G. S. Khaira, Y. Su, G. P. Garner, M. Miskin, H. M. Jaeger, and J. J. de Pablo, Soft Matter 9, 11467 (2013).
42. A. W. Long and A. L. Ferguson, Mol. Syst. Des. Eng. 3, 49 (2018).
43. Y. Ma and A. L. Ferguson, Soft Matter 15, 8808 (2019).
44. G. van Anders, D. Klotsa, A. S. Karas, P. M. Dodd, and S. C. Glotzer, ACS Nano 9, 9542 (2015).
45. Y. Geng, G. van Anders, P. M. Dodd, J. Dshemuchadse, and S. C. Glotzer, Sci. Adv. 5, eaaw0514 (2019).
46. R. B. Jadrich, B. A. Lindquist, and T. M. Truskett, J. Chem. Phys. 146, 184103 (2017).
47. B. A. Lindquist, R. B. Jadrich, W. D. Piñeros, and T. M. Truskett, J. Phys. Chem. B 122, 5547 (2018).
48. M. Z. Miskin, G. Khaira, J. J. de Pablo, and H. M. Jaeger, Proc. Natl. Acad. Sci. U. S. A. 113, 34 (2016).
49. M. Rechtsman, F. H. Stillinger, and S. Torquato, Phys. Rev. Lett. 95, 228301 (2005).
50. M. Rechtsman, F. H. Stillinger, and S. Torquato, Phys. Rev. E 73, 011406 (2006).
51. M. Rechtsman, F. H. Stillinger, and S. Torquato, Phys. Rev. E 75, 031403 (2007).
52. É. Marcotte, F. H. Stillinger, and S. Torquato, Soft Matter 7, 2332 (2011).
53. É. Marcotte, F. H. Stillinger, and S. Torquato, J. Chem. Phys. 134, 164105 (2011).
54. É. Marcotte, F. H. Stillinger, and S. Torquato, J. Chem. Phys. 138, 061101 (2013).
55. A. Jain, J. R. Errington, and T. M. Truskett, Soft Matter 9, 3866 (2013).
56. A. Jain, J. R. Errington, and T. M. Truskett, Phys. Rev. X 4, 031049 (2014).
57. W. D. Piñeros, M. Baldea, and T. M. Truskett, J. Chem. Phys. 144, 084502 (2016).
58. W. D. Piñeros, M. Baldea, and T. M. Truskett, J. Chem. Phys. 145, 054901 (2016).
59. W. D. Piñeros and T. M. Truskett, J. Chem. Phys. 146, 144501 (2017).
60. A. L. Ferguson, A. Z. Panagiotopoulos, P. G. Debenedetti, and I. G. Kevrekidis, Proc. Natl. Acad. Sci. U. S. A. 107, 13597 (2010).
61. A. W. Long and A. L. Ferguson, J. Phys. Chem. B 118, 4228 (2014).
62. E. Chiavazzo, R. Covino, R. R. Coifman, C. W. Gear, A. S. Georgiou, G. Hummer, and I. G. Kevrekidis, Proc. Natl. Acad. Sci. U. S. A. 114, E5494 (2017).
63. W. Chen and A. L. Ferguson, J. Comput. Chem. 39, 2079 (2018).
64. W. Chen, A. R. Tan, and A. L. Ferguson, J. Chem. Phys. 149, 072312 (2018).
65. J. M. L. Ribeiro, P. Bravo, Y. Wang, and P. Tiwary, J. Chem. Phys. 149, 072301 (2018).
66. M. M. Sultan, H. K. Wayment-Steele, and V. S. Pande, J. Chem. Theory Comput. 14, 1887 (2018).
67. P. Zhou, J. C. Proctor, G. van Anders, and S. C. Glotzer, Mol. Phys. 117, 3968 (2019).
68. D. Barber, Bayesian Reasoning and Machine Learning (Cambridge University Press, 2012).
69. L. Vlcek and A. A. Chialvo, J. Chem. Phys. 143, 144110 (2015).
70. M. S. Shell, J. Chem. Phys. 129, 144108 (2008).
71. A. Chaimovich and M. S. Shell, J. Chem. Phys. 134, 094112 (2011).
72. I. Bilionis and N. Zabaras, J. Chem. Phys. 138, 044313 (2013).
73. In practice, this necessitates that at each optimization step, the system is started from a disordered state and not from configurations of the previous step.
74. B. A. Lindquist, R. B. Jadrich, and T. M. Truskett, J. Chem. Phys. 145, 111101 (2016).
75. W. D. Piñeros, B. A. Lindquist, R. B. Jadrich, and T. M. Truskett, J. Chem. Phys. 148, 104509 (2018).
76. D. Banerjee, B. A. Lindquist, R. B. Jadrich, and T. M. Truskett, J. Chem. Phys. 150, 124903 (2019).
77. B. A. Lindquist, R. B. Jadrich, and T. M. Truskett, Soft Matter 12, 2663 (2016).
78. B. A. Lindquist, R. B. Jadrich, M. P. Howard, and T. M. Truskett, J. Chem. Phys. 151, 104104 (2019).
79. C. S. Adorf, J. Antonaglia, J. Dshemuchadse, and S. C. Glotzer, J. Chem. Phys. 149, 204102 (2018).
80. W. G. Noid, J. Chem. Phys. 139, 090901 (2013).
81. R. B. Jadrich, J. A. Bollinger, B. A. Lindquist, and T. M. Truskett, Soft Matter 11, 9342 (2015).
82. B. A. Lindquist, S. Dutta, R. B. Jadrich, D. J. Milliron, and T. M. Truskett, Soft Matter 13, 1335 (2017).
83. T. Sanyal and M. S. Shell, J. Chem. Phys. 145, 034109 (2016).
84. D.-L. Zhang, C.-F. Mu, H. Wang, R. Car, and W. E, J. Chem. Phys. 149, 034101 (2018).
85. L. Paramonov, M. G. Burker, and S. N. Yaliraki, Coarse-Graining of Condensed Phase and Biomolecular Systems (CRC Press, 2008).
86. A. Yethiraj and A. van Blaaderen, Nature 421, 513 (2003).
87. J. W. Swan, P. A. Vasquez, P. A. Whitson, E. M. Fincke, K. Wakata, S. H. Magnus, F. D. Winne, M. R. Barratt, J. H. Agui, R. D. Green, N. R. Hall, D. Y. Bohman, C. T. Bunnell, A. P. Gast, and E. M. Furst, Proc. Natl. Acad. Sci. U. S. A. 109, 16023 (2012).
88. L. Yao, N. Sharifi-Mood, I. B. Liu, and K. J. Stebe, J. Colloid Interface Sci. 449, 436 (2015).
89. N. Sharifi-Mood, I. B. Liu, and K. J. Stebe, Soft Matter 11, 6768 (2015).
90. I. I. Smalyukh, Annu. Rev. Condens. Matter Phys. 9, 207 (2018).
91. Q. Chen, S. C. Bae, and S. Granick, Nature 469, 381 (2011).
92. Y. Wang, Y. Wang, D. R. Breed, V. N. Manoharan, L. Feng, A. D. Hollingsworth, M. Weck, and D. J. Pine, Nature 491, 51 (2012).
93. A. S. Keys, C. R. Iacovella, and S. C. Glotzer, Annu. Rev. Condens. Matter Phys. 2, 263 (2011).
94. P. J. Steinhardt, D. R. Nelson, and M. Ronchetti, Phys. Rev. B 28, 784 (1983).
95. W. Lechner and C. Dellago, J. Chem. Phys. 129, 114707 (2008).
96. R. Kumar, G. M. Coli, M. Dijkstra, and S. Sastry, J. Chem. Phys. 151, 084109 (2019).
97. P. Geiger and C. Dellago, J. Chem. Phys. 139, 164105 (2013).
98. R. S. DeFever, C. Targonski, S. W. Hall, M. C. Smith, and S. Sarupria, Chem. Sci. 10, 7503 (2019).
99. M. Fulford, M. Salvalaglio, and C. Molteni, J. Chem. Inf. Model. 59, 2141 (2019).
100. P.-L. Chau and A. J. Hardwick, Mol. Phys. 93, 511 (1998).
101. J. R. Errington and P. G. Debenedetti, Nature 409, 318 (2001).
102. P. M. Larsen, S. Schmidt, and J. Schiøtz, Modell. Simul. Mater. Sci. Eng. 24, 055007 (2016).
103. C. L. Phillips and G. A. Voth, Soft Matter 9, 8552 (2013).
104. M. Spellings and S. C. Glotzer, AIChE J. 64, 2198 (2018).
105. C. S. Adorf, T. C. Moore, Y. J. U. Melle, and S. C. Glotzer, J. Phys. Chem. B 124, 69 (2019).
106. W. F. Reinhart, A. W. Long, M. P. Howard, A. L. Ferguson, and A. Z. Panagiotopoulos, Soft Matter 13, 4733 (2017).
107. W. F. Reinhart and A. Z. Panagiotopoulos, Soft Matter 13, 6803 (2017).
108. W. F. Reinhart and A. Z. Panagiotopoulos, Soft Matter 14, 6083 (2018).
109. B. D. Snow, D. D. Doty, and O. K. Johnson, Front. Mater. 6, 120 (2019).
110. Z. Yang, X. Li, L. C. Brinson, A. N. Choudhary, W. Chen, and A. Agrawal, J. Mech. Des. 140, 111416 (2018).
111. R. B. Jadrich, B. A. Lindquist, and T. M. Truskett, J. Chem. Phys. 149, 194109 (2018).
112. R. B. Jadrich, B. A. Lindquist, W. D. Piñeros, D. Banerjee, and T. M. Truskett, J. Chem. Phys. 149, 194110 (2018).
113. C. A. López, V. V. Vesselinov, S. Gnanakaran, and B. S. Alexandrov, J. Chem. Theory Comput. 15, 6343 (2019).
114. L. Yan, R. Ravasio, C. Brito, and M. Wyart, Proc. Natl. Acad. Sci. U. S. A. 114, 2526 (2017).
115. G. Goel, W. P. Krekelberg, J. R. Errington, and T. M. Truskett, Phys. Rev. Lett. 100, 106001 (2008).
116. J. Carmer, G. Goel, M. J. Pond, J. R. Errington, and T. M. Truskett, Soft Matter 8, 4083 (2012).
117. J. I. Monroe and M. S. Shell, Proc. Natl. Acad. Sci. U. S. A. 115, 8093 (2018).
118. S. Ju, T. Shiga, L. Feng, Z. Hou, K. Tsuda, and J. Shiomi, Phys. Rev. X 7, 021024 (2017).
119. A. Tran, M. Tran, and Y. Wang, Struct. Multidiscip. Optim. 59, 2131 (2019).
120. S. S. Schoenholz, E. D. Cubuk, D. M. Sussman, E. Kaxiras, and A. J. Liu, Nat. Phys. 12, 469 (2016).
121. S. S. Schoenholz, E. D. Cubuk, E. Kaxiras, and A. J. Liu, Proc. Natl. Acad. Sci. U. S. A. 114, 263 (2017).
122. E. D. Cubuk, R. J. S. Ivancic, S. S. Schoenholz, D. J. Strickland, A. Basu, Z. S. Davidson, J. Fontaine, J. L. Hor, Y.-R. Huang, Y. Jiang, N. C. Keim, K. D. Koshigan, J. A. Lefever, T. Liu, X.-G. Ma, D. J. Magagnosc, E. Morrow, C. P. Ortiz, J. M. Rieser, A. Shavit, T. Still, Y. Xu, Y. Zhang, K. N. Nordstrom, P. E. Arratia, R. W. Carpick, D. J. Durian, Z. Fakhraai, D. J. Jerolmack, D. Lee, J. Li, R. Riggleman, K. T. Turner, A. G. Yodh, D. S. Gianola, and A. J. Liu, Science 358, 1033 (2017).
123. N. H. Paulson, M. W. Priddy, D. L. McDowell, and S. R. Kalidindi, Acta Mater. 129, 428 (2017).
124. A. Çeçen, T. Fast, E. C. Kumbur, and S. R. Kalidindi, J. Power Sources 245, 144 (2014).
125. Z. Yang, Y. C. Yabansu, R. Al-Bahrani, W.-k. Liao, A. N. Choudhary, S. R. Kalidindi, and A. Agrawal, Comput. Mater. Sci. 151, 278 (2018).
126. D. J. Kozuch, F. H. Stillinger, and P. G. Debenedetti, Proc. Natl. Acad. Sci. U. S. A. 115, 13252 (2018).
127. T. Guo, D. J. Lohan, J. T. Allison, R. Cang, and Y. Ren, 210049 ed. (AIAA, 2018).
128. R. Liu, A. Kumar, Z. Chen, A. Agrawal, V. Sundararaghavan, and A. Choudhary, Sci. Rep. 5, 11551 (2015).
129. M. Laumanns and J. Ocenasek, Parallel Problem Solving from Nature—PPSN VII (2002), pp. 298–307.
130. R. T. Marler and J. S. Arora, Struct. Multidiscip. Optim. 26, 369 (2004).
131. J. H. Holtz and S. A. Asher, Nature 389, 829 (1997).
132. K. Lee and S. A. Asher, J. Am. Chem. Soc. 122, 9534 (2000).
133. J. Ge and Y. Yin, Angew. Chem., Int. Ed. 50, 1492 (2011).
134. J. W. Mullinax and W. G. Noid, J. Chem. Phys. 131, 104110 (2009).
135. T. C. Moore, C. R. Iacovella, and C. McCabe, J. Chem. Phys. 140, 224104 (2014).
136. T. Sanyal, J. Mittal, and M. S. Shell, J. Chem. Phys. 151, 044111 (2019).
137. M. E. Sharp, F. X. Vázquez, J. W. Wagner, T. Dannenhoffer-Lafage, and G. A. Voth, J. Chem. Theory Comput. 15, 3306 (2019).
138. M. R. Feeney, P. G. Debenedetti, and F. H. Stillinger, J. Chem. Phys. 119, 4582 (2003).
139. M. Grzelczak, J. Vermant, E. M. Furst, and L. M. Liz-Marzán, ACS Nano 4, 3591 (2010).
140. Z. M. Sherman and J. W. Swan, ACS Nano 13, 764 (2019).
141. N. Kashtan and U. Alon, Proc. Natl. Acad. Sci. U. S. A. 102, 13773 (2005).
142. M. Hemery and O. Rivoire, Phys. Rev. E 91, 042704 (2015).
143. X. Tang, B. Rupp, Y. Yang, T. D. Edwards, M. A. Grover, and M. A. Bevan, ACS Nano 10, 6791 (2016).
144. X. Tang, J. Zhang, M. A. Bevan, and M. A. Grover, J. Process Control 60, 141 (2017).
145. M. P. Howard, W. F. Reinhart, T. Sanyal, M. S. Shell, A. Nikoubashman, and A. Z. Panagiotopoulos, J. Chem. Phys. 149, 094901 (2018).
146. R. D. Batten, F. H. Stillinger, and S. Torquato, J. Appl. Phys. 104, 033504 (2008).
147. M. Florescu, S. Torquato, and P. J. Steinhardt, Proc. Natl. Acad. Sci. U. S. A. 106, 20658 (2009).
148. G. S. Khaira, J. Qin, G. P. Garner, S. Xiong, L. Wan, R. Ruiz, H. M. Jaeger, P. F. Nealey, and J. J. de Pablo, ACS Macro Lett. 3, 747 (2014).
149. S. P. Paradiso, K. T. Delaney, and G. H. Fredrickson, ACS Macro Lett. 5, 972 (2016).
150. M. R. Khadilkar, S. Paradiso, K. T. Delaney, and G. H. Fredrickson, Macromolecules 50, 6702 (2017).