In this work, we introduce a comprehensive machine-learning algorithm, namely, a multifidelity neural network (MFNN) architecture for data-driven constitutive metamodeling of complex fluids. The physics-based neural networks developed here are informed by the underlying rheological constitutive models through the synthetic generation of low-fidelity model-based data points. The performance of these rheologically informed algorithms is thoroughly investigated and compared against classical deep neural networks (DNNs). The MFNNs are found to recover the experimentally observed rheology of a multicomponent complex fluid consisting of several different colloidal particles, wormlike micelles, and other oil and aromatic particles. Moreover, the data-driven model is capable of successfully predicting the steady state shear viscosity of this fluid under a wide range of applied shear rates based on its constituting components. Building upon the demonstrated framework, we present the rheological predictions of a series of multicomponent complex fluids made by DNN and MFNN. We show that by incorporating the appropriate physical intuition into the neural network, the MFNN algorithms capture the role of experiment temperature, the salt concentration added to the mixture, as well as aging within and outside the range of training data parameters. This is made possible by leveraging an abundance of synthetic low-fidelity data that adhere to specific rheological models. In contrast, a purely data-driven DNN is consistently found to predict erroneous rheological behavior.

Many complex and structured fluids exhibit a wide range of rheological responses to different flow characteristics owing to their evolving internal structures [1–8]. The ability to represent this complex rheological behavior through closed-form constitutive equations constructed from kinematic variables is essential in better understanding and designing these complex fluids and their processing conditions. Thus, efforts in constitutive modeling of complex fluids date back to the inception of the field of rheology itself [9–11]. However, as the material’s response to an applied deformation or stress becomes more complicated, so does the constitutive model of choice to describe such a response, resulting in more model parameters and hence more experimental protocols to determine those parameters. Generalized Newtonian fluids are a class of constitutive equations in which different functional forms are designated to represent the changes in the non-Newtonian viscosity [12–14]. For instance, the power-law (PL) model represents a single exponent rate dependence, which can take a shear-thinning [15,16] or shear-thickening [17] form. The PL model can be simply written as Eq. (1), where k and n are the only two model parameters,

$\sigma = k\,\dot{\gamma}^{n}.$
(1)

Very often, structured fluids exhibit a yield stress below which the material does not flow; only upon reaching this critical yield stress does it begin to flow [18,19]. In its simplest form, where this flow is rate independent, this behavior can be captured through a Bingham plastic model [10] shown as Eq. (2), where $\sigma_y$ is the yield stress and $k$ is the continuous phase viscosity,

$\sigma = \sigma_y + k\,\dot{\gamma}.$
(2)

Although Eq. (2) has two model parameters, it does not reflect the rate dependence of the fluid. The majority of yield stress fluids exhibit a nonlinear dependence on the shear rate once the yield stress is exceeded [20–26]. A combination of Eqs. (1) and (2) leads to the so-called Herschel–Bulkley (HB) model [9] with three different parameters, in which the viscosity itself is related to the shear rate in a nonlinear way,

$\sigma = \sigma_y + k\,\dot{\gamma}^{n}.$
(3)

While the HB model successfully describes the flow curve of a wide range of yield stress materials, a direct connection between the parameter values and the underlying microscopic physics determining the viscosity is not clear. In fact, the PL and HB models have clear limitations in their ability to extrapolate to high shear rates, predicting vanishingly small viscosities at high shear rates for any $n$ smaller than 1. This is not a problem for many applications but clearly shows that the model does not capture the physics controlling the viscosity. For this reason, a number of attempts have been made to derive constitutive models that better connect microstructure and bulk rheological properties [27–29]. One example is the three component (TC) model [26], recently proposed as a physically based alternative to the HB model and shown as Eq. (4). In this model, the scaling of the different dissipation mechanisms is fixed and the three parameters have clear physical meaning,

$\sigma = \sigma_y + \sigma_y\left(\dot{\gamma}/\dot{\gamma}_c\right)^{0.5} + \eta_{bg}\,\dot{\gamma}.$
(4)
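To make the progression from Eq. (1) to Eq. (4) concrete, the following minimal Python sketch evaluates the four models for a given shear rate; the parameter values shown are illustrative placeholders rather than fitted values from this work.

```python
import numpy as np

def power_law(gamma_dot, k, n):
    """Eq. (1): sigma = k * gamma_dot**n."""
    return k * gamma_dot**n

def bingham(gamma_dot, sigma_y, k):
    """Eq. (2): rate-independent yield stress plus a Newtonian continuous phase."""
    return sigma_y + k * gamma_dot

def herschel_bulkley(gamma_dot, sigma_y, k, n):
    """Eq. (3): yield stress with power-law rate dependence."""
    return sigma_y + k * gamma_dot**n

def three_component(gamma_dot, sigma_y, gamma_dot_c, eta_bg):
    """Eq. (4): TC model with the fixed 0.5 scaling of the plastic dissipation term."""
    return sigma_y + sigma_y * (gamma_dot / gamma_dot_c)**0.5 + eta_bg * gamma_dot

# Illustrative comparison over a typical rheometric shear-rate window
gamma_dot = np.logspace(-2, 2, 42)                                   # 0.01 to 100 1/s
sigma_hb = herschel_bulkley(gamma_dot, sigma_y=1.0, k=0.5, n=0.4)    # placeholder parameters
sigma_tc = three_component(gamma_dot, sigma_y=1.0, gamma_dot_c=0.1, eta_bg=0.05)
```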

The rheological response of complex and structured fluids to a change in thermomechanical or thermochemical environment may take a variety of forms. For example, for rheologically simple fluids, increasing the temperature usually decreases the viscosity of the fluid, which can be explained through an Arrhenius-like description [14]; however, this relationship becomes more complex when changing the temperature also changes the interactions between the constituent components. For instance, thermoreversible gels show jumps in their moduli upon reaching a critical temperature that effectively induces gelation in the system [30]. The same effect can be observed by changing the salinity of a mixture. As salt is added to a particulate system, charge screening around different particles changes the effective interactions between them and, hence, the formation/breakage of particle-particle bonds and structures [31–33]. This structure and network formation subsequently changes the macroscopic rheological measures of the system entirely.

The slow aging of these structures is another factor that can change the rheology of a structured fluid [34–39]. The structure of a colloidal gel coarsens over time in a continuous and nontrivial manner, which, in turn, changes the measured moduli of the material. Nonetheless, the timescale for this aging behavior is significantly longer than the initial network formation or single-particle diffusion timescales, and it also depends on the interactions between the particles as well as the fraction of solid content in a mixture [40–48]. Since the material is ever-changing with respect to its microstructure, the rheological behavior of the system cannot be expressed through traditional constitutive models in which this time dependence is not mathematically present. These structure-rheology couplings are not limited to colloidal gels and can be extended to all systems where the primary components of the system form structures that affect the physical behavior. For instance, surfactant molecules in aqueous solutions can self-assemble to form wormlike micelles (WLMs) with a distinct rheological behavior. Experimental and theoretical studies on the structure and rheology of WLM solutions show a range of exotic rheological responses to different applied deformations and fields [49–54].

For many decades, phenomenological models (from early models such as Maxwell and Kelvin to the most recent ones such as isokinematic hardening and other thixotropic models with microstructural parameters [55]) have been developed and employed for understanding the underpinning physics of a problem. These are extremely important developments as they provide invaluable insight into the underlying physics of a particular phenomenon. This makes such constitutive models perfect candidates to study and understand ideal rheological behaviors. Nonetheless, as the material becomes multicomponent and more complex, the number of additional parameters required to fully capture the rheological response of the fluid to an applied deformation increases and eventually becomes computationally prohibitive. In other words, the diversity of rheological responses observed in these structured fluids makes it a very challenging task to represent these behaviors through constitutive models with an optimal number of model parameters. The emergence of multiple time and length scales due to structure formation and breakup at different local or global scales [56–58] requires combining different models or increasing the number of parameters to make a phenomenological model of choice more adaptive. For instance, one can imagine that the model parameters that describe the time evolution of the structure parameter or yield stress for colloidal gels [59–63] bear some physical connection to temperature dependence, aging, salinity, as well as other deterministic factors. However, since these parameters do not necessarily carry a direct physical underpinning and are often difficult to fit using experimental results, erroneous predictions begin to emerge. This is even more pronounced when real-world fluids with multiple interacting particles and constituents are under question. A complex fluid of choice may consist of different solid particles with different chemical identities (hence, different physical interactions), surfactants, different polymers, and aqueous and nonaqueous simple fluids. For these multicomponent systems, where the fluid behavior is governed by structure formation and breakup at several length scales and is strictly coupled to the thermomechanochemical identity and history of the fluid, devising a meaningful constitutive relation between deformation and stress that represents the time-, salinity-, and age-dependence of the fluid is extremely challenging, if possible at all.

With an ever-increasing computational power, and the ability to process data at an unprecedented rate, data-driven models have become an undeniable and extremely powerful method of choice for understanding and predicting different phenomena [64–66]. Over the past few years, there has been an increasing interest in using machine-learning (ML) algorithms to harness the power of data-driven modeling in all avenues of science and engineering. However, the field of soft matter and more specifically rheology is clearly lagging behind and not capitalizing on such advanced methodologies. This is perhaps, in part, due to the ambiguous consequences of the produced metamodels, and more importantly, their correlations to the fundamental underlying physics that drive a particular phenomenon. However, these issues can be efficiently alleviated by devising the appropriate type of ML approach that is guided or informed by the physical laws of interest.

Bishop [67] defined ML as a subset of artificial intelligence (AI) that performs a specific task without using explicit instructions. The types of ML algorithms differ in their approach, the type of input and output data, and the type of task or problem that they intend to solve. A neural network (NN) is a type of supervised [68] ML algorithm inspired by biological neuronal systems to process and predict data. NNs consist of many interconnected processing elements called neurons that work together to model a computational structured framework in which the complex relations between the inputs and outputs are revealed as functions. These networks are capable of learning such functions, and the system learns to correct its own errors by adjusting the weights and biases between these functions. Learning in NNs is adaptive, which means the weights of the neurons are changed continuously to generate a correct response when new inputs are provided. Over the years, different types of NNs have been introduced, namely, the artificial neural network (ANN) [69–75], deep neural network (DNN) [76–79], convolutional neural network (CNN) [80–82], and recurrent neural network (RNN) [83,84]. Each of these NNs has proven to be effective for a variety of physical applications. Regardless of the type of NN, or, as a more general category, ML algorithm, these methodologies rely on an abundance of data to maintain a reliable and accurate predictive ability. In other words, it is absolutely essential for NNs to be trained on exhaustively large datasets to enable any reliable predictions. Another setback for ML and data-driven algorithms in scientific applications is the fact that ML algorithms are generally limited to predictions within the range of the training datasets. In other words, given that sufficiently large training datasets are employed, data-driven models can only predict outputs for input conditions that fall within the range of the training data (interpolation) and are not able to predict beyond this range (extrapolation). Furthermore, the underlying physics is nonexistent in ML algorithms, since the basic idea behind every ML algorithm is prediction based on data correlations and statistics. Therefore, there has been increasing attention to developing methods for reducing the need for large datasets, as well as including the essential physics of a given problem in the NN. The pioneering work of Raissi et al. [85] introduced a novel concept called the “physics-informed neural network” (PINN) to address these issues. The idea behind such networks is to add the physical governing equations to the NN framework to achieve a meaningful metamodel. By introducing the essential physics of the problem, and conditioning the NN correlations to always adhere to these physical laws, the need for large training datasets is diminished, and physical problems can be solved with fewer observations in a dataset. Subsequently, a number of variations on the original PINN for solving different problems have been introduced: the parareal physics-informed neural network (PPINN) [86] for parallel-in-time learning of a problem, the multifidelity physics-informed neural network (MPINN) [87] for solving problems in which the training data exist with varying levels of confidence, the fractional physics-informed neural network (fPINN) [88] for solving fractional partial differential equations, and other methods such as the nonlocal physics-informed neural network (nPINN) [89], DeepONet [90], and DeepXDE [91].
It should be noted that PINNs are not the only physics-based approach to data-driven modeling and ML. For instance, Wang et al. [92] introduced a physics-informed ML approach to reconstruct the Reynolds stress discrepancies in Reynolds-averaged Navier–Stokes (RANS) modeling using DNS data. A framework based on the physics-informed ML approach was designed by Wu et al. [93] to augment turbulence models. Swischuk et al. [94] introduced a parametric surrogate model that develops a low-dimensional parameterization of quantities of interest, such as pressure and temperature, using proper orthogonal decomposition (POD). By incorporating these parameters into the ML algorithm, the methodology learns to map the input parameters to the POD expansion coefficients and predict a high-dimensional output problem. Jia et al. [95] presented physics-guided ML to predict and simulate the temperature profile of a lake. Rackauckas et al. [96] introduced the universal differential equation (UDE) framework as a scientific ML framework that can be used in a range of different problems such as discovering unknown governing equations and accurate extrapolation.

As discussed above, there are various methods of incorporating the essential physics into different ML algorithms. In the case of neural networks, the physical governing laws can be included implicitly or explicitly. Explicit inclusion of physics in the form of differential equations has proven to be very efficient in accelerating the solution of problems with known constitutive models. On the other hand, implicit inclusion of physics can be more effective when the physical laws that govern the phenomena are not particularly accurate. In this work, we present an implicit methodology for the incorporation of physical governing laws, referred to as the MFNN, to construct a rheological metamodel for predicting the quasi-steady-state simple shear rheological response of a complex multicomponent system. Moreover, we seek to determine the applicability of physics-based neural networks in predicting the rheology of a complex fluid with respect to more complex parameters such as aging, salinity of the mixture, temperature dependence, etc. The goal of the current work is to establish a framework that preserves the essential physical and rheological underpinning of the problem and, by doing so, enables accurate predictions of the rheological response of a given complex multicomponent system.

Thus, the architecture of the current study is as follows: Sec. II A provides information about the material system as well as the experiments performed and type of data in hand; Sec. II B presents detailed information about the MFNN as well as the simple DNN and its corresponding structures; Sec. III compares the predictions made using the proposed method, DNN, and a material-specific constitutive equation with respect to the steady flow curves of the material under simple shear. Finally, Sec. IV provides concluding remarks as well as an outlook for future data-driven frameworks in rheology.

The material used in the current study is a model fluid formulated to mimic the complexity of consumer product formulations. The model system investigated consists of several components: a surfactant continuous phase, formulated to self-assemble into entangled WLMs, different types of colloidal particles, polymer, oil additive, and in some samples a model perfume. Table I presents the full variability and the range of variations for each component in different samples, named samples 1 through 19. In these systems, due to the presence of WLMs and the polymers, the colloidal particles self-assemble into a network driven by depletion attraction forces [97,98] and form a colloidal gel with a measurable yield stress. The surfactant continuous phase present in the system exhibits the typical rate-dependent viscosity found in WLM systems, with shear-thinning behavior above a critical shear rate due to the alignment of WLMs under shear. On the other hand, the gel network formed by the colloids breaks apart under flowing conditions, resulting in a different shear-thinning mechanism. The coupling of these two phenomena, as well as the presence of other particles, results in a rather complex response even under steady shear flows. The steady state flow curve is found to be described rather accurately using a combined TC model in which the Newtonian term is replaced by the Carreau model to account for the nonlinear rheological behavior of the continuous phase. The exponent of the Carreau model is fixed to 1 to describe the presence of a stress plateau in the shear-thinning region of the WLMs. Equation (5) represents this material-specific model, referred to as the TCC model in the paper,

$\sigma = \sigma_y + \sigma_y\left(\dot{\gamma}/\dot{\gamma}_c^{TC}\right)^{0.5} + \dfrac{k\,\dot{\gamma}}{\left[1+\left(\dot{\gamma}/\dot{\gamma}_c^{Carreau}\right)^{2}\right]^{0.5}}.$
(5)

Evidently, the TCC model has four different fitting parameters, but each one has a clear physical meaning, allowing one to set expectations and bounds that limit the space of possible flow curves that such a complex formulated system can express. Nonetheless, these parameters do not offer a mathematical expression that directly reflects the formulation of the fluid, its age, or a clear correlation to experiment temperature.
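As a reference for the low-fidelity data generation discussed later, a minimal sketch of Eq. (5) is given below, assuming the Carreau term enters as a rate-dependent viscosity contribution with a fixed exponent; the parameter values passed in are order-of-magnitude placeholders rather than fitted coefficients from this work.

```python
import numpy as np

def tcc_stress(gamma_dot, sigma_y, gamma_dot_c_tc, k, gamma_dot_c_carreau):
    """Eq. (5): TC model with the Newtonian term replaced by a Carreau term whose
    exponent is fixed so that a stress plateau appears in the WLM thinning regime."""
    yield_terms = sigma_y + sigma_y * (gamma_dot / gamma_dot_c_tc)**0.5
    carreau_term = k * gamma_dot / np.sqrt(1.0 + (gamma_dot / gamma_dot_c_carreau)**2)
    return yield_terms + carreau_term

# Placeholder parameters; viscosity is recovered as stress divided by shear rate
gamma_dot = np.logspace(-2, 2, 42)
eta = tcc_stress(gamma_dot, 0.01, 0.001, 1.5, 50.0) / gamma_dot
```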

TABLE I.

The constituent components/formulation of the different samples tested (samples 1–19).

Sample | Colloid 1 | Surfactant (%) | Colloid 2 | Colloid 3 | Perfume amount (%) | Perfume type | Polymer (%) | Oil drop (%)
1  2.5 18 1.61 1.1
2  2.5 18 1.61 0.5
3  2.5 12 1.61 0.5
4  3.5 12 3.5 0.5
5  1.5 18 3.5 1.61 0.5 3.35
6  1.5 18 0.5 3.35
7  1.5 12 1.61 1.1 0.5
8  2.5 18 3.5 1.1 0.5 3.35
9  2.5 14 1.61 0.85 0.23 0.8
10  2.5 12 1.61 1.1
11  3.5 12 1.1 3.35
12  1.5 12 3.5 1.1 0.5
13  3.5 18 0.5 0.5
14  2.5 14 1.61 0.75 0.23 1.6
15  3.5 12 3.5 1.61 0.5 0.5 3.35
16  1.5 18 0.5 0.5
17  3.5 18 1.61 1.1 0.5 3.35
18  3.5 18 3.5 1.1
19  1.5 12 1.1 3.35

1. Deep neural network

NNs are a subset of ML algorithms that can be employed to predict the output responses of a complex system. Each NN consists of several hidden layers, and each hidden layer contains several neurons. Typically, a NN with more than two hidden layers is called a DNN. The bottom-right architecture presented in Fig. 1 shows a schematic view of a DNN with 7 inputs and 1 output. As named across the colored boxes in the figure, the inputs to the DNN in our study are the different constituents of the model fluid, the imposed deformation rate, age, salt concentration in the background fluid, and experiment temperature. These parameters are then correlated to a single output, the shear viscosity, using a number of layers and neurons. For visual purposes, the figure only includes three (3) hidden layers, with several neurons shown per layer. In practice, this number was also varied, and the sensitivity of the predictions to the number of these layers and neurons is discussed later. The variables in a DNN are learned by minimizing the loss function according to Eq. (6),

$\mathrm{MSE}=\frac{1}{N}\sum_{i=1}^{N}\left(y_{\mathrm{actual},i}-y_{\mathrm{predicted},i}\right)^{2}.$
(6)

In Eq. (6), MSE is the loss function, $y_{\mathrm{actual}}$ is the measured result, and $y_{\mathrm{predicted}}$ is the NN-predicted result. The NN operates in a manner that minimizes the MSE by adjusting the weights and biases connecting each layer/neuron to the next. By adjusting these variables in a NN, a metamodel is produced to predict the output based on new input variables. As evident in Fig. 1, this methodology solely relies on data points and correlations between them and does not adhere to any physical laws.
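As an illustration of this architecture, the sketch below builds a fully connected network with seven inputs and a single viscosity output and trains it against the MSE loss of Eq. (6). It uses PyTorch, and the layer widths, activation, optimizer, and placeholder data are assumptions for demonstration, not the exact configuration used in this work.

```python
import torch
import torch.nn as nn

class DNN(nn.Module):
    """Fully connected network: 7 formulation/condition inputs -> 1 viscosity output."""
    def __init__(self, width=20, depth=3):
        super().__init__()
        layers, in_features = [], 7
        for _ in range(depth):
            layers += [nn.Linear(in_features, width), nn.Tanh()]
            in_features = width
        layers.append(nn.Linear(in_features, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Illustrative training loop on placeholder data (x: N x 7 inputs, y: N x 1 viscosities)
model = DNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                      # Eq. (6)
x, y = torch.rand(100, 7), torch.rand(100, 1)
for epoch in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```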

FIG. 1.

Schematic view of the MFNN and DNN. The inputs to the DNN architecture are the experimental measurements, while the MFNN architecture leverages the abundance of low-fidelity data points synthetically generated through different models as well as the accuracy of the experimental data as the high-fidelity dataset.


2. Multifidelity neural network

Here, we introduce a novel method to leverage NN capabilities to incorporate a rheological basis for data-driven modeling and prediction of experimental results. To do so, we introduce an MFNN framework that leverages advances in physics-based NNs together with the limited number of actual data at hand. As mentioned before, there exist several methods to incorporate the essential physics of a problem into the NN. Here, we include the physical law by means of low-fidelity data generation from a given physical model. It is important to note that in this framework, high-fidelity data refer to any experimental or high-resolution simulation results that reliably reflect the rheological behavior. These high-fidelity data are commonly expensive (with respect to time and resources) and exist in limited quantities. With a limited number of high-fidelity data comes a very limited understanding of the problem as well. In this methodology, low-fidelity data are generated by introducing noise into different constitutive models, synthetically generating an abundance of data. However, it should be noted that the low-fidelity datasets are not reliable for optimization purposes and can only be used in conjunction with high-fidelity datasets. By controlling the constitutive equation of choice adopted for low-fidelity data generation, we investigate the role of the underlying physics. Ultimately, a combination of low- and high-fidelity datasets should be utilized for an appropriate physical understanding and accurate optimization. For an introduction to multifidelity modeling, please refer to Fernández-Godino et al. [99]. The simplest relation between low- and high-fidelity data can be expressed as Eq. (7), in which $y_{HF}$ and $y_{LF}$ are high-fidelity and low-fidelity data, respectively [99]. In addition, $\rho(x)$ and $\delta(x)$ are the multiplicative and additive correlation surrogates, respectively. These two functions are problem-specific and do not take predefined forms. Based on the problem, one could consider different functions of choice, such as polynomials, for these correlations and find the coefficients of the noted functions based on the data at hand,

$y_{HF} = \rho(x)\, y_{LF} + \delta(x).$
(7)

Equation (7) expresses a linear correlation in multifidelity modeling. However, the correlation between low- and high-fidelity data does not obey Eq. (7) in most problems; hence, a general expression to reveal the correlation between low- and high-fidelity data is needed. The general form of Eq. (7) can be written as Eq. (8), in which $G(\cdot)$ is a general combination of $y_{LF}$ and $x$,

$y_{HF} = G\left(y_{LF}, x\right).$
(8)

In addition, the decomposition of the general correlation into linear and nonlinear parts is shown in Eq. (9),

$y_{HF} = G_{nl}\left(y_{LF}, x\right) + G_{l}\left(y_{LF}, x\right).$
(9)

Figure 1 shows a schematic of an MFNN. An MFNN consists of two interconnected NNs; the first NN handles the low-fidelity dataset, and the second one deals with the high-fidelity data coming from experiments. The high-fidelity part of an MFNN contains a “linear” part and a “nonlinear” part, as discussed in Eq. (9). Each part of the high-fidelity NN learns the correlation between the output and input data using its own network of layers and neurons [87]. The left architecture presented in Fig. 1 shows the schematic view of the low-fidelity NN with seven inputs and one output. The top-right architecture, labeled as the high-fidelity neural network, shows the eight input parameters (the seven inputs to the other two NNs, as well as the viscosity output from the low-fidelity NN) of the high-fidelity platform. Also clearly named across the colored boxes in the figure are the inputs to both NNs in our study. For visual purposes, the figure only includes two hidden layers for the low-fidelity NN and the nonlinear part of the high-fidelity NN, and a single hidden layer for the linear part of the high-fidelity NN. In practice, these numbers were also varied, and the sensitivity of the predictions to the number of layers and neurons was studied. The variables of an MFNN are learned by minimizing the loss function according to Eq. (10),

$\mathrm{MSE} = \mathrm{MSE}_{y_{HF}} + \mathrm{MSE}_{y_{LF}} + \lambda\sum_{i} w_{i}^{2}.$
(10)

In Eq. (10), $\mathrm{MSE}_{y_{HF}}$ and $\mathrm{MSE}_{y_{LF}}$ are the deviations between predicted and actual data for the high- and low-fidelity data, respectively. Also, $w_i$ are the weights of the NNs and $\lambda$ is the L2 regularization rate on the weights to prevent overfitting [100]. The MFNN can benefit from the accuracy of the high-fidelity dataset as well as the abundance of the low-fidelity dataset to predict a suitable output based on the input variables. The idea is to use the low-fidelity NN to provide trends to the high-fidelity NN, since the number of high-fidelity data is much smaller in comparison. In addition, the low-fidelity NN prevents the high-fidelity NN from diverging off the correct solution.
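A minimal sketch of the two-network structure of Fig. 1 is given below, assuming PyTorch: the low-fidelity network maps the seven inputs to a viscosity, the high-fidelity network takes those seven inputs plus the low-fidelity prediction and splits into linear and nonlinear branches as in Eq. (9), and both are trained jointly against the composite loss of Eq. (10). Layer sizes and the regularization weight are placeholders, not the settings used in this work.

```python
import torch
import torch.nn as nn

class MFNN(nn.Module):
    def __init__(self, n_inputs=7, width=20):
        super().__init__()
        # Low-fidelity network: learns trends from abundant synthetic (model-based) data.
        self.lf_net = nn.Sequential(
            nn.Linear(n_inputs, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1))
        # High-fidelity correction, Eq. (9): y_HF = G_l(y_LF, x) + G_nl(y_LF, x).
        self.hf_linear = nn.Linear(n_inputs + 1, 1)            # linear branch (no activation)
        self.hf_nonlinear = nn.Sequential(
            nn.Linear(n_inputs + 1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1))

    def forward(self, x):
        y_lf = self.lf_net(x)
        z = torch.cat([x, y_lf], dim=1)
        return y_lf, self.hf_linear(z) + self.hf_nonlinear(z)

def mfnn_loss(model, x_lf, y_lf_data, x_hf, y_hf_data, lam=1e-4):
    """Composite loss of Eq. (10): LF misfit + HF misfit + L2 weight penalty."""
    y_lf_pred, _ = model(x_lf)
    _, y_hf_pred = model(x_hf)
    mse = nn.functional.mse_loss
    l2 = sum((w**2).sum() for w in model.parameters())
    return mse(y_hf_pred, y_hf_data) + mse(y_lf_pred, y_lf_data) + lam * l2
```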

For details on convergence comparison between the DNN and the MFNN architectures and the residual losses corresponding to each method, refer to Fig. 17 of the  Appendix.

3. Effect of the NN architecture on predictions

An important aspect of NN algorithms that has been studied extensively [85,87,101] is their architecture; namely, the number of layers within the NN and the number of neurons per layer can affect the accuracy of the resulting predictions. To this end, we use the relative absolute error (RAE) according to Eq. (11) as the measure of accuracy to compare the role of the number of hidden layers (depth) and the number of neurons per layer (width),

$\mathrm{RAE} = \frac{1}{N}\sum_{n=1}^{N}\frac{\left| y_{\mathrm{actual}} - y_{\mathrm{predicted}} \right|}{y_{\mathrm{actual}}}.$
(11)

All calculations are done based on a predictive MFNN for the same sample to exclude system-specific biases. Increasing the number of hidden layers as well as the neurons in each layer adds complexity to the NN and, expectedly, changes its accuracy. Nonetheless, increasing the NN elements does not necessarily increase the accuracy of its predictions. Adding more neurons to the NN can lead to overfitting, which, in turn, reduces the efficiency of the algorithm. Table II shows the relative error of different NN architectures, namely, the DNN and the MFNN algorithms, in predicting the steady state shear viscosity of a sample based on its constituting components. The specifics of the results are later presented and discussed in Fig. 7. In this study, widths ranging between 5 and 20 and depths ranging between 2 and 4 are found to yield the best levels of accuracy while avoiding overfitting.
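A minimal sketch of the error measure of Eq. (11) and the width/depth sweep summarized in Table II is shown below; the candidate widths and depths match those reported in the table, while the training-and-prediction routine is a hypothetical placeholder assumed to be supplied elsewhere.

```python
import numpy as np

def relative_absolute_error(y_actual, y_predicted):
    """Eq. (11): mean relative absolute error over N observations."""
    y_actual, y_predicted = np.asarray(y_actual), np.asarray(y_predicted)
    return np.mean(np.abs(y_actual - y_predicted) / y_actual)

# Illustrative sweep over the architectures of Table II; train_and_predict(width, depth)
# is assumed to train an MFNN on 18 samples and predict the held-out one.
widths, depths = [5, 10, 20, 50], [1, 2, 4, 16]
# rae_table = {(w, d): relative_absolute_error(y_exp, train_and_predict(w, d))
#              for w in widths for d in depths}
```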

TABLE II.

The mean RAE of NN architectures with different numbers of hidden layers (depth) and neurons per layer (width), evaluated on a single sample with all other variables held constant.

Width \ Depth | 1 | 2 | 4 | 16
5 | 0.196 | 0.092 | 0.121 | 0.094
10 | 0.153 | 0.095 | 0.101 | 0.101
20 | 0.108 | 0.101 | 0.103 | 0.101
50 | 0.101 | 0.102 | 0.102 | 0.102

4. Training dataset

The training dataset contains the quasisteady shear viscosity behavior of 19 different sample compositions over a range of shear rates between 0.01 and 100 s⁻¹ (42 different shear rate points). In order to evaluate the ability of the MFNN and the DNN algorithms, in each section the networks are trained on 18 (of 19 total) samples' experimental data and asked to make predictions for the 19th sample. In our study, all 19 samples have been systematically tried and tested; however, for the sake of brevity, results for two different samples are presented in each section. These do not represent the best or worst performances of the neural networks and are merely two different examples to ensure the robustness of the methodology. Thus, throughout the paper, the term “prediction” refers to the NN's predicted results for a sample or a parameter choice that was removed from the training datasets. In order to determine the importance of each constituent in the sample on the rheological behavior, a sensitivity analysis was performed, which indicated that the shear viscosity is most sensitive to one of the colloids (18% sensitivity) and the surfactant amount (16% sensitivity) among all constituents of the system. Nonetheless, while relatively smaller than for the two components mentioned, the amount of the remaining components also affects the viscosity (between 6% and 9%). In an ideal situation, where a wide variety of experimental data exist for each of these components, one would define each of them as an input to the DNN and MFNN to enable the most accurate predictions; however, since only a very limited number of samples are at hand, all of the remaining parameters are clustered into a virtual category reflecting the sum of these components' compositions. Collectively, the colloid fraction, the surfactant fraction, and this new parameter (all other fractions combined) are the three state variables that are set as direct inputs to the NNs. In practice, three different concentrations of salts are later added to the system in order to adjust the background viscosity (at the shear rate of 10 s⁻¹) to 1, 5, and 10 Pa s, respectively. Samples are also stored for different times and tested at 1 month intervals to study the rheology of aged materials. This testing is performed at two different temperatures (25 and 40 °C). Thus, in addition to the three state variables, the imposed shear rate, the salt concentration, the experiment temperature, and the age of the sample are other direct inputs to the NNs. These seven (7) input parameters are then correlated to a single output in both the DNN and the MFNN, which is the steady state shear viscosity of the fluid.

The training dataset described above constructs the input parameters and data for the DNN as well as for the high-fidelity portion of the MFNN architecture. As noted previously, the low-fidelity data of the MFNN algorithm are generated based on a physical constitutive equation. This allows the role of physics to be studied directly by changing the constitutive model employed to generate the data. This is of utmost importance, as the choice of the model used directly dictates the physical intuition of the NN. For instance, if the material under investigation exhibits a yield stress under experimental conditions, the physical model of choice should also have a yield stress description. Later, we will present the effect of such physical choices on the ability of the MFNN platform to capture the rheological behavior. The type of constitutive equation employed is stated for each part of the study. After selecting the appropriate constitutive relation, we generate a series of data, referred to as low-fidelity data, by introducing 10% noise based on an uncorrelated Gaussian noise process into the constitutive law of choice. This process makes the training procedure more robust and ensures the effectiveness of the MFNN when facing noisy data. It should be noted that the high-fidelity data will always be associated with some noise due to environmental and experimental effects. Generation of low-fidelity data increases the total number of data available to train a proper NN. The number of low-fidelity data is chosen in a way that the RAE based on Eq. (11) does not depend on the number of data. Figure 2 shows the variation of the relative error with respect to the number of low-fidelity data (for the same case study as in Table II and Fig. 7). For all subsequent results presented in the current study, the number of generated low-fidelity data is chosen to be 10 times the number of high-fidelity data at hand. With fewer LF data, the RAE becomes dependent on the number of low-fidelity data.
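The low-fidelity data generation described above can be sketched as follows, here using the HB model of Eq. (3) as the informing physics; the 10% uncorrelated Gaussian noise and the 10:1 LF-to-HF ratio follow the text, while the HB parameter values and the sampling of shear rates are placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_lf_data(hf_inputs, sigma_y, k, n, noise_level=0.10, ratio=10):
    """Synthesize low-fidelity flow-curve data from the HB model (Eq. (3))
    with 10% uncorrelated Gaussian noise, at 10x the number of HF points."""
    n_lf = ratio * len(hf_inputs)
    gamma_dot = 10 ** rng.uniform(-2, 2, n_lf)           # shear rates in 0.01-100 1/s
    stress = sigma_y + k * gamma_dot**n                  # Herschel-Bulkley stress
    viscosity = stress / gamma_dot
    viscosity *= 1.0 + noise_level * rng.standard_normal(n_lf)
    return gamma_dot, viscosity

# Example with placeholder HB parameters for one formulation (42 HF shear rates)
gamma_dot_hf = np.logspace(-2, 2, 42)
gd_lf, eta_lf = generate_lf_data(gamma_dot_hf, sigma_y=1.0, k=0.5, n=0.4)
```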

FIG. 2.

The mean RAE of the MFNN as a function of the number of low-fidelity data. Here, the number of high-fidelity data is always the entire dataset at hand, which in this study is 18 series of data points, each spanning over 42 different applied shear rates.


The ultimate goal of the present work is to establish a framework in which an ML algorithm, namely, a PINN, can be developed and employed as an alternative meta-constitutive model. Thus, we first present the predictions made using such a framework for the steady-state shear viscosity of a complex fluid, using a DNN, an MFNN, and a constitutive model developed specifically for the material under investigation. Subsequently, we investigate the applicability of our proposed methodologies to predict the shear viscosity of the material with respect to the role of aging, temperature, and the addition of salt, which are not reflected in the TCC model shown in Eq. (5).

It should be noted that results predicted by the DNN platform do not contain any physical intuition and are merely data-driven predictions made based on other material compositions. However, in the MFNN method, the underlying physics of the problem manifests in the form of the LF data generated, and thus predictions made using the MFNN directly reflect the choice of model and the complexity of the physical laws that the LF data adhere to. Hence, having developed the MFNNs of choice, there are several different pathways for the actual utilization of these networks:

(1) The HF data of a given material are available, as well as the material-specific constitutive equation that explains those data (in this case, the TCC equation). In this situation, the MFNN predictions are simply compared to model fitting as an alternative. Here, we refer to these predictions as “interpolation,” as the parameters of a working constitutive model are known.

(2) While the TCC model accurately explains the steady shear rheology of these fluids, it is strictly limited to this material and its components. Realistically, it is likely that the experimental data (HF) are available, but the more familiar constitutive equations such as the HB model, PL model, etc., are unable to capture the nontrivial rheological behavior of the material. In such instances, the MFNN can be employed as an evolving metamodel, where minimal physics is presumed for the physical law that informs the NN, in order to provide the best possible prediction of the rheological behavior. As such, the MFNN is informed by well-known preexisting models such as a simple PL or HB model, which are known to be inaccurate in describing the actual rheology but are used as the most basic presumptions for the behavior of the material.

(3) In product/material design and development, the typical experimental data and working constitutive equations that explain those data are often available; however, the model does not necessarily correlate to each individual component of the fluid, and thus it cannot predict the rheology based on the formulation of a new material. For instance, in the case of our multicomponent fluid, what would an entirely new formulation in terms of surfactant or colloidal fraction entail in terms of rheological behavior? In order to answer this using traditional constitutive models, an a priori functional form for the relationship between different model parameters and the material components is required. Otherwise, for any new composition/combination of material constituents, extensive new testing is required. Here, we explore the possibility of predicting the rheology of a new sample formulation. In other words, no HF or LF data are available for the new sample, and its behavior is predicted solely using the existing data on other material formulations. We refer to these predictions as “extrapolation,” as they fall outside the bounds of the training datasets.

(4) Since the TCC model parameters do not explicitly reflect the temperature-, salt concentration-, or age-dependent rheology of the material, no theoretical prediction based on the constitutive model is possible for the rheology of an aged formulation or a formulation at elevated temperatures. Hence, we investigate the applicability of the proposed method with respect to different sample ages, salinity, and the temperature dependency of the fluid.
In other words, here we seek to alternatively use NNs to answer this question: “how does change in temperature/salinity/sample age affect the bulk rheology of a multicomponent colloidal gel-WLM mixture?.”

In interpolating (and fitting) the shear viscosity of this material system, for each of the samples the actual experimental data and the coefficients of the TCC model that describe those data are known. Thus, the LF data points are generated using the appropriate physical constitutive model of choice for each given sample. It should be noted that the actual HF data are only used to fit the TCC model and generate the LF data using the model parameters, but are eliminated from the HF training dataset. For the simple DNN, the training dataset includes the information of all other compositions except the targeted sample. The regression plot of the trained model for a sample is shown in Fig. 3. It should also be noted that the performances of the DNN and the MFNN are found to be highly typical and not dependent on specific samples. In other words, the results presented in Fig. 3 do not represent the best or the worst performances of the MFNN/DNN and are merely chosen as comparison points. Evidently, the MFNN tracks the experimental data (HF) very closely, with minimal deviation, while the DNN fails to recover the monotonic changes of the viscosity. This confirms that the MFNN platform, by incorporating the LF data using the appropriate TCC model parameters, inherently captures the details of the rheological features of a given sample. On the other hand, the strong deviations between the predicted and actual viscosities for the DNN show that, at least with the limited number of actual data points available for these samples, a purely data-driven prediction cannot reflect any rheological features of the system. Alternatively, one can plot the predicted flow curves instead, which are shown in Fig. 4. One could argue that the inability of the DNN will diminish as the number of data points at hand increases. Nonetheless, it is unlikely that experimental data for a given rheological behavior will be available in abundances similar to the dataset sizes commonly observed in other applications of DNNs. The results from the MFNN, on the other hand, accurately track the experimental observations. We note that the model predictions provided in Fig. 4 are the results of the constitutive equation based on Eq. (5) and are the basis for the LF data generation in the MFNN. As evident in Fig. 4, the MFNN here does not offer any improvement over the constitutive model at hand. Thus, one could argue that when the parameters of a constitutive model are known, the data-driven approach simply recovers the same results using the LF data that already describe the rheology very accurately.

FIG. 3.

The regression between the actual experimental results and the ones predicted using MFNN and DNN algorithms for the steady state shear viscosity of sample 5. The MFNN is informed by LF data generated through the TCC model.

FIG. 4.

Flow curve predictions made by three different methods: a TCC constitutive equation (model), the MFNN, and a classical DNN, for two different samples with known TCC model parameters used to generate the LF data points. The top (sample 9) and bottom (sample 13) figures correspond to two different samples for illustrative purposes and to remove system-specific biases.


As previously mentioned, the underlying physical laws that govern the rheology and dynamics of a complex fluid can be represented in various mathematical forms. With respect to a multicomponent structured fluid under flow, representing these behaviors in a singular closed-form equation (as in the TCC model) is far from trivial. Thus, one would initially use classical models to explain and describe a certain behavior, before developing system-specific equations that require time-consuming and expensive rheological interrogation of the material under different flowing conditions. On the other hand, the results in Fig. 4 clearly show that inclusion of the underlying physics in the NN is essential in enabling the algorithm to provide an accurate prediction. In this section, we investigate the role of the accuracy of this physical intuition in recovering the rheology of our fluid. In other words, we are seeking the answer to this question: what if the ideal constitutive model of choice is not known for a given material? In the first step, instead of the system-specific TCC model, we use classical constitutive equations with different levels of complexity (and hence numbers of fitting parameters) to generate the low-fidelity data. Namely, the PL and HB models with, respectively, two and three fitting parameters are used in generating the low-fidelity datasets. The material under investigation shows a yield stress, two different shear-thinning regimes and exponents, and a short plateau viscosity in the intermediate shear rate regime. It is clear that a simple PL model, which only predicts a single thinning behavior, is unable to capture such viscosity changes. Nonetheless, the goal here is to interrogate the MFNN's ability to predict the shear viscosity with such a primitive physical intuition. The generated low-fidelity datasets as well as the acquired high-fidelity datasets of the other samples (excluding the sample under question) are employed to train the MFNN. In order to provide a benchmark against classical DNN algorithms, results are also presented using a DNN with no physical intuition. The regression plots for both the classical DNN and the MFNNs using different physics are shown in Fig. 5. While the classical DNN does not seem to predict the actual experiment, by incorporating minimal physics into the problem using the PL model, the regression is ameliorated. Further improvement is achieved upon the inclusion of the HB model as the physical basis.

FIG. 5.

Regression between the actual experimental results and the ones predicted by three different neural network algorithms: classical DNN, physics-informed network based on power-law (MFNN-PL) model, and physics-informed network based on Herschel–Bulkley (MFNN-HB) model for the steady-state shear viscosity of sample 5.


Similar to the results in Fig. 4, one can output a fully recovered flow curve of the target sample using the DNN and the MFNNs described above, as shown in Fig. 6. While the simple DNN does not follow the experimental data accurately, even the simplest and most primitive physical model (PL) appears to provide a rather general trend of the range of viscosities observed. In this framework, since the number of LF data points significantly outweighs the number of available HF data points, the shape of the fitting and the general trends of the viscosity are governed by the physical equation used, while the ranges and values are corrected through the HF experimental values. Thus, using a simple PL model does not provide the flexibility required to capture the complexity of the viscosity behavior. Nonetheless, an incrementally more complex model such as HB satisfactorily captures these complexities and decreases the mean deviation between the MFNN-HB fitting predictions and the actual experimental data to less than 1%. This observation is significant with respect to predictions made by the MFNN model, as it suggests that the MFNN outperforms the pure constitutive model, as well as the purely data-driven model, in the absence of ideal models that explain the rheology. In fact, such perfect constitutive models do not exist for most complex fluids. For instance, even a complex fractal iso-kinematic hardening (FIKH) or thixotropic elastoviscoplastic (TEVP) model with more than 10–15 fitting parameters cannot accurately recover the time- and rate-dependent rheology of a multicomponent crude oil. In such situations, and considering the cost of parameterizing the model for a new sample or flow protocol, the MFNN offers a significant leap in predicting the rich rheology of the fluid using what is known to be the correct but nontrivial physical model.

FIG. 6.

Experimentally measured steady state shear viscosity flow curves compared to the predictions made by the neural networks using no physics (DNN), PL physics (MFNN-PL), and HB physics (MFNN-HB). The top (sample 11) and bottom (sample 13) figures correspond to two different samples for illustrative purposes and to remove system-specific biases.


As previously discussed, constitutive models (regardless of their ability to describe a set of experimentally measured rheological behaviors) commonly lack predictive abilities for new formulations. This is due to the fact that all constitutive modeling approaches are reduced-order models, in which state variables and material components/compositions are represented collectively through a number of model parameters. Since, in a multicomponent system, changing the fraction of one component can change the interactions between the others, such a simplistic representation of material parameters takes away any predictive ability. In contrast, NNs do not reduce the order of the problem at hand, and each component and its variations can be direct inputs, with nontrivial correlations made through the hidden layers and neurons to the output viscosity. Here, we evaluate the ability of the DNN and MFNN algorithms in predicting the rheological behavior of a new formulation. Thus, the results presented in this section are pure predictions of the NNs for a composition of the model fluid. To do so, the experimental data for all samples but the one under question are available to the NN for training, as well as a model that accurately describes those behaviors. Nevertheless, the model parameters for generating the LF data points for the new formulation are also unknown and have to be estimated and predicted. This is similar to realistic material development, in which the only known information about a newly developed sample is its components and their compositions. In other words, the MFNN results presented here are complete predictions of the viscosity behavior of a given sample, based solely on the TCC constitutive equation through which the material's behavior can be explained and not on its actual model parameters.

The TCC model has four main fitting parameters with no particular functional form that correlates them to the composition/formulation of the material. For instance, there is no clear connection between these parameters and the colloid or surfactant concentrations. Hence, in the first step, one needs to provide an estimate for each of the four TCC model parameters of a new sample based on the compositions of other existing samples. In the absence of a data-driven method, the most reasonable approach would be to deduce the model parameters by interpolating from existing samples. For instance, if the yield stress for 0.015 and 0.035 fractions of a specific colloid is 0.1 and 0.3 Pa, respectively, one could simply assume that for the intermediate fraction of 0.025 the yield stress can be approximated as roughly 0.2 Pa. We should note that this is only a logical choice in the absence of a physical underpinning or a functional form that describes the colloid fraction-dependent yield stress of the fluid. In this section, such interpolations are used for the “model” predictions. The yellow line in Fig. 7 shows the results for the interpolation-predicted TCC model, and the red line exhibits the performance of the DNN. Evidently, both the DNN and the predicted model fail to recover the viscosity behavior of a new untested sample.

On the other hand, for the MFNN algorithm, the coefficients of the untested sample are predicted via a simple ML algorithm known as a support vector machine (SVM). Table III presents the accuracy of the SVM predictions of the TCC coefficients, compared to the actual model parameters obtained from fitting the TCC model to the experimental data. The SVM-predicted TCC model coefficients are used to generate the LF datasets and to train the MFNN algorithm. The MFNN algorithm is consequently asked to provide a prediction, based on these LF data and the HF data of the other samples, of the viscosity of the untested fluid, presented as the green line in Fig. 7. In contrast to the DNN and the interpolated TCC model, the MFNN leverages the abundance of the LF data from the model as well as the accuracy of the HF data used in the DNN. Combining the strength of both methods, the MFNN clearly provides a significantly closer prediction of the actual experimental data without any knowledge of the new sample. As can be seen from Fig. 7, both the general trend and the range of the viscosity are predicted with minimal deviations from their actual values using the MFNN algorithm. Since the physics of the problem, and thus the nonmonotonic behavior of the viscosity, is incorporated through the LF datasets and the constitutive model, the accuracy of the prediction is highly dependent on the number of LF data and the accuracy of the SVM-predicted TCC model parameters. However, an acceptable level of accuracy (an MSE of less than 1%) is achieved with only a very simple coefficient prediction.
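A minimal sketch of this coefficient-prediction step, assuming scikit-learn's support vector regression, is shown below; `compositions` and `tcc_params` are placeholders standing in for the formulation table and the fitted TCC coefficients of the 18 training samples, and one regressor is fit per coefficient.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: 18 training formulations (composition fractions) and their
# fitted TCC coefficients [sigma_y, gamma_dot_TC, eta_bg, gamma_dot_CA].
compositions = np.random.rand(18, 3)          # colloid, surfactant, combined "other" fraction
tcc_params = np.random.rand(18, 4)

# One SVR per TCC coefficient, with feature scaling
svm = MultiOutputRegressor(make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)))
svm.fit(compositions, tcc_params)

new_formulation = np.array([[0.025, 0.12, 0.05]])    # hypothetical untested sample
predicted_tcc = svm.predict(new_formulation)          # coefficients used to generate LF data
```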

TABLE III.

The percentage error between the fitted-to-experiment TCC model parameters and the SVM-predicted TCC model parameters for sample 3, as in Fig. 7.

Parameter | $\sigma_y$ | $\dot{\gamma}_{TC}$ | $\eta_{bg}$ | $\dot{\gamma}_{CA}$
Actual value | 0.0072 | 0.00023 | 1.52 | 47.98
Predicted value | 0.013 | 0.0011 | 1.31 | 66.53
Percentage error | 80.5 | 378.3 | 13.8 | 38.7

One can argue that for many complex fluids, physically based estimates of the model parameters can be deduced instead of relying on a purely data-driven interpolation technique such as the SVM. The regression plots for the viscosities predicted using the DNN and the MFNN are alternatively shown in Fig. 8, confirming the poor performance of the DNN compared to the MFNN algorithm.

FIG. 7.

Experimentally measured steady state shear viscosity flow curves compared to fully predicted flow curves obtained with no physical basis incorporated into the neural network (DNN), the TCC model with coefficients interpolated using the other samples at hand (model), and the physics-based neural network using SVM-predicted TCC coefficients as the physical intuition. The top (sample 3) and bottom (sample 9) figures correspond to two different samples for illustrative purposes and to remove system-specific biases. Circles, diamonds, squares, and plus signs represent results for the interpolation-predicted TCC model, the DNN, the MFNN, and the experiments, respectively.

FIG. 8.

Regression between actual experimental viscosities measured and the NN-predicted viscosities for an unknown sample (sample 3) using DNN and MFNN.


A change of temperature plays an important role in the structure and rheology of our model fluid. Here, the experimental protocol is as follows: first, the shear viscosity is probed at different shear rates at room temperature (25 °C), followed by a temperature ramp to 40 °C at a constant deformation rate of 2 s⁻¹, and finally a second flow curve at 40 °C. An example of the experimental protocol is shown in Fig. 9. The fluid studied here is a consumer product and thus is tested over realistic processing, transportation, storage, and use temperatures. Within those temperatures, the fluid does not show any phase transitions in the polymer, colloid, or surfactant fractions. Nonetheless, the coarsening of the microstructure, the interactions between different components, and the intercorrelation of different constituents and their rate dependence will be directly affected by changing the temperature. In fact, that is the rationale for the changed flow curves at elevated temperatures, where the secondary thinning regime is absent.

FIG. 9. Three sets of flow curves: circles and squares show the rate-dependent viscosity at 25 and 40°C, respectively; diamonds along the temperature axis show the temperature-dependent viscosity at 2 s−1.

One would argue that while all components individually experience and react to the temperature change, the WLMs dramatically change their structure at elevated temperatures, resulting in the disappearance of the second thinning regime and of the associated Carreau-like behavior. Consequently, the underlying physics changes with temperature. The simple DNN and MFNN predictions of the viscosities at elevated temperature for two different samples are provided in Fig. 10. In training these NNs, no information about the elevated-temperature flow curves is provided; the algorithms are trained only on the room-temperature viscosity data and on the temperature ramp at constant shear rate.

FIG. 10. Prediction of the shear viscosity vs shear rate behavior made with the simple DNN and the physics-informed MFNN at the test temperature of 40°C, based on an initial experiment temperature of 25°C and a temperature ramp from 25 to 40°C at a constant shear rate of 2 s−1. The top (sample 16) and bottom (sample 17) panels correspond to two different samples, for illustrative purposes and to remove system-specific biases.

As clearly shown by the deviations between the DNN predictions and the experimental measurements, physical intuition about the overall rheological behavior is key to providing a meaningful prediction. Thus, the choice of model, and carrying the appropriate form of this physical intuition through the low-fidelity training data, plays an essential role in enabling the MFNN to predict the rheological behavior properly. As previously discussed, the rheological behavior at 25°C can be accurately described using the TCC model shown as Eq. (5); however, as the experiment temperature increases, the viscosity behavior changes to a typical HB or TC model behavior, since the Carreau-like regime diminishes. Thus, for the low-fidelity data generation within the MFNN algorithm, a simple HB model is adopted instead.
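
To make this step concrete, the sketch below synthesizes an HB-based low-fidelity flow curve over a typical experimental shear-rate window, using the viscosity form η = σ_y/γ˙ + k γ˙^(n−1) implied by the HB stress expression; the parameter values and the shear-rate range are placeholders rather than the fitted values used in this work.

```python
import numpy as np

def herschel_bulkley_viscosity(gammadot, sigma_y, k, n):
    """Steady shear viscosity implied by the HB model, eta = sigma/gammadot."""
    return sigma_y / gammadot + k * gammadot ** (n - 1.0)

# synthetic (low-fidelity) flow curve spanning the experimental shear-rate window
gammadot_lf = np.logspace(-3, 3, 200)                                          # shear rates, 1/s
eta_lf = herschel_bulkley_viscosity(gammadot_lf, sigma_y=5.0, k=2.0, n=0.5)    # placeholder parameters

# these (shear rate, viscosity) pairs are what the low-fidelity branch of the MFNN is trained on
low_fidelity_data = np.column_stack([gammadot_lf, eta_lf])
```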

The rheology of the model fluid investigated in this study is rather complex owing to its number of structure-forming constituents. Upon formulation, the sample does not contain salt and has a viscosity lower than the target viscosity for practical purposes. Different levels of salt are added to the mixture as viscosity modifiers; however, the addition of salt plays a dual role, acting on both the WLMs and the colloidal particles. In practice, different salt concentrations are mainly used to set the background plateau viscosity at intermediate shear rates by changing the colloidal and surfactant interactions. While these concentrations differ from one sample to another, they are all formulated to set the viscosity to 1, 5, and 10 Pa s before the second thinning regime.

One would argue that the addition of salt directly changes the effective interactions between the colloidal particles and the dominant length scale of the WLM structures. The three colloidal particle sizes and variations each have their specific zeta potential, resulting in different phase dynamics for each component under flow as well as in quiescent conditions. Nonetheless, the diverging viscosities (analogous to the yield stress of the fluid) at low shear rates for the samples with different salt concentrations do not show a significant variation. This viscosity increase is even smaller at the highest shear rates explored, perhaps because at larger shear rates the fluid is effectively destructured and interactions between different components cannot change the macroscopic response of the material. In contrast, the viscosity in the intermediate shear rate regime of 0.1 < γ˙ < 10 s−1 increases tenfold (Fig. 11). This results in a significant change in the shear-thinning behavior observed in the intermediate shear rate regime, with minimal changes to the overall viscosity of the fluid at the lower and higher shear rates. Therefore, the viscosity cannot simply be shifted to higher or lower values with the same nonmonotonic features as the salt concentration changes. This typical behavior is illustrated in Fig. 11 for a given sample.

FIG. 11. The role of salt concentration on the shear viscosity vs shear rate behavior of sample 3 over three different salt concentrations.

The addition of salt changes the interactions between the components of the system beyond any model prediction, and these changes would only be revealed through detailed molecular-level simulations. However, as previously discussed, NN predictions can be made within the range of the training variables or outside that range, referred to as interpolation and extrapolation, respectively. Since three different salt concentrations are investigated experimentally, we probe the interpolation prediction by training the NN on salt concentrations of 1 and 3 and predicting the viscosity at the intermediate salt concentration. Once again, the simple DNN, despite capturing some features of the flow curves, does not follow the experimental measurements. On the other hand, the MFNN closely predicts the experiments for both samples presented in Fig. 12. It should be noted that the physical law used to generate the low-fidelity data remains the hybrid model based on Eq. (5).
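
A minimal sketch of this interpolation test is shown below; the data rows and the column layout are hypothetical, the point being only that the high-fidelity training set keeps the flow curves measured at salt levels 1 and 3 while salt level 2 is held out for prediction.

```python
import numpy as np

# columns: salt concentration, shear rate (1/s), measured viscosity (Pa s) -- placeholder rows
data = np.array([
    [1.0, 0.1, 12.0],
    [1.0, 10.0, 0.9],
    [2.0, 0.1, 45.0],
    [2.0, 10.0, 1.1],
    [3.0, 0.1, 110.0],
    [3.0, 10.0, 1.3],
])

train_mask = np.isin(data[:, 0], [1.0, 3.0])          # interpolation: hold out salt level 2
X_train, y_train = data[train_mask, :2], data[train_mask, 2]
X_test, y_test = data[~train_mask, :2], data[~train_mask, 2]

# X_train/y_train feed the high-fidelity branch; the low-fidelity branch still uses
# Eq. (5)-based synthetic curves, as noted in the text. For the extrapolation test
# discussed next, train on salt levels 1 and 2 instead and hold out level 3.
```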

FIG. 12. Prediction of the shear viscosity vs shear rate behavior made with the simple DNN and the physics-informed MFNN at the salt concentration of 2, based on salt concentrations of 1 and 3 (interpolation). The top (sample 10) and bottom (sample 18) panels correspond to two different samples, for illustrative purposes and to remove system-specific biases.

In testing the applicability of the MFNN algorithm to extrapolated predictions, Fig. 13 shows the NN viscosity predictions for the highest salt concentration, with the networks trained on the low and intermediate salt concentrations. The simple DNN shows significant deviations from the experimental measurements: for one of the samples in Fig. 13 it predicts shear thickening in the intermediate shear rate regime, and for the other a steep shear-thinning regime. It should be noted that the DNN is purely data-driven, and thus the accuracy of its predictions can be improved by increasing the number of training datasets. Nonetheless, by introducing the physical intuition through the low-fidelity datasets, the MFNN recovers the experimental measurements with excellent agreement.

FIG. 13. Prediction of the shear viscosity vs shear rate behavior made with the simple DNN and the physics-informed MFNN at the salt concentration of 3, based on salt concentrations of 1 and 2 (extrapolation). The top (sample 3) and bottom (sample 18) panels correspond to two different samples, for illustrative purposes and to remove system-specific biases.

Of particular interest in practical, real-world applications is the ability to predict the behavior of a given material at different times. For this consumer product, such age-dependent rheology directly determines the shelf life of the material. The structural aging of colloidal gels and WLMs is a well-studied field of research. Due to the dynamical and many-body nature of the particle interactions in each constituent, these microstructures and mesostructures coarsen and change over very long timescales, resulting in a gradual change of the rheological behavior as the sample ages. Figure 14 shows a typical change of the viscosity behavior of a given sample over the period of a year. As clearly indicated in Fig. 14, this behavior is nonmonotonic, with the yield stress of the fluid increasing over time while the terminal viscosity at high deformation rates decreases, making it challenging to capture through simplistic constitutive models.

FIG. 14. Steady state shear viscosity vs shear rate behavior of a multicomponent sample tested over a time period of 1 year. The color increments indicate the aging of the sample from fresh (0 month) up to 1 year of aging.

In practice, in order to study the role of aging in rheology, one needs to wait for long periods of time to be able to measure samples of different ages. Alternatively, one can leverage accelerated aging at elevated temperatures (due to the increased thermal motion of the particles); however, as discussed previously, a temperature change plays a dual role by changing the underlying physics. Here, we make predictions of the viscosity behavior at various aging times using the devised NNs and validate the applicability of such methodologies by comparing these predictions to experimentally measured rheological data. This is done in two different ways: (1) predicting the age-dependent viscosity of a sample given its rheology at younger ages, and (2) predicting the age-dependent viscosity behavior of an entirely new sample knowing the age dependence of other samples.

For the first approach, we train the NNs using the same sample's rheological measurements at different times and make predictions of the viscosity at a specific time. Since several data points for each sample's age-dependent rheology are available, we showcase the applicability of the NNs by predicting the oldest sample's rheology using the previous history of the fluid. Thus, with different NNs used to predict the rheology of the sample after 12 months of aging, we seek to answer the question: how many data points are required to provide a reasonable prediction of the viscosity behavior after a year? The experimental results after a year and the predictions made using the DNN and the MFNN are shown in Fig. 15. The legends and the color increments in the figure correspond to the different training sets provided before making predictions for the year-old sample. Evidently, given the full history of the sample at different ages, both the DNN and the MFNN algorithms accurately capture the experimentally observed viscosities. Nonetheless, the DNN algorithm does not provide a meaningful prediction until at least 7 months of rheology data are available, whereas the MFNN provides a very good prediction of the viscosity behavior given only the behavior of the fresh sample and the sample after one month of aging. Since the underlying physics of the problem does not change over time, the low-fidelity datasets are generated based on the TCC model. However, the TCC model coefficients for the viscosity behavior at the unknown age are predicted from the coefficients of the available months. These predictions (for the model parameters) are made using a simple ML algorithm, namely a moving weighted average (MWA), and linear regression. Using the predicted coefficients, a number of low-fidelity data points are generated to train the NN.
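
A minimal sketch of this coefficient-forecasting step is shown below; the ages, yield-stress values, and weighting scheme are illustrative placeholders rather than the values used in the study.

```python
import numpy as np

months  = np.array([0, 1, 3, 5, 7, 9])                                # ages with fitted TCC coefficients
sigma_y = np.array([0.006, 0.0065, 0.007, 0.0078, 0.0085, 0.009])     # placeholder yield stresses (Pa)

def moving_weighted_average(values, weights=(0.2, 0.3, 0.5)):
    """Weighted average of the most recent fits, weighting newer ages more heavily."""
    recent = np.asarray(values)[-len(weights):]
    return float(np.dot(recent, weights) / np.sum(weights))

def linear_extrapolation(t, values, t_target):
    """Least-squares line through (t, values), evaluated at the target age."""
    slope, intercept = np.polyfit(t, values, deg=1)
    return slope * t_target + intercept

sigma_y_12mo_mwa = moving_weighted_average(sigma_y)
sigma_y_12mo_lin = linear_extrapolation(months, sigma_y, t_target=12)
# the forecast coefficient then parameterizes the TCC model used to synthesize
# low-fidelity flow-curve data for the 12-month-old sample
```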

FIG. 15. Predictions of the viscosity vs shear rate behavior of a multicomponent complex fluid aged for 12 months, made through the (top) DNN and (bottom) MFNN architectures. The dashed lines represent the experimentally measured viscosities of sample 13 after 1 year of aging. While all predictions are made for the sample aged for 12 months, the color increments represent the number of training datasets provided to each NN before those predictions are made.

For the second scenario, we train the NNs on the rate-dependent rheology of all samples at all available ages and seek to predict the viscosity of an entirely new sample, based on its components, at different ages. This is of particular interest in industrial settings, where new formulations are devised regularly and a prediction of the long-time behavior of the material can be extremely informative. The only information known for the new sample is the fraction of its constituting components. Thus, the high-fidelity datasets used in this section consist of the aging behavior of the remaining available samples. In addition, the low-fidelity datasets are generated based on the TCC model and SVM-predicted model parameters. For details of the predictions based on sample components, please refer to the previous part of this paper. We note that the DNN utilizes the entire high-fidelity dataset and does not include the low-fidelity predictions. Figure 16 clearly shows that by using the DNN and feeding it the aging information of the other samples, an erroneous trend is predicted for all ages of the unknown sample. One can also argue that the trend of the DNN predictions remains largely unchanged with age: a thinning regime followed by a slight thickening regime and a second thinning at the highest shear rates. This stems from deviations in the prediction of the fresh sample to begin with. In contrast, the MFNN algorithm closely predicts the viscosity behavior at all ages with negligible deviations from the experimental measurements.
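
A hedged sketch of the composition-to-coefficient mapping is shown below using scikit-learn's support vector regressor; the composition features, coefficient values, and SVR hyperparameters are illustrative assumptions, not the settings used in this work.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# rows: known samples; columns: fractions of the constituting components (placeholder values)
compositions = np.array([
    [0.10, 0.020, 0.05, 0.010],
    [0.12, 0.030, 0.04, 0.020],
    [0.09, 0.020, 0.06, 0.015],
    [0.11, 0.025, 0.05, 0.010],
])
eta_bg = np.array([1.4, 1.6, 1.3, 1.5])          # one TCC coefficient per known sample (placeholder)

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
svr.fit(compositions, eta_bg)

new_sample = np.array([[0.105, 0.022, 0.055, 0.012]])     # composition of the unknown sample
eta_bg_pred = svr.predict(new_sample)[0]
# repeating this per coefficient gives the full TCC parameter set used to generate
# the low-fidelity aging data for the unknown sample
```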

FIG. 16. MFNN predictions of the shear viscosity vs shear rate behavior for an unknown sample (sample 13) based on its composition, at different aging times of (top-left) 3 months, (top-right) 6 months, (bottom-left) 9 months, and (bottom-right) 12 months.

FIG. 17. The residuals of the training process for the (top) MFNN and (bottom) DNN. In the top panel, the magnitudes of the residuals for both the low-fidelity and high-fidelity networks, as well as the total loss of the MFNN, are shown during the training process.

In this work, we introduced and studied the performance of an adaptable and comprehensive data-driven algorithm for constitutive metamodeling of the rheological behavior of complex fluids. The proposed MFNN is capable of taking advantage of high-fidelity experimental (or high-resolution simulation) data together with an abundance of synthetically generated low-fidelity data produced using the different constitutive models at hand. This provides an extremely powerful platform for employing data-driven and ML algorithms in areas of research where the small size of the available data often prevents a meaningful predictive capability from being devised. In contrast, the simple classical DNN, without a physical basis, is unable to capture the real behavior of the material. This is mainly because purely data-driven methods require an abundance of data to yield a meaningful ML algorithm, which is often not the case for rheological measurements. Our results showed that the DNNs are incapable of recovering the realistic rheological behavior; however, incorporating physical intuition into the neural network architecture, in the form of low-fidelity data generated through constitutive models, significantly improves the predictive ability of the algorithm. We further investigated the role of the accuracy of the constitutive models employed to generate the synthetic data and found that while even an oversimplified model such as the PL model improves upon the accuracy of the DNN within the MFNN framework, including fundamental rheological features such as the emergence of a yield stress in an HB model enables the MFNN to recover the experimental observations. More importantly, we showed that the MFNN can provide a rather accurate prediction for an entirely new sample for which only the components and their compositions are known. The MFNN is found to leverage the physical and phenomenological advantages of constitutive models, as well as data-driven learning of the actual experimental measurements, in providing a predictive capability. This can be explained by contrasting the fundamental differences between constitutive/phenomenological modeling and data-driven modeling. In constitutive and theoretical modeling, system variables and material-specific constituents and compositions are reduced and collectively represented through a number of model parameters. On the other hand, each component, system variable, or process condition can be used as an input parameter to the NN, without the need for reduced-order modeling. Relying on the physically informed methodologies proposed here, and on the individual contributions of different components, the MFNN enables prediction of rheology directly from formulation, offering a significant leap in material design and discovery.

Subsequently, we demonstrated the applicability of our proposed method as an alternative constitutive metamodel for predicting the viscosity behavior of a complex multicomponent fluid under different thermomechanical and thermochemical conditions. In particular, the roles of experiment temperature, salt level, and sample aging on the steady shear flow curves were studied using the simple DNN and the MFNN. We showed that once the appropriate physical intuition is carried through the low-fidelity datasets, the MFNN captures the rheological behavior of the sample within or outside the range of the training data points and parameters. This is of the utmost relevance and importance in many real-world material design protocols, where an informed prediction of the physical and rheological behavior of a material based on its formulation can be transformative. This was clearly demonstrated through the MFNN-predicted viscosity behavior of a sample, based on its constituents, over a period of 1 year.

While the MFNN architecture proposed in this paper shows great promise as an alternative data-driven constitutive metamodel for complex fluids, one must always employ such statistical methodologies cautiously, with a careful choice of the physics that the model adheres to. For instance, here we only used rather simple generalized Newtonian fluid (GNF) constitutive models. While GNF models are very useful in describing the rate-dependent viscosity of a complex fluid in shear flows, they are unable to provide any meaningful description of time-dependent or elastic effects. The MFNN (or any similar physics-based ML algorithm) relies directly on the choice of physics made to describe a phenomenon. Thus, a wrong choice of model will likely result in erroneous predictions, even when mitigated by an abundance of experimental data.

In this paper, we reported on devising and utilizing a physics-based MFNN architecture to predict the simple shear rheological behavior of a complex system. However, the physical intuition about the problem under investigation is not required to be in the form of low-fidelity data and can instead be manifested through differential equations that the algorithm has to comply with. The authors believe that such methodologies can be extremely powerful and practical, leveraging the advances in ML algorithms without compromising the essential physical and rheological underpinnings of the phenomena at hand. Nonetheless, one should note that the physical basis present in the MFNN is not limited to data generated from constitutive models and can be extended to directly solving functional forms and partial differential equations, expanding the window of applications of these methods to various flow conditions and rheological investigations.

M.M. and S.J. would like to acknowledge support by Northeastern University’s GapFund program. G.K. was supported by DOE PhILMS Grant No. DE-SC0019453.

Residual and loss: Here, we present the convergence comparison as well as the residual losses for both the DNN and MFNN architectures. As mentioned, the MFNN contains both a low-fidelity and a high-fidelity part to accurately capture the relation between the inputs and the output. To train the MFNN properly, the losses from both the low-fidelity and high-fidelity networks should be minimized. Figure 17 presents the residual losses for both parts as well as the total loss during the training process. The DNN, on the other hand, has a much simpler architecture; therefore, there is only one loss function to minimize to find the relation between inputs and outputs. The residual behavior of the DNN is also shown in Fig. 17. It should be noted that throughout this work, we have used a combination of the Adam optimizer and the L-BFGS method, together with Xavier initialization, to optimize the loss function, while the hyperbolic tangent function is employed as the activation function.
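
The two residuals tracked in Fig. 17 can be written compactly as mean-squared losses over the synthetic (low-fidelity) and experimental (high-fidelity) points, with the total loss as their sum; the PyTorch sketch below is an assumed formulation, since neither the original code nor any relative weighting of the two terms is specified here.

```python
import torch

def mfnn_losses(y_lf_pred, y_lf_data, y_hf_pred, y_hf_data):
    """Mean-squared residuals of the low- and high-fidelity branches, and their sum (total loss)."""
    loss_lf = torch.mean((y_lf_pred - y_lf_data) ** 2)
    loss_hf = torch.mean((y_hf_pred - y_hf_data) ** 2)
    return loss_lf, loss_hf, loss_lf + loss_hf

# example call with random stand-in tensors (many synthetic points, few experimental ones)
y_lf_pred, y_lf_data = torch.randn(200, 1), torch.randn(200, 1)
y_hf_pred, y_hf_data = torch.randn(40, 1), torch.randn(40, 1)
loss_lf, loss_hf, total_loss = mfnn_losses(y_lf_pred, y_lf_data, y_hf_pred, y_hf_data)
```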

Architecture of neural networks: Throughout this work, the loss function is optimized using a combination of the Adam optimizer and the L-BFGS method, together with Xavier initialization, while the hyperbolic tangent function is employed as the activation function for the DNN, the low-fidelity NN, and the nonlinear part of the high-fidelity NN. It should be noted that the linear part of the high-fidelity NN does not have an activation function, because it is used to approximate the linear part of the relation between the inputs and the output.

The architecture of the DNN consists of three layers with 20 neurons in each layer. The MFNN, on the other hand, consists of two layers with 20 neurons per layer for the low-fidelity NN and two layers with ten neurons per layer for the nonlinear part of the high-fidelity NN.
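
The sketch below assembles these layer sizes into a PyTorch module; the framework choice, the input dimensionality, and the wiring of the high-fidelity branches (which here take the inputs concatenated with the low-fidelity output, following the composite multifidelity construction) are assumptions for illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def mlp(sizes, activation=nn.Tanh):
    """Fully connected network with Xavier-initialized weights and tanh hidden activations."""
    layers = []
    for i in range(len(sizes) - 1):
        linear = nn.Linear(sizes[i], sizes[i + 1])
        nn.init.xavier_normal_(linear.weight)
        nn.init.zeros_(linear.bias)
        layers.append(linear)
        if i < len(sizes) - 2:                      # no activation on the output layer
            layers.append(activation())
    return nn.Sequential(*layers)

n_inputs = 5                       # e.g., shear rate plus composition/condition variables (assumed)

dnn = mlp([n_inputs, 20, 20, 20, 1])                # simple DNN: three hidden layers of 20 neurons

class MFNN(nn.Module):
    def __init__(self, n_inputs):
        super().__init__()
        self.low_fidelity   = mlp([n_inputs, 20, 20, 1])       # trained against synthetic model data
        self.high_nonlinear = mlp([n_inputs + 1, 10, 10, 1])   # nonlinear correlation, tanh activations
        self.high_linear    = nn.Linear(n_inputs + 1, 1)       # linear correlation, no activation

    def forward(self, x):
        y_lf = self.low_fidelity(x)
        z = torch.cat([x, y_lf], dim=1)
        return y_lf, self.high_linear(z) + self.high_nonlinear(z)
```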

Computational resources and required time: Throughout this study, all training processes are performed on a standard desktop computer without any specific hardware requirements. The run time for each training is, on average, less than 1 h. In other words, with only about 1 h of proper training, one can obtain predictions as accurate as the ones shown above.
