The recent rise of machine learning (ML) has revolutionized many fields, leading to remarkable advances in data science, medical research, and many branches of engineering. The vortex-induced vibration (VIV) problem, a complex amalgamation of fluid dynamics, fluid–structure interaction, and structural vibration, has always been costly to investigate experimentally and highly time-consuming to solve through numerical simulation. The current study aims to bridge this gap by applying recent advances in AI and ML, comparing a range of modern techniques on the same problem for better prediction of the results. The dataset used for training and testing the models was self-generated, validated, and published, and is hence considered suitable for further research into identifying techniques for the effective and efficient prediction of the vortex-induced vibration phenomenon. The current study delves into the application of a host of supervised learning techniques, including artificial neural networks (ANNs), support vector machines (SVMs), decision trees, ensemble methods, and Gaussian process regression (GPR), on the same dataset. The ANN was analyzed using multiple training–testing ratios. Three variations of decision trees were analyzed, i.e., coarse, medium, and fine. Six different SVM algorithms were tested: linear, quadratic, cubic, coarse Gaussian, medium Gaussian, and fine Gaussian. Both bagging- and boosting-type ensemble methods were tested, while four GPR algorithms were examined, namely, exponential, squared exponential, rational quadratic, and Matern 5/2. The results are analyzed primarily using mean squared error (MSE), root mean squared error (RMSE), R-squared (R2), and mean absolute error (MAE).
The results show that even a training–testing ratio of 30:70 may provide sufficiently credible predictions, although beyond a ratio of 50:50 the accuracy of predictions shows diminishing returns; 50:50 is therefore a sufficiently high training–testing ratio. Fine decision trees, fine Gaussian SVM, the boosting ensemble method, and the Matern 5/2 GPR algorithm showed the best results within their respective techniques, while GPR provided the best predictions of all the techniques tested.

Fluid dynamics is a rapidly evolving field with numerous applications in civil engineering and beyond, necessitating the generation and analysis of vast amounts of data through experiments, fieldwork, and simulations.1 These traditional methods are often time-consuming and resource-intensive. A key phenomenon in fluid dynamics is vortex-induced vibrations (VIV), which occur when structures such as cylinders experience oscillations due to vortex formation and shedding in fluid flow. Recently, machine learning (ML) techniques have been applied to VIV, enhancing our understanding, prediction, and control of these vibrations by providing more efficient and precise fluid flow predictions with fewer resources. This research explores the implementation of machine learning techniques on datasets produced from simulation work to predict fluid dynamics. The advent of big data has underscored the need for processing large volumes of data, and advancements in data analysis have facilitated easier storage, compression, and analysis of such data. Machine learning accelerates this process by offering quick and accurate data analysis, with techniques generally categorized into supervised, unsupervised, and semi-supervised learning. Supervised learning involves training a model on labeled data, where each input has a corresponding output, whereas unsupervised learning works without predefined labels, categorizing data based on inherent attributes. Semi-supervised learning combines these approaches, utilizing both labeled and unlabeled data. The focus of this research is on two supervised learning techniques, classification and regression, to predict the Reynolds number (Re) from input data. Classification involves training models, such as neural networks and support vector machines, to categorize inputs into predefined classes. 
Clustering, an unsupervised learning technique, is applied to segment data into clusters based on their values, enhancing the analysis and understanding of fluid dynamics. Figure 1 shows all the ML techniques with their major categorization.

FIG. 1.

Categorization of machine learning techniques and algorithms.


VIV phenomena are prevalent in diverse fields, such as civil engineering, marine applications, and pipeline systems. Analyzing these complex fluid dynamics has traditionally been challenging due to the intricate nature of structures involved.1–5 Historically, combining theoretical and experimental methods has been used to model fluid dynamics, leading to semi-theoretical and semi-experimental models. However, these approaches often demand substantial computational resources and time, alongside generating large datasets.6,7 To address the inefficiencies of traditional methods, machine learning algorithms have been introduced to VIV studies, effectively managing large datasets and improving prediction accuracy.8–11 

The integration of deep neural networks (DNNs) with embedded invariance into turbulence modeling in computational fluid dynamics (CFD) has shown superior performance compared to traditional Reynolds-averaged Navier–Stokes (RANS) models, enhancing accuracy in capturing complex flow dynamics.12 Machine learning models based on the minimum description length (MDL) principle demonstrate effective turbulence characterization across various Reynolds numbers, offering improved robustness and generalization over traditional models.14 A comprehensive survey highlights the transformative role of machine learning in CFD, addressing challenges such as data availability and model interpretability while suggesting future research directions and emerging trends.15 Novel deep learning techniques are applied to quantify turbulence predictability in fluid dynamics, showing the ability to accurately predict turbulent flow behavior and identify regions with higher predictability.16 Deep learning techniques effectively model turbulent flow separation over airfoils, capturing complex interactions between airfoil geometry and flow conditions to enhance aerodynamic performance predictions.17 Deep learning methods have been applied to predict VIV in circular cylinders, demonstrating accuracy in capturing the dynamics of vortex shedding and cylinder oscillations across various flow conditions and geometries.18 Machine learning algorithms, including regression and neural networks, have been utilized to predict VIV in circular cylinders, achieving reliable predictions by learning complex fluid–structure interactions.19–21 

Artificial neural networks (ANNs) are computational systems modeled after the neural networks in the human brain. They excel at identifying and learning from intricate, nonlinear patterns in data. However, optimizing their hyperparameters can be quite demanding and typically necessitates advanced algorithms. ANNs find significant applications in areas such as predicting flow, modeling turbulence, controlling flow, and creating reduced-order models.22 

Support vector machines (SVM) are a supervised learning technique used for classifying data in high-dimensional spaces by identifying the optimal hyperplane that separates different classes. They are particularly effective for binary classification tasks. However, the computational cost can be quite high when dealing with large datasets. SVMs are commonly applied in flow classification and anomaly detection.23 

Decision trees are a supervised learning algorithm that partitions data based on if-else rules. This approach allows for easy interpretation and visualization of data. However, decision trees can suffer from overfitting, especially when applied to large datasets. Despite this limitation, they are effectively used in turbulence modeling and flow classification.24 

Random forests are an ensemble method that enhances accuracy by combining multiple decision trees. This technique is robust and less prone to overfitting compared to individual decision trees. However, computational cost can be significant when processing large datasets. Random forests are widely used in turbulence modeling, flow classification, and uncertainty quantification.13 

Gaussian processes are non-parametric, probabilistic models used to represent complex functions while providing uncertainty estimates for predictions. Although they offer valuable insights into prediction uncertainties, their computational cost can become high with large datasets. Gaussian processes are particularly useful in surrogate modeling, uncertainty quantification, and flow prediction.25 

The k-nearest neighbors (k-NN) algorithm is a straightforward classification method that assigns data points to a class based on the majority class among its k closest neighbors. It is easy to understand and has relatively low computational costs. However, its accuracy can be affected by outliers, making it crucial to have clean data. In addition, k-NN is commonly used in flow classification and pattern recognition.26 

Convolutional neural networks (CNNs) are machine learning algorithms that excel at processing image-based data and are also effective for statistical data analysis. They are particularly adept at capturing spatial patterns and features. However, they require large amounts of training data to achieve optimal results. CNNs are widely used in image-based flow analysis and modeling turbulence from velocity fields.27 

Recurrent neural networks (RNNs) are a type of neural network designed to process sequential and time-dependent data effectively. They excel at modeling temporal dependencies in dynamic systems. However, they can suffer from the issue of vanishing gradients. RNNs are frequently applied in time series prediction, flow simulation, and turbulence modeling in unsteady flows.28 

Long Short-Term Memory (LSTM) is a specialized type of recurrent neural network (RNN) that effectively mitigates the vanishing gradient problem, making it suitable for handling long-range dependencies in sequential data. It is particularly adept at capturing and monitoring long-term dependencies in time series data. Despite being complex and computationally expensive, LSTMs are extensively utilized in tasks such as time series prediction, simulating unsteady flows, and modeling turbulence in dynamic fluid systems.29 

Machine learning is basically classified into two categories: supervised and unsupervised learning. In supervised learning, labels are provided with the input data, and the model trained on these data produces labels as output, whereas in unsupervised learning, the model divides the data into categories on its own. To be more precise, supervised learning techniques are further divided into classification techniques and regression techniques.

In classification techniques, the model outputs a label, whereas in regression techniques, the results are numerical values. In this research, different supervised learning techniques are tried and tested on a particular dataset to check which technique gives the best results. Neural networks are tested as the classification method, whereas support vector machines (SVMs), decision trees, ensemble methods (bagging and boosting), and Gaussian process regression (GPR) are tested as regression methods on this dataset. Figure 2 shows the categorization of all the machine learning techniques tested in this research.

FIG. 2.

Different techniques implemented throughout research.


The process begins with the dataset, which is meticulously recorded in an Excel file containing all the data generated through simulation. Initially, the data undergo a preprocessing phase that involves cleaning and feature extraction to ensure that only relevant information is processed by machine learning techniques.

For parameter estimation in this nonlinear problem, the Levenberg–Marquardt algorithm is employed, which is an optimization method specifically designed for solving nonlinear least squares problems. This algorithm is particularly favored in numerical optimization and curve fitting due to its robustness and efficiency. The primary goal of the Levenberg–Marquardt algorithm is to determine the parameters of a nonlinear model that minimize the sum of squared differences between the observed data and the model's predicted values, an objective known as the least squares function. This iterative approach combines elements of both the steepest descent and Gauss–Newton methods, utilizing a damping parameter (also referred to as the regularization parameter) to balance between these directions in each iteration, thus ensuring effective and stable convergence.

The algorithm follows these general steps.

  1. Initialization: begin with an initial guess for the model parameters.

  2. Jacobian calculation: compute the Jacobian matrix, which contains partial derivatives of the model with respect to each parameter.

  3. Residual evaluation: calculate the residuals by finding the difference between the observed data and the model’s predictions.

  4. Gradient and Hessian calculation: derive the gradient and the Hessian matrix using the Jacobian and residuals.

  5. Damping parameter adjustment: modify the damping parameter to determine the step size for the subsequent iteration.

  6. Parameter update: adjust the model parameters using the damping-adjusted step.

  7. Iteration: repeat steps 2 to 6 until convergence criteria are met.
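As an illustration, the seven steps above can be sketched in a few lines of NumPy. This is a minimal, hypothetical sketch, not the implementation used in the study; the example model, data, and damping schedule are illustrative only.

```python
import numpy as np

def levenberg_marquardt(model, jac, p0, x, y, max_iter=100, tol=1e-10):
    """Minimal Levenberg-Marquardt loop following steps 1-7 above."""
    p = np.asarray(p0, dtype=float)                 # step 1: initial guess
    lam = 1e-3                                      # damping parameter
    cost = np.sum((y - model(x, p)) ** 2)
    for _ in range(max_iter):
        J = jac(x, p)                               # step 2: Jacobian
        r = y - model(x, p)                         # step 3: residuals
        g = J.T @ r                                 # step 4: gradient direction
        H = J.T @ J                                 # Gauss-Newton Hessian approximation
        # steps 5-6: damped parameter update (Marquardt diagonal scaling)
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), g)
        p_new = p + step
        cost_new = np.sum((y - model(x, p_new)) ** 2)
        if cost_new < cost:
            p, cost, lam = p_new, cost_new, lam * 0.5   # accept step, reduce damping
        else:
            lam *= 2.0                                  # reject step, increase damping
        if np.linalg.norm(step) < tol:              # step 7: convergence check
            break
    return p

# hypothetical curve fit: y = a * exp(b * x), true parameters a = 2.0, b = -1.5
model = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])
x = np.linspace(0.0, 1.0, 20)
y = model(x, [2.0, -1.5])
p_fit = levenberg_marquardt(model, jac, [1.0, -1.0], x, y)
```

The damping parameter interpolates between steepest descent (large lam) and Gauss–Newton (small lam), which is the balancing behavior described above.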

After preprocessing, the data are divided into three subsets: training, testing, and validation. A machine learning model is then constructed with the initial parameter and hyperparameter settings and trained on the training data. The model is subsequently validated using a specified percentage of validation data. Based on the validation results, the model undergoes further training with parameter adjustments to enhance performance, a process known as model training and validation. The refined model is then evaluated using the testing data. Once validated, the model is ready for deployment and can be used to make predictions on real-world data.
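A minimal sketch of such a three-way split, assuming illustrative 70:15:15 proportions and a hypothetical sample count:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical split: 70% training, 15% validation, 15% testing
N = 48                       # illustrative sample count (one per Reynolds number)
idx = rng.permutation(N)     # shuffle before splitting
n_train, n_val = int(0.70 * N), int(0.15 * N)
train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]
```

Every sample lands in exactly one subset, so validation and testing scores are measured on data the model never saw during training.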

Delving deeper into the machine learning model, the proposed neural network consists of three fundamental layers. The initial layer, known as the input layer, receives data in the form of features, denoted x1, x2, x3, …, xn. These input features are then fed into the hidden layer, carrying their initially assigned weights. The hidden layer's role is to process the inputs using artificial neurons, which compute outputs through an activation function. Each neuron is assigned a bias value, and its pre-activation output is computed as f = wx + b.

The resulting values are subsequently transferred to the final layer, known as the output layer, which generates the model’s predictions. Figure 3 shows the whole process in a flow chart.
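The forward pass just described can be sketched as follows; the layer sizes, random weights, and tanh activation are hypothetical placeholders, not the network trained in this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical layer sizes: n input features, 8 hidden neurons, 1 output
n_in, n_hidden, n_out = 4, 8, 1
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)

def forward(x):
    """One forward pass: each neuron computes an activation of w.x + b."""
    h = np.tanh(W1 @ x + b1)    # hidden layer: activation(f), with f = Wx + b
    return W2 @ h + b2          # linear output layer (regression prediction)

x = rng.normal(size=n_in)       # input features x1, ..., xn
y_pred = forward(x)
```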

FIG. 3.

Architectural framework of the machine learning process with neural networks.


The support vector machine (SVM) algorithm is applied to address a multiclass classification problem. A dataset comprising N samples and D features was used, with each sample assigned to one of K unique classes. Prior to implementing the SVM, one-hot encoding was applied to transform the class labels into a binary format. This encoding method converts class labels into binary vectors of length K, where each vector has a single element set to 1 to denote the class and all other elements set to 0.

For each class k (where k ranges from 1 to K), a binary classifier is trained using SVM. The purpose of this classifier is to distinguish class k from the other classes. The binary label for class k is denoted as yk, a vector of length N, where yki = 1 if sample i belongs to class k, and yki = −1 otherwise.

The training dataset (x, yk) for each binary classifier is then processed by using the SVM, solving the optimization problem for class k. This approach allows the SVM to effectively learn decision boundaries that separate each class from the others in the feature space. Figure 4 shows this whole process in a flow chart.
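The one-hot encoding and per-class binary labeling described above can be sketched as follows; the toy labels are hypothetical, and the SVM optimization step itself is omitted.

```python
import numpy as np

# hypothetical toy labels for N = 6 samples over K = 3 classes
labels = np.array([0, 2, 1, 0, 1, 2])
K = 3

# one-hot encoding: binary vectors of length K with a single element set to 1
one_hot = np.eye(K, dtype=int)[labels]

# per-class binary targets for one-vs-rest training:
# y_k[i] = +1 if sample i belongs to class k, -1 otherwise
y_binary = np.where(one_hot == 1, 1, -1)

print(one_hot[1])      # sample 1 is class 2, so its one-hot vector is [0, 0, 1]
print(y_binary[:, 0])  # binary targets for the class-0 classifier
```

Each column of `y_binary` would be paired with the feature matrix x to train one of the K binary SVM classifiers.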

FIG. 4.

Flow chart to explain the process of predicting values through the SVM.


In this research, another technique that was utilized is decision trees, a technique requiring consideration of several key elements:

Decision Nodes: these nodes represent feature tests or decisions. The feature tested at node n is denoted Fn, with the decision threshold denoted Tn. The decision node compares the value of that feature in the input data against the threshold to determine the next path to follow in the tree.

Leaf Nodes: these nodes indicate the final prediction or outcome. For classification tasks, each leaf node corresponds to a specific class label, whereas for regression tasks, each leaf node represents a numerical value.

Splitting Criteria: the algorithm employs a metric to select the optimal feature and threshold for splitting the data at each decision node. In classification tasks, metrics such as Gini impurity or entropy might be used, while for regression tasks, mean squared error is commonly employed.

The mathematical representation of these concepts is as follows.

For decision node n:

  • Splitting feature: Fn.

  • Threshold: Tn.

  • Left child: a subtree where the condition Fn ≤ Tn holds.

  • Right child: a subtree where the condition Fn > Tn holds.

For leaf node n:

  • As decision trees are used for regression tasks in this study, the leaf nodes represent an output variable.

By considering these elements, decision trees can effectively model complex relationships within the data, providing valuable insights and predictions for regression tasks. Figure 5 shows the whole process in the flow chart.
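A minimal sketch of the decision-node and leaf-node structure above; the feature index, threshold, and leaf values are hypothetical (e.g., splitting on a Reynolds-number-like feature).

```python
# minimal regression-tree sketch under the notation above (hypothetical values)
class Leaf:
    def __init__(self, value):
        self.value = value            # numeric output for a regression task
    def predict(self, x):
        return self.value

class DecisionNode:
    def __init__(self, feature, threshold, left, right):
        self.feature = feature        # splitting feature index F_n
        self.threshold = threshold    # threshold T_n
        self.left, self.right = left, right
    def predict(self, x):
        # left child handles F_n <= T_n, right child handles F_n > T_n
        if x[self.feature] <= self.threshold:
            return self.left.predict(x)
        return self.right.predict(x)

# tiny hand-built tree: split on feature 0 at a hypothetical threshold of 100
tree = DecisionNode(0, 100.0, Leaf(0.25), Leaf(0.55))
print(tree.predict([85.0]))   # falls to the left leaf
print(tree.predict([140.0]))  # falls to the right leaf
```

A trained tree is built by choosing F_n and T_n at each node to minimize the splitting criterion (mean squared error for regression), then recursing on each partition.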

FIG. 5.

Flow chart to explain the process of predicting values through decision trees.


Ensemble methods are a powerful technique in machine learning that combine the predictions of multiple models to improve the overall performance and robustness of the system. By leveraging the strengths of different models, ensemble methods can often achieve higher accuracy and generalization compared to individual models. There are two general types of ensemble methods used in this research.

Bagging, short for bootstrap aggregating, is an ensemble learning technique that enhances the stability and accuracy of machine learning models by reducing variance and preventing overfitting. For this particular problem, the bagging technique is applied to the same dataset. It involves creating multiple subsets of the training data through bootstrapping, a method of random sampling with replacement. Each subset is used to train a separate model, typically a high-variance learner such as a decision tree. These models are then aggregated to form an ensemble.

As discussed above, the dataset used in this research is of the numeric type, so this is the case of regression tasks, and the final prediction is obtained by averaging the predictions of all the models. By training multiple models on different subsets of data, bagging mitigates the impact of outliers and noise, leading to improved generalization. The whole process is shown in Fig. 6.
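A bagging sketch following the description above: bootstrap resampling with replacement, one model per resample, and averaging for the final regression prediction. Simple polynomial fits stand in for the decision-tree base learners, and all data and counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical noisy 1-D regression data: y = 2x + noise
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)

def bagged_predict(x_train, y_train, x_query, n_models=25):
    """Bagging sketch: bootstrap resampling + averaging of base learners
    (degree-1 polynomial fits stand in for decision trees here)."""
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, x_train.size, x_train.size)  # sample WITH replacement
        coeffs = np.polyfit(x_train[idx], y_train[idx], deg=1)
        preds.append(np.polyval(coeffs, x_query))
    return np.mean(preds, axis=0)   # regression: average the model predictions

y_hat = bagged_predict(x, y, np.array([0.5]))   # true value is 2 * 0.5 = 1.0
```

Because each bootstrap sample omits some points and duplicates others, the averaged ensemble is less sensitive to any single outlier than one model fit on all the data.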

FIG. 6.

Flow chart to explain the process of predicting values through bagging, one of the types of ensemble methods.


Boosting is an ensemble learning technique designed to enhance the accuracy of predictive models by sequentially combining multiple weak learners to form a strong learner. Unlike methods such as bagging, which focuses on reducing variance by training models independently, boosting emphasizes reducing bias by training models sequentially. In this particular program, boosting is applied. The process begins by training a weak learner on the entire dataset and evaluating its performance. In subsequent iterations, the algorithm increases the weights of the misclassified samples, making them more influential in the training of the next model. Each subsequent model is trained to correct the errors made by its predecessor, effectively learning from the mistakes. This iterative process continues until a predefined number of weak learners are trained or the model performance stabilizes. The final prediction is a weighted average or vote of the predictions from all the weak learners, resulting in a model with improved accuracy and robustness. Figure 7 shows the whole process in the form of a flow chart.
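A boosting sketch in the spirit described above, where each weak learner is trained on the errors of the ensemble built so far (a gradient-boosting-style residual fit with regression stumps). The data, learning rate, and number of rounds are illustrative, not the study's configuration.

```python
import numpy as np

def fit_stump(x, r):
    """Weak learner: a depth-1 regression stump on a single 1-D feature."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if left.size == 0 or right.size == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda q: np.where(q <= t, lv, rv)

def boost(x, y, n_rounds=50, lr=0.3):
    """Boosting sketch: each weak learner corrects the residual errors
    of its predecessors, and predictions are combined additively."""
    pred = np.full_like(y, y.mean())
    stumps = []
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)        # learn from the current mistakes
        pred = pred + lr * stump(x)
        stumps.append(stump)
    return lambda q: y.mean() + lr * sum(s(q) for s in stumps)

# hypothetical smooth target: one sine period
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x)
model = boost(x, y)
```

Each stump alone is a poor predictor, but the sequential residual-correcting sum forms a strong learner, which is the bias-reduction behavior described above.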

FIG. 7.

Flow chart to explain the process of predicting values through boosting, one of the types of ensemble methods.


Gaussian process regression (GPR) is a non-parametric, Bayesian approach to regression that is particularly well suited for modeling complex, nonlinear relationships. Unlike traditional regression methods that assume a fixed functional form, GPR treats regression as a probabilistic inference problem, where the goal is to infer a distribution over functions that are consistent with the observed data. GPR provides not only a predictive mean but also a measure of uncertainty around each prediction, making it particularly useful in applications where understanding prediction confidence is crucial, such as active learning and Bayesian optimization.

The model used for experimentation is defined by a mean function, often set to zero, and a covariance function (or kernel), which encodes assumptions about the smoothness and amplitude of the target function. The kernel determines the correlation structure between data points, influencing the shape and smoothness of the inferred functions. Four kernels, namely, exponential, squared exponential, rational quadratic, and Matern 5/2, are used, as shown in the results section; Table III shows that Matern 5/2 gives the optimal results.

Despite its strengths, GPR has computational limitations, scaling cubically with the number of data points, which makes it challenging to apply directly to large datasets. Careful selection and tuning of the kernel and its hyperparameters are essential to maximize the model’s performance. Figure 8 shows the process of predicting values through the GPR method.
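A minimal GPR sketch with the Matern 5/2 kernel named above, using a zero prior mean; the training data, length scale, and jitter are illustrative, and no hyperparameter tuning is performed.

```python
import numpy as np

def matern52(a, b, length=0.2, amp=1.0):
    """Matern 5/2 covariance: amp^2 (1 + s + s^2/3) exp(-s), s = sqrt(5)|x-x'|/l."""
    s = np.sqrt(5.0) * np.abs(a[:, None] - b[None, :]) / length
    return amp ** 2 * (1.0 + s + s ** 2 / 3.0) * np.exp(-s)

# hypothetical 1-D training data
x_train = np.linspace(0.0, 1.0, 12)
y_train = np.sin(2.0 * np.pi * x_train)

# GP posterior with zero prior mean; small jitter for numerical stability
K = matern52(x_train, x_train) + 1e-8 * np.eye(x_train.size)
x_query = np.array([0.25, 0.80])
K_s = matern52(x_query, x_train)
mean = K_s @ np.linalg.solve(K, y_train)              # predictive mean
cov = matern52(x_query, x_query) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))       # predictive uncertainty
```

Solving against the N × N matrix K is the source of the cubic scaling noted above, and the returned `std` is the per-prediction uncertainty that distinguishes GPR from point-estimate regressors.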

FIG. 8.

Flow chart to explain the process of predicting values through the GPR technique.


The experimentation was conducted using an HP Pavilion gaming laptop equipped with an AMD Ryzen 7 4800H processor and Radeon graphics system. The system operated on a 64-bit Windows 10 operating system with 256 GB of primary memory.

The dataset employed in this study was generated via simulations spanning Reynolds numbers ranging from 70 to 150. The simulation results were validated against experimental findings and subsequently detailed in two published research articles.30,31 The dataset consists of numerical data generated through extensive simulation and contains 17 classes (labels) with 48 samples in each class: there are 17 labels for each of the 48 selected Reynolds numbers, so the dataset comprises 17 × 48 entries. The details of the dataset labels are presented in Table I.

TABLE I.

Details of the dataset used for experimentation.

No. of classes          17
Labels                  Reynolds number
                        Natural frequency
                        Prandtl number
                        Density
                        Reduced velocity
                        Thermal conductivity
                        Strouhal number (oscillating)
                        Oscillating shedding frequency
                        Frequency ratio
                        RMS drag coefficient
                        RMS lift coefficient
                        RMS in-line oscillation
                        RMS transverse oscillation
                        Maximum transverse oscillation
                        Maximum Nusselt number
                        Mean Nusselt number
                        RMS Nusselt number
Total no. of samples    48 samples of each class
Reynolds number range   70 < Re < 150

The performance of all the experiments undertaken is evaluated using four different parameters.

In neural networks, accuracy is a fundamental metric used to evaluate model performance. It is computed as the ratio of correctly identified samples to the total number of samples,

Accuracy = (number of correctly classified samples) / N.

Mean square error (MSE) is a fundamental metric employed to evaluate the accuracy of regression models by quantifying the average squared difference between predicted values and actual (true) values. It provides a measure of the average magnitude of error in the predictions. The mathematical formula for calculating MSE is expressed as

MSE = (1/N) Σᵢ₌₁ᴺ (y_true,i − y_pred,i)²,

where y_true,i denotes the true values, y_pred,i represents the predicted values, and N signifies the total number of samples in the dataset.

Root mean squared error (RMSE) serves as a widely adopted metric for assessing the accuracy of regression models. It quantifies the disparity between predicted values and actual (true) values within a dataset featuring continuous target variables. RMSE provides a measure of how closely the predictions of a model align with the observed values. The formula for calculating RMSE is expressed as

RMSE = √[(1/N) Σᵢ₌₁ᴺ (y_true,i − y_pred,i)²] = √MSE,

where y_true,i denotes the true values, y_pred,i represents the predicted values, and N signifies the total number of samples in the dataset.

R-squared (R2) is a critical statistical measure in machine learning for assessing the goodness-of-fit of a regression model. It quantifies the proportion of variance in the dependent variable that can be predicted by the independent variables. In essence, R2 indicates the extent to which the regression model elucidates the variability observed in the target variable compared to its mean. The mathematical expression for R2 is defined as

R2 = SS_regression / SS_total,

where SS_regression denotes the sum of squares of the regression (explained variance) and SS_total represents the total sum of squares of the samples (total variance).
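The four regression metrics above can be computed directly. The sketch below uses the equivalent R2 = 1 − SS_res/SS_tot form, which coincides with SS_regression/SS_total for least-squares fits; the toy values are illustrative.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, R2, and MAE as defined above."""
    resid = y_true - y_pred
    mse = np.mean(resid ** 2)                            # mean squared error
    rmse = np.sqrt(mse)                                  # root mean squared error
    mae = np.mean(np.abs(resid))                         # mean absolute error
    ss_res = np.sum(resid ** 2)                          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)       # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                           # equals SS_reg / SS_tot for OLS
    return mse, rmse, r2, mae

# hypothetical true and predicted values
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
mse, rmse, r2, mae = regression_metrics(y_true, y_pred)
```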

To perform classification, neural networks trained with the Levenberg–Marquardt algorithm were used for the experimentation. Neural networks work on the same principle as the human brain: just as a brain responds better with more information, so does a neural network. To check this phenomenon, the dataset was divided into five different training–testing splits. In the first scheme, 70% of the data were used for training and the remaining 30% for both testing and validation, with 15% used for testing and 15% for validation. The results were evaluated on five different evaluation parameters; it can clearly be seen that the error ratio decreases and the classification accuracy increases. The value of R is 0.99, whereas the mean absolute error is reduced to 0.0028. When the data were divided 50:50, the results differ: there is no visible difference in classification accuracy, but the mean absolute error and mean squared error increase, which shows that the chances of erroneous predictions have increased. When the data are divided 30:70, with only 30% of the data used for training, the model does not perform as well: the classification accuracy is reduced to 0.65, whereas the mean squared error increases to 0.36. Table II presents the details of all the values at the different training–testing ratios.

TABLE II.

Comparison of different training testing ratios using neural networks.

Supervised learning — classification — neural networks (Levenberg–Marquardt)

Parameter   70:30      60:40      50:50      40:60      30:70
R           0.99486    0.98211    0.99853    0.95738    0.65470
R-squared   0.98974    0.96453    0.99707    0.91658    0.42863
RMSE        0.01668    0.04435    0.01436    0.11505    0.60457
MSE         0.00028    0.00197    0.00021    0.01324    0.36550
MAE         0.00280    0.01500    0.00560    0.02010    0.02380

Figure 9 shows the graphs comparing the actual and predicted values of Reynolds numbers, with Reynolds numbers as the input parameter and ymax as the output parameter. Various training and testing ratios were employed across the experiments. In these graphs, the dotted red lines represent the predictions made by our artificial neural network (ANN) model, while the solid blue lines depict the actual observed values.

FIG. 9.

Comparison of actual and predicted Reynolds numbers and y-max.


While increasing the amount of training data can improve validation accuracy, test accuracy may decrease if the model is overfed with data beyond its capacity. Figure 10 shows these results, where the diagonal line represents the actual output values, and the small circles indicate predictions made by using the ANN model. With a training–testing ratio of 50:50, the model achieves optimal testing accuracy of 0.998. However, when the training data are reduced to 30%, model accuracy decreases to 0.94. Conversely, increasing the training data from 50% to 70% leads to overfitting, causing the testing accuracy to decline from 0.998 to 0.994. These findings suggest that having a large dataset is not always necessary to achieve optimal results, and excellent performance can be obtained with a smaller amount of data.

FIG. 10.

Test accuracy across various training and testing ratios.


Table III presents a comparison of four supervised learning regression techniques applied to identify the most effective method for the specific problem. The input variables, Reynolds number (Re), and frequency ratio, were consistent across all experiments, while the output variable varied. Initially, ymax served as the output variable, analyzed using three types of decision trees: fine, medium, and coarse. Performance was evaluated using RMSE, R2, MSE, and MAE. The results indicate that the fine decision tree was the most suitable, achieving an R2 value of 91%.

TABLE III.

Detailed comparison of mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), and R-squared (R2) metrics across various regression analysis techniques.

Supervised learning — regression

Decision trees
Parameter   Fine       Medium     Coarse
RMSE        0.0662     0.156      0.221
R-squared   0.91       0.5        1.16
MSE         0.00439    0.0244     0.0487
MAE         0.0344     0.118      0.203

Support vector machines
Parameter   Linear     Quadratic  Cubic      Fine Gaussian  Medium Gaussian  Coarse Gaussian
RMSE        0.238      0.0747     0.0967     0.0516         0.0752           0.199
R-squared   ⋯          0.89       0.81       0.95           0.88             0.19
MSE         0.0565     0.00558    0.00935    0.00266        0.00566          0.0397
MAE         0.127      0.0540     0.0474     0.0408         0.0486           0.124

GPR
Parameter   Rational quadratic  Squared exponential  Matern 5/2  Exponential
RMSE        0.0195              0.0203               0.0179      0.0188
R-squared   0.99                0.99                 0.99        0.99
MSE         0.000381            0.000410             0.000319    0.000353
MAE         0.00848             0.00869              0.00751     0.007039

Ensemble methods
Parameter   Bagging    Boosting
RMSE        0.0927     0.0886
R-squared   0.82       0.84
MSE         0.0086     0.00786
MAE         0.0650     0.0488
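The fine, medium, and coarse decision-tree variants in the table differ only in how finely the tree is allowed to partition the input space. A minimal scikit-learn sketch on synthetic stand-in data (not the published dataset) is shown below; the minimum leaf sizes of 4, 12, and 36 mirror the common fine/medium/coarse presets and are an assumption, not the study's exact settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

# Synthetic lock-in-shaped data (hypothetical, not the published dataset).
rng = np.random.default_rng(1)
X = rng.uniform(0.5, 1.5, (400, 1))                    # frequency ratio
y = np.exp(-8.0 * (X[:, 0] - 1.0) ** 2) + rng.normal(0.0, 0.02, 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Assumed preset mapping: smaller minimum leaf size = finer tree.
scores = {}
for name, leaf in (("fine", 4), ("medium", 12), ("coarse", 36)):
    tree = DecisionTreeRegressor(min_samples_leaf=leaf, random_state=0).fit(X_tr, y_tr)
    scores[name] = r2_score(y_te, tree.predict(X_te))
    print(f"{name:6s} tree: test R2 = {scores[name]:.3f}")
```

On smooth data of this kind, the fine tree tracks the response most closely, consistent with the ranking in Table III.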

The same input and output parameters were tested using six SVM techniques: linear, quadratic, cubic, fine Gaussian, medium Gaussian, and coarse Gaussian. Among these, the fine Gaussian SVM yielded the best results with an R2 value of 95%.
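The six kernel variants can be approximated with scikit-learn's SVR, where the fine, medium, and coarse Gaussian presets correspond to progressively larger kernel scales (i.e., decreasing gamma). The sketch below uses synthetic stand-in data, and the specific C, epsilon, and gamma values are illustrative assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
X = rng.uniform(0.5, 1.5, (400, 1))                    # frequency ratio
y = np.exp(-8.0 * (X[:, 0] - 1.0) ** 2) + rng.normal(0.0, 0.02, 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Assumed hyperparameters; gamma decreases from fine to coarse Gaussian.
variants = {
    "linear":          SVR(kernel="linear", C=10.0, epsilon=0.01),
    "quadratic":       SVR(kernel="poly", degree=2, coef0=1.0, C=10.0, epsilon=0.01),
    "cubic":           SVR(kernel="poly", degree=3, coef0=1.0, C=10.0, epsilon=0.01),
    "fine Gaussian":   SVR(kernel="rbf", gamma=8.0, C=10.0, epsilon=0.01),
    "medium Gaussian": SVR(kernel="rbf", gamma=1.0, C=10.0, epsilon=0.01),
    "coarse Gaussian": SVR(kernel="rbf", gamma=0.06, C=10.0, epsilon=0.01),
}
svm_scores = {}
for name, model in variants.items():
    svm_scores[name] = r2_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name:16s}: test R2 = {svm_scores[name]:.3f}")
```

Because the lock-in response is sharply peaked, the short-length-scale (fine Gaussian) kernel fits it far better than a linear kernel, in line with the results above.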

The experiments were then repeated using four Gaussian process regression (GPR) techniques: rational quadratic, squared exponential, Matern 5/2, and exponential, with performance again measured using RMSE, R2, MSE, and MAE. The Matern 5/2 technique demonstrated superior performance with an R2 value of 99%. Finally, ensemble methods were evaluated using both bagging and boosting techniques, with the same input and output parameters and evaluation metrics. The results revealed that boosted trees were the most effective ensemble method for regression analysis on this dataset, achieving an R2 value of 84%.
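All four covariance functions are available in scikit-learn, where the exponential kernel is obtained as a Matern kernel with nu = 1/2. A minimal sketch on synthetic stand-in data (not the published dataset) follows; adding a WhiteKernel to learn the noise level is an illustrative choice, not necessarily the study's configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic, WhiteKernel
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
X = rng.uniform(0.5, 1.5, (200, 1))                    # frequency ratio
y = np.exp(-8.0 * (X[:, 0] - 1.0) ** 2) + rng.normal(0.0, 0.02, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

kernels = {
    "rational quadratic":  RationalQuadratic(),
    "squared exponential": RBF(),
    "Matern 5/2":          Matern(nu=2.5),
    "exponential":         Matern(nu=0.5),  # nu = 1/2 reduces to the exponential kernel
}
gpr_scores = {}
for name, kernel in kernels.items():
    # WhiteKernel lets the GP estimate the observation noise from the data.
    gpr = GaussianProcessRegressor(kernel=kernel + WhiteKernel(),
                                   normalize_y=True, random_state=0)
    gpr_scores[name] = r2_score(y_te, gpr.fit(X_tr, y_tr).predict(X_te))
    print(f"{name:20s}: test R2 = {gpr_scores[name]:.3f}")
```

All four kernels typically perform strongly on smooth data of this kind, which is consistent with the uniformly high GPR scores in Table III.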

The response graphs from these experiments are shown in Fig. 11. These graphs, also known as regression plots or fitted line plots, illustrate the relationship between independent and dependent variables, showing how predicted values change with variations in the independent variable.

FIG. 11. Performance comparison graphs of decision trees, SVM, GPR, and ensemble methods.

The comparison of actual to predicted values in Fig. 11 shows that, while the fine decision tree follows a pattern similar to the actual values, a consistent deviation lowers its prediction accuracy. The best predictions are produced by the Matern 5/2 algorithm of the Gaussian process regression model, which is highly accurate, especially in the lock-in or resonance (high oscillation amplitude) flow regime. The fine Gaussian support vector machine and the boosted ensemble method also produced satisfactory results, though not as accurate as the Matern 5/2 GPR.
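For reference, bagged and boosted tree ensembles of the kind compared here can be sketched with scikit-learn; GradientBoostingRegressor is used below as a stand-in for the boosted-tree method, on synthetic data rather than the published dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
X = rng.uniform(0.5, 1.5, (400, 1))                    # frequency ratio
y = np.exp(-8.0 * (X[:, 0] - 1.0) ** 2) + rng.normal(0.0, 0.02, 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Bagging averages independently trained trees; boosting fits trees sequentially
# to the residuals of the current ensemble.
ensembles = {
    "bagging":  BaggingRegressor(n_estimators=30, random_state=0),
    "boosting": GradientBoostingRegressor(n_estimators=200, random_state=0),
}
ens_scores = {}
for name, model in ensembles.items():
    ens_scores[name] = r2_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name:8s}: test R2 = {ens_scores[name]:.3f}")
```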

Figure 12 shows the relationship between actual and predicted values, where the straight line represents the true or actual response, and the blue dots denote the model’s predictions. The Gaussian process regression (GPR) technique demonstrates superior performance compared to the other methods tested. The figure shows that most of the predicted values, indicated by the blue dots, align closely with the diagonal line of actual responses, highlighting the accuracy of the GPR model.

FIG. 12. Actual vs predicted values for decision trees, SVM, Gaussian process regression (GPR), and ensemble methods, highlighting optimal performances.
Figure 13 shows the residuals for the Gaussian process regression (GPR) techniques, specifically highlighting the Matern 5/2 kernel. The residual is defined as the difference between the observed value and the value predicted by the regression model, r_i = y_i − ŷ_i.
FIG. 13. Residual analysis for decision trees, SVM, Gaussian process regression (GPR), and ensemble methods, highlighting optimal performances.

This figure shows that, for the Matern 5/2 kernel, most residuals (shown as orange dots) cluster close to the horizontal line at zero, indicating that the model’s predictions align closely with the actual values. This well-balanced distribution of small residuals around zero confirms that the Matern 5/2 kernel provides the best fit among the tested models.
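In code, the residual check behind Fig. 13 is a one-liner; the observed/predicted pairs below are hypothetical values for illustration, not data read from the figure.

```python
import numpy as np

# Hypothetical observed and predicted amplitudes; residual r_i = y_i - yhat_i.
y_obs = np.array([0.12, 0.45, 0.93, 0.97, 0.41, 0.15])
y_hat = np.array([0.10, 0.47, 0.95, 0.96, 0.40, 0.17])
residuals = y_obs - y_hat
print("residuals:", residuals)
print("mean residual:", residuals.mean())   # near zero for a well-fitted model
```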

Research is increasingly oriented toward artificial intelligence and machine learning, with applications spanning numerous domains. Fluid dynamics, a rapidly evolving field with significant implications for engineering, demands the processing of extensive datasets derived from experiments, field studies, and simulations, yet traditional methods of data collection and analysis are often time-consuming and resource-intensive. To address these challenges and explore future directions, this research evaluates and compares various machine learning techniques to determine their efficacy for numeric data generated through simulations and experimental work.

  • The results show that even a training–testing ratio of 30:70 may provide sufficiently credible predictions, although a ratio of 50:50 produced the most accurate predictions and is therefore a sufficiently high training–testing ratio.

  • Higher training–testing ratios tended to show overfitting tendencies due to the small dataset size used. It is, therefore, beneficial to use smaller training–testing ratios for smaller datasets.

  • Fine decision trees were observed to generate the best predictions among all the decision trees tested.

  • From among the six SVM algorithms tested, fine Gaussian SVM provided the best predictions.

  • Ensemble methods generally produced the lowest-quality results; of the two types tested, the boosted ensemble method generated the better predictions.

  • The Matern 5/2 GPR algorithm showed the overall best results of all the models and algorithms tested.

  • All the GPR techniques generally provided better predictions than any of the other techniques tested.

The authors have no conflicts to disclose.

A. Ijaz: Conceptualization (equal); Data curation (lead); Formal analysis (lead); Investigation (lead); Methodology (lead); Resources (equal); Software (lead); Validation (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (equal). S. Manzoor: Conceptualization (equal); Funding acquisition (lead); Project administration (lead); Resources (equal); Supervision (lead); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding author upon reasonable request.
