This paper presents BubbleID, a deep learning framework designed to comprehensively identify both static and dynamic attributes of bubbles in sequences of boiling images. By combining Mask R-CNN-based segmentation with SORT-based tracking, the framework can analyze each bubble's location, dimensions, interface shape, and velocity over its lifetime and capture dynamic events such as bubble departure. BubbleID is trained and tested on boiling images spanning diverse heater surfaces and operating conditions. The paper also offers a comparative analysis of bubble interface dynamics before and after the critical heat flux condition.

The rapid growth of high-power applications necessitates more efficient cooling systems. Thermal management is becoming a bottleneck in many growing industries (e.g., data centers,1 nuclear power plants, high-power-density electronics, the power grid, and electric vehicles2). Since traditional single-phase cooling methods cannot sufficiently meet this demand, more sophisticated two-phase methods are being explored and, in some cases, implemented as alternatives. By leveraging the high latent heat of vaporization, two-phase cooling can offer more efficient heat dissipation. In particular, pool boiling and flow boiling are promising avenues for cooling. However, several complexities arise when implementing boiling methods, such as instabilities (e.g., critical heat flux, flow reversal3), complexity of setup, and a lack of physical understanding of the boiling phenomena. The nucleate pool boiling regime is the ideal regime for immersion cooling to operate in, as it offers a high heat flux at relatively low superheats.4 Beyond the nucleate regime lies the transition boiling regime, and the point of this switch, the critical heat flux (CHF), is a major problem for physical implementations. At this point, a vapor layer begins to cover the heating surface and acts as an insulator, resulting in rapid deterioration of heat transfer and a subsequent rapid increase in temperature.5 This can be detrimental to the system, leading to overheating or burnout. To avoid this, current implementations operate with a high factor of safety, so the full benefit of nucleate pool boiling is not realized. Another limiting factor is that boiling is a complex phenomenon that is not fully understood. A better understanding of boiling is needed to improve the performance of boiling applications while maintaining safety and efficiency. Many groups have performed experimental correlations, modeling, image analysis, etc.
to improve this understanding. Hazi and Markus6 utilized the Lattice Boltzmann method for a numerical study on bubble growth on a horizontal plate in slowly moving and static fluids. Their simulations estimated the bubble departure frequency and diameter, specifically finding that in static fluids, the departure diameter is proportional to the square root of gravitational acceleration. McFadden and Grassmann's research indicated that the product of the bubble departure diameter and the square root of the bubble departure frequency is constant.7 Gong and Cheng8 confirmed that increasing the heat flux could reduce the bubble waiting time. With an increase in heat flux, there was an observed increase in bubble departure frequency and a decrease in bubble departure diameter.9 

Boiling image datasets contain a substantial amount of information and have been used extensively to glean insights into the boiling process through boiling regime classification, correlation development, bubble characteristic extraction, etc. The analysis of such image datasets, however, is made difficult due to their large size. To capture the change in the bubbles, high-speed cameras are typically required, which produce thousands of images each second. With such large datasets, computer processing methods are needed to analyze the data. Traditional image processing methods involved computationally large codes for analyzing data.10 Manual image processing has been used for identifying boiling characteristics such as departure frequency, diameter, and bubble velocity.11 Other image processing methods have been used for approximating bubble parameters such as contact angles12 or bubble growth rate.13 Oikonomidou et al. used four different image-processing algorithms developed by different universities for analyzing bubbles in the absence of gravity.14 Some methods used MATLAB image processing to determine the contact line diameter, bubble height, bubble volume, equivalent diameter, and contact angles. Sadaghiani et al. sought to explore the effects of bubble coalescence through the use of experimental pool boiling data with different prepared surfaces and image processing.15 They reported the bubble nucleation and growth rates, bubble departure diameter, and frequency and did an in-depth analysis of coalescence. Villegas et al. used image processing to distinguish bubbles from a background in grayscale images to determine the size and shape and follow their position through successive images to determine trajectory to measure velocity.16 The advancement of computer vision methods with machine learning has addressed some of the limiting issues in these traditional methods. 
These methods have improved the speed and accuracy of such analyses and have introduced new forms of analysis. Data-driven models have found applications in furthering the understanding of boiling and have been used for classifying boiling regimes, generating new physical descriptors,17 predicting heat flux, and extracting bubble statistics.

As a typical data-driven method, segmentation models have been widely used in image analysis of bubbles and two-phase flow for several end goals. Torisaki and Miwa used a Yolo v3 backbone for semantic segmentation of images from a gas–liquid two-phase flow.18 From this, they extracted the void fraction, approximate equivalent diameter, bubble aspect ratio, etc. Instance segmentation models allow individual bubbles to be distinguished, enabling more advanced information to be gathered. Zhang et al. used Mask R-CNN models and transfer learning to aid image processing for lab-on-a-chip data.19 They trained with an annotated ground-truth bounding box and mask for each droplet or bubble. For boiling specifically, semantic and instance segmentation models have been used for vapor fraction prediction and bubble identification.20 Cui et al. also used Mask R-CNN with ResNet101 for identifying bubbles in bubbly flow.21 Seong et al. used U-Net to identify bubbles in flow boiling images and determine whether each bubble was coalesced, condensing, sliding, growing, or nucleated.22 Soibam et al. used a CNN model to generate masks for boiling images,23 which they used to track the bubbles and determine coalescence, nucleation rate, and oscillation. These implementations applied segmentation models to boiling image data primarily for single-image analysis to obtain general features such as bubble diameters, count, and vapor fraction. Focusing only on single images can make it difficult to fully discern what is happening in a frame. For example, from a single image it can be difficult to determine whether coalescence is occurring or bubbles are merely overlapping, whereas the inclusion of video frames can remove this ambiguity.

Motivated by the desire to understand the dynamic behavior captured in consecutive boiling video frames, temporal models have begun to be explored. Tracking and multi-object tracking (MOT) models allow the same object to be detected across frames at different points in time. Multi-object tracking methods are growing in application and sophistication. These models have found applications in security, car safety, and driverless cars, and have also led to significant improvement in boiling analysis. They are constantly improving to address issues commonly seen in early iterations, such as occlusions, deforming objects, scaling, etc. The application of these models to boiling data has allowed for more advanced and efficient analysis than previously achieved. However, several issues are encountered when applying MOT methods to boiling images.24 For example, two or more bubbles can merge, and one bubble can be covered by another; the shape of bubbles is unpredictable and varies with pressure and heat flux. Also, reflections and shadows from other bubbles can increase the noise and the difficulty of maintaining consistent tracking. Suh et al. developed a framework titled Vision-IT for analyzing images from boiling and condensation.25 It uses Mask R-CNN for detection and a tracking model based on the Crocker–Grier algorithm. To improve the tracking, they took advantage of the typically vertical and lateral movements in boiling and added weight to the x-coordinate feature. The Vision-IT framework has been used for several applications by their group.26,27 Using it, Chang et al. identified features from microgravity flow boiling.26 They filtered out bubbles with switching IDs and bubbles cut off by the frame edge for their analysis.
From this work, they gathered different bubble statistics (i.e., bubble count, size, aspect ratio), wetting front, interfacial length, wavelength, vapor layer thickness, vapor fraction, bubble velocity, etc.

Despite their proven success, most machine learning models for boiling video analysis are limited to acquiring static spatial features from single images, neglecting the dynamic nature of boiling. To address this gap, our framework is proposed. The novelty and contributions of this work are summarized as follows:

  1. A deep learning framework, BubbleID, is proposed for bubble dynamic analysis. Unlike existing static image-based methods, our approach can mitigate the impact of bubble coalescence or overlapping on the accuracy of bubble dynamics analysis and is equipped with simultaneous static and dynamic feature extraction ability.

  2. The Observation-Centric Simple Online and Real-Time Tracking (OC-SORT) method is introduced into the proposed framework for bubble tracking, offering higher real-time performance and robustness in scenarios involving occlusion and nonlinear motion processes compared to existing methods such as those based on the Crocker-Grier algorithm.

  3. This work also introduces a novel feature, the bubble interface velocity, along with its corresponding approximation method, to enrich the analysis of bubble dynamics.

  4. Both steady-state and transient pool boiling experiments are designed and carried out to capture boiling image sequences and demonstrate the effectiveness of the proposed framework.

The rest of the paper is organized as follows. Section II details the experiment setup and proposes the BubbleID framework and analysis methods. Section III provides the experiment results and discussion. The conclusion of this paper is given in Sec. IV.

In this section, the data preparation, machine learning models, and the analysis methodology of the framework outputs for bubble dynamics are presented as follows.

High-speed images from pool boiling experiments were used for training and testing the machine learning models, which include an instance segmentation model and a classification model. These models were all trained and tested on in-house experimental data spanning multiple experiments. The following section describes the specifics of the pool boiling experiments and how the images and labels were generated.

Multiple boiling datasets were used for the training and utilization of the machine learning models presented in the paper, as summarized in Table I. These datasets include the authors' past steady-state boiling tests on a variety of surfaces, including polished copper,28 copper foams,28 and copper microchannels,29 and newly performed transient boiling tests. These datasets were chosen to cover a range of different properties, which are summarized in Table II. A detailed description of the pool boiling facility can be found in Ref. 28, and a general schematic can be found in the top left corner of Fig. 1. For the experiments, nine cartridge heaters (Omega Engineering HDC19102) were inserted into the base of the block for heating and powered by a DC power supply (Magna-Power SL200-7.5). Four thermocouples (Omega Engineering TJ36-CPSS-032U-6) were mounted equidistantly (0.1 in. apart) along the side of the copper block for approximating the heat flux at the surface using Fourier's law and regression. The nucleate boiling regime heat flux is estimated to be within 10 W/cm2 of the actual heat flux when taking into account uncertainty in thermocouple resolution, locations, and the linear regression approximation, based on our past uncertainty analysis.30 A high-speed camera (Phantom VEO 710L) was mounted directly outside the chamber, and a light source was placed on the opposite side.

FIG. 1.

Illustration of the BubbleID framework consisting of the instance segmentation model (Mask R-CNN), binary classification model, and tracking model (OC-SORT) for obtaining individual bubble, global, and dynamic features.

TABLE I.

Pool boiling datasets used for model training and testing.

Dataset ID | Type | Surface | Frame rate (fps) | Training amount | Testing amount | Source
Boiling-1 | Steady-state | Polished Cu | 3000 | 178 | 22 | Ref. 24
Boiling-2 | Steady-state | Cu microchannels | 3000 | 0 | 150 | Ref. 25
Boiling-3 | Steady-state | pH 0 copper foam | 3000 | 231 | 27 | Ref. 24
Boiling-4 | Steady-state | pH 10 copper foam | 3000 | 150 | 20 | Ref. 24
Boiling-5 | Transient | Polished Cu | 150 | 360 | 0 | Present work
Boiling-6 | Transient | Polished Cu | 150 | 452 | 48 | Present work
Boiling-7 | Transient | Cu microchannels | 150 | 13 | 0 | Present work
TABLE II.

Properties of pool boiling datasets.

Dataset ID | Surface | CHF (W/cm2) | Max heat transfer coefficient | Advancing contact angle | Receding contact angle | Contact angle hysteresis | Wicking flux
Boiling-1 | Polished Cu | 102.6 | 7.1 | 107.7 | 83.9 | 23.8 |
Boiling-2 | Cu microchannels | 203.5 | 10.7 | 152.2 | 142.3 | 9.9 |
Boiling-3 | pH 0 copper foam | 183.9 | 7.8 | | | | 30.7
Boiling-4 | pH 10 copper foam | 218.6 | 10.7 | | | | 41.8
Boiling-5 | Polished Cu | 136.2 | 7.0 | 107.7 | 83.9 | 23.8 |
Boiling-6 | Polished Cu | 107.4 | 7.7 | 107.7 | 83.9 | 23.8 |
Boiling-7 | Cu microchannels | 194.5 | 23.0 | 152.2 | 142.3 | 9.9 |

Datasets of two types of experiments were generated: transient and steady state. The transient cases consisted of increasing the power until rapid temperature spikes were identified (i.e., the CHF was reached) and then quickly turning off the heaters. Due to camera memory limitations, these cases used sampling rates of 150 frames per second in order to capture everything from the onset of nucleate boiling to the CHF in a continuous video. These transient cases were used for training and testing the segmentation model, since the frame rate was too low for tracking purposes. The steady-state tests involved running the experiment for extended periods at constant power. These cases used a high sampling rate of 3000 frames per second, since the videos could be shorter in duration and still capture the relevant features.

In total, 1384 images were labeled for training and 275 for testing. These images were taken from the different boiling experiment datasets shown in Table I using a Python script. For the steady-state datasets, the same number of images was randomly taken from each heat flux to cover a wide range of bubble sizes and types. For the transient datasets, images were taken at random. The images were used for training and testing an instance segmentation model. For the segmentation and classification models, the polygon tool in the LabelMe31 software was used to manually label all of the images. In this labeling, bubbles were outlined and categorized as either "attached" or "detached" based on whether or not the bubble was connected to the heater surface. The data were then saved in the MS COCO format as JSON files to be used for developing the models. For the segmentation model, the category labels "attached" and "detached" were both replaced with "bubble". Table I shows the number of images from each boiling experiment that were used for training and testing the segmentation model. As shown in Table I, annotations from one entire experiment, Boiling-2, were withheld from the training sets in order to test the model's performance on data from an experiment not used in training.
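As a concrete illustration, the relabeling step can be sketched as follows; the annotation dictionary here is a minimal hypothetical example of the MS COCO structure, not the authors' actual data.

```python
def merge_to_single_class(coco, merged_name="bubble"):
    """Collapse all category labels (e.g., "attached", "detached") into a
    single "bubble" class so the same annotations can be reused for the
    segmentation model."""
    merged = dict(coco)
    merged["categories"] = [{"id": 1, "name": merged_name}]
    merged["annotations"] = [
        {**ann, "category_id": 1} for ann in coco["annotations"]
    ]
    return merged

# Minimal MS COCO-style annotation dictionary (hypothetical example data).
coco = {
    "images": [{"id": 0, "file_name": "frame_000.png"}],
    "categories": [{"id": 1, "name": "attached"}, {"id": 2, "name": "detached"}],
    "annotations": [
        {"id": 0, "image_id": 0, "category_id": 1,
         "segmentation": [[0, 0, 5, 0, 5, 5]]},
        {"id": 1, "image_id": 0, "category_id": 2,
         "segmentation": [[10, 10, 15, 10, 15, 15]]},
    ],
}
seg_coco = merge_to_single_class(coco)
```

In practice, the merged dictionary would be written back out with `json.dump` and consumed by the segmentation training pipeline, while the original two-class file feeds the classifier.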

Three different types of models (i.e., instance segmentation, classification, and tracking) are paired to extract important information from the high-speed videos. These extracted features can be split into individual bubble features, global features, and dynamic image features. The individual bubble features are based on a single bubble. The global features refer to the characteristics of all bubbles in single frames. The dynamic features are based on image sequences and utilize the results of the tracking model with the segmentation model to represent full-frame temporal data. Figure 1 shows the overall model architecture. Image sequences are passed through the segmentation and classification models and the tracking model for extracting masks of bubbles and labeling the same bubble in consecutive frames. Then, this information is used to extract the different bubble features.

For the framework, an instance segmentation model is utilized. In general, there are two types of segmentation models: semantic and instance. Semantic segmentation classifies each pixel of an image as a specific class (i.e., bubble or background), while instance segmentation identifies objects and assigns a label to each object. Instance segmentation is used to distinguish between multiple objects that belong to the same class. For the work presented here, a pretrained instance segmentation model, Mask R-CNN,32 was used from the Detectron233 GitHub repository. Mask R-CNN is an extension of the Faster R-CNN object detection model. It was developed to address the task of instance segmentation, i.e., detecting objects while precisely delineating their boundaries. The Mask R-CNN model is made of several convolutional neural network (CNN) components. In general, a CNN works by establishing filters with random weights; through the training process, data are passed through the model, and these weights are iteratively adjusted to minimize a designated loss function (objective function). The CNNs that make up Mask R-CNN are as follows. First, an input image is passed through a CNN for extracting features. This output is passed through a region proposal network to generate candidate bounding boxes and identify objects. The region proposal network outputs and the initial feature maps are then passed through a region-of-interest alignment layer to align features. The aligned features are passed through another CNN to output the individual masks for the detected objects, as well as through additional layers to obtain the class of each object and the coordinates of its bounding box. In summary, the Mask R-CNN model takes image inputs and produces three separate outputs: individual masks of the objects, bounding boxes, and class labels for each object.
The boiling image training datasets whose preparation is described in Sec. II A were used for fine-tuning the model. To improve training and reduce overfitting, data augmentation was applied to the training set to artificially generate a larger dataset. This augmentation included brightness, contrast, and saturation adjustments and horizontal flips of images. The segmentation model identified all the bubbles in an image. The output of the model consisted of bounding boxes for each bubble, confidence values for each prediction, and binary masks for each identified bubble. The outputs of the segmentation model were then used for classifying bubble attachment status. To identify whether a bubble was attached to or had departed from the heater surface, a separate CNN classification model was trained in PyTorch. The classification model inputs were generated from the bubble masks output by the segmentation model. These masks were used to create three-channel images, with the first channel being the binary mask, the second channel being the original image with the mask applied, and the third channel being the original image with the inverse of the mask applied.
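The three-channel input construction can be sketched in plain Python; the real pipeline operates on full image arrays, and the 2x2 "image" below is a hypothetical stand-in.

```python
def build_classifier_input(image, mask):
    """Build the three-channel classifier input from a grayscale image and a
    binary bubble mask (nested lists of equal shape):
      ch1: the binary mask itself
      ch2: the image with the mask applied (pixels outside the bubble zeroed)
      ch3: the image with the inverse mask applied (bubble pixels zeroed)."""
    ch1 = [list(row) for row in mask]
    ch2 = [[px if m else 0 for px, m in zip(irow, mrow)]
           for irow, mrow in zip(image, mask)]
    ch3 = [[0 if m else px for px, m in zip(irow, mrow)]
           for irow, mrow in zip(image, mask)]
    return ch1, ch2, ch3

# Hypothetical 2x2 grayscale crop and the binary mask of one bubble.
image = [[10, 20], [30, 40]]
mask = [[1, 0], [0, 1]]
ch1, ch2, ch3 = build_classifier_input(image, mask)
```

The same construction applies per bubble: each segmented instance yields one three-channel sample for the attachment classifier.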

Outputs of the segmentation model were also coupled with a tracking model to enable assigning matching IDs to the same bubble in consecutive frames. In general, tracking is done by iterating through frames. At each frame, predicted bounding boxes from the segmentation model are passed through the tracking model to predict each box's location in the next frame. Then, in that next frame, these predicted locations are matched with the locations of the bounding boxes predicted by the segmentation model. Tracking is made difficult by several factors such as occlusion, noise, motion blur, deformation, etc. Simple online and real-time tracking (SORT)34 has been the base of several newer tracking models. SORT works by first using an object detection model to identify the location of the objects in each frame. Then, a Kalman filter is used to predict the next location of each object. The Kalman filter is a powerful algorithm used in many tracking applications for predicting the state of an object. Some common issues with SORT are insufficient tracking robustness under nonlinear motion and the accumulation of error when no observations are available to update the posterior. There is also a tradeoff between frame rate and performance: a higher frame rate is desired for the linear-motion approximation, but at higher frame rates the variance of the noise in the estimated object velocity is high, and this velocity noise accumulates in the position estimate. The noise of the state estimate likewise accumulates when there are no observations. Although SORT is a good baseline tracking model, it has room for improvement. Models such as DeepSORT35 and OC-SORT36 have been developed to address the issues seen with SORT. Observation-Centric SORT (OC-SORT) was proposed to improve performance under nonlinear motion and occlusion and is the model used in this work.
It includes a module named observation-centric re-update, which uses object state observations to reduce the error accumulated while a track is lost, and an observation-centric momentum module, which incorporates the directional consistency of tracks into the cost matrix used for association.
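The Kalman filter at the heart of SORT-style trackers can be illustrated with a minimal one-dimensional constant-velocity sketch. Actual trackers such as OC-SORT maintain a multi-dimensional bounding-box state; this toy version only shows the predict/update cycle and the hypothetical noise parameters are illustrative.

```python
class KalmanFilter1D:
    """Minimal 1D constant-velocity Kalman filter. State: [position, velocity]."""

    def __init__(self, x0, q=1e-2, r=1.0):
        self.x = [x0, 0.0]                      # position, velocity
        self.P = [[10.0, 0.0], [0.0, 10.0]]     # state covariance
        self.q, self.r = q, r                   # process / measurement noise

    def predict(self, dt=1.0):
        # Constant-velocity motion model: x' = F x with F = [[1, dt], [0, 1]].
        x, v = self.x
        self.x = [x + v * dt, v]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P' = F P F^T + Q.
        self.P = [[p00 + dt * (p10 + p01) + dt * dt * p11 + self.q,
                   p01 + dt * p11],
                  [p10 + dt * p11, p11 + self.q]]
        return self.x[0]

    def update(self, z):
        # Position-only measurement: H = [1, 0].
        y = z - self.x[0]                       # innovation
        s = self.P[0][0] + self.r               # innovation variance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s   # Kalman gain
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P' = (I - K H) P.
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]

# Track a (hypothetical) bubble centroid rising 2 px per frame.
kf = KalmanFilter1D(0.0)
for t in range(1, 20):
    kf.predict(dt=1.0)
    kf.update(2.0 * t)
```

After a short transient, the filter's velocity estimate settles near 2 px/frame, which is exactly the quantity SORT uses to extrapolate a box's location when the detector output is matched frame to frame.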

An important part of the proposed models is extracting physical meaning and features from the model outputs. Several different features can be extracted and computed from these types of machine learning models. These features are divided into individual bubble features, static global features, and dynamic features. The following describes how all the features are defined and how they are obtained from the machine learning segmentation, classification, and tracking models. For individual bubbles, many characteristics can be extracted. As a representation of bubble size, approximate bubble diameter was reported. This is defined as the diameter of a circle with the same area as the mask of the bubble and its formula is given below, where N is the number of pixels the bubble occupies based on the mask and α is a scale value that specifies the number of pixels in one cm,
D = (2/α)√(N/π).  (1)

Another result reported was the bubble interface morphology, which describes the location of the interface of a single bubble in a particular frame. This was achieved by taking the bubble masks produced by the Mask R-CNN model and using OpenCV's findContours function.37 The interface velocity of a bubble was presented as a new feature obtained through segmentation and tracking. This describes how the bubble is expanding. To obtain these velocity vectors along the bubble interface, first a single bubble was tracked through multiple frames. Then, an initial frame was taken with that bubble. Identically to the interface morphology extraction, the outline of the bubble was found by using OpenCV's findContours function on a mask of the bubble and choosing the largest contour. This function provides the coordinates of the contour of the mask. The same process was then applied to the same bubble 5 frames later, as identified through the tracking model. Next, the contour of the bubble at the initial frame was compared to the contour at the future frame. Using cKDTree from the SciPy38 library, the closest point on the second contour was matched to each point on the first contour. To convert these displacement vectors to velocity vectors, the distance in pixels was converted to physical distance by using the 1 cm wide heating surface as a reference, and the result was divided by the time between the two frames.
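The contour-matching step can be sketched as follows; a brute-force nearest-neighbor search stands in for SciPy's cKDTree, and the contours, frame gap, and pixel scale are hypothetical example values.

```python
def interface_velocity(contour_t0, contour_t1, dt, px_per_cm):
    """Approximate interface velocity vectors (cm/s): each point on the
    earlier contour is matched to its nearest point on the later contour.
    Brute-force nearest neighbor replaces SciPy's cKDTree for clarity."""
    vectors = []
    for (x0, y0) in contour_t0:
        # Nearest point on the later contour (squared Euclidean distance).
        x1, y1 = min(contour_t1,
                     key=lambda p: (p[0] - x0) ** 2 + (p[1] - y0) ** 2)
        # Pixel displacement -> physical distance -> velocity.
        vx = (x1 - x0) / px_per_cm / dt
        vy = (y1 - y0) / px_per_cm / dt
        vectors.append((vx, vy))
    return vectors

# Hypothetical example: an interface that moved 3 px to the right over
# 5 frames of a 3000 fps video, with 100 px spanning the 1 cm heater.
c0 = [(10, 0), (10, 5), (10, 10)]
c1 = [(13, 0), (13, 5), (13, 10)]
dt = 5 / 3000                      # seconds between the compared frames
v = interface_velocity(c0, c1, dt, px_per_cm=100)
```

Here each matched pair yields a velocity of 18 cm/s in the x direction: 3 px is 0.03 cm, traversed in 1/600 s.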

The global data refer to features that are generated using all the bubbles in a single frame. One of the simplest examples is the bubble count, i.e., the number of bubbles in each frame, obtained by summing the number of instances identified by the segmentation model. Vapor fraction describes the proportion of the frame occupied by vapor rather than liquid. Utilizing the classification and segmentation models, three representations of the vapor fraction were obtained. The first was obtained by taking the number of pixels containing a bubble and dividing it by the total number of pixels in the image. The second was obtained by taking the number of pixels containing bubbles classified as attached and dividing by the total number of pixels in the image. The third vapor fraction analysis reported the vapor fraction of a single bubble over time. This was achieved by using the tracking model to identify the same bubble from frame to frame and, for each frame, dividing the number of pixels that bubble occupied by the total number of pixels in the image.
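The first (whole-image) vapor fraction can be sketched as follows, assuming hypothetical per-bubble binary masks in place of the Mask R-CNN outputs.

```python
def vapor_fraction(masks, image_h, image_w):
    """Fraction of image pixels covered by any bubble. `masks` is a list of
    per-bubble binary masks, each a full-frame nested list of 0/1 values.
    A set of covered pixel coordinates avoids double-counting overlaps."""
    covered = set()
    for mask in masks:
        for i, row in enumerate(mask):
            for j, m in enumerate(row):
                if m:
                    covered.add((i, j))
    return len(covered) / (image_h * image_w)

# Two hypothetical bubble masks on a 4x4 frame, each covering 2 pixels.
m1 = [[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
m2 = [[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]]
vf = vapor_fraction([m1, m2], 4, 4)
```

The attached-only variant is the same computation restricted to the masks whose classifier output is "attached"; the per-bubble variant passes a single tracked mask per frame.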

Dynamic features are extracted over the time domain. The departure rate describes the frequency at which bubbles depart from the heater surface. The segmentation and classification models were used for determining the departure rate. Using the tracking model, each bubble was tracked through the video. Then, the classification results for each bubble through time were used. A departure event was defined when the status of a bubble was changed from attached to detached. To get the departure rate, this count of departed bubbles was divided by the duration of the clip.
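A sketch of this departure counting, assuming hypothetical per-frame classification strings in place of the model outputs:

```python
def departure_rate(status_by_frame, fps):
    """Count attached -> detached transitions across tracked bubbles and
    divide by the clip duration. `status_by_frame` maps a tracked bubble ID
    to its per-frame classification ("attached"/"detached")."""
    departures = 0
    n_frames = 0
    for statuses in status_by_frame.values():
        n_frames = max(n_frames, len(statuses))
        for prev, curr in zip(statuses, statuses[1:]):
            if prev == "attached" and curr == "detached":
                departures += 1
    duration = n_frames / fps          # clip duration in seconds
    return departures / duration

# Hypothetical tracks over a 6-frame clip at 3000 fps: bubbles 1 and 3
# depart once each; bubble 2 stays attached.
tracks = {
    1: ["attached", "attached", "detached", "detached", "detached", "detached"],
    2: ["attached"] * 6,
    3: ["attached", "attached", "attached", "attached", "detached", "detached"],
}
rate = departure_rate(tracks, fps=3000)
```

Two departures over a 2 ms clip give a rate of 1000 departures per second; real clips are of course much longer, so the estimate is correspondingly less noisy.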

To present the benefits of the proposed bubble dynamics analysis network, boiling experiments are conducted. Three metrics based on average precision (AP)39–41 are used to verify its performance. Computing AP essentially involves calculating the area under the precision-recall curve. This is done by summing up the products of precision and the change in recall at each threshold, as given in Eq. (4), where precision and recall are calculated by Eqs. (2) and (3). Precision is used to measure the accuracy of the model's positive predictions, while recall is used to measure the model's ability to capture true positives. In these equations, TP (true positive) refers to the cases where the model correctly identifies a positive instance as positive. FP (false positive) refers to the cases where the model incorrectly identifies a negative instance as positive. FN (false negative) refers to the cases where the model incorrectly identifies a positive instance as negative,
Precision = TP / (TP + FP),  (2)
Recall = TP / (TP + FN),  (3)
AP = Σ_n (R_n − R_{n−1}) P_n.  (4)
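These definitions can be sketched directly; the precision-recall points below are hypothetical illustrations, not the paper's results.

```python
def precision_recall(tp, fp, fn):
    """Eqs. (2) and (3): precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(points):
    """Eq. (4): sum of precision times the change in recall, with
    (precision, recall) points ordered by increasing recall."""
    ap, prev_recall = 0.0, 0.0
    for precision, recall in points:
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

# Hypothetical detector: 8 true positives, 2 false positives, 2 false negatives.
p, r = precision_recall(tp=8, fp=2, fn=2)
# Hypothetical precision-recall curve sampled at three score thresholds.
ap = average_precision([(1.0, 0.5), (0.8, 0.8), (0.6, 1.0)])
```

This is the interpolation-free form of the area under the precision-recall curve; COCO-style evaluators additionally average over IoU thresholds and apply precision interpolation.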

AP50 is the average precision calculated using an IoU (Intersection over Union) threshold of 0.5 when computing AP. IoU is a metric used to measure the overlap between the detected object and the ground truth, where an IoU of 0.5 means that a detection is considered correct if its overlap with the ground truth exceeds 50%. An IoU of 1 is a perfect prediction, while 0 means no overlap. Compared to AP50, AP75 employs a higher IoU threshold, thus requiring a greater overlap between the detection and the ground truth. With these three metrics, the effectiveness of the proposed model can be comprehensively evaluated.
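The IoU computation for axis-aligned bounding boxes can be sketched as follows:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap width/height are clamped at zero for disjoint boxes.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A detection shifted by half a box width relative to the ground truth:
# overlap 50 px^2 over a union of 150 px^2, i.e., IoU = 1/3.
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

An IoU of 1/3 falls below the 0.5 threshold, so this detection would count as a false positive under AP50 despite the visible overlap. Mask-level IoU is computed analogously over pixel sets rather than boxes.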

The evaluation results are provided in Table III. On Boiling-1, the model exhibited a high AP of 50.825%, reaching 74.892% in AP50 (IoU = 0.5), indicating good detection capability under a relatively lenient IoU threshold. The AP75 (IoU = 0.75) result of 57.668% suggests the model's ability to accurately segment bubbles under stricter conditions. Concerning the identification of different types of bubbles, the average precision for attached bubbles was 63.186%, whereas for detached bubbles it was lower at 38.465%, reflecting the greater difficulty of recognizing and tracking detached bubbles against complex backgrounds. The results on Boiling-2 were lower, with an overall AP of 43.557%, AP50 of 72.706%, and AP75 of 44.828%. This disparity can be attributed to the fact that the model was trained without any images from the Boiling-2 dataset, which was used entirely as a test set. That the model still achieved a high AP50 without any Boiling-2 images in the training set further demonstrates its generalization ability.

TABLE III.

The performance of the proposed machine learning model for bubble object detection and segmentation.

Boiling test | AP | AP50 | AP75
Boiling-1 | 50.825 | 74.892 | 57.668
Boiling-2 | 43.557 | 72.706 | 44.828
The additional attachment classification CNN model was also tested. Accuracy was used as a performance metric, which is calculated using Eq. (5). The model was found to perform well, achieving a classification accuracy of 94.96% on the test data,
Accuracy = (TP + TN) / (TP + TN + FP + FN).  (5)

To analyze the effectiveness of the tracking model under different bubble morphologies, videos at 15, 60, and 120 W in Boiling-1 were processed as benchmarks. Each video consisted of 2093 frames, originally recorded at 3000 fps. Three key metrics, Multiple Object Tracking Accuracy (MOTA), Multiple Object Tracking Precision (MOTP), and ID F1 score (IDF1), were used to evaluate the model's performance. As seen in Table IV, there is a notable improvement in all three metrics as the heating power increases. In particular, the metrics improve significantly when the heating power is increased from 15 to 60 W, whereas the improvement from 60 to 120 W is relatively small. These results suggest that videos captured at higher heating powers yield higher tracking accuracy and precision. Observations of the boiling videos indicate that at lower heating power, bubbles tend to be small, numerous, and discrete, and their variation is complex and diverse. When the heating power is increased, the bubbles predominantly assume a large mushroom-like shape in smaller numbers, and the frequency of separation or merger is much lower than at lower heating powers, thereby enhancing the tracking model's performance. We also investigated the impact of frame rate on the tracking model's performance. For the 15 W videos, samples were taken at 3000, 1500, and 750 fps. As illustrated in Table IV, higher frame rates effectively enhance the tracking model's performance.

TABLE IV.

Tracking model analysis.

                  150 fps                       15 W
Metrics    15 W    60 W    120 W     3000 fps   1500 fps   750 fps
MOTA       52.6    82.3    87.4      79.2       73.3       67.2
MOTP       64.7    90.6    92.5      84.5       83.6       80.3
IDF1       50.2    75.5    80.9      69.3       64.7       68.0
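The three tracking metrics reduce to simple ratios over accumulated per-frame counts. A minimal sketch using the standard multi-object-tracking definitions (not necessarily the exact evaluation code used here):

```python
def mota(misses, false_positives, id_switches, gt_objects):
    # MOTA = 1 - (FN + FP + identity switches) / total ground-truth objects, in percent
    return 100.0 * (1.0 - (misses + false_positives + id_switches) / gt_objects)

def motp(total_overlap, num_matches):
    # MOTP: average localization quality (e.g., IoU) over all matched detections
    return 100.0 * total_overlap / num_matches

def idf1(idtp, idfp, idfn):
    # IDF1: F1 score computed over identity-preserving matches
    return 100.0 * 2 * idtp / (2 * idtp + idfp + idfn)

print(mota(10, 5, 5, 100))  # → 80.0
```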

Multi-object tracking for boiling feature extraction is not new, as other groups have utilized tracking for bubble analysis. Chang et al.26 employed VISION-iT to predict the flow boiling heat flux. Lee et al.27 introduced a hybrid Ni/CuO NW surface for better heating performance and showcased its efficacy using VISION-iT. However, the code of VISION-iT is not open access, hindering direct comparison with our approach. Since VISION-iT's tracking model utilizes a cell tracking package based on TrackPy, we compared its tracking performance with ours on the Boiling-1, 120 W, 150 fps videos. TrackPy achieves a MOTA of 66.5, while ours achieves a higher MOTA of 87.4. Additionally, our method provides the ability to extract the approximate interface velocity, which none of the past methods have done.

Based on the segmentation and tracking results, both static and dynamic analyses of bubbles can be made. Figure 2 shows examples of the data obtained from the framework: individual bubble information in Fig. 2(a), global (spatially averaged) information in Fig. 2(b), and temporal-spatial (dynamic) information in Fig. 2(c). In Fig. 2(a), the detection, segmentation, and classification results for one frame taken from Boiling-1 at a heat flux of 13.97 W/cm2 are provided. Using the proposed model, all bubbles are classified into two categories, attached and departed, marked in red and green in the leftmost image and labeled 0 and 1 in the rightmost table, respectively. The model segments and classifies the bubbles well. By performing static analysis on the image, we can obtain the diameter, interface morphology, and interface velocity of individual bubbles. The diameter distribution is provided in a histogram in the middle of Fig. 2(a): the frequency of bubbles in the 0-5 mm range first increases and then decreases, peaking at around 2 mm. In the rightmost table of Fig. 2(a), we list parameters for four bubbles, some attached to the heating surface and some departed. The interface morphology and velocity of bubbles are elaborated in the subsequent discussion. Global information in Fig. 2(b) refers to information computed over all bubbles in a single frame, including the bubble count and vapor fraction. The vapor fraction equation is given below,
Vapor fraction = (Σ_i A_i) / A_image,  (6)

where A_i is the segmented area of bubble i in the frame and A_image is the total image area.
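Given the instance masks from the segmentation model, the vapor fraction is a pixel-area ratio. A minimal sketch, with the assumption (not stated in the text) that overlapping masks are unioned so shared pixels are not counted twice:

```python
import numpy as np

def vapor_fraction(masks):
    """Vapor fraction per Eq. (6): bubble (vapor) pixel area over total image area.

    masks: (N, H, W) boolean instance masks from the segmentation model.
    """
    masks = np.asarray(masks, dtype=bool)
    vapor_pixels = np.any(masks, axis=0)  # union of all bubble masks
    return vapor_pixels.sum() / vapor_pixels.size

# 4x4 image with two non-overlapping 2x2 bubbles -> 8 / 16 = 0.5
m = np.zeros((2, 4, 4), dtype=bool)
m[0, :2, :2] = True
m[1, 2:, 2:] = True
print(vapor_fraction(m))  # → 0.5
```

Passing only the masks classified as attached would yield the attached vapor fraction discussed below; passing all masks yields the total vapor fraction.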
FIG. 2.

BubbleID-extracted features showing (a) individual bubble features including bubble ID, diameter, attachment status, and interface morphologies of each individual bubble, (b) spatial-averaged information including bubble count, attached vapor fraction, and total vapor fraction of each frame, and (c) dynamic features including the bubble departure rate. Example features in (a) and (b) are from Boiling-1 at a heat flux of 13.97 W/cm2 and data for (c) are from Boiling-1 and Boiling-2.


Here, we provide the bubble count at four different time points, along with the attached vapor fraction and total vapor fraction, as shown in Fig. 2(b). The model's ability to distinguish between attached and departed bubbles enables the automatic acquisition of both the attached and total vapor fractions. The motivation behind this distinction is that a detached bubble plays less of a role in heat transfer than an attached one. The framework also enables the calculation of bubble departure rates. As shown in Fig. 2(c), the trend of bubble departure rate with heat flux is depicted for two heating surfaces, namely, Plain Cu and Microchannel. The departure rate and heat flux exhibit an approximately inverse relationship, and both surfaces show similar rates at around the same heat fluxes. This is an interesting observation because the two experiments had different critical heat fluxes: 102 W/cm2 for the Plain Cu surface and 203 W/cm2 for the Microchannel surface. The bubble count and vapor fraction for four different boiling tests are shown in Fig. 3. These consist of two steady-state cases and two transient cases. The steady-state cases show the mean bubble count and vapor fraction over entire videos at each heat flux. The transient cases show the per-video bubble count and vapor fraction in gray and a rolling average of these values in dark blue, alongside the corresponding heat flux vs time plots. In both types of experiments, the bubble count generally decreases as the heat flux increases within the nucleate boiling regime. The average vapor fraction follows the heat flux: as the heat flux rises and falls, the average vapor fraction does the same. Moreover, as the heat flux increases, the variation of the vapor fraction also increases.
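The rolling averages plotted for the transient cases can be reproduced with a simple centered moving average. A sketch, where the window length and the sample counts are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def rolling_mean(signal, window):
    # Centered moving average used to smooth noisy per-frame quantities
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="valid")

frame_counts = np.array([10.0, 12, 11, 30, 12, 11, 10])  # hypothetical per-frame bubble counts
print(rolling_mean(frame_counts, 3))
```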

FIG. 3.

BubbleID-extracted bubble count and vapor fraction for (left) steady-state experimental data and (right) transient experimental data. The steady-state data are from Boiling-1 and Boiling-2. The transient data are from Boiling-5 and Boiling-6.

To demonstrate the effectiveness of the extracted features, the boiling images and extracted static bubble features were used as inputs to a machine learning regression model for predicting the heat flux. A hybrid model similar to that of Suh et al.42 was adopted and trained on boiling images and the corresponding extracted bubble features. The vapor fraction, bubble count, and minimum and maximum bubble size extracted from the segmentation model were used as input features. The mean absolute percentage error (MAPE), calculated using Eq. (7), was used to assess the model's performance. The model achieved a low MAPE of 3.99% on the steady-state testing data,
MAPE = (100%/n) Σ_{i=1}^{n} |y_i − ŷ_i| / y_i,  (7)

where y_i is the measured heat flux, ŷ_i is the predicted heat flux, and n is the number of samples.
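The MAPE metric of Eq. (7) can be sketched directly; the heat flux values below are hypothetical, for illustration only:

```python
import numpy as np

def mape(y_true, y_pred):
    # Eq. (7): mean absolute percentage error, in percent
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

# Hypothetical measured vs predicted heat fluxes (W/cm2)
print(mape([2.0, 4.0], [1.0, 3.0]))  # → 37.5
```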

Through the proposed method, the dynamic growth characteristics of bubbles can be measured. As shown in Fig. 4(a), the velocity direction of the bubble's interface at different positions is illustrated. With the instance segmentation of the bubble, obtaining the bubble's edge becomes feasible, and based on the acquired edges and tracking labels, a wealth of dynamic growth information can be obtained quickly. In Fig. 4(b), we provide a reference direction for the bubble's interface velocity: a vector on the interface points either into or out of the bubble, and those pointing outward are defined as positive. Figure 4(c) defines how positions on the bubble's interface are described. In this study, the position of a velocity vector is described in terms of the relative perimeter: for each bubble, position 0 is defined at the middle of the bubble's bottom, and the relative perimeter increases counter-clockwise around the bubble. Figure 4(d) displays the dynamic variation of the bubble's interface velocity over time for an example case from Boiling-1 at a heat flux of 102.63 W/cm2. A Gaussian filter was applied to smooth the plot and highlight changes, and the color bar is limited to the range of −30 to 30 cm/s to highlight the different velocities. The vertical axis is based on the annotations in Fig. 4(c). From Fig. 4, it can be observed that the interface velocity around 1/2 of the bubble's perimeter is the highest and that the interface velocity increases with time. The range of interface positions with positive velocities gradually increases until the bubble detaches from the heating surface. Similarly, the velocity of the interface near the heating surface changes from approximately zero to a large negative magnitude when the bubble detaches.
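BubbleID's exact interface-velocity computation is not reproduced here; the sketch below shows one plausible implementation, matching each contour point to its nearest neighbor on the next frame's contour and projecting the displacement onto the outward radial direction from the centroid (a simple stand-in for the local interface normal). The function name and the calibration parameters are hypothetical:

```python
import numpy as np

def signed_interface_velocity(c0, c1, centroid, dt, px_per_cm):
    """Approximate signed interface speed (cm/s) at each point of contour c0.

    c0, c1: (N, 2) and (M, 2) arrays of (x, y) contour points of the same
    tracked bubble in consecutive frames. Sign convention as in Fig. 4(b):
    motion directed outward (away from the bubble) is positive.
    """
    c0, c1 = np.asarray(c0, float), np.asarray(c1, float)
    # Match each point of c0 to its nearest neighbor on the next contour
    dists = np.linalg.norm(c0[:, None, :] - c1[None, :, :], axis=2)
    matched = c1[np.argmin(dists, axis=1)]
    disp = matched - c0                               # displacement in pixels per frame
    outward = c0 - np.asarray(centroid, float)        # outward radial direction
    outward /= np.linalg.norm(outward, axis=1, keepdims=True)
    signed_px = np.einsum("ij,ij->i", disp, outward)  # project onto outward direction
    return signed_px / px_per_cm / dt

# A unit "bubble" uniformly expanding to twice its size over one frame interval
c0 = np.array([[1.0, 0], [0, 1], [-1, 0], [0, -1]])
v = signed_interface_velocity(c0, 2 * c0, centroid=(0, 0), dt=1.0, px_per_cm=1.0)
print(v)  # → [1. 1. 1. 1.]
```

To place each velocity on the vertical axis of Fig. 4(d), the contour would additionally be parameterized by relative perimeter, starting at the bottom-middle point and proceeding counter-clockwise, as defined in Fig. 4(c).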

FIG. 4.

Bubble interface dynamics analysis showing (a) bubble interface contour and velocity vectors, (b) definition of velocity signs (vectors pointing outside of bubble are defined as positive, while vectors pointing inside bubble are defined as negative), (c) definition of locations on the bubble perimeter, which is used as the y-axis in (d), and (d) interface velocity profiles over time with representative images. This is an example case from Boiling-1 at a heat flux of 102.63 W/cm2.


To further demonstrate the use of the bubble dynamic analysis, a comparative visualization of bubble behavior and dynamics before and after the critical heat flux (CHF) is provided in Fig. 5. These example cases are from Boiling-1 at heat fluxes of 93.92 W/cm2 (pre-CHF) and 102.63 W/cm2 (post-CHF). For each case, the dynamic changes in the maximum velocity magnitude of a bubble interface over time, the velocity magnitude along the bubble perimeter over time, and the velocity magnitude distribution pattern over time are provided for one bubble. The maximum velocity magnitude is defined as the maximum absolute velocity magnitude of the bubble in a single frame. The bubble vapor fraction is defined as the ratio of the bubble of interest's area in the image to the entire image area. These figures present a marked difference in bubble dynamics before and after CHF is reached. Before CHF, the rising curve of the maximum bubble velocity magnitude is more stable, whereas post-CHF it reveals more significant fluctuations. More notably, the peak maximum interface velocity magnitude at bubble departure pre-CHF is almost triple that of post-CHF. The velocity magnitude along the bubble perimeter over time also shows different growth patterns: during the rising period, both cases exhibit concave growth curves with different curvatures, but the inflection of the bubble vapor fraction pre-CHF shows a smoother decline, while the transition post-CHF is more abrupt. From the velocity magnitude distribution spectrum and the actual bubble growth-departure images, the difference between the two cases is readily apparent: pre-CHF, the morphology of bubbles undergoes more pronounced changes over time, whereas post-CHF, bubble growth becomes more stable and predictable.
Therefore, it can be summarized that employing the proposed bubble segmentation-tracking model and interface velocity has great potential for identifying or distinguishing the pre/post-CHF state.

FIG. 5.

Comparison between bubble dynamics before CHF (top panel) and after CHF (bottom panel) showing the velocity magnitude versus time, bubble vapor fraction versus time, and velocity profiles with representative bubble images. The pre-CHF test cases are from Boiling-1 at a heat flux of 93.92 W/cm2 and the post-CHF test cases are from Boiling-1 at a heat flux of 102.63 W/cm2.


Based on the experimental results, the following conclusions can be drawn. (1) The proposed machine learning framework performs boiling bubble detection, segmentation, classification, and tracking. (2) From individual bubble analysis, the bubble diameter and its per-frame distribution can be found. The global analysis allows calculation of the vapor fraction both for bubbles attached to the heater surface only and for all bubbles in a frame. Dynamic analysis is enabled by determining the departure rate through the use of the class labels. (3) The proposed metric, interface velocity, is used for bubble dynamic analysis. Using this metric, the bubble growth-departure features can be comprehensively evaluated. The experimental results suggest that the interface velocity at different locations on the bubble exhibits different temporal patterns. Furthermore, qualitative differences in dynamic features between pre-CHF and post-CHF are observed.

Overall, this work, in particular the interface velocity vectors, has potential for use in verifying boiling CFD models. The model performs well on additional datasets from the same experimental setup not included in training and is expected to perform well on external datasets after further fine-tuning with a small set of annotated images from the new setup. Future work will include further increasing the segmentation and tracking precision and improving the generalizability of the models. A deeper analysis of bubble dynamics using machine learning and physical modeling based on the extracted features should also be carried out for a more fundamental understanding of bubble dynamics.

This work was supported by the National Science Foundation (NSF) Grant No. CBET-2323022. The authors would like to thank Professor Ashif Iquebal and Ridwan Olabiyi at Arizona State University for their valuable discussion about this work. The authors are grateful to Amanda Williams and Ethan Weems at the University of Arkansas for their assistance in annotating boiling images.

The authors have no conflicts to disclose.

C. Dunlap: Conceptualization (equal); Data curation (equal); Formal analysis (lead); Funding acquisition (lead); Investigation (lead); Methodology (equal); Project administration (lead); Resources (equal); Software (lead); Supervision (lead); Validation (equal); Visualization (lead); Writing – original draft (lead); Writing – review & editing (equal). C. Li: Conceptualization (equal); Data curation (equal); Formal analysis (lead); Investigation (lead); Methodology (equal); Software (lead); Validation (equal); Visualization (supporting); Writing – original draft (supporting); Writing – review & editing (equal). H. Pandey: Data curation (equal). N. Le: Methodology (equal); Resources (supporting); Supervision (supporting); Visualization (supporting); Writing – original draft (supporting); Writing – review & editing (supporting). H. Hu: Conceptualization (equal); Funding acquisition (lead); Methodology (supporting); Project administration (lead); Resources (equal); Supervision (lead); Validation (equal); Visualization (supporting); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. Q. Zhang, Z. Meng, X. Hong, Y. Zhan, J. Liu, J. Dong, T. Bai, J. Niu, and M. J. Deen, "A survey on data center cooling systems: Technology, power consumption modeling and control strategy optimization," J. Syst. Architect. 119, 102253 (2021).
2. C. Roe, X. Feng, G. White, R. Li, H. Wang, X. Rui, C. Li, F. Zhang, V. Null, and M. Parkes, "Immersion cooling for lithium-ion batteries—A review," J. Power Sources 525, 231094 (2022).
3. Y. K. Prajapati and P. Bhandari, "Flow boiling instabilities in microchannels and their promising solutions—A review," Exp. Therm. Fluid Sci. 88, 576–593 (2017).
4. S. Fan and F. Duan, "A review of two-phase submerged boiling in thermal management of electronic cooling," Int. J. Heat Mass Transfer 150, 119324 (2020).
5. N. Zuber, Hydrodynamic Aspects of Boiling Heat Transfer (United States Atomic Energy Commission, Technical Information Service, 1959).
6. G. Hazi and A. Markus, "On the bubble departure diameter and release frequency based on numerical simulation results," Int. J. Heat Mass Transfer 52(5–6), 1472–1480 (2009).
7. P. W. McFadden and P. Grassmann, "The relation between bubble frequency and diameter during nucleate pool boiling," Int. J. Heat Mass Transfer 5(3–4), 169–173 (1962).
8. S. Gong and P. Cheng, "Lattice Boltzmann simulation of periodic bubble nucleation, growth and departure from a heated surface in pool boiling," Int. J. Heat Mass Transfer 64, 122–132 (2013).
9. W. Nakayama, T. Daikoku, H. Kuwahara, and T. Nakajima, "Dynamic model of enhanced boiling heat transfer on porous surfaces—Part I: Experimental investigation," J. Heat Transfer 102(3), 445–450 (1980).
10. J. P. McHale and S. V. Garimella, "Nucleate boiling from smooth and rough surfaces—Part 2: Analysis of surface roughness effects on nucleate boiling," Exp. Therm. Fluid Sci. 44, 439–455 (2013).
11. J. P. McHale and S. V. Garimella, "Bubble nucleation characteristics in pool boiling of a wetting liquid on smooth and rough surfaces," Int. J. Multiphase Flow 36(4), 249–260 (2010).
12. A. Zou, A. Chanana, A. Agrawal, P. C. Wayner, and S. C. Maroo, "Steady state vapor bubble in pool boiling," Sci. Rep. 6(1), 1–8 (2016).
13. O. Oikonomidou, M. Kostoglou, S. Evgenidis, X. Zabulis, P. Karamaounas, A. Sielaff, M. Schinnerl, P. Stephan, and T. Karapantsios, "Power law exponents for single bubbles growth in nucleate pool boiling at zero gravity," Int. Commun. Heat Mass Transfer 150, 107175 (2024).
14. O. Oikonomidou, S. Evgenidis, C. Argyropoulos, X. Zabulis, P. Karamaounas, M. Q. Raza, J. Sebilleau, F. Ronshin, M. Chinaud, and A. I. Garivalis, "Bubble growth analysis during subcooled boiling experiments on-board the international space station: Benchmark image analysis," Adv. Colloid Interface Sci. 308, 102751 (2022).
15. A. K. Sadaghiani, R. Altay, H. Noh, H. J. Kwak, K. Şendur, B. Mısırlıoğlu, H. S. Park, and A. Koşar, "Effects of bubble coalescence on pool boiling heat transfer and critical heat flux—A parametric study based on artificial cavity geometry and surface wettability," Int. J. Heat Mass Transfer 147, 118952 (2020).
16. L. R. Villegas, D. Colombet, P. Guiraud, D. Legendre, S. Cazin, and A. Cockx, "Image processing for the experimental investigation of dense dispersed flows: Application to bubbly flows," Int. J. Multiphase Flow 111, 16–30 (2019).
17. A. Rokoni, L. Zhang, T. Soori, H. Hu, T. Wu, and Y. Sun, "Learning new physical descriptors from reduced-order analysis of bubble dynamics in boiling heat transfer," Int. J. Heat Mass Transfer 186, 122501 (2022).
18. S. Torisaki and S. Miwa, "Robust bubble feature extraction in gas-liquid two-phase flow using object detection technique," J. Nucl. Sci. Technol. 57(11), 1231–1244 (2020).
19. S. Zhang, H. Li, K. Wang, and T. Qiu, "Accelerating intelligent microfluidic image processing with transfer deep learning: A microchannel droplet/bubble breakup case study," Sep. Purif. Technol. 315, 123703 (2023).
20. A. N. Chernyavskiy and I. P. Malakhov, "CNN-based visual analysis to study local boiling characteristics," J. Phys. Conf. Ser. 2119, 12068 (2021).
21. Y. Cui, C. Li, W. Zhang, X. Ning, X. Shi, J. Gao, and X. Lan, "A deep learning-based image processing method for bubble detection, segmentation, and shape reconstruction in high gas holdup sub-millimeter bubbly flows," Chem. Eng. J. 449, 137859 (2022).
22. J. H. Seong, M. Ravichandran, G. Su, B. Phillips, and M. Bucci, "Automated bubble analysis of high-speed subcooled flow boiling images using U-net transfer learning and global optical flow," Int. J. Multiphase Flow 159, 104336 (2023).
23. J. Soibam, V. Scheiff, I. Aslanidou, K. Kyprianidis, and R. B. Fdhila, "Application of deep learning for segmentation of bubble dynamics in subcooled boiling," Int. J. Multiphase Flow 169, 104589 (2023).
24. D.-C. Cheng and H. Burkhardt, "Bubble tracking in image sequences," Int. J. Therm. Sci. 42(7), 647–655 (2003).
25. Y. Suh, S. Chang, P. Simadiris, T. B. Inouye, M. J. Hoque, S. Khodakarami, C. Kharangate, N. Miljkovic, and Y. Won, "VISION-iT: A framework for digitizing bubbles and droplets," Energy AI 15, 100309 (2024).
26. S. Chang, Y. Suh, C. Shingote, C.-N. Huang, I. Mudawar, C. Kharangate, and Y. Won, "BubbleMask: Autonomous visualization of digital flow bubbles for predicting critical heat flux," Int. J. Heat Mass Transfer 217, 124656 (2023).
27. J. Lee, Y. Suh, M. Kuciej, P. Simadiris, M. T. Barako, and Y. Won, "Deep vision-inspired bubble dynamics on hybrid nanowires with dual wettability," arXiv:2202.09417 (2022).
28. H. Pandey, H. Mehrabi, A. Williams, C. Mira-Hernandez, R. H. Coridan, and H. Hu, "Acoustic sensing for investigating critical heat flux enhancement during pool boiling on electrodeposited copper foams," Appl. Therm. Eng. 236, 121807 (2024).
29. H. Pandey, X. Du, E. Weems, S. Pierson, A. Al-hmoud, Y. Zhao, and H. Hu, "Two-phase immersion cooler for medium-voltage silicon carbide MOSFETs," in 2024 23rd IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm), Denver, CO (IEEE, 2024).
30. C. Dunlap, H. Pandey, E. Weems, and H. Hu, "Nonintrusive heat flux quantification using acoustic emissions during pool boiling," Appl. Therm. Eng. 228, 120558 (2023).
31. K. Wada, "labelme: Image Polygonal Annotation with Python," GitHub repository (2018).
32. K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," in Proceedings of the IEEE International Conference on Computer Vision (ICCV, 2017), pp. 2961–2969.
33. A. Kirillov, Y. Wu, K. He, and R. Girshick, "PointRend: Image segmentation as rendering," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2020), pp. 9799–9808.
34. A. Bewley, Z. Ge, L. Ott, F. Ramos, and B. Upcroft, "Simple online and realtime tracking," in 2016 IEEE International Conference on Image Processing (ICIP) (IEEE, 2016), pp. 3464–3468.
35. N. Wojke, A. Bewley, and D. Paulus, "Simple online and realtime tracking with a deep association metric," in 2017 IEEE International Conference on Image Processing (ICIP) (IEEE, 2017), pp. 3645–3649.
36. J. Cao, J. Pang, X. Weng, R. Khirodkar, and K. Kitani, "Observation-centric SORT: Rethinking SORT for robust multi-object tracking," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2023), pp. 9686–9696.
37. G. Bradski, "The OpenCV library," in Dr. Dobb's Journal of Software Tools, edited by T. Kientzle (People's Computer Company, 2000).
38. P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, and J. Bright, "SciPy 1.0: Fundamental algorithms for scientific computing in Python," Nat. Methods 17(3), 261–272 (2020).
39. R. Padilla, S. L. Netto, and E. A. B. Da Silva, "A survey on performance metrics for object-detection algorithms," in International Conference on Systems, Signals, and Image Processing (IWSSIP, 2020).
40. C. Li, L. Guo, Y. Lei, H. Gao, and E. Zio, "A signal segmentation method for CFRP/CFRP stacks drilling-countersinking monitoring," Mech. Syst. Signal Process. 196, 110332 (2023).
41. C. Li, Y. Lei, Z. You, L. Guo, E. Zio, and H. Gao, "Vision-based defect measurement of drilled CFRP composites using double-light imaging," IEEE Trans. Instrum. Meas. 72, 1–9 (2023).
42. Y. Suh, R. Bostanabad, and Y. Won, "Deep learning predicts boiling heat transfer," Sci. Rep. 11(1), 1–10 (2021).