X-ray computed microtomography (μCT) is a versatile, nondestructive technique that has been used extensively to investigate bio-based systems in multiple application areas. Progress in this field has brought countless studies using μCT characterization, revealing three-dimensional (3D) material structures and quantifying features such as defects, pores, secondary phases, filler dispersions, and internal interfaces. Recently, x-ray computed tomography (CT) beamlines coupled to synchrotron light sources have also enabled computed nanotomography (nCT) and four-dimensional (4D) characterization, allowing in situ, in vivo, and in operando characterization from the micro- to the nanostructure. This increase in temporal and spatial resolution produces a deluge of data to be processed, including in real time to provide feedback during experiments. To overcome this issue, deep learning techniques have emerged as a powerful tool that permits the automation of large-scale data processing, making full use of beamline capabilities. In this context, this review outlines applications, synchrotron capabilities, and data-driven processing, focusing on the urgency of combining computational tools with experimental data. We offer a recent overview of this topic for researchers and professionals working in this and related areas, as well as for readers making their first contact with x-ray CT techniques and deep learning.

X-ray computed tomography (CT) is a powerful noninvasive technique that delves into the structure of materials and reveals details of their internal morphology in three spatial dimensions (3D).1 Initially, x-ray CT was employed as a pillar of medical applications, acquiring images with a spatial resolution on the millimeter scale.2–5 At the beginning of the 1980s, the first x-ray computed microtomography (μCT) images were obtained from biological structures,6 reaching a spatial resolution on the μm scale. This improvement enabled morphometric analysis and quantification that were impractical to perform with other imaging techniques.1,6–18 In the same decade, the emergence of synchrotron light sources made it possible to map the first images of organisms using monochromatic x-rays from small sources.5,19 The continuous evolution of synchrotron beamlines has now reached the nanoscale imaging of materials, opening new perspectives in CT.14 In the early 2000s, μCT was expanded to bio-based applications to investigate plants, animals, soils, and synthetic biomaterials,19,20 considerably increasing the number of publications related to this technique.21,22 Recently, the extreme brilliance of synchrotron sources has opened new possibilities in time-resolved x-ray tomography (4DCT) and allowed a new understanding of these systems through in situ, in vivo, and in operando characterizations.14,23,24 In this scenario, the first fourth-generation synchrotron light sources have emerged as an unprecedented characterization tool to perform nano- and time-resolved CT, their high brightness and coherence enabling higher resolution and faster data acquisition (Fig. 1, at the top of the image).25 

FIG. 1.

Overview timeline on landmarks of x-ray CT and computational methods for data processing. Medicine application image: Reproduced with permission from M. M. Ter-Pogossian, Semin. Nucl. Med. 7, 109 (1977). Copyright 1977 Elsevier.34 Snail shell image: Reproduced with permission from J. C. Elliott and S. D. Dover, J. Microsc. 126, 211 (1982). Copyright 2011 John Wiley and Sons.6 Pig heart image: Reproduced with permission from Thompson et al., Nucl. Instrum. Methods Phys. Res. 222, 319 (1984). Copyright 1984 Elsevier.5 Algae image: Reproduced with permission from Bianco-Stein et al., Adv. Sci. 7, 2000108 (2020). Copyright 2020 Authors, licensed under a Creative Commons Attribution (CC-BY 4.0) License.35 Beer foam images: Reproduced with permission from Merkle et al., J. Microsc. 277, 197 (2020). Copyright 2020 John Wiley and Sons.36 Setup image reproduced with permission from Poologasundarampillai et al., Mater. Today Adv. 2, 100011 (2019). Copyright 2019 Authors, licensed under a Creative Commons Attribution (CC-BY 4.0) License.37 MOGNO/SIRIUS synchrotron beamline scheme (Reproduced from https://lnls.cnpem.br/facilities/mogno-en/). Binary classification scheme from R. S. Cintra and H. F. de Campos Velho, Advanced Applications for Artificial Neural Networks. Copyright 2018 Intechopen. Reproduced with permission from Intechopen.38 LeNet architecture illustration reproduced with permission from Kim et al., Sensors 17, 2834 (2017). Copyright 2017 Authors, licensed under a Creative Commons Attribution (CC-BY 4.0) License.39 3D U-net scheme reproduced with permission from Safonov et al., Computers 8, 72 (2019). Copyright 2019 Authors, licensed under a Creative Commons Attribution (CC-BY 4.0) License.40 


Concurrently with x-ray CT advances, machine learning (ML) and neural networks have developed and culminated in solutions for challenging problems related to model estimation on large datasets, such as those produced by x-ray CT at synchrotron beamlines. In 1958, the perceptron emerged as the first supervised learning algorithm for binary classification tasks, built around a linear prediction function.26 This development is considered the predecessor of neural networks. Then, backpropagation appeared as an effective algorithm to train feedforward neural networks by estimating the weights and biases of the model.27 In the following decades, neural networks evolved into more complex architectures, such as the multilayer perceptron, consisting of fully connected layers of neurons, and the LeNet architecture, a successful case of a convolutional neural network (CNN).28,29 Next, the dropout regularization technique was proposed to improve the training stage of neural networks by reducing overfitting problems.30 Finally, deep learning (DL) emerged as a powerful computational tool, allowing the design of end-to-end solutions for visual recognition tasks that demand both representation learning and predictive classification, such as the U-net architecture.31 Recently, neural networks have increased their ability to accurately model large datasets by adopting attention mechanisms, inspired by human cognitive attention,32 and self-supervised/unsupervised learning, which aims to reduce the dependence on labeled data (Fig. 1, at the bottom of the image).33 
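As a concrete illustration of the perceptron's linear prediction function and its error-driven update rule, the following minimal sketch trains a single perceptron on a linearly separable toy problem (the function names, toy data, and hyperparameters are our own illustration, not drawn from the cited works):

```python
def perceptron_train(samples, labels, epochs=20, lr=0.1):
    """Rosenblatt-style perceptron training for binary classification.

    samples: sequence of feature tuples; labels: 0/1 class labels.
    Returns the learned weights and bias of the linear prediction function.
    """
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Linear prediction followed by a hard threshold (step activation)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            # Error-driven update: nudge the decision boundary toward
            # misclassified samples; correct samples leave w and b unchanged
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b


def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0


# Linearly separable toy problem: the logical AND function
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = perceptron_train(X, y)
```

By the perceptron convergence theorem, the loop above reaches a separating boundary for any linearly separable dataset; for nonseparable data (e.g., XOR) it never converges, one motivation for the multilayer architectures mentioned in the text.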

This review aims to compile introductory and recent literature to inspire and guide non-expert users of x-ray CT, as well as those with basic expertise, to advance toward state-of-the-art synchrotron capabilities and data-driven processing. To reach this goal, the review is designed as a gateway for further studies and is structured as follows: Sec. II is devoted to applications of x-ray CT, outlining many research questions that have been addressed in recent research on biological systems and resources, giving an overview of bio-based applications. Section III presents the frontiers of capabilities for beamlines of fourth-generation synchrotrons and how these capabilities are expanding the limits of what can be measured in spatial and time scales. Section IV discusses the deluge of data generated by cutting-edge x-ray CT beamlines and how computational techniques can handle this challenge. Section V looks further into the future and provides an outlook and conclusions for this review. Therefore, this review provides a unique perspective on the recent literature addressing bio-based systems characterized by x-ray CT, bringing together an overview of recent synchrotron capabilities and data-driven processing and updating the state of the art in these fields.

The potential scientific applications of x-ray CT are countless. This section presents a selection of applications (by no means exhaustive of the potential of the technique) related to biological systems and resources: plants, animals, soils, and synthetic bio-based materials.1,19,41 The presentation in this Sec. II is driven by the research questions being addressed with the noninvasive 3D capabilities of x-ray CT, referencing studies performed both with laboratory systems and at synchrotrons, using conventional 3D imaging as well as in situ, in vivo, and time-resolved experiments. This section provides an up-to-date overview of applications, aiming to motivate further in-depth studies, including the cutting-edge advances in synchrotron and computational capabilities discussed in Secs. III and IV of this review.

Plants have a hierarchical multiscale structure, typical of biological matter, organized from molecules to whole organisms (or even ecological systems).42,43 Plant cell walls encase the cells and are a key structural element of this hierarchy. Cell walls are of two types, primary and secondary, both made as nanocomposites based on cellulose microfibrils. The primary cell walls are synthesized during cell growth; they are thin, pliant, and highly hydrated. The secondary cell walls are deposited after cell growth ceases; they are thicker (typically 1–8 μm) and provide strength and rigidity to plant tissues.44 With vessel diameters and cell sizes of about 10–100 μm, the spatial resolution of μCT reveals the plant's cellular and tissue structures, contrasting the cell walls with the cell interiors containing air or water. Based on this common structural foundation, the use of μCT has advanced in several research fields such as plant biology, wood science, and biomass conversion into renewable fuels and chemicals, as detailed in the following paragraphs and illustrated in Fig. 2.

FIG. 2.

Plant matter imaged by μCT. (a) Arabidopsis plant, highlighting (left) a side view of a seedling; and (right) a hypocotyl section colored according to cell size. Reproduced with permission from Inzé et al., Trends Plant Sci. 15, 419 (2010). Copyright 2010 Elsevier.46 (b) Conifer wood showing (left) undecayed tracheids of earlywood and latewood with cross field and bordered pits, indicating radial (R), tangential (T), and longitudinal (L) directions; and (right) fungal-induced cavities in wood after 16 weeks of decay. Reproduced with permission from Militz et al., Micron 134, 102875 (2020). Copyright 2020 Elsevier.54 (c) Biomass particles from poplar, pine, and maize considered in tissue- and particle-scale computational models of biomass conversion, highlighting variations in particle morphology and intraparticle structure. Reprinted with permission from Ciesielski et al., ACS Sustainable Chem. Eng. 8, 3512 (2020). Copyright 2020 American Chemical Society.55 


Imaging techniques are workhorses of plant science, and x-ray CT is opening new avenues for research, as recently reviewed by Piovesan et al.45 Figure 2(a) shows examples of images from the Arabidopsis plant, where cells of the plant tissues can be resolved in 3D.46 With this level of resolution, μCT has been employed to investigate phenomena that are inherently tridimensional, such as mass transport. Xylem embolism, the blockage of water transport from roots to leaves that may result in plant death, has been intensively researched using μCT,47,48 including many studies conducted in vivo.49–52 Food grains and fruits also stand out as an important part of plant anatomy, for which μCT permits the analysis of texture, size, shape, and internal structure, important parameters for food quality.45,53

Wood is a natural material used by humanity since ancient times. Related natural materials (e.g., bamboo) and engineered forms of wood (e.g., plywood) broaden this class of materials.56 3D images from μCT have found several applications in wood science, including visualization of wood biodegradation caused by fungi [Fig. 2(b)] and identification of wooden cultural heritage.54,57 Moreover, the technique has been expanded toward in situ experiments to reveal the structural changes promoted by wood chemical pulping,58 and the transport of fluids into the complex wood pore space, with implications, for instance, for the application of wood adhesives.59–61 In situ studies have also been developed to investigate wood microscale deformations under mechanical stress.62,63

Plant biomass is also a vast resource that can partly replace fossil fuels with renewable alternatives. Highlights include residues from forestry (e.g., sawdust), agriculture (e.g., wheat straw), and agroindustry (e.g., sugarcane bagasse) usable as feedstocks for bioenergy and biorefining.64 Tomograms resolving the cellular and tissue structures have been used to reveal water retention within the cells of processed biomass65 and mineral particulate contaminants adhered to and impregnated into disrupted biomass tissues.66 Moreover, x-ray CT revealed structural changes promoted by hydrothermal treatments that enhance biomass digestibility in bioconversion processes.67 In other studies of biomass processing, μCT provided microscopic visualizations of biomass combustion,68 chemical delignification,69,70 and pyrolysis.71,72 In all these fields of application, from plant science to biomass conversion, 3D images from CT have also been used as inputs for realistic representations of biomass morphology in computational modeling [Fig. 2(c)].45,55

Soft (non-mineralized) and hard (mineralized) tissues constitute the main structural formations of animals. Hard tissues are characterized by a certain degree of stiffness due to the process of tissue mineralization (e.g., bones and teeth). On the other hand, soft tissues present a certain degree of elasticity, like ligaments, muscles, and tendons, and, in some cases, soft tissues are largely filled with aqueous solution (e.g., internal organs).73 In this context, x-ray CT provides structural information on animal anatomy, aiming at the investigation of functionality, development, lesions, disease progression, fractures, and regeneration, as illustrated in Fig. 3.

FIG. 3.

Soft and hard tissues of animals imaged by μCT. (a) I2E-stained heart and lung of mouse ex vivo.75 Reproduced with permission from Martins de Souza e Silva et al., Sci. Rep. 7, 17387 (2017). Copyright 2017 Authors, licensed under a Creative Commons Attribution (CC-BY 4.0) license. (b) Segmented hemisphere brain of mouse, exhibiting the cortex (green), midbrain (blue), and cerebellum (red). Reproduced with permission from Fonseca et al., Sci. Rep. 8, 12074 (2018). Copyright 2018 Authors, licensed under a Creative Commons Attribution (CC-BY 4.0) license.77 (c) Primate molar teeth displacement under load represented by the Euclidean distance ‖q − p‖2. Reproduced with permission from Bemmann et al., J. Struct. Biol. 213, 107658 (2021). Copyright 2021 Elsevier.82 (d) Haversian canals contained in old human cortical bone (the colorscale shows the diameter of the canals). Reproduced with permission from Zimmermann et al., Proc. Natl. Acad. Sci. U. S. A. 108, 14416 (2011). Copyright 2011 PNAS.85 


Considering the numerous examples of soft tissues characterized by x-ray CT, 3D images of animals assist in the quantitative analysis of mutant and experimental phenotypes and in the generation of 3D anatomical data.74 As an illustrative example, tomograms of stained heart and lung allow observation of the orientation of the fibrous muscular tissue and the blood vessels, which are essential for organ functioning, as illustrated in Fig. 3(a).75 μCT has also been employed to map the whole neuronal architecture of animal brains [Fig. 3(b)], permitting an understanding of its spatial organization and connectivity.76–78 In the area of food science, the characterization of meat as soft tissue has also been highlighted in relation to the distribution and amount of fat in salami, beef, and lamb, which directly affect their taste and texture.79 

In relation to hard tissues, μCT has been widely applied in the mapping of porous structures, lesions, fractures, and tissue regeneration. For example, tomograms of corals can reveal their structure of channels, which is crucial to understanding their directional growth.80 As illustrated in Fig. 3(c), tomograms of primate molar teeth show the whole tooth structure and quantify its displacement under load, to better understand the relationship between the morphology of the periodontal ligament space and tooth movement.81,82 Furthermore, rendered images of human cortical bone demonstrated the effect of bone aging on the enlargement of Haversian canals, as indicated by the colorscale in Fig. 3(d), allowing for a better understanding of the causes of bone thinning.83–85 Other examples consolidate the advantages of x-ray CT in the description of bone regeneration in rat tibia,86 trabecular bone mass degeneration in mice,83 metabolic control of collagen synthesis in bone junctions,87,88 and the complex lacuno-canalicular network (LCN) structure contained in bone.89 
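Diameter maps such as the Haversian-canal colorscale in Fig. 3(d) are commonly derived from a Euclidean distance transform of the segmented canal network. A minimal 2D sketch follows (the synthetic geometry, function name, and use of scipy.ndimage are our own illustration of the general approach, not the cited authors' pipeline):

```python
import numpy as np
from scipy import ndimage


def max_inscribed_radius(canal_mask):
    """Largest inscribed radius (in pixels) of a segmented canal network:
    the Euclidean distance transform maps each canal pixel to its
    distance from the nearest canal wall (nearest background pixel)."""
    return ndimage.distance_transform_edt(canal_mask).max()


# Synthetic mask: a straight canal occupying rows 10-14 of a solid matrix
mask = np.zeros((32, 32), dtype=bool)
mask[10:15, :] = True
r = max_inscribed_radius(mask)  # center row lies 3 pixels from the wall
```

Doubling the distance-transform value at each pixel yields the local diameter estimate that such colorscales typically encode.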

Soil consists of solids and voids. With this duality, the description of soil structure has followed both a solid perspective focused on the mineralogy, size, shape, and state of aggregation of the solid matter and a pore perspective focused on the size, shape, and connectivity of the pore space and its occupation by microorganisms, water, and air,90,91 as shown in Fig. 4. From the solid perspective, sizing is used to classify soil texture—sand (50 μm–2 mm), silt (2–50 μm), and clay (<2 μm)—and hierarchy of aggregation—microaggregates (20–250 μm) and macroaggregates (>250 μm). These size-based classifications indicate the key length scales in soil structure.90 From the pore perspective, x-ray μCT has opened new horizons to soil science by enabling the visualization and analysis of the pore space of undisturbed soil with spatial resolution on the μm scale.91–93 
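The size-based classifications quoted above translate directly into code; a small helper (the function names and the coarse-fraction fallback are our own illustration) might look like:

```python
def classify_texture(diameter_um):
    """Soil texture class by particle diameter, using the limits quoted
    in the text: clay (<2 um), silt (2-50 um), sand (50 um-2 mm)."""
    if diameter_um < 2:
        return "clay"
    if diameter_um < 50:
        return "silt"
    if diameter_um <= 2000:
        return "sand"
    return "coarse fragment"  # beyond the 2 mm sand limit


def classify_aggregate(diameter_um):
    """Aggregation hierarchy quoted in the text: microaggregates
    (20-250 um) and macroaggregates (>250 um)."""
    if diameter_um > 250:
        return "macroaggregate"
    if diameter_um >= 20:
        return "microaggregate"
    return "below microaggregate range"
```

Note that the silt/sand boundary (50 μm) sits near the voxel size of typical μCT scans of soil cores, which is why sand grains and macroaggregates are resolved individually while clay appears only as an averaged gray level.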

FIG. 4.

Soil structures imaged by μCT. (a) 3D cross sections of clayey and sandy soils in the bulk and rhizosphere regions showing segmented solid, water, and air phases. Reproduced with permission from Daly et al., J. Exp. Bot. 66, 2305 (2015). Copyright 2015 Authors, licensed under a Creative Commons Attribution (CC-BY) License.94 (b) Organic matter (cyan pointed by green arrow) contained in soil samples stained with I2. Reproduced with permission from Lammel et al., Front. Environ. Sci. 7, 153 (2019). Copyright 2019 Authors, licensed under a Creative Commons Attribution (CC-BY) License.95 (c) Segmented image of rhizosphere region containing roots, root hairs, soil, and water in pores. Reproduced with permission from Daly et al., J. Exp. Bot. 67, 1059 (2016). Copyright 2016 Authors, licensed under a Creative Commons Attribution (CC-BY) License.96 


The soil pore space governs a wide range of processes, such as the transport of water, nutrients, and contaminants; stabilization of organic matter; and microbial activity. Figure 4(a) exhibits examples of pore morphologies and pore filling with water and air.94 Pore-scale imaging techniques, most notably x-ray μCT, became essential to investigate such pore-scale phenomena.94,97,98 As an example of an urgent field of study, the reduction of agricultural N2O emissions and carbon sequestration in soils are recognized by the Intergovernmental Panel on Climate Change (IPCC) as important strategies for the mitigation of climate change.99 Research in this field employed μCT to reveal the role of soil pores in the microscale mechanisms of N2O emissions,100 and the stability of soil carbon.91,101
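The most basic quantity extracted from a segmented tomogram of this kind is the pore volume fraction. A minimal sketch is shown below (the threshold value and synthetic volume are our own illustration; real tomograms require careful, sample-specific segmentation):

```python
import numpy as np


def porosity(tomogram, solid_threshold):
    """Pore volume fraction of a grayscale tomogram: voxels whose
    attenuation falls below `solid_threshold` count as pore space
    (air- or water-filled); the remainder count as solid."""
    return (tomogram < solid_threshold).mean()


# Synthetic volume: a solid block (attenuation 1.0) with one cubic pore
vol = np.ones((20, 20, 20))
vol[5:15, 5:15, 5:15] = 0.0  # 10 x 10 x 10 pore
phi = porosity(vol, solid_threshold=0.5)  # 1000 / 8000 = 0.125
```

The same thresholding step, with two cutoffs instead of one, separates the water- and air-filled fractions of the pore space shown in Fig. 4(a).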

At the interface of plant and soil sciences, the rhizosphere has also attracted substantial attention, and the use of x-ray μCT in this research field was discussed in recent reviews.45,102 These reviews highlight that root–soil interactions are critical to plant growth and crop yield, regulating the acquisition of water and nutrients by the plants. X-ray μCT is a nondestructive method that, coupled to root segmentation algorithms, can delineate the configuration of the root system in space and time. Contrast agents for imaging the organic matter in the soil [Fig. 4(b)]95,103 and image segmentation algorithms tailored for roots in soils [Fig. 4(c)] are key ongoing developments in this field.96,104 An area that may be transformed by the new imaging capabilities is the understanding of the micromechanics of root development in soil.105 

Synthetic bio-based materials are understood here as materials synthesized from building blocks obtained from biological resources. In contrast to a natural material like wood, which preserves the native biological structure, the morphology of synthetic materials is given by man-made material processing. In this field of research, μCT allows correlation of the processing parameters with the 3D architecture of the obtained materials, revealing porosity, scaffold structure, and interface features that influence properties and functionalities.106,107 Aiming at a better appreciation of the diversity of synthetic bio-based materials that have been investigated by μCT, tomographic images of an organic-based aerogel, a biocarbon-based foam, polymer composites reinforced with biofillers, and green electronic components are shown in Fig. 5.

FIG. 5.

Microstructures of synthetic bio-based materials imaged by μCT. (a) 3D renewable aerogel based on cellulose nanofibrils (CNF) and natural rubber latex (NR), highlighting 2D slices of CNF/NR aerogel. Top slice: CNF/NR 80/20 (w/w) contained large pores with wall interconnectivity between them; bottom slice: CNF/NR 30/70 (w/w) with small pores without wall interconnectivity. Adapted with permission from Lorevice et al., ACS Appl. Nano Mater. 3, 10954 (2020). Copyright 2020 American Chemical Society.112 (b) X-ray attenuation demonstrates the distribution of silica-rich regions (shown by blue color) into highly porous carbon-based foam. Reproduced with permission from Canencia et al., J. Mater. Sci. 52, 11269 (2017). Copyright 2017 Springer Nature.119 (c) Modified cellulose fibers (yellow color) and agglomerates (white color), their distribution and dispersion into low-density polyethylene (LDPE) matrix (red color). Reproduced with permission from Ferreira et al., Eur. Polym. J. 117, 105 (2019). Copyright 2019 Elsevier.120 (d) Pyrolyzed paper (blue color) surface soaked with polydimethylsiloxane (PDMS) (green color). Reproduced with permission from Damasceno et al., Adv. Electron. Mater. 6, 1900826 (2019). Copyright 2019 John Wiley and Sons.123 


The porous architecture of an organic-based aerogel depends on its processing parameters (e.g., ice growth during freezing) and constituents (e.g., additives). These variables set the physical–chemical conditions for homogeneity,108 higher porosity,109 mechanical robustness,110 and even thermal resistance of organic-based aerogels.111 As an illustrative example, cellulose nanofibril (CNF)-based aerogels with natural rubber latex (NR) were produced with varying proportions of these two renewable ingredients. The μCT images of CNF/NR aerogels correlated the wall connectivity and porous microstructure with the addition of NR, as shown in the 2D slices for different CNF/NR ratios [Fig. 5(a)].112 In this case, x-ray CT helps tune the aerogel microstructure according to its application, such as metal ion capture,113,114 selective oil and organic solvent absorption,112,115,116 and pesticide incorporation for water remediation.117 The porous structure of biocarbon-based foams has also been explored for adsorbents in effluent treatment, according to the porosity and specific surface area that can be analyzed from tomograms [Fig. 5(b)].118,119
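Wall interconnectivity between pores, of the kind contrasted in the two CNF/NR slices, can be screened by counting connected pore regions in the segmented image; a minimal 2D sketch (the synthetic geometry and the use of scipy.ndimage.label are our own illustration):

```python
import numpy as np
from scipy import ndimage

# Two pores that do not touch: labeling finds two separate components,
# i.e., no interconnection between them
pores = np.zeros((16, 16), dtype=bool)
pores[2:6, 2:6] = True     # first pore
pores[9:13, 9:13] = True   # second, disconnected pore
labels, n_components = ndimage.label(pores)
```

In 3D, the same call on a volumetric mask distinguishes an open, fully interconnected pore network (one large component) from closed porosity (many small components).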

With regard to polymeric composites, biofiller dispersion, distribution, compatibility, and agglomeration can be mapped by μCT in organic matrices. Tomograms of low-density polyethylene (LDPE) composites show the dispersion, distribution, and compatibility of modified cellulose fibers (yellow) in the polymer matrix (red) [Fig. 5(c)]. These images enable optimization of the filler incorporation process and distribution in the polymeric matrix, correlating with mechanical performance,120 as also reported for polymer blends.121 In addition, in situ x-ray tomography enables the observation of dynamic physical phenomena in polymers, such as melting, bubbling, and ashing, promoting a better understanding of polymer processing.122 

In relation to bioscaffolds, numerous works address their porous structure with application to bone regeneration.124–128 Process parameters such as temperature, combined with the ratio of additives, affect the final morphology of these materials, impacting their suitability for cell attachment.124,129,130 In situ tomography during the sintering of bioscaffolds allows better control of their final architecture, such as the degree of porosity and pore size.131 Aiming at 3D bioscaffolds that are mechanically strong (similar to bone) and have interconnectivity appropriate for tissue development, several studies explored 3D bioglass (BG) templating obtained by the sol–gel process,124 3D printing combined with renewable matrices,132,133 and organic phases (such as polymers).125 In these cases, x-ray CT has been applied to map the BG structure, observing the effect of simulated body fluid (SBF) on the internal porous structure, the effect of morphology/composition on bone formation,124–127 and even antibacterial activity.134 

μCT also stands out as a relevant method for the structural characterization of green electronic devices based on biocarbon, allowing electrical and morphological correlation.123,135 Lignin and cellulose nanofiber-based carbon aerogels demonstrated a promising hierarchical structure for supercapacitor electrodes, exhibiting exceptional electrochemical performance.135 Additionally, tomographic images have been used to adjust the microfabrication process of bio-based electronic interfaces, enabling mapping of, for example, the permeation of pyrolyzed paper with polydimethylsiloxane (PDMS) for flexible electrodes [Fig. 5(d)].123 

Thereby, μCT has been consolidated as a powerful nondestructive tool to analyze bio-based materials, as demonstrated in this section. The possibility of correlating the morphology of these systems with processability, chemical treatments, biofunctionality, and electronic performance enables researchers to comprehend their final mechanical, physical, chemical, biological, and other intrinsic characteristics. On the other hand, a better understanding of these systems increasingly requires high spatial and temporal resolutions, leading to nano- and four-dimensional characterization, as extensively discussed in Sec. III.

The year 2022 marks the 75th anniversary of the discovery of synchrotron radiation. From 1947 until today, more than 60 synchrotron laboratories and free electron laser facilities have been built around the world.136 Since 1968, when the first spectrum was measured in the Tantalus I storage ring, until today, with the recent emergence of fourth-generation facilities, more than 170 000 articles, in many different areas of knowledge, have been published using this cutting-edge tool.137 The performance of these laboratories is commonly characterized by one property of the beam, the so-called brilliance, which expresses how well collimated the flux is and how small the source is. Brilliance is calculated as the spectral flux divided by the source cross section and the solid angle of emission.138 Green-field fourth-generation machines, such as Sirius and MAX IV, increased brilliance by two orders of magnitude compared to third-generation sources.
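The definition above can be made concrete with a short numeric sketch (the function name and the Gaussian-source approximation are ours, not from the review): brilliance grows as the source size and divergence shrink at fixed flux, which is exactly the gain delivered by fourth-generation storage rings.

```python
import math

def brilliance(flux, sigma_x_mm, sigma_y_mm, sigma_xp_mrad, sigma_yp_mrad):
    """Approximate brilliance in photons/s/mm^2/mrad^2/0.1% BW for a Gaussian
    source: the spectral flux (assumed given per 0.1% bandwidth) divided by
    the effective source cross section and the effective solid angle of
    emission (2*pi*sigma terms per plane)."""
    area = 2 * math.pi * sigma_x_mm * sigma_y_mm               # source cross section
    solid_angle = 2 * math.pi * sigma_xp_mrad * sigma_yp_mrad  # emission solid angle
    return flux / (area * solid_angle)
```

With this normalization, shrinking source size and divergence each by 10× at constant flux raises the brilliance by four orders of magnitude, illustrating why the new machines dominate this figure of merit.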

Another parameter worth mentioning is beam coherence, which reflects the fact that a real beam is not a perfect plane wave. As this is a property of the beam detailed elsewhere in the literature,138,139 we will only mention that, with the advent of fourth-generation synchrotrons, the coherence and coherent flux of the beam have increased considerably compared to x-ray beams produced by previous-generation machines. This enables plane-wave coherent diffraction imaging (plane-wave CDI) experiments, which were already possible at synchrotrons of previous generations but little explored by the scientific community due to the experimental limitations associated with reduced sample size (∼few micrometers) and long measurement times. This 3D imaging technique has great potential to increase the spatial resolution of x-ray images, with a theoretical limit of 10 nm, but still presents validation challenges, such as the difficulty of phase determination and constraints on object dimensions.140 

There is a wide range of commercially available benchtop equipment specifying pixel sizes from the micro- to the nanoscale,141 with very fast scans (seconds for a 3D image) when operating at relatively low resolution.14,19,142 In this review article, we focus on the "point-like" x-ray sources of synchrotron beamlines. We discuss full-field x-ray imaging techniques, both in the contact (absorption) and Fresnel (phase-contrast) regimes, as most of the well-established x-ray CT beamlines in the world operate under these conditions—for more information regarding x-ray imaging regimes, see McMorrow and Als-Nielsen143 and Salditt et al.144 We also highlight x-ray CT at fourth-generation machines, which have recently come into operation, to discuss some of the new possibilities that this type of facility brings to scientific fields.93,145–147 Bio-based materials are usually low-atomic-weight samples for high-energy x-rays, so most x-ray microscopy techniques rely on the contact regime (e.g., x-ray attenuation) as the measurement method. Thus, it is often imperative to increase sample contrast by impregnating soft materials (e.g., animal tissues) with solutions of heavy elements, bringing their radiodensities close to those of heavier materials.148,149 Iodine-based contrast enhancement is one of the most used methods, but there are plenty of solutions that can better resolve particular tissues or samples according to their composition and/or fixation procedures.150 On the other hand, synchrotrons offer the possibility of exploring phase-contrast imaging techniques, which can not only reveal fine structures with subtle changes in refractive index inside the sample but also significantly reduce the deposited radiation dose. 
This technique relies on refraction, i.e., changes in the beam trajectory when traveling through objects composed of different materials, rather than on attenuation or absorption, and it can reveal features previously considered "invisible" to x-rays in the contact regime.144,150 At the same time, as phase retrieval does not require energy absorption, one can use higher-energy x-rays, thus reducing the radiation dose.144 Although conventional methods in the biomedical area, such as the microtomy used in histological analysis, remain the gold standard for fine anatomical studies, being able to use molecular markers to identify particular proteins or molecules, their destructive nature prevents in-depth reconstruction of the whole 3D sample.151 Thus, μCT, as a nondestructive technique, complements this information and allows fine internal characterization of these samples.

Two topics will be explored in more detail in this section, with illustrative examples of applications, regarding the powerful advances enabled by fourth-generation beamline capabilities. The first is x-ray computed nanotomography (nCT), which greatly benefits from the reduction of the x-ray source size in fourth-generation synchrotrons; this is essential for full-field CT measurements carried out in the contact and Fresnel regimes, as the image resolution depends on the size of the x-ray source, the beam magnification, and the pixel size of the detector.144 The second topic is time-resolved CT (4DCT), which greatly benefits from the high flux of the beam generated in synchrotron light sources.

With the rapid development of nanotechnology, a non-intrusive morphological technique has been required for a better comprehension of bio-based systems at the nanoscale. High spatial resolution is obtained through objective lenses (e.g., Fresnel zone plates—FZP) or elliptically shaped Kirkpatrick–Baez (KB) mirrors for x-ray phase-contrast zoom tomography.14,152 Moreover, the temporal resolution at the nanoscale has been increased by implementing Zernike phase rings, permitting faster experiments with low noise in the nCT approach.153 Conversely, innovative beamline designs at extremely brilliant fourth-generation synchrotrons use focusing optics (KB systems) that focus the x-ray beam in a nanometric region (i.e., creating a nano-focus) and thus a conical beam, which magnifies the image geometrically, dispensing with the need for magnification via indirect detection systems (i.e., visible-light microscopes). Consequently, the use of direct x-ray detectors increases the detection efficiency considerably, by ∼100 to 1000 times, decreasing the exposure time of each projection and increasing the temporal resolution of the measurement.154 In this context, nCT has been developed as a powerful method to describe distinct 3D nanostructures, achieving spatial resolutions below 100 nm, as described for the bio-based systems in Fig. 6.
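The geometric magnification provided by a point-like nano-focus can be sketched with the standard cone-beam relation (function name and example distances are ours, for illustration only): moving the sample closer to the source increases the magnification and shrinks the effective pixel size at the sample plane.

```python
def effective_pixel_um(src_sample_mm, sample_det_mm, det_pixel_um):
    """Cone-beam geometric magnification from a point-like source:
    M = (z1 + z2) / z1, where z1 is the source-sample distance and z2 the
    sample-detector distance. The effective pixel size at the sample plane
    is the physical detector pixel divided by M."""
    magnification = (src_sample_mm + sample_det_mm) / src_sample_mm
    return det_pixel_um / magnification
```

For instance, a sample 10 mm from the focus with a detector 990 mm behind it gives M = 100, so a 50 μm direct-detection pixel samples the object at an effective 0.5 μm.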

FIG. 6.

Bio-based systems imaged by nCT. (a) Cross section image of an articulated coralline red alga (Jania sp.) acquired by μCT showing its porous structure (left), and a 3D image obtained by nCT, exhibiting its helical microstructure (right). The red dashed line and red arrow highlight a spiraling pore edge. Reproduced with permission from Pokroy et al., Adv. Sci. 7, 2000108 (2020). Copyright 2020 Authors, licensed under a Creative Commons Attribution (CC-BY 4.0) License.35 (b) nCT of lignocellulosic biomass (poplar wood) before and after hydrothermal treatment, with cell wall thicknesses indicated by the color scale. Comparison between the images evidences the wall thinning promoted by the hydrothermal treatment. Reproduced with permission from Lancha et al., Sci. Rep. 11, 8444 (2021). Copyright 2021 Authors, licensed under a Creative Commons Attribution (CC-BY 4.0) license.67 


For example, this high spatial resolution is displayed in the 3D data acquisition of nanoporous bio-based systems. The complex hierarchical porous structure of an alga was observed from the micro- to the nanoscale using nCT [Fig. 6(a)].35 Moreover, nCT revealed structural changes at multiple scales in lignocellulosic biomass promoted by hydrothermal treatments [Fig. 6(b)].67 In neuroscience, nCT has enabled imaging of neural tissue with sufficient resolution (voxel size of 30 nm) for individual neurons to be observed in Drosophila melanogaster tissue.155 The nanoscale resolution achieved in this study allowed the authors to describe single motor and sensory neurons in the Drosophila leg.156 In all these cases, the miniaturization of samples was essential to ensure good spatial resolution, combined in some cases with staining of the samples with heavy metals.

Aiming at quantitative three-dimensional structural investigation resolved over time, 4DCT permits the correlation of morphological changes over short time steps (≤1 s) with simultaneous exploration of mechanical, thermal, physical, and chemical events in several bio-based systems.19,157–160 In other words, numerous events can be observed in situ, in vivo, and in operando by 4DCT, as summarized in Fig. 7.

FIG. 7.

Several bio-based systems acquired by 4DCT. (a) The evolution of internal microstructure of a CNF aerogel as a function of compression strain (ε). Selected cells are numbered from 1 to 5 to help the visualization of the evolving foam geometry. Reproduced with permission from Martoïa et al., Mater. Des. 104, 376 (2016). Copyright 2016 Elsevier.161 (b) In situ permeation test of water (blue), gas (yellow), trichloroethylene contaminant (red), and metallic nanoparticles for water remediation (green) in a glass bead pack. Reproduced with permission from Pak et al., Proc. Natl. Acad. Sci. U. S. A. 117, 13366 (2020). Copyright 2020 Authors, licensed under a Creative Commons Attribution (CC-BY 4.0) License.168 (c) Beer foam bubbling over time with pores color-coded according to equivalent diameter (blue colors for smaller pores and red colors for larger ones). Reproduced with permission from Dewanckele et al., J. Microsc. 277, 197–209 (2020). Copyright 2020 John Wiley and Sons.36 


The evolution of the internal microstructure of CNF aerogels under mechanical compression is one representative example of in situ x-ray CT imaging [Fig. 7(a)]. Images of the compressed aerogels revealed cell wall buckling/bending mechanisms, followed by total collapse of the aerogel walls.161 Other complex multiphase systems have been characterized in situ by x-ray CT, such as crystallization in basaltic magmas,162 grain compaction,163,164 wood micro-scale deformations under mechanical stress,62,63 combustion of wood,165 phosphorous release in soil,166 fluid flow in porous media applied to enhanced oil recovery167 or in situ groundwater remediation [Fig. 7(b)],168–172 the collapse of a water drop within a sucrose suspension,173 and the bubbling of beer foam over time [Fig. 7(c)].36 Almost all of these in situ analyses required the development of setups that permit carrying out the tests during the x-ray scans.161 

In vivo analysis is another important procedure for understanding the physiology of various biological systems. Gradual movements of the sample can be captured over seconds by in vivo analysis, generating a tomographic movie.174 The same analysis can be performed for mitral valve deformation175,176 and for bone structure strain during mechanical compression with short acquisition time steps.177 It is important to note that in vivo analysis requires an acquisition time shorter than the sample movement to avoid motion artifacts in the tomograms.174,178 Consequently, an equivalent dose is delivered in a short time, increasing the so-called dose rate, which is critical for biological systems.179,180 Its effects are not completely understood, but cells and organs may be severely affected, prohibiting very fast 3D measurements in live systems. The same concept applies when using a cone beam, since increasing the resolution decreases the illuminated area and increases the dose. In summary, every detail of the measurement of biological systems (e.g., beam geometry, beam profile, peak energy, energy resolution, illuminated area, and scan time) must be considered and tested (or simulated) before the experiment to verify its feasibility.

In the area of organic electronics, the in operando mode can be performed by combining x-ray imaging with electrical characterization techniques, such as electrochemical impedance spectroscopy (EIS).181,182 Studies based on the in operando mode of organic electronics relate the morphological changes of polymeric electrolytes to the cathode catalysts of fuel cells.182–184 

Even though numerous examples characterized by nCT and 4DCT have been presented in this section, these techniques are still little explored compared to conventional (benchtop) x-ray CT. This scenario correlates with the challenge of real-time processing of the large amounts of data generated by x-ray CT coupled to synchrotron light sources, which is discussed extensively in Sec. IV.

With the large amount of data generated at synchrotron beamlines, semi-manual processing is time-consuming and unfeasible.185 Furthermore, a single tomographic image can contain hundreds or thousands of classes for segmentation, making semi-manual segmentation even more impractical.186–188 Data storage and handling also pose challenges, but they fall outside the scope of this discussion, which focuses on techniques and tools for acquiring and processing x-ray CT data. Therefore, to address the challenge of data processing at synchrotron beamlines, machine learning (ML) has emerged as a powerful solution and will be covered in this section. ML is a subfield of artificial intelligence that provides algorithms and techniques designed to teach machines to perform tasks by identifying correlations and patterns in a given set of data.189 The learned relations are consolidated into a model that can be used for predictions or to understand patterns in unknown data. In the context of tomographic data processing, ML has been widely used for a variety of tasks, such as artifact removal190–192 and assisting in digital image segmentation and recognition.188,193–195

We can categorize machine learning techniques into two main types: supervised and unsupervised approaches.189 Supervised learning techniques estimate a model from a set of labeled data, i.e., data containing both inputs and outputs. The most common supervised tasks are classification and regression, for learning categorical and numerical outputs, respectively. Examples of supervised learning algorithms include ridge regression, random forests,196 K-nearest neighbors,197 support vector machines,198 and the multilayer perceptron.28 In turn, unsupervised learning searches for structures and hidden patterns using only input data, without outputs (labels). The most common unsupervised tasks are dimensionality reduction and clustering. Examples of well-known unsupervised learning algorithms include principal component analysis (PCA), K-means, hierarchical clustering, and autoencoders.199–201 Nowadays, deep learning (DL) techniques, defined as neural networks with multiple hidden layers, have emerged as state-of-the-art ML approaches due to their high capability of generalization, flexibility, and accuracy. However, unlike many classic ML techniques, DL-based approaches require large amounts of data to estimate the thousands or millions of parameters that compose the models, making this kind of technique a natural choice for solving image segmentation problems at synchrotron beamlines.31,202
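As a concrete example of the unsupervised side, gray-level clustering with a minimal k-means (written here from scratch in numpy; the function name is ours, for illustration) can split a tomographic slice into intensity classes, e.g., pores vs matrix, without any labels:

```python
import numpy as np

def kmeans_segment(image, k=3, n_iter=20, seed=0):
    """Minimal k-means on gray levels: an unsupervised way to partition a
    tomographic slice into k intensity classes. Each pixel is assigned to
    the nearest cluster center; centers are then updated to their cluster
    means, and the two steps alternate until (approximate) convergence."""
    rng = np.random.default_rng(seed)
    x = image.ravel().astype(float)
    centers = rng.choice(x, size=k, replace=False)   # initialize from the data
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels.reshape(image.shape), np.sort(centers)
```

On a synthetic two-phase image the recovered centers match the two gray levels; on real tomograms, intensity-only clustering is a baseline that DL methods improve on by also exploiting spatial context.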

Considering the large data volumes generated at synchrotron beamlines, deep learning approaches emerge as a promising solution that benefits from these large-scale datasets and, more importantly, can deal with the diverse computer vision problems related to tomographic image processing and understanding. Among the several deep learning techniques, convolutional neural networks (CNNs) are established in the processing of complex computational data for visual computing tasks.203,204

Convolutional neural networks are a powerful type of neural network comprising a hierarchical structure of stacked layers that generate local spatial features by leveraging the fact that nearby pixels in an image are typically related to each other.31 This is referred to in the literature as local spatial information. Another important characteristic of CNNs is their ability to produce low-level features (e.g., edges and corners) in the earlier layers and high-level features (e.g., more complex structures) in the later layers.
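The low-level features mentioned above can be made concrete with a small numpy sketch (function and kernel names are ours): a single hand-written Sobel-like kernel, applied through the convolution operation at the heart of every CNN layer, responds only where the image has a vertical edge, which is exactly the kind of filter early layers typically learn on their own.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 2D cross-correlation ('valid' mode): the core operation of a
    convolutional layer, sliding the kernel over every local window."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Sobel-like kernel: a hand-crafted vertical-edge detector of the kind
# that early CNN layers learn automatically from data.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
```

Applied to a step image, the response is zero in flat regions and large at the step, illustrating how convolution turns local spatial information into features.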

Another key aspect of this technique is the possibility of devising end-to-end computer vision applications, which include finding suitable representations for the input data (representation learning) and designing classifiers to operate on top of those representations (classification). In contrast to the classic machine learning pipeline, which designs these two phases separately, deep learning techniques allow representation learning and classification to be designed together in a single framework, whose trainable parameters are estimated considering the synergy of the two phases. This favors the construction of sophisticated domain-specific learning methods without the need for hand-crafted designs, which are usually engineering intensive. A CNN must be trained before it can automate the processing of new input data. The difference between the model's predictions and the reference data, defined by a loss function, is minimized via backpropagation with stochastic gradient descent optimizers, which seek the minimum of this loss function.204–206 Furthermore, the learning rate needs to be adjusted, with optimization schemes that adapt it as the network computes errors.207 
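The loss-minimization loop described above can be sketched in a few lines of numpy (illustrative only; real CNN training uses automatic differentiation over millions of parameters): fitting a line under a mean-squared-error loss with per-sample gradient steps, the "stochastic" part of stochastic gradient descent.

```python
import numpy as np

def sgd_fit(x, y, lr=0.1, epochs=300, seed=0):
    """Minimal illustration of loss minimization by stochastic gradient
    descent: fit y ~ w*x + b under a mean-squared-error loss, updating the
    parameters one (shuffled) sample at a time."""
    rng = np.random.default_rng(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(x)):   # visit samples in random order
            err = (w * x[i] + b) - y[i]     # residual; gradient of 0.5*err**2
            w -= lr * err * x[i]            # step against the loss gradient
            b -= lr * err
    return w, b
```

On noiseless data generated from y = 2x + 1, the loop recovers the parameters; in a CNN, the same update rule is applied to every weight, with the gradients supplied by backpropagation.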

The success of CNNs came from technological advances on one hand, represented by the efficient use of fast graphics processing units (GPUs),208 and theoretical and algorithmic advances on the other,209 represented by the use of rectified linear units (ReLUs)210 and the dropout regularization technique.211 CNN architectures may have 10–20 layers of ReLUs, hundreds of millions of weights, and billions of connections between units, which provide efficiency in computational data processing.31,212 Therefore, DL has provided great advances in learning computer vision tasks on high-dimensional data. In other words, DL can build its own computational model from a training dataset and can even take advantage of classic algorithms such as filtered back projection (FBP).213 Considering tomographic images, DL allows the processing of unknown images, as opposed to conventional algorithms based on a known routine. In terms of applications, DL can be applied to improve image quality, recognize and classify objects, segment phases of images, and generate labels automatically,214 as shown in the comparative scheme in Fig. 8.

FIG. 8.

Deep learning compared with classical computation methods for a noisy image of a cell (left side). Trained network applied for detecting and classifying objects, segmenting phases, and generating artificial labeling (right side). Reproduced with permission from von Chamier et al., Biochem. Soc. Trans. 47, 1029 (2019). Copyright 2019 Authors, licensed under a Creative Commons Attribution (CC-BY 4.0) License.214 


DL has been used to deal with image-quality problems that arise under certain x-ray imaging conditions, such as low-dose signals and sparse-angle acquisitions.215–217 The main challenge of low-dose x-ray imaging is the decreasing signal-to-noise ratio.217 In this case, the image can suffer from several artifacts, including streaking.218–232 Furthermore, ring artifact correction233,234 and Gaussian noise reduction are usually used to improve the quality of tomographic images.
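As a point of reference for the DL approaches discussed here, a classical non-learning baseline for ring-artifact correction can be sketched in a few lines (the function name and smoothing-window width are ours, for illustration): a miscalibrated detector element adds a constant bias to one sinogram column, which reconstructs as a ring, and subtracting each column's deviation from a smoothed column profile removes most of that bias.

```python
import numpy as np

def remove_ring_offsets(sinogram):
    """Baseline ring-artifact suppression on a sinogram (angles x detector
    pixels): estimate per-detector-pixel offsets as the deviation of the
    column means from a smoothed column profile, and subtract them."""
    col_mean = sinogram.mean(axis=0)                       # mean over angles
    padded = np.pad(col_mean, 2, mode="edge")              # avoid edge droop
    smooth = np.convolve(padded, np.ones(5) / 5, mode="valid")
    return sinogram - (col_mean - smooth)                  # broadcast over rows
```

DL-based correctors target the cases this baseline misses, such as partial or intensity-varying rings.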

In this direction, U-Net-like architectures, based on the encoder-decoder network proposed by Ronneberger et al.,235 take advantage of both prior and phase-solved reconstructions to improve the quality of tomographic images. In addition to the U-Net architecture, other kinds of DL have been proposed toward improving the quality of x-ray-based data, such as generative adversarial networks (GANs).236,237 GANs are composed of two main components: the discriminator and generator networks. The first is responsible for learning noise properties from the original noisy data to guide the generator network, which, in turn, is responsible for denoising an input image and generating synthetic noisy images. The synthetic images are then used to challenge the discriminator, improving its ability to discriminate noisy images from clean ones. The main differences among GAN variants lie in the way the discriminator and generator are estimated. For further reading on this topic, we recommend additional references.238–240 Finally, for synchrotron nCT, DL can mitigate random jitter effects, enhancing spatial resolution and reducing acquisition time.241–243 

The flexibility of x-ray imaging techniques in measuring physical properties of different material types imposes several challenges in designing segmentation methods effective for a wide range of problems or target applications (e.g., porous media in soils and discrimination of biological tissues). In this regard, DL methods have been successfully used to devise methods able to deal with complex problems. Fully convolutional networks (FCNs) and U-Net networks are prominent architectures for designing end-to-end solutions for several tomography-related problems.235,244 In Xu's work, an FCN was used to devise an image decomposition algorithm capable of segmenting bones from tissues in a dual-energy CT application.245 Ben-Cohen et al. proposed stacked FCN networks as an end-to-end solution for liver segmentation and lesion detection.246 Similarly, a cascade of FCN networks was employed to segment human abdominal organs, including arteries, veins, the stomach, and the pancreas, among others. The U-Net approach has also been used in a wide range of x-ray-based applications, such as soil analysis and x-ray-based medical image segmentation.247,248

The great ability of both FCN-like and U-Net-like architectures to produce accurate segmentation maps lies in the introduction of skip-connection mechanisms. In both approaches, skip connections help estimate accurate segmentation maps, especially at boundaries, by recovering fine-grained details, even in deep architectures. This is because both kinds of architectures are composed of two sub-networks, an encoder and a decoder. Later, He's work suggested that skip connections (also known as residual connections) may also prevent the vanishing gradient problem in deeper architectures.249 

Usually, the encoder sub-network consists of sequences of layers composed of two 3 × 3 convolutions, each followed by a ReLU activation function, batch normalization, and a max pooling operation; other backbone networks can also be used as the encoder, and in Sec. IV C we discuss backbone alternatives toward achieving real-time processing. At each level, the encoder reduces the spatial dimensions and increases the number of channels or feature maps. As a result, the encoder sub-network can learn meaningful features that help predict the class of pixels but loses the ability to perform fine-grained segmentation.250 In turn, the decoder sub-network increases the size of the feature maps and decreases the number of channels to recover spatial information and produce fine-grained segmentation. To achieve this goal, the decoder comprises an up-sampling operation (e.g., bi-linear interpolation) of the feature maps, followed by a 2 × 2 transpose convolution, and a concatenation operation to fuse the feature maps from the corresponding encoder level via skip connections. The great ability of this kind of architecture to produce highly accurate segmentation maps makes U-Net a reference method for image segmentation, especially in x-ray-based medical image analysis, and has inspired the design of several other architectures, for instance, the V-Net architecture,251 designed to handle volumetric data. For more details on U-Net and its variants, we recommend Siddique's survey252 and Hesamian's work.253 
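The encoder/decoder symmetry described above can be made explicit with a shape-bookkeeping sketch (the helper is ours; the base-64-channel, depth-4 configuration follows the original U-Net): each encoder level halves the spatial size and doubles the channels, and the decoder mirrors the path back to the input resolution.

```python
def unet_shapes(h, w, base_channels=64, depth=4):
    """Track U-Net feature-map shapes as (channels, height, width) tuples:
    each encoder level applies its convolutions, then halves the spatial
    size with max pooling while doubling the channels; the decoder mirrors
    the path with up-sampling plus skip-connection concatenation, ending
    back at the input resolution."""
    enc, c = [], base_channels
    for _ in range(depth):
        enc.append((c, h, w))            # shape after this level's convolutions
        h, w, c = h // 2, w // 2, c * 2
    dec = [(c, h, w)]                    # bottleneck
    for c_skip, hs, ws in reversed(enc):
        dec.append((c_skip, hs, ws))     # after up-conv, concat, and convs
    return enc, dec
```

For a 256 × 256 input this yields a 1024-channel 16 × 16 bottleneck, which is why U-Net captures semantic context while the skip connections restore boundary detail.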

Each analyzed sample requires different x-ray CT image acquisition parameters, such as radiation dose, acquisition time, detector resolution, and x-ray source. Additionally, different CT scans of the same sample will produce different dataset clusters, i.e., different texture, contrast, intensity, morphology, artifacts, and even noise characteristics. These parameters lead to a unique image domain. A potential downside of DL as a data-driven method is the inaccuracy of outputs obtained on different image domains, usually proportional to the distance between them. This problem is known as distributional shift.254–258 

To address distributional shift, various techniques have been proposed, such as domain adaptation and data augmentation. Data augmentation increases the variety and size of the dataset by scaling, rotating, changing the contrast of, and adding noise to the original dataset.259–263 In domain adaptation, the model is trained to work in a new image domain, usually with adversarial neural networks such as GANs, which can also be used for data augmentation through image synthesis.264–267 The advantage of these methods is that they can operate unsupervised. This is particularly valuable because most DL-based image segmentation of synchrotron x-ray CT data relies on producing a ground truth specific to its dataset and image domain in order to train a CNN such as U-Net, which in turn requires an expert in the particular field of the sample. Works on medical images are leading recent advances in DL-based methods for x-ray CT data compared to reports from synchrotron facilities. The design of efficient DL architectures is incipient in recent synchrotron x-ray-based developments, which mainly use threshold-based segmentation techniques. While such techniques are less computationally expensive than DL-based methods, they lack accuracy in solving complex segmentation problems. In turn, medical CT benefits from recent developments in deep learning to devise advanced methods for x-ray image segmentation. However, in both research communities, recent works focus on achieving high accuracy to the detriment of efficiency.
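The augmentations listed above can be sketched with numpy (function name, probabilities, and parameter ranges are ours, chosen for illustration): each training image is randomly flipped, rotated, contrast-jittered, and corrupted with noise, so the model sees variations it may meet in a new image domain.

```python
import numpy as np

def augment(image, rng):
    """Random augmentations commonly used against distributional shift:
    flips, 90-degree rotations (assumes square tiles so the shape is kept),
    contrast scaling, and additive Gaussian noise."""
    if rng.random() < 0.5:
        image = np.fliplr(image)                     # random horizontal flip
    image = np.rot90(image, k=rng.integers(4))       # random 0/90/180/270 rotation
    image = image * rng.uniform(0.8, 1.2)            # contrast jitter
    image = image + rng.normal(0, 0.05, image.shape) # sensor-like noise
    return image
```

In practice the same random transform must also be applied to the label mask (geometric transforms only), so each augmented pair remains a valid training example.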

Nowadays, several open-source software packages implement DL-based image segmentation methods, such as Biomedisa230 and Annotat3D.262 However, these packages still depend on a high-performance computational infrastructure, limiting their use to a few laboratories. In this context, other open-source platforms, such as Caffe, established in plug-and-play cloud-based DL tools (CDeep3M),268 democratize access to data segmentation for users without high-performance computing resources. With the development of increasingly advanced and optimized DL-based algorithms, it is possible to extract the maximum amount of information from tomographic data, showing the interdependence between advances in experimental capabilities and computation.

Real-time segmentation is an essential component in several computer vision applications, including autonomous vehicles269,270 and robotics.271 The requirement of fast responses for fast decision-making has motivated several studies focused on understanding the learning processes of CNNs and the mechanisms that lead to the most effective and efficient architectures. In the context of x-ray-based image analysis at synchrotron beamlines, the demand for real-time data analysis arises from the need to process large datasets with low response latency. Modern synchrotron beamlines that produce time-resolved x-ray CT at high spatial resolution require algorithms that leverage the power of high-performance computing (HPC) environments to enable real-time applications, such as artificial intelligence-assisted segmentation and scientific visualization of large datasets.272 In addition to the use of HPC environments, the following approaches can also be used to devise deep learning architectures for devices with computational constraints, such as FPGAs.273,274

This section provides a comprehensive overview of the most prominent approaches for designing efficient yet effective CNN architectures for real-time x-ray image analysis.

1. Lightweight backbones

The development of lightweight backbones is one of the early techniques leading to real-time segmentation. A CNN architecture can be understood as two components: representation learning and classification. The representation learning component extracts features and thus automatically discovers representations (or subspaces) that encode information useful for the classification task. This component is usually called the backbone of the network; it concentrates most of the layers that compose a CNN architecture and, therefore, most of the computation. In turn, the classification component contains fewer layers and is expected to require fewer computations. Lightweight backbones are therefore backbones that require fewer computational resources to extract meaningful representations, which directly impacts the overall computation, including the estimation of values for the trainable parameters. For this reason, the number of parameters and the number of floating-point operations (FLOPs) performed by the network are the two main metrics used to compare the efficiency of CNNs.275
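Both efficiency metrics can be computed in closed form for a convolutional layer. The sketch below counts trainable parameters and multiply-accumulate operations (MACs, often used interchangeably with FLOPs up to a factor of two) for a small, hypothetical three-layer backbone; the layer sizes are illustrative assumptions, not from any published architecture.

```python
def conv2d_cost(c_in, c_out, k, h_out, w_out):
    """Cost of one 2D convolution: (parameters, multiply-accumulates)."""
    # One k x k x c_in kernel plus one bias per output channel.
    params = c_out * (c_in * k * k + 1)
    # Every output pixel of every output channel applies the full kernel.
    macs = h_out * w_out * c_out * c_in * k * k
    return params, macs

# Hypothetical backbone on a 128 x 128 grayscale input, halving resolution
# per stage: (c_in, c_out, kernel, h_out, w_out).
layers = [(1, 16, 3, 64, 64), (16, 32, 3, 32, 32), (32, 64, 3, 16, 16)]
total_params = sum(conv2d_cost(*layer)[0] for layer in layers)
total_macs = sum(conv2d_cost(*layer)[1] for layer in layers)
```

Note how parameter count is independent of input resolution while the MAC count scales with the output feature-map area, which is why lightweight backbones attack both the channel widths and the spatial resolutions kept inside the network.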

In a recent study, Tan and Le276 proposed a family of CNNs called EfficientNets, whose models are about 8.4× smaller and 6.1× faster than their corresponding non-optimized architectures. To achieve these results, the authors performed a systematic study of model scaling to identify strategies for decreasing the number of network parameters by considering three aspects: the depth and width of the network and the resolution of the input data, as illustrated in Fig. 9. Nowadays, there is strong evidence that the depth of a neural network plays an important role in generating more complex and richer features, which is desirable for segmenting complex objects or regions.249,276,277 In turn, both the width of a neural network and the resolution of the input data can produce more fine-grained representations that lead to better classification results, especially on the boundaries of an object or region of interest, where most misclassifications usually lie. With this in mind, the authors proposed a compound coefficient that allows the design of efficient and effective backbones. Finally, Fan et al. proposed FFBNet, which combines a lightweight backbone with a feature fusion box, essentially an approach for feature aggregation.275
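The compound coefficient idea can be sketched as follows: a single exponent φ scales depth, width, and resolution together through base multipliers α, β, γ. The base multipliers below are the values reported by Tan and Le for EfficientNet; the baseline depth, width, and resolution are illustrative assumptions.

```python
# EfficientNet-style compound scaling: one coefficient phi grows depth,
# width, and input resolution together.
alpha, beta, gamma = 1.2, 1.1, 1.15  # depth, width, resolution multipliers

def scale(phi, base_depth=18, base_width=32, base_res=224):
    """Scaled (layers, channels, input resolution) for a given phi."""
    depth = round(base_depth * alpha ** phi)   # number of layers
    width = round(base_width * beta ** phi)    # channels per layer
    res = round(base_res * gamma ** phi)       # input resolution (pixels)
    return depth, width, res

# The multipliers satisfy alpha * beta**2 * gamma**2 ~ 2, so incrementing
# phi by one roughly doubles the network's FLOPs.
cost_growth = alpha * beta ** 2 * gamma ** 2
```

The constraint α·β²·γ² ≈ 2 reflects that FLOPs grow linearly with depth but quadratically with both width and resolution, so a unit increase in φ gives a predictable, roughly twofold, compute budget increase.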

FIG. 9.

Generic CNN architecture and its main axes: spatial resolution (height and width) and depth. The size of the feature maps (gray blocks) is highly dependent on the size of the input data. Thus, the use of high-resolution images for training a deep neural network impacts the memory footprint significantly. On the other hand, max pooling (red blocks) is a down-sampling operation that reduces the spatial size of the feature maps by selecting the maximum value from the regions covered by the sliding window filter. The fully connected layers (blue blocks) and soft-max layers (green blocks) have less influence on the complexity of the network in terms of floating-point operations.


2. Feature aggregation

Feature aggregation aims to enrich representations by aggregating light and powerful detectors, usually built upon lightweight backbones, toward improving accuracy with minimal efficiency loss. As illustrated in Fig. 10, the idea of fusing different lightweight backbones emerged as a way to build effective CNN architectures without losing efficiency. As a successful example of this approach, we can mention Fan et al.'s work, in which the authors proposed a feature fusion block method with two main components: the feature aggregation block (FAB) and the dense feature pyramid (DFP).275 Similar to Tan and Le,276 the authors observed that lightweight backbones may not produce representations with enough spatial and semantic information, which significantly impacts the qualitative results of classification. They also observed that shallow backbones can produce rich feature maps that nevertheless lack spatial information. To overcome this limitation, they proposed a method that fuses feature maps by selecting layers from different depths of the backbone, since it is known that earlier layers capture low-level features while deeper layers capture high-level features. As a result, aggregating layers with different spatial scales and high-level semantic features can produce strong representations.
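The core fusion step, combining a high-resolution shallow feature map with an upsampled, semantically richer deep feature map, can be sketched with plain arrays. This is a generic illustration of depth-wise feature fusion, not the specific FAB/DFP implementation; the map sizes are assumptions.

```python
import numpy as np

def fuse(shallow, deep):
    """Fuse feature maps in (channels, height, width) layout: nearest-neighbor
    upsample the deep map to the shallow map's resolution, then concatenate
    along the channel axis."""
    factor = shallow.shape[1] // deep.shape[1]
    upsampled = deep.repeat(factor, axis=1).repeat(factor, axis=2)
    return np.concatenate([shallow, upsampled], axis=0)

# Early layer: high spatial resolution, few channels (spatial detail).
shallow = np.random.rand(16, 64, 64)
# Late layer: low resolution, many channels (semantic content).
deep = np.random.rand(64, 16, 16)
fused = fuse(shallow, deep)  # channels add up, resolution follows the shallow map
```

The fused tensor carries both the spatial detail of the early layer and the high-level semantics of the deep layer, which is precisely the property feature aggregation exploits.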

FIG. 10.

Illustration of a generic CNN architecture composed of a fusion aggregation module (yellow block) that combines features from two branches, whose outputs are produced from the convolution (gray blocks) and max pooling (red blocks) operations, which may have complementary information. The feature maps produced after the fusion operation are usually mapped in a one-dimensional vector by using the fully connected layer (blue blocks), whose output is used to feed a soft-max classifier (green block) that produces the final predictions.


Finally, the dense feature pyramid (DFP) component aims to improve multi-scale detection by concatenating high-level layers with shallow layers to enhance the detection of small objects. Compared with similar strategies such as dense connections,278 the DFP component has fewer trainable parameters, which favors the construction of lightweight backbones.

3. Attention mechanisms

Attention is an important concept in artificial neural networks and is present in several state-of-the-art algorithms such as BERT,279 Transformers,280 and Vision Transformers (ViT).281 Inspired by human cognitive attention, studies on attention mechanisms in computer vision focus on guiding the training process of CNNs by enhancing the parts of the input data that really matter for learning a specific task,282 while ignoring non-relevant information. This concept was introduced by Bahdanau et al.283 as a mechanism to improve neural machine translation systems. Later, Xu et al.284 proposed an adapted version of this mechanism for computer vision tasks, including object detection and image caption generation. Nowadays, attention mechanisms in computer vision and image understanding can be categorized according to the kind of information, or domain, on which the network concentrates its attention: spatial attention (where to pay attention), channel attention (what to pay attention to), temporal attention (when to pay attention), and branch attention (which task to pay attention to).

Recently, the adoption of attention mechanisms in real-time segmentation algorithms has emerged as an effective way to design attention with low computational cost.32 According to the authors, to reach highly accurate detections, a CNN model must produce representations with a rich spatial context, which requires large receptive fields, and with fine spatial details, which favor fine-grained segmentation and, thus, the segmentation of high-resolution inputs. However, both requirements are usually met by structures that demand high computational costs. To overcome this limitation, the authors introduced a fast spatial attention mechanism that uses self-attention to capture rich spatial context with lower computational effort. Also, to segment high-resolution inputs efficiently, the authors reduce the intermediate feature stages of the network by fusing the features of such modules.

Finally, a more recent approach proposes a two-branch network, BiAttnNet, with a bilateral attention structure that places all attention modules in a Detail Branch dedicated to selecting semantic details. The Detail Branch is composed entirely of AttnTrans blocks, which provide a better alternative to regular convolution. AttnTrans can be understood as three parts: spatial attention, channel attention, and tensor reshaping. In summary, spatial attention is obtained by averaging the input tensor over the channel dimension, while channel attention is obtained by averaging over the spatial dimensions. The two resulting attention maps are applied to the input tensor by multiplying all three together. Then, a tensor reshaping operation is applied by using a 1 × 1 kernel convolution.285
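The averaging-based attention just described can be sketched directly: the spatial map is a channel-wise mean, the channel map is a spatial mean, and the three tensors are multiplied by broadcasting before a 1 × 1 convolution (here reduced to a plain channel-mixing matrix) reshapes the result. This is a simplified illustration of the idea, not the published AttnTrans implementation, and the tensor sizes are assumptions.

```python
import numpy as np

def attn_trans(x, mix):
    """Simplified AttnTrans-style block for a feature map x of shape (C, H, W).
    mix has shape (C_out, C) and plays the role of a 1x1 convolution."""
    spatial = x.mean(axis=0, keepdims=True)       # (1, H, W): where to attend
    channel = x.mean(axis=(1, 2), keepdims=True)  # (C, 1, 1): what to attend to
    attended = x * spatial * channel              # broadcast product of all three
    c, h, w = attended.shape
    # 1x1 convolution == per-pixel linear mix of channels.
    return (mix @ attended.reshape(c, h * w)).reshape(-1, h, w)

x = np.random.rand(8, 32, 32)
mix = np.random.rand(4, 8)   # mixes 8 input channels down to 4 output channels
y = attn_trans(x, mix)
```

Because both attention maps are simple means, the extra cost over a plain 1 × 1 convolution is negligible, which is what makes this kind of attention attractive for real-time segmentation.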

This review highlights the emerging trends and applications of μCT to characterize diverse bio-based systems, such as plants, animals, soils, and synthetic bio-based materials. The fundamental design of x-ray CT coupled to synchrotron beamlines is addressed, considering their capabilities related to nCT, as well as in situ, in vivo, and in operando 4DCT characterization. To overcome challenges related to tomographic data, advanced computational algorithms based on machine learning and deep learning are outlined in this review, focusing on the urgency of combining computational tools with experimental data.

Future prospects for x-ray CT point toward increased access to fourth-generation synchrotron beamlines around the world, requiring robust setups, tomographic data management, and processing pipelines for fast and accurate analysis. While there are several strategies for designing deep learning architectures for real-time data processing, as pointed out in this review, the adoption of those strategies to tackle x-ray CT-related problems is still somewhat incipient, especially for x-ray CT produced by modern synchrotron beamlines, which can produce images at very high resolution, thus revealing the nanostructures of materials.

We hypothesize that the main challenges in adopting modern machine learning methods for x-ray tomographic data produced by modern synchrotrons are twofold: the lack of reliable ground-truth data, especially for nano-tomographic data, since most deep learning-based methods work in a supervised regime; and the lack of explainability and interpretability of complex machine learning models, which makes application-oriented analysis by specialists difficult. We believe that recent advances in self-supervised and unsupervised learning286,287 and in the explainability of machine learning models288,289 may mitigate these challenges. In summary, state-of-the-art methods for tomographic data must be able to handle large volumes of data with as low a latency as possible; operate in scenarios where manual data labeling is very costly and time consuming; and provide explainable, or at least interpretable, models so that specialists can judge when to trust the predictions and inferences made by complex models, enabling a more confident use of AI to answer research questions in the relevant field of expertise.

The authors would like to thank the Brazilian National Council for Scientific and Technological Development-CNPq (DTI-A/SisNANO Grant Nos. 380312/2020-4, 302334/2022-0 and 380173/2022-0), INCT Nanocarbono, and the São Paulo Research Foundation—FAPESP (Grant Nos. 2014/50884-5, 2017/18139-6, 2017/02317-2, 2018/16453-8, 2020/08651-4, and 2021/03097-1) for supporting this work.

The authors have no conflicts to disclose.

Pedro Ivo Cunha Claro, Egon Borges, Gabriel Ravanhani Schleder, Nathaly Archilha, Allan Pinto, Murilo Carvalho, Carlos Driemeier, Adalberto Fazzio, and Rubia Figueredo Gouveia contributed equally to this work.

Pedro Ivo Cunha Claro: Conceptualization (equal); Investigation (equal); Methodology (equal); Project administration (equal); Supervision (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). Egon Borges: Writing – original draft (equal); Writing – review & editing (equal). Gabriel Ravanhani Schleder: Writing – original draft (equal); Writing – review & editing (equal). Nathaly Archilha: Writing – original draft (equal); Writing – review & editing (equal). Allan Pinto: Writing – original draft (equal); Writing – review & editing (equal). Murilo Carvalho: Writing – original draft (equal); Writing – review & editing (equal). Carlos Driemeier: Writing – original draft (equal); Writing – review & editing (equal). Adalberto Fazzio: Writing – review & editing (equal). Rubia Figueredo Gouveia: Conceptualization (equal); Funding acquisition (equal); Investigation (equal); Methodology (equal); Project administration (equal); Supervision (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal).

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1.
S. R.
Stock
,
Int. Mater. Rev.
53
,
129
(
2008
).
2.
3.
G. N.
Hounsfield
, U.S. Patent 3,952,201 (20 April
1976
).
4.
V.
Petrik
,
V.
Apok
,
J. A.
Britton
,
B. A.
Bell
, and
M. C.
Papadopoulos
,
Neurosurgery
58
,
780
(
2006
).
5.
A. C.
Thompson
,
J.
Llacer
,
L.
Campbell Finman
,
E. B.
Hughes
,
J. N.
Otis
,
S.
Wilson
, and
H. D.
Zeman
,
Nucl. Instrum. Methods Phys. Res.
222
,
319
(
1984
).
6.
J. C.
Elliott
and
S. D.
Dover
,
J. Microsc.
126
,
211
(
1982
).
7.
S. D.
Rawson
,
J.
Maksimcuka
,
P. J.
Withers
, and
S. H.
Cartmell
,
BMC Biol.
18
,
21
(
2020
).
8.
M.
Kudryashev
,
Cellular Imaging: Electron Tomography and Related Techniques
(
Springer
,
2018
), pp.
261
282
.
9.
J.
Frank
,
Electron Tomography
(
Springer New York
,
New York
,
2004
), pp.
1
15
.
10.
S.
Bals
,
B.
Goris
,
A.
De Backer
,
S.
van Aert
, and
G.
Van Tendeloo
,
MRS Bull.
41
,
525
(
2016
).
11.
T. V.
Truong
,
W.
Supatto
,
D. S.
Koos
,
J. M.
Choi
, and
S. E.
Fraser
,
Nat. Methods
8
,
757
(
2011
).
13.
D.
Smith
and
T.
Starborg
,
Tissue Cell
57
,
111
(
2019
).
15.
E. A.
Wargo
,
T.
Kotaka
,
Y.
Tabuchi
, and
E. C.
Kumbur
,
J. Power Sources
241
,
608
(
2013
).
16.
F. J.
Giessibl
,
Rev. Mod. Phys.
75
,
949
(
2003
).
17.
L.-C.
Chen
and
C.-C.
Huang
,
Meas. Sci. Technol.
16
,
1061
(
2005
).
18.
C. S.
Betz
,
V.
Volgger
,
S. M.
Silverman
,
M.
Rubinstein
,
M.
Kraft
,
C.
Arens
, and
B.
Wong
,
Head Neck Oncol.
5
,
35
(
2013
).
19.
P. J.
Withers
,
C.
Bouman
,
S.
Carmignato
,
V.
Cnudde
,
D.
Grimaldi
,
C. K.
Hagen
,
E.
Maire
,
M.
Manley
,
A.
Du Plessis
, and
S. R.
Stock
,
Nat. Rev. Methods Primers
1
,
18
(
2021
).
20.
T.
Sera
,
Transparency in Biology
(
Springer
,
2021
), pp.
167
187
.
23.
Y.
Hwu
,
G.
Margaritondo
, and
A.
Chiang
,
BMC Biol.
15
,
122
(
2017
).
24.
M.
Dierolf
,
A.
Menzel
,
P.
Thibault
,
P.
Schneider
,
C. M.
Kewish
,
R.
Wepf
,
O.
Bunk
, and
F.
Pfeiffer
,
Nature
467
,
436
(
2010
).
25.
G.
Shen
and
Y.
Wang
,
Rev. Miner. Geochem.
78
,
745
(
2014
).
26.
F.
Rosenblatt
,
Psychol. Rev.
65
,
386
(
1958
).
27.
R.
Rojas
,
Neural Networks
(
Springer Berlin Heidelberg
,
Berlin, Heidelberg
,
1996
), pp.
149
182
.
28.
M.
Gardner
and
S.
Dorling
,
Atmos. Environ.
32
,
2627
(
1998
).
29.
A.
Ajit
,
K.
Acharya
, and
A.
Samanta
, in
2020 International Conference on Emerging Trends in Information Technology and Engineering
(
IEEE
,
2020
), pp.
1
5
.
30.
A.
Koivu
,
J.-P.
Kakko
,
S.
Mäntyniemi
, and
M.
Sairanen
,
Expert Syst. Appl.
207
,
117938
(
2022
).
31.
Y.
Lecun
,
Y.
Bengio
, and
G.
Hinton
,
Nature
521
,
436
(
2015
).
32.
M.-H.
Guo
,
T.-X.
Xu
,
J.-J.
Liu
,
Z.-N.
Liu
,
P.-T.
Jiang
,
T.-J.
Mu
,
S.-H.
Zhang
,
R. R.
Martin
,
M.-M.
Cheng
, and
S.-M.
Hu
,
Comput. Visual Media
8
,
331
(
2022
).
33.
J.
Borovec
,
J.
Švihlík
,
J.
Kybic
, and
D.
Habart
,
J. Electron. Imaging
26
,
17
(
2017
).
34.
M. M.
Ter-Pogossian
,
Semin. Nucl. Med.
7
,
109
(
1977
).
35.
N.
Bianco-Stein
,
I.
Polishchuk
,
G.
Seiden
,
J.
Villanova
,
A.
Rack
,
P.
Zaslansky
, and
B.
Pokroy
,
Adv. Sci.
7
,
2000108
(
2020
).
36.
J.
Dewanckele
,
M. A.
Boone
,
F.
Coppens
,
D.
Van Loo
, and
A. P.
Merkle
,
J. Microsc.
277
,
197
(
2020
).
37.
A.
Nommeots-Nomm
,
C.
Ligorio
,
A. J.
Bodey
,
B.
Cai
,
J. R.
Jones
,
P. D.
Lee
, and
G.
Poologasundarampillai
,
Mater. Today Adv.
2
,
100011
(
2019
).
38.
R. S.
Cintra
and
H. F.
de Campos Velho
,
Advanced Applications for Artificial Neural Networks
(
Intechopen
,
2018
), p.
265
.
39.
V.
Tra
,
J.
Kim
,
S. A.
Khan
, and
J.-M.
Kim
,
Sensors
17
,
2834
(
2017
).
40.
I.
Varfolomeev
,
I.
Yakimchuk
, and
I.
Safonov
,
Computers
8
,
72
(
2019
).
41.
S. R.
Stock
,
Int. Mater. Rev.
44
,
141
(
1999
).
42.
P.
Fratzl
and
R.
Weinkamer
,
Prog. Mater. Sci.
52
,
1263
(
2007
).
43.
L. J.
Gibson
,
J. R. Soc. Interface
9
,
2749
(
2012
).
44.
D. J.
Cosgrove
and
M. C.
Jarvis
,
Front. Plant Sci.
3
,
204
(
2012
).
45.
A.
Piovesan
,
V.
Vancauwenberghe
,
T.
Van De Looverbosch
,
P.
Verboven
, and
B.
Nicolaï
,
Trends Plant Sci.
26
,
1171
(
2021
).
46.
S.
Dhondt
,
H.
Vanhaeren
,
D.
Van Loo
,
V.
Cnudde
, and
D.
Inzé
,
Trends Plant Sci.
15
,
419
(
2010
).
47.
B.
Choat
,
C. R.
Brodersen
, and
A. J.
Mcelrone
,
New Phytol.
205
,
1095
(
2015
).
48.
H.
Cochard
,
S.
Delzon
, and
E.
Badel
,
Plant, Cell Environ.
38
,
201
(
2015
).
49.
C. R.
Brodersen
,
A. J.
McElrone
,
B.
Choat
,
M. A.
Matthews
, and
K. A.
Shackel
,
Plant Physiol.
154
,
1088
(
2010
).
50.
F.
Petruzzellis
,
C.
Pagliarani
,
T.
Savi
,
A.
Losso
,
S.
Cavalletto
,
G.
Tromba
,
C.
Dullin
,
A.
Bär
,
A.
Ganthaler
,
A.
Miotto
,
S.
Mayr
,
M. A.
Zwieniecki
,
A.
Nardini
, and
F.
Secchi
,
New Phytol.
220
,
104
(
2018
).
51.
A.
Losso
,
A.
Bär
,
B.
Dämon
,
C.
Dullin
,
A.
Ganthaler
,
F.
Petruzzellis
,
T.
Savi
,
G.
Tromba
,
A.
Nardini
,
S.
Mayr
, and
B.
Beikircher
,
New Phytol.
221
,
1831
(
2019
).
52.
R.
Rehschuh
,
A.
Cecilia
,
M.
Zuber
,
T.
Faragó
,
T.
Baumbach
,
H.
Hartmann
,
S.
Jansen
,
S.
Mayr
, and
N.
Ruehr
,
Plant Physiol.
184
,
852
(
2020
).
53.
G.
Van Dalen
,
H.
Blonk
, and
H.
Van Aalst
,
GIT Imaging Microsc.
3
,
18
(
2003
).
54.
T.
Koddenberg
,
M.
Zauner
, and
H.
Militz
,
Micron
134
,
102875
(
2020
).
55.
P. N.
Ciesielski
,
M. B.
Pecha
,
A. M.
Lattanzi
,
V. S.
Bharadwaj
,
M. F.
Crowley
,
L.
Bu
,
J. V.
Vermaas
,
K. X.
Steirer
, and
M. F.
Crowley
,
ACS Sustainable Chem. Eng.
8
,
3512
(
2020
).
56.
C.
Chen
,
Y.
Kuang
,
S.
Zhu
,
I.
Burgert
,
T.
Keplinger
,
A.
Gong
,
T.
Li
,
L.
Berglund
,
S. J.
Eichhorn
, and
L.
Hu
,
Nat. Rev. Mater.
5
,
642
(
2020
).
57.
S. W.
Hwang
,
S.
Tazuru
, and
J.
Sugiyama
,
J. Korean Wood Sci. Technol.
48
,
283
(
2020
).
58.
A.
Wagih
,
M.
Hasani
,
S. A.
Hall
,
V.
Novak
, and
H.
Theliander
,
Holzforschung
75
,
754
(
2021
).
59.
J. E.
Jakes
,
C. R.
Frihart
,
C. G.
Hunt
,
D. J.
Yelle
,
N. Z.
Plaza
,
L.
Lorenz
,
W.
Grigsby
,
D. J.
Ching
,
F.
Kamke
,
S. C.
Gleber
,
S.
Vogt
, and
X.
Xiao
,
J. Mater. Sci.
54
,
705
(
2019
).
60.
D. M.
Nguyen
,
G.
Almeida
,
T. M. L.
Nguyen
,
J.
Zhang
,
P.
Lu
,
J.
Colin
, and
P.
Perré
,
A Critical Review of Current Imaging Techniques to Investigate Water Transfers in Wood and Biosourced Materials
(
Springer Netherlands
,
2021
).
61.
P.
Perré
,
D. M.
Nguyen
, and
G.
Almeida
,
Sci. Rep.
12
,
1750
(
2022
).
62.
F.
Forsberg
,
R.
Mooser
,
M.
Arnold
,
E.
Hack
, and
P.
Wyss
,
J. Struct. Biol.
164
,
255
(
2008
).
63.
S. J.
Sanabria
,
F.
Baensch
,
M.
Zauner
, and
P.
Niemz
,
Sci. Rep.
10
, 2
1615
(
2020
).
64.
M.
Downing
,
L. M.
Eaton
,
R. L.
Graham
,
M. H.
Langholtz
,
R. D.
Perlack
,
A. F. Turhollow
Jr
,
B.
Stokes
, and
C. C.
Brandt
,
US Billion-Ton Update: Biomass Supply for a Bioenergy and Bioproducts Industry
(Oak Ridge National Lab.(ORNL), Oak Ridge, TN, 2011).
65.
C. E.
Driemeier
,
L. Y.
Ling
,
D.
Yancy-caballero
,
P. E.
Mantelatto
,
C. S. B.
Dias
, and
N. L.
Archilha
,
PLoS One
13
,
e0208219
(
2018
).
66.
D. R.
Negrão
,
L. Y.
Ling
,
R. O.
Bordonal
, and
C.
Driemeier
,
Energy Fuels
33
,
9965
(
2019
).
67.
J. P.
Lancha
,
P.
Perré
,
J.
Colin
,
P.
Lv
,
N.
Ruscassier
, and
G.
Almeida
,
Sci. Rep.
11
,
8444
(
2021
).
68.
A.
Strandberg
,
M.
Thyrel
,
N.
Skoglund
,
T. A.
Lestander
,
M.
Broström
, and
R.
Backman
,
Fuel Process. Technol.
176
,
211
(
2018
).
69.
R. L.
Pereira Oliveira Moreira
,
J. A.
Simão
,
R. F.
Gouveia
, and
M.
Strauss
,
ACS Appl. Bio Mater.
3
,
2193
(
2020
).
70.
F. V.
Ferreira
,
M.
Mariano
,
S. C.
Rabelo
,
R. F.
Gouveia
, and
L. M. F.
Lona
,
Appl. Surf. Sci.
436
,
1113
(
2018
).
71.
K.
Murai
,
T.
Daitoku
, and
T.
Tsuruda
,
Proc. Combust. Inst.
38
,
3987
(
2021
).
72.
M. R.
Barr
,
R.
Jervis
,
Y.
Zhang
,
A. J.
Bodey
,
C.
Rau
,
P. R.
Shearing
,
D. J. L.
Brett
,
M. M.
Titirici
, and
R.
Volpe
,
Sci. Rep.
11
,
2656
(
2021
).
73.
M. N.
Holme
,
G.
Schulz
,
H.
Deyhle
,
T.
Weitkamp
,
F.
Beckmann
,
J. A.
Lobrinus
,
F.
Rikhtegar
,
V.
Kurtcuoglu
,
I.
Zanette
,
T.
Saxer
, and
B.
Müller
,
Nat. Protoc.
9
,
1401
(
2014
).
74.
B. D.
Metscher
,
Cold Spring Harbor Protoc.
2011
,
1462
.
75.
J.
Martins de Souza e Silva
,
J.
Utsch
,
M. A.
Kimm
,
S.
Allner
,
M. F.
Epple
,
K.
Achterhold
, and
F.
Pfeiffer
,
Sci. Rep.
7
,
17387
(
2017
).
76.
M. C.
Strotton
,
A. J.
Bodey
,
K.
Wanelik
,
M. C.
Darrow
,
E.
Medina
,
C.
Hobbs
,
C.
Rau
, and
E. J.
Bradbury
,
Sci. Rep.
8
,
12017
(
2018
).
77.
M. D. C.
Fonseca
,
B. H. S.
Araujo
,
C. S. B.
Dias
,
N. L.
Archilha
,
D. P. A.
Neto
,
E.
Cavalheiro
,
H.
Westfahl
,
A. J. R.
da Silva
, and
K. G.
Franchini
,
Sci. Rep.
8
,
12074
(
2018
).
78.
S. I.
Prajapati
and
C.
Keller
,
J. Vis. Exp.
47
,
2377
(
2011
).
79.
Z.
Du
,
Y.
Hu
,
N.
Ali Buttar
, and
A.
Mahmood
,
Food Sci. Nutr.
7
,
3146
(
2019
).
80.
S.
Puce
,
D.
Pica
,
L.
Mancini
,
F.
Brun
,
A.
Peverelli
, and
G.
Bavestrello
,
Zoomorphology
130
,
85
(
2011
).
81.
V. M.
Soviero
,
S. C.
Leal
,
R. C.
Silva
, and
R. B.
Azevedo
,
J. Dent.
40
,
35
(
2012
).
82.
M.
Bemmann
,
E.
Schulz-Kornas
,
J. U.
Hammel
,
A.
Hipp
,
J.
Moosmann
,
A.
Herrel
,
A.
Rack
,
U.
Radespiel
,
E.
Zimmermann
,
T. M.
Kaiser
, and
K.
Kupczik
,
J. Struct. Biol.
213
,
107658
(
2021
).
83.
K.
Laperre
,
M.
Depypere
,
N.
Van Gastel
,
S.
Torrekens
,
K.
Moermans
,
R.
Bogaerts
,
F.
Maes
, and
G.
Carmeliet
,
Bone
49
,
613
(
2011
).
84.
M.
Langer
and
F.
Peyrin
,
Osteoporos. Int.
27
,
441
(
2016
).
85.
E. A.
Zimmermann
,
E.
Schaible
,
H.
Bale
,
H. D.
Barth
,
S. Y.
Tang
,
P.
Reichert
,
B.
Busse
,
T.
Alliston
,
J. W.
Ager
, and
R. O.
Ritchie
,
Proc. Natl. Acad. Sci. U. S. A.
108
,
14416
(
2011
).
86.
V. M.
Zelaya
,
N. L.
Archilha
,
M.
Calasans
,
A. L.
Rossi
,
M.
Farina
,
T. S.
Santisteban
, and
A. M.
Rossi
,
Microsc. Microanal.
24
,
536
(
2018
).
87.
S.
Stegen
,
K.
Laperre
,
G.
Eelen
,
G.
Rinaldi
,
P.
Fraisl
,
S.
Torrekens
,
R.
Van Looveren
,
S.
Loopmans
,
G.
Bultynck
,
S.
Vinckier
,
F.
Meersman
,
P. H.
Maxwell
,
J.
Rai
,
M.
Weis
,
D. R.
Eyre
,
B.
Ghesquière
,
S.
Fendt
,
P.
Carmeliet
, and
G.
Carmeliet
,
Nature
565
,
511
(
2019
).
88.
L.
Kuang
,
J.
Huang
,
Y.
Liu
,
X.
Li
,
Y.
Yuan
, and
C.
Liu
,
Adv. Funct. Mater.
31
,
2105383
(
2021
).
89.
M.
Langer
,
A.
Pacureanu
,
H.
Suhonen
,
Q.
Grimal
,
P.
Cloetens
, and
F.
Peyrin
,
PLoS One
7
,
e35691
(
2012
).
90.
E.
Rabot
,
M.
Wiesmeier
,
S.
Schlüter
, and
H.-J.
Vogel
,
Geoderma
314
,
122
(
2018
).
91.
A. N.
Kravchenko
and
A. K.
Guber
,
Geoderma
287
,
31
(
2017
).
92.
I. A.
Taina
,
R. J.
Heck
, and
T. R.
Elliot
,
Can. J. Soil Sci.
88
,
1
(
2008
).
93.
T. R.
Ferreira
,
L. F.
Pires
, and
K.
Reichardt
,
Braz. J. Phys.
52
,
33
(
2022
).
94.
K. R.
Daly
,
S. J.
Mooney
,
M. J.
Bennett
,
N. M. J.
Crout
,
T.
Roose
, and
S. R.
Tracy
,
J. Exp. Bot.
66
,
2305
(
2015
).
95.
D. R.
Lammel
,
T.
Arlt
,
I.
Manke
, and
M. C.
Rillig
,
Front. Environ. Sci.
7
,
153
(
2019
).
96.
K. R.
Daly
,
S. D.
Keyes
,
S.
Masum
, and
T.
Roose
,
J. Exp. Bot.
67
,
1059
(
2016
).
97.
Y.
Zhao
,
X.
Hu
, and
X.
Li
,
CATENA
193
,
104622
(
2020
).
98.
S.
Schlüter
,
S.
Sammartino
, and
J.
Koestel
,
Geoderma
370
,
114370
(
2020
).
99.
See https://spiral.imperial.ac.uk/bitstream/10044/1/76618/2/SRCCL-Full-Report-Compiled-191128.pdf for P. R. Shukla, J. Skea, E. Calvo Buendia, V. Masson-Delmotte, H. O. Pörtner, D. C. Roberts, P. Zhai, R. Slade, S. Connors, and R. Van Diemen, IPCC, 2019: Climate Change and Land: an IPCC special report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems (accessed 28 February 2020).
100.
A. N.
Kravchenko
,
E. R.
Toosi
,
A. K.
Guber
,
N. E.
Ostrom
,
J.
Yu
,
K.
Azeem
,
M. L.
Rivers
, and
G. P.
Robertson
,
Nat. Geosci.
10
,
496
(
2017
).
101.
R.
Bhattacharyya
,
S. M. F.
Rabbi
,
Y.
Zhang
,
I. M.
Young
,
A. R.
Jones
,
P. G.
Dennis
,
N. W.
Menzies
,
P. M.
Kopittke
, and
R. C.
Dalal
,
Sci. Total Environ.
778
,
146286
(
2021
).
102.
L.
Hou
,
W.
Gao
,
F.
der Bom
,
Z.
Weng
,
C. L.
Doolette
,
A.
Maksimenko
,
D.
Hausermann
,
Y.
Zheng
,
C.
Tang
,
E.
Lombi
, and
P. M.
Kopittke
,
Geoderma
405
,
115405
(
2022
).
103.
C. P.
Scotson
,
A.
van Veelen
,
K. A.
Williams
,
N.
Koebernick
,
D.
McKay Fletcher
, and
T.
Roose
,
Plant Soil
460
,
647
(
2021
).
104.
W.
Gao
,
S.
Schlüter
,
S.
Blaser
,
J.
Shen
, and
D.
Vetterlein
,
Plant Soil
441
,
643
(
2019
).
105.
L.
Dupuy
,
M.
Mimault
,
D.
Patko
,
V.
Ladmiral
,
B.
Ameduri
,
M.
MacDonald
, and
M.
Ptashnyk
,
Curr. Opin. Genet. Dev.
51
,
18
(
2018
).
106.
K. J.
De France
,
T.
Hoare
, and
E. D.
Cranston
,
Chem. Mater.
29
,
4609
(
2017
).
107.
S.
Deville
,
J. Mater. Res.
28
,
2202
(
2013
).
108.
J.
González-Rivera
,
R.
Iglio
,
G.
Barillaro
,
C.
Duce
, and
M.
Tinè
,
Polymers
10
,
616
(
2018
).
109.
P.
Gupta
,
B.
Singh
,
A. K.
Agrawal
, and
P. K.
Maji
,
Mater. Des.
158
,
224
(
2018
).
110.
M.
Mariano
,
L.
Wang
,
S.
Bernardes
, and
M.
Strauss
,
Carbohydr. Polym.
195
,
153
(
2018
).
111.
L.
Berglund
,
T.
Nissilä
,
D.
Sivaraman
,
S.
Komulainen
,
V. V.
Telkki
, and
K.
Oksman
,
ACS Appl. Mater. Interfaces
13
,
34899
(
2021
).
112.
M. V.
Lorevice
,
E. O.
Mendonça
,
N. M.
Orra
,
A. C.
Borges
, and
R. F.
Gouveia
,
ACS Appl. Nano Mater.
3
,
10954
(
2020
).
113.
M. V.
Lorevice
,
P. I. C.
Claro
,
N. A.
Aleixo
,
L. S.
Martins
,
M. T.
Maia
,
A. P. S.
Oliveira
,
D. S. T.
Martinez
, and
R. F.
Gouveia
,
Chem. Eng. J.
462
,
142166
(
2023
).
114.
P. V. O.
Toledo
,
L. R.
Marques
, and
D. F. S.
Petri
,
Int. J. Polym. Sci.
2019
,
8179842
.
115.
R. K.
Ramakrishnan
,
V. V. T.
Padil
,
M.
Škodová
,
S.
Wacławek
,
M.
Černík
, and
S.
Agarwal
,
Adv. Funct. Mater.
31
,
2100640
(
2021
).
116.
M.
Chhajed
,
C.
Yadav
,
A. K.
Agrawal
, and
P. K.
Maji
,
Carbohydr. Polym.
226
,
115286
(
2019
).
117.
M. L. B.
Almeida
,
E.
Ayres
,
F. C. C.
Moura
, and
R. L.
Oréfice
,
J. Hazard. Mater.
346
,
285
(
2018
).
118.
E.
Padilla-Ortega
,
M.
Darder
,
P.
Aranda
,
R.
Figueredo Gouveia
,
R.
Leyva-Ramos
, and
E.
Ruiz-Hitzky
,
Appl. Clay Sci.
130
,
40
(
2016
).
119.
F.
Canencia
,
M.
Darder
,
P.
Aranda
,
F. M.
Fernandes
,
R. F.
Gouveia
, and
E.
Ruiz-hitzky
,
J. Mater. Sci.
52
,
11269
(
2017
).
120.
F. V.
Ferreira
,
G. N.
Trindade
,
L. M. F.
Lona
,
J. S.
Bernardes
, and
R. F.
Gouveia
,
Eur. Polym. J.
117
,
105
(
2019
).
121.
D. B.
Rocha
and
D. S.
Rosa
,
Compos. Part B
172
,
1
(
2019
).
122.
K.
Vegso
,
Y.
Wu
,
H.
Takano
,
M.
Hoshino
, and
A.
Momose
,
Sci. Rep.
9
,
7404
(
2019
).
123.
S.
Damasceno
,
C. C.
Corrêa
,
R. F.
Gouveia
,
M.
Strauss
,
C. C. B.
Bufon
, and
M.
Santhiago
,
Adv. Electron. Mater.
6
,
1900826
(
2019
).
124.
125.
C.
Renghini
,
V.
Komlev
,
F.
Fiori
,
E.
Verné
,
F.
Baino
, and
C.
Vitale-Brovarone
,
Acta Biomater.
5
,
1328
(
2009
).
126.
V. S.
Komlev
,
F.
Peyrin
,
M.
Mastrogiacomo
,
A.
Cedola
,
A.
Papadimitropoulos
,
F.
Rustichelli
, and
R.
Cancedda
,
Tissue Eng.
12
,
3449
(
2006
).
127.
G.
Labate
,
G.
Catapano
,
C.
Vitale-Brovarone
, and
F.
Baino
,
Ceram. Int.
43
,
9443
(
2017
).
128.
S.-J.
Jiang
,
M.-H.
Wang
,
Z.-Y.
Wang
,
H.-L.
Gao
,
S.-M.
Chen
,
Y.-H.
Cong
,
L.
Yang
,
S.-M.
Wen
,
D.-D.
Cheng
,
J.-C.
He
, and
S.-H.
Yu
,
Adv. Funct. Mater.
32
,
2110931
(
2022
).
129.
J. H.
Lopes
,
J. A.
Magalhães
,
R. F.
Gouveia
,
C. A.
Bertran
,
M.
Motisuke
,
S. E. A.
Camargo
, and
E. de S.
Trichês
,
J. Mech. Behav. Biomed. Mater.
62
,
10
(
2016
).
130.
L.
de Siqueira
,
C. G.
de Paula
,
R. F.
Gouveia
,
M.
Motisuke
, and
E.
de Sousa Trichês
,
J. Mech. Behav. Biomed. Mater.
90
,
635
(
2019
).
131.
A. I.
Kondarage
,
G.
Poologasundarampillai
,
A.
Nommeots‐Nomm
,
P. D.
Lee
,
T. D.
Lalitharatne
,
N. D.
Nanayakkara
,
J. R.
Jones
, and
A.
Karunaratne
,
J. Am. Ceram. Soc.
105
,
1671
(
2022
).
132.
M.
Tonelli
,
A.
Faralli
,
F.
Ridi
, and
M.
Bonini
,
J. Colloid Interface Sci.
598
,
24
(
2021
).
133.
F. V.
Ferreira
,
L. P.
Souza
,
T. M. M.
Martins
,
J. H.
Lopes
,
B. D.
Mattos
,
M.
Mariano
,
I. F.
Pinheiro
,
T. M.
Valverde
,
S.
Livi
,
J. A.
Camilli
,
A. M.
Goes
,
R. F.
Gouveia
,
L. M. F.
Lona
, and
O. J.
Rojas
,
Nanoscale
11
,
19842
(
2019
).
134.
L.
Wang
,
Q.
Yang
,
M.
Huo
,
D.
Lu
,
Y.
Gao
,
Y.
Chen
, and
H.
Xu
,
Adv. Mater.
33
,
e2100150
(
2021
).
135.
B.
Thomas
,
S.
Geng
,
J.
Wei
,
H.
Lycksam
,
M.
Sain
, and
K.
Oksman
,
ACS Appl. Nano Mater.
5
,
7954
(
2022
).
136.
A. C.
Thompson
and
D.
Vaughan
,
X-Ray Data Booklet
(
Lawrence Berkeley National Laboratory, University of California Berkeley
,
2001
).
137.
See https://escholarship.org/uc/item/5tc256w3 for A. Robinson, Lightsources. org: An Internet Site for Light Source Communication (2004). (accessed 28 February 2020).
138.
P. R.
Willmott
,
Springer Proceedings in Physics
(Springer,
2021
), pp.
1
37
.
139.
J.
Als-Nielsen
,
APS Meeting Abstract
(
APS
,
1996
), p.
A1302
.
141.
See https://imaging.rigaku.com/products for Rigaku, Rigaku x-ray computed tomography products (2020) (accessed 12 January 2023).
142.
K.
Taylor
,
L.
Ma
,
P.
Dowey
, and
P.
Lee
, in
Sixth EAGE Shale Work
(
European Association of Geoscientists & Engineers
,
2019
), pp.
1
5
.
143. D. McMorrow and J. Als-Nielsen, Elements of Modern X-Ray Physics (John Wiley & Sons, 2011).
144. A. Ruhlandt, M. Krenkel, M. Bartels, and T. Salditt, Phys. Rev. A 89, 033847 (2014).
145. L. Liu, N. Milas, A. H. C. Mukai, X. R. Resende, and F. H. de Sá, J. Synchrotron Radiat. 21, 904 (2014).
146. Z. Matěj, R. Mokso, K. Larsson, V. Hardion, and D. Spruce, Adv. Struct. Chem. Imaging 2, 16 (2016).
147. P. Raimondi, Synchrotron Radiat. News 29, 8 (2016).
148. F. Pfeiffer, J. Herzen, M. Willner, M. Chabior, S. Auweter, M. Reiser, and F. Bamberg, Z. Med. Phys. 23, 176 (2013).
149. D. Pfeiffer, F. Pfeiffer, and E. Rummeny, Molecular Imaging in Oncology (Springer, 2020), p. 3.
150. P. M. Gignac, N. J. Kley, J. A. Clarke, M. W. Colbert, A. C. Morhardt, D. Cerio, I. N. Cost, P. G. Cox, J. D. Daza, C. M. Early, M. S. Echols, R. M. Henkelman, A. N. Herdina, C. M. Holliday, Z. Li, K. Mahlow, S. Merchant, J. Müller, C. P. Orsbon, D. J. Paluh, M. L. Thies, H. P. Tsai, and L. M. Witmer, J. Anat. 228, 889 (2016).
151. J. Albers, S. Pacilé, M. A. Markus, M. Wiart, G. Vande Velde, G. Tromba, and C. Dullin, Mol. Imaging Biol. 20, 732 (2018).
152. M. Bartels, M. Krenkel, P. Cloetens, W. Möbius, and T. Salditt, J. Struct. Biol. 192, 561 (2015).
153. S. Flenner, M. Storm, A. Kubec, E. Longo, F. Döring, D. M. Pelt, C. David, M. Müller, and I. Greving, J. Synchrotron Radiat. 27, 1339 (2020).
154. N. L. Archilha, G. R. Costa, G. R. B. Ferreira, G. Moreno, A. S. Rocha, B. C. Meyer, A. C. Pinto, E. X. S. Miqueles, M. B. Cardoso, and H. Westfahl, Jr., J. Phys.: Conf. Ser. 2380, 012123 (2022).
155. A. T. Kuan, J. S. Phelps, L. A. Thomas, T. M. Nguyen, J. Han, C. L. Chen, A. W. Azevedo, J. C. Tuthill, J. Funke, P. Cloetens, A. Pacureanu, and W. C. A. Lee, Nat. Neurosci. 23, 1637 (2020).
156. R. Mizutani, R. Saiga, A. Takeuchi, K. Uesugi, Y. Terada, Y. Suzuki, V. De Andrade, F. De Carlo, S. Takekoshi, C. Inomoto, N. Nakamura, I. Kushima, S. Iritani, N. Ozaki, S. Ide, K. Ikeda, K. Oshima, M. Itokawa, and M. Arai, Transl. Psychiatry 9, 85 (2019).
157. K. Mader, F. Marone, C. Hintermüller, G. Mikuljan, A. Isenegger, and M. Stampanoni, J. Synchrotron Radiat. 18, 117 (2010).
158. See https://www.psi.ch/en/media/our-research/x-ray-microscopy-with-1000-tomograms-per-second for Paul Scherrer Institut, X-ray microscopy with 1000 tomograms per second (2021) (accessed 9 September 2021).
159. F. García-Moreno, P. H. Kamm, T. R. Neu, F. Bülk, R. Mokso, C. M. Schlepütz, M. Stampanoni, and J. Banhart, Nat. Commun. 10, 3762 (2019).
160. F. García-Moreno, P. H. Kamm, T. R. Neu, F. Bülk, M. A. Noack, M. Wegener, N. von der Eltz, C. M. Schlepütz, M. Stampanoni, and J. Banhart, Adv. Mater. 33, 2104659 (2021).
161. F. Martoïa, T. Cochereau, P. J. J. Dumont, L. Orgéas, M. Terrien, and M. N. Belgacem, Mater. Des. 104, 376 (2016).
162. M. Polacci, F. Arzilli, G. La Spina, N. Le Gall, B. Cai, M. E. Hartley, D. Di Genova, N. T. Vo, S. Nonni, R. C. Atwood, E. W. Llewellin, P. D. Lee, and M. R. Burton, Sci. Rep. 8, 8377 (2018).
163. L. Huang, P. Baud, B. Cordonnier, F. Renard, and L. Liu, Earth Planet. Sci. Lett. 528, 115831 (2019).
164. A. Gupta, R. S. Crum, C. Zhai, K. T. Ramesh, and R. C. Hurley, J. Appl. Phys. 129, 225902 (2021).
165. E. Boigné, N. Bennett, A. Wang, and M. Ihme, in 12th U.S. National Combustion Meeting (2021), p. 1.
166. C. Petroselli, K. A. Williams, A. Ghosh, D. McKay Fletcher, S. A. Ruiz, T. Gerheim Souza Dias, C. P. Scotson, and T. Roose, Soil Sci. Soc. Am. J. 85, 172 (2021).
167. T. Pak, N. L. Archilha, I. F. Mantovani, A. C. Moreira, and I. B. Butler, Sci. Data 6, 190004 (2019).
168. T. Pak, L. F. de L. Luz, T. Tosco, G. S. R. Costa, P. R. R. Rosa, and N. L. Archilha, Proc. Natl. Acad. Sci. U. S. A. 117, 13366 (2020).
169. A. Scanziani, K. Singh, H. Menke, B. Bijeljic, and M. J. Blunt, Appl. Energy 259, 114136 (2020).
170. T. Pak, I. B. Butler, S. Geiger, R. Van Dijke, Z. Jiang, S. Elphick, and K. Sorbie, in SPE Reservoir Characterisation and Simulation Conference and Exhibition (Society of Petroleum Engineers, 2013), Vol. 1, p. 595.
171. T. Saif, Q. Lin, K. Singh, B. Bijeljic, and M. J. Blunt, Geophys. Res. Lett. 43, 6799, https://doi.org/10.1002/2016GL069279 (2016).
172. K. Singh, H. Menke, M. Andrew, C. Rau, B. Bijeljic, and M. J. Blunt, Sci. Data 5, 180265 (2018).
173. S. F. Islam, L. Mancini, R. V. Sundara, S. Whitehouse, S. Palzer, M. J. Hounslow, and A. D. Salman, Chem. Eng. Res. Des. 117, 756 (2016).
174. T. dos Santos Rolo, A. Ershov, T. van de Kamp, and T. Baumbach, Proc. Natl. Acad. Sci. U. S. A. 111, 3921 (2014).
175. T. Tsukube, K. Yokawa, Y. Okita, K. Ataka, M. Hoshino, and N. Yagi, Eur. Heart J. 40, ehz748.0534 (2019).
176. M. Holbrook, D. P. Clark, and C. T. Badea, Phys. Med. Biol. 63, 025009 (2018).
177. M. Peña Fernández, E. Dall'Ara, A. J. Bodey, R. Parwani, A. H. Barber, G. W. Blunn, and G. Tozzi, ACS Biomater. Sci. Eng. 5, 2543 (2019).
178. D. Wangpraseurt, C. Wentzel, S. L. Jacques, M. Wagner, and M. Kühl, J. R. Soc. Interface 14, 20161003 (2017).
179. S. A. Ghandhi, L. B. Smilenov, C. D. Elliston, M. Chowdhury, and S. A. Amundson, BMC Med. Genomics 8, 22 (2015).
180. D. Olofsson, L. Cheng, R. B. Fernández, M. Płódowska, M. L. Riego, P. Akuwudike, H. Lisowska, L. Lundholm, and A. Wojcik, Radiat. Environ. Biophys. 59, 451 (2020).
181. S. Chevalier, J. Lee, N. Ge, R. Yip, P. Antonacci, Y. Tabuchi, T. Kotaka, and A. Bazylak, Electrochim. Acta 210, 792 (2016).
182. T. Uruga, M. Tada, O. Sekizawa, Y. Takagi, T. Yokoyama, and Y. Iwasawa, Chem. Rec. 19, 1444 (2019).
183. A. Kato, S. Kato, S. Yamaguchi, T. Suzuki, and Y. Nagai, J. Power Sources 521, 230951 (2022).
184. S. Yamaguchi, S. Kato, A. Kato, Y. Matsuoka, Y. Nagai, and T. Suzuki, Electrochem. Commun. 128, 107059 (2021).
185. N. L. Archilha, F. P. O'Dowd, G. Moreno, and E. X. Miqueles, SEG Tech. Program Expanded Abstr. 35, 3241 (2016).
186. M. Loog and B. Van Ginneken, IEEE Trans. Med. Imaging 25, 602 (2006).
187. O. Furat, M. Wang, M. Neumann, L. Petrich, M. Weber, C. E. Krill, and V. Schmidt, Front. Mater. 6, 145 (2019).
188. S. B. Lo, S. A. Lou, J. Lin, M. T. Freedman, M. V. Chien, and S. K. Mun, IEEE Trans. Med. Imaging 14, 711 (1995).
189. G. R. Schleder, A. C. Padilha, C. M. Acosta, M. Costa, and A. Fazzio, J. Phys. Mater. 2, 032001 (2019).
190. S. Berg, N. Saxena, M. Shaik, and C. Pradhan, Leading Edge 37, 412 (2018).
191. A. A. Hendriksen, M. Bührer, L. Leone, M. Merlini, N. Vigano, D. M. Pelt, F. Marone, M. di Michiel, and K. J. Batenburg, Sci. Rep. 11, 11895 (2021).
192. E. Cha, H. Chung, J. Jang, J. Lee, E. Lee, and J. C. Ye, ACS Nano 16, 10314 (2022).
193. K. Suzuki, F. Li, S. Sone, and K. Doi, IEEE Trans. Med. Imaging 24, 1138 (2005).
194. B. Sahiner, H. Chan, N. Petrick, D. Wei, M. A. Helvie, D. D. Adler, and M. M. Goodsitt, IEEE Trans. Med. Imaging 15, 598 (1996).
195. B. Midtvedt, S. Helgadottir, A. Argun, J. Pineda, D. Midtvedt, and G. Volpe, Appl. Phys. Rev. 8, 011310 (2021).
197.
198. V. Jakkula, Tutorial on Support Vector Machine (SVM) (Washington State University, 2006), p. 37.
199. W. S. Haddad and J. E. Trebes, Proc. SPIE 3149, 222–231 (1997).
200. Y. Zhao and G. Karypis, in Proceedings of the Eleventh International Conference on Information and Knowledge Management (ACM, 2002), pp. 515–524.
201. P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, J. Mach. Learn. Res. 11, 3371 (2010).
202. C. Wang, U. Steiner, and A. Sepe, Small 14, 1802291 (2018).
203. I. Arel, D. C. Rose, and T. P. Karnowski, IEEE Comput. Intell. Mag. 5, 13 (2010).
204. L. Bottou, in Proceedings of COMPSTAT'2010, edited by Y. Lechevallier and G. Saporta (Physica-Verlag, Heidelberg, 2010), pp. 177–186.
205. S. Jadon, in 2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (IEEE, 2020), pp. 1–7.
206. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Nature 323, 533 (1986).
207. T. Takase, S. Oyama, and M. Kurihara, Neural Networks 101, 68 (2018).
208. R. Raina, A. Madhavan, and A. Y. Ng, in Proceedings of the 26th International Conference on Machine Learning (ACM, 2009), pp. 873–880.
209. G. R. Schleder, A. C. M. Padilha, A. Reily Rocha, G. M. Dalpian, and A. Fazzio, J. Chem. Inf. Model. 60, 452 (2020).
210. X. Glorot, A. Bordes, and Y. Bengio, in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (PMLR, 2011), pp. 315–323.
211. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, J. Mach. Learn. Res. 15, 1929 (2014).
212. P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, in 2nd International Conference on Learning Representations, 2014.
213. S. Arridge, P. Maass, O. Öktem, and C.-B. Schönlieb, Acta Numer. 28, 1 (2019).
214. L. von Chamier, R. F. Laine, and R. Henriques, Biochem. Soc. Trans. 47, 1029 (2019).
215. J. Leuschner, M. Schmidt, P. S. Ganguly, V. Andriiashen, S. B. Coban, A. Denker, D. Bauer, A. Hadjifaradji, K. J. Batenburg, P. Maass, and M. van Eijnatten, J. Imaging 7, 44 (2021).
216. G. Wang, J. C. Ye, and B. De Man, Nat. Mach. Intell. 2, 737 (2020).
217. J. Liu, J. Ma, Y. Zhang, Y. Chen, J. Yang, H. Shu, L. Luo, G. Coatrieux, W. Yang, Q. Feng, and W. Chen, IEEE Trans. Med. Imaging 36, 2499 (2017).
218. X. Yang, V. De Andrade, W. Scullin, E. L. Dyer, N. Kasthuri, F. De Carlo, and D. Gürsoy, Sci. Rep. 8, 2575 (2018).
219. L. Gjesteby, Q. Yang, Y. Xi, H. Shan, B. Claus, Y. Jin, and G. Wang, Proc. SPIE 10391, 103910W (2017).
220. Y. S. Han, J. Yoo, and J. C. Ye, arXiv:1611.06391 (2016).
221. T. V. Spina, G. J. Q. Vasconcelos, H. M. Gonçalves, G. C. Libel, H. Pedrini, T. J. Carvalho, and N. L. Archilha, Microsc. Microanal. 24, 90 (2018).
222. T. Stan, Z. T. Thompson, and P. W. Voorhees, Mater. Charact. 160, 110119 (2020).
223. F. Lu, F. Wu, P. Hu, Z. Peng, and D. Kong, Int. J. Comput. Assist. Radiol. Surg. 12, 171 (2017).
224. H. Wu, W. Z. Fang, Q. Kang, W. Q. Tao, and R. Qiao, Sci. Rep. 9, 20387 (2019).
225. Y. Zhang, Y. Chu, and H. Yu, Proc. SPIE 10391, 103910V (2017).
226. H. S. Park, S. M. Lee, H. P. Kim, and J. K. Seo, arXiv:1708.00244 (2017).
227. L. Gjesteby, Q. Yang, Y. Xi, B. Claus, Y. Jin, B. De Man, and G. Wang, in 14th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine (Fully3D, 2017), pp. 611–614.
228. H. Chen, Y. Zhang, W. Zhang, P. Liao, J. Zhou, and G. Wang, Biomed. Opt. Express 8, 679 (2017).
229. L. Gjesteby, Q. Yang, Y. Xi, Y. Zhou, J. Zhang, and G. Wang, Proc. SPIE 10132, 101322W (2017).
230. Y. Liu and Y. Zhang, Neurocomputing 284, 80 (2018).
231. H. Chen, Y. Zhang, Y. Chen, J. Zhang, W. Zhang, H. Sun, Y. Lv, P. Liao, J. Zhou, and G. Wang, IEEE Trans. Med. Imaging 37, 1333 (2018).
232. W. Du, H. Chen, Z. Wu, H. Sun, P. Liao, and Y. Zhang, PLoS One 12, e0190069 (2017).
233. X. Yang, F. De Carlo, C. Phatak, and D. Gürsoy, J. Synchrotron Radiat. 24, 469 (2017).
234. A. Kornilov, I. Safonov, and I. Yakimchuk, in 2020 International Conference Information Technology and Nanotechnology (IEEE, 2020), pp. 1–6.
235. O. Ronneberger, P. Fischer, and T. Brox, in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III 18 (Springer, 2015), pp. 234–241.
236. A. Guo, L. Fang, M. Qi, and S. Li, IEEE Trans. Instrum. Meas. 70, 5000712 (2021).
237. J. M. Wolterink, T. Leiner, M. A. Viergever, and I. Isgum, IEEE Trans. Med. Imaging 36, 2536 (2017).
238. S. Kazeminia, C. Baur, A. Kuijper, B. van Ginneken, N. Navab, S. Albarqouni, and A. Mukhopadhyay, Artif. Intell. Med. 109, 101938 (2020).
239. V. Sorin, Y. Barash, E. Konen, and E. Klang, Acad. Radiol. 27, 1175 (2020).
240. K. Armanious, C. Jiang, M. Fischer, T. Küstner, T. Hepp, K. Nikolaou, S. Gatidis, and B. Yang, Comput. Med. Imaging Graph. 79, 101684 (2020).
241. S. J. Ihle, A. M. Reichmuth, S. Girardin, H. Han, F. Stauffer, A. Bonnin, M. Stampanoni, K. Pattisapu, J. Vörös, and C. Forró, Nat. Mach. Intell. 1, 461 (2019).
242. X. Yang, M. Kahnt, D. Brückner, A. Schropp, Y. Fam, J. Becher, J.-D. Grunwaldt, T. L. Sheppard, and C. G. Schroer, J. Synchrotron Radiat. 27, 486 (2020).
243. T. Fu, K. Zhang, Y. Wang, J. Li, J. Zhang, C. Yao, Q. He, S. Wang, W. Huang, Q. Yuan, P. Pianetta, and Y. Liu, J. Synchrotron Radiat. 28, 1909 (2021).
244. Chaoyi, Y. Duan, X. Tao, and J. Lu, IEEE Access 7, 43369 (2019).
245. Y. Xu, B. Yan, J. Chen, L. Zeng, and L. Li, J. X-Ray Sci. Technol. 26, 361 (2018).
246. A. Ben-Cohen, I. Diamant, E. Klang, M. Amitai, and H. Greenspan, in Deep Learning and Data Labeling for Medical Applications (Springer, 2016), pp. 77–85.
247. F. Shi, J. Wang, J. Shi, Z. Wu, Q. Wang, Z. Tang, K. He, Y. Shi, and D. Shen, IEEE Rev. Biomed. Eng. 14, 4 (2021).
248. I. M. Baltruschat, H. Ćwieka, D. Krüger, B. Zeller-Plumhoff, F. Schlünzen, R. Willumeit-Römer, J. Moosmann, and P. Heuser, Sci. Rep. 11, 24237 (2021).
249. K. He, X. Zhang, S. Ren, and J. Sun, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.
250. Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (Springer, 2018), pp. 3–11.
251. F. Milletari, N. Navab, and S.-A. Ahmadi, in 2016 Fourth International Conference on 3D Vision (IEEE, 2016), pp. 565–571.
252. N. Siddique, S. Paheding, C. P. Elkin, and V. Devabhaktuni, IEEE Access 9, 82031 (2021).
253. M. H. Hesamian, W. Jia, X. He, and P. Kennedy, J. Digit. Imaging 32, 582 (2019).
254. H. Daumé III and D. Marcu, J. Artif. Intell. Res. 26, 101 (2006).
255. A. Torralba and A. A. Efros, in CVPR 2011 (IEEE, 2011), pp. 1521–1528.
256. S. Asgari Taghanaki, K. Abhishek, J. P. Cohen, J. Cohen-Adad, and G. Hamarneh, Artif. Intell. Rev. 54, 137 (2021).
257. S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira, Advances in Neural Information Processing Systems 19 (NIPS, 2006).
258. R. Volpi and V. Murino, in Proceedings of the IEEE/CVF International Conference on Computer Vision (IEEE, 2019), pp. 7980–7989.
259. H. Wang, Z. Huang, X. Wu, and E. P. Xing, arXiv:2206.01909 (2022).
260. L. Perez and J. Wang, arXiv:1712.04621 (2017).
261. C. Shorten and T. M. Khoshgoftaar, J. Big Data 6, 60 (2019).
262. B. Abdollahi, N. Tomita, and S. Hassanpour, Deep Learners and Deep Learner Descriptors for Medical Application (Springer, 2020), pp. 167–180.
263. L. Taylor and G. Nitschke, in 2018 IEEE Symposium Series on Computational Intelligence (IEEE, 2018), pp. 1542–1547.
264. M. Wang and W. Deng, Neurocomputing 312, 135 (2018).
265. J. Yang, N. C. Dvornek, F. Zhang, J. Chapiro, M. Lin, and J. S. Duncan, in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2019), pp. 255–263.
266. C. S. Perone and J. Cohen-Adad, J. Med. Artif. Intell. 2, 1 (2019).
267. P. Luc, C. Couprie, S. Chintala, and J. Verbeek, arXiv:1611.08408 (2016).
268. M. G. Haberl, C. Churas, L. Tindall, D. Boassa, S. Phan, E. A. Bushong, M. Madany, R. Akay, T. J. Deerinck, S. T. Peltier, and M. H. Ellisman, Nat. Methods 15, 677 (2018).
269. Q. Zhou, Y. Wang, Y. Fan, X. Wu, S. Zhang, B. Kang, and L. J. Latecki, Appl. Soft Comput. 96, 106682 (2020).
270. B. Wu, F. Iandola, P. H. Jin, and K. Keutzer, arXiv:1612.01051 (2016).
271. I. Alonso, L. Riazuelo, and A. C. Murillo, IEEE Trans. Robot. 36, 1340 (2020).
272. M. Bührer, H. Xu, A. A. Hendriksen, F. N. Büchi, J. Eller, M. Stampanoni, and F. Marone, Sci. Rep. 11, 24174 (2021).
273. K. Neshatpour, A. Koohi, F. Farahmand, R. Joshi, S. Rafatirad, A. Sasan, and H. Homayoun, in 2016 IEEE International Symposium on Circuits and Systems (IEEE, 2016), pp. 1134–1137.
274. S. Yaman, B. Karakaya, and Y. Erol, Evol. Syst. 11, 1 (2022).
275. B. Fan, Y. Chen, J. Qu, Y. Chai, C. Xiao, and P. Huang, in 2019 IEEE International Conference on Image Processing (IEEE, 2019), pp. 3920–3924.
276. M. Tan and Q. Le, in International Conference on Machine Learning (PMLR, 2019), pp. 6105–6114.
277. C. Feichtenhofer, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2020), pp. 203–213.
278. G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 4700–4708.
279. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, arXiv:1810.04805 (2018).
280. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, Adv. Neural Inf. Process. Syst. 30, 1–15 (2017).
281. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, and S. Gelly, arXiv:2010.11929 (2020).
282. G. W. Lindsay, Front. Comput. Neurosci. 14, 29 (2020).
283. D. Bahdanau, K. H. Cho, and Y. Bengio, in 3rd International Conference on Learning Representations (Computational and Biological Learning Society, 2015), pp. 1–15.
284. K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio, in International Conference on Machine Learning (PMLR, 2015), pp. 2048–2057.
285. G. Li, L. Li, and J. Zhang, IEEE Signal Process. Lett. 29, 46 (2021).
286. X. Liu, F. Zhang, Z. Hou, L. Mian, Z. Wang, J. Zhang, and J. Tang, IEEE Trans. Knowl. Data Eng. 35, 1 (2021).
287. A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, and R. Benjamins, Inf. Fusion 58, 82 (2020).
288. C. Molnar, Interpretable Machine Learning (Leanpub, 2020).
289. L. Jing and Y. Tian, IEEE Trans. Pattern Anal. Mach. Intell. 43, 4037 (2020).