We describe a method to generate 3D architected materials from mathematically parameterized, human-readable word input, offering a direct materialization of language. Our method uses a combination of a vector quantized generative adversarial network and contrastive language-image pre-training neural networks to generate images, which are translated into 3D architectures that are then 3D printed using fused deposition modeling into materials of varying rigidity. The novel materials are further analyzed in a metallic realization as an aluminum-based nano-architecture, using molecular dynamics modeling, thereby providing mechanistic insights into the physical behavior of the material under extreme compressive loading. This work offers a novel way to design, understand, and manufacture 3D architected materials designed from mathematically parameterized language input. Our work features, at its core, a generally applicable algorithm that transforms any 2D image data into hierarchical, fully tileable, periodic architected materials. The method can have broader applications beyond language-based materials design and can open other avenues for the analysis and manufacturing of architected materials, including microstructure gradients through parametric modeling. As an emerging field, language-based design approaches can have a profound impact on end-to-end design environments and drive a new understanding of physical phenomena that intersect directly with human language and creativity. They may also be used to exploit information mined from diverse and complex databases and data sources.

Directed materials design is an exciting frontier for science and engineering.1–5 The quest to develop new tools that intersect directly with human cues such as language, however, has remained challenging and largely limited to database mining approaches. While these are powerful, we envision new material models and design approaches that directly engage with human forms of communication. With the emergence of deep learning and the use of the generative adversarial network (GAN), transformer methods, and natural language processing (NLP), we can now generate quantitative and rigorous models toward that goal6 and provide a computationally sound realization of earlier category theoretic approaches.7–10 

Reflecting these developments, neural networks have been used in recent studies to develop novel classes of materials, such as materials developed from fire or musical expression,10–13 offering a systematic approach to translate information across manifestations.14–19 More recently, in earlier work,20 we reported a general concept that uses a combination of the Vector Quantized Generative Adversarial Network (VQGAN)—a variation of the Vector Quantized Variational Autoencoder (VQ-VAE) model21—and Contrastive Language-Image Pre-training (CLIP)22 to realize a word-to-matter paradigm. However, that earlier work was limited to quasi-2D materials. Here, we expand the concept to 3D architected materials and provide a detailed mechanistic analysis of failure under extreme mechanical deformation using molecular dynamics (MD) simulation.

In recent years, massive foundational neural network models have been built,23 thanks to the broad availability of computational resources and the development of “attention” mechanisms for neural networks. The “attention” mechanism, as the name suggests, originates from the idea of human attention, which directs neural networks to focus on key components of the input data.24,25 Such attention to certain details of a process is exquisitely suited to describing physical phenomena and, more broadly, design approaches, as it reflects key factors, such as those displayed in singular mathematical events or rare events.

The transformer is a neural network model built on the attention mechanism,26 and it has shown great success in natural language processing (NLP) as well as in computer vision (CV).27 Following the transformer model concept, numerous large pretrained DL models have been developed, including, but not limited to, “Bidirectional Encoder Representations from Transformers” (BERT)28 and “Generative Pre-trained Transformer” (GPT)29 for NLP tasks and “DEtection TRansformer” (DETR)30 and “Vision Transformer” (ViT)31 for CV tasks. Those pretrained models, due to their outstanding performance, benefit both direct applications and adapted domain learning via transfer approaches.

Within the context of applications in the physical sciences and specifically materials research, the transformer concept has been applied to generate molecular fingerprints,32 predict organic chemical reactions,33,34 and design de novo drugs.35 In our previous work, by combining the attention-based CLIP and VQGAN models, we showed how we can generate images of 2D architected materials that reflect text prompt driven designs.20 The “words-to-matter” approach reported in that work opened up a new perspective of materials design enabled by both generative networks and pretrained foundation models and illustrated how they can be generalized toward broader applications in materials science.

With the rapid development of ML approaches, and especially generative models, the materials-by-design paradigm has evolved using generative ideas proposed by artificial intelligence (AI). Additive manufacturing (AM) enables the realization of those conceptual materials-design ideas, bypassing traditional manufacturing techniques through a bottom-up approach. The combination of AM and generative design has proved successful in soft robot optimization,36 composite device development,37 and bio-inspired design.38 However, earlier studies have not examined a generative design framework for architected 3D materials using human-readable, parameterizable text input. Such input, especially when the text is systematically varied through a mix of mathematical and language-based parameterization, is a straightforward and natural way for humans to express ideas for materials design.

Indeed, materials-by-design has long been motivated by human ideas that are often initialized in written text or artisan work, especially considering its historical relevance and context over thousands of years of civilization.17,39 Words and sentences are our species’ natural means of communication to spread and describe our ideas. However, the gap between the original thoughts described in human language and the final materialization is not easy to bridge. The text-to-material paradigm usually involves a modeling process with the assistance of experimental manufacturing techniques, computational simulations, and/or mathematical models. However, the possibility of utilizing a deep-learning-based approach for this paradigm remains little explored. Given the rapid advances of DL architectures in NLP and the development of massive generative neural nets, we are now able to link text to generative designs through an iterative process combining the approaches from these two fields.

Here, we propose an approach that combines transformer neural nets with generative models to enable a text-to-material translation for 3D architected materials featuring fully periodic, tileable unit cells. We extend the approach from 2D to 3D and develop an atomistic model based on the continuum structures using MD simulations, along with 3D printed geometries, which enables the modeling of generated designs across scales and yields insights into extreme mechanical deformation mechanisms via atomically precise models. Compared to the continuum-level modeling performed in our previous work,20 MD simulations can reveal atomic-level phenomena, such as dislocation motion, providing multiscale insights, including atomistic details, about the generated designs.

In this paper, we demonstrate the use of a transformer neural network in the design of architected 3D materials. 3D hierarchical architected materials find numerous applications in a variety of industries,40,41 ranging from healthcare to structural engineering, and offer significant advances to enable multifunctional properties at low weight.42–45

Serving as an outline of this work, Fig. 1 depicts a flowchart of the approach reported here, translating words—human readable descriptive text—toward 3D physical material designs. We proceed with various examples generated based on the approach, describe how we convert the predicted images into 3D models, and use multimaterial additive manufacturing to manufacture them. We then report experimental and computational analysis of mechanical properties to assess the viability of the designs generated.

FIG. 1.

(a) Flowchart of work reported here, translating human readable word-based descriptions of material representations into 3D material designs, which can be realized using additive manufacturing and further analyzed using experimental testing and/or modeling. (b) Introduction to transformer neural networks, enabling text-to-image translation at high resolution, used here to develop microstructures for de novo architected materials, following a generator-classifier pairing.


Figure 2 shows example images generated from various text inputs using mathematical parameterization of the word cues, provided via a variable X that assigns distinct weights to specific words. In this example, we use the text cue “hexagonal lattice|hollow circles,” while we systematically vary the weight of the two terms in the image generation using the algorithm reported in Ref. 22. This is achieved by adding the variable X to the text cue, as in: “hexagonal lattice X | hollow circles (1 − X),” where X is in the range [0, 1]. One can clearly see the variation from a focus on a “lattice” toward “round” objects as the weights are varied.
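As a minimal sketch, such a weighted text cue can be assembled programmatically. The exact prompt syntax depends on the VQGAN+CLIP implementation used, so the `term:weight` format below is only one common convention, and `parameterize_prompt` is a hypothetical helper, not code from the reported pipeline:

```python
def parameterize_prompt(terms, x):
    """Build a weighted text cue of the form 'term1:w1 | term2:w2'.

    `terms` is a pair of descriptive phrases, and `x` in [0, 1] sets the
    weight of the first term; the second term receives (1 - x).
    """
    if not 0.0 <= x <= 1.0:
        raise ValueError("x must be in [0, 1]")
    first, second = terms
    return f"{first}:{x:.2f} | {second}:{1 - x:.2f}"

# Sweep X to generate a family of prompts, as in the grid of Fig. 2
prompts = [parameterize_prompt(("hexagonal lattice", "hollow circles"), x / 4)
           for x in range(5)]
```

Each prompt in the sweep would then be fed to the generator independently, producing one image per weight setting.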

FIG. 2.

Example images generated from various text inputs, using mathematical parameterization of the word cues provided using a variable X that offers distinct weight to specific words. The variable X ranges from 1 (top left) to 0 (bottom right). One can clearly see the variation from a focus on a “lattice” toward “round” objects as the weights are varied.


Figure 3 depicts further processing of one of the generated images, showing how it is processed into a periodic, 3D structure. Figure 3(a) shows how periodicity is achieved in the xy plane, whereby the image is ultimately repeated four times through a series of mirroring operations. Figure 3(b) depicts the image picked for transformation into a 3D architected material, following the concept described in panel (a). Figure 3(c) shows the two-step transformation of the image into a pixel intensity map (left). The intensity map is then used for generating the 3D model, as described in Sec. IV.
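The mirroring step of Fig. 3(a) can be sketched in a few lines. Here the image is represented as a plain list of rows, and a 2 × 2 tiling via reflection is one simple way (among the mirroring operations described above) to guarantee that opposite edges of the result match:

```python
def make_periodic_tile(img):
    """Mirror a 2D grayscale image (a list of rows) into a 2x2 periodic tile.

    The image is reflected left-right and then top-bottom, so that the
    opposite edges of the result match, making the tile repeatable in
    both the x- and y-directions.
    """
    # Mirror each row to double the width ...
    top = [row + row[::-1] for row in img]
    # ... then mirror the rows to double the height.
    return top + top[::-1]

img = [[1, 2],
       [3, 4]]
tile = make_periodic_tile(img)
# The tile's first and last rows are identical, as are its first and
# last columns, so adjacent copies join continuously.
```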

FIG. 3.

Illustration of image transformations applied to realize a periodic unit cell in the x-y plane. Panel (a) shows the method to augment a single image, the unit in blue, to form a periodic base 2D image. Panel (b) shows the periodic image after applying this technique to expand one of the generated images (as shown in Fig. 2) into a periodic base image. Panel (c) shows the two-step transformation to process the base image into a pixel intensity map. The first transformation (right) smooths the noise in the input using visualization tools. The second transformation (left) directly calculates the color intensity at each pixel. Finally, the value at each pixel is taken as a contour of a 3D structure to generate a printable model.


Figure 4 displays the process by which a 2D image (i.e., the xy plane) is used to construct a 3D representation by generating a stack of images (z-direction) for volumetric reconstruction. Each layer in the stack is represented by a different threshold value of pixel intensity, where the brightest spots achieve the largest height and the darkest spots the lowest height. The construction is pursued in both positive and negative z-directions, leading to a symmetric structure that can be periodically stacked in the z-direction and repeated multiple times, yielding a fully periodic and hence tileable architected material. Since the xy plane is already periodic via the process described in Fig. 3, the resulting architecture represents a fully periodic unit cell. Threshold values for the minimum and maximum intensities are chosen to achieve a continuous 3D model, such that material is present when transitioning between multiple repeats.
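A minimal sketch of this layer construction is shown below, assuming a normalized intensity map and a linear threshold schedule between hypothetical minimum and maximum cutoffs `lo` and `hi` (the actual cutoff values in the reported pipeline may differ):

```python
def intensity_to_layers(intensity, n_layers, lo=0.2, hi=0.9):
    """Slice a 2D intensity map (values in [0, 1]) into binary layers.

    Layer k keeps the pixels whose intensity exceeds a threshold that
    grows linearly from `lo` to `hi`, so bright pixels persist through
    more layers, i.e., reach a greater height. The stack is then
    mirrored so the structure is symmetric, and periodic, in z.
    """
    layers = []
    for k in range(n_layers):
        t = lo + (hi - lo) * k / max(n_layers - 1, 1)
        layers.append([[1 if px > t else 0 for px in row] for row in intensity])
    return layers[::-1] + layers  # mirror: negative-z then positive-z half

intensity = [[0.1, 0.5],
             [0.95, 0.3]]
stack = intensity_to_layers(intensity, n_layers=3)
# The brightest pixel (0.95) survives every threshold, so it spans the
# full height; dimmer pixels drop out of the outer (sparser) layers.
```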

FIG. 4.

In the algorithm developed here, a 2D image (x–y plane) is used to construct a 3D representation by generating a stack of images (z-direction) for volumetric reconstruction. Each layer in the stack is represented by a different threshold value of pixel intensity, where the brightest spots achieve the largest height and the darkest spots the lowest height. The construction is pursued in both positive and negative z-directions, leading to a symmetric structure that can be periodically stacked in the z-direction and repeated multiple times. Since the x–y plane is already periodic as shown in Fig. 3, the resulting architecture represents a fully periodic unit cell. Threshold values for the minimum and maximum intensities are used to form a continuous 3D model. Panel (a) shows the image used for this 3D material reconstruction, showing the original image (left) and the pixel intensity map (right). Panel (b) depicts how these continually varying thresholds collect different contours to generate layers for the architected material. The whole process is repeated four times along the z-direction to embody four copies of the x–y plane for periodicity. More specifically, a higher threshold rules out low-intensity regions of the map and hence results in a sparser image. Movie M1 shows a traverse through the layers in the z-direction.


Figure 4(a) shows the image used for this 3D material reconstruction, showing the original image (left) and the pixel intensity map (right) (the detailed process by which this was obtained is described in Fig. 3). Figure 4(b) depicts how a continually varying threshold is used to generate layers for the architected material. The process is repeated four times in the z-direction to match the four copies in the xy direction needed to achieve full periodicity, as explained in Fig. 3. The supplementary material, Movie M1, shows a traverse through the layers in the z-direction. These layers are then processed into a 3D mesh representation, as shown at the bottom of Fig. 4(b).

Figure 5 shows the additive manufacturing result for the architected material described in Fig. 4. The architected material is printed using a multimaterial printer, where PLA is used for the material phase and PVA as a water-soluble support material (to realize the complex 3D structures with internal holes). A gyroid infill is used in this example to illustrate the capacity to yield hierarchical designs. The final material is depicted in the lower right panel of the image. Figure 6 shows various snapshots of the resulting architected material from different angles. The bottom image shows a macro-view of the printed material, revealing the individual printed layers (the approximate length-scale of each layer is on the order of tens of micrometers). Figure 7 illustrates prints generated from soft materials, printed using black TPU filament. Using such flexible, soft material, novel architected materials can be fabricated that allow for large deformation. Figure 7 (left) depicts a full 3D architecture, showing one periodic layer in the z-direction (as opposed to four layers as in Fig. 6). Figure 7 (right) shows examples of deformation of a soft architected material, featuring a hollow architecture as shown in the top visual. Such architected materials could find applications in the biomedical field or soft robotics, for instance.

FIG. 5.

Additive manufacturing result of the hierarchical architected material described in Fig. 2. The architected material is printed using a multimaterial printer, where PLA is used for the material phase and PVA as a water-soluble support material. A gyroid infill is used in this example to illustrate the capacity to yield hierarchical designs, offering multiple-scale structural realization from the smallest to the largest scale. The final material is depicted in the lower right panel of the image.

FIG. 6.

Several snapshots of the resulting architected material printed using white PLA filament, from different angles [panels (a) and (b)]. The bottom image shows a macro-view of the printed material [panel (c)], revealing the individual printed layers (approximate length-scale of each layer is on the order of tens of micrometers).

FIG. 7.

Several snapshots of the resulting architected material printed using black TPU filament. Using such flexible, soft material, novel architected materials can be fabricated that allow for large deformation. Left: Full 3D architecture, showing one periodic layer in the z-direction (as opposed to four as in Fig. 6). Right: Examples of deformation of a soft architected material, featuring a hollow architecture as shown in the top visual.


The resulting 3D model can not only be used to generate physical samples using 3D printing but can also serve as the basis for atomistic modeling, for instance, by simulating how an architected material with such a geometry, but made of metal (e.g., aluminum), would behave. Figure 8 shows an atomistic version of the architected material, modeled using an embedded-atom method (EAM) potential.46 The atomistic structure is generated from a perfect FCC aluminum crystal based on the continuum-level architected material, with atoms removed in the region of voids (details in Sec. IV: Atomistic model of architected materials). We aim to investigate the atomic-level behavior of the generated structure using molecular dynamics (MD) simulations. A compression test at a high strain rate is performed on the atomistic model [Fig. 8(a)]. To better visualize the atomic porous structure, surface meshes are constructed based on atomic positions.
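The carving of the atomistic model from the continuum geometry can be illustrated as follows. This is only a geometric sketch (the actual model uses the EAM potential within an MD code), and the lattice constant and the spherical void predicate are chosen purely for illustration:

```python
def carve_fcc(n_cells, a, is_void):
    """Generate FCC lattice sites and drop those inside void regions.

    `n_cells` is the number of cubic unit cells per direction, `a` the
    lattice constant, and `is_void(x, y, z)` a predicate describing the
    continuum geometry; atoms for which it returns True are removed.
    """
    # Four-atom basis of the conventional FCC unit cell (fractional coords).
    basis = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0),
             (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)]
    atoms = []
    for i in range(n_cells):
        for j in range(n_cells):
            for k in range(n_cells):
                for bx, by, bz in basis:
                    x, y, z = (i + bx) * a, (j + by) * a, (k + bz) * a
                    if not is_void(x, y, z):
                        atoms.append((x, y, z))
    return atoms

# Illustration: a spherical pore at the center of a 4x4x4-cell Al block.
a = 4.05  # approximate Al lattice constant in angstroms
L = 4 * a
pore = lambda x, y, z: (x - L/2)**2 + (y - L/2)**2 + (z - L/2)**2 < (L/4)**2
atoms = carve_fcc(4, a, pore)
```

In the reported workflow, the void predicate would come from the voxelized continuum architecture rather than an analytic sphere.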

FIG. 8.

Atomistic simulation results of an architected material realized based on aluminum as a prototype lightweight material. Panel (a) shows the boundary condition of the compression test using NEMD simulations. The surface mesh is constructed based on atomic positions to better visualize the porous structure. Panel (b) shows snapshots of NEMD simulations at different compressive strains. The absolute values of the compressive strain are used here. Panel (c) shows the strain–stress curve of the compression test. Both stress and strain values are plotted as positive values. Three stages typically observed in the compressive behavior of porous materials are manifest, labeled “linear elastic,” “plateau,” and “densification.”


Figure 8(b) displays the structural evolution of the atomic structure during compression. The corresponding strain–stress curve [Fig. 8(c)] shows the typical three-stage evolution of porous materials under a compression test. At small compressive strains (<3%), the structure is in the linear elastic region. When the strain is between 3% and around 45%, the stress increases slowly, showing a plateau region. Once the porous structure densifies beyond a threshold, a densification process leads to a significant increase in stress.47
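For instance, the slope of the linear elastic stage can be estimated by fitting a line to the small-strain portion of such a curve. The sketch below uses synthetic data and a hypothetical 3% cutoff, not the simulation results themselves:

```python
def elastic_modulus(strain, stress, strain_cut=0.03):
    """Estimate the elastic modulus via a least-squares line through the
    small-strain portion (strain < strain_cut) of a stress-strain curve."""
    pts = [(e, s) for e, s in zip(strain, stress) if e < strain_cut]
    n = len(pts)
    se = sum(e for e, _ in pts)
    ss = sum(s for _, s in pts)
    see = sum(e * e for e, _ in pts)
    ses = sum(e * s for e, s in pts)
    # Standard least-squares slope formula.
    return (n * ses - se * ss) / (n * see - se * se)

# Synthetic curve: linear up to 3% strain, then a flat plateau.
strain = [0.005 * i for i in range(20)]
stress = [70.0 * e if e < 0.03 else 2.1 for e in strain]
E = elastic_modulus(strain, stress)
```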

To analyze the compressive behavior, we compute local dislocation lines and lattice types,48 which are visualized in Fig. 9(a). The crystal structures are colored based on the local lattice type, and the dislocation lines are colored based on either the local character, classified into edge and screw dislocations, or the dislocation type. As the compressive strain increases, we observe increasing numbers of dislocations and shear bands in the structure. To quantify the structural evolution, we calculate the lengths of different types of dislocations and the number of particles of each local crystal type during compression. Figure 9(b) confirms the visual observation in Fig. 9(a), showing the overall increasing trend of dislocation lengths. Between 45% and 60% strain, the lengths of some dislocation types decrease because of densification. Based on Fig. 9(c), we find that the shear bands shown in Fig. 9(a) consist mostly of HCP lattices. More details about the evolution of dislocations and crystal structures are visualized in the supplementary material, Movie M2.

FIG. 9.

Mechanistic analysis of the compressive behavior of atomistic architected materials. Panel (a) shows snapshots of crystal structures and dislocation lines during NEMD simulations at different compressive strains. Dislocation lines and crystal structures are visualized in one-eighth of the whole structure, given the symmetry. The crystal structures are colored based on the local lattice type, and the dislocation lines are colored based on either the local character, classified into edge and screw dislocations, or the dislocation type. In the “crystal” images, gray, green, and red represent “other,” “FCC,” and “HCP” crystal types, respectively. In the “dislocation (local character)” images, red corresponds to screw dislocations and blue to edge dislocations. In the “dislocation (type)” images, red, dark blue, green, pink, yellow, and light blue represent “other,” “perfect,” “Shockley,” “stair-rod,” “Hirth,” and “Frank” dislocation types, respectively.48 Panel (b) shows the dislocation length evolution during the compression test. Different types of dislocations are plotted separately. Panel (c) shows the particle count evolution during the compression test. Particles are counted based on the local crystal type.


The work reported here provides a path to translate words or language input more broadly into 3D architected material designs, based on a generally applicable algorithm that transforms 2D image data into hierarchical architected materials. While a general algorithmic variation of the produced image can be accomplished (see Fig. 2), the method also provides us with the capacity to translate a single 2D image into an architected material, as described in Figs. 3 and 4. Future work could explore the use of gradients, where the constructed architecture can smoothly vary in spatial directions x, y, and/or z. This offers yet another way to develop material functionality.
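One simple way such a gradient could be introduced, sketched here as an assumption rather than part of the reported pipeline, is to let the layer threshold vary with position, so that the same intensity map yields a denser structure on one side than the other:

```python
def graded_layer(intensity, t_left, t_right):
    """Binary layer whose threshold varies linearly from `t_left` at the
    left edge to `t_right` at the right edge, producing a microstructure
    gradient along x."""
    w = len(intensity[0])
    out = []
    for row in intensity:
        out.append([1 if px > t_left + (t_right - t_left) * j / max(w - 1, 1)
                    else 0
                    for j, px in enumerate(row)])
    return out

# Uniform intensity, graded cutoff: material survives only on the left,
# where the threshold is lowest.
layer = graded_layer([[0.5, 0.5, 0.5]], 0.2, 0.8)
```

The same idea extends to the y- and z-directions by making the threshold a function of all three coordinates.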

The designs generated here were 3D printed (Figs. 5–7), from both stiff and soft materials, to demonstrate the generation of physical prototypes. In another example, we generated a 3D aluminum molecular model and exposed the material to extreme compressive loading. This enabled us to assess the detailed molecular mechanisms of deformation, including dislocation activity (Figs. 8 and 9).

Future work could focus on better mechanical characterization during the design process, for instance, via the use of genetic or other optimization algorithms. This could not only yield static functional materials but also provide access to multidimensional 4D printed materials45,49,50 that would originate from neural network synthesis and incorporate tunable and hence time-dependent material properties. Also, from the perspective of Fused Deposition Modeling (FDM) 3D printing, thermal–mechanical effects may be an important factor contributing to mechanical performance.51 Since our aim is to build architected materials with a high variety of changes in wall thickness and shape in real space, such effects may be important and could be addressed in future work when more comprehensive experimental studies are performed.

More broadly, research into the materialization of language can be of great interest to the physics community, especially the intersections of physics and philosophy.52–54 The availability of mapping models as presented in this paper can form a foundation for future research into such relationships, especially when explored with massive NLP models and how they relate with physical models.

The methods used in this paper include the following:

  • deep neural networks (a combination of VQGAN and CLIP for an integrated NLP-transformer image generation approach that can be mathematically parameterized);

  • a method to translate a 2D image into a fully tileable, periodic 3D architecture in the x-, y-, and z-directions;

  • additive manufacturing using multi-material FDM, to construct complex materials with internal architectures and voids; and

  • mechanical analyses using molecular dynamics (MD), to elucidate fundamental deformation mechanisms of metal-based nanoarchitected metamaterials.

1. Integrated CLIP and VQGAN model

The Contrastive Language-Image Pre-training (CLIP) model55 is a large pretrained neural network model that was trained on a variety of image-text pairings. The model was originally built for general image classification tasks. In our work, the CLIP model is utilized to evaluate the images generated by VQGAN given a specified input text, realized in an iterative optimization process. The detailed architecture and hyperparameters of the CLIP model are the same as in our previous work.20

The VQGAN model is a neural net that combines convolution operations with transformers to generate images at very high resolution. The only difference between the pretrained model we use in this work and that of the previous work20 is the size of the codebook. The codebook size is 128 × 128 in this work, which produces higher reconstruction quality compared to the former size of 16 × 16 in earlier reports. The VQGAN model serves as a generator that produces new candidates guided by the CLIP model.

The integration of the CLIP and VQGAN models was first proposed by Komatsuzaki to generate art images from text prompts.56 Here, we leverage the integrated CLIP and VQGAN model for 3D architected materials design,22 specifically the creation of 3D material architectures as described in Sec. IV B.
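The iterative optimization loop can be illustrated schematically. The sketch below is not the paper's implementation: VQGAN and CLIP are replaced by toy stand-ins (a `tanh` "generator" and a cosine-similarity "scorer") purely to show the structure of the generate–score–update cycle.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random(64)                     # stand-in for the CLIP text embedding

def generate(latent):
    # toy "VQGAN": maps a latent code to an image embedding
    return np.tanh(latent)

def clip_score(image_emb, text_emb):
    # toy "CLIP": cosine similarity between image and text embeddings
    return float(image_emb @ text_emb /
                 (np.linalg.norm(image_emb) * np.linalg.norm(text_emb)))

latent = rng.standard_normal(64)
initial_score = clip_score(generate(latent), target)
score = initial_score
for _ in range(500):                        # hill climbing stands in for gradient ascent
    candidate = latent + 0.05 * rng.standard_normal(64)
    s = clip_score(generate(candidate), target)
    if s > score:                           # keep candidates the scorer rates higher
        latent, score = candidate, s
```

In the real pipeline, the "update" step is gradient-based optimization of the VQGAN latent code, with CLIP providing the matching score between the generated image and the text prompt.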

The primary objective of image processing is to transform a complex image with multi-channel data into a printable geometry. To that end, we use colormap operations, smoothing functions (cv2.GaussianBlur and cv2.bilateralFilter), and other image processing methods to remove small image parcels, or islands, and ultimately generate printable, mechanically functional designs based on continuous material distributions. All image operations are performed using the OpenCV computer vision package in Python.57
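The removal of small parcels or islands can be illustrated with a minimal, pure-Python connected-component filter. This is a simplified stand-in for the OpenCV-based cleanup; the function name and the 4-connectivity choice are ours, not from the paper.

```python
from collections import deque

def remove_islands(mask, min_size):
    """Remove 4-connected components of 1s smaller than min_size
    from a binary 2D grid given as a list of lists."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [row[:] for row in mask]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # BFS to collect one connected component
                comp, q = [], deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) < min_size:      # erase small parcels/islands
                    for y, x in comp:
                        out[y][x] = 0
    return out

mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [1, 0, 0, 0]]
cleaned = remove_islands(mask, min_size=2)   # drops the two single-pixel islands
```

In practice the equivalent operation would be performed with OpenCV's connected-component tools on full-resolution images.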

1. Translating a 2D image into a stack of images for 3D model construction

We use an algorithm to convert an image into a 3D representation that is periodic in all directions. The image is first transformed into grayscale to enable algorithmic processing based on pixel intensities. The approach is described in detail in Figs. 3 and 4 and involves several steps.

  • Step 1: To create a periodic topology from a 2D image, we rotate and mirror the original unit and concatenate the results, as shown in Fig. 3(a). The expanded image is then periodic in both the x- and y-directions and can be tiled indefinitely. During this process, we can also apply smoothing functions to the original image to meet the desired precision.

  • Step 2: To expand an image from a 2D plane into a 3D printable geometry, we need extra information to build the geometric shape along the normal direction of the plane. This information is “hiding” in the images and is extracted in several processing steps. First, by applying different colormaps, the pixel-by-pixel intensity map of the image is calculated and collapsed into a single channel. The value at each pixel then serves as an indicator of its position in the corresponding 3D structure, as if subjected to a directional light: that is, the height perpendicular to the original 2D plane. With this spatial information, we apply a scanning function that picks up pixels in the image and pins them into a 3D binary array according to their assumed positions. This construction traverses the original image not only once but also in reverse, producing a mirror image so that the geometry is periodic in the z-direction. Stacks of such periodic 3D structures are assembled into higher-level architectures such as those shown in Figs. 5 and 6.
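The two steps can be sketched with NumPy. The simple threshold rule used as the height cue below is an illustrative simplification of the colormap- and light-based extraction described above.

```python
import numpy as np

def make_periodic_tile(img):
    """Step 1 sketch: mirror a 2D intensity array about x and y and
    concatenate the four copies, yielding a unit that tiles periodically."""
    top = np.concatenate([img, np.flip(img, axis=1)], axis=1)
    return np.concatenate([top, np.flip(top, axis=0)], axis=0)

def intensity_to_voxels(img, n_layers):
    """Step 2 sketch: treat pixel intensity (0..1) as a height cue and fill a
    binary voxel stack, then mirror the stack in z so the geometry is also
    periodic out of plane."""
    levels = np.linspace(0, 1, n_layers, endpoint=False)
    half = np.stack([(img > z).astype(np.uint8) for z in levels])  # (n_layers, H, W)
    return np.concatenate([half, half[::-1]], axis=0)              # mirror in z

unit = make_periodic_tile(np.array([[0.2, 0.8],
                                    [0.6, 0.4]]))
voxels = intensity_to_voxels(unit, n_layers=4)
```

The mirroring guarantees that opposite faces of the unit match, which is what makes the expanded geometry fully tileable in x, y, and z.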

With this algorithm, any 2D image with multiple channels can be transformed into a 3D geometry that is periodic in all directions; during the process, the expanded geometry can be stored directly as a stack of images, layer by layer. Gradients in the z-direction can be realized by slowly varying the source image, whereby Step 2 is conducted with slowly changing input data that is then tiled in the z-direction. Gradients in x and y can be realized by repeating Step 1 with distinct source images that vary slowly as periodic images are tiled in those directions.

While not done here, multimaterial designs can easily be realized by using different pixel intensity thresholds (for, say, stiff vs soft materials) or different layers in the final makeup of the architected material. In this way, complex plywood-like material geometries can be designed and manufactured.
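A minimal sketch of the thresholding idea follows; the threshold values and the material labels are illustrative choices, not taken from the paper.

```python
import numpy as np

def assign_materials(intensity, thresholds=(0.33, 0.66)):
    """Bin pixel intensities into material labels:
    0 = void, 1 = soft material, 2 = stiff material (illustrative)."""
    labels = np.zeros(intensity.shape, dtype=np.uint8)
    labels[intensity >= thresholds[0]] = 1   # soft material
    labels[intensity >= thresholds[1]] = 2   # stiff material
    return labels

labels = assign_materials(np.array([[0.1, 0.5, 0.9]]))
```

Each label would then be routed to a different extruder (e.g., TPU vs PLA) in a multi-material print.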

2. Translating a stack of images into a 3D volume and mesh representation

With the above approach, which translates a 2D image into an ordered stack of images, it is straightforward to assemble them into a 3D array, in which any nonzero value is treated as a cube, or more precisely a voxel, to be printed afterward. In this study, a binary 3D array, in which 1 represents material and 0 denotes a void region, forms a 3D porous material. After a 3D geometry is generated, we apply two open-source Python libraries, skimage58 and trimesh,59 to render the 3D arrays into STL format that can be read by 3D printing slicing software.
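The voxelization can be sketched as follows; the skimage/trimesh export calls are indicated only in comments because they require the external libraries cited above, and the small binary layers are illustrative.

```python
import numpy as np

# Stack binary layer images into a 3D voxel array (1 = solid, 0 = void).
layers = [np.array([[1, 0],
                    [1, 1]], dtype=np.uint8) for _ in range(3)]
volume = np.stack(layers)          # shape (n_layers, H, W); any nonzero value = a voxel

porosity = 1.0 - volume.mean()     # fraction of void voxels in the design

# With the open-source libraries cited in the text, the export would look like:
#   verts, faces, _, _ = skimage.measure.marching_cubes(volume, level=0.5)
#   trimesh.Trimesh(vertices=verts, faces=faces).export("design.stl")
```

The STL file produced by such an export is what the slicing software consumes to generate GCODE for the printer.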

This voxel-based approach using a 3D array enables us to solidify our stacks of images and to modify the details of a model before it is rendered into STL format. As mentioned before, we avoid generating parcels or islands in a 3D geometry because they contribute little to the mechanical performance and are sometimes not even printable with certain 3D printing techniques.

Overall, this two-step method not only translates a 2D image into a 3D printable model but also offers the potential to explore additional mechanically functional designs based on the information within a single image.

We employ additive manufacturing to generate 3D models of the materials designed from words. 3D files are sliced using Cura 4.9.1 to obtain GCODE and printed using an Ultimaker S3 multi-material printer. The process is shown in detail in Fig. 5. A water-dissolvable Ultimaker PVA support material is used to realize the complex geometries, whereas the primary material component is printed using an Ultimaker White PLA filament. Similarly, soft specimens are printed using TPU filament (both NinjaTek NinjaFlex and Ultimaker TPU95 are used), with PVA again serving as the water-dissolvable support material.

We investigate the behavior of architected materials at the atomic level using the unit cell in Fig. 4. We generate the initial configuration of the atomistic structure from a stack of images showing the continuous distribution of the 3D architected material. The detailed procedure is as follows: We first create a perfect FCC aluminum crystal containing 250 unit cells in each of the three directions (more than 15 × 10⁶ unit cells in total). Then, we remove atoms based on the voids in the image stacks to create pores in the crystal, reflecting the architected material design.

Specifically, we find the corresponding pixel of each aluminum atom in the perfect crystal based on its spatial position and remove the atom if the pixel is white. We use a relatively large atomistic system to avoid surface instabilities, which can occur if the void dimension approaches that of atomic vacancies or crystal unit cells, leading to coalescence of pores and loss of the initial atomistic structure. With this constraint, we find that the metal-based architected nanomaterial structure is maintained after equilibration in the MD simulations. The force field we utilize for aluminum nanocrystal modeling is a many-body interatomic potential developed for monoatomic metals based on the embedded atom method (EAM).46
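The atom-removal step can be sketched on a toy lattice. A tiny simple-cubic crystal stands in for the 250³-unit-cell FCC system, and the slab-shaped design is an arbitrary placeholder for the image-derived voxel array.

```python
import numpy as np

design = np.zeros((4, 4, 4), dtype=np.uint8)   # toy voxel design: 1 = solid, 0 = void
design[:, :2, :] = 1                           # keep a solid slab, rest becomes pore

a = 1.0                                        # lattice parameter (arbitrary units)
grid = np.arange(4) * a
atoms = np.array([(x, y, z) for x in grid for y in grid for z in grid])

idx = (atoms / a).astype(int)                  # map atom position -> voxel index
keep = design[idx[:, 0], idx[:, 1], idx[:, 2]] == 1
atoms = atoms[keep]                            # atoms in void voxels are deleted
```

The same position-to-pixel mapping, applied to the full FCC crystal and the image stack, carves the pores that define the nanoarchitected material.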

With the initial atomistic structure of the architected material, we perform non-equilibrium molecular dynamics (NEMD) simulations using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)60 to conduct a compression test on the atomistic architected material. Before the deformation is applied, the initial structure obtained from the image stack is equilibrated by energy minimization (10 ps) and a relaxation process in the NPT ensemble (50 ps). During the compression test, an NVT ensemble is applied to the system and the strain rate is set to −7.5 × 10⁻³/ps; the final compressive strain is −0.75.
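A hedged sketch of a corresponding LAMMPS input deck is shown below. The data and potential file names, the temperature, and the damping constants are assumptions (none are specified in the text); the strain rate and run lengths follow the protocol above.

```
# Sketch of a LAMMPS input for the compression protocol (assumed file names/temperature)
units           metal
boundary        p p p
read_data       architected_al.data          # atoms carved from the image stack

pair_style      eam/alloy
pair_coeff      * * Al.eam.alloy Al          # EAM potential for aluminum (Ref. 46)

minimize        1.0e-6 1.0e-8 1000 10000     # energy minimization
timestep        0.001                        # 1 fs

fix             relax all npt temp 300.0 300.0 0.1 iso 0.0 0.0 1.0
run             50000                        # 50 ps NPT relaxation
unfix           relax

fix             hold all nvt temp 300.0 300.0 0.1
fix             comp all deform 1 z erate -7.5e-3 remap x   # strain rate in 1/ps
run             100000                       # 100 ps -> ~-0.75 compressive strain
```

With a 1 fs timestep, 100 000 deformation steps at −7.5 × 10⁻³/ps accumulate the final compressive strain of −0.75 quoted above.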

The visualization of the MD results is performed in OVITO.61 To better visualize the porous structure, we utilize the “construct surface mesh” modifier, which generates a geometric description of the outer and inner boundaries of an atomistic solid using the alpha-shape algorithm.62 To visualize dislocation lines and the evolution of the crystal structure during compression, the “dislocation analysis” modifier is used, which implements the dislocation extraction algorithm63 to compute Burgers vectors and generate line representations of the dislocations. The dislocation lengths and crystal types are output from the same modifier.

See the supplementary material for Supplementary Movie M1: Traverse across image stacks in the z-direction, illustrating the individual x–y layers that form the basis for 3D architected material construction (https://www.dropbox.com/s/mgo4yi5knxyqf05/Movie_M1.mp4?dl=0) and Supplementary Movie M2: MD simulation snapshots of compression tests of the atomistic-level architected material made of aluminum (https://www.dropbox.com/s/x59n309nkrchvc3/Movie_M2.MP4?dl=0).

The authors acknowledge support from the MIT-IBM AI Lab, MIT Quest, ONR (Grant Nos. N000141912375 and N000142012189), AFOSR-MURI (Grant No. FA9550-15-1-0514), and ARO (Grant No. W911NF1920098).

The authors have no conflicts to disclose.

M.J.B. performed the neural network calculations, developed the image processing, and performed the 3D printing and experimental tests. Y.-C.H. developed the 3D rendering algorithm, and Z.Y. developed, carried out, and analyzed the MD simulations. All authors wrote and edited the paper, analyzed the data and conclusions, and contributed to the scientific research design and interpretations.

Y.-C.H. and Z.Y. contributed equally to this work.

The data that support the findings of this study are available from the corresponding author upon reasonable request.

1. Z. Qin, L. Dimas, D. Adler, G. Bratzel, and M. J. Buehler, “Biological materials by design,” J. Phys.: Condens. Matter 26(7), 073101 (2014).
2. U. G. K. Wegst, H. Bai, E. Saiz, A. P. Tomsia, and R. O. Ritchie, “Bioinspired structural materials,” Nat. Mater. 14(1), 23–36 (2015).
3. S. D. Palkovic, D. B. Brommer, K. Kupwade-Patil, A. Masic, M. J. Buehler, and O. Büyüköztürk, “Roadmap across the mesoscale for durable and sustainable cement paste—A bioinspired approach,” Constr. Build. Mater. 115, 13 (2016).
4. M. J. Buehler and A. Misra, “Mechanical behavior of nanocomposites,” MRS Bull. 44(1), 19 (2019).
5. M. J. Buehler, “Tu(r)ning weakness to strength,” Nano Today 5, 379 (2010).
6. K. Guo, Z. Yang, C.-H. Yu, and M. J. Buehler, “Artificial intelligence and machine learning in design of mechanical materials,” Mater. Horiz. 8(4), 1153–1172 (2021).
7. M. Milazzo et al., “Additive manufacturing approaches for hydroxyapatite-reinforced composites,” Adv. Funct. Mater. 29, 1903055 (2019).
8. D. B. Brommer, T. Giesa, D. I. Spivak, and M. J. Buehler, “Categorical prototyping: Incorporating molecular mechanisms into 3D printing,” Nanotechnology 27(2), 024002 (2016).
9. D. I. Spivak, T. Giesa, E. Wood, and M. J. Buehler, “Category theoretic analysis of hierarchical protein materials and social networks,” PLoS One 6(9), e23911 (2011).
10. T. Giesa, D. I. Spivak, and M. J. Buehler, “Reoccurring patterns in hierarchical protein materials and music: The power of analogies,” Bionanoscience 1(4), 153 (2011).
11. C.-H. Yu, Z. Qin, F. J. Martin-Martinez, and M. J. Buehler, “A self-consistent sonification method to translate amino acid sequences into musical compositions and application in protein design using artificial intelligence,” ACS Nano 13, 7471–7482 (2019).
12. M. Milazzo and M. J. Buehler, “Materials from fire: Sonification of flames, use in neural image generation and 3D printing using deep learning,” iScience 24(8), 102873 (2021).
13. M. Milazzo, G. I. Anderson, and M. J. Buehler, “Bioinspired translation of classical music into de novo protein structures using deep learning and molecular modeling,” Bioinspir. Biomim. 17, 015001 (2022).
14. Z. Yang, C.-H. Yu, and M. J. Buehler, “Deep learning model to predict complex stress and strain fields in hierarchical composites,” Sci. Adv. 7(15), eabd7416 (2021).
15. Z. Yang, C.-H. Yu, K. Guo, and M. J. Buehler, “End-to-end deep learning method to predict complete strain and stress tensors for complex hierarchical composite microstructures,” J. Mech. Phys. Solids 154, 104506 (2021).
16. S. L. Franjou, M. Milazzo, C.-H. Yu, and M. J. Buehler, “Sounds interesting: Can sonification help us design new proteins?,” Expert Rev. Proteomics 16(11–12), 875 (2019).
17. S. W. Cranford and M. J. Buehler, Biomateriomics (Springer Netherlands, 2012).
18. T. Giesa, R. Jagadeesan, D. I. Spivak, and M. J. Buehler, “Matriarch: A Python library for materials architecture,” ACS Biomater. Sci. Eng. 1, 1009 (2015).
19. T. Giesa, D. I. Spivak, and M. J. Buehler, “Category theory based solution for the building block replacement problem in materials design,” Adv. Eng. Mater. 14(9), 810 (2012).
20. Z. Yang and M. J. Buehler, “Words to matter: De novo architected materials design using transformer neural networks,” Front. Mater. 8, 740754 (2021).
21. A. van den Oord, O. Vinyals, and K. Kavukcuoglu, “Neural discrete representation learning,” CoRR abs/1711.00937 (2017).
23. R. Bommasani et al., “On the opportunities and risks of foundation models,” CoRR abs/2108.07258 (2021).
24. S. Chaudhari, V. Mithal, G. Polatkan, and R. Ramanath, “An attentive survey of attention models,” ACM Trans. Intell. Syst. Technol. 37(4), 53 (2019).
25. D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” CoRR abs/1409.0473 (2014).
26. A. Vaswani et al., “Attention is all you need,” CoRR abs/1706.03762 (2017).
27. F. Wang and D. M. J. Tax, “Survey on the attention based RNN model and its applications in computer vision,” CoRR abs/1601.06823 (2016).
28. J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” CoRR abs/1810.04805 (2019).
29. T. B. Brown et al., Language Models Are Few-Shot Learners (CoRR, 2020).
30. N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-end object detection with transformers,” Lecture Notes in Computer Science 12346, 213–229 (2020).
31. A. Dosovitskiy et al., “An image is worth 16 × 16 words: Transformers for image recognition at scale,” CoRR abs/2010.11929 (2020).
32. F. Wu et al., “3D-Transformer: Molecular representation with transformer in 3D space,” CoRR abs/2110.01191, 1–20 (2021).
33. P. Schwaller et al., “Molecular transformer: A model for uncertainty-calibrated chemical reaction prediction,” ACS Cent. Sci. 5(9), 1572–1583 (2019).
34. G. Pesciullesi, P. Schwaller, T. Laino, and J.-L. Reymond, “Transfer learning enables the molecular transformer to predict regio- and stereoselective reactions on carbohydrates,” Nat. Commun. 11(1), 4874 (2020).
35. D. Grechishnikova, “Transformer neural network for protein-specific de novo drug generation as a machine translation problem,” Sci. Rep. 11(1), 321 (2021).
36. A. Zolfagharian, L. Durran, S. Gharaie, B. Rolfe, A. Kaynak, and M. Bodaghi, “4D printing soft robots guided by machine learning and finite element models,” Sens. Actuators, A 328, 112774 (2021).
37. Y. He et al., “Exploiting generative design for 3D printing of bacterial biofilm resistant composite devices,” Adv. Sci. 8(15), 2100249 (2021).
38. Y. Zhang, Z. Wang, Y. Zhang, S. Gomes, and A. Bernard, “Bio-inspired generative design for support structure generation and optimization in Additive Manufacturing (AM),” CIRP Ann. 69(1), 117–120 (2020).
39. R. E. Hummel, Understanding Materials Science: History, Properties, Applications (Springer, 2005), pp. 1–440.
40. M.-S. Pham, C. Liu, I. Todd, and J. Lertthanasarn, “Damage-tolerant architected materials inspired by crystal microstructure,” Nature 565(7739), 305–311 (2019).
41. K. Guo and M. J. Buehler, “A semi-supervised approach to architected materials design using graph neural networks,” Extreme Mech. Lett. 41, 101029 (2020).
42. A. A. Zadpoor, “Meta-biomaterials,” Biomater. Sci. 8(1), 18–38 (2020).
43. N. E. Putra, M. J. Mirzaali, I. Apachitei, J. Zhou, and A. A. Zadpoor, “Multi-material additive manufacturing technologies for Ti-, Mg-, and Fe-based biomaterials for bone substitution,” Acta Biomater. 109, 1–20 (2020).
44. A. A. Zadpoor, “Mechanical meta-materials,” Mater. Horiz. 3(5), 371–381 (2016).
45. T. van Manen, S. Janbaz, K. M. B. Jansen, and A. A. Zadpoor, “4D printing of reconfigurable metamaterials and devices,” Commun. Mater. 2, 56 (2021).
46. Y. Mishin, D. Farkas, M. J. Mehl, and D. A. Papaconstantopoulos, “Interatomic potentials for monoatomic metals from experimental data and ab initio calculations,” Phys. Rev. B 59(5), 3393–3407 (1999).
47. L. J. Gibson and M. F. Ashby, Cellular Solids: Structure and Properties, 2nd ed. (Cambridge University Press, 2014), pp. 1–510.
48. P. M. Anderson, J. P. Hirth, and J. Lothe, Theory of Dislocations, 3rd ed. (Cambridge University Press, 2017), p. 1543.
49. X. Xia et al., “Electrochemically reconfigurable architected materials,” Nature 573(7773), 205–213 (2019).
50. A. R. Studart, “Biological and bioinspired composites with spatially tunable heterogeneous architectures,” Adv. Funct. Mater. 23(36), 4423–4436 (2013).
51. M. Moradi, M. Karami Moghadam, M. Shamsborhan, and M. Bodaghi, “The synergic effects of FDM 3D printing parameters on mechanical behaviors of bronze poly lactic acid composites,” J. Compos. Sci. 4(1), 17 (2020).
52. N. Cartwright, How the Laws of Physics Lie (Clarendon Press, 1983).
53. P. P. Goff, Consciousness and Fundamental Reality (Oxford University Press, 2017).
54. D. Macauley, Elemental Philosophy: Earth, Air, Fire, and Water as Environmental Ideas (Suny Press, 2010).
55. A. Radford et al., “Learning transferable visual models from natural language supervision,” CoRR abs/2103.00020 (2021).
56. A. Komatsuzaki, “LAION-400M: Open dataset of CLIP-filtered 400 million image-text pairs,” CoRR abs/2111.02114 (2021).
57. G. Bradski, The OpenCV Library (Dr. Dobb’s Journal of Software Tools, 2000).
58. S. van der Walt et al., “Scikit-image: Image processing in Python,” PeerJ 2(1), e453 (2014).
59. Trimesh, computer software, retrieved from https://github.com/mikedh/trimesh, 2019.
60. A. P. Thompson et al., “LAMMPS—A flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales,” Comput. Phys. Commun. 271, 108171 (2022).
61. A. Stukowski, “Visualization and analysis of atomistic simulation data with OVITO—the open visualization tool,” Modell. Simul. Mater. Sci. Eng. 18(1), 015012 (2010).
62. A. Stukowski, “Computational analysis methods in atomistic modeling of crystals,” JOM 66(3), 399–407 (2014).
63. A. Stukowski, V. V. Bulatov, and A. Arsenlis, “Automated identification and indexing of dislocations in crystal interfaces,” Modell. Simul. Mater. Sci. Eng. 20(8), 085007 (2012).
