Recent advances in scanning transmission electron microscopy (STEM) have enabled direct visualization of the atomic structure of ferroic materials, allowing atomic column positions to be determined with picometer-level precision. This, in turn, has enabled direct mapping of ferroelectric and ferroelastic order parameter fields via a top-down approach, in which the atomic coordinates are mapped directly onto the mesoscopic order parameters. Here, we explore the alternative bottom-up approach, in which the atomic coordinates derived from the STEM image are used to explore the extant atomic displacement patterns in the material and to build a collection of building blocks for the distorted lattice. This approach is illustrated for the La-doped BiFeO3 system.

After more than 70 years of intensive research, ferroelectrics remain one of the most fascinating classes of materials.1,2 This is due both to a broad spectrum of applications ranging from nonvolatile electronics3 to electromechanical and optoelectronic devices4 and to a dazzling array of physical phenomena and functionalities.5 Issues such as the nature of the morphotropic and relaxor states,6,7 ferroelectric and size effects,4 and coupling between polarization and chemical,8 transport,9 and light10 phenomena have captured the attention of the scientific community worldwide.

Until recently, insights into the fundamental physics of ferroelectric materials were derived from the synergy of macroscopic measurements and neutron and X-ray scattering techniques. The invention of Piezoresponse Force Microscopy (PFM) and associated spectroscopic techniques has opened for exploration the realm of mesoscopic phenomena, including polarization and domain wall dynamics, domain switching, and nonlinear responses.11–16 Structural information on comparable length scales has become available via focused X-ray and tomographic techniques.17 However, the associated atomistic mechanisms remained poorly understood until the last decade, when the spatial resolution and information limit of transmission and scanning transmission electron microscopy (TEM and STEM) began to allow the mapping of structural distortions within individual unit cells and hence the mapping of the mesoscopic ferroelectric order parameter field.

This approach was pioneered in TEM by Jia et al.18 and in STEM by Chisholm et al.19 and Borisevich et al.20 and further extended by Nelson et al.,21 Mundy et al.,22 Lubk et al.,23 and others. Here, the atomic positions of cations and anions within a unit cell are used to derive the polarization, either directly as a weighted sum of the effective Born charges or indirectly, when a certain measure of unit cell asymmetry is assumed to be proportional to the polarization vector. The polarization fields derived in this manner can be used as a qualitative measure of materials functionality. Beyond the qualitative description, the polarization distributions in the vicinity of surfaces, interfaces, or topological defects can be directly matched to solutions of the Ginzburg–Landau equations to extract the magnitude of gradient and gradient coupling terms such as flexoelectricity,24–26 or the correlation and screening lengths describing boundary conditions at surfaces and interfaces,27 and in certain cases extended to interfacial electrochemistry.28,29

However, these analyses have until now relied on a postulated relationship between the experimentally determined atomic coordinates and the polarization vector. In more complex analyses, the bulk form of the Ginzburg–Landau free energy was additionally adopted as determined from macroscopic thermodynamic and scattering studies.24–26 However, in many materials systems, such as morphotropic and relaxor ferroelectrics, the nature of the order parameter itself, and hence the corresponding free energy expansion, is actively debated. Correspondingly, of interest is the question of whether these descriptors can be obtained from the experimental data, as opposed to being postulated. We further note that the nature of the local descriptors of molecular structures is one of the key issues determining the efficiency and performance of machine learning algorithms for materials property prediction and optimization; hence, deriving such descriptors from experimental data is a target of interest for both the machine learning / artificial intelligence (ML/AI) community and condensed matter physics.

Here, we explore the nature of the building blocks in a morphotropic ferroelectric system using statistical analysis of atomically resolved STEM data in a weakly supervised fashion. Using deep-learning analysis, we identify the localization of atomic columns in the form of a probability density field. We further explore the use of several linear statistical unmixing techniques, including Gaussian mixture models and independent component analysis (ICA), to build a library of structural distortions and the associated domain structures. These analyses and the fully integrated workflow are available as part of this publication. Further perspectives for the analysis are discussed.

As our model system, we acquired STEM images from BiFeO3 thin films doped with La to La0.17Bi0.83FeO3, driving the system near the morphotropic phase boundary between the ferroelectric rhombohedral (R) and nonpolar/antipolar orthorhombic (O) phases.30 The films were grown by pulsed laser deposition and exhibit a mixture of these two phases, as detailed in Ref. 30. High Angle Annular Dark Field (HAADF) STEM images were acquired on a Cs-corrected FEI Titan at 300 kV. The HAADF-STEM images are input to the neural network analysis in their raw format, i.e., as single raster scan datasets with no denoising or other preprocessing. In particular, the low conductivity of the LBFO results in time-varying sample charging under the electron beam, which manifests in slow-scan-axis artifacts. This morphotropic-phase-boundary composition thus presents a diversity of phases and orientations within a small region, each nominally distinguishable by subtle symmetry breaking from the cubic perovskite structure but with significant scan error/noise.

As a first step of the analysis, we adopt a deep learning neural network to convert noisy experimental data into atomic coordinates of the different atomic species. We used a U-Net-like fully convolutional neural network (FCNN)31 supplemented by dilated convolutions in the bottleneck layer, which allows simultaneous mixed-scale denoising of the atomic image and separation of atomic columns with different intensities into different "channels"/classes. The training set was generated for a regular square bipartite lattice under the assumption that the two sublattices have different contrasts in the image but without any specific knowledge of the particular system. The atomic objects (columns) were represented as 2D Gaussians with the sublattice intensity ratio in the (1.5, 2.5) range. Gaussian and Poisson noise were added to the simulated images to ensure the robustness of the neural network. Once the model is trained, it allows rapid identification of atomic positions based on the local contrast. We note that the same model can, in principle, also be used for the analysis of surfaces with a bipartite lattice structure in scanning tunneling microscopy experiments.32 The output of the FCNN is a probability density field that gives, for each pixel, the probability of belonging to a given type of atomic column, as shown in Fig. 1. While it is possible to use other methods (e.g., peak fitting) for finding atoms in high-quality data with minimal noise/distortions, the deep learning-based approach also works in the presence of high levels of noise and larger image distortions, where the "standard" methods may fail, as was demonstrated for STEM experiments on 2D materials.33,34
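
To illustrate how such a probability density output can be converted into discrete column positions, the following minimal sketch (not the authors' exact pipeline, which is provided in the accompanying notebook) thresholds each sublattice channel of a hypothetical nn_output array and takes the center of mass of every connected blob; the threshold value and channel layout are illustrative assumptions.

```python
# Minimal sketch (assumed post-processing, not the published code): convert the
# FCNN's per-class probability maps into atomic column coordinates.
import numpy as np
from scipy import ndimage

def columns_from_probability_maps(prob_maps, threshold=0.5):
    """prob_maps: (H, W, n_classes) FCNN output, one channel per sublattice/background."""
    coordinates = {}
    for ch in range(prob_maps.shape[-1]):
        mask = prob_maps[..., ch] > threshold       # binarize this channel
        labels, n_blobs = ndimage.label(mask)       # connected-component analysis
        if n_blobs == 0:
            coordinates[ch] = np.empty((0, 2))
            continue
        # center of mass of the probability density within each blob ~ column position
        com = ndimage.center_of_mass(prob_maps[..., ch], labels, range(1, n_blobs + 1))
        coordinates[ch] = np.asarray(com)           # (n_columns, 2) in (row, col) pixels
    return coordinates

# usage: coords_by_class = columns_from_probability_maps(nn_output)
```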

FIG. 1.

(a) Experimental image of La-doped BiFeO3 system (LBFO). (b) and (c) Neural network output consisting of two channels corresponding to different atomic sublattices. (d) Principal component analysis (PCA) scree plot showing the explained variance as a function of the number of PCA components for different cell sizes d, that is, different sizes of subimages extracted from the neural network output as shown in (e), where "red" and "green" are the two sublattices shown in (b) and (c) and "blue" is a background class. The inset in (d) shows the variance for each of the first 4 PCA components as a function of the effective cell size d.


To gain fundamental insight into the nature of the elementary building blocks of the material, we generate a local neighborhood of side d for each site in the lattice, as shown in Fig. 1(e). These subimages are centered at the center of mass of each individual column and hence are robust with respect to intrinsic factors such as large-scale strains and distortions and extrinsic factors such as microscope drift. At this stage, the image is transformed from a 2D object into a set of subimages cn, where n = (i, j) defines the lattice site at which the subimage is centered.
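
A possible implementation of this descriptor-construction step is sketched below. It assumes the probability maps nn_output and an (N, 2) array coords of A-site column centers, e.g., one channel of the output of the previous sketch; the half-width half, which plays the role of the cell size d in pixels, is an illustrative parameter.

```python
# Illustrative subimage extraction: cut a (2*half + 1)-pixel window from the
# (H, W, C) neural-network output, centered at each column's center of mass.
import numpy as np

def extract_subimages(nn_output, coords, half=20):
    """nn_output: (H, W, C) probability maps; coords: (N, 2) column centers in pixels."""
    H, W = nn_output.shape[:2]
    subimages, kept_sites = [], []
    for n, (r, c) in enumerate(np.round(coords).astype(int)):
        r0, r1 = r - half, r + half + 1
        c0, c1 = c - half, c + half + 1
        if r0 < 0 or c0 < 0 or r1 > H or c1 > W:    # skip columns too close to the edge
            continue
        subimages.append(nn_output[r0:r1, c0:c1])
        kept_sites.append(n)
    # (N_kept, 2*half+1, 2*half+1, C) stack and the indices of the retained sites
    return np.stack(subimages), np.array(kept_sites)
```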

The initial insight into the information content of the system can be derived from the principal component analysis of cn, decomposing the feature vectors into orthogonal components ordered by explained variance. Shown in Fig. 1(d) is the information content of the first 4 principal component analysis (PCA) components35 as a function of the effective unit cell size d (subimage size). One can see that most of the information is contained in 4–6 eigenvectors. Curiously, the behavior of the information content depends on the subimage size, showing a pronounced increase in the 4th value for even subimage sizes. This behavior is directly related to the dominant ordering in the system. The eigenmodes and their loading maps corresponding to the PCA decomposition of the local neighborhoods for d = 1 from the neural network output are shown in Figs. 2(a)–2(d) and 2(e)–2(h), respectively.
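
Assuming the subimages stack built above, this exploratory PCA step can be sketched with scikit-learn as follows; the number of retained components is an illustrative choice.

```python
# Minimal PCA sketch: flatten each local neighborhood into a feature vector,
# inspect the explained variance of the leading components (scree plot), and
# keep the eigenmodes and per-site loadings for later visualization.
import numpy as np
from sklearn.decomposition import PCA

X = subimages.reshape(len(subimages), -1)        # (N_sites, n_features)
pca = PCA(n_components=10).fit(X)
print("variance captured by first 4 components:", pca.explained_variance_ratio_[:4])

eigenmodes = pca.components_.reshape(-1, *subimages.shape[1:])  # cf. Figs. 2(a)-2(d)
loadings = pca.transform(X)                      # one row per lattice site, cf. Figs. 2(e)-2(h)
```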

FIG. 2.

PCA decomposition of the local image descriptors for the d = 1 case [see Figs. 1(d) and 1(e)] extracted from the neural network output into four components, with the associated eigenmodes (a)–(d) and loading maps for each eigenmode (e)–(h).


Before proceeding, we note that the adopted analysis potentially allows for the separation of domain and orientation variants throughout the thickness of the material. In other words, if random components are present throughout the thickness of the material with sufficient statistics, these can in principle be separated. Insofar as the HAADF signal through the sample depth can be represented as a linear mixture and pure variants are present, the true phase composition averaged over the beam direction, as well as the endmembers corresponding to the individual phases, can be extracted. It should be noted that these are generally not good assumptions for STEM signals: for this sample thickness and the strongly scattering heavy elements in this system, the HAADF signal is highly nonlinear and will preferentially weight the entrance surface. Nevertheless, depth mixing within this region may still manifest qualitatively in the adopted analysis.

In analyzing the PCA components, we note that the characteristic aspects of the PCA decomposition are that (a) the expansion is lossless, i.e., the full PCA expansion preserves all of the information in the system, but (b) the PCA components are defined purely from the perspective of decreasing information content and are not subject to specific physical constraints. As such, analysis of the PCA eigenvectors (building blocks) and loading maps (local concentrations of these blocks), as illustrated in Fig. 2, is ideally suited for exploratory analysis. Here, we find that the first two PCA components clearly define the ferroelectric distortion map of the system. Indeed, the eigenvectors clearly capture the cation displacements in the [1-1] and [11] directions, as visible from the zero contrast at the central atom (it is almost invisible, meaning that it does not change much) and the characteristic up-down pattern of the 4 corner atoms. The corresponding loading maps show the spatial localization of these distortions. Note that there are clear domains where both distortion components are present with different signs, e.g., compare the bottom-left and top-left regions. This is due to the fact that domains can have 4 possible in-plane orientations of polarization, which gives rise to 4 possible mixtures; different PCA component maps pick up the x-component and y-component of polarization separately.

To gain a better understanding of the domain structure (and achieve a better domain separation), we plot the features associated with the first two PCA components in Fig. 3(a). Interestingly, simple visual inspection of the plot in the PCA feature space suggests that the extracted features can be grouped into ∼3 clusters. This grouping was achieved by performing k-means clustering36 on the features associated with the first two PCA components. The spatial distribution of the resultant clusters is shown in Fig. 3(b); it clearly reveals three well-defined R-phase domains with A-site displacements oriented along the [11] (red), [1-1] (green), and [-11] (blue, top right) directions. An O-phase region at the top left manifests as alternating red/blue (-11) layers because the projected O-phase unit cell consists of four d = 1 cells with antiferrodistortive A-site displacements in a ++−− pattern.
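
A hedged sketch of this clustering step, reusing the loadings, coords, and kept_sites arrays assumed in the earlier sketches, could look as follows; k = 3 follows the visual inspection described above.

```python
# K-means in the space of the first two PCA features, with the cluster labels
# mapped back onto the image coordinates (cf. Fig. 3).
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

features = loadings[:, :2]                              # first two PCA components per site
labels = KMeans(n_clusters=3, n_init=10).fit_predict(features)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.scatter(features[:, 0], features[:, 1], c=labels, s=3)                # cf. Fig. 3(a)
ax2.scatter(coords[kept_sites, 1], coords[kept_sites, 0], c=labels, s=3)  # cf. Fig. 3(b)
ax2.invert_yaxis()                                      # image-style orientation
plt.show()
```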

FIG. 3.

(a) Plot of the features associated with the first two PCA components from Fig. 2. The k-means clustering was applied to these data and the corresponding cluster centers are shown with yellow crosses. (b) Spatial distribution of the k-means clusters in the coordinates of the original image.


Further, unexpected insight into the ferroelectric behavior of this system can be obtained from the 3rd PCA component. A number of "outlier points" are observed, but, importantly, there is also a clearly visible negative feature colocated with the domain walls seen in the first component. The corresponding eigenvector shows a "sharpening" of the central atom, a decrease in intensity of the corner A-site atoms, and an increase in intensity at the oxygen positions. Rather than an A-site translation, as for components 1, 2, and 4, this component indicates relative changes in atomic column scattering, such as from a change in local stoichiometry or in electron beam channeling. Possible underlying causes include a strain- or charge-driven stoichiometric defect concentration at the twin boundaries or suppression of the counter-rotated oxygen octahedral tilt system in the vicinity of the domain wall, which would increase the channeling along these columns.

Finally, the 4th PCA component illustrates the large scanning artifacts from beam-induced charging in this dataset. It also highlights the segregation of this error from the "real" structural information contained in the other components. The corresponding eigenvector consists of opposed A-site translations corresponding to a slow-scan-axis dilation. The loading map shows that these dilations are uniform across scan lines while varying significantly along the slow-scan axis of the image. Unfortunately, this dilation is also a component of some boundaries between antiparallel A-site displacements. As a result, some structural information about the R/O boundary and the alternating layers of the O-phase regions is also included in this component.

Based on the PCA decomposition, we have further implemented other linear unmixing methods, including non-negative matrix factorization (NMF),37 independent component analysis (ICA),38 and Gaussian mixture modeling (GMM).39 The logic of this choice is that these models allow for certain forms of physical constraints. Of these, NMF separates the mixture into non-negative components, corresponding to positive intensities in the image. ICA operates to "decrease the Gaussianity" and maximize the variability between components. GMM seeks to represent the data as probabilities of belonging to specific components of the model mixture. Note that all these methods require the number of components to be postulated, and hence practical decompositions with different numbers of mixture components should be explored. We also note that linear unmixing models with specific constraints on the eigenvectors or on the sparsity of the loading maps exist40 but are not explored here.
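
For illustration, the ICA and GMM variants can be set up on the same flattened descriptor matrix X used for PCA with the scikit-learn implementations below; the number of components k is a postulated hyperparameter, and an NMF sketch follows the next paragraph.

```python
# Illustrative set-up of two of the alternative unmixing models (assumed,
# generic scikit-learn usage rather than the published notebook code).
from sklearn.decomposition import FastICA
from sklearn.mixture import GaussianMixture

k = 3                                            # assumed number of mixture components
ica = FastICA(n_components=k, max_iter=1000)
sources = ica.fit_transform(X)                   # per-site scores of the least-Gaussian components
mixing = ica.mixing_                             # (n_features, k); reshape mixing.T to view component images

gmm = GaussianMixture(n_components=k, covariance_type="diag")
responsibilities = gmm.fit(X).predict_proba(X)   # probability of each site belonging to each component
```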

The comparative analysis of the different methods demonstrated that the ICA method leads to the separation of the instrumental noise and aliasing components that dominate the images. The GMM method usually cannot separate fine features in the domain distribution unless the number of components is larger than five, which would be unphysical in this case. The best results for the clustering/decomposition analysis were obtained using NMF. NMF solves the problem of decomposing the input data, represented by a matrix X of size n × m, where n is the number of samples (subimages) and m is the number of features, into two non-negative factors W and H such that X ≈ WH = Σ_{i=1}^{k} W_i H_i, where W_i are the columns of W and H_i are the rows of H. Here, H contains the endmembers, while W is used to construct the loading maps (abundances) associated with the extracted endmembers (see the notebook for the details of the method implementation). Due to the non-negativity constraint, NMF can be applied to problems of finding k ≪ m physically meaningful endmembers in the input data, such that all the data can be explained as a mixture of these k basic phases. Shown in Fig. 4 are the endmembers extracted via NMF [Figs. 4(a)–4(c)] and their loading maps [Figs. 4(d)–4(f)] for the case of 3 mixture components. Notice that the NMF analysis shows the same localization of domains as found after applying k-means clustering to the PCA output. The unexpected ("anomalous") domain wall feature observed in the third PCA component appears only at k > 6.
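
A minimal NMF sketch, again assuming the flattened non-negative descriptor matrix X and the subimages stack from the earlier sketches, could read:

```python
# Rank-k non-negative factorization X ~ WH: rows of H are reshaped back into
# "endmember" unit cells, and columns of W give the corresponding abundance
# (loading) maps over the lattice sites (cf. Fig. 4).
import numpy as np
from sklearn.decomposition import NMF

k = 3                                                   # number of mixture components
model = NMF(n_components=k, init="nndsvd", max_iter=1000)
W = model.fit_transform(X)                              # (n_sites, k) abundances
H = model.components_                                   # (k, n_features) endmembers

endmembers = H.reshape(k, *subimages.shape[1:])         # cf. Figs. 4(a)-4(c)
residual = np.linalg.norm(X - W @ H)                    # quality of the rank-k approximation
```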

FIG. 4.

(a)–(c) Endmembers from the non-negative matrix factorization-based decomposition of local image descriptors from the output of the neural network. The position of the central blob is the same in (a)–(c), and thus, the relative displacement of the 4 corner atoms is clearly seen. (d)–(f) The corresponding spatial distribution (abundance map) for each endmember.


To summarize, we have developed an approach for the bottom-up construction of the individual structural elements of a ferroelectric solid. This approach allows for the separation of components averaged along the beam direction and provides a way to work efficiently with such data. Anomalous behaviors at the ferroelectric domain walls were detected. The data analysis tools developed in this work are available in the form of an interactive, live-code mirror paper (Jupyter paper) at https://doi.org/10.26434/chemrxiv.8001473.v1 and can be used across the STEM and ferroelectric communities.

See the supplementary material for a short video showing how to open and execute the Jupyter notebook to retrace the analysis and results of this paper.

The concept of Jupyter-based publications was proposed and developed by M.Z., R.K.V., and S.V.K. This research was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division (S.V.K., R.K.V., and C.N.). This research was conducted at the Center for Nanophase Materials Sciences, which is a DOE Office of Science User Facility. Electron microscopy at the Molecular Foundry, Lawrence Berkeley National Laboratory, was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Contract No. DE-AC02-05CH11231.

1. D. Damjanovic, Rep. Prog. Phys. 61(9), 1267–1324 (1998).
2. O. Auciello, J. F. Scott, and R. Ramesh, Phys. Today 51(7), 22–27 (1998).
3. M. Alexe, A. Gruverman, C. Harnagea, N. D. Zakharov, A. Pignolet, D. Hesse, and J. F. Scott, Appl. Phys. Lett. 75(8), 1158–1160 (1999).
4. N. Setter, D. Damjanovic, L. Eng, G. Fox, S. Gevorgian, S. Hong, A. Kingon, H. Kohlstedt, N. Y. Park, G. B. Stephenson, I. Stolitchnov, A. K. Tagantsev, D. V. Taylor, T. Yamada, and S. Streiffer, J. Appl. Phys. 100(5), 051606 (2006).
5. A. K. Tagantsev, L. E. Cross, and J. Fousek, Domains in Ferroic Crystals and Thin Films (Springer, New York, 2010).
6. A. E. Glazounov, A. K. Tagantsev, and A. J. Bell, Phys. Rev. B 53(17), 11281–11284 (1996).
7. A. A. Bokov, B. J. Rodriguez, X. H. Zhao, J. H. Ko, S. Jesse, X. F. Long, W. G. Qu, T. H. Kim, J. D. Budai, A. N. Morozovska, S. Kojima, X. L. Tan, S. V. Kalinin, and Z. G. Ye, Z. Kristallogr. 226(2), 99–107 (2011).
8. S. M. Yang, A. N. Morozovska, R. Kumar, E. A. Eliseev, Y. Cao, L. Mazet, N. Balke, S. Jesse, R. K. Vasudevan, C. Dubourdieu, and S. V. Kalinin, Nat. Phys. 13(8), 812 (2017).
9. P. Maksymovych, S. Jesse, P. Yu, R. Ramesh, A. P. Baddorf, and S. V. Kalinin, Science 324(5933), 1421–1425 (2009).
10. A. L. Kholkin, S. O. Iakovlev, and J. L. Baptista, Appl. Phys. Lett. 79(13), 2055–2057 (2001).
11. R. K. Vasudevan, D. Marincel, S. Jesse, Y. Kim, A. Kumar, S. V. Kalinin, and S. Trolier-McKinstry, Adv. Funct. Mater. 23(20), 2490–2508 (2013).
12. S. V. Kalinin, A. N. Morozovska, L. Q. Chen, and B. J. Rodriguez, Rep. Prog. Phys. 73(5), 056502 (2010).
13. A. Gruverman and A. Kholkin, Rep. Prog. Phys. 69(8), 2443–2474 (2006).
14. D. A. Bonnell, S. V. Kalinin, A. L. Kholkin, and A. Gruverman, MRS Bull. 34(9), 648–657 (2009).
15. B. J. Rodriguez, S. Choudhury, Y. H. Chu, A. Bhattacharyya, S. Jesse, K. Seal, A. P. Baddorf, R. Ramesh, L. Q. Chen, and S. V. Kalinin, Adv. Funct. Mater. 19(13), 2053–2063 (2009).
16. B. J. Rodriguez, Y. H. Chu, R. Ramesh, and S. V. Kalinin, Appl. Phys. Lett. 93(14), 142901 (2008).
17. R. P. Winarski, M. V. Holt, V. Rose, P. Fuesz, D. Carbaugh, C. Benson, D. M. Shu, D. Kline, G. B. Stephenson, I. McNulty, and J. Maser, J. Synchrotron Radiat. 19, 1056–1060 (2012).
18. C.-L. Jia, V. Nagarajan, J.-Q. He, L. Houben, T. Zhao, R. Ramesh, K. Urban, and R. Waser, Nat. Mater. 6(1), 64–69 (2007).
19. M. F. Chisholm, W. Luo, M. P. Oxley, S. T. Pantelides, and H. N. Lee, Phys. Rev. Lett. 105(19), 197602 (2010).
20. A. Borisevich, O. S. Ovchinnikov, H. J. Chang, M. P. Oxley, P. Yu, J. Seidel, E. A. Eliseev, A. N. Morozovska, R. Ramesh, S. J. Pennycook, and S. V. Kalinin, ACS Nano 4(10), 6071–6079 (2010).
21. C. T. Nelson, B. Winchester, Y. Zhang, S.-J. Kim, A. Melville, C. Adamo, C. M. Folkman, S.-H. Baek, C.-B. Eom, D. G. Schlom, L.-Q. Chen, and X. Pan, Nano Lett. 11(2), 828–834 (2011).
22. J. A. Mundy, J. Schaab, Y. Kumagai, A. Cano, M. Stengel, I. P. Krug, D. M. Gottlob, H. D. Anay, M. E. Holtz, R. Held, Z. Yan, E. Bourret, C. M. Schneider, D. G. Schlom, D. A. Muller, R. Ramesh, N. A. Spaldin, and D. Meier, Nat. Mater. 16(6), 622–627 (2017).
23. A. Lubk, M. D. Rossell, J. Seidel, Y. H. Chu, R. Ramesh, M. J. Hÿtch, and E. Snoeck, Nano Lett. 13(4), 1410–1415 (2013).
24. Q. Li, C. T. Nelson, S. L. Hsu, A. R. Damodaran, L. L. Li, A. K. Yadav, M. McCarter, L. W. Martin, R. Ramesh, and S. V. Kalinin, Nat. Commun. 8, 1468 (2017).
25. E. A. Eliseev, S. V. Kalinin, Y. J. Gu, M. D. Glinchuk, V. Khist, A. Borisevich, V. Gopalan, L. Q. Chen, and A. N. Morozovska, Phys. Rev. B 88(22), 224105 (2013).
26. A. Y. Borisevich, E. A. Eliseev, A. N. Morozovska, C. J. Cheng, J. Y. Lin, Y. H. Chu, D. Kan, I. Takeuchi, V. Nagarajan, and S. V. Kalinin, Nat. Commun. 3, 775 (2012).
27. A. Y. Borisevich, A. N. Morozovska, Y. M. Kim, D. Leonard, M. P. Oxley, M. D. Biegalski, E. A. Eliseev, and S. V. Kalinin, Phys. Rev. Lett. 109(6), 065702 (2012).
28. A. Y. Borisevich, A. R. Lupini, J. He, E. A. Eliseev, A. N. Morozovska, G. S. Svechnikov, P. Yu, Y. H. Chu, R. Ramesh, S. T. Pantelides, S. V. Kalinin, and S. J. Pennycook, Phys. Rev. B 86(14), 140102(R) (2012).
29. Y. M. Kim, J. He, M. D. Biegalski, H. Ambaye, V. Lauter, H. M. Christen, S. T. Pantelides, S. J. Pennycook, S. V. Kalinin, and A. Y. Borisevich, Nat. Mater. 11(10), 888–894 (2012).
30. D. Chen, C. T. Nelson, X. Zhu, C. R. Serrao, J. D. Clarkson, Z. Wang, Y. Gao, S.-L. Hsu, L. R. Dedon, Z. Chen, D. Yi, H.-J. Liu, D. Zeng, Y.-H. Chu, J. Liu, D. G. Schlom, and R. Ramesh, Nano Lett. 17(9), 5823–5829 (2017).
31. O. Ronneberger, P. Fischer, and T. Brox, preprint arXiv:1505.04597 (2015).
32. M. Ziatdinov, A. Maksov, L. Li, A. S. Sefat, P. Maksymovych, and S. V. Kalinin, Nanotechnology 27(47), 475706 (2016).
33. M. Ziatdinov, O. Dyck, A. Maksov, X. Li, X. Sang, K. Xiao, R. R. Unocic, R. Vasudevan, S. Jesse, and S. V. Kalinin, ACS Nano 11(12), 12742–12752 (2017).
34. M. Ziatdinov, O. Dyck, S. Jesse, and S. V. Kalinin, preprint arXiv:1901.09322 (2019).
35. S. Jesse and S. V. Kalinin, Nanotechnology 20(8), 085714 (2009).
36. J. A. Hartigan and M. A. Wong, J. R. Stat. Soc. Ser. C 28(1), 100–108 (1979).
37. D. D. Lee and H. S. Seung, Nature 401(6755), 788–791 (1999).
38. A. Hyvärinen and E. Oja, Neural Networks 13(4-5), 411–430 (2000).
39. D. A. Reynolds, R. C. Rose et al., IEEE Trans. Audio, Speech, Lang. Process. 3(1), 72–83 (1995).
40. R. Kannan, A. V. Ievlev, N. Laanait, M. A. Ziatdinov, R. K. Vasudevan, S. Jesse, and S. V. Kalinin, Adv. Struct. Chem. Imaging 4(1), 6 (2018).
