X-ray photoelectron spectroscopy (XPS) is widely used to identify chemical species at a surface through the observation of peak positions and peak shapes. It is less widely recognized that intensities in XPS spectra can also be used to obtain information on the chemical composition of the surface of the sample and the depth distribution of chemical species. Transforming XPS data into meaningful information on the concentration and distribution of chemical species is the topic of this article. In principle, the process is straightforward, but there are a number of pitfalls that must be avoided to ensure that the information is representative and as accurate as possible. This paper sets out the things that should be considered to obtain reliable, meaningful, and useful information from quantitative XPS. This includes the necessity for reference data, instrument performance checks, and a consistent and methodical approach to the separation of the inelastic background from peaks. The paper contains relevant and simple equations along with guidance on their use, validity, and assumptions.

It is the normal expectation that, over time, improvements are made and things get better. There are often ebbs and flows in this process and from some perspectives it may appear that, in certain periods, there is regression rather than progression. As x-ray photoelectron spectroscopy (XPS) transforms from being a specialist subject into a mainstream analytical tool, a period of relearning is likely to occur. The fact that XPS has achieved widespread adoption is a testament to the community of experts who, in the final decades of the 20th century, solved most of the associated challenges and issues. Through that work, XPS data were shown to be highly repeatable and reproducible, the instrumentation became more efficient and automated, simple algorithms to interpret the data were shown to be sufficiently accurate for most purposes and in some cases, such as the measurement of very thin silicon oxide layers, XPS was found to be the most accurate measurement method.1 

Since the turn of the century, XPS instrument sales have continued to grow and the number of papers that include XPS data is doubling every ten years. This is a faster rate than the increase in the number of scientific publications, which generally doubles every 15 years.2 The majority of this growth can be ascribed to articles that include XPS as one of many “characterization tools” rather than papers that employ XPS as the main analytical method. This implies that a large number of relatively inexperienced researchers use XPS to measure samples. These users may assume that the instrument manufacturer has written the appropriate software, included all the calibration procedures, and has the appropriate reference data to make their analysis meaningful. Sadly, this is often not the case. This problem, combined with the difficulty of accessing coherent, simple, and accessible guidance, results in a significant proportion of errors in XPS data analysis. One misconception that I have heard expressed by those outside the surface analysis community is that XPS cannot be used in a quantitative manner. With perhaps slightly more justification, I have heard XPS described by casual users as “semiquantitative.” Both points of view relate to the manner in which some researchers use XPS, and both are fundamentally wrong. Like all other measurement methods, XPS is quantitative if the instrument is calibrated and reference materials or data are used. This article is part of a series of practical guides and covers in more detail one of the points highlighted in an introductory article, “First steps in planning, conducting and reporting XPS measurements.”3 The assumption is that, having planned and considered the experiment, you have decided that a quantitative result regarding thickness, concentration, or surface coverage should be extracted from the XPS data.

In writing this article, my intention is to compile what I think are the important considerations in performing quantitative analysis with XPS. Quantitative XPS is the process of comparing two or more intensities in XPS spectra to determine the amount of material at the surface of a sample. These intensities may be from the same peak or different peaks in the same spectrum or the same set of peaks in different spectra. The amount of material may be expressed as atomic concentration, coverage, area fraction, or thickness. It is all quantitative XPS and involves the following process: acquire data, extract peak areas, and employ an equation or algorithm to arrive at a result.

If one reads the detailed literature, some of the issues that I omit in this article will be highlighted more strongly than those I have emphasized. The reader may have the impression that I have glossed over a number of significant problems. I make no claim to infallibility and hope that, if the reader knows better than I do about such issues, they are already beyond the stage where this article can be of help to them.

Before progressing, it is worth commenting that all measurements have an associated error. The important question is “how much error?” Therefore, I have tried to highlight the most significant (>30%) sources of error as critical for quantitative XPS, to flag those that should be considered to achieve approximately 10% error as important, and, finally, to mention a number that are, in the context of most work, trivial. In this vein, I have simplified many equations so that they contain, as far as possible, only things that an average analyst might be expected to know. Further detail and a more thorough treatment can be found in textbooks on the subject.4,5

I suggest that the reader begins at the summary, where all the major points raised in this article are listed, and then proceeds to any of the sections that they are interested in or about which they would like more understanding. For specific types of materials, more information is available in other guides in this series, for example, polymers6 and catalysts.7 

If it is your intention to use XPS to measure something, then there are several things that you should have at hand. First, of course, it is essential to have an operational XPS instrument, but it is also necessary to know something about the instrument and its performance. Having the correct pieces of information ensures that the data, once you have them, are meaningful.

Modern XPS instruments are reliable and they produce the same data from the same sample, typically to within a few percent variability. This is an excellent starting point for quantitative XPS, but it should not be taken for granted. All instruments drift in performance over time and any maintenance work may also alter performance. If you have any doubt about whether the XPS signal is varying, check that the sample gives the same spectrum on two different occasions during the analysis. It is important to confirm that the sample does not change during XPS analysis before considering that the instrument performance has changed.

1. Variability (critical)

There are many sources of variability, all of which should be assessed. Usually, the most notable changes of a few percent in intensity are due to variations in x-ray power, particularly during warmup after the instrument has just been switched on. Find out how long this takes and let the instrument equilibrate before taking measurements.

It is a good practice to assess the performance of the instrument on a weekly or monthly basis. This is generally performed on a reference material, such as freshly sputter-cleaned silver or gold. Details on this form of regular monitoring of instrument performance are provided in another article in this series8 and in ISO 16129:2018 “SCA-XPS-Procedures for assessing the day-to-day performance of an X-ray photoelectron spectrometer.” The simplest assessment is to acquire a survey spectrum, divide this by a reference spectrum taken from the same sample using the same conditions at an earlier point in time, and identify issues from any deviations in the intensity ratio.9 Such practices ensure that the data from your instrument are at least consistent over time and provide you with a value for the repeatability of your data. It is important to remember that every operational mode of an XPS instrument, such as pass energy, slits, lens mode, apertures, and so on, produces a different, energy-dependent intensity response. Therefore, each of the modes that you use for quantitative analysis should be assessed for repeatability.
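To make this check concrete, the following short sketch (in Python, with synthetic numbers and an assumed 5% tolerance, neither of which is a standard) divides a newly acquired reference-sample survey spectrum by an earlier one taken with the same settings and flags channels where the ratio drifts.

```python
# Minimal sketch of a day-to-day repeatability check: divide a new survey
# spectrum of the reference sample by an earlier one taken with identical
# settings and flag energies where the ratio deviates. The array values and
# the 5% tolerance are illustrative assumptions, not a standard procedure.
import numpy as np

def ratio_check(energy, counts_now, counts_ref, tolerance=0.05):
    """Return the intensity ratio and the energies where it deviates by more
    than the tolerance from its median value."""
    ratio = counts_now / counts_ref
    drift = np.abs(ratio / np.median(ratio) - 1.0)
    return ratio, energy[drift > tolerance]

# Synthetic example: two nominally identical silver survey spectra.
rng = np.random.default_rng(0)
energy = np.linspace(200.0, 1480.0, 1281)            # kinetic energy (eV)
reference = 5000.0 + 2000.0 * np.exp(-((energy - 1120.0) / 15.0) ** 2)
current = rng.poisson(0.98 * reference).astype(float)  # 2% intensity drop
ratio, flagged = ratio_check(energy, current, reference)
print(f"Median ratio: {np.median(ratio):.3f}; channels outside tolerance: {flagged.size}")
```

A flat ratio that differs from unity indicates an overall intensity change, whereas an energy-dependent ratio points to a change in the instrument response.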

Another aspect of variability is the comparability of results generated by your XPS instrument to those generated by other XPS instruments. Such assessments are intermittently carried out through inter-laboratory studies. Previous inter-laboratory studies have shown that there can be significant variability between even nominally identical XPS instruments in different laboratories.10–13 These differences can be mitigated by commonly applied calibration schemes,14–18 but it is of prime importance to remember that uncalibrated data from your instrument should not be compared directly to those from other instruments.

2. Analyzer (important)

If one is simply identifying peak positions in XPS, there are many instrument imperfections that can be ignored. For this “simple identification” purpose, the performance metrics are energy resolution and energy scale calibration. For quantitative XPS, these metrics are less important as long as the peaks being analyzed can be identified and distinguished from other features.

The performance metric for quantitative XPS is detector linearity.19–21 This should be assessed during instrument commissioning, servicing, and at approximately yearly intervals if no regular servicing is performed. Different detector types have different characteristics, but most modern instruments have excellent linearity up to a saturation level. Beyond this limit, many strange things can happen that are often difficult to unravel. For accurate work, it is important to avoid this regime, which will typically start somewhere between 100 kcps and 1 Mcps. Modern instruments are quite capable of achieving such count rates in routine operation, particularly with calibration materials such as silver and gold. Reducing the count rate is simply a matter of reducing the emission from the x-ray anode. Do not alter instrument settings (e.g., irises or apertures) to reduce the count rate unless you have reference spectra or a calibration for the new settings.

Consideration should also be given to dark noise in the detector and scattering in the spectrometer.21,22 These may affect data by producing a broad background but are usually of minor significance except in the case of high-resolution, low-pass energy experiments. Dark noise can be assessed simply by running a spectrum without the x-ray source on, or with both the sample and sample holder well away from the analysis position. Scattering can be identified from low-pass energy survey spectra and will appear as an anomalous background in the XPS spectrum, a shoulder on peaks or, in monochromated instruments, significant counts at energies higher than the x-ray energy. One instrument that I was responsible for produced a distinct peak at the high kinetic energy side of each peak immediately after installation. This feature resulted from electrons scattering from a movable plate near the detector that was failing to close fully. The problem, after identification, was swiftly fixed by the manufacturer. In general, these effects should be resolved by the manufacturer, but it is important that the user is able to identify such problems before generating data.

Finally, and of considerable importance, a calibration of the intensity scale should be performed. Most spectrometers transmit electrons from the sample to the detector with an efficiency that changes with the kinetic energy of the electrons, the mode in which the analyzer is operated, and the analyzer pass energy or retarding ratio of the lens. Each of the instrument modes has an energy-dependent response, which is commonly called the “transmission function.” A correction for the instrument response has to be applied to the raw data to enable a quantitative comparison between different modes of the same instrument. It also permits a comparison of your data to data from other instruments that have been calibrated in the same way.

Most manufacturers will either perform this calibration during service visits or provide a routine for the user to calibrate the instrument. Other methods are also available, and it is worth pointing out that there is a true XPS spectrum (the number of electrons emitted at a particular angle per photon, per eV, per steradian) from any sample, which can be obtained from a properly calibrated XPS instrument.10 Interpretation of XPS results is facilitated by having the true spectrum, or at least a spectrum proportional to the true spectrum, because such data can be directly compared with theory.23–25 
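As an illustration of what an intensity scale correction involves, the sketch below simply divides raw counts by an assumed instrument response; the power-law form of T(E) and its exponent are placeholders, and the response measured for your own instrument and operating mode should be used instead.

```python
# Minimal sketch of an intensity-scale (transmission function) correction:
# divide raw counts by the instrument response T(E) for the mode in use.
# The power-law form of T(E) is an assumed placeholder, not a real response.
import numpy as np

def transmission(kinetic_energy_eV, exponent=-0.7, reference_energy_eV=1000.0):
    # Placeholder response, proportional to E^exponent and normalized at a
    # reference energy. Real responses are measured, not assumed.
    return (kinetic_energy_eV / reference_energy_eV) ** exponent

def correct_spectrum(kinetic_energy_eV, raw_counts):
    # The corrected intensity is proportional to the "true" spectrum, i.e.,
    # electrons emitted per photon, per eV, per steradian.
    return raw_counts / transmission(kinetic_energy_eV)

# Example with a synthetic raw survey spectrum held in NumPy arrays.
energy = np.linspace(200.0, 1480.0, 1281)                       # kinetic energy (eV)
raw = np.random.default_rng(0).poisson(5000, energy.size).astype(float)
corrected = correct_spectrum(energy, raw)
```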

In conjunction with intensity scale calibration, the use of relative sensitivity factors (RSFs) is common in XPS analysis. It is important to select the RSFs that are valid for the calibration procedure employed on the instrument. For example, the sensitivity factors of Wagner et al.26 are appropriate to an instrument with a transmission that declines approximately inversely with electron kinetic energy. Others, such as those from the National Physical Laboratory (NPL), are based on theory and applicable to true XPS spectra. It is a good idea to check that the combination of “transmission function” and RSFs that you are using is consistent by measuring spectra from materials of known stoichiometry that have a clean surface. Ionic liquids16,18,27 are potentially very useful in this regard because they often contain a large number of elements, are electrically conductive, and can be cleaned by ion bombardment. The use of sensitivity factors is considered in detail in another article in this series.28 

It is essential to note that changing any of the following can change the transmission function of the analyzer: aperture or iris settings; lens settings; and analyzer pass energy. The x-ray illumination area also affects the instrument response because transmission efficiency varies away from the focus of the lens. It is also worth noting that the sample itself may affect instrument response due to electric or magnetic fields or topographic features that change the x-ray illumination area.

3. Geometry (trivial, but occasionally important)

The angle between incoming x-rays and detected photoelectrons at the sample should be known. This is a parameter of your instrument that cannot be changed without spanners and a great deal of work. This is not the same as the electron emission angle, or electron take-off angle, which is the geometry of the experiment and can be easily changed by tilting the sample.

For almost all laboratory-based instruments with x-ray sources, the geometry is not important for most quantitative work, but you should be able to report what it is. The geometry affects the photoelectron intensity that reaches the analyzer due to the angular distribution of emission from the atom.29 In contrast to photoelectron emission, Auger electron emission from an isolated atom is isotropic, and, therefore, the average intensity is the same in any direction. This distribution is indicated in Fig. 1(a), where the incoming x-ray is in blue, the atom is a small silver sphere, and the analyzer direction is indicated by a black line with a cone showing the analyzer acceptance angle (here about 15°). The semi-transparent sphere represents the uniform angular emission and the red overlap represents the detected intensity. A typical magic angle (∼55°) geometry between the x-ray source and analyzer is depicted in Fig. 1(a).

FIG. 1.

Illustration of the angular distribution of electron emission from an atom. (a) Isotropic (spherical) emission, typical of Auger electrons. (b)–(f) Example photoelectron emission (β = 1.5) with (b) unpolarized x-rays at 55°, (c) unpolarized x-rays at 90°, (d) polarized x-rays at 55° with x-ray e-vector 45° out of plane, (e) polarized x-rays at 55° with x-ray e-vector in plane, and (f) polarized x-rays at 90° with x-ray e-vector in plane. The intensities are nearly equivalent in (a), (b), and (d). In (c), (e), and (f), they are larger by an approximate factor of 1.3, 1.8, and 2.5, respectively.


For photoemission using unpolarized x-rays, the electron intensity is distributed typically as the closed “doughnut” or red blood cell shape in Fig. 1(b); at the magic angle geometry, the red volume is approximately identical to that in Fig. 1(a). The most intense emission is at a right angle to the incoming x-ray, and many early instruments, including the instrument on which my Ph.D. work was carried out, had a 90° angle between the x-ray source and analyzer as depicted in Fig. 1(c). Other instruments have 45° angles, with a smaller direct intensity into the analyzer, but a reduction in shadowing effects for topographic samples.

In the dipole approximation, which is useful for XPS with aluminum and magnesium x-ray sources, the degree of asymmetry in photoemission is characterized by the parameter β, which has a value between 0 (isotropic emission) and 2 (emission from s orbitals). Orbitals that have angular momentum, i.e., the p, d, and f subshells, have β values that range from approximately 0.6 to 1.8. Depending upon the depth distribution of the element in the sample, this angular distribution is modified by elastic scattering of electrons, and the appropriate, “effective” value of β is smaller than for an isolated atom.

So, why have I categorized this as mainly trivial? For the vast majority of commercial instrument geometries, the error is less than 10% even if angular anisotropy in photoemission is not taken into account. The worst case occurs for 90° instruments, when errors can reach 20% or more, and, for these instruments, corrections for angular distributions in photoemission intensity should be made.

Figure 2 illustrates the effect of changing the angle between x-rays and analyzer from 90° (black line) to 45° (red dashed line). Note that the Auger electron intensity is constant in the two geometries and also that the inelastic backgrounds converge at kinetic energies lower than the photoelectron peaks because elastic scattering randomizes the direction of electrons which have a long path length in the sample.

FIG. 2.

Effect of x-ray to analyzer angle on XPS spectra. Monochromated Al Kα XPS of clean copper at 45° (red dashed) and 90° (black). The absolute intensities of photoelectron peaks change by ∼50% and the Auger electron peak intensities are unchanged.


The problem of electron angular distribution can, however, become critical if a polarized x-ray source, such as a synchrotron, is used. Although laboratory-based instruments with monochromators also have a degree of polarization from the Bragg reflection, this is almost always a trivial effect. Synchrotrons are often designed to generate near-perfect plane-polarized x-rays, which causes some issues for quantitative XPS. Figure 1(d) shows the best geometry for quantitative XPS in the dipole approximation, having the magic angle between incoming x-rays and the analyzer and also the plane of polarization 45° out of the x-ray/sample/analyzer plane. Note that the cross section through the angular emission has the same shape as in Fig. 1(b). In this case, the detected intensity per photon will be similar to an isotropic distribution. However, I have never encountered a synchrotron end station that has this geometry; typical geometries are the magic angle with in-plane x-ray polarization, shown in Fig. 1(e), and the “maximum intensity in the peak” 90° geometry shown in Fig. 1(f). Such geometries pose interesting interpretive challenges if quantitative XPS is desired.

In this section, some of the underpinning detail is provided on how XPS can be made quantitative. In the most general form of XPS analysis, the peak intensities are extracted from spectra and divided by sensitivity factors to provide numbers that correspond to relative compositions. Such analysis places a large amount of trust in the instrument, its calibration, the sensitivity factors, and the ideality of the sample.

1. Reference materials (critical)

This section is vital for careful quantitative analysis in which accuracy is required. Even if you only intend to carry out routine XPS work, it is worth looking through this section to understand the uncertainties that arise from assumptions used in more general and routine XPS analysis.

A sample that requires XPS measurement will have more than one chemical phase in it. For the most accurate measurements, it is best to have pure samples of each of the phases in the sample. Ideal reference materials are flat, single phase, pure compounds of known stoichiometry with a very clean surface. The essential point is that the reference material composition is known by other means and that it will not be determined by XPS. If your samples are to be measured with better than 5% accuracy, then the reference materials should be pure phases of the individual mixed phases in the sample. Each of the reference materials should be measured using the same conditions as those used for the sample measurement, ideally several times both before and after the test sample to establish measurement precision and account for instrument drift. For each phase, this then provides a “pure material” signal intensity, Ii,∞, that can be used to normalize the signal from the sample, Ii, to provide an accurate measurement using equations presented later.

You may ask why this is necessary, because XPS is a well understood technique and, therefore, we should be able to predict the intensity arising from each material. The answer is that we can predict the intensity, but an unknown fraction of that intensity goes into satellite peaks. In general, the majority (typically >80%, but sometimes less than 50%) of the intensity will be in a main, easily visible, peak.28 However, a fraction of the intensity will be in satellite features, which usually appear at a lower kinetic energy than the main peak and are due to additional, intrinsic energy losses that occur during photoemission. These are called intrinsic plasmon losses, shake-up, or shake-off features and arise from additional electronic excitations that can occur simultaneously with the photoemission event. In a few cases, such as for the Cu2+ 2p peaks and Ce4+ 3d peaks, these loss features are sharp, easily identified, and can be included in the quantification. In other cases, they are broad and hard to separate from the inelastic background, which may also include similar lumps and bumps.

Figure 3(a) shows an example, simulated spectrum representative of the 2p region of pure aluminum or silicon where the features at lower kinetic energy contain contributions from both the inelastic background and the intrinsic peak. In the simulation, about 25% of the total intrinsic intensity is in the plasmon feature. These contributions are almost impossible to separate from the inelastic background in a practical analysis. Figure 3(b) is a simulation of the metal oxide spectrum which is, naïvely, modeled without shake-up features. Figure 3(c) represents an overlayer of the oxide on the metal, where the relative intensities in the two main peaks could be used to measure the oxide thickness. Using the reference spectra, this is straightforward and the analysis can be carried out with little error. However, if theoretical intensities were used, it would be difficult to account for the different amounts of intrinsic intensity outside the analysis region and the resulting thickness would almost certainly be erroneous.

FIG. 3.

Why we need reference materials. (a) Simulated spectrum of an element with plasmon losses, which are both intrinsic and extrinsic. The inelastic background (red, lower solid curve) is shown under the total simulated spectrum (black, upper solid curve). (b) Simulated spectrum of the metal oxide, which is modeled without a shake-up structure. (c) Simulation of an overlayer of oxide on the metal, the practical analysis region is indicated by dashed lines, which misses the intrinsic plasmon intensity.


In practice, it is often hard to get hold of such clean, pure, and flat reference materials and, therefore, some thought is required to acquire suitable materials. It is often necessary to find reference materials that are not identical to the phases of the sample under study but do contain chemically similar environments in known concentrations. For example, when seeking reference materials for organic overlayers on gold surfaces and nanoparticles, it was not possible to source the exact organic materials. Instead, other pure organic materials were used and an assumption was made that they have similar effective attenuation lengths (EALs) and atomic densities.30,31 It was found that the experimental ratio of reference intensities was 40% different from that predicted theoretically, almost certainly due to consistent errors in determining peak areas and excluding some of the intrinsic structure.

The two examples of oxide on metal, with ∼25% error in a peak intensity, and organic on gold, with ∼40% error in a peak intensity, illustrate the point of this section. The standard XPS practice of reporting elemental concentrations without reference material data is likely to be inaccurate. In the worst case, if there are different phases in the sample, then the relative error could be as high as a factor of 1.5. However, this lack of accuracy does not detract from the fact that XPS is remarkably precise and relative changes in composition of less than 1% are often easy to detect. For most practical purposes, this is sufficient.

2. Sensitivity factors (important)

The majority of quantitative XPS measurements express an analysis result as an atomic fraction of each element or chemical state. For this purpose, it is essential to have a set of sensitivity factors, Si, for each type of photoelectron in the sample. These essentially represent the intensity of the peak relative to another, reference, elemental peak. If your instrument is not calibrated in any way, then it is necessary to find these yourself. Wagner et al.26 did this for an Al Kα XPS instrument by measuring the intensities from a set of fluoride compounds of known stoichiometry and, therefore, the natural reference peak was F 1s. They found that the C 1s sensitivity factor was SC1s = 0.25, implying that for polytetrafluoroethylene (PTFE) (CF2), the F 1s peak is eight times more intense than the C 1s. A similar approach was taken by Edgell et al.32 when they determined experimental sensitivity factors for an Ag Lα x-ray source. As noted previously, such sensitivity factors are ideal for the instrument and operating mode that were used to measure them, but only applicable to instruments and operating modes that have the same energy-dependent transmission, or the same intensity scale calibration procedure. For an Al Kα instrument with constant transmission at all energies, the F 1s peak for CF2 is actually only six times as intense as the C 1s peak.18 The ∼25% difference between the sensitivity factors found in these references is indicative of the error incurred by using the wrong sensitivity factors. More detail can be found in ISO 18118:2015 “SCA-XPS-Guide to the use of experimentally determined relative sensitivity factors for the quantitative analysis of homogeneous materials.”
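A quick way to test this consistency is to push measured peak areas from a known-stoichiometry sample through the RSFs in use; the sketch below does this for PTFE with invented peak areas and the Wagner-type convention (F 1s = 1.00, C 1s = 0.25) mentioned above.

```python
# Minimal sketch of a consistency check: peak areas from a clean sample of
# known stoichiometry (here PTFE, CF2) are converted to an atomic ratio with
# the RSFs in use. The peak areas are invented numbers for illustration.
def atomic_ratio(area_a, rsf_a, area_b, rsf_b):
    return (area_a / rsf_a) / (area_b / rsf_b)

area_F1s = 80000.0   # hypothetical peak area (cps eV)
area_C1s = 10000.0   # hypothetical peak area (cps eV)
rsf_F1s, rsf_C1s = 1.00, 0.25   # Wagner-type RSFs quoted in the text

ratio_F_to_C = atomic_ratio(area_F1s, rsf_F1s, area_C1s, rsf_C1s)
print(f"F:C = {ratio_F_to_C:.2f} (expect 2.0 for CF2)")
# With Wagner-type RSFs, the raw F 1s area should be ~8x the C 1s area on an
# instrument whose transmission matches that calibration; a large deviation
# from F:C = 2 indicates an RSF/transmission mismatch.
```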

If you have any doubt whatsoever, then analyze a few samples with known stoichiometry and a clean surface such as: PTFE, freshly spin-cast or scraped poly(methyl methacrylate), a salt such as a fluoride, or an ionic liquid.16,18 If you do not get the answer you are expecting, then something is wrong.

For true XPS spectra, it is possible to estimate sensitivity factors from theoretical parameters. In magic angle instruments, this can be as simple as multiplying the theoretical photoionization cross section by the IMFP, that is, Si is proportional to σi λi,M and the only question is which representative material “M” to use for the IMFP and which element to use as the reference element. This type of calculation is quite adequate for most purposes,18 especially when it is considered that the relative variation in theoretical cross sections may be in the region of 10% depending upon which theory is used.33–35 Far more detailed analysis has been carried out,24,25,36,37 but the typical relative scatter between experimental and theoretical sensitivity factors is ∼10%.38 For any sort of routine work without appropriate reference materials, this should be considered as the best possible accuracy of standard XPS measurements, while remembering that compositional changes smaller than 1 at. % can be identified routinely.

Finally, in some sources of theoretical sensitivity factor data, the separate components of spin–orbit doublets are each assigned a sensitivity factor, but these can sometimes be impossible to separate in a spectrum. If this is the case, the correct approach is to sum the two sensitivity factors; for example, SSi2p, the sensitivity factor for the silicon 2p peak, will be the sum of the sensitivity factors for the silicon 2p3/2 and 2p1/2 peaks.

3. Full survey spectrum (important)

In quantitative XPS, all the elements present, except hydrogen and helium, should be considered and reported. Therefore, it is necessary to take a full survey spectrum of the sample and report all of the elements detected. There should be no need to say any more than this, but there may be a temptation among the inexperienced to save time by not taking a full survey spectrum or to disregard certain elements that are not considered important.

4. Information about the sample (important)

The more information you have about the sample, the better. It is always helpful to have some idea which elements you are looking for and at roughly what concentrations. XPS has variable sensitivity: generally very good for heavy elements and not so good for light elements. It is also good to check whether the experiment is even feasible or to identify whether it will take a long time due to poor sensitivity.39 There are often peak overlaps that make the identification of certain elements difficult, for example, aluminum in copper for most standard x-ray sources, which cannot access the Al 1s orbital.

If the sample is topographic, for example, if it is rough or has a cylindrical or spherical form, then this will affect the measurement. Often, the topography will merely reduce the intensity of peaks due to shadowing effects. However, there will also be a wider variety of electron emission angles relative to the local surface normal. This will alter the calculation method for film thickness or overlayer coverage measurements.40–43 For such samples, microscopic data such as atomic force microscopy or electron microscopy are helpful.

The sample is best considered as part of the instrument and some samples may change the performance of the XPS spectrometer in a significant way. Magnetic materials may significantly alter the trajectory of electrons and many instruments use magnetic lenses that can change the magnetization of the sample itself. Considerable expertise is required to deal with these types of materials and quantitative analysis should not be attempted from XPS data of strongly magnetic materials without understanding the pitfalls. Highly topographic, nonconducting samples can also affect XPS intensities by altering the trajectory of emitted electrons when they become charged under an electron flood gun. It is difficult to predict the effect that this will have on the instrument transmission, and in these cases, quantitative analysis should also be approached with caution.

If the sample is a single crystal, you need to be careful about the orientation of the crystal with respect to the emission angle of the electrons. Most quantitative XPS analyses assume that the sample is either amorphous or polycrystalline. The angular distributions of photoelectrons and Auger electrons are influenced by the local arrangement of atoms, which is a significant effect for single crystals;44 this is known as photoelectron diffraction.45,46 These effects can be used as a powerful form of structural analysis but, in the context of routine quantitative XPS analysis, they can be annoying. These diffraction effects can be reduced by both selecting the sample angle to avoid collecting electrons emitted along the low-index axes of the crystal and also by increasing the collection angle of the analyzer to average out diffraction effects.47 

After checking that you have all the necessary data and information, it is time to start the process of quantitative analysis. The ability to assign peaks correctly is an important step and, for quantitative analysis, this becomes critical when there are peak overlaps. Also remember that, if you have one peak from an element, then all the other peaks from that element should also be observed in the survey spectra. If an element is present, but the main peak overlaps another peak from a different element, then the two recourses are to (1) find another peak from the same element that does not overlap or (2) attempt a peak fit; an example of this method is provided later. Peak fitting different chemical states of the same element is also a form of analysis that may be quantitative. This is not covered in detail in this article because it is dealt with in another article in this series.48 

A good quantitative analysis hinges upon making a sensible choice for which peaks to use and how they should be measured. There are a number of issues here, but, in general, it is best to use the sharpest peaks. These tend to be the “leading” peaks with the highest orbital angular momentum in each shell, i.e., 1s, 2p, 3d, and 4f. Sharp peaks are easier to find in a spectrum and it is slightly less challenging to define a suitable background for them.

1. Choosing a suitable background (critical)

The separation of an XPS peak from its associated background is a process that is difficult to get right. If accuracy in the raw peak area is required, then one needs either a reference material, as discussed previously, or a great deal of detailed and time-consuming work. For most practical work, the commonly adopted approach is to select a background that is fit for purpose, and this essentially restricts the choice to linear, Shirley,49 or Tougaard50 backgrounds. A linear background usually works well for overlayers, or if the other two are not working for various reasons. Shirley is useful if there is a “step” in the background after a peak and it is commonly used for peak fitting. It makes the assumption that the change in background intensity is proportional to the peak intensity above the background. Tougaard’s background has a physical basis, calculating the background from an inelastic scattering cross section.51 Although the choice of the cross section can be uncertain, Tougaard’s background should be preferred for quantitative analysis, particularly for peak areas from survey spectra, because it provides results that are often similar to accurate experimental results.52–54 Tougaard’s background requires a large energy range in the background at lower kinetic energy than the peak, typically more than 30 eV, and this should be considered before collecting data. Additionally, there can be problems from overlapping peaks within this large energy window.
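For readers who want to see what the Shirley construction actually does, the following is a minimal sketch of the standard iterative algorithm (not any vendor's implementation); it assumes the arrays run in increasing kinetic energy and that the chosen end points really do lie in featureless background.

```python
# Minimal sketch of an iterative Shirley background for one peak region.
import numpy as np

def shirley_background(energy, counts, n_end=5, max_iter=50, tol=1e-6):
    """energy, counts : 1D arrays in increasing kinetic energy order, covering
    the peak plus adequate background on both sides.
    n_end : channels averaged at each end to define the end points."""
    b_lo = counts[:n_end].mean()     # background level at the low-KE end (the "step")
    b_hi = counts[-n_end:].mean()    # background level at the high-KE end
    background = np.full(counts.shape, b_hi, dtype=float)
    for _ in range(max_iter):
        peak = counts - background
        # Trapezoidal area of each energy step, then the peak area lying at
        # higher kinetic energy than each point: the Shirley assumption is
        # that the background rise at E is proportional to that area.
        seg = 0.5 * (peak[1:] + peak[:-1]) * np.diff(energy)
        area_above = np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))
        if area_above[0] <= 0:       # no net peak area; fall back to a flat background
            return background
        new_bg = b_hi + (b_lo - b_hi) * area_above / area_above[0]
        converged = np.max(np.abs(new_bg - background)) < tol * max(abs(b_lo), 1.0)
        background = new_bg
        if converged:
            break
    return background
```

Whatever implementation is used, the result is only as good as the end points supplied to it, which is the subject of the next paragraphs.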

However, the really critical issue in determining peak areas is not the shape of the background, but the choice of the end points on either side of the peak that define the background.

Figure 4 illustrates some of the problems that may be encountered in defining a background. The spectrum in Fig. 4(a) is generated as a simplistic model of the 2p region of a transition metal such as iron, which has an oxide overlayer. The reason for using a model is so that it is clear which region corresponds to the peak and which corresponds to the background. The two metal peaks, 2p1/2 and 2p3/2, have a higher intensity in the pure material, a lower binding energy, and are sharp. However, because the metal is under the oxide layer, they have a small intensity even though the metal contributes significantly to the inelastic background. The oxide overlayer contributes two broad peaks, but less intensity to the inelastic background. The blue dashed line is the division between peak and background from the model and it is unreasonable to expect any standard background shape to match it exactly because there is no way of inputting the layer structure. By choosing end points close to where the model background starts and ends and a variety of background shapes, the gray shaded area represents the range of estimated background positions. In this case, both standard Tougaard and Shirley backgrounds provide areas within 10% of the model and a linear background within the gray area has a 15% discrepancy. Thus, the background shape can be important but is not usually critical.

FIG. 4.

Choosing a background. (a) Simulated XPS spectrum typical of the 2p region of a 3d metal with an oxide overlayer. The blue dashed line is the background from the simulation and the gray area is the range of background positions found using reasonable selections of algorithms and end points. The red dashed line with a cross is an unreasonable linear background. (b) Linear background on a noisy peak; dashed red lines use one point in the data and the blue line uses an average position over a large number of background data points. (c) Problems encountered analyzing peaks on a sloping background. Naïve interpretations of the Tougaard (red dashed) and Shirley (red solid curve) backgrounds are unphysical; the blue line is linear and the only reasonable choice here.


The dashed red line with a big cross through it illustrates the effect of a badly chosen end point. A linear background is shown because, in this case, it is the most sensitive to end point selection, with an approximate 100% error in peak area for the line in the figure. It is worth noting that, for these end point choices, a Shirley background (not shown) is hardly better than a linear background and even a Tougaard background using normal parameters and assuming depth homogeneity has a 50% error. The link between the peak intensity and the background intensity is not straightforward because of the layer structure and accurate background subtraction requires a more detailed model.

Figure 4(b) illustrates a simple error made by inexperienced XPS users. Here, there is a weak peak with significant noise and a linear background is used. The two red dashed lines use single points in the data to define the end points of the background. These are clearly poor choices compared to the blue line, which uses an average of many data points to define the end points in the background before and after the peak. The lesson to remember is that the line describing the background should go through the middle of the noise in the data before and after the peak. This is very easy to assess visually. It is also quantitatively critical; the two red dashed lines provide areas of either zero or twice that provided by the blue line, i.e., 100% error. Guidance is provided by NIST (Ref. 55) on the means of achieving the lowest statistical uncertainty in XPS peak areas. There should be at least as many data points on each side of the peak to define the background end points as there are data points across the peak itself. Thus, it is essential to collect at least twice as much background data as peak data if you want to minimize statistical uncertainty in the peak area. It is also possible to “actively” fit the background and peak simultaneously, which may help if you have inadvertently not collected enough background data.56 The uncertainty associated with this approach is not clear and it is affected by the choice of both the peak shape and the background shape. Of course, this does not ensure your choice of background is correct, but it is a warning against trying to make experiments faster by chopping the background regions out of the scan.
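The lesson from Fig. 4(b) can be encoded in a few lines: average many channels at each side of the region to define the end points of a linear background, then integrate the peak above it. The 20 channels per side used here is an illustrative choice, not a rule.

```python
# Minimal sketch of a linear background whose end points are averages over
# many channels rather than two single (noisy) data points, followed by
# integration of the peak above it.
import numpy as np

def peak_area_linear_background(energy, counts, n_end=20):
    """Peak area (counts x eV) above a linear background; end points are
    averages over n_end channels at each end of the region."""
    x_lo, y_lo = energy[:n_end].mean(), counts[:n_end].mean()
    x_hi, y_hi = energy[-n_end:].mean(), counts[-n_end:].mean()
    slope = (y_hi - y_lo) / (x_hi - x_lo)
    background = y_lo + slope * (energy - x_lo)
    return np.trapz(counts - background, energy)
```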

While on the topic of correct background choices, Fig. 4(c) is included to emphasize that software will let you do things that even a novice should be able to spot as an error. Problems can arise for peaks on a sloping background, particularly one that increases in intensity with kinetic energy. This is actually quite common for surface species on a bulk material in calibrated XPS data, but is not so common for uncalibrated data from older instruments. Some implementations of the Tougaard and Shirley backgrounds assume that there is no slope in the background; this leads to a flat line for the Tougaard background (the red dashed line) and a negative peak area. Since the “flatline Tougaard” is so obviously wrong, I have not seen it used in publications or presentations. However, I have seen the “upside-down Shirley” shown by the red solid line. In the case shown in Fig. 4(c), it gives a similar area to the blue linear background because of the end point choices, but in other cases it can give very inaccurate peak areas and should never be used in peak fitting.

2. Peak fitting (important)

The purpose of a peak fit is to separate overlapping peaks in the same region of the spectrum in a physically meaningful way in order to extract useful information. This can be relatively straightforward if the relative peak positions are known from the literature and the peaks are sharp and well-defined. For organic materials, this is often the case and reference works are available.57 For other materials, care should be taken, especially if there are extensive and intense shake-up satellites.58 In some cases, peak fitting is the only means of performing an analysis. Assessing the accuracy of a peak fit is not at all easy and I will not attempt to do it here. However, it should be noted that uncertainty calculations from the reduced chi-squared will only tell you the level at which the model agrees with the data; they do not tell you whether the model itself is correct. Most peak fits fail the “physically meaningful” criterion for the following reasons: the peak assignments are flaky; the background is poorly modeled; shake-up structure is not considered; or the depth distribution of components is not taken into account. Naturally, it is important to have sufficient energy resolution to distinguish the peaks of interest and this often means using a small pass energy. It is possible to compare relative intensities of peaks within a small kinetic energy range, but the raw intensities should not be directly compared to higher pass energy data, or different energy regions, without transmission function correction.

Figure 5 illustrates spectra from a material containing lanthanum, zirconium, oxygen, lithium, and variable amounts of gallium. The concentration of gallium was of interest but this was a minority component and the only sharp features suitable for analysis were the Ga 2p peaks, which overlap the La 3p3/2 peak. This issue was identified before XPS analysis, and a sample was made without any gallium. The gallium-free sample spectrum is shown in Fig. 5(a) and a spectrum from one of the gallium-containing samples in Fig. 5(b). The reasons for the peak shapes here are not important, but may largely be ascribed to damage after ion bombardment. To obtain the gallium peak areas, the best approach was to first find a description of the La 3p3/2 peak shape as shown in Fig. 5(a). This shape was then applied to the data in Fig. 5(b) along with Ga 2p peaks; in this case, the constraints included the known separation between the spin–orbit doublets; the known 2:1 intensity ratio between the j = 3/2 and j = 1/2 pairs; and a constraint to keep the widths of each spin–orbit pair the same (NB: this constraint is not a strict necessity, but is always a useful starting point). Two pairs of Ga 2p peaks were found to be required. The fit is driven by the Ga 2p3/2 peaks, which are relatively clear of the structure at higher binding energy. The fact that the whole data set is matched by the fit without any additional peaks provides confidence that the Ga 2p area has been measured with reasonable accuracy.
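For readers who want to reproduce this style of constrained fit, the sketch below fits a single spin–orbit doublet with a fixed splitting, a fixed 2:1 area ratio, and a shared width. It is a simplification of the Fig. 5 analysis: Gaussian line shapes on a linear background replace the measured La 3p3/2 reference shape, and the splitting value and synthetic data are assumptions for illustration only.

```python
# Minimal sketch of a constrained spin-orbit doublet fit with Gaussian line
# shapes on a linear background (a simplification of the Fig. 5 analysis).
import numpy as np
from scipy.optimize import curve_fit

SPLIT_EV = 26.9   # assumed Ga 2p spin-orbit splitting, for illustration only

def gaussian(x, area, centre, fwhm):
    sigma = fwhm / 2.3548
    return area * np.exp(-0.5 * ((x - centre) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def doublet_model(be, area32, centre32, fwhm, b0, b1):
    # One free area; the 2p1/2 partner is constrained to half the area,
    # shifted by the fixed splitting, with the same width.
    peaks = (gaussian(be, area32, centre32, fwhm)
             + gaussian(be, 0.5 * area32, centre32 + SPLIT_EV, fwhm))
    return peaks + b0 + b1 * be

# Example usage with synthetic data on a binding energy scale.
be = np.linspace(1110.0, 1160.0, 501)
truth = doublet_model(be, 3000.0, 1117.5, 2.2, 50.0, 0.0)
data = np.random.default_rng(1).poisson(truth).astype(float)
p0 = [2000.0, 1118.0, 2.0, 40.0, 0.0]
popt, pcov = curve_fit(doublet_model, be, data, p0=p0)
area_total = popt[0] * 1.5   # 2p3/2 area plus the constrained 2p1/2 partner
```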

FIG. 5.

Using a reference spectrum to cope with overlaps. (a) La 3p3/2 peak shape determined from a sample without gallium. (b) Fitting of the Ga 2p peaks in a sample containing gallium. Sample courtesy of Federico Pesci and Sarah Fearn, Imperial College, UK.


3. Data mixing and matching (important)

The direct comparison of data taken using different modes of an XPS instrument should be carried out with care. In some cases, it is necessary. For example, a quantitative analysis of atomic concentrations is best carried out at high pass energy due to higher counts and the lower influence of effects like scattering in the spectrometer. At the same time, an element may be in different states and will also require a low-pass energy, high-resolution spectrum to separate these. The low-pass energy spectrum can be used to find the fraction of elements in each state, and this information can be combined with the high pass energy quantitative analysis. If there are valid transmission function corrections available for the low-pass energy mode, then quantitative analysis can also be carried out using those spectra. Since a high pass energy survey spectrum is required in any case, the data may as well be employed to support a quantitative analysis. If it disagrees with the low-pass energy results, then you need to check whether your sample is being damaged during analysis or whether you have a problem with your instrument.

Make sure that all the peak areas have the same units, which are typically written as cps.eV or counts.eV. If they do not have the same units, then nonsense will result. I also strongly advise you to make sure you understand what any software packages you use do to your data and check that the peak areas they report are what you think they are. For example, you should understand when and how a transmission function is applied. Sometimes, it is applied to the whole data set and sometimes it is applied only to the peak areas that have been extracted from the raw data. You should ensure that it is not applied twice to your data. It is worth working through the calculations yourself for one or two sets of data and checking that the software gives the same result.

Now that we have peak areas, we have to turn them into something understandable. This is always done by comparing one peak area to another, and the simplest example is a peak fit of the C 1s region. The area of each peak, compared to the total C 1s area, can be interpreted as the fraction of carbon in the chemical environment represented by that peak, weighted by the depth distribution of the various species and the depth sensitivity of XPS. The comparison of peak intensities from different elements or to measure film thickness is somewhat more involved.

1. Equivalent homogeneous atomic fraction

The most commonly used method of reporting XPS data is as atomic fractions and usually as an atomic percent (at. %). This convention is perfectly fine and understandable as long as it is remembered that there is an underlying assumption. The assumption is that the sample is homogeneous and single phase within the XPS sampling depth. Very few samples actually meet this criterion and it is a matter of faith that both the person reporting the result and the person reading the report understand the assumption. If there is any doubt, I recommend making it clear using the phrase “equivalent homogeneous composition.” The calculation [Eq. (1) in Table I] is very simple. For each element, select a peak and divide the area of that peak, Ii, by the sensitivity factor, Si, to obtain a normalized peak area, Ii/Si. The equivalent homogeneous atomic fraction, Xi, of each element is simply that element's normalized peak area divided by the sum of all normalized peak areas. To get atomic percent (at. %), simply multiply Xi by 100%.
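As a minimal illustration of Eq. (1), the sketch below converts peak areas into equivalent homogeneous atomic fractions; the element labels, areas, and sensitivity factors are invented and should be replaced by values consistent with your own intensity calibration.

```python
# Minimal sketch of Eq. (1): equivalent homogeneous atomic fractions from
# normalized peak areas. All numbers below are hypothetical.
def atomic_fractions(areas, rsfs):
    """areas, rsfs: dicts keyed by element/peak label."""
    normalized = {k: areas[k] / rsfs[k] for k in areas}
    total = sum(normalized.values())
    return {k: v / total for k, v in normalized.items()}

areas = {"C 1s": 15500.0, "O 1s": 31000.0, "Si 2p": 5200.0}   # cps eV (hypothetical)
rsfs = {"C 1s": 1.00, "O 1s": 2.93, "Si 2p": 0.82}            # hypothetical RSFs
for peak, x in atomic_fractions(areas, rsfs).items():
    print(f"{peak}: {100 * x:.1f} at. %")
```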

TABLE I.

Summary of some equations used in quantitative XPS. Here, Ii is a measured peak area, Si its sensitivity factor, Ii,∞ the corresponding pure material intensity, θ the electron emission angle from the surface normal, and Lp,P the effective attenuation length of the overlayer electrons in the overlayer.

(1) Xi = (Ii/Si)/Σj(Ij/Sj). Quantity: atomic fraction. Assumption: homogeneous sample.
(2) Ap,q = (Ip/Ip,∞)/(Iq/Iq,∞). Quantity: relative area of P to Q. Assumption: homogeneous in depth.
(3) …. Quantity: coverage of P on Q. Assumption: low coverage.
(4) t = Lp,P cos θ ln(1 + Ap,q). Quantity: thickness of P on Q. Assumptions: flat sample and Ep ≈ Eq.
(5) …. Quantity: thickness of P on Q. Assumption: flat sample.

2. Relative area

If the sample is suspected to have different phases within the volume analyzed by XPS, then the equivalent homogeneous composition can only be used as an indication of the amount of each material present. However, if the distribution of material is known, then more accurate information is available. A simple, but rare, case is a mixture of two phases (P and Q) at the surface, which occupy a constant fractional area of the surface within the XPS information depth. The normalized relative intensity Ap,q [Eq. (2) in Table I] represents the relative area of phase P to phase Q. If these are the only two phases, then the fractional area of phase Q is (1 + Ap,q)−1.

The trickiest problem is finding the ratio of the pure phase intensities, Iq,∞:Ip,∞. This is best done with pure, flat reference materials as described earlier. If this is not possible, but there are clean mixed materials available with similar surface conditions, then the pure material intensities can be estimated by assuming that the signal is proportional to the fractional area of the surface. For a binary mixture, this is illustrated in Fig. 6(a) by plotting the intensity of a peak, q, from one component, Q, against a peak, p, from the other component, P. It is then possible to extrapolate to the axes and estimate the pure phase intensities or use the slope of a linear fit to obtain the ratio of pure material intensities. The error can be estimated from the scatter of the points around a linear fit. A wide range of compositions is required to reduce uncertainty. Please note that a consistent variation in contamination, topography, or instrument performance will cause the estimated intensities to be wrong and the plot itself will not identify such issues. Therefore, these possibilities should be checked by looking for contaminants in the survey spectra, measuring topography, measuring the samples in the same area at least twice in different sequences and using different areas of each sample to assess variability.
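The construction of Fig. 6(a) is easy to script; the sketch below fits a straight line to synthetic intensity pairs from a set of two-phase samples and extrapolates to the axes to estimate the pure material intensities and their ratio. Real data still need the checks listed above.

```python
# Minimal sketch of the Fig. 6(a) construction: peak intensities from a set
# of two-phase samples are fitted with a straight line and extrapolated to
# the axes to estimate the pure-material intensities. The data are synthetic.
import numpy as np

# Measured peak intensities (p from phase P, q from phase Q) for several
# samples with different area fractions of the two phases (hypothetical).
i_p = np.array([120.0, 300.0, 520.0, 700.0, 910.0])
i_q = np.array([880.0, 720.0, 510.0, 330.0, 110.0])

slope, intercept = np.polyfit(i_p, i_q, 1)
i_q_pure = intercept              # intensity where the p signal vanishes
i_p_pure = -intercept / slope     # intensity where the q signal vanishes
residuals = i_q - (slope * i_p + intercept)
print(f"Iq,inf ~ {i_q_pure:.0f}, Ip,inf ~ {i_p_pure:.0f}, "
      f"ratio ~ {i_q_pure / i_p_pure:.2f}, rms scatter ~ {residuals.std():.0f}")
```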

FIG. 6.

Estimating pure material intensities from a set of samples. (a) Two phases P and Q, homogeneous in depth. The black solid line represents ideality and the data points are shown with added noise. The red dotted line is a linear fit to the data. (b) Two phases with P on top of Q; the black solid line is the expectation with equal effective attenuation lengths and the dashed black lines are the expectation with factors of 3/4 and 4/3 difference in the effective attenuation lengths for the overlayer and substrate electrons.


If no other information is available, the pure material intensity ratio can be estimated from first principles using photoionization cross sections, inelastic mean free paths (IMFPs), and atomic densities, if these are known or can be estimated for the material under investigation. For reasons given earlier, these estimates should be associated with a high (>20%) uncertainty.

Although the use of the normalized relative intensity Ap,q as a measure of relative area is not common, I have introduced it here because it is used in other XPS measurements, such as the coverage of one material on another [Eq. (3) in Table I]. It is much more frequently used as an input for the measurement of overlayer thickness.

3. Overlayer thickness

The most common assumptions in measuring film thickness are (1) that the intensity of electrons declines exponentially with distance travelled through a material, (2) that the substrate is flat, (3) that the overlayer has uniform thickness, and (4) that the electrons all have the same effective attenuation length in the overlayer material. With these assumptions, the thickness of the overlayer, P, on the substrate, Q, is given by Eq. (4) in Table I. The equation uses the normalized intensity ratio, Ap,q, and once again the main problem is finding the pure material reference intensities. If reference samples are not available, but a range of samples with different film thicknesses is available, then a plot of peak intensities from each material can be used in a similar manner to that described for relative area reference intensities as shown in Fig. 6(b).
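Taking Eq. (4) in its familiar form, t = Lp,P cos θ ln(1 + Ap,q), the calculation is a one-liner; the intensities and EAL in the sketch below are hypothetical numbers for illustration.

```python
# Minimal sketch of the uniform-overlayer thickness estimate under the
# assumptions listed above (flat sample, exponential attenuation, similar
# kinetic energies): t = L * cos(theta) * ln(1 + Apq).
import numpy as np

def overlayer_thickness(i_p, i_p_pure, i_q, i_q_pure, eal_nm, emission_angle_deg=0.0):
    """Thickness of overlayer P on substrate Q, in the same units as eal_nm."""
    a_pq = (i_p / i_p_pure) / (i_q / i_q_pure)   # normalized intensity ratio, Eq. (2)
    return eal_nm * np.cos(np.radians(emission_angle_deg)) * np.log1p(a_pq)

# Example: overlayer peak at 40% of its pure-material intensity, substrate
# peak at 55% of its pure-material intensity, EAL 2.8 nm, normal emission.
t = overlayer_thickness(i_p=0.40, i_p_pure=1.0, i_q=0.55, i_q_pure=1.0, eal_nm=2.8)
print(f"Overlayer thickness ~ {t:.2f} nm")
```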

If the electrons from peak “p” have a different kinetic energy to the electrons from peak “q,” then Eq. (4) is no longer valid. Also, the plot in Fig. 6(b) should no longer be a straight line, but this may be hard to spot in the presence of noise or if there is a limited range of thicknesses in the samples. Dashed black lines are included to indicate the expected effect of unequal sampling depths for the electrons. Estimating the ratio of effective attenuation lengths is quite easily done because the electron kinetic energy is known [Eq. (A2) in the  Appendix].

Assumption (1) concerning exponential attenuation is a good one; however, it is important to use EALs rather than IMFPs. The former account for the fact that electrons can change direction in a sample.

Assumption (2), concerning flat samples, requires checking by microscopy, preferably AFM because this provides quantitative height and slope information. Samples can look quite rough in AFM because of an exaggerated height scale, but the lateral distance over which a height change occurs often means that the local surface normal is close to the “average” surface normal of the sample. For XPS, it is the slope that matters, and it should not be a major concern unless the range of local emission angles from the sample exceeds ∼10° or so. If the sample consists of, for example, spheres or fibers, then the equations given here are not valid and corrections are required.41,42 Core–shell nanoparticles also require special consideration.40,43,59

Assumption (3) regarding the uniformity of the overlayer is difficult to test without the aid of microscopy. For flat samples, changing the take-off angle and hence the information depth could identify some forms of nonuniformity.60 Similarly, photoelectron peaks from the same element but with different EALs could be used.61 If your instrument has two, quite different x-ray energies, then nonuniformity may be assessed by changing the x-ray source and analyzing the same area.18 A more elegant approach is to perform a detailed analysis of the inelastic background, which can reveal nonuniformity.62,63 All these methods are far from routine, require considerable understanding, and can generally only identify the grossest forms of nonuniformity (such as holes or islands). The effect of unidentified overlayer nonuniformity on an XPS thickness measurement is that the average thickness of the layer will be underestimated and this is true in all cases, from flat films62 to nanoparticles.64 

Assumption (4) is applicable when the electron kinetic energies being compared are within ∼5% of each other. The important effective attenuation length is that for electrons travelling through the overlayer material, P. Because the kinetic energies of the electrons p and q are similar, the EAL for the overlayer electrons, Lp,P, is approximately the same as that for the substrate electrons, Lq,P. If the EALs are different, then another approach is required. The options are: iteratively changing t in the relevant equation [Eq. (A3) in the Appendix] until it matches the experimental Ap,q; using a graphical method;65 or using an accurate empirical equation such as that given in Eq. (5). Equation (5) in Table I is mathematically identical to a previous, validated direct equation for flat surfaces with uniform overlayers.40
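A minimal sketch of the iterative option, written under the straight-line approximation described in the Appendix, is given below: the normalized intensity ratio is expressed with separate EALs for the overlayer and substrate electrons in the overlayer material, and the thickness is found by bisection because the ratio increases monotonically with t. The function names and numerical values are illustrative only.

    import numpy as np

    def A_pq_model(t, L_pP, L_qP, theta_deg=0.0):
        """Normalized intensity ratio A_pq for overlayer thickness t (nm), with EALs
        L_pP (overlayer electrons p) and L_qP (substrate electrons q) in overlayer P,
        under the straight-line approximation."""
        c = np.cos(np.radians(theta_deg))
        return (1.0 - np.exp(-t / (L_pP * c))) / np.exp(-t / (L_qP * c))

    def thickness_from_A(A_measured, L_pP, L_qP, theta_deg=0.0, t_max=20.0):
        """Bisection on t; choose t_max comfortably larger than any plausible thickness."""
        lo, hi = 0.0, t_max
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if A_pq_model(mid, L_pP, L_qP, theta_deg) < A_measured:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Hypothetical example: A_pq = 0.8 measured at normal emission,
    # with L_pP = 2.2 nm and L_qP = 2.9 nm.
    print(thickness_from_A(0.8, 2.2, 2.9))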

When reporting the results of an XPS analysis, it is important to provide sufficient detail that the analysis can be repeated by others. More details can be found in ISO 13424:2013 “SCA-XPS-Reporting of results of thin-film analysis,” ISO 15470:2017 “SCA-XPS-Description of selected instrumental performance parameters,” ISO 19830:2015 “SCA-XPS-Minimum reporting requirements for peak fitting in X-ray photoelectron spectroscopy,” and ISO 20903:2019 “SCA-XPS-Methods used to determine peak intensities and information required when reporting results.” In an academic publication, there is no barrier to providing all the necessary information in the electronic supplementary information. Ideally, the following should be specified as a minimum:

  1. Instrument details

    • Make and model of the XPS instrument.

    • Lens mode and pass energy or retarding ratio.

    • Angular range of electrons collected.

    • Dimensions of analysis area.

    • Type of anode or x-ray source.

    • Geometry of the instrument.

    • Linear range of detector in cps.

    • Method of energy and intensity calibration.

  2. Information used

    • References to sources of RSFs, EALs, IMFPs, and so on.

    • A list of the values used, especially if these are not in a readily accessible reference.

  3. Samples

    • Emission angle of electrons.

    • X-ray intensity: anode potential and current, power or flux.

    • Surface roughness or topography, if known.

    • Number of areas analyzed and number of repeats.

    • Any details of degradation or change during analysis.

    • Reference materials and samples.

  4. Data

    • Survey spectrum, with all detected elements identified.

    • Regions analyzed.

    • Width of regions and data point spacing.

    • Type of background and method of end point determination.

    • The software used to analyze data.

    • Area of peaks used in quantification and whether these are raw or calibrated.

    • Full details of any peak fitting: position, width, shape, constraints.

  5. Results

    • Equations used to analyze the data, with references.

    • Estimate of the uncertainty in the results.

This may seem a very exhaustive and exhausting list, but many of these details will be identical for a large number of analyses carried out in your laboratory. Therefore, if a system is set up to record and report these details for each set of samples, there is some initial work, but the additional burden for each subsequent sample is small. On the other hand, if you do not know what some of these details are, you should find out before reporting your XPS results.

The messages from each of the sections in this article are summarized in Table II.

TABLE II.

Topics covered in this article.

Instrument: Assess typical repeatability and variability; assess drift on a regular basis; check the linear range of your detector; calibrate both the energy and intensity scale; know your instrument geometry.

Information: Find appropriate reference materials; check that your sensitivity factors are useful; always take a survey spectrum to identify contaminants; if necessary, measure the topography of the sample.

Data: Collect more background data than peak data; choose background shapes and positions with care; peak fitting should be approached with caution; understand what your software does to the data; carefully explain your calculation method; provide enough information for others to repeat the calculation.

Reporting: Report everything that may affect your result.

The author acknowledges the “Metrology for Advanced Coatings and Formulated Products” theme of the UK National Measurement System, funded by the Department for Business, Energy and Industrial Strategy (BEIS). Support, information, and advice were provided by Don Baer (PNNL), David Cant (NPL), Steve Spencer (NPL), Andrew Pollard (NPL), and Dick Brundle (C. R. Brundle and Associates).

Nomenclature

ai: size of an atom of element i, calculated using Mi = ρi NA ai³
Ai,j: normalized intensity ratio of photoelectron peaks i and j
cps: counts per second
EAL: effective attenuation length
Ei: kinetic energy of photoelectron peak i
Ii: background-subtracted peak area of photoelectron peak i
Ii,∞: background-subtracted peak area of photoelectron peak i from a pure, homogeneous, flat reference material
IMFP: inelastic mean free path
ISO: International Organization for Standardization
Li,M: effective attenuation length (EAL) of photoelectron peak i in material M
Mi: relative atomic mass of element i
NA: Avogadro constant
NIST: National Institute of Standards and Technology, USA
NPL: National Physical Laboratory
PTFE: polytetrafluoroethylene
RSF: relative sensitivity factor
SCA: surface chemical analysis
Si: relative sensitivity factor of photoelectron peak i
t: thickness of a uniform overlayer
X: atomic fraction of specified element, often expressed as atomic percent
XPS: x-ray photoelectron spectroscopy
Zi: atomic number of element i

Greek letters

β: angular asymmetry parameter for photoelectron emission
λi,M: inelastic mean free path (IMFP) of photoelectron peak i in material M
Φi: fractional surface coverage of atom i
ρi: density of element i
σi: photoionization cross section of photoelectron peak i
θ: electron emission angle relative to the surface normal


Some useful relationships for quantitative XPS are given below and in Table III.

TABLE III.

Some equations for electron inelastic mean free paths and effective attenuation lengths.

Expression  Comment
(A1)        S1, λ in nm
(A2)        S3, L in nm
(A3)        Overlayer-substrate

1. Inelastic mean free path

These are dealt with in detail in another article in this series.66 Inelastic mean free paths (IMFPs) can be calculated using the TPP-2M formula67 with ∼10% accuracy. It requires, as inputs, the electron kinetic energy, E, the number of valence electrons per atom or molecule, Nv, the density of the material in g cm−3, ρ, the atomic or molecular weight of the material, M, and the bandgap energy in eV, Eg. A more recent expression,68 S1, given in Eq. (A1), has a similar accuracy to TPP-2M and is easier to apply to materials for which some of these details are unknown. In this case, Z is the number-averaged atomic number (Z = 4 for organic materials) and a is the atomic size in nm, which is typically 0.25 nm. The same paper provides even more general equations that can be used when both the composition and the IMFP need to be kept consistent during an automated analysis.

2. Effective attenuation length

For practical purposes, the effective attenuation length (EAL) is more useful than the IMFP. A number of EALs have been defined for different purposes,66 but the ones generally used are those suitable for measuring overlayer thickness. They account for the elastic scattering of electrons, that is, the fact that electrons do not always travel in straight lines.69,70 Several relationships are available that involve a parameter called the single scattering albedo, for which information is available for some materials. The single scattering albedo depends upon the elemental composition of the material and the electron kinetic energy and, therefore, does not lend itself to simple analysis without extensive databases. The EALs that result from such calculations are summarized in a database available from NIST.71 A relatively simple equation is given in Eq. (A2), which has the same inputs as Eq. (A1) and slightly reduced accuracy.72

3. Substrate-overlayer intensities

Equation (A3) predicts the normalized intensity ratio for a substrate-overlayer system using the straight-line approximation. The substrate, Q, generates electrons, q, with EAL Lq,P in the overlayer material, P, which generates electrons, p, with EAL Lp,P in itself.
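Written out from these definitions (exponential attenuation of the substrate signal and complementary growth of the overlayer signal, each with its own EAL in P), the relationship takes the form

    Ap,q = (Ip/Ip,∞)/(Iq/Iq,∞) = [1 − exp(−t/(Lp,P cos θ))] / exp(−t/(Lq,P cos θ)),

which reduces to t = L cos θ ln(1 + Ap,q) when Lp,P = Lq,P = L. This is a sketch reconstructed from the definitions above and should be checked against Eq. (A3) in Table III before use.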

1. M. P. Seah et al., Surf. Interface Anal. 41, 430 (2009).
2. P. Larsen and M. Von Ins, Scientometrics 84, 575 (2010).
3. D. R. Baer et al., J. Vac. Sci. Technol. A 37, 031401 (2019).
4. D. Briggs and M. P. Seah, Practical Surface Analysis: Auger and X-Ray Photoelectron Spectroscopy (Wiley, Chichester, 1996).
5. J. F. Watts and J. Wolstenholme, An Introduction to Surface Analysis by XPS and AES (Wiley, Chichester, 2003).
6. C. D. Easton, C. Kinnear, S. L. McArthur, and T. R. Gengenbach, J. Vac. Sci. Technol. A 38, 023207 (2020).
7. P. R. Davies and D. J. Morgan, J. Vac. Sci. Technol. A 38, 033204 (2020).
8. J. Wolstenholme, “A procedure which allows the performance and calibration of an XPS instrument to be checked rapidly and frequently,” J. Vac. Sci. Technol. A (to be published).
9. J. Wolstenholme, Surf. Interface Anal. 45, 1071 (2013).
10. M. P. Seah and G. C. Smith, Vacuum 41, 1601 (1990).
11. A. G. Shard et al., J. Phys. Chem. B 119, 10784 (2015).
12. N. A. Belsey et al., J. Phys. Chem. C 120, 24070 (2016).
13. M. Seah, M. Jones, and M. Anthony, Surf. Interface Anal. 6, 242 (1984).
14. M. P. Seah, J. Electron Spectrosc. Relat. Phenom. 71, 191 (1995).
15. M. P. Seah and S. J. Spencer, J. Electron Spectrosc. Relat. Phenom. 151, 178 (2006).
16. M. Holzweber, A. Lippitz, R. Hesse, R. Denecke, W. S. Werner, and W. E. Unger, J. Electron Spectrosc. Relat. Phenom. 233, 51 (2019).
17. A. G. Shard and S. J. Spencer, Surf. Interface Anal. 51, 618 (2019).
18. A. G. Shard, J. D. Counsell, D. J. Cant, E. F. Smith, P. Navabpour, X. Zhang, and C. J. Blomfield, Surf. Interface Anal. 51, 763 (2019).
19. M. P. Seah, Surf. Interface Anal. 36, 1645 (2004).
20. M. P. Seah, I. S. Gilmore, and S. J. Spencer, J. Electron Spectrosc. Relat. Phenom. 104, 73 (1999).
21. R. Wicks and N. Ingle, Rev. Sci. Instrum. 80, 053108 (2009).
22. M. Seah, Surf. Interface Anal. 20, 865 (1993).
23. M. P. Seah, J. Electron Spectrosc. Relat. Phenom. 100, 55 (1999).
24. M. P. Seah and I. S. Gilmore, Phys. Rev. B 75, 149901 (2007).
25. M. P. Seah and I. S. Gilmore, Phys. Rev. B 73, 174113 (2006).
26. C. Wagner, L. Davis, M. Zeller, J. Taylor, R. Raymond, and L. Gale, Surf. Interface Anal. 3, 211 (1981).
27. E. F. Smith, F. J. Rutten, I. J. Villar-Garcia, D. Briggs, and P. Licence, Langmuir 22, 9386 (2006).
28. C. R. Brundle and B. V. Crist, “XPS: A perspective on quantitation accuracy for composition analysis of homogeneous materials,” J. Vac. Sci. Technol. A (to be published).
29. R. F. Reilman, A. Msezane, and S. T. Manson, J. Electron Spectrosc. Relat. Phenom. 8, 389 (1976).
30. S. Ray, R. T. Steven, F. M. Green, F. Hook, B. Taskinen, V. P. Hytonen, and A. G. Shard, Langmuir 31, 1921 (2015).
31. N. A. Belsey, A. G. Shard, and C. Minelli, Biointerphases 10, 019012 (2015).
32. M. Edgell, R. Paynter, and J. Castle, J. Electron Spectrosc. Relat. Phenom. 37, 241 (1985).
33. J. H. Scofield, Report No. UCRL-51326, Lawrence Livermore Laboratory, CA, USA (1973).
34. L. Sabbatucci and F. Salvat, Radiat. Phys. Chem. 121, 122 (2016).
35. M. Trzhaskovskaya and V. Yarzhemsky, At. Data Nucl. Data Tables 119, 99 (2018).
36. M. P. Seah, I. S. Gilmore, and S. J. Spencer, J. Electron Spectrosc. Relat. Phenom. 120, 93 (2001).
37. M. P. Seah, I. S. Gilmore, and S. J. Spencer, Surf. Interface Anal. 31, 778 (2001).
38. C. Battistoni, G. Mattogno, and E. Paparazzo, Surf. Interface Anal. 7, 117 (1985).
39. A. G. Shard, Surf. Interface Anal. 46, 175 (2014).
40. A. G. Shard, J. Phys. Chem. C 116, 16806 (2012).
41. A. G. Shard, J. Wang, and S. J. Spencer, Surf. Interface Anal. 41, 541 (2009).
42. R. C. Chatelier, H. A. St John, T. R. Gengenbach, P. Kingshott, and H. J. Griesser, Surf. Interface Anal. 25, 741 (1997).
43. C. J. Powell, W. S. Werner, H. Kalbe, A. G. Shard, and D. G. Castner, J. Phys. Chem. C 122, 4073 (2018).
44. S. Evans and M. D. Scott, Surf. Interface Anal. 3, 269 (1981).
46. D. Woodruff and A. Bradshaw, Rep. Prog. Phys. 57, 1029 (1994).
47. H. Bishop, Surf. Interface Anal. 17, 197 (1991).
48. G. H. Major, N. Fairley, P. M. Sherwood, M. R. Linford, J. Terry, V. Fernandez, and K. Artyushkova, “Practical guide for curve fitting in X-ray photoelectron spectroscopy,” J. Vac. Sci. Technol. A (submitted).
54. M. P. Seah, I. S. Gilmore, and S. J. Spencer, Surf. Sci. 461, 1 (2000).
55. S. B. Hill, N. S. Faradzhev, and C. J. Powell, Surf. Interface Anal. 49, 1187 (2017).
56. A. Herrera-Gomez, M. Bravo-Sanchez, O. Ceballos-Sanchez, and M. Vazquez-Lepe, Surf. Interface Anal. 46, 897 (2014).
57. G. Beamson and D. Briggs, High Resolution XPS of Organic Polymers: The Scienta ESCA300 Database (Wiley, Chichester, 1992).
58. E. Paparazzo, J. Phys. Condens. Matter 30, 343003 (2018).
59. D. J. H. Cant, Y.-C. Wang, D. G. Castner, and A. G. Shard, Surf. Interface Anal. 48, 274 (2016).
60. K. Piyakis, D.-Q. Yang, and E. Sacher, Surf. Sci. 536, 139 (2003).
61. G. Vereecke and P. Rouxhet, Surf. Interface Anal. 27, 761 (1999).
62. A. G. Shard and S. J. Spencer, Surf. Interface Anal. 49, 1256 (2017).
63. A. Cohen Simonsen, J. P. Pøhler, C. Jeynes, and S. Tougaard, Surf. Interface Anal. 27, 52 (1999).
64. Y.-C. Wang, M. H. Engelhard, D. R. Baer, and D. G. Castner, Anal. Chem. 88, 3917 (2016).
66. C. J. Powell, J. Vac. Sci. Technol. A 38, 023209 (2020).
67. S. Tanuma, C. J. Powell, and D. R. Penn, Surf. Interface Anal. 21, 165 (1994).
68. M. P. Seah, Surf. Interface Anal. 44, 497 (2012).
69. A. Jablonski, Surf. Interface Anal. 14, 659 (1989).
70. A. Jablonski, J. Phys. D: Appl. Phys. 48, 075301 (2015).
72. M. P. Seah, Surf. Interface Anal. 44, 1353 (2012).