BOSTON UNIVERSITY COLLEGE OF ENGINEERING

Dissertation

HIGH SPATIAL RESOLUTION SUBSURFACE MICROSCOPY

by

STEPHEN BRADLEY IPPOLITO

B.A., Boston University, 1999
B.S., Boston University, 1999
M.S., Boston University, 1999

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy

2004

Approved by

First Reader

______________________________________________________ M. Selim Ünlü, Ph.D. Professor of Electrical Engineering

Second Reader

______________________________________________________ Bennett B. Goldberg, Ph.D. Professor of Physics

Third Reader

______________________________________________________ Shyamsunder Erramilli, Ph.D. Professor of Physics

Fourth Reader

______________________________________________________ Bahaa E. A. Saleh, Ph.D. Professor of Electrical Engineering

ACKNOWLEDGMENTS

I would like to thank, first and foremost, my advisors, Prof. Selim Ünlü, Prof. Bennett Goldberg, and Prof. Anna Swan, who have guided me through eight years of laboratory research. After I failed several times at fabricating a thermal test sample, Prof. Selim Ünlü personally fabricated the thermal test sample at the facility of our collaborator, Prof. Yusuf Leblebici, in Lausanne, Switzerland. For going that extra mile to help me complete my work, I am very grateful to Prof. Selim Ünlü. I would like to thank Prof. Thomas Kincaid for advising me as an undergraduate student and for serving as the chairman of my defense committee. I would also like to thank Prof. Shyamsunder Erramilli and Prof. Bahaa Saleh for serving on my defense committee. I would like to thank everyone with whom I have had the pleasure of working in the Ultramicroscopy laboratory. My numerous colleagues have provided me the opportunity to draw from a great depth of experience and to participate in a variety of research projects. I would like to especially thank Dr. Shawn Thorne for his immeasurable contribution to my work. His assistance on the thermal emission microscopy project and the many small jobs he carried out were enabling factors in the completion of my research. I would like to thank Dr. Bill Herzog, Dr. Greg Vander Rhodes, and Dr. Mutlu Gökkavas for spending the time to teach me practical optics and my way around the lab, while I worked for them on small jobs as an undergraduate student. I would like to thank Dr. Olufemi Dosunmu and Dr. Greg Blasche for always lending a helping hand. I have been friends with Femi since I came to BU nine years ago, and he has never refused a request for assistance in the lab. For all things computer-related, Dr. Greg Blasche has

been my advisor for the past few years, teaching me *nix and answering an endless stream of questions. I would like to thank Dr. Greg Blasche, Dr. Matthew Emsley, Dr. Gökhan Ulu, and Dr. Greg Vander Rhodes for administering the computer resources that were an essential part of my research. I would like to thank Dr. Matthew Emsley for providing a stable windows network and for giving me this brilliant format for my dissertation. I would like to thank Dr. Samuel Lipoff for teaching me the art of literature research. I would like to thank Dr. Zhiheng Liu, Dr. Mesut Eraslan, and Dr. Nick Vamivakas for their interest in continuing this project. I would like to thank Michael Pratt for fostering the relationship with Hamamatsu Photonics and always answering my intellectual property questions. I would like to thank Hitoshi Iida, Yasushi Hiruma, and everyone else I have worked with at Hamamatsu Photonics for their assistance in demonstrating my technique on their commercial instruments. I would like to thank Melles Griot for donating the scanning stage and a truckload of other equipment that we have put to good use in our lab. I would like to thank Dave Vallett and everyone else I have worked with at IBM for their collaboration. I would like to thank Stefan Mohrdiek and JDS Uniphase for contributing a laser, Lise Varner and Intel for contributing a sample, and the DARPA HERETIC Program for supporting this research. Last, but not least, I would like to thank my mother, Caren Ippolito, for her tireless editing and encouragement during the writing of this dissertation.


HIGH SPATIAL RESOLUTION SUBSURFACE MICROSCOPY (Order No.        )

STEPHEN BRADLEY IPPOLITO

Boston University, College of Engineering, 2004

Major Professor: M. Selim Ünlü, Professor of Electrical Engineering

ABSTRACT

A subsurface microscopy technique that significantly improves the light-gathering, resolving, and magnifying power of a conventional optical microscope is presented. The Numerical Aperture Increasing Lens (NAIL) is a plano-convex lens placed on the planar surface of an object to enhance the amount of light coupled from subsurface structures within the object to a conventional optical microscope. In particular, the collection of light at angles beyond the critical angle increases the limit on numerical aperture, from unity for conventional subsurface microscopy through the planar surface of the object, to the refractive index of the object for NAIL microscopy. The aberrations associated with conventional subsurface microscopy are also eliminated by the NAIL. Consequently, both the amount of light collected and the diffraction-limited spatial resolution are improved beyond the limits of conventional subsurface microscopy. A theoretical optical model and experimental procedures for imaging structures below the planar surface of an object, both without and with a NAIL, are presented, and the results are compared. The NAIL technique was experimentally demonstrated in two Si integrated circuit (Si IC) failure analysis applications. Since the spatial resolution of conventional failure analysis microscopy is insufficient for current Si IC technology, the improvements using

a NAIL, due to the high refractive index of Si, have significant implications for the industry. The NAIL technique was first applied to near-infrared inspection microscopy. By realizing a numerical aperture of 3.4, a lateral spatial resolution of better than 0.23 µm and a longitudinal spatial resolution of better than 1.7 µm were experimentally demonstrated at a free space wavelength of 1.05 µm. The NAIL technique was then applied to thermal emission microscopy. By realizing a numerical aperture of 2.9, a lateral spatial resolution of 1.4 µm and a longitudinal spatial resolution of 7.4 µm were experimentally demonstrated at free space wavelengths up to 5 µm. To the best of my knowledge, these results are spatial resolution records for their respective applications.


TABLE OF CONTENTS

Chapter 1  Introduction to optical microscopy ......................................................... 1
1.1  The microscope ............................................................................................... 1
1.2  The conventional optical microscope ............................................................. 2
1.3  Surface optical microscopy ............................................................................. 5
1.4  Subsurface optical microscopy ....................................................................... 8
1.5  Si Integrated Circuit failure analysis ............................................................. 12
Chapter 2  Numerical Aperture Increasing Lens (NAIL) theoretical model ....... 15
2.1  Imaging functions ......................................................................................... 15
2.2  Electromagnetic optics .................................................................................. 19
2.3  Geometrical optics ........................................................................................ 27
2.4  Imaging subsurface structures in an object without a NAIL ........................ 30
2.5  Imaging subsurface structures in an object with a NAIL ............................. 35
2.5.1  Central NAIL ............................................................................................ 40
2.5.2  Aplanatic NAIL ........................................................................................ 44
2.6  Comparison ................................................................................................... 50
Chapter 3  Experimental Procedures for using a NAIL ......................................... 54
3.1  Cleaning the planar surfaces of the object and NAIL ................................... 54
3.2  Positioning and imaging the object without the NAIL ................................. 56
3.3  Placing and laterally positioning the NAIL on the object ............................. 58
3.4  Longitudinally positioning and imaging the object with the NAIL .............. 60
3.5  Variations in the experimental procedures ................................................... 63
Chapter 4  Near-Infrared Inspection microscopy using a NAIL ........................... 65
4.1  Introduction to near-infrared inspection microscopy .................................... 65
4.2  Preliminary experiment ................................................................................. 67
4.3  High resolution experiment ........................................................................... 72
4.4  Commercial microscope experiment ............................................................ 83
Chapter 5  Thermal emission microscopy using a NAIL ....................................... 89
5.1  Thermal microscopy ..................................................................................... 89
5.1.1  Techniques using a mechanical method of interaction ............................. 90
5.1.2  Techniques using an optical method of interaction .................................. 92
5.2  Thermal emission microscopy ...................................................................... 94
5.3  Custom thermal emission microscope ........................................................ 100
5.4  Thermal test sample .................................................................................... 106
5.5  Experimental results using a NAIL ............................................................ 116
Chapter 6  Conclusions ........................................................................................ 125
Appendix A  Hamamatsu Phemos 2000 datasheet ............................................... 127
Appendix B  NAIL Patent ..................................................................................... 130
Bibliography .......................................................................................................... 145
Curriculum Vitae ................................................................................................... 154


LIST OF TABLES

Table 1: Plano-convex lens designs .................................................................................. 12
Table 2: Imaging subsurface structures in an object with a planar surface ...................... 14
Table 3: Imaging characteristics at several values of zA ................................................... 35
Table 4: Imaging characteristics ....................................................................................... 44
Table 5: Imaging characteristics ....................................................................................... 48
Table 6: Imaging characteristics ....................................................................................... 49
Table 7: Comparison of imaging characteristics .............................................................. 51
Table 8: Comparison of magnifications ........................................................................... 52
Table 9: Comparison of longitudinal chromatic aberration in the object space ............... 53
Table 10: Thermodynamic constants of the materials in the thermal test sample .......... 108
Table 11: Length across the region of temperature change in each material is much smaller than the respective thermal diffusion length, so the time dependent term in the heat equation is insignificant ............................................................................. 111
Table 12: Optical constants for the line .......................................................................... 113
Table 13: Optical constants for the oxide and wafer ...................................................... 114


LIST OF FIGURES

Figure 1: Conventional optical microscope ........................................................................ 3
Figure 2: Imaging the surface of an object with (a) a conventional optical microscope, (b) an oil immersion microscope, (c) a solid immersion lens microscope, and (d) a near field scanning optical microscope ................................................................................ 6
Figure 3: Conventional optical microscope imaging a subsurface structure within an object (a) with normal configuration, (b) with a central NAIL, and (c) with an aplanatic NAIL ......................................................................................................... 10
Figure 4: Parameters of the object and NAIL .................................................................. 11
Figure 5: Si ICs imaged by (a) Scanning Electron Microscope and (b) Transmission Electron Microscope. Photographs courtesy of International Business Machines Corporation. Unauthorized use not permitted ........................................................... 13
Figure 6: (a) Rayleigh, (b) modified Rayleigh, (c) Sparrow, and (d) Houston resolution criteria ....................................................................................................................... 18
Figure 7: Reflection and refraction by a dielectric planar boundary ................................ 20
Figure 8: Refraction by a stratified medium ..................................................................... 23
Figure 9: Imaging according to Gaussian optics ............................................................... 28
Figure 10: Imaging parameters for an object with a planar surface ................................. 31
Figure 11: Wave aberration functions, as a function of θ ................................................. 32
Figure 12: Transmissivity (T), as a function of angle (θ) ................................................. 33
Figure 13: Intensity along the optical axis at ρ equals zero, as a function of z ................ 34
Figure 14: Intensity in the lateral plane of peak intensity, as a function of ρ ................... 34
Figure 15: Imaging parameters for a sphere ..................................................................... 36
Figure 16: Transmissivity (T) and phase change on transmission (δt), as a function of angle (θ), for a gap height (h) of λ0 divided by (a) 10, (b) 20, (c) 40, and (d) 80 ..... 40
Figure 17: Wave aberration functions, as a function of θ ................................................. 42
Figure 18: Intensity along the optical axis at ρ equals zero, as a function of z ................ 43
Figure 19: Intensity in the lateral plane of peak intensity, as a function of ρ ................... 43
Figure 20: Transmissivity (T), as a function of angle (θ) ................................................. 45
Figure 21: Wave aberration functions, as a function of θ ................................................. 46
Figure 22: Intensity along the optical axis at ρ equals zero, as a function of z ................ 47
Figure 23: Intensity in the lateral plane of peak intensity, as a function of ρ ................... 47
Figure 24: Intensity along the optical axis at ρ equals zero, as a function of z ................ 48
Figure 25: Intensity in the lateral plane of peak intensity, as a function of ρ ................... 49
Figure 26: Intensity in the object plane at z equals zero, as a function of ρ ..................... 51
Figure 27: Intensity along the optical axis at ρ equals zero, as a function of z ................ 52
Figure 28: Imaging the (a) planar surface of the object and (b) subsurface structure within the object ................................................................................................................... 57
Figure 29: Photograph of an aplanatic Si NAIL placed on a Si IC .................................. 59
Figure 30: Conventional image of a Si IC during lateral positioning of an aplanatic Si NAIL (bottom right) ................................................................................................. 60

Figure 31: Imaging the (a) top of the NAIL, (b) interface, (c) center of the spherical surface of the NAIL, (d) subsurface structure within the object .............................. 61
Figure 32: Image of the Si IC with the aplanatic Si NAIL ............................................... 62
Figure 33: Layout of fabricated Si IC ............................................................................... 68
Figure 34: Illustration of the preliminary NIR microscope .............................................. 69
Figure 35: (a) Conventional NIR inspection microscopy image, (b) image of polycrystalline Si lines with a NAIL, (c) image of n+ type diffusion lines with a NAIL, and (d) linecut of data in image (c) ............................................................... 71
Figure 36: Photograph of the high resolution NIR microscope ....................................... 74
Figure 37: Illustration of the high resolution NIR inspection microscope ...................... 76
Figure 38: Image at best focus, of a 15 µm by 9 µm area in the Si IC ............................. 79
Figure 39: Linecut across a physical edge with (a) fitted detector signal and (b) NAIL microscope Line Spread Function (LSF) plotted by distance, demonstrating 0.23 µm lateral spatial resolution ........................................................................................... 80
Figure 40: Linecut across two closely separated features, with the detector signal plotted by distance ................................................................................................................ 81
Figure 41: Successive images at different defocus distances in the longitudinal direction, from best focus (a) -1.7 µm, (b) 0 µm, and (c) 1.7 µm ............................................ 82
Figure 42: µAMOS-200, a commercial NIR microscope. Photograph courtesy of Hamamatsu Corporation. Unauthorized use not permitted ...................................... 84
Figure 43: Images of an interconnect area of the SRAM Si IC, taken using (a) a 100X objective lens and (b) a 20X objective lens with a NAIL ........................................ 84

Figure 44: Images of a cell area of the SRAM Si IC, taken using (a),(c) a 100X objective lens and (b),(d) a 10X objective lens with a NAIL .................................................. 86
Figure 45: Images of a grating area of the SRAM Si IC, taken using (a) a 100X objective lens and (b) a 10X objective lens with a NAIL ........................................................ 87
Figure 46: Linecut across a grating with a spatial period of 2.64 µm .............................. 88
Figure 47: Radiated power density (S) from a blackbody, as a function of wavelength and temperature ................................................................................................................ 95
Figure 48: Peak wavelength (λpeak) of the radiated power density, as a function of temperature, for a blackbody or greybody ................................................................ 95
Figure 49: Thermal emission from (a) a Si IC and (b) a Si IC with the addition of a NAIL ................................................................................................................................... 98
Figure 50: Photograph of the custom thermal emission microscope ............................. 100
Figure 51: Illustration of the custom thermal emission microscope .............................. 102
Figure 52: Thermal test sample (not to scale) ................................................................ 106
Figure 53: Temperature change per unit power (F) for a 2 µm wide line, as a function of lateral displacement (x) from the center of the line along its width and longitudinal displacement (z) from the top surface of the oxide into the oxide and wafer ........ 110
Figure 54: Temperature change per unit power (F), as a function of longitudinal displacement (z) from the top surface of the oxide into the oxide and wafer, with lateral displacements (x and y) equal to zero at the center of the 2 µm wide line .. 111
Figure 55: Temperature change per unit power (F), as a function of lateral displacement (x) from the center of the 2 µm wide line along its width at the top surface of the oxide ........................................................................................................................ 112
Figure 56: Temperature change per unit power (F), as a function of lateral displacement (y) from the center of the 2 µm wide line along its length at the top surface of the oxide ........................................................................................................................ 113
Figure 57: The amplitude of the second harmonic term of the radiant exitance (M2f) from the bottom surface of a 2 µm wide line with a peak power of 0.36 W, as a function of lateral displacement (x) from the center of the line along its width ................... 115
Figure 58: The amplitude of the second harmonic term of the radiant exitance (M2f) from the bottom surface of a 0.8 µm wide line with a peak power of 0.058 W, as a function of lateral displacement (x) from the center of the line along its width ..... 116
Figure 59: The Signal, as a function of the Peak Power applied to a 2 µm wide Al line, obeys the Stefan-Boltzmann Law ............................................................................ 117
Figure 60: (a) Inspection image of a 0.8 µm wide Al line, taken by an InGaAs camera and (b) thermal emission image of the Joule heating in the Al line, taken with an InSb detector at best focus ............................................................................................... 118
Figure 61: (a) Signal and Radiant Exitance from a 0.8 µm wide Al line, as a function of Lateral Distance, and (b) resulting Line Spread Function of the NAIL microscope with a FWHM of 1.4 µm ......................................................................................... 119
Figure 62: (a) Signal and Radiant Exitance from a 2 µm wide Al line, as a function of Lateral Distance, and (b) resulting Line Spread Function of the NAIL microscope with a FWHM of 1.4 µm ......................................................................................... 120
Figure 63: Signal from the center of a 2 µm wide Al line, as a function of Longitudinal Distance from best focus ......................................................................................... 121


LIST OF ABBREVIATIONS

FWHM	Full Width at Half Maximum
IC	Integrated Circuit
LSF	Line Spread Function
NA	Numerical Aperture
NAIL	Numerical Aperture Increasing Lens
NIR	Near-Infrared
NSOM	Near-field Scanning Optical Microscopy
MIR	Mid-Infrared
OTF	Optical Transfer Function
PSF	Point Spread Function
SIL	Solid Immersion Lens
SRAM	Static Random Access Memory

Chapter 1 INTRODUCTION TO OPTICAL MICROSCOPY

1.1 The microscope

Since the late seventeenth century, microscopes have yielded innumerable glimpses of a minute world that cannot be seen by the naked eye.[1] Microscopes now play a pivotal role in many aspects of our society, from the sciences to the arts. So, what exactly is a microscope? A microscope is simply an “instrument that produces enlarged images of small objects, allowing them to be viewed at a scale convenient for examination and analysis.”[2] An image is a reproduction of the form of an object and may be a sculpture, photograph, or numerical data on a computer. Today, there exists a multitude of different types of microscopes and microscopy techniques. Microscopes may be broadly divided into four categories, according to the method of interaction they use to measure an object: stylus profilometers, acoustic microscopes, electron microscopes, and optical microscopes. Stylus profilometers, the first category, measure a mechanical interaction between a probe and the surface of an object. The probe is scanned over the surface of the object, measuring one point at a time, to build an image in a serial fashion, which requires a considerable amount of time. Stylus profilometers are commonly used to assess many properties of the surface of an object, including its topography, temperature, and hardness. The invention of the Atomic Force

Microscope has allowed stylus profilometry to reach the atomic scale.[3] Acoustic microscopes, the second category, measure an interaction between ultrasonic waves and an object. Scanning Acoustic Microscopes focus a beam of ultrasonic waves onto a small volume of an object and detect the reflected ultrasonic waves from that same volume.[4] The back scattering of ultrasonic waves varies according to the material composition of the object. Electron microscopes, the third category, measure an interaction between electrons and an object. A Scanning Electron Microscope and Transmission Electron Microscope focus a beam of electrons into a small volume of an object, and measure the resulting scattered photons or electrons.[5] A Scanning Tunneling Microscope measures the number of electrons that tunnel across a potential barrier between a probe and an object.[6] Optical microscopes, the final category, measure an interaction between photons and an object. Each type of microscope offers unique advantages and disadvantages, and all have become essential tools to the progress of science. In this dissertation, I will primarily discuss the principles of optical microscopy.

1.2 The conventional optical microscope

Of the many types of optical microscopes, I will first describe the conventional optical microscope, which is a lens or combination of lenses, as illustrated in Figure 1. Since both the object and its image are three dimensional, they are measured and described in terms of an object space and image space. The volume in the object space that a conventional optical microscope clearly images is said to be its focus. In a conventional optical microscope, the first lens encountered by the light from an object is referred to as the objective lens. The distance between the focus and objective lens is

termed the working distance. Most objective lenses are rotationally symmetric, with respect to a line called the optical axis, which, in this dissertation, will be the z-axis, unless specified otherwise. The angle of light, with respect to the optical axis, is defined as θ.

Figure 1: Conventional optical microscope.

The magnification or magnifying power of a microscope is the degree of enlargement of an image relative to the actual size of the object it represents. For example, a microscope that creates an image that is twice as large as an object has a magnification of 2x. The light-gathering power of an optical microscope is the angular range and total amount of light from an object that an optical microscope admits. A variety of relative

apertures are commonly taken as the measure of light-gathering power, including numerical aperture, angular aperture, F number, and collected solid angle. These relative apertures are all based on the angular semiaperture in the object space (θa), which is the maximum angle that a microscope admits, as shown in Figure 1. The Numerical Aperture (NA) is, by definition, n sin θa, where n is the refractive index of the object space, and the collected solid angle (Ω) is, by definition, 2π(1 - cos θa). Understanding the limitations and aberrations in the imaging process of a given microscope is critical, as duplicating the characteristics of an object is never exact. The spatial resolution or resolving power of a microscope is quantitatively defined by a variety of criteria, which are presented in Chapter 2, but qualitatively defined as the limit in the size of minute objects that a microscope can clearly distinguish.[7] For example, the unaided human eye is incapable of clearly resolving objects smaller than about 0.1 mm, but a conventional optical microscope can extend this spatial resolution to about 0.2 µm, in the visible light spectrum.[1] With rotational symmetry, spatial resolution can be expressed in two directions: lateral, for that perpendicular to the optical axis, and longitudinal, for that parallel to the optical axis. The design, manufacture, and assembly of a conventional optical microscope can introduce aberrations, which degrade its spatial resolution. However, modern production methods have been able to eliminate all significant aberrations.[8] Even in the absence of aberrations, the diffraction of light ultimately limits the spatial resolution of a conventional optical microscope to approximately λ0/(2n sin θa) laterally, and λ0/(n(1 - cos θa)) longitudinally, where λ0 is the free space wavelength of light.[9] Thus, in the absence of aberrations, the light-gathering

power and resolving power are coupled. Increasing θa improves the spatial resolution and increases the amount of light collected; however, in practice it also increases the cost and decreases the working distance of an objective lens. Of course, reducing the wavelength of light, by either decreasing λ0 or increasing n, also improves the resolution. The necessity for improved spatial resolution spawned the invention of the optical microscope and continues to drive microscopy research today. The imaging fidelity will not be the same throughout an object, since spatial resolution depends on the specific media light traverses. Optical microscopy may be divided into two modes, according to the location of the focus: surface optical microscopy and subsurface optical microscopy. In surface optical microscopy, the focus of an optical microscope is at the surface of an object. In subsurface optical microscopy, the focus of an optical microscope is below the surface of an object. I will briefly review surface optical microscopy, but subsurface optical microscopy will be the primary focus of this dissertation.
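The relative-aperture and diffraction-limit expressions above lend themselves to a quick numerical check. The following Python sketch (illustrative only; the function name and example values are mine, not from the text) evaluates NA = n sin θa, Ω = 2π(1 - cos θa), and the approximate lateral and longitudinal resolution limits:

```python
import math

def imaging_limits(wavelength_um, n, theta_a_deg):
    """Diffraction-limited quantities for an aberration-free microscope
    with angular semiaperture theta_a in an object space of index n."""
    theta_a = math.radians(theta_a_deg)
    na = n * math.sin(theta_a)                          # NA = n sin(theta_a)
    omega = 2.0 * math.pi * (1.0 - math.cos(theta_a))   # collected solid angle
    lateral = wavelength_um / (2.0 * na)                # ~ lambda0 / (2 n sin(theta_a))
    longitudinal = wavelength_um / (n * (1.0 - math.cos(theta_a)))  # ~ lambda0 / (n (1 - cos(theta_a)))
    return na, omega, lateral, longitudinal

# Hemispherical collection (theta_a = 90 deg) in air at lambda0 = 0.5 um
# recovers the familiar limits of lambda0/2 laterally and lambda0 longitudinally.
na, omega, lat, lon = imaging_limits(0.5, 1.0, 90.0)
```

At smaller semiapertures the longitudinal limit degrades much faster than the lateral one, which is why depth sectioning demands high-NA objectives.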

1.3 Surface optical microscopy

Optical microscopes are usually employed to image the surface of an object. Conventional optical microscopy of the surface of an object is illustrated in Figure 2(a). The refractive index of air in the object space is unity, which limits the NA to unity and ultimately limits the lateral spatial resolution to approximately λ0/2 and the longitudinal spatial resolution to approximately λ0. Many surface optical microscopy techniques have been developed to improve spatial resolution beyond the limits of conventional optical microscopy, by placing additional elements in the optical near field of the surface of an object.[10,11]

Figure 2: Imaging the surface of an object with (a) a conventional optical microscope, (b) an oil immersion microscope, (c) a solid immersion lens microscope, and (d) a near field scanning optical microscope.

Immersion techniques introduce a medium with a high refractive index, n, into the object space, above the surface of an object. This increases the limit on the NA to n, and improves the ultimate limit on the lateral spatial resolution to approximately λ0/2n and the longitudinal spatial resolution to approximately λ0/n. In liquid immersion microscopy, the region between a liquid immersion objective lens and the surface of an object is filled with an immersion liquid, which can have a refractive index up to 1.6, as shown in Figure 2(b). Oil and water immersion objective lenses are available from most microscope manufacturers and are commonly used in many fields, including biology. Despite the significant improvements in the NA and spatial resolution, the drawbacks of liquid immersion microscopy include the contamination of an object, a small working distance, high cost, and time-consuming preparation. The Solid Immersion Lens (SIL) technique places a plano-convex lens on the surface of an object, as shown in Figure 2(c).[12] The SIL can have a refractive index up to 4, depending on the free space wavelength of light and the medium selected. To date, Si,

GaP, GaAs, sapphire, and glass SILs have been successfully fabricated and experimentally demonstrated. Despite the significant improvements in NA and spatial resolution, the major drawback of SIL microscopy is that direct physical contact with an object is necessary. The SIL technique was first proposed and demonstrated by Mansfield and Kino to clearly resolve 100 nm lines and spaces in photoresist, with light at a free space wavelength of 436 nm, using a glass SIL.[12] Since then, the SIL technique has been applied to a variety of applications to improve spatial resolution. In optical data storage, the SIL technique increased information density by a factor of four.[13,14,15,16] Low

temperature

photoluminescence

microscopy

studies

of

low

dimensional

semiconductor structures, using a SIL, have revealed new physical phenomena and material properties.[17,18,19,20,21,22,23] The SIL technique has been used in ultrafast pump-probe spectroscopy of a quantum well structure to study lateral carrier transport.[24] In fluorescence microscopy, not only has the SIL yielded increased spatial resolution, but also increased the amount of light collected.[25,26,27] The SIL technique has also been used in micro-Raman spectroscopy of patterned Si,[28] and has been suggested as a way of increasing the resolution of photolithography,[29] as well as being used to measure the laser power of highly divergent beams.[30] For most of the aforementioned experiments, the diameter of the SIL was a few millimeters. However, a microfabricated SIL with a diameter of 15 µm has also been demonstrated.[31,32,33] Mansfield’s dissertation is a comprehensive reference on the SIL technique.[34] Thus, both solid and liquid immersion techniques have been shown to improve spatial resolution by introducing a medium with a high refractive index into the object space. 7

The Near-field Scanning Optical Microscopy (NSOM) technique introduces an aperture with dimensions much less than the wavelength of light in close proximity to an object, to achieve spatial resolution better than the diffraction limit.[35] The most common implementation of NSOM is a single mode optical fiber drawn into a fine tip, with its sidewalls coated with metal to act as an aperture. The NSOM tip can also act as a stylus profilometer to measure the topography while measuring an optical signal. Low light-gathering power is the main drawback of the NSOM technique; however, the light-gathering power has been improved by combining the SIL and NSOM techniques.[36] In summary, both immersion and NSOM surface optical microscopy techniques have improved spatial resolution beyond the limits of conventional optical microscopy.

1.4 Subsurface optical microscopy

It is not possible to image the inner features of an opaque object, such as a piece of metal, with an optical microscope. However, if an object consists of structures buried within a transparent and homogenous medium with an optical quality surface, then its subsurface structures may be imaged with an optical microscope. The encapsulating medium, through which light must propagate, provides additional optical constraints: the range of λ0 where it is transparent, its refractive index (n), as well as the reflection and refraction at its surface. Once the lowest possible value of λ0 /n for the encapsulating medium is selected, the spatial resolution can only be improved by altering the surface geometry of the encapsulating medium. Ideally, the surface geometry should maximize the angular semiaperture in the object space (θa), and introduce no significant aberrations, to permit operation at the diffraction limit.

A planar surface geometry is the most common type of surface geometry, as it is either characteristic of the object or producible by polishing or cleaving. Figure 3(a) illustrates a conventional optical microscope imaging subsurface structures in an object with a planar surface. Reflection and refraction at the planar surface of an object limit a conventional optical microscope's light-gathering power from subsurface structures within the object. Reflection at the planar surface of an object reduces the amount of light that escapes from the object, allowing no light to escape above the critical angle (θc), which is by definition sin⁻¹(1/n). According to Snell's law, refraction at the planar surface of an object reduces sinθa for a given objective lens by a factor of n; however, the NA remains the same because of the increased refractive index in the object space. Refraction also imparts spherical aberration that increases monotonically with angle, thereby requiring correction to operate at the diffraction limit. The absence of light from above the critical angle ultimately limits the lateral spatial resolution to approximately λ0 /2 and the longitudinal spatial resolution to approximately λ0 n(1+cosθc). Thus, despite the advantage of the encapsulating medium's higher refractive index relative to that of air, the subsurface spatial resolution is always worse than the surface spatial resolution, in conventional optical microscopy of an object with a planar surface geometry. Circumventing this planar surface barrier would yield significant improvement in the amount of light collected and the spatial resolution, which is ultimately limited to approximately λ0 /(2n) laterally and λ0 /n longitudinally. Despite these limitations, a conventional optical microscope is normally used for subsurface imaging of objects with planar surface geometry.
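As a numerical sketch of these limits, the following assumes a Si substrate (n ≈ 3.5) and a free space wavelength of 1.3 µm; both values are illustrative choices, not measurements from this work.

```python
import math

def planar_surface_limits(lambda0_um, n):
    """Diffraction limits quoted in the text, for subsurface imaging through
    a planar surface of refractive index n (illustrative values only)."""
    theta_c = math.asin(1.0 / n)                   # critical angle, sin^-1(1/n)
    lateral = lambda0_um / 2.0                     # planar-surface lateral limit
    longitudinal = lambda0_um * n * (1.0 + math.cos(theta_c))  # planar-surface longitudinal limit
    lateral_full = lambda0_um / (2.0 * n)          # limits with the barrier circumvented
    longitudinal_full = lambda0_um / n
    return theta_c, lateral, longitudinal, lateral_full, longitudinal_full

# Assumed Si substrate (n ~ 3.5) at a free space wavelength of 1.3 um:
theta_c, lat, lon, lat_f, lon_f = planar_surface_limits(1.3, 3.5)
print(f"critical angle = {math.degrees(theta_c):.1f} deg")
print(f"planar surface: lateral ~ {lat:.2f} um, longitudinal ~ {lon:.2f} um")
print(f"full aperture:  lateral ~ {lat_f:.2f} um, longitudinal ~ {lon_f:.2f} um")
```

The large gap between the planar-surface and full-aperture numbers is the improvement the NAIL technique of the next section targets.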

Figure 3: Conventional optical microscope imaging a subsurface structure within an object (a) with normal configuration, (b) with a central NAIL, and (c) with an aplanatic NAIL.

As illustrated in Figure 3(b) and Figure 3(c), the addition of a Numerical Aperture Increasing Lens (NAIL), a plano-convex lens, to the planar surface of an object increases the conventional optical microscope's light-gathering power from that object, without introducing any aberrations.[37] When first proposed and demonstrated, the NAIL was termed a truncated SIL; however, I later adopted the terms SIL and NAIL to delineate between applications of a plano-convex lens for surface and subsurface microscopy, respectively.[12,15,34] In surface microscopy with a SIL, the refractive index in the object space is altered by immersion. In subsurface microscopy the refractive index in the object space may be high, but cannot be altered, thus a NAIL simply increases the NA or light-gathering power. The NAIL is made of a material with a refractive index matching that of the encapsulating medium. The planar surface of the NAIL closely matches that of the object, so that light propagates from the object into the NAIL by evanescently coupling across any potential gap between them. After propagating through the NAIL, the light then refracts at the spherical surface into air, and is collected by a conventional optical microscope. In Chapter 3, I will describe the experimental procedures for using a NAIL in a conventional optical microscope.

Figure 4: Parameters of the object and NAIL.

The key parameters of the object and NAIL are shown in Figure 4. The subsurface structure within the object to be imaged is located at a depth of X below the planar surface of the object. The convex surface of the NAIL is spherical with a radius of curvature of R. The center thickness of the NAIL, the distance from the top of the spherical surface to the planar surface, is D. There are two types of NAILs that create stigmatic, or aberration free, images. A central NAIL, as illustrated in Figure 3(b), is designed to have the subsurface structures in an object at the center of the spherical surface of the NAIL, so D must equal R-X. An aplanatic NAIL, as illustrated in Figure 3(c), is designed to have the subsurface structures in an object at the aplanatic points of the spherical surface of the NAIL, so D must equal R(1+1/n)-X.[38] In Chapter 2, I will present a theoretical optical model for both types of NAILs. Table 1 summarizes the naming convention for the four types of SILs and NAILs, depending on the location of the object plane. Both types of NAILs have experimentally demonstrated improvements in light-gathering power and spatial resolution. The central NAIL has been used to improve the density of optical data storage.[39] The aplanatic NAIL has been experimentally demonstrated in several Si Integrated Circuit failure analysis applications and InGaAs quantum dot spectroscopy.

Table 1: Plano-convex lens designs.

Location of                    At the geometrical center of      At the aplanatic points of
object plane                   the spherical surface of the      the spherical surface of the
                               lens                              lens

At the surface of the object   central SIL                       aplanatic SIL, Weierstrass SIL,
                               or hemispherical SIL              or hyper-hemispherical SIL
                               D = R                             D = R(1+1/n)

Within the object              central NAIL                      aplanatic NAIL
                               D = R-X                           D = R(1+1/n)-X
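The thickness formulas in Table 1 can be sketched numerically. The lens radius and substrate thickness below are hypothetical values chosen only to illustrate the arithmetic.

```python
import math

def nail_thickness(R_mm, X_mm, n, kind="aplanatic"):
    """Required NAIL center thickness D for a structure buried at depth X,
    from Table 1: central D = R - X, aplanatic D = R(1 + 1/n) - X."""
    if kind == "central":
        return R_mm - X_mm
    if kind == "aplanatic":
        return R_mm * (1.0 + 1.0 / n) - X_mm
    raise ValueError(kind)

# Hypothetical Si NAIL (n ~ 3.5) with R = 1.61 mm on a 0.5 mm thick substrate:
D_central = nail_thickness(1.61, 0.5, 3.5, "central")      # about 1.11 mm
D_aplanatic = nail_thickness(1.61, 0.5, 3.5, "aplanatic")  # about 1.57 mm
print(D_central, D_aplanatic)
```

Note that for a fixed lens, matching a different substrate thickness X requires a different center thickness D, which is why the two designs are specified relative to the buried object plane rather than the lens surface.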

1.5 Si Integrated Circuit failure analysis

Current Si Integrated Circuit (IC) technology has reached process size scales of 0.09 µm, and optical microscopes are necessary tools in the failure analysis of a Si IC. However, current Si IC technology includes many opaque metal layers and structures above the semiconductor devices. The CMOS 7S process by IBM, for example, includes six layers of Cu, as shown in Figure 5(a). A cross-section view of the IBM 64-bit PowerPC microprocessor, shown in Figure 5(b), has the metal layers marked and shows that topside microscopy of the buried devices in their final state is greatly hindered. Thus, backside microscopy through the planar Si substrate is used almost exclusively in failure analysis applications. For backside microscopy through the planar Si substrate, X is equal to the substrate thickness.

Figure 5: Si ICs imaged by (a) Scanning Electron Microscope and (b) Transmission Electron Microscope. Photographs courtesy of International Business Machines Corporation. Unauthorized use not permitted.

The Si substrate does not transmit light at visible wavelengths, but does so at infrared wavelengths, where the refractive index varies from 3.4 to 3.7.[40,41] Failure analysis microscopes operate in the near-infrared (NIR), at wavelengths between 0.75 µm and 3 µm, and the mid-infrared (MIR), at wavelengths between 3 µm and 30 µm. The backside of the Si substrate is typically planar, because the fabrication process begins with a planar Si wafer. Table 2 recaps the spatial resolution limits associated with conventional optical microscopy and NAIL microscopy of subsurface structures in an object with a planar surface.


Table 2: Imaging subsurface structures in an object with a planar surface.

                                  lateral                  longitudinal
                                  spatial resolution       spatial resolution

conventional optical microscopy   λ0 /2                    λ0 n(1+cosθc)

NAIL microscopy                   λ0 /(2n)                 λ0 /n

At infrared wavelengths, the lateral spatial resolution for conventional optical microscopy is much greater than the size scale of current Si IC technology. However, the large refractive index of Si offers the potential for significant improvement in spatial resolution and amount of light collected from the buried devices in a Si IC, using the NAIL technique. I chose to demonstrate the NAIL technique in two Si IC failure analysis applications: near-infrared inspection microscopy in Chapter 4 and thermal emission microscopy in Chapter 5.


Chapter 2 NUMERICAL APERTURE INCREASING LENS (NAIL) THEORETICAL MODEL

In this chapter, I will first present principles of imaging functions, electromagnetic optics, and geometrical optics. The derivation of these principles and further information may be found in Principles of Optics by Born and Wolf.[38] Using these principles, I will then create a theoretical optical model for imaging subsurface structures in an object with a planar surface, both without and with a NAIL, and compare the results.

2.1 Imaging functions

Imaging functions are often used to characterize the behavior of light in optical systems.[42] An imaging function takes the intensity of light in the object plane (O(x,y)) and generates the intensity of light in the image plane (I(x,y)). If the imaging function is shift invariant in both lateral directions, it can be represented by a convolution with a Point Spread Function (PSF), according to Equation (1). A delta function or single point source of intensity at the origin of the object plane will collapse the integral in Equation (1), and yield an intensity in the image plane proportional to the PSF. In this manner, the intensity from each point in the object plane is spread over an area in the image plane to yield a blurred image of the object.

I(x,y) = PSF(x,y) ∗ O(x,y) = ∫∫ PSF(x′,y′) O(x−x′,y−y′) dy′dx′    (1)

where both integrals run from −∞ to ∞.
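The convolution of Equation (1) can be illustrated with a small discrete calculation. The Gaussian PSF below is an assumption chosen only for illustration; the point is that a point source in the object plane reproduces the PSF in the image plane.

```python
import numpy as np

# Discrete illustration of Equation (1): the image intensity is the object
# intensity convolved with the PSF.
def blur(obj, psf_centered):
    """Circular 2-D convolution, I = PSF * O, evaluated with FFTs."""
    kernel = np.fft.ifftshift(psf_centered)   # move the PSF peak to index (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(kernel)))

N = 64
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
psf = np.exp(-(x**2 + y**2) / (2.0 * 2.0**2))   # assumed Gaussian PSF
psf /= psf.sum()                                 # unit total intensity

obj = np.zeros((N, N))
obj[N // 2, N // 2] = 1.0                        # delta function at the origin
img = blur(obj, psf)
ok = np.allclose(img, psf)
print(ok)                                        # the point-source image equals the PSF
```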

The Line Spread Function (LSF), defined by Equation (2), is the one dimensional analogue of the two dimensional PSF. If the intensity in the object plane (Ô(x)) is independent of y, then the intensity in the image plane (Î(x)) will also be independent of y, according to Equation (3), which results from Equation (1). The LSF can then be used to simplify Equation (3) to a single variable in Equation (4). The intensity from each line in the object plane of constant x is spread over an area in the image plane to yield a blurred image of the object. It is important to note that if an optical system has rotational symmetry about the optical axis, then its PSF also has rotational symmetry, and its LSF uniquely determines its PSF.

LSF(x′) = ∫ PSF(x′,y′) dy′    (2)

Î(x) = ∫∫ PSF(x′,y′) Ô(x−x′) dy′dx′    (3)

Î(x) = ∫ LSF(x′) Ô(x−x′) dx′    (4)

where all integrals run from −∞ to ∞.

Taking the one dimensional Fourier transform of Equation (4) yields Equation (5), where the Fourier transform of each quantity, denoted by lower case letters, is a function of spatial frequency (sx). Similarly, taking the two dimensional Fourier transform of Equation (1) yields Equation (6), where the Fourier transform of the PSF is defined as the Optical Transfer Function (OTF). A three dimensional point spread function (PSF3D) and the resulting three dimensional optical transfer function can be used to additionally characterize the blurring along the optical axis.[43] Fourier analysis is often used for numerical calculations, because multiplication in Fourier space is a much simpler mathematical operation than the corresponding convolution in real space.

î(sx) = lsf(sx) ô(sx)    (5)

i(sx,sy) = OTF(sx,sy) o(sx,sy)    (6)
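The equivalence between the convolution of Equation (4) and the multiplication of Equation (5) can be checked directly on sampled data. The Gaussian LSF and the random object are illustrative assumptions.

```python
import numpy as np

# Numerical check of Equation (5): the Fourier transform of the convolution in
# Equation (4) equals the product of the transforms, sample for sample.
rng = np.random.default_rng(0)
N = 128
obj = rng.random(N)                      # arbitrary 1-D object intensity
x = np.arange(N) - N // 2
lsf = np.exp(-x**2 / (2.0 * 3.0**2))     # assumed Gaussian LSF
lsf /= lsf.sum()
lsf0 = np.fft.ifftshift(lsf)             # peak moved to index 0

# Direct (circular) convolution of Equation (4) ...
img = np.array([sum(lsf0[j] * obj[(m - j) % N] for j in range(N)) for m in range(N)])
# ... compared against multiplication in Fourier space, Equation (5):
ok = np.allclose(np.fft.fft(img), np.fft.fft(lsf0) * np.fft.fft(obj))
print(ok)
```

This identity (the convolution theorem for the discrete Fourier transform) is what makes the Fourier-space route cheaper for numerical work.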

The LSF, PSF, and PSF3D can be derived from a theoretical optical model of the optical system. The PSF3D and PSF are equal to the intensity in the image space and image plane, respectively, resulting from a point source at the origin of the object space. In a simple optical system, the PSF is the same for both collection of light from the object and illumination of the object through the same optical path. Thus, the PSF3D and PSF are also equal to the intensity in the object space and object plane, respectively, resulting from a point source at the origin of the image space. The LSF, PSF, and PSF3D for the optical system can also be found from experimental measurements of the intensity in the image space, resulting from a known incoherent intensity distribution in the object space. The primary measure of the quality of imaging, resulting from the PSF of an optical system, is the spatial resolution, which can be defined by a variety of criteria in each direction.[7] According to the Rayleigh criterion, the spatial resolution is defined as the distance (d) from the central maximum in the PSF to the first zero in the PSF, as illustrated in Figure 6(a). The Rayleigh criterion, which is rooted in the PSF resulting from a rectangular aperture, leaves the spatial resolution undefined for a PSF without a finite zero, such as a Gaussian or Lorentzian function. However, for such a PSF, the modified Rayleigh criterion defines the spatial resolution as the distance (d) between the peaks of the PSF and a shifted PSF, where their sum at the midpoint divided by the maximum is equal to 0.81, as illustrated in Figure 6(b). Similarly, the Sparrow criterion defines the spatial resolution as the distance (d) between the peaks of the PSF and a shifted PSF, where their sum at the midpoint initially becomes less than the maximum, as illustrated in Figure 6(c).

Figure 6: (a) Rayleigh, (b) modified Rayleigh, (c) Sparrow, and (d) Houston resolution criteria.

The Houston criterion defines the spatial resolution as the Full Width at Half Maximum (FWHM) of the PSF, as illustrated in Figure 6(d), and is the most commonly used resolution criterion in experimental work, primarily because of its mathematical simplicity. I will use the Houston criterion hereafter to define the spatial resolution, so the lateral spatial resolution in each lateral direction of an optical system will be the FWHM of the LSF or PSF in that direction, and the longitudinal spatial resolution will be the FWHM of the PSF3D in the longitudinal direction. Another measure of the quality of imaging is the Strehl ratio, which is the ratio of the maximum intensities, with and without aberrations. If the Strehl ratio is greater than or equal to 0.8, the spatial resolution is said to be diffraction limited.[38]

2.2 Electromagnetic optics

Electromagnetic optics, which is governed by Maxwell's equations, is the most general theory of optics that I will employ. In electromagnetic optics, light is a vector wave that propagates in modes determined by the optical characteristics of the medium. I will use the principles of electromagnetic optics to characterize the behavior of light at the surfaces of the object and NAIL, as well as at the focus in the object space. I will consider, first, the reflection and refraction of light by a planar dielectric boundary, then, the refraction of light by a stratified medium consisting of a single homogenous layer, and finally, the PSF of an optical system with a high angular semiaperture. The electromagnetic optics theory for the reflection and refraction of light by a planar dielectric boundary will be used to characterize the behavior of light at the planar surface of the object without the NAIL, and at the spherical surface of the NAIL when it is in use. The reflected and transmitted monochromatic plane waves resulting from a monochromatic plane wave incident on a planar boundary between two linear homogeneous isotropic media can be derived using the boundary conditions of Maxwell's equations.[38] The refractive indices of the first and second media are n1 and n2, respectively, and both media are nonmagnetic. The plane of incidence is specified by the normal to the boundary and the wave vector of the incident plane wave, ki. The wave vectors of the reflected and transmitted plane waves are kr and kt, respectively, and must also lie in the plane of incidence to match the relative phase at every point on the boundary. As illustrated in Figure 7, for simplicity, the boundary is at z equals zero, and the x-z plane is the plane of incidence.

Figure 7: Reflection and refraction by a dielectric planar boundary.

The angle of incidence (θi), angle of reflection (θr), and angle of transmission (θt) are the angles from the respective wave vectors to the boundary normal, which is, in this case, the z axis. The angles must satisfy the law of reflection, Equation (7), and the law of refraction (Snell's law), Equation (8), to match the relative phase at every point on the boundary.

θr = π − θi    (7)

n1 sinθi = n2 sinθt    (8)

The amplitude of the electric field vector of the incident plane wave is divided into an Ais component that is perpendicular, or transverse, to the plane of incidence (TE) and an Aip component that is parallel to the plane of incidence, thereby having a resulting magnetic field vector transverse to the plane of incidence (TM). The amplitudes of the electric field vectors of the reflected and transmitted plane waves are similarly divided into polarization components: Ars, Arp, Ats, and Atp, respectively. The Fresnel formulae, Equations (9)-(12), result from the application of the boundary conditions of Maxwell's equations.

rs = Ars /Ais = (n1 cosθi − n2 cosθt) / (n1 cosθi + n2 cosθt)    (9)

rp = Arp /Aip = (n2 cosθi − n1 cosθt) / (n2 cosθi + n1 cosθt)    (10)

ts = Ats /Ais = 2n1 cosθi / (n1 cosθi + n2 cosθt)    (11)

tp = Atp /Aip = 2n1 cosθi / (n2 cosθi + n1 cosθt)    (12)

The reflectivity (ℜ) and transmissivity (ℑ) are given in Equations (13) and (14), respectively, and obey the law of conservation of energy, Equation (15).

ℜ = |r|²    (13)

ℑ = (n2 cosθt / n1 cosθi) |t|²    (14)

ℜ + ℑ = 1    (15)
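Equations (9)-(15) can be sketched in a few lines of code. Using a complex cosθt lets the same expressions handle angles beyond the critical angle, where the transmissivity drops to zero; the Si-to-air numbers are illustrative.

```python
import cmath, math

def fresnel(n1, n2, theta_i):
    """Fresnel amplitude coefficients, Equations (9)-(12)."""
    cos_i = math.cos(theta_i)
    sin_t = n1 * math.sin(theta_i) / n2      # Snell's law, Equation (8)
    cos_t = cmath.sqrt(1 - sin_t**2)         # imaginary above the critical angle
    rs = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)   # Eq. (9)
    rp = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)   # Eq. (10)
    ts = 2 * n1 * cos_i / (n1 * cos_i + n2 * cos_t)              # Eq. (11)
    tp = 2 * n1 * cos_i / (n2 * cos_i + n1 * cos_t)              # Eq. (12)
    return rs, rp, ts, tp

def R_and_T(n1, n2, theta_i, pol="s"):
    """Reflectivity and transmissivity, Equations (13)-(14)."""
    rs, rp, ts, tp = fresnel(n1, n2, theta_i)
    r, t = (rs, ts) if pol == "s" else (rp, tp)
    cos_t = cmath.sqrt(1 - (n1 * math.sin(theta_i) / n2)**2)
    R = abs(r)**2
    T = (n2 * cos_t / (n1 * math.cos(theta_i)) * abs(t)**2).real
    return R, T

# Light leaving Si (n1 = 3.5) into air, below and above the critical angle:
R1, T1 = R_and_T(3.5, 1.0, math.radians(10.0))
R2, T2 = R_and_T(3.5, 1.0, math.radians(30.0))
print(R1 + T1)   # energy conservation, Equation (15)
print(R2, T2)    # total internal reflection: all light is reflected
```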

The phase change on transmission (δt) and phase change on reflection (δr) are given in Equations (16) and (17), respectively. Note that the phase change on transmission (δt) is always equal to zero.[44]

δt = arg{t} = tan⁻¹(−i (t − t∗) / (t + t∗))    (16)

δr = arg{r} = tan⁻¹(−i (r − r∗) / (r + r∗))    (17)

If n1 is greater than n2, the transmissivity is zero when θi is above the critical angle (θc), which is defined as sin⁻¹(n2 /n1). In my theoretical model, I will only examine the transmitted or refracted waves. The electromagnetic optics theory for the refraction of light by a stratified medium consisting of a single homogenous layer will be used to characterize the behavior of light at the interface between the planar surface of the object and the planar surface of the NAIL, when it is in use. A stratified medium has properties that only vary along one direction, and may be composed of multiple homogenous layers or an inhomogeneous structure. The reflected and transmitted monochromatic plane waves resulting from a monochromatic plane wave incident on a stratified linear isotropic medium between two linear homogeneous isotropic media can be derived using the boundary conditions of Maxwell's equations.[38] I will find the amplitude of the plane wave transmitted through a single linear homogenous isotropic layer with a thickness of h, as illustrated in Figure 8.

Figure 8: Refraction by a stratified medium.

The refractive indices of the first, second, and third media are n1, n2, and n1, respectively, and all of the media are nonmagnetic. According to the law of refraction, the angle of incidence (θi) is equal to the angle of transmission (θt), and the angle in the stratified medium (θ2) is given by Equation (18).

θ2 = sin⁻¹((n1 /n2) sinθ1)    (18)

I use the characteristic matrix method to solve for a transmitted s-polarized plane wave with the intermediate variables (β, p1, and p2), given in Equations (19)-(21).[38] The transmission coefficient (ts) is given in Equation (22), and the resulting transmissivity (ℑs) and phase change on transmission (δts) are given in Equations (23) and (24), respectively.

β = k0 n2 h cosθ2    (19)

p1 = n1 cosθ1    (20)

p2 = n2 cosθ2    (21)

ts = Ats /Ais = 1 / [cosβ − i ((p1² + p2²) / (2p1 p2)) sinβ]    (22)

ℑs = |ts|² = 1 / [1 + (((p1² − p2²) / (2p1 p2)) sinβ)²]    (23)

δts = tan⁻¹(−i (ts − ts∗) / (ts + ts∗)) = tan⁻¹(((p1² + p2²) / (2p1 p2)) tanβ)    (24)

To solve for a transmitted p-polarized plane wave, p1 and p2 in Equations (22)-(24) are replaced with the intermediate variables (q1 and q2), given in Equations (25) and (26), respectively.

q1 = (1/n1) cosθ1    (25)

q2 = (1/n2) cosθ2    (26)

If n1 is greater than n2, and h is much less than λ0, then optical contact between the first and third media is established, and light above the critical angle is coupled from the first medium to the third medium.
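This frustrated total internal reflection is easy to see numerically from Equations (19)-(23): the complex cosθ2 makes β imaginary above the critical angle, and the transmissivity then falls off with gap thickness. The Si-air-Si geometry and 30° angle below are illustrative assumptions.

```python
import cmath, math

def gap_transmissivity_s(n1, n2, h_over_lambda0, theta1):
    """s-polarized transmissivity through a single layer (n1 | n2 | n1),
    Equations (19)-(23). Complex angles handle frustrated total internal
    reflection when theta1 exceeds the critical angle."""
    cos1 = math.cos(theta1)
    sin2 = n1 * math.sin(theta1) / n2
    cos2 = cmath.sqrt(1 - sin2**2)
    beta = 2 * math.pi * n2 * h_over_lambda0 * cos2   # Eq. (19), k0 = 2*pi/lambda0
    p1, p2 = n1 * cos1, n2 * cos2                     # Eqs. (20)-(21)
    ts = 1.0 / (cmath.cos(beta) - 1j * (p1**2 + p2**2) / (2 * p1 * p2) * cmath.sin(beta))
    return abs(ts)**2                                 # Eq. (23)

# Si | air gap | Si (n1 = 3.5, n2 = 1) at 30 degrees, above the critical angle:
T_thin = gap_transmissivity_s(3.5, 1.0, 0.01, math.radians(30))   # h << lambda0
T_thick = gap_transmissivity_s(3.5, 1.0, 1.00, math.radians(30))  # h ~ lambda0
print(T_thin, T_thick)   # near unity for optical contact; near zero otherwise
```

This is the quantitative sense in which the NAIL must be in "optical contact" with the object: a gap of a hundredth of a wavelength transmits almost everything, while a gap of one wavelength transmits essentially nothing above the critical angle.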


The electromagnetic optics theory for the transmissivities at all of the surfaces of the object and NAIL has been presented. To quantify the relative amount of light transmitted through the optical system, I use the collection efficiency (η), which is equal to the ratio of the power transmitted into air (Pt) to the power incident on the planar boundary of the object (Pi). Equation (27) integrates the total transmissivities of the optical system to yield the collection efficiency for a uniform emitter of unpolarized light in the object space.

η = Pt /Pi = (1/2) ∫₀^(π/2) (ℑs + ℑp) sinθ dθ    (27)
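Equation (27) can be evaluated by simple numerical quadrature. The sketch below treats only a single bare planar boundary from an assumed Si object into air (not the full NAIL system of the text), using the Fresnel transmissivities of Equations (13)-(14); above the critical angle the transmissivity is zero and nothing is collected.

```python
import cmath, math

def transmissivity(n1, n2, theta_i, pol):
    """Fresnel transmissivity, Equations (13)-(14), for one polarization."""
    sin_t = n1 * math.sin(theta_i) / n2
    cos_t = cmath.sqrt(1 - sin_t**2)
    cos_i = math.cos(theta_i)
    if pol == "s":
        t = 2 * n1 * cos_i / (n1 * cos_i + n2 * cos_t)
    else:
        t = 2 * n1 * cos_i / (n2 * cos_i + n1 * cos_t)
    return (n2 * cos_t / (n1 * cos_i) * abs(t)**2).real

def collection_efficiency(n, steps=20000):
    """Midpoint-rule evaluation of Equation (27) for a single planar boundary."""
    d = (math.pi / 2) / steps
    total = 0.0
    for i in range(steps):
        th = (i + 0.5) * d
        total += (transmissivity(n, 1.0, th, "s")
                  + transmissivity(n, 1.0, th, "p")) * math.sin(th) * d
    return 0.5 * total

eta = collection_efficiency(3.5)
print(eta)   # only a few percent of the emitted light escapes a bare Si surface
```

The few-percent result quantifies the planar surface barrier discussed in Chapter 1, and is the baseline the NAIL is meant to improve upon.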

The electromagnetic optics theory for the PSF of an optical system with a high angular semiaperture will be used to characterize the behavior of light at the focus in the object space, both without and with a NAIL. The PSF of an optical system with a high angular semiaperture is equal to the intensity in the object space, resulting from a point source at the origin of the image space. If the optical system has rotational symmetry about the optical axis, then the plane of incidence for refraction is always a meridional plane for rays from a point source at the center of the image space. Therefore, the transmission coefficients of the optical system components can be multiplied, to yield the total transmission coefficients ts and tp. Consideration of a high angular semiaperture beam focused at the origin in the object space requires a vector field theory model of diffraction.[45] An angular spectrum of plane waves approach, as described by Wolf, can be used to find the electromagnetic fields at any point near the focus, with cylindrical coordinates (ρ,φ,z).[46] Ichimura et al. formulated the electromagnetic fields for an optical system, consisting of a SIL and a magneto-optical disk, based on a formulation for an aplanatic system by Richards and Wolf.[15,47] I use parts of the Wolf and Ichimura formulations, to construct my formulation, in which light from the origin of the image space is uniform, the polarization cross-terms are equal to zero, and an additional phase change (δ) is introduced to account for optical aberrations. The wave vector in free space (k0) is equal to 2π /λ0, and the wave vector at the formation of the focus (k) is equal to nk0. The resulting complex amplitudes (E and H) are given by Equations (28)-(32).

E = ( −iA(I0 + I2 cos2φ), −iA I2 sin2φ, −2A I1 cosφ )    (28)

H = (1/η) ( −iA I2 sin2φ, −iA(I0 − I2 cos2φ), −2A I1 sinφ )    (29)

I0 = ∫₀^θa (ts + tp cosθ) sinθ J0(kρ sinθ) e^(ikz cosθ + iδ) dθ    (30)

I1 = ∫₀^θa tp sin²θ J1(kρ sinθ) e^(ikz cosθ + iδ) dθ    (31)

I2 = ∫₀^θa (ts − tp cosθ) sinθ J2(kρ sinθ) e^(ikz cosθ + iδ) dθ    (32)

The resulting time averaged Poynting vector (⟨S⟩) and the intensity (|⟨S⟩|) are given in Equations (33) and (34), respectively. It must be noted that the incident beam is polarized; however, the intensity for unpolarized light remains the same.


⟨S⟩ = (1/2) Re{E × H∗} = (A²/2η) ( 2cosφ Im(I1∗(I0 − I2)), 2sinφ Im(I1∗(I0 − I2)), |I0|² − |I2|² )    (33)

|⟨S⟩| = (A²/2η) √[ (|I0|² − |I2|²)² + 4 (Im(I1∗(I0 − I2)))² ]    (34)
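A minimal numerical sketch of these integrals, under the simplifying assumptions ts = tp = 1 and δ = 0 (an aberration free, uniformly transmitting system): on the optical axis (ρ = 0) the Bessel functions J1(0) = J2(0) = 0 make I1 = I2 = 0, so the intensity of Equation (34) reduces to |I0|² from Equation (30). The Si index, wavelength, and 60° semiaperture are illustrative choices.

```python
import cmath, math

def I0_axis(z, k, theta_a, steps=400):
    """On-axis value of Equation (30) with ts = tp = 1, delta = 0, rho = 0
    (so J0 = 1), evaluated by midpoint-rule quadrature."""
    d = theta_a / steps
    total = 0.0 + 0.0j
    for i in range(steps):
        th = (i + 0.5) * d
        total += (1 + math.cos(th)) * math.sin(th) * cmath.exp(1j * k * z * math.cos(th)) * d
    return total

n, lam0, theta_a = 3.5, 1.3, math.radians(60)   # assumed Si, NIR, 60 deg semiaperture
k = n * 2 * math.pi / lam0                       # k = n k0
peak = abs(I0_axis(0.0, k, theta_a))**2

z, dz = 0.0, 0.005                               # scan for the half-intensity point
while abs(I0_axis(z, k, theta_a))**2 > peak / 2:
    z += dz
fwhm = 2 * z
print(f"longitudinal FWHM ~ {fwhm:.2f} um")
```

By the Houston criterion adopted earlier, this axial FWHM is the longitudinal spatial resolution of the assumed system; the sub-micrometer result reflects the large k = nk0 inside the medium.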

2.3 Geometrical optics

Geometrical optics, which is governed by the eikonal equation, is the approximation of electromagnetic optics where λ0 approaches zero. In geometrical optics, light is a ray that propagates in paths determined by the refractive index of a medium, and in a homogenous medium these paths are straight lines. The principles of geometrical optics are used to characterize the behavior of light in the remaining areas of the object and NAIL for mathematical simplicity. I will consider, first, imaging within the realm of Gaussian optics, which is a further approximation of geometrical optics, then, primary longitudinal chromatic aberration that occurs within the realm of Gaussian optics, and finally, the geometrical aberrations that occur outside of the realm of Gaussian optics, to complete the domain of geometrical optics. Gaussian or paraxial optics is a first order approximation of geometrical optics that considers only rays at small angles relative to the optical axis, where sinθ is approximately equal to θ. Within the realm of Gaussian optics, a perfect projective image of an object plane can be produced by refraction from a spherical dielectric boundary. As illustrated in Figure 9, the first and second media have refractive indices of nA and nB, respectively.


Figure 9: Imaging according to Gaussian optics.

The spherical dielectric boundary between the first and second media passes through the origin and has a center at PC with cylindrical coordinates (0,0,−R). A point in the object plane, PA with cylindrical coordinates (ρA,φA,zA), is imaged by refraction to a point in the image plane, PB with cylindrical coordinates (ρB,φB,zB), within the realm of Gaussian optics, according to Equations (35)-(37).

nA (1/R + 1/zA) = nB (1/R + 1/zB)    (35)

ρB = ρA / [ (−zA /R)(nB /nA − 1) + 1 ]    (36)

φB = φA    (37)

The lateral magnification (Mρ) is given by Equation (38), and does not change within the object plane at zA. The longitudinal magnification (Mz) is given by Equation (39), and continually varies with respect to zA.

Mρ = dρB /dρA = 1 / [ (−zA /R)(nB /nA − 1) + 1 ]    (38)

Mz = dzB /dzA = (nB /nA) / [ (−zA /R)(nB /nA − 1) + 1 ]²    (39)
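The consistency of these relations can be checked numerically: Equation (35) fixes the conjugate point zB, and the longitudinal magnification of Equation (39) should match a finite-difference derivative of zB with respect to zA. The indices and geometry below are arbitrary illustrative values.

```python
import math

def image_z(z_a, R, n_a, n_b):
    """Solve Equation (35) for z_B."""
    return 1.0 / ((n_a / n_b) * (1.0 / R + 1.0 / z_a) - 1.0 / R)

def M_z(z_a, R, n_a, n_b):
    """Longitudinal magnification, Equation (39)."""
    return (n_b / n_a) / ((-z_a / R) * (n_b / n_a - 1.0) + 1.0) ** 2

z_a, R, n_a, n_b = -2.0, 1.0, 3.5, 1.0    # arbitrary illustrative values
eps = 1e-6
numeric = (image_z(z_a + eps, R, n_a, n_b) - image_z(z_a - eps, R, n_a, n_b)) / (2 * eps)
print(numeric, M_z(z_a, R, n_a, n_b))     # the two values agree
```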

If PB, as a result of Equations (35)-(37), is located below the spherical boundary in the region where the refractive index is nA, then it clearly cannot be a real image point. However, after refracting from the boundary, all of the rays diverge from this virtual image point, PB, as opposed to converging to a real image point. The refractive index of a dispersive medium varies with the free space wavelength of light. The deviations in imaging caused by dispersion are termed chromatic aberrations. The longitudinal displacement of a Gaussian object or image point is a longitudinal chromatic aberration. If the first medium in Figure 9 is dispersive and the second medium is nondispersive, and PB is fixed, then the primary, or first order, longitudinal chromatic aberration of the object point is given by Equation (40), which was found by differentiating Equation (35) with respect to wavelength.

dzA = (zA²/nA)(1/R + 1/zA) dnA    (40)

The deviations from the Gaussian optics approximation of imaging that occur for rays at large angles, relative to the optical axis, are termed geometrical aberrations. The optical path length (OPL) of a ray, from PA to a reference sphere centered at PB, varies with the direction of the ray from PA, which is given in spherical coordinates (θ,ψ). It must be noted that the optical path length moving backwards along a ray to a virtual image point is negative. If the radius of the reference sphere is much greater than the size of the aberrations, then rays encounter the reference sphere at near normal angles, allowing the aberrations to be expressed as a scalar function. The wave aberration function (Φ) for a ray is defined as the difference between the OPL of the ray and the OPL of a reference ray at the center of the exit pupil, as given in Equation (41).

Φ(ρA,φA,zA,θ,ψ) = OPL(ρA,φA,zA,θ,ψ) − OPL(ρA,φA,zA,0,0)    (41)

In a power series expansion of the wave aberration function for a rotationally symmetric optical system, the first set of nonzero terms are of the fourth order and called the primary, or Seidel, aberrations.[38] In my theoretical optical model the angular semiaperture is too large for the wave aberration function to be approximated by the primary aberrations; however, their existence will be noted.

2.4 Imaging subsurface structures in an object without a NAIL

I will now create a theoretical optical model for imaging subsurface structures in an object with a planar surface, without a NAIL. I will begin with the Gaussian optics approximation of imaging. The planar dielectric boundary between the object, with a refractive index of n, and air, with a refractive index of unity, passes through the origin, and by definition has an infinite radius (R). As illustrated in Figure 10, using the same notation as Section 2.3, an object point, PA, is imaged by refraction to a virtual image point, PB. According to Equations (35)-(37), ρB is equal to ρA, φB is equal to φA, and zB is equal to zA /n. The lateral magnification is equal to unity according to Equation (38), and the longitudinal magnification is equal to 1/n, according to Equation (39). The primary longitudinal chromatic aberration is equal to zA dn /n, according to Equation (40). When imaging a subsurface structure within the object, -zA is close to X, so approximations for the imaging characteristics may be obtained by setting -zA equal to X.

Figure 10: Imaging parameters for an object with a planar surface.

I will now find the geometrical aberrations that become significant as θ increases. Ray 1, shown in Figure 10, is parallel to the optical axis, originating from PA and passing through PB. It is defined as the reference ray, and the exit pupil is defined as the planar surface of the object. Ray 2 results from an angle of θ, and intersects the planar boundary at PD. The distances from PA to PD and PB to PD are defined as M and N, respectively. Equations (42) and (43) give values for M and N, which result from trigonometry and the Pythagorean Theorem, respectively. The optical path length (OPL) from PA to PD and

back to PB is given in Equation (44), and the resulting wave aberration function (Φ) is given in Equation (45). The wave aberration function is independent of ρA, φA, and ψ, because of the planar geometry, so only spherical aberration proportional to zA is present. The primary spherical aberration (Φ040) term in the power series expansion of the wave aberration function, as a function of sinθ, is given in Equation (46).

M = \frac{-z_A}{\cos\theta}    (42)

N = \sqrt{z_B^2 - z_A^2 + M^2}    (43)

OPL(z_A, \theta) = nM - N = \frac{-z_A}{\cos\theta}\left[n - \sqrt{\cos^2\theta\left(\frac{1}{n^2} - 1\right) + 1}\right]    (44)

\Phi(z_A, \theta) = OPL(z_A, \theta) - OPL(z_A, 0) = z_A\left[n - \frac{1}{n} - \frac{1}{\cos\theta}\left(n - \sqrt{\cos^2\theta\left(\frac{1}{n^2} - 1\right) + 1}\right)\right]    (45)

\Phi_{040} = -z_A\,\frac{n}{8}\left(n^2 - 1\right)\sin^4\theta    (46)

Figure 11: Wave aberration functions, as a function of θ.
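Equations (45) and (46) can be compared numerically to see where the primary term stops being a good estimate of the full wave aberration function. A Python sketch, assuming n ≈ 3.5 for Si at a λ0 of 1.3 µm and an illustrative depth of 100 µm:

```python
import math

n = 3.5  # refractive index of Si at lambda0 = 1.3 um (assumed)

def phi(z_A, theta):
    """Wave aberration function for a planar surface, Equation (45)."""
    s = math.sqrt(math.cos(theta)**2 * (1 / n**2 - 1) + 1)
    return z_A * ((n - 1 / n) - (n - s) / math.cos(theta))

def phi_040(z_A, theta):
    """Primary spherical aberration term, Equation (46)."""
    return -z_A * n * (n**2 - 1) * math.sin(theta)**4 / 8

z_A = -100e-6  # illustrative subsurface depth of 100 um
# The primary term matches the full function only at small theta; near
# the critical angle (~0.29 rad) the two diverge, as Figure 11 shows.
for theta in (0.05, 0.15, 0.29):
    print(theta, phi(z_A, theta), phi_040(z_A, theta))
```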

Figure 11 shows the wave aberration function and primary spherical aberration in Si at a λ0 of 1.3 µm, on both logarithmic and linear scales, as a function of θ, up to the critical angle (θc) of 0.29 radians. Clearly, in this angular domain the primary spherical aberration is not a good estimate for the wave aberration function.[39] The transmission coefficients for refraction at the planar surface of the object are given by the Fresnel Formulae, in Equations (47) and (48). Figure 12 shows the transmissivity (T), for s and p polarized light in Si at a λ0 of 1.3 µm, as a function of θ. The collection efficiency, as defined by Equation (27), for Si at a λ0 of 1.3 µm is 0.03, obviously very low.

t_s = \frac{2\sqrt{1 - n^2\sin^2\theta}}{\sqrt{1 - n^2\sin^2\theta} + n\cos\theta}    (47)

t_p = \frac{2\sqrt{1 - n^2\sin^2\theta}}{n\sqrt{1 - n^2\sin^2\theta} + \cos\theta}    (48)

Figure 12: Transmissivity (T), as a function of angle (θ).
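A sketch of the Fresnel coefficients of Equations (47) and (48), assuming n ≈ 3.5 for Si; it also reproduces the critical angle of about 0.29 radians quoted above:

```python
import math

n = 3.5  # Si at lambda0 = 1.3 um (assumed)

def t_s(theta):
    """Equation (47): s-polarized amplitude transmission coefficient."""
    c = math.sqrt(1 - (n * math.sin(theta))**2)
    return 2 * c / (c + n * math.cos(theta))

def t_p(theta):
    """Equation (48): p-polarized amplitude transmission coefficient."""
    c = math.sqrt(1 - (n * math.sin(theta))**2)
    return 2 * c / (n * c + math.cos(theta))

# Rays from the object point only escape below the critical angle.
theta_c = math.asin(1 / n)
print(theta_c)          # ~0.29 rad, as quoted in the text
print(t_s(0.0), t_p(0.0))  # both equal 2/(n+1) at normal incidence
```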

Using the vector field formulation, the intensity of light, which results from unpolarized uniform illumination of PB, is calculated in the vicinity of PA, for several values of zA in Si at a λ0 of 1.3 µm. The additional phase change (δ) from the geometrical aberrations is equal to k0Φ, and the new focus coordinate z is equal to zA - z. Figure 13 shows the intensity along the optical axis at ρ equals zero, as a function of z. Figure 14 shows the intensity in the lateral plane of peak intensity, as a function of ρ.

Figure 13: Intensity along the optical axis at ρ equals zero, as a function of z.

Figure 14: Intensity in the lateral plane of peak intensity, as a function of ρ.


Table 3 shows the imaging characteristics at several values of zA. For the most practical depths, above 65 µm, the spherical aberration must be corrected by the objective lens or elsewhere, to operate at diffraction limited spatial resolution. To achieve these values of spatial resolution and collection efficiency would require an objective lens with an NA of 1, so in practice the values are significantly degraded.

Table 3: Imaging characteristics at several values of zA.

               Strehl ratio   Lateral spatial resolution   Longitudinal spatial resolution
Corrected      1              0.71 µm                      8.4 µm
zA = -65 µm    0.80           0.72 µm                      9.1 µm
zA = -130 µm   0.40           0.73 µm                      19.7 µm
zA = -260 µm   0.19           0.96 µm                      52.7 µm
zA = -520 µm   0.09           1.28 µm                      119 µm

2.5  Imaging subsurface structures in an object with a NAIL

I will now create a theoretical optical model for imaging subsurface structures in an object with a planar surface, with a NAIL. I will first create a geometrical optics model of a sphere, with a refractive index of n, in air, with a refractive index of unity, and then extend this model to the object and NAIL. I start with the Gaussian optics approximation of imaging. As illustrated in Figure 15, using the same notation as Section 2.3, an object point, PA, is imaged by refraction to a virtual image point, PB. New longitudinal position variables, A and B, which are equal to -R-zA and -R-zB, respectively, are introduced and yield Equations (49)-(53) from Equations (35)-(39).

Figure 15: Imaging parameters for a sphere.

B=

ρB =

n A (1 − n ) + 1 R n A (1 − n ) + 1 R

A

(49)

ρA

(50)

ϕB = ϕ A Mρ =

n A (1 − n ) + 1 R

36

(51) (52)

Mz =

n A   (1 − n ) + 1 R 

2

(53)
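The Gaussian sphere relations, Equations (49), (52), and (53), can be checked against the two special cases treated later in this chapter (central, A = 0, and aplanatic, A = R/n). A Python sketch, assuming n ≈ 3.5 and the R = 1.61 mm radius used in the test case:

```python
n = 3.5  # refractive index of the sphere (Si, assumed)

def gaussian_sphere(A, R):
    """Gaussian imaging by a sphere, Equations (49), (52), (53):
    returns B, lateral magnification, longitudinal magnification."""
    d = (A / R) * (1 - n) + 1
    B = n * A / d
    M_rho = n / d
    M_z = n / d**2
    return B, M_rho, M_z

R = 1.61e-3  # NAIL radius used in the test case
# Central solution (A = 0): B = 0, both magnifications equal n.
print(gaussian_sphere(0.0, R))
# Aplanatic solution (A = R/n): B = R*n, M_rho = n**2, M_z = n**3.
print(gaussian_sphere(R / n, R))
```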

Next, I will find the geometrical aberrations that become significant as θ increases. Ray 1, shown in Figure 15, originates from PA, passes through PC, and is collinear with PB, in accordance with the projective nature of Equations (49)-(51). It is defined as the reference ray, and the exit pupil is defined as the surface of the sphere. The distances from PA to PC and PB to PC are defined as K and L, respectively. Ray 2 results from an angle of γ, and intersects the spherical boundary at PD. The distances from PA to PD and PB to PD are defined as M and N, respectively. The law of cosines yields Equation (54). Completing the square yields Equation (55). Finally, solving for M yields Equation (56).

R^2 = K^2 + M^2 - 2KM\cos\gamma    (54)

\left(M - K\cos\gamma\right)^2 = R^2 - K^2 + K^2\cos^2\gamma    (55)

M = K\cos\gamma + \sqrt{R^2 - K^2\sin^2\gamma}    (56)

The law of cosines yields Equations (57) and (58). Substituting for cos(ξ) yields Equation (59). Finally, solving for N, using Equation (54) to reduce the power of M, yields Equation (60).

M^2 = R^2 + K^2 - 2RK\cos\xi    (57)

N^2 = R^2 + L^2 - 2RL\cos\xi    (58)

N^2 = R^2 + L^2 + \frac{L}{K}\left(M^2 - R^2 - K^2\right)    (59)

N = \sqrt{R^2 + L^2 + 2L\left(M\cos\gamma - K\right)}    (60)

The optical path length from PA to PD and back to PB is given in Equation (61), and the resulting wave aberration function is given in Equation (62).

OPL(K, L, \gamma) = nM - N = nK\cos\gamma + n\sqrt{R^2 - K^2\sin^2\gamma} - \sqrt{R^2 + L^2 + 2L\left(-K\sin^2\gamma + \cos\gamma\sqrt{R^2 - K^2\sin^2\gamma}\right)}    (61)

\Phi(K, L, \gamma) = OPL(K, L, \gamma) - OPL(K, L, 0) = R(1-n) + nK\left(\cos\gamma - 1\right) + L + n\sqrt{R^2 - K^2\sin^2\gamma} - \sqrt{R^2 + L^2 + 2L\left(-K\sin^2\gamma + \cos\gamma\sqrt{R^2 - K^2\sin^2\gamma}\right)}    (62)
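Equation (62) can be verified numerically: at both stigmatic solutions described next, the wave aberration vanishes for every ray angle γ. A Python sketch, assuming n ≈ 3.5 and the R = 1.61 mm radius used in the test case:

```python
import math

n = 3.5      # sphere index (Si, assumed)
R = 1.61e-3  # sphere radius used in the test case

def wave_aberration(K, L, gamma):
    """Equation (62): wave aberration for object distance K and image
    distance L (both measured from the center PC) at ray angle gamma."""
    root = math.sqrt(R**2 - (K * math.sin(gamma))**2)
    opl = (n * K * math.cos(gamma) + n * root
           - math.sqrt(R**2 + L**2
                       + 2 * L * (-K * math.sin(gamma)**2
                                  + math.cos(gamma) * root)))
    opl_axis = n * (K + R) - (R + L)  # gamma = 0 reference ray
    return opl - opl_axis

# Both stigmatic solutions give zero aberration at every angle gamma.
for gamma in (0.2, 0.6, 1.0):
    print(wave_aberration(0.0, 0.0, gamma))      # central: K = L = 0
    print(wave_aberration(R / n, R * n, gamma))  # aplanatic: K = R/n, L = R*n
```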

So far, the wave aberration function has been derived purely from geometry, and the Gaussian relationship between K and L has not applied. This was done to show that there are two solutions for K and L, where the wave aberration function is equal to zero for all γ, and therefore the imaging is stigmatic, or sharp. In the central solution, both K and L are equal to zero, which occurs when both PA and PB are at PC. For the central solution the rays are normal to the spherical boundary and refraction does not change their direction. In the aplanatic solution, K and L are equal to R/n and Rn, respectively, and as previously mentioned PA, PB, and PC are collinear. The points on the spherical surface, defined by K equal to R/n in the aplanatic solution, are termed the aplanatic points of the spherical boundary. The Gaussian optics relationships between PA and PB, Equations (49)-(51), yield the values for K and L given in Equations (63) and (64), in terms of the object point and ray coordinates. Taking the scalar product of the line from PA to PC and the line from PA to


PD in Cartesian coordinates, Equation (65), yields the value for γ given in Equation (66) in terms of the object point and ray coordinates.

K = \sqrt{A^2 + \rho_A^2}    (63)

L = \frac{n\sqrt{A^2 + \rho_A^2}}{\frac{A}{R}(1-n) + 1}    (64)

\overrightarrow{P_A P_C} \cdot \overrightarrow{P_A P_D} = \begin{pmatrix} -\rho_A\cos\varphi_A \\ -\rho_A\sin\varphi_A \\ A \end{pmatrix} \cdot \begin{pmatrix} M\cos\psi\sin\theta \\ M\sin\psi\sin\theta \\ M\cos\theta \end{pmatrix} = KM\cos\gamma    (65)

\cos\gamma = \frac{-\rho_A\sin\theta\cos\left(\varphi_A - \psi\right) + A\cos\theta}{\sqrt{A^2 + \rho_A^2}}    (66)

The geometrical optics model of the sphere that I just derived can be extended to the object and NAIL by including three additional effects. The first effect is the angular semiaperture limit introduced by the object and NAIL parameters, which results from geometry, and is given in Equation (67).

\sin\theta_a = \left[\frac{X^2}{R^2 - \left(X - A\right)^2} + 1\right]^{-1/2}    (67)

The second effect is the transmission across the spherical surface of the NAIL, which is governed by the Fresnel Formulae. The third effect is the transmission across an air gap that exists in practice between the object and NAIL. Using the stratified medium model, the transmissivity (T ) and phase change on transmission (δt), across an air gap between a Si object and NAIL at a λ0 of 1.3 µm, are calculated for varying air gap height (h) and shown in Figure 16.

Figure 16: Transmissivity (T ) and phase change on transmission (δt), as a function of angle (θ), for a gap height (h) of λ0 divided by (a) 10, (b) 20, (c) 40, and (d) 80.

Simulations for the transmission across an air gap between an object and a SIL, in the visible spectrum, were previously conducted by Milster et al.[48,49]

2.5.1  Central NAIL

The central NAIL is based on the central stigmatic solution, where A and B are both equal to zero, so Equation (68) must be satisfied.

D = R - X    (68)

For a central NAIL, R should be selected to be less than the working distance of the microscope's objective lens. The lateral and longitudinal magnifications are both equal to n, according to Equations (38) and (39), respectively. The primary longitudinal chromatic aberration is equal to zero. If ρA is small, compared to R, the angle of incidence (α in Figure 15) for refraction at the spherical surface of a central NAIL is approximately equal to zero, so the transmission coefficients are given by the Fresnel Formulae for normal incidence in Equation (69). The collection efficiency, as defined by Equation (27), without a gap for Si, at a λ0 of 1.3 µm, is 0.69.

t_s = t_p = \frac{2}{n + 1}    (69)
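A numerical sketch of the central NAIL design rule, Equation (68), and the normal-incidence amplitude transmission of Equation (69). The corresponding intensity transmissivity at normal incidence, 4n/(n+1)², evaluates to the 0.69 quoted above; the imaging depth X used here is illustrative:

```python
n = 3.5  # Si (assumed)

def central_nail_thickness(R, X):
    """Equation (68): thickness D of a central NAIL for imaging depth X."""
    return R - X

# Equation (69): amplitude transmission at normal incidence.
t = 2 / (n + 1)
# Normal-incidence intensity transmissivity from air into the medium,
# T = 4n/(n+1)**2, which matches the quoted 0.69 for Si.
T = 4 * n / (n + 1)**2
print(central_nail_thickness(1.61e-3, 0.5e-3), t, T)
```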

At the central stigmatic solution, the only primary aberrations are astigmatism and distortion. I will now introduce a new variable ∆, which is the longitudinal displacement from the stigmatic solution. For the central NAIL, A is equal to ∆, and zA is equal to -R-∆. The first term (Φc) in the power series expansion of the wave aberration function along the optical axis around the central stigmatic solution, as a function of ∆, is given in Equation (70), which is identical to the expression given by Baba et al.[21] Again, in this angular domain the primary spherical aberration (Φ040), given in Equation (71), is not a good estimate for the wave aberration function.[39]

\Phi_c = 2n\left(n - 1\right)\frac{\Delta^2}{R}\sin^4\left(\frac{\theta}{2}\right)    (70)

\Phi_{040} = n\left(n - 1\right)\frac{\Delta^2}{8R}\sin^4\theta    (71)

These wave aberration functions are shown in Figure 17 for a central NAIL with R equal to 1.61 mm and ∆ equal to -50 µm. The wave aberration functions, Φ and Φc, are approximately the same at this value of ∆.

Figure 17: Wave aberration functions, as a function of θ.

Using the vector field formulation, the intensity of light, which results from unpolarized uniform illumination of PB, using a central NAIL with R equal to 1.61 mm, is calculated in the vicinity of PA for several values of ∆ in Si at a λ0 of 1.3 µm. The additional phase change (δ) from the geometrical aberrations is equal to k0Φ, and the new focus coordinate z is equal to -R-∆-z. Figure 18 shows the intensity along the optical axis at ρ equals zero, as a function of z. Figure 19 shows the intensity in the lateral plane of peak intensity, as a function of ρ.


Figure 18: Intensity along the optical axis at ρ equals zero, as a function of z.

Figure 19: Intensity in the lateral plane of peak intensity, as a function of ρ.

Table 4 shows the imaging characteristics at several values of ∆. Thus, the absolute value of ∆ must be less than 22 µm, to operate at diffraction limited spatial resolution. However, significant gains in spatial resolution still exist when the Strehl ratio is equal to 0.4, at ∆ equal to -32 µm. To achieve these values of spatial resolution and collection efficiency would require an objective lens with an NA of 1, so in practice the values are significantly degraded.

Table 4: Imaging characteristics.

             Strehl ratio   Lateral spatial resolution   Longitudinal spatial resolution
∆ = 0        1              0.17 µm                      0.34 µm
∆ = -22 µm   0.80           0.17 µm                      0.36 µm
∆ = -30 µm   0.48           0.18 µm                      0.66 µm
∆ = -32 µm   0.40           0.18 µm                      0.73 µm
∆ = -40 µm   0.29           0.20 µm                      1.41 µm
∆ = -50 µm   0.18           0.21 µm                      2.40 µm
∆ = -60 µm   0.14           0.23 µm                      3.74 µm

2.5.2  Aplanatic NAIL

The aplanatic NAIL is based on the aplanatic stigmatic solution, where A and B are equal to R/n and Rn, respectively, so Equation (72) must be satisfied.

D = R\left(1 + \frac{1}{n}\right) - X    (72)

For an aplanatic NAIL, R should be selected such that R(n+1) is less than the working distance of the microscope's objective lens. The lateral and longitudinal magnifications are equal to n² and n³, according to Equations (38) and (39), respectively. The primary longitudinal chromatic aberration is equal to R(n+1) dn/n³. The angle of incidence (α in Figure 15) for refraction at the spherical surface of an aplanatic NAIL is given in Equation (73), according to the law of sines. If ρA is small compared to A, then K and γ are approximately equal to R/n and θ, respectively, yielding Equation (74). The transmission coefficients are given by the Fresnel Formulae in Equations (75) and (76). Figure 20 shows the transmissivity (T) for s and p polarized light in Si at a λ0 of 1.3 µm, as a function of θ. The collection efficiency, as defined by Equation (27), without a gap for Si at a λ0 of 1.3 µm is 0.62.

\frac{\sin\alpha}{K} = \frac{\sin\gamma}{R}    (73)

\alpha = \sin^{-1}\left(\frac{\sin\theta}{n}\right)    (74)

t_s = \frac{2\cos\theta}{\cos\theta + n\cos\alpha}    (75)

t_p = \frac{2\cos\theta}{n\cos\theta + \cos\alpha}    (76)

Figure 20: Transmissivity (T), as a function of angle (θ).
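The aplanatic design rule, Equation (72), and the refraction relations, Equations (74)-(76), can be sketched in Python (n ≈ 3.5 for Si is assumed; the imaging depth X is illustrative):

```python
import math

n = 3.5  # Si (assumed)

def aplanatic_nail_thickness(R, X):
    """Equation (72): thickness D of an aplanatic NAIL for depth X."""
    return R * (1 + 1 / n) - X

def alpha(theta):
    """Equation (74): angle of incidence at the spherical surface."""
    return math.asin(math.sin(theta) / n)

def t_s(theta):
    """Equation (75): s-polarized amplitude transmission coefficient."""
    return 2 * math.cos(theta) / (math.cos(theta) + n * math.cos(alpha(theta)))

def t_p(theta):
    """Equation (76): p-polarized amplitude transmission coefficient."""
    return 2 * math.cos(theta) / (n * math.cos(theta) + math.cos(alpha(theta)))

print(aplanatic_nail_thickness(1.61e-3, 0.5e-3))
print(alpha(0.0), t_s(0.0), t_p(0.0))  # normal incidence: 2/(n+1) again
```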

At the aplanatic stigmatic solution, the only primary aberrations are curvature of field and distortion. For the aplanatic NAIL, A is equal to R/n+∆, and zA is equal to -R(1+1/n)-∆. The first term (Φa) in the power series expansion of the wave aberration function along the optical axis around the aplanatic stigmatic solution, as a function of ∆,


is given in Equation (77). The wave aberration functions, Φ and Φa, for an aplanatic NAIL with R equal to 1.61 mm and ∆ equal to -8 µm, are approximately the same, as shown in Figure 21.

\Phi_a = n\Delta\left(\cos\theta - 1 + n^2 - n\sqrt{n^2 - \sin^2\theta}\right)    (77)

Figure 21: Wave aberration functions, as a function of θ.

Using the vector field formulation, the intensity of light, which results from unpolarized uniform illumination of PB, using an aplanatic NAIL, with R equal to 1.61 mm, is calculated in the vicinity of PA for several values of ∆, in Si at a λ0 of 1.3 µm. The additional phase change (δ) from the geometrical aberrations is equal to k0Φ, and the new focus coordinate z is equal to -R(1+1/n)-∆-z. Figure 22 shows the intensity along the optical axis at ρ equals zero, as a function of z. Figure 23 shows the intensity in the lateral plane of peak intensity, as a function of ρ.


Figure 22: Intensity along the optical axis at ρ equals zero, as a function of z.

Figure 23: Intensity in the lateral plane of peak intensity, as a function of ρ.

Table 5 shows the imaging characteristics at several values of ∆. Thus, the absolute value of ∆ must be less than 1 µm, to operate at diffraction limited spatial resolution. However, significant gains in spatial resolution still exist when the Strehl ratio is equal to 0.4, at ∆ equal to -3.2 µm.


Table 5: Imaging characteristics.

              Strehl ratio   Lateral spatial resolution   Longitudinal spatial resolution
∆ = 0         1              0.19 µm                      0.42 µm
∆ = -1 µm     0.80           0.20 µm                      0.51 µm
∆ = -2 µm     0.54           0.21 µm                      0.73 µm
∆ = -3.2 µm   0.40           0.22 µm                      0.92 µm
∆ = -4 µm     0.34           0.23 µm                      1.02 µm
∆ = -8 µm     0.20           0.26 µm                      2.30 µm
∆ = -16 µm    0.12           0.30 µm                      3.33 µm

The intensity of light is then calculated in the vicinity of PA for several values of air gap height (h), with ∆ equal to zero. Figure 24 shows the intensity along the optical axis at ρ equals zero, as a function of z. Figure 25 shows the intensity in the lateral plane of peak intensity, as a function of ρ.

Figure 24: Intensity along the optical axis at ρ equals zero, as a function of z.


Figure 25: Intensity in the lateral plane of peak intensity, as a function of ρ.

Table 6: Imaging characteristics.

                      Strehl ratio   Lateral spatial resolution   Longitudinal spatial resolution   Collection efficiency
h = 0                 1              0.19 µm                      0.42 µm                           0.62
h = λ0/166 = 7.8 nm   0.80           0.20 µm                      0.44 µm                           0.40
h = λ0/80 = 16.3 nm   0.65           0.20 µm                      0.44 µm                           0.32
h = λ0/40 = 32.5 nm   0.47           0.21 µm                      0.45 µm                           0.24
h = λ0/31 = 41.4 nm   0.40           0.21 µm                      0.46 µm                           0.21
h = λ0/20 = 65 nm     0.27           0.22 µm                      0.48 µm                           0.16
h = λ0/10 = 130 nm    0.09           0.23 µm                      0.53 µm                           0.07
h = λ0/6 = 217 nm     0.026          0.27 µm                      0.68 µm                           0.03
h = λ0/4 = 325 nm     0.008          0.35 µm                      2.57 µm                           0.02

Table 6 shows the imaging characteristics at several values of h, including the resulting changes in collection efficiency, as defined by Equation (27), for Si at a λ0 of 1.3 µm. Thus, h must be less than 7.8 nm, to operate at diffraction limited spatial resolution. However, significant gains in spatial resolution still exist when the Strehl ratio is equal to 0.4, at h equal to 41 nm. To achieve these values of spatial resolution and collection efficiency requires an objective lens with an NA of 1/n, which is usually available in practice.

2.6  Comparison

I will now compare the results for the test case in Si at a λ0 of 1.3 µm, and explain the improvements provided by the NAIL. The resulting values shown in Table 7 compare well to the approximations for the diffraction-limited spatial resolution given in Table 2. Of course, the ultimate limit on spatial resolution is approximately λ0/(4n), but reaching it requires super-resolution techniques.[50] The lateral and longitudinal spatial resolution improvements resulting from the NAIL are approximately factors of n and n²(1+cosθc), which in this case are 3.5 and 24, respectively. The effective volume of the focus is reduced by a factor of approximately n⁴(1+cosθc), which in this case is 294. The improvement in collection efficiency is greater than a factor of twenty. To achieve the stated values of spatial resolution for a conventional optical microscope requires an objective lens that corrects for the spherical aberration. To achieve the stated values of spatial resolution and collection efficiency with a conventional optical microscope, both without and with a central NAIL, would require an objective lens with an NA of 1, so in practice the values will be significantly degraded. However, the stated values of spatial resolution and collection efficiency can be achieved in practice with an aplanatic NAIL and an objective lens with an NA of 1/n.
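The quoted improvement factors follow directly from n and the critical angle. A Python sketch, assuming n ≈ 3.5 for Si at a λ0 of 1.3 µm:

```python
import math

n = 3.5                    # Si at lambda0 = 1.3 um (assumed)
theta_c = math.asin(1 / n) # critical angle of the bare planar surface

lateral_gain = n                                # lateral resolution gain
longitudinal_gain = n**2 * (1 + math.cos(theta_c))  # longitudinal gain
volume_reduction = n**4 * (1 + math.cos(theta_c))   # focal volume reduction
print(lateral_gain, longitudinal_gain, volume_reduction)  # ~3.5, ~24, ~294
```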


Table 7: Comparison of imaging characteristics.

                         Lateral spatial resolution   Longitudinal spatial resolution   Collection efficiency
corrected conventional   0.71 µm                      8.4 µm                            0.03
aplanatic NAIL           0.19 µm                      0.42 µm                           0.62
central NAIL             0.17 µm                      0.34 µm                           0.69

Figure 26 shows the intensity in the object plane at z equals zero, as a function of ρ, and Figure 27 shows the intensity along the optical axis at ρ equals zero, as a function of z. The intensity for the planar/corrected case has to be placed on a separate scale because the light-gathering power is so low.

Figure 26: Intensity in the object plane at z equals zero, as a function of ρ.


Figure 27: Intensity along the optical axis at ρ equals zero, as a function of z.

Another advantage of using a NAIL is the additional magnification it provides, as shown in Table 8. If the object and NAIL are affixed to each other, then the additional magnification greatly reduces the positioning noise due to vibration, and relaxes the scanning requirements, so that less accurate, and therefore less costly, scanning stages can be used for positioning of the object and NAIL.

Table 8: Comparison of magnifications.

                 Lateral magnification   Longitudinal magnification
conventional     1                       1/n
aplanatic NAIL   n²                      n³
central NAIL     n                       n

Another advantage of using a NAIL is the reduction in longitudinal chromatic aberration, as shown in Table 9. The example used is thermal emission microscopy of a 1 mm thick Si IC using an InSb detector, both without and with a NAIL having R equal to 1.61 mm. Most of the light is at free space wavelengths between 4 µm and 5 µm, where the refractive index of Si monotonically varies from 3.4294 to 3.4261, respectively.

Table 9: Comparison of longitudinal chromatic aberration in the object space.

                 general          thermal emission microscopy
conventional     X dn/n           1.0 µm
aplanatic NAIL   R(n+1) dn/n³     0.6 µm
central NAIL     0                0

My theoretical optical modeling indicates that both the central and aplanatic NAIL provide significant improvement in spatial resolution and light-gathering power. Furthermore, improvements in magnification and lower NA requirements for the objective lens provide significant cost savings when using a NAIL.


Chapter 3 EXPERIMENTAL PROCEDURES FOR USING A NAIL

In this chapter, I will describe a general set of experimental procedures for using a NAIL in a conventional optical microscope. I have tested various alternative experimental procedures, and have found the following particular set of procedures to be the most practical and reliable. There are four main phases: (1) cleaning the planar surfaces of the object and NAIL, (2) positioning and imaging the object without the NAIL, (3) placing and laterally positioning the NAIL on the object, and (4) longitudinally positioning and imaging the object with the NAIL. I also present several examples of possible variations in procedures for different experiments.

3.1  Cleaning the planar surfaces of the object and NAIL

A finite gap will exist between the planar surfaces of any object and NAIL for two reasons: surface planarity and contamination. In practice, the surfaces of the object and NAIL are not perfectly planar, because of fabrication limitations in the production of the surfaces. The surface planarity tolerance is a design variable of any object and NAIL, and cannot be adjusted during an experiment. Contaminants on the planar surfaces of the object and NAIL also increase the size of the gap between the planar surfaces. Thus, the first phase of experimental procedures is the cleaning of the planar surfaces of the object and NAIL. Oil from hands and mechanical equipment, such as the microscope stage, is a common contaminant that can be reduced by using gloves and cleaning the object and NAIL with solvents. Microparticulate dust, present in the air, is another common contaminant that settles on and attaches to the planar surfaces of the object and NAIL. Ideally, experiments would be conducted in a clean room, where the planar surfaces of the object and NAIL are exposed to very little dust. However, good results can still be obtained in a normal dusty environment, if the planar surfaces of the object and NAIL are not exposed until the experiment is conducted. I will now detail the methods by which oil, dust, and other contaminants may be easily removed using successive solvents. I use acetone, methanol, and deionized water, in that order, for a Si object and NAIL, but there are many other choices available depending on the materials of the object and NAIL. Placing the object in successive ultrasonic baths of these solvents provided the best results. Often, however, the object must be affixed to the microscope stage prior to cleaning, so an ultrasonic bath cannot be used for cleaning the planar surface of the object. In this case, first scrub the planar surface of the object with a piece of lens tissue or a cotton swab that is wet with the first solvent. Then use the lens cleaning technique of drawing a piece of lens tissue, wetted with a solvent, across the planar surface of the object. A new clean piece of lens tissue is necessary for each successive solvent. If a small amount of dust accumulates after cleaning with the successive solvents, then blowing the surface with clean canned air will remove most of the dust.


3.2  Positioning and imaging the object without the NAIL

The object will now be mounted to the microscope stage, which will control the position of the object. All actions that might accidentally displace the object, such as electrically connecting the object to external devices with wires or cleaning the planar surface of the object, should be completed before starting to position the object. The positions of the object in the lateral directions (x and y) are relative, and their quantification is not important for explaining the experimental procedures. However, all changes in positions should be recorded for the sake of accurate experimental documentation and repeatability. The position of the object in the longitudinal direction (z) will be defined as zero, when the imaging planar surface, the planar surface above the subsurface structure within the object, is in focus, as illustrated in Figure 28(a), and as positive, when the object is moved closer to the objective lens, as illustrated in Figure 28(b). First, adjust z to bring the scratches and digs on the imaging surface of the object into focus. Moving laterally to the edge of the image planar surface of the object can aid in this coarse focusing. Record the current value of the longitudinal position on the microscope stage, as the first approximation for z equals zero. In objects with two parallel planar surfaces, such as a Si IC, there are many reflected images of subsurface structures within the object. Thus, it is important to anticipate the longitudinal position of the true image of a subsurface structure within the object, in relation to the overall layout of the object, to avoid mistaking one of the reflected images for the true image. The longitudinal distance from the imaging planar surface of the object to the subsurface structure within the object is defined as X. The magnification due to refraction at the imaging planar

surface of the object causes the subsurface structure to come into focus at z equals X/n. Adjust z to bring the subsurface structure into focus, and adjust x and y to center the subsurface structure, as illustrated in Figure 28(b). Record the current value of the longitudinal position on the microscope stage as the value for z equals X/n. Now that the subsurface structure within the object is centered, again adjust z, to bring the scratches and digs on the planar surface of the object into focus, as illustrated in Figure 28(a). The previous approximation for z equals zero at a different lateral position is probably no longer accurate due to sample tilt, so record the current value of the longitudinal position on the microscope stage as the exact value for z equals zero. The object will be moved longitudinally in later steps, but take special care not to move the object laterally, now that the subsurface structure is centered.
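The z = X/n focusing rule above can be sketched as a one-line helper (assuming a Si object with n ≈ 3.5; the 500 µm depth is illustrative):

```python
n = 3.5  # refractive index of the Si object (assumed)

def focus_position(X):
    """Refraction at the planar surface makes a structure at depth X
    come into focus at stage position z = X/n (Section 3.2)."""
    return X / n

print(focus_position(500e-6))  # a 500 um deep structure focuses near 143 um
```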

Figure 28: Imaging the (a) planar surface of the object and (b) subsurface structure within the object.


3.3  Placing and laterally positioning the NAIL on the object

Many pickups for SILs and NAILs have been suggested that would greatly simplify the experimental procedures for placing and centering, because of their repeatability.[12,14,15,39] An inflexible pickup attached to a NAIL with a millimeter scale diameter is not practical, because any angular misalignment will significantly increase the gap between the planar surfaces of the object and NAIL. Gravity is the simplest way to keep the planar surfaces of the object and NAIL together, so a flexible pickup attached to the NAIL, or a pickup that is not permanently attached to the NAIL, would be practical for routine use of the NAIL. Developing a pickup would have taken significantly more time than conducting my experiments to verify the NAIL technique without a pickup. That adaptation has been left for future consideration. I will now describe the experimental procedures for placing and centering the NAIL on the object without a pickup. Manually place the NAIL on the planar surface of the object with either vacuum tweezers or standard tweezers. Vacuum tweezers hold an object at the tip of a small tube by applying a vacuum source to the other end of the tube, and then release the object by removing the vacuum source. The tip of vacuum tweezers may be either plastic or metal, but a plastic tip is much less likely to scratch the NAIL. Likewise, when standard tweezers are used, the ones with plastic tips are preferable, but the ones with metal tips can also be used after wrapping with lens tissue. Figure 29 shows a photograph of an aplanatic Si NAIL placed on a Si IC, mounted to the microscope stage by two small pieces of tape, for the experiment described in Section 4.3. This photograph illustrates the problem that there is often very little working

distance between the objective lens and the stage of a microscope during the imaging process. In this experiment, the planar surface of the object was large enough to allow me to place the NAIL on the planar surface of the object, outside of the path of light between the subsurface structure in the object and the microscope, before conducting the procedures described in Section 3.2.

Figure 29: Photograph of an aplanatic Si NAIL placed on a Si IC.

For coarse lateral positioning, manually move the NAIL on the object by pushing the NAIL with a cotton swab or plastic tweezers until it appears in the image. Figure 30 shows a conventional image taken by the commercial NIR inspection microscope, described later in Section 4.4, of a Si IC, while laterally positioning an aplanatic Si NAIL. The edge of the NAIL will be in focus, because the NAIL is in contact with the planar surface of the object, which was previously brought into focus. Next, finely center the NAIL above the subsurface structure within the object, using one of the following two methods. One method is to use an additional stage with an implement attached to push the NAIL over the object. The second method is to use inertial sliding, often referred to as the frictional stick-slip method, which moves the NAIL by either applying a sawtooth electric

waveform to the piezoelectric microscope stage, or by tapping different parts of the stage.[51] Once the NAIL is centered above the subsurface structure within the object, the NAIL and object are moved as a single entity.

Figure 30: Conventional image of a Si IC during lateral positioning of an aplanatic Si NAIL (bottom right).

3.4  Longitudinally positioning and imaging the object with the NAIL

While adjusting the longitudinal position of the object and NAIL, many features will appear and disappear in the image. I will discuss the important features, but there are often many other reflected features, or ghosts, as well. The important features are the brightest, and provide a good reference for longitudinal position. First, adjust z until the top of the spherical surface of the NAIL comes into focus, as illustrated in Figure 31(a). Record the current value of the longitudinal position on the microscope stage as the value

for z equals –D. This image will verify that the NAIL is well centered over the subsurface structure within the object. Next, adjust z until the interface between the planar surfaces of the object and NAIL comes into focus, as illustrated in Figure 31(b). In a reflection microscope, the intensity of this image indicates the degree of optical contact between the object and NAIL. Record the current value of the longitudinal position on the microscope stage as the value of z, given by Equation (78). z=

1 −D n 1− n + D R

(78)

Figure 31: Imaging the (a) top of the NAIL, (b) interface, (c) center of the spherical surface of the NAIL, (d) subsurface structure within the object.

The NAIL should have been designed for a longitudinal distance of X, from the planar surface of the object to the subsurface structure within the object. Thus, the point at which the subsurface structure should come into focus is at z equals R-D, for a central NAIL, or z equals R(n+1)-D, for an aplanatic NAIL. In a reflection microscope, a single intense point in the middle of the image will appear when the center of the spherical surface of the NAIL comes into focus at z equals R-D, because of the reflection from the spherical surface of the NAIL. This occurs for a central NAIL when the subsurface structure within the object is in focus, and for an aplanatic NAIL at an intermediate point, as illustrated in Figure 31(c). Finally, adjust z until the subsurface structure within the object comes into focus and then record the current value of the longitudinal position on the microscope stage, as illustrated in Figure 31(d) for an aplanatic NAIL. If the subsurface structure is not in the middle of the image, use one of the two methods for laterally positioning the NAIL on the object, described in Section 3.3, to finely center the subsurface structures in the image. Note that moving the NAIL on the object in any lateral direction shifts the image in the opposite direction. Figure 32 shows an image of the same Si IC shown in Figure 30, but taken through the aplanatic Si NAIL.

Figure 32: Image of the Si IC with the aplanatic Si NAIL.
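The focus positions used as references in this section can be collected into a short numerical check. This is an illustrative sketch, not part of the original procedure; it uses Equation (78) for the interface, the stated positions of the other features, and the NAIL parameters from Chapter 4 (R = 1.61 mm, D = 1.5 mm, n ≈ 3.6) as example values.

```python
def focus_positions(n, R, D):
    """Stage positions z at which the reference features of a NAIL come into
    focus, with z = 0 at the focus of the planar surface of the object."""
    z_top = -D                                   # top of the spherical surface
    z_interface = D / (n - (n - 1) * D / R) - D  # object/NAIL interface, Eq. (78)
    z_center = R - D                             # center of the spherical surface
    z_aplanatic = R * (n + 1) - D                # subsurface focus, aplanatic NAIL
    return z_top, z_interface, z_center, z_aplanatic

# Example values: aplanatic Si NAIL with R = 1.61 mm, D = 1.5 mm, n(Si) ~ 3.6
print(focus_positions(3.6, 1.61, 1.5))
```

As a consistency check, setting D equal to R in the interface expression reproduces z = R-D, the focus of the center of the spherical surface.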


3.5 Variations in the experimental procedures

The experimental procedures vary somewhat with the object and microscope used. I will first describe an experiment in which the characteristics of the object affected the procedures.[52] One such object had self-assembled InGaAs quantum dots epitaxially grown on a GaAs wafer. The improved lateral spatial resolution provided by a GaAs NAIL allowed the collection of photoluminescence emission from individual quantum dots. Furthermore, the improvement in the amount of light collected, provided by the GaAs NAIL, allowed time-resolved photoluminescence spectroscopy measurements on individual quantum dots. These measurements required the object to be cooled in a cryostat to 77 K by liquid N2. Fortunately, the scan relaxation resulting from the magnification of the NAIL allowed the use of stepper motors instead of piezoelectric elements for positioning and scanning the cryostat, object, and NAIL, without sacrificing spatial resolution. The object and NAIL could not be moved relative to each other once they were placed in the cryostat, but the distribution of quantum dots over the surface was uniform, so this was not necessary. Thus, the experimental procedures described in Sections 3.2 and 3.3 were not necessary, and instead the cryostat, object, and NAIL were moved as a single entity to center the NAIL in the image. I will now describe how the characteristics of the microscope affect the procedures. As we have seen, using a camera with a wide field of view and moderate resolution for visual inspection is essential to aligning the NAIL. When using the commercial NIR inspection microscope, I could easily interchange its multiple parfocal objective lenses without significant position inaccuracy. This allowed different magnifications for coarse positioning, fine positioning, and image acquisition. On the other hand, when using our thermal emission microscope, described in Chapter 5, with a single objective lens and a separate leg for visual inspection for positioning, the process proved more difficult. These two examples of variation in experimental procedures show that every experimental implementation of NAIL microscopy provides a different set of limitations, and therefore a slightly different set of experimental procedures.


Chapter 4 NEAR-INFRARED INSPECTION MICROSCOPY USING A NAIL

4.1 Introduction to near-infrared inspection microscopy

In this chapter, I will show the improvements the NAIL technique yields in near-infrared (NIR) inspection microscopy of Si ICs. A NIR inspection microscope is a conventional optical microscope that illuminates an object with NIR light and collects the light scattered from the object for visual inspection. Current Si IC technology includes many opaque metal layers and structures above semiconductor devices, thereby hindering topside inspection of these buried devices in their final state. Thus, backside NIR inspection microscopy through the planar Si substrate is used almost exclusively in semiconductor failure analysis. However, optical absorption limits NIR inspection microscopy through the Si substrate to free space wavelengths longer than 1 µm, where the Si substrate is both transparent and homogeneous, allowing light to propagate through it. The absence of light from above the critical angle (θc) ultimately limits the lateral spatial resolution to 0.5 µm and the longitudinal spatial resolution to 7.1 µm for conventional NIR inspection microscopes operating at a free space wavelength (λ0) of 1 µm, where the refractive index (n) of Si is 3.6. Typical lateral spatial resolution values are about 1 µm for state-of-the-art, commercial NIR inspection microscopes.[53] Meanwhile, modern Si IC technology has reached process size scales of 0.09 µm, clearly beyond the imaging capability of conventional NIR inspection microscopy. In semiconductor failure analysis, a NIR inspection microscope is an invaluable tool for mapping the devices of a Si IC. Many other semiconductor failure analysis techniques employ a NIR inspection microscope, so a demonstration of the improvements in NIR inspection microscopy may be extended to these other applications; see Appendix A.[53] The large refractive index of Si at NIR wavelengths offers the potential for significant improvement in spatial resolution, if the refraction at the planar Si surface can be circumvented. The addition of a NAIL allows illumination and collection of light at higher angles of incidence. The NAIL is made of Si, matching the refractive index of the Si substrate, so that light propagates between the Si substrate and the NAIL without refracting at the interface. The diffraction of light theoretically limits the lateral spatial resolution to 0.14 µm and the longitudinal spatial resolution to 0.28 µm for NIR inspection microscopes with a NAIL, operating at a free space wavelength (λ0) of 1 µm. With a NAIL, I experimentally demonstrate a lateral spatial resolution of 0.23 µm and a longitudinal spatial resolution of better than 1.7 µm, as detailed in Section 4.3. Although I do not achieve the theoretical limits of NAIL microscopy, my experimental results represent a significant improvement over state-of-the-art, conventional NIR inspection microscopy. To the best of my knowledge, this experimental spatial resolution is the best, to date, for subsurface NIR inspection microscopy of a Si IC. I conducted three NIR inspection microscopy experiments, which will be detailed in the sections to follow. The preliminary experiment, detailed in Section 4.2, was a coarse test of the NAIL technique that demonstrated a modest improvement in spatial resolution

and verified the principles of NAIL microscopy. The high resolution experiment, detailed in Section 4.3, demonstrated a lateral spatial resolution of better than 0.23 µm and a longitudinal spatial resolution of better than 1.7 µm, which to the best of my knowledge are spatial resolution records for backside NIR inspection microscopy of a Si IC. The third experiment, detailed in Section 4.4, demonstrated the improvements the NAIL yields on a state-of-the-art, commercial NIR inspection microscope.

4.2 Preliminary experiment

I will now present my first experimental demonstration of the improvements the NAIL technique yields in NIR inspection microscopy of Si ICs.[54] This experiment was a coarse test of the NAIL technique, and I had not perfected the experimental procedures or the microscope design at that time. This experiment verified the principles of NAIL microscopy, and thus was a very valuable step towards perfecting NAIL microscopy. I estimate that the lateral spatial resolution attained in this experiment with a NAIL was about 0.3 µm. For this experiment, I required a Si IC with known structures to test the principles of NAIL microscopy. I first attempted to fabricate a Si IC composed of thin metal lines on a Si wafer, using conventional photolithography in a Boston University clean room facility, but failed to create lines with submicron widths. I then decided to use the MOSIS Integrated Circuit Fabrication Service, which offered discounted low volume production of Si ICs for educational institutions.[55] I used the best CMOS VLSI process offered by MOSIS in 1998, the AMI ABN 1.2 process, which had a smallest square unit size of 0.6 µm and a predefined layer structure. As shown in Figure 33, the design had thin lines fabricated in three different layers. Lines in the first layer, ndiff, were made by n+ type diffusion doping into the p type Si substrate. The ndiff layer is the device layer of a Si IC, so imaging the ndiff lines in the MOSIS Si IC approximated imaging the devices in a commercial Si IC for failure analysis. The minimum width and spacing of these lines was 1.2 µm, according to the design rules of the fabrication process.

Figure 33: Layout of fabricated Si IC.

Lines in both the second and third layers were composed of polycrystalline Si and had a thickness of 0.32 µm. The minimum width and spacing of these lines was 0.6 µm, according to the design rules of the fabrication process. Lines in the second layer, poly-1, were vertical and fabricated over a 0.03 µm thick layer of SiO2 on the Si substrate. Lines in the third layer, poly-2, were horizontal and fabricated above another layer of SiO2 on the aforementioned structures. Al pads and lines in another layer, metal-1, were also included to electrically connect the thin lines to an external power source, making the lines heat sources. A final 2 µm thick layer of SiO2 covered all of the aforementioned layers. I mounted the 2.5 mm square Si IC die in a ceramic package, with the planar surface of the Si substrate exposed for NIR inspection microscopy through that Si substrate. I had an aplanatic Si NAIL manufactured with an R of 1.61 mm and a D of 1.5 mm, to match the 0.5 mm wafer thickness of the MOSIS Si IC, in accordance with Equation (72). While the MOSIS Si IC and aplanatic Si NAIL were being manufactured, I built a preliminary NIR microscope, tailored for imaging a Si IC with an aplanatic Si NAIL.

Figure 34: Illustration of the preliminary NIR microscope.

I will now describe the preliminary NIR microscope, which is illustrated in Figure 34, that I built for this experiment. After finishing this experiment, I disassembled and redesigned the microscope for the second experiment. The mechanical components of the microscope were built on an optical table, equipped with pneumatic isolation to minimize the effects of vibration. One large damped rod was attached to the optical table and supported the PbS camera, which in turn supported the other optical components by a 30 mm Linos cage assembly. Three mechanical stages with micrometer drives were used for focusing in the longitudinal direction (z) and for positioning in the lateral directions (x and y). I will now follow the path of light, as I describe the optical components of the microscope. A quartz halogen lamp and a condenser lens below the MOSIS Si IC provided illumination at NIR wavelengths for transmission visual inspection. Light scattered from the thin lines in the MOSIS Si IC first passed through the Si substrate and aplanatic Si NAIL, and was then refracted into air with an NA of 0.28. The light was then collimated by an objective lens with an NA of 0.25 and a working distance of 25 mm. The objective lens had six elements and was corrected for chromatic aberration for wavelengths from 1.5 µm to 5 µm. The objective lens was infinity corrected and had an effective focal length of 18.5 mm, which yielded a beam diameter of 9.25 mm (2f sinθa) in the microscope. I selected a distance from the objective lens to the tube lens and a diameter for the tube lens that did not vignette the image. The tube lens formed the inspection image on the PbS vidicon tube camera made by Hamamatsu Photonics, which detected light at free space wavelengths between 0.4 µm and 1.8 µm. I will now present images of the MOSIS Si IC taken by the preliminary NIR microscope. In Figure 35(a), I show a conventional NIR inspection microscopy image


without the NAIL in place to identify the areas of the Si IC imaged in subsequent figures.

Figure 35: (a) Conventional NIR inspection microscopy image, (b) image of polycrystalline Si lines with a NAIL, (c) image of n+ type diffusion lines with a NAIL, and (d) linecut of data in image (c).

The next two images and linecut were taken with the NAIL in place, which increases the lateral magnification of the microscope by a factor of 13. In Figure 35(b), I show a NAIL image of polycrystalline Si vertical lines having a width and spacing of 0.6 µm with overlapping polycrystalline Si horizontal lines. This image demonstrates the lateral spatial resolution capability, using a NAIL, for a relatively high optical contrast structure with a spatial period of 1.2 µm. It is often desirable to image subtle variations, such as those due to impurity concentrations. In Figure 35(c), I show a NAIL image of n+ type diffusion vertical lines having a width and spacing of 1.2 µm. The difference in absorption levels from the doping provides the optical contrast in these structures, with a spatial period of 2.4 µm. In Figure 35(d), I show a linecut of the image in Figure 35(c). From the linecut, I estimate a lateral spatial resolution of 0.3 µm, while taking into account the low signal to noise ratio. The diffraction of light theoretically limits the lateral spatial resolution to 0.26 µm for NIR inspection microscopes using a NAIL, operating at free space wavelengths up to 1.8 µm. The combined geometry of the NAIL and Si IC further limits the theoretical lateral spatial resolution. Using the theoretical optical model developed in Chapter 2, I calculate a lateral spatial resolution limit of 0.28 µm, in the absence of aberration. I experimentally demonstrated a lateral spatial resolution of about 0.3 µm. Although I did not achieve the theoretical limit of lateral spatial resolution for NIR inspection microscopy using a NAIL, my experimental result still represents an improvement over the 0.9 µm theoretical limit of lateral spatial resolution for conventional NIR inspection microscopes operating at a free space wavelength of 1.8 µm. This experiment was my first attempt at using a NAIL, where I verified the basic principles of NAIL microscopy and demonstrated a significant gain in lateral spatial resolution.

4.3 High resolution experiment

I will now present my second experimental demonstration of the improvements the NAIL technique yields in NIR inspection microscopy of Si ICs.[56,37] This experiment realized close to the full potential of the NAIL, by achieving a lateral spatial resolution of better than 0.23 µm. This result represents a significant improvement over the 0.5 µm theoretical limit of lateral spatial resolution for conventional NIR inspection microscopes. I required a sample with features on the scale of 0.14 µm, the theoretical limit of lateral spatial resolution for NIR inspection microscopy using a NAIL. I could not fabricate a Si IC design at this scale for a reasonable price, so I turned to several commercial semiconductor manufacturers. Intel provided a Static Random Access Memory (SRAM) Si IC, fabricated by a 0.18 µm process, with the Si substrate polished to a thickness of 0.5 mm. Intel did not provide the design and layout of the Si IC for proprietary reasons, so the physical structure of most of the Si IC remained unknown. However, some gratings in the Si IC were marked with the values of width and spacing. I used the same aplanatic Si NAIL, with an R of 1.61 mm and a D of 1.5 mm, from the last experiment, to match the 0.5 mm wafer thickness of the SRAM Si IC. I will now describe a high resolution NIR microscope that I constructed to demonstrate the improvements the NAIL technique yields in subsurface NIR inspection microscopy of a Si IC. A photograph and an illustration of the microscope are shown in Figure 36 and Figure 37, respectively. I will now describe the mechanical, electrical, and optical systems of the microscope.


Figure 36: Photograph of the high resolution NIR microscope.

The mechanical system of the microscope was built on an optical table, equipped with pneumatic isolation to minimize the effects of vibration. Three large rods were attached to the optical table and supported the cage assembly, the backbone of the microscope. I used Linos, Thorlabs, and many custom-machined parts to make the 30 mm cage assembly, in which the optical components of the microscope were housed. The microscope measured a single point at a time with a pinhole and Ge detector, so the Si IC and NAIL had to be mechanically scanned under the objective lens to build a high spatial resolution NIR inspection image. For scanning, I used a Melles Griot piezoelectric scanning stage, with a 200 µm by 200 µm range in the lateral directions (x and y). Note that the lateral precision of the mechanical scan was relaxed by a factor of 13, so a 200 µm lateral mechanical movement of the Si IC and NAIL resulted in a 15 µm lateral movement of the focal point in the object space. The piezoelectric drives had strain gauges for closed loop position feedback, with a lateral mechanical resolution of 20 nm, so that the lateral resolution of the focal point in the object space was 1.5 nm. A mechanical stage with a micrometer drive was used for focusing in the longitudinal direction (z), and had a longitudinal mechanical resolution of 1 µm. The precision of longitudinal mechanical movement was relaxed by an even greater factor of 47, so that the longitudinal resolution of the focal point in the object space was 21 nm. In each direction, the resolution of the focal point in the object space was significantly smaller than the optical spatial resolution, and thus did not contribute significant noise to the measurements. The electrical system of the microscope was controlled by two computers. Images from the PbS camera were displayed on an analog monitor and acquired by the first computer, using an analog frame grabber board. The second computer ran a LabView program that initialized the instruments, controlled the scanning stage, and acquired the signals through a PCI GPIB board and a Data Acquisition (DAQ) board. The LabView program initially scanned the Si IC and NAIL under the objective lens, while acquiring


the preamplified signal from the Ge detector, to build a high spatial resolution NIR inspection image, and then managed the data for viewing and saving.
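The scan-relaxation factors quoted above can be sketched numerically. This is an illustrative check, assuming the aplanatic NAIL's lateral and longitudinal magnifications of approximately n² and n³ from the optical model of Chapter 2, which reproduce the quoted factors of 13 and 47.

```python
n = 3.6                    # refractive index of Si near a 1 um wavelength

M_lat = n ** 2             # lateral magnification of an aplanatic NAIL, ~13
M_long = n ** 3            # longitudinal magnification, ~47

stage_lat_res_nm = 20      # closed-loop lateral resolution of the piezo stage
stage_long_res_nm = 1000   # micrometer drive resolution in z (1 um)

# The magnification relaxes the mechanical precision required of the stage:
print(stage_lat_res_nm / M_lat)    # lateral focal-point resolution, ~1.5 nm
print(stage_long_res_nm / M_long)  # longitudinal focal-point resolution, ~21 nm
```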

Figure 37: Illustration of the high resolution NIR inspection microscope.

I will now follow the path of light, as I describe the optical system of the microscope. A 1.054 µm fiber coupled laser diode from JDS Uniphase and a condenser lens below the SRAM Si IC provided illumination for transmission inspection. Light scattered from the structures in the SRAM Si IC first passed through the Si substrate and aplanatic Si NAIL and was then refracted into air with an NA of 0.28. At this free space wavelength, the refractive index of Si is equal to 3.56, and the extinction coefficient of Si is equal to 9.67x10-5, so up to 91% of the light was absorbed by the SRAM Si IC substrate and aplanatic Si NAIL. The light was then collimated by an objective lens, made by Leitz Wetzlar, with a nominal NA of 0.4. The NA of the objective lens was further limited to 0.3 by an aperture in the back focal plane of the objective lens, to minimize noise from stray light. The objective lens was infinity corrected for visible wavelengths and had an effective focal length of 10 mm, which yielded a beam diameter of 6 mm (2f sinθa) in the microscope. The first silver mirror reflected the optical axis of the beam in the microscope from a vertical direction, a requirement of the objective lens and NAIL, to a horizontal direction, a requirement of the Ge detector. Both planar silver mirrors were elliptically shaped, and designed to accommodate a beam diameter up to 22.3 mm. These silver mirrors easily accommodated the 6 mm beam diameter in the microscope, without vignetting the inspection image on the PbS camera. The mirrors have a surface figure of λ/10 and a reflectance of 98% at a wavelength of 1.054 µm. When the second silver mirror was in place, the camera tube lens formed a visual inspection image on the PbS camera. I selected a distance from the objective lens to the camera tube lens and a diameter for the camera tube lens that did not vignette the image. The PbS camera was used for visual inspection tasks, including locating, centering, and focusing of the volume to be imaged with high spatial resolution by the pinhole and Ge detector. For high spatial resolution NIR inspection microscopy, the second silver mirror was removed and the light was instead transmitted to an achromatic glass tube lens with a

diameter of 25.4 mm and a focal length of 90 mm. The light then passed through a 25 µm pinhole placed in the image plane and was collected by a Ge detector with a single, 5 mm diameter area. The cooled Ge detector, from Judson Technologies, measured NIR light at wavelengths from 0.8 µm to 1.4 µm for high spatial resolution NIR inspection microscopy. If the vacuum in the detector dewar was periodically maintained, the Ge detector remained cooled for more than twelve hours when filled with liquid nitrogen. The diffraction limited spot size in the image plane was 17 µm, so the 25 µm diameter of the pinhole yielded images with a resolution close to the diffraction limit. The distance from the pinhole to the Ge detector was minimized to allow most of the light passing through the pinhole to reach the Ge detector. The entire cage assembly was sealed with black tape to minimize stray light. I will now describe the procedure for the optical alignment of the microscope, starting with all of the optical components removed. The first silver mirror was installed and aligned to 45 degrees using a HeNe laser. The objective lens, glass tube lens, pinhole, and Ge detector were then installed. The fiber output of the laser was mounted on the scanning stage to control its position during alignment. The beam was then coarsely collimated and centered at several points along the microscope, using an infrared detection card, while adjusting the position of the fiber output of the laser. When enough light was incident on the pinhole, the detected signal was used to finely center the beam onto the pinhole by adjusting the lateral position (x,y) of the fiber output of the laser. The second silver mirror was then inserted, so that the beam was reflected upwards to a shear plate collimation tester monitored by a NIR camera to finely collimate the beam by

adjusting the longitudinal position (z) of the fiber output of the laser. The second silver mirror was then removed, and the detector signal was used to focus the beam on the pinhole by adjusting the distance from the glass tube lens to the pinhole. I performed several cycles of these fine alignment procedures to ensure proper alignment of the high spatial resolution, NIR inspection microscope. Finally, I aligned the visual inspection part of the microscope by inserting the second silver mirror and positioning the camera tube lens to image the spot on the PbS camera. After finishing this experiment, I disassembled and redesigned the microscope for the thermal emission microscopy experiment in Chapter 5. I will now present the experimental measurements of the SRAM Si IC, taken with the high resolution NIR inspection microscope. All of the data presented was taken using the Ge detector in the high resolution NIR inspection microscope. After searching the SRAM Si IC, I found a 15 µm by 9 µm area in the SRAM Si IC with features at several size scales and vertical structures. I will examine three parts of the image of the SRAM Si IC taken at best focus, shown in Figure 38.

Figure 38: Image at best focus, of a 15 µm by 9 µm area in the Si IC.
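The "up to 91%" absorption figure quoted in the optical-system description above can be checked with the Beer-Lambert law. This is a sketch, assuming a worst-case optical path of roughly 2 mm (the 0.5 mm substrate plus the 1.5 mm NAIL center thickness); oblique rays traverse a longer path and are absorbed slightly more.

```python
import math

wavelength_um = 1.054   # free space wavelength of the laser diode
kappa = 9.67e-5         # extinction coefficient of Si at this wavelength
path_mm = 0.5 + 1.5     # Si substrate plus aplanatic NAIL center thickness

# Absorption coefficient alpha = 4*pi*kappa / lambda0, here in 1/cm
alpha_per_cm = 4 * math.pi * kappa / (wavelength_um * 1e-4)
transmitted = math.exp(-alpha_per_cm * path_mm * 0.1)  # convert mm to cm

print(f"absorbed: {100 * (1 - transmitted):.0f}%")  # ~90% on the axial path
```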


I first took the linecut, shown in Figure 39(a), to test the lateral spatial resolution on the edge of the largest structure, at the location marked A in Figure 38. The detected signal, shown in Figure 39(a), was fitted with an error function. As discussed in Section 2.1, the detected signal represents a convolution of the NAIL microscope Line Spread Function (LSF) and the physical structure of the edge in the SRAM Si IC. I assumed the physical edge was a step function and deconvolved the LSF of the NAIL microscope, shown in Figure 39(b). The Full Width at Half Maximum (FWHM) of the resulting LSF is 0.23 µm, the lateral spatial resolution of the NAIL microscope, according to the Houston criterion.[7] Since the 0.18 µm fabrication process of the SRAM Si IC produces a finite physical edge width, the actual lateral spatial resolution of the NAIL microscope was better than the measured 0.23 µm.

Figure 39: Linecut across a physical edge with (a) fitted detector signal and (b) NAIL microscope Line Spread Function (LSF) plotted by distance, demonstrating 0.23 µm lateral spatial resolution.
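The edge-response analysis can be reproduced numerically: fitting an error function to the edge scan is equivalent to assuming a Gaussian LSF, whose FWHM follows from the fitted width. A minimal sketch with synthetic data (the measured scan is not reproduced here, so the numbers are illustrative):

```python
import math
import random

def erf_edge(x, x0, sigma):
    """Detected signal model: a step edge convolved with a Gaussian LSF."""
    return 0.5 * (1 + math.erf((x - x0) / (sigma * math.sqrt(2))))

FWHM_PER_SIGMA = 2 * math.sqrt(2 * math.log(2))  # ~2.355 for a Gaussian

# Synthetic edge scan standing in for the real data: FWHM 0.23 um plus noise
sigma_true = 0.23 / FWHM_PER_SIGMA
xs = [i * 0.01 - 0.5 for i in range(101)]
random.seed(0)
data = [erf_edge(x, 0.0, sigma_true) + random.gauss(0, 0.01) for x in xs]

# Brute-force least-squares fit of sigma (x0 held at the known edge position)
best_sigma = min(
    (sum((erf_edge(x, 0.0, s) - d) ** 2 for x, d in zip(xs, data)), s)
    for s in (0.05 + 0.001 * k for k in range(101))
)[1]
print(f"fitted LSF FWHM: {best_sigma * FWHM_PER_SIGMA:.2f} um")
```

In practice, x0 and the signal amplitude would be fitted as well; they are fixed here to keep the sketch short.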


The diffraction of light theoretically limits the lateral spatial resolution to 0.15 µm for NIR inspection microscopes with a NAIL, operating at a free space wavelength of 1.054 µm. The combined geometry of the NAIL and Si IC further limits the theoretical lateral spatial resolution. Using the theoretical optical model developed in Chapter 2, I calculate a lateral spatial resolution limit of 0.16 µm, in the absence of aberration. I experimentally demonstrated a lateral spatial resolution of better than 0.23 µm. This experimental result represents a significant improvement over the 0.5 µm theoretical limit of lateral spatial resolution for conventional NIR inspection microscopes operating at a free space wavelength of 1.054 µm. To the best of my knowledge, this experimental lateral spatial resolution is the best, to date, for subsurface NIR inspection microscopy of a Si IC. To further test the lateral spatial resolution, I took the linecut, shown in Figure 40, across the two closely separated features at the location marked B in Figure 38. The two signal minima are separated by 0.9 µm and demonstrate high contrast. This measurement verifies that the lateral spatial resolution was much better than 0.9 µm, according to the Sparrow criterion.[7]

Figure 40: Linecut across two closely separated features, with the detector signal plotted by distance.
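The diffraction-limited values quoted in this chapter follow a consistent pattern. The closed forms below are an assumption on my part (the dissertation derives its limits from the full model of Chapter 2): a Sparrow-type lateral limit of 0.5 λ0/NA and a longitudinal limit of λ0/(n(1 − cos θ)), where θ is the maximum aperture angle inside the Si. They nevertheless reproduce the quoted numbers.

```python
import math

def lateral_limit(wavelength_um, na):
    """Sparrow-type lateral resolution estimate."""
    return 0.5 * wavelength_um / na

def longitudinal_limit(wavelength_um, n, sin_theta):
    """Longitudinal resolution estimate for a maximum angle theta in the Si."""
    return wavelength_um / (n * (1.0 - math.sqrt(1.0 - sin_theta ** 2)))

n = 3.56  # refractive index of Si at a free space wavelength of 1.054 um
print(lateral_limit(1.054, 1.0))        # conventional (NA capped at 1): ~0.5 um
print(lateral_limit(1.054, n))          # with NAIL (NA = n): ~0.15 um
print(longitudinal_limit(1.054, n, 1))  # with NAIL (sin(theta) = 1): ~0.3 um
```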


To evaluate the longitudinal spatial resolution, I next took successive images of an area around the location marked C in Figure 38, at different defocus distances in the longitudinal direction (z) from the best focus of the largest structure in Figure 38. The image in Figure 41(a) was taken with a longitudinal defocus of -1.7 µm, and a horizontal structure in the center of the image is near focus. The image in Figure 41(b) was taken with no longitudinal defocus and the horizontal structure seen in Figure 41(a) had gone out of focus. The image in Figure 41(c) was taken with a longitudinal defocus of 1.7 µm and two vertical structures in the center of the image are near focus. Although an exact value cannot be determined, because the layout of the complex three dimensional physical structures is unknown, these three images indicate a longitudinal spatial resolution of better than 1.7 µm.

Figure 41: Successive images at different defocus distances in the longitudinal direction, from best focus (a) -1.7 µm, (b) 0 µm, and (c) 1.7 µm.

The diffraction of light theoretically limits the longitudinal spatial resolution to 0.29 µm for NIR inspection microscopes with a NAIL, operating at a free space wavelength of 1.054 µm. The combined geometry of the NAIL and Si IC further limits the theoretical longitudinal spatial resolution. Using the theoretical optical model developed in Chapter 2, I calculate a longitudinal spatial resolution limit of 0.77 µm, in the absence of aberration. I experimentally demonstrated a longitudinal spatial resolution of better than 1.7 µm. This experimental result represents a significant improvement over the 6 µm theoretical limit of longitudinal spatial resolution for conventional NIR inspection microscopes operating at a free space wavelength of 1.054 µm. To the best of my knowledge, this experimental longitudinal spatial resolution is the best, to date, for subsurface NIR inspection microscopy of a Si IC.

4.4 Commercial microscope experiment

I will now present my third experimental demonstration of the improvements the NAIL technique yields in NIR inspection microscopy of Si ICs.[37,57,58] This experiment was conducted on a µAMOS-200, made by Hamamatsu Photonics.[53] The µAMOS-200 is a commercial NIR microscope and is shown in Figure 42. This laser confocal scanning optical microscope operates with critical illumination from a fiber coupled laser diode at a free space wavelength of 1.3 µm, where the refractive index of Si is 3.5. Scanning the focal point in the object space is accomplished by galvanometric scanning mirrors. This microscope is used for an array of semiconductor failure analysis techniques and is also capable of operating at free space wavelengths other than 1.3 µm. For this experiment, I used the same SRAM Si IC and aplanatic Si NAIL, described in Section 4.3. I present images of three different areas of the SRAM Si IC, with and without the aplanatic Si NAIL.


Figure 42: µAMOS-200, a commercial NIR microscope. Photograph courtesy of Hamamatsu Corporation. Unauthorized use not permitted.

Figure 43 displays the improvement, by a factor of 6, in the lateral spatial resolution provided by the NAIL, when imaging an interconnect area of the SRAM Si IC. I show the image obtained using a 100X microscope objective lens in Figure 43(a), compared with that obtained using a 20X microscope objective lens with a NAIL in Figure 43(b).

Figure 43: Images of an interconnect area of the SRAM Si IC, taken using (a) a 100X objective lens and (b) a 20X objective lens with a NAIL.


Features otherwise not visible can be clearly discerned using the NAIL technique. The 100X objective lens is the best available on the microscope and has only a moderate NA of 0.5, due to the limitation of the lens materials available for achromats in the NIR. The increase in NA from the aplanatic NAIL allows for the use of a smaller NA microscope objective lens, without sacrificing the NA in the object space. The 20X objective lens has an NA of 0.4, but when coupled with the aplanatic Si NAIL, the NA in the object space is 3.3. In fact, an objective lens with an NA of 0.3 is sufficient to achieve the best spatial resolution for a NIR inspection microscope using an aplanatic Si NAIL. The combined geometry of the NAIL and SRAM Si IC limits the NA in the object space to 3.3 and the NA in the air between the NAIL and objective lens to 0.27, according to Equation (67). The 10X objective lens has an NA of 0.26, so it accepts virtually all of the light from the NAIL. Additionally, the total lateral magnification of the microscope with the aplanatic Si NAIL is 122, when using the 10X objective lens. This magnification is close to the magnification of the 100X objective lens and yields comparable images. For the aforementioned reasons, I used the 10X objective lens for most of the measurements. Figure 44 displays the improvement in the lateral spatial resolution provided by the NAIL at two different size scales, when imaging a cell area of the SRAM Si IC. The images in Figure 44(a) and Figure 44(b) show a wide field of view, while the images in Figure 44(c) and Figure 44(d) show the finer details of the SRAM Si IC. I show the images obtained using the 100X microscope objective lens in Figure 44(a) and Figure 44(c), compared with the images obtained using a 10X microscope objective lens with a NAIL in Figure 44(b) and Figure 44(d). Features otherwise not visible can be clearly discerned using the NAIL technique, which improves the microscope’s lateral spatial resolution from about 1.7 µm to 0.3 µm.
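The NA and magnification figures quoted above are consistent with the n² angular gain of an aplanatic immersion lens: the object-space NA is roughly n² times the NA accepted in air, and the total lateral magnification is the objective magnification times n². A minimal sketch, assuming a Si refractive index of about 3.5 near 1.3 µm:

```python
import math

n_si = 3.5  # assumed refractive index of Si near 1.3 um

def object_space_na(na_air: float, n: float = n_si) -> float:
    """Object-space NA behind an aplanatic NAIL: the aplanatic
    condition boosts the angular aperture by a factor of n^2."""
    return n * n * na_air

def total_magnification(m_objective: float, n: float = n_si) -> float:
    """Lateral magnification of an objective plus an aplanatic NAIL."""
    return m_objective * n * n

# The 10X objective accepting NA 0.27 from the NAIL:
print(round(object_space_na(0.27), 2))   # ~3.3, as quoted in the text
print(round(total_magnification(10), 1)) # ~122.5, close to the quoted 122
```

Both values match the text's quoted NA of 3.3 in the object space and total magnification of 122.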

Figure 44: Images of a cell area of the SRAM Si IC, taken using (a),(c) a 100X objective lens and (b),(d) a 10X objective lens with a NAIL.


Figure 45 displays the improvement in the lateral spatial resolution provided by the NAIL, when imaging a grating area of the SRAM Si IC. I show the images obtained using the 100X microscope objective lens in Figure 45(a), compared with the images obtained using a 10X microscope objective lens with a NAIL in Figure 45(b). The width of the lines in these gratings is 0.64 µm, and the spacing between the lines in the grating varies from 0.36 µm to 2 µm.

Figure 45: Images of a grating area of the SRAM Si IC, taken using (a) a 100X objective lens and (b) a 10X objective lens with a NAIL.

Figure 46 shows a linecut across the grating in Figure 45, with a spatial period of 2.64 µm. At the center of the spacing between the lines, the signal reaches the lowest possible level. At the center of the lines, the signal does not reach the highest possible level, which is shown to the right, but the contrast is high at 0.8. The resolution in this image is approximately 0.3 µm from the 0.7 µm FWHM of the linecut.


Figure 46: Linecut across a grating with a spatial period of 2.64 µm.

The diffraction of light theoretically limits the lateral spatial resolution to 0.19 µm for NIR inspection microscopes with a NAIL, operating at free space wavelengths of 1.3 µm. The combined geometry of the NAIL and Si IC further limits the theoretical lateral spatial resolution. Using the theoretical optical model developed in Chapter 2, I calculate a lateral spatial resolution limit in the absence of aberration of 0.20 µm. On a state-of-the-art commercial NIR inspection microscope, I experimentally demonstrated a lateral spatial resolution of about 0.3 µm. This experimental result represents a significant improvement over the 0.7 µm theoretical limit of lateral spatial resolution for conventional NIR inspection microscopes operating at free space wavelengths of 1.3 µm. Thus, I have demonstrated the first application of the NAIL technique to NIR inspection microscopy, resulting in significant improvements. In subsurface NIR inspection microscopy of Si integrated circuits, I demonstrated improvements in both the lateral and longitudinal spatial resolutions, well beyond the limits of conventional NIR inspection microscopy on a state-of-the-art, commercial, NIR inspection microscope.
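The 0.19 µm figure quoted above follows from the Abbe limit λ/(2·NA) with the NA raised toward the refractive index of Si. A short sketch, assuming n ≈ 3.5 for Si near 1.3 µm (an assumption of this example, not a value stated here):

```python
def abbe_limit(wavelength_um: float, na: float) -> float:
    """Abbe diffraction limit lambda/(2*NA), in the same length units."""
    return wavelength_um / (2.0 * na)

n_si = 3.5                       # assumed Si refractive index near 1.3 um
d_nail = abbe_limit(1.3, n_si)   # aplanatic NAIL: NA can approach n
d_air = abbe_limit(1.3, 1.0)     # a lens in air is limited to NA < 1
print(f"NAIL limit ~{d_nail:.2f} um")  # ~0.19 um, as quoted in the text
print(f"air limit  ~{d_air:.2f} um")
```

The NAIL value reproduces the 0.19 µm diffraction limit quoted in the text.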

Chapter 5 THERMAL EMISSION MICROSCOPY USING A NAIL

5.1  Thermal microscopy

Heat sources in an object, such as Joule heating in the devices of a Si Integrated Circuit (IC), create a spatial distribution of temperature in the object, according to the channels of thermal transport in the object. In solids, thermal transport occurs primarily by conduction, particles mechanically exchanging energy, and to a much smaller degree by radiation, particles emitting and absorbing energy in the form of photons.[59] A variety of thermal transport models are available to predict the spatial distribution of temperature in solids, when the heat sources are well defined. On a macroscopic scale, the continuum approach of the heat equation is the best model for conduction in solids. In this model, the materials in an object are considered to be continuous media with constant heat capacity and thermal conductivity. However, when the heat sources and the resulting thermal features approach the atomic scale, other mechanical models of conduction, such as the Boltzmann Transport Equation (BTE), are necessary to accurately predict the spatial distribution of temperature.[60,61,62] Although theoretical models play an important part in determining the spatial distribution of temperature in an object, experimental measurements are also necessary, not only to test the validity of theoretical models, but also to determine the thermal

behavior of real objects when their exact structures and conditions remain unknown. A thermal microscope is an instrument that measures the spatial distribution of temperature in an object at the microscopic scale. An object that has a spatial distribution of temperature is, by definition, not in thermal equilibrium. Such an object may, however, be divided into microscopic volumes that have approximately the same temperature and are in thermal equilibrium. In practice, only the aggregate temperature of a microscopic volume can be measured by a thermal microscope, so the scale of the heat sources in an object and the spatial resolution of the thermal microscope should be considered to determine the accuracy of the experimental measurement. I will now cover passive thermal microscopes, which measure the temperature of an object without adding heat to the object. Passive thermal microscopes use either a mechanical or an optical method of interaction with an object to measure the temperature of the object.

5.1.1  Techniques using a mechanical method of interaction

I will first cover the two categories of passive thermal microscopy techniques that use a mechanical method of interaction with an object to measure the temperature of the object. In Scanning Thermal Microscopy (SThM), the first category, the probe of a Scanning Tunneling Microscope or Atomic Force Microscope is placed in contact with the surface of an object and scanned to build a thermal image.[63] Scanning Joule Expansion Microscopy (SJEM) is an indirect SThM technique that measures the displacement of the surface of an object caused by thermal expansion in the object. SJEM requires knowledge of the thermal expansion properties of the materials at the surface of an object to determine the temperature of the object. On the other hand, most SThM

techniques are direct and rely on the conduction of heat from the surface of an object to the probe, which must have a small thermal mass and attain an intimate contact, to reach thermal equilibrium with the surface of the object. To measure the temperature, some of these direct SThM techniques utilize the temperature dependence of the electrical properties of a thermocouple or resistor in the probe, while other direct SThM techniques utilize thermal expansion of the materials in the probe. The spatial resolution in SThM depends on the dimensions of the tip of the probe, and can reach the atomic scale. The scan area of an SThM is usually a few square microns, and the scan time is long, due to the serial nature of scanning to build an image. In the second category of passive thermal microscopy techniques that utilize a mechanical interaction with an object to measure the temperature of the object, a thin film is deposited on the surface of an object. Then, an optical microscope detects changes in the optical properties of the thin film to determine the temperature at the surface of the object. In Liquid Crystal Thermography (LCT), the polarization of light reflected from a thin film of liquid crystals varies with the temperature, while in Fluorescent Micro-Thermography (FMT), the intensity of light emitted from a thin film of fluorescent molecules varies with temperature.[64] The spatial resolutions of LCT and FMT are ultimately limited by the diffraction of the light collected by a conventional optical microscope. The two aforementioned categories of passive thermal microscopy techniques that utilize a mechanical interaction with an object to measure the temperature of the object are only capable of measuring the temperature at the surface of an object.


5.1.2  Techniques using an optical method of interaction

I now present several passive thermal microscopy techniques that use an optical method of interaction with an object to measure the temperature of the object. Micro-Raman spectroscopy, the first technique, measures the inelastic scattering of photons from the surface of an object, with a laser, optical microscope, and spectrometer.[65] The characteristics of the Stokes peak, Anti-Stokes peak, and other features in a Raman spectrum are used to indirectly evaluate the temperature.[66] The intensity of the features in a Raman spectrum depends on the materials at the surface of an object and can be too weak to measure for some objects. Optical Interferometry, the second technique, is similar to SJEM and measures the displacement of the surface of an object caused by thermal expansion in the object to determine the temperature of the object. The displacement causes a phase shift in the light reflected from the surface of an object that is measured by a laser, optical microscope, and interferometer. The refractive index and absorption of the materials in an object change with temperature. These changes in optical properties alter the amplitude and phase of light reflected from an object. In thermoreflectance, the third technique, an optical microscope measures a change in the amplitude of the reflection of light from an object, to determine the temperature of the object.[67,68] On the other hand, in the fourth technique, often described as Optical Beam Deflection, a laser, optical microscope, and interferometer detect a change in the phase of the reflection of light from an object, to determine the temperature of the object.[69,70] Thermoreflectance and Optical Beam Deflection are


indirect and require knowledge of the optical properties of the materials at the surface of an object to determine the temperature of the object. In the final technique, thermal emission microscopy, an optical microscope collects mid-infrared (MIR) photons incoherently radiated from an object.[71] As I will further describe in section 5.2, the number and wavelength of the thermally radiated photons are a function of the temperature in an object. Thus, with these relationships, the temperature in an object can be directly determined from measurement of the thermal radiation. A thermal emission microscope measures the local temperature of each microscopic volume in an object, by independently collecting the light radiated by that volume. The spatial resolution of the five aforementioned optical techniques is ultimately limited by the diffraction of light. However, the Near-Field Scanning Optical Microscopy [72] and Solid Immersion Lens [12] microscopy techniques, described in Chapter 1, may be employed to improve the lateral spatial resolutions beyond the limits of diffraction, for temperature measurement of the surface of an object. For subsurface passive thermal microscopy of solids, only a few of the techniques can be applied. Techniques that use a mechanical method of interaction with an object to measure the temperature of the object cannot interact with volumes located below the object’s surface. Techniques that use an optical method of interaction with the surface of an object, such as micro-Raman spectroscopy and Optical Interferometry, are also incompatible. Thus the only techniques remaining for subsurface passive thermal microscopy are thermoreflectance, Optical Beam Deflection, and thermal emission microscopy. For these remaining techniques to succeed, an object must be transparent

and homogeneous at the wavelengths of light used in each technique, to allow thermal imaging below the surface of the object. Thermal emission microscopy is the only passive thermal microscopy technique that can directly measure the temperatures of subsurface volumes. In many subsurface thermal microscopy applications, such as failure analysis of Si ICs, a planar back surface geometry is characteristic, so the NAIL technique can be applied to improve the spatial resolution and collection efficiency. In this chapter, I will show the improvements the NAIL technique yields in subsurface thermal emission microscopy of Si ICs.

5.2  Thermal emission microscopy

To understand thermal emission microscopy, I will first examine the basic theory of thermal radiation. According to Kirchhoff’s law, at each wavelength (λ), the emissivity (ε(λ)) and absorptance (a(λ)) of an object at thermal equilibrium are equal. An object with constant emissivity (ε), which is independent of wavelength, is termed either a blackbody or greybody. A blackbody has an emissivity of unity, and a greybody has an emissivity that is a constant less than unity. The radiated power density (S) from the surface of an object in thermal equilibrium, as a function of wavelength and temperature, is given by the Planck Radiation Formula, Equation (79).[73] Figure 47 shows the radiated power density of a blackbody at three different temperatures.

S(λ, T) = ε(λ) · (2πc²h / λ⁵) · 1 / (e^(hc/(λkT)) − 1)  [W/m³]    (79)

Figure 47: Radiated power density (S) from a blackbody, as a function of wavelength and temperature.

For a blackbody or greybody, the peak wavelength (λpeak) of the radiated power density, as a function of temperature, is given by Wien's Displacement Law, Equation (80).[74] Figure 48 shows λpeak in the range of temperatures used in our experiments between room temperature, 295 K, and the melting point of passivated aluminum, 873 K.[71]

λpeak = 2.898 × 10⁻³ (m·K) / T    (80)

Figure 48: Peak wavelength (λpeak) of the radiated power density, as a function of temperature, for a blackbody or greybody.
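Equations (79) and (80) can be checked against each other numerically: the wavelength that maximizes the Planck power density should match Wien's prediction. A brief sketch with approximate physical constants:

```python
import math

h = 6.626e-34   # Planck constant (J s)
c = 2.998e8     # speed of light (m/s)
k = 1.381e-23   # Boltzmann constant (J/K)

def planck_power_density(wavelength: float, temperature: float,
                         emissivity: float = 1.0) -> float:
    """Radiated power density S(lambda, T) of Equation (79), in W/m^3."""
    return (emissivity * 2.0 * math.pi * c * c * h / wavelength ** 5
            / (math.exp(h * c / (wavelength * k * temperature)) - 1.0))

T = 500.0  # K, an arbitrary test temperature
wavelengths = [i * 1e-9 for i in range(1000, 30000)]  # 1 um to 30 um
peak = max(wavelengths, key=lambda w: planck_power_density(w, T))
print(f"numerical peak: {peak * 1e6:.2f} um")
print(f"Wien's law:     {2.898e-3 / T * 1e6:.2f} um")  # both ~5.80 um
```

The brute-force maximum of Equation (79) lands on the Wien wavelength of Equation (80) to within the sampling step.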


If the object is a blackbody or greybody, the radiant exitance (M) is obtained by integrating the Planck Radiation Formula over all wavelengths, yielding the Stefan-Boltzmann Law, Equation (81), and the Stefan-Boltzmann constant, Equation (82).[75]

M(T) = ε σ T⁴    (81)

σ = 2π⁵k⁴ / (15h³c²)  [W/(m²·K⁴)]    (82)
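Equation (82) can be evaluated directly from the fundamental constants; it reproduces the familiar value σ ≈ 5.67 × 10⁻⁸ W/(m²·K⁴). A small sketch:

```python
import math

h = 6.62607015e-34   # Planck constant (J s)
c = 2.99792458e8     # speed of light (m/s)
k = 1.380649e-23     # Boltzmann constant (J/K)

# Stefan-Boltzmann constant from Equation (82)
sigma = 2.0 * math.pi ** 5 * k ** 4 / (15.0 * h ** 3 * c ** 2)

def radiant_exitance(temperature: float, emissivity: float = 1.0) -> float:
    """Radiant exitance M(T) of Equation (81), in W/m^2."""
    return emissivity * sigma * temperature ** 4

print(f"sigma = {sigma:.4e} W/(m^2 K^4)")        # ~5.6704e-08
print(f"M(295 K) = {radiant_exitance(295.0):.1f} W/m^2")
```

For a blackbody at the room temperature of 295 K used in the experiments, the exitance works out to roughly 430 W/m².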

Now that I have covered the theory of thermal radiation, let us see how thermal emission microscopes collect thermal radiation and determine the temperature of an object. Single-band thermal emission microscopes collect radiation from an object with a single detector. The Stefan-Boltzmann Law is used to determine the temperature of an object from the detected signal. When the emissivity of an object and the optical loss mechanisms within the object and microscope are unknown, a single-band thermal emission microscope is incapable of determining the absolute temperature of the object without calibration. However, using a calibration standard, single-band thermal emission microscopes can determine the absolute temperature in an object. Multiband thermal emission microscopes collect radiation from an object with multiple detectors in different bands, or ranges of wavelength. The wavelength dependence of the Planck Radiation Formula, Equation (79), is used to determine the temperature, typically by the ratio of signals from two detectors.[76] A single-band thermal emission microscope is the most common type used in failure analysis of Si ICs and is the type I chose to build for our experiments.
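The two-detector ratio method can be illustrated numerically: for a greybody, a constant emissivity cancels in the ratio of band-integrated signals, and the ratio is monotonic in temperature, so it can be inverted by bisection. The 3–4 µm and 4–5 µm bands below are my own illustrative choice, not the design of any particular instrument:

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # approximate physical constants

def planck(wl: float, T: float) -> float:
    """Spectral power density of Equation (79) with unit emissivity, W/m^3.
    A constant greybody emissivity cancels in any two-band ratio."""
    return 2 * math.pi * c * c * h / wl ** 5 / math.expm1(h * c / (wl * k * T))

def band_signal(lo_um: float, hi_um: float, T: float, steps: int = 200) -> float:
    """Planck power integrated over a detector band (midpoint rule)."""
    dl = (hi_um - lo_um) * 1e-6 / steps
    return sum(planck(lo_um * 1e-6 + (i + 0.5) * dl, T) for i in range(steps)) * dl

def ratio_temperature(ratio: float) -> float:
    """Invert the short/long band ratio for T by bisection; the ratio
    grows monotonically with temperature as emission shifts blueward."""
    t_lo, t_hi = 200.0, 1200.0
    for _ in range(60):
        t = 0.5 * (t_lo + t_hi)
        if band_signal(3, 4, t) / band_signal(4, 5, t) < ratio:
            t_lo = t
        else:
            t_hi = t
    return t

T_true = 400.0
r = band_signal(3, 4, T_true) / band_signal(4, 5, T_true)
print(f"recovered temperature: {ratio_temperature(r):.1f} K")
```

Because the same model generates and inverts the ratio, the recovered temperature matches the input; in practice detector responsivity and optics would enter the calibration.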


A Si IC can be approximated as a greybody, when considering its radiated power density, as a function of wavelength and temperature. Thus, thermal radiation from a Si IC is primarily at MIR wavelengths, between 3 µm and 30 µm, with a peak at a wavelength between 6 µm and 10 µm, at typical temperatures in failure analysis, between 200 ºC and 20 ºC respectively. This can be seen from Figure 47 and Figure 48, according to the Planck Radiation Formula and Wien's Displacement Law respectively. Cooled InSb and Mercury Cadmium Telluride (MCT) detectors and PtSi CCD cameras are commonly used in thermal emission microscopes, because of their high detectivity at MIR wavelengths. Most thermal emission microscopes used in failure analysis of Si ICs employ an InSb detector or a PtSi CCD camera, detecting light at free space wavelengths up to 5 µm. Thus, most of the collected signal results from light at free space wavelengths between 4 µm and 5 µm, according to the Planck Radiation Formula. In semiconductor failure analysis, a thermal emission microscope is an invaluable tool for mapping the temperatures in the devices of a Si IC. Current Si IC technology includes many opaque metal layers and structures above semiconductor devices, thereby hindering topside thermal emission microscopy of these buried devices in their final state. Thus, backside thermal emission microscopy through the Si substrate is used almost exclusively in failure analysis. Additionally, most thermal radiation propagates into the Si substrate, because of the Si substrate’s high refractive index at MIR wavelengths. At a free space wavelength (λ0) of 5 µm, the refractive index (n) of Si is 3.43. The Si substrate is both transparent and homogeneous at MIR wavelengths, allowing thermal radiation to propagate through it. However, reflection and refraction of light at the planar surface of

the Si substrate limits the amount of thermal radiation a thermal emission microscope can collect, as shown in Figure 49(a).

Figure 49: Thermal emission from (a) a Si IC and (b) a Si IC with the addition of a NAIL.

Reflection allows less than 3% of unpolarized light, uniformly emitted from a circuit, to escape the sample, according to the Fresnel Formulae. Furthermore, much of the light that escapes the sample is refracted at high angles, which most thermal emission microscopes cannot collect because of their low NA. The absence of light from above the critical angle (θc) ultimately limits the lateral spatial resolution to 2.5 µm and the longitudinal spatial resolution to 34 µm for conventional thermal emission microscopes operating at free space wavelengths up to 5 µm. Typical lateral spatial resolution values are worse than about 3 µm for state-of-the-art commercial systems, and many require elevated ambient temperatures to boost the signal level.[71] However, Si IC technology has reached submicron process size scales, well beyond the spatial resolution capability of conventional thermal emission microscopy. The large refractive index of Si at MIR wavelengths offers the potential for significant improvement in the amount of light collected and the spatial resolution, if this planar barrier can be circumvented.
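The sub-3% escape fraction can be estimated with the Fresnel Formulae: only light inside the critical-angle cone can escape, weighted by the unpolarized transmittance at each internal angle. This rough sketch considers a single pass through one planar surface and ignores multiple internal reflections and absorption:

```python
import math

n_si = 3.43  # refractive index of Si near 5 um, from the text

def fresnel_transmittance(theta_i: float, n1: float = n_si, n2: float = 1.0) -> float:
    """Unpolarized power transmittance from Si into air at internal
    angle of incidence theta_i (radians)."""
    s = n1 * math.sin(theta_i) / n2
    if s >= 1.0:
        return 0.0  # total internal reflection beyond the critical angle
    theta_t = math.asin(s)
    ci, ct = math.cos(theta_i), math.cos(theta_t)
    r_s = ((n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)) ** 2
    r_p = ((n1 * ct - n2 * ci) / (n1 * ct + n2 * ci)) ** 2
    return 1.0 - 0.5 * (r_s + r_p)

# Fraction of light isotropically emitted into the upper hemisphere that
# escapes through the planar Si surface: integrate T(theta)*sin(theta);
# the weighting is normalized since sin(theta) integrates to 1 on [0, pi/2].
steps = 20000
d = (math.pi / 2) / steps
escaped = sum(fresnel_transmittance((i + 0.5) * d) * math.sin((i + 0.5) * d)
              for i in range(steps)) * d
print(f"escaping fraction: {escaped * 100:.1f}%")  # roughly 3%, as in the text
```

The integral comes out at a few percent, consistent with the less-than-3% figure quoted above.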


The addition of a NAIL allows more thermal radiation to escape, as illustrated in Figure 49(b). The NAIL is made of Si, matching the refractive index of the Si substrate, so that light propagates from the Si substrate into the NAIL without refracting at the interface. After propagating through the NAIL, the light then refracts at the spherical surface into air, and is collected by a thermal emission microscope. An aplanatic NAIL theoretically allows up to 63% of unpolarized light, uniformly emitted from a circuit, to reach a thermal emission microscope within an NA of 0.29. With a NAIL, our thermal emission microscope collected 8% of the light emitted from a circuit at best focus at room temperature ambient. This enhancement in the amount of light collected, compared to a theoretical maximum of 3% without a NAIL, greatly improves the thermal resolution of a thermal emission microscope, and reduces the need to elevate ambient temperatures to boost the signal level. The diffraction of light theoretically limits the lateral spatial resolution to 0.73 µm and the longitudinal spatial resolution to 1.5 µm, for thermal emission microscopes with a NAIL, operating at free space wavelengths up to 5 µm. With a NAIL, I experimentally demonstrate a lateral spatial resolution of 1.4 µm and a longitudinal spatial resolution of 7.4 µm, as detailed in section 5.5. Our experimental results represent a significant improvement over state-of-the-art conventional thermal emission microscopy. To the best of my knowledge, our spatial resolution is the best, to date, for subsurface thermal emission microscopy of a Si IC.


5.3  Custom thermal emission microscope

I will now describe a custom thermal emission microscope that I constructed to demonstrate the improvements the NAIL technique yields in subsurface thermal emission microscopy of a Si IC. A photograph and an illustration of the microscope are shown in Figure 50 and Figure 51, respectively. I will now describe the mechanical, electrical, and optical systems of the microscope.

Figure 50: Photograph of the custom thermal emission microscope.

The mechanical system of the microscope is built on an optical table, equipped with pneumatic isolation to minimize the effects of vibration. Two large rods attach to the

optical table and support the cage assembly, the backbone of the microscope. I used Linos, Thorlabs, and many custom-machined parts to make the 30 mm cage assembly, in which the optical components of the microscope are housed. The microscope measures a single point at a time with the InSb detector, so the Si IC and NAIL must be mechanically scanned under the objective lens to build a thermal image. For scanning, I used a Melles Griot piezoelectric scanning stage, with a 200 µm by 200 µm range in the lateral directions (x and y). The piezoelectric drives have strain gauges for closed loop position feedback, so the mechanical resolution of a scan is 5 nm. Three mechanical stages with micrometer drives are used for focusing in the longitudinal direction (z) and for coarse positioning in the lateral directions. The electrical system of the microscope is controlled by a computer, using two LabView programs. Images from the InGaAs camera are displayed on an analog monitor and acquired by the computer, using a PCI digital image acquisition board with the first LabView program. The second LabView program initializes the instruments, controls the scanning stage, and acquires the signals through a PCI GPIB board and a Data Acquisition (DAQ) board. The second LabView program initially scans the Si IC and NAIL under the objective lens, while acquiring the signal from the InSb detector to build the thermal image, and then manages the thermal data for viewing and saving. The Si IC is flip chip bonded to a printed circuit board for connection and mounting to the piezoelectric stage. A signal generator drives the Si IC with a sinusoidal AC voltage, while heterodyne detection is used to optimize the signal to noise ratio of the detected thermal signal. The detected thermal signal is first preamplified and then sent to a lock-in

amplifier, which measures the amplitude of the detected thermal signal at the second harmonic of the drive voltage, with a reference signal from the signal generator.

Figure 51: Illustration of the custom thermal emission microscope.

I will now follow the path of light, as I describe the optical system of the microscope. A quartz halogen lamp and a condenser lens below the Si IC provide illumination at near-infrared (NIR) wavelengths for transmission visual inspection. The NIR light and the

MIR thermal radiation from the Si IC follow the same path through most of the microscope. Light from the Si IC first passes through the Si substrate and aplanatic NAIL and is then refracted into air with an NA of 0.29. The light is then collimated by an objective lens with an NA of 0.25 and a working distance of 25 mm. The objective lens has six elements and is corrected for chromatic aberration for wavelengths from 1.5 µm to 5 µm. The objective lens is infinity corrected and has an effective focal length of 18.5 mm, which yields a beam diameter of 9.25 mm (2f sinθa) in the microscope. The first silver mirror reflects the optical axis of the beam in the microscope from a vertical direction, a requirement of the objective lens and NAIL, to a horizontal direction, a requirement of the InSb detector. Both planar silver mirrors are elliptically shaped, designed to accommodate a beam diameter up to 22.3 mm. These silver mirrors easily accommodate the beam diameter in the microscope of 9.25 mm, without vignetting the inspection image. The mirrors have a surface figure of λ/10 and a reflectance of better than 98% over the range of wavelengths used in the microscope. Originally, the second mirror in the microscope was a cold mirror that reflected the NIR wavelengths to the InGaAs camera and transmitted the MIR wavelengths to the InSb detector. However, I eventually replaced the cold mirror with the second silver mirror, which is removable, to maximize the amount of light reaching the InSb detector and to simplify the alignment. When the second silver mirror is in place, it reflects light to a CaF2 tube lens with a diameter of 25.4 mm and a focal length of 100 mm. The single element, uncoated, CaF2 tube lens offers a transmittance of better than 90% at NIR wavelengths. I selected a distance from the objective lens to the CaF2 tube lens and a diameter for the CaF2 tube

lens that did not vignette the visual inspection image. The CaF2 tube lens forms the visual inspection image on the Sensors Unlimited InGaAs camera. The InGaAs camera detects NIR light at wavelengths between 0.9 µm and 1.7 µm. At a wavelength of 1.7 µm, the diffraction limited spot size at the image plane is 18 µm, so the InGaAs camera undersamples the image with 320 by 240 pixels on a 40 µm pitch. The InGaAs camera is used for visual inspection tasks, including locating, centering, and focusing of the volume to be thermally imaged. For thermal imaging, the second silver mirror is removed and light is instead transmitted to a Si tube lens with a diameter of 25.4 mm and a focal length of 100 mm. The single element Si tube lens is antireflection coated for wavelengths between 2 µm and 5 µm. An InSb detector in the image plane collects light from a single, 50 µm diameter area, so the beam of thermal radiation is not clipped with proper alignment. The cooled InSb detector from Judson Technologies detects MIR light at wavelengths from 3 µm to 5 µm for thermal emission microscopy. If the vacuum in the detector dewar is periodically maintained, the InSb detector remains cooled for more than eight hours when filled with liquid nitrogen. At a wavelength of 5 µm, the diffraction limited spot size at the image plane is 54 µm, so the 50 µm diameter of the InSb detector creates thermal images with a resolution close to the diffraction limit. The responsivity of the InSb detector (R) is 0.764 A/W, and the gain of the preamplifier (Gp) is 10⁸ V/A. The entire cage assembly and sample area are sealed and purged with air from which water and carbon dioxide have been removed. This dry air eliminates noise in the thermal signal from atmospheric absorption at MIR wavelengths.
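The 18 µm and 54 µm spot sizes quoted above are consistent with estimating the image-plane spot as λ/(2·NA_image), where for an infinity-corrected system the image-side NA scales as NA_obj · f_obj / f_tube. This reading of the numbers is my own reconstruction, not a derivation given in the text:

```python
def image_spot_size(wavelength_um: float, na_objective: float = 0.25,
                    f_objective_mm: float = 18.5, f_tube_mm: float = 100.0) -> float:
    """Diffraction-limited spot at the image plane, estimated as
    lambda/(2*NA_image), with NA_image = NA_obj * f_obj / f_tube for an
    infinity-corrected objective and tube lens pair."""
    na_image = na_objective * f_objective_mm / f_tube_mm
    return wavelength_um / (2.0 * na_image)

print(f"spot at 1.7 um: {image_spot_size(1.7):.0f} um")  # ~18 um (InGaAs camera)
print(f"spot at 5.0 um: {image_spot_size(5.0):.0f} um")  # ~54 um (InSb detector)
```

With the objective and tube lens focal lengths from the text, both quoted spot sizes are reproduced.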

I will now describe the procedure for the optical alignment of the microscope, which starts with all of the optical components removed. The first silver mirror is installed and aligned to 45 degrees using a HeNe laser. The objective lens, second silver mirror, Si tube lens, and InSb detector are then installed. A fiber-coupled 1.55 µm laser, the longest-wavelength single-mode source in our lab, is used to align the microscope. The InSb detector can measure light at the wavelength of the laser, even though the detectivity is quite low. The fiber output of the laser is mounted on the scanning stage to control its position during alignment. The beam is coarsely collimated and centered at several points along the microscope, using an infrared detection card, while adjusting the position of the fiber output of the laser. With the second silver mirror removed, once enough light is incident on the InSb detector, the detector signal is used to finely center the beam by adjusting the lateral position (x,y) of the fiber output of the laser. With the second silver mirror in place, the beam is reflected upwards to a shear plate collimation tester monitored by a NIR camera to finely collimate the beam by adjusting the longitudinal position (z) of the fiber output of the laser. With the second silver mirror again removed, the detector signal is used to finely focus the beam on the InSb detector by adjusting the distance from the Si tube lens to the InSb detector. Performing several cycles of these fine alignment procedures ensures proper alignment of the thermal part of the microscope. Finally, I align the visual inspection part of the microscope. A pinhole above a high-temperature blackbody radiation source is brought into focus using the thermal part of the microscope. Then the second silver mirror is inserted and the image on the InGaAs camera is centered and brought into focus by adjusting the angle of the

second silver mirror and the distance between the CaF2 tube lens and the InGaAs camera, respectively. For the thermal part of the microscope, which operates at wavelengths up to 5 µm, diffraction ultimately limits the lateral spatial resolution in air to 10 µm. After alignment, the microscope demonstrates a lateral spatial resolution of 12.9 µm, according to the Houston criterion.

5.4  Thermal test sample

I will now describe a thermal test sample we fabricated to demonstrate the improvements the NAIL technique yields in subsurface thermal emission microscopy. As I will show, Joule heating a metal line in the sample generates a spatial distribution of temperature, narrow enough to demonstrate the significant improvement in spatial resolution a NAIL affords. I will now detail the fabrication of the sample and introduce the dimensions of the structures. Figure 52, while not to scale, illustrates the sample in its final form.

Figure 52: Thermal test sample (not to scale).


We start with a double-side-polished Si wafer with a diameter of 4 inches and a thickness of 1 mm. The wafer is monocrystalline and undoped to minimize plasmon absorption at MIR wavelengths, which would hamper our thermal measurements. The Si IC described in section 4.2 was my original thermal test sample, but the wafers used were highly doped and absorbed all of the thermal radiation from the Joule-heated polycrystalline Si lines. A 19 nm thick layer of SiO2 is then grown on the top surface of the wafer by dry thermal oxidation. The oxide both electrically and thermally isolates the next metal layer from the Si wafer. It takes several steps to fabricate the metal lines and square metal pads on each end of the lines. First, a 100 nm thick layer of passivated Al (1% Si) is deposited on the oxide by DC sputtering, and a layer of photoresist is then spun onto the metal. Next, a direct-write laser lithography system patterns the lines and pads in the photoresist. The lines have a length of 20 µm and vary in width from 0.8 µm to 5 µm. The exact dimensions vary with the different doses of laser power used for lithography. After the photoresist is developed, the metal is etched to form the structures in the passivated Al. With the fabrication of the passivated Al lines and pads complete, we add larger Ti/Au pads using conventional photolithography. The wafer is diced into sets of four lines, which are then flip-chip bonded to a printed circuit board to electrically connect the lines and mount the thermal test sample in an inverted position for thermal emission microscopy. Further information on the fabrication of the thermal test sample is available in Shawn Thorne’s undergraduate thesis.[77] I will now create an electrical, thermal, and thermal radiation model of the thermal test sample. In the electrical model, I will assume the lines are ohmic. Applying a

sinusoidal voltage across a line at a frequency of f generates Joule heating within the line. The total power dissipated within the line as heat (P) has a time-independent term and a second harmonic term, as follows in Equation (83).

P(t) = P0 sin²(2πft) = P0/2 − (P0/2) cos(2π(2f)t)    (83)
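Equation (83) is why the lock-in amplifier is referenced to the second harmonic: the dissipated power, and hence the thermal signal, carries a component of amplitude P0/2 at 2f. A small simulation with illustrative parameters (the frequency, power, and noise level are assumptions of this sketch, not the actual instrument settings):

```python
import math
import random

f, P0 = 1000.0, 2.0          # drive frequency (Hz) and peak power, illustrative
fs, N = 50000.0, 50000       # sample rate and sample count (one second)
random.seed(0)

t = [i / fs for i in range(N)]
# The detected thermal signal tracks the dissipated power P0*sin^2(2*pi*f*t),
# plus additive noise standing in for detector and background noise.
signal = [P0 * math.sin(2 * math.pi * f * ti) ** 2 + random.gauss(0.0, 0.1)
          for ti in t]

# Lock-in demodulation referenced to 2f: only the cos(2*pi*(2f)*t) term of
# Equation (83), with amplitude P0/2, survives the averaging.
x = 2.0 / N * sum(s * math.cos(2 * math.pi * (2 * f) * ti) for s, ti in zip(signal, t))
y = 2.0 / N * sum(s * math.sin(2 * math.pi * (2 * f) * ti) for s, ti in zip(signal, t))
amplitude = math.hypot(x, y)
print(f"recovered 2f amplitude: {amplitude:.3f} (P0/2 = {P0 / 2})")
```

The recovered amplitude converges on P0/2 while the DC term and the noise average away, which is the point of measuring at the second harmonic of the drive voltage.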

In our experiments I obtain the peak power (P0) by multiplying the peak voltage applied across the line and the measured peak current passing through the line. I will now create a thermal model of the thermal test sample, assuming all of the heat generated within the line is transferred to the oxide with a uniform distribution. I use the heat equation with no heat sources, Equation (84), to model the time-dependent spatial distribution of temperature (T) in the oxide and wafer.

∇²T(x, y, z, t) − (1/α) ∂T(x, y, z, t)/∂t = 0    (84)

Table 10: Thermodynamic constants of the materials in the thermal test sample.

Material     | Density ρ (kg/m³) | Specific heat capacity Cp (J/(kg·K)) | Thermal conductivity k (W/(m·K)) | Thermal diffusivity α = k/(ρ Cp) (m²/s)
Al (line)    | 2700              | 904.0                                | 237.0                            | 9.71×10⁻⁵
SiO2 (oxide) | 2210              | 745.1                                | 1.4                              | 8.50×10⁻⁷
Si (wafer)   | 2328              | 713.5                                | 148.0                            | 8.91×10⁻⁵
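The diffusivity column of Table 10 is derived from the other three columns; a quick sketch of the calculation:

```python
# Thermal diffusivity alpha = k / (rho * Cp), from the values in Table 10.
materials = {
    #              rho (kg/m^3)  Cp (J/(kg K))  k (W/(m K))
    "Al (line)":    (2700.0,      904.0,         237.0),
    "SiO2 (oxide)": (2210.0,      745.1,           1.4),
    "Si (wafer)":   (2328.0,      713.5,         148.0),
}
for name, (rho, cp, k) in materials.items():
    alpha = k / (rho * cp)
    print(f"{name}: alpha = {alpha:.2e} m^2/s")
```

The oxide's diffusivity is about two orders of magnitude below that of Al or Si, which is why most of the temperature drop in the simulations occurs across the oxide.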

Table 10 shows the thermodynamic constants of each material in the thermal test sample and the resulting thermal diffusivity (α).[78] I will assume the second-harmonic term of the power varies slowly enough that the time-dependent term in the heat equation is insignificant, yielding Laplace's equation, Equation (85). Once I obtain a temperature solution, I will demonstrate this assumption to be valid.

∇²T(x,y,z,t) = 0  (85)

At large distances from the Al line, the temperature approaches room temperature (Tc), which was 295 K for our experiments. The background temperature is a trivial solution to the heat equation and is separated from the distribution of temperature change resulting from Joule heating in the line. That distribution, in turn, separates into a function of time and a function of the space variables, as in Equation (86).

T(x,y,z,t) = Tc + P(t) F(x,y,z)  (86)

∇²F(x,y,z) = 0  (87)

The spatial distribution of temperature change per unit power (F) in the oxide and wafer is simulated with the HOTPAC software package.[79] The lateral displacement (x) extends from the center of the line along its width, the lateral displacement (y) extends from the center of the line along its length, and the longitudinal displacement (z) extends from the top surface of the oxide into the oxide and wafer. Heat from the line is modeled as a uniform heat source on the top surface of the oxide. I will assume the temperature within the line to be that of the top surface of the oxide below the line, because of the high thermal conductivity and small thickness of the line. The oxide has a different composition near the interface with Si, but this transition region is only 2 nm thick and is neither thermally nor optically significant.[80] Figure 53 shows a simulation of F, as a function of x and z, with y equal to zero, for a 2 µm wide line, which I used for experimental measurements. Most of the temperature change occurs within the oxide, and the remaining temperature change occurs primarily within the first 10 µm of the wafer.

Figure 53: Temperature change per unit power (F) for a 2 µm wide line, as a function of lateral displacement (x) from the center of the line along its width and longitudinal displacement (z) from the top surface of the oxide into the oxide and wafer.

Table 11 demonstrates that the length across the region of temperature change in each material is much smaller than the respective thermal diffusion length at a frequency of 1800 Hz, which I used in our experiments. This validates the earlier assumption that the time dependent term in the heat equation is insignificant.


Table 11: The length across the region of temperature change in each material is much smaller than the respective thermal diffusion length, so the time-dependent term in the heat equation is insignificant.

Material     | Thermal diffusion length at 1800 Hz, μ = √(α/(π·1800)) (nm) | Length across the region of temperature change, l (nm)
Al (line)    | 131000                                                      | 100
SiO2 (oxide) | 12300                                                       | 19
Si (wafer)   | 125500                                                      | 10000
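The diffusion lengths in Table 11 follow from the diffusivities in Table 10 and the 1800 Hz second-harmonic frequency; a sketch of the comparison:

```python
import math

F = 1800.0  # second-harmonic frequency (Hz)

# alpha (m^2/s) from Table 10 and region size l (nm) from Table 11
materials = {
    "Al (line)":    (9.71e-5, 100.0),
    "SiO2 (oxide)": (8.50e-7, 19.0),
    "Si (wafer)":   (8.91e-5, 10000.0),
}
for name, (alpha, l_nm) in materials.items():
    mu_nm = math.sqrt(alpha / (math.pi * F)) * 1e9   # diffusion length in nm
    # The quasi-static (Laplace) approximation needs l << mu.
    print(f"{name}: mu = {mu_nm:.3g} nm, l = {l_nm:g} nm, l/mu = {l_nm / mu_nm:.2g}")
```

Even in the wafer, the worst case, l/μ is below 0.1, which justifies dropping the time-dependent term.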

Figure 54 shows F, as a function of z, with x and y equal to zero at the center of the 2 µm wide line. If the temperature at the surface of the oxide reached the melting point of passivated Al (873 K), the maximum temperature in the wafer would be 616 K.[71] The power levels I use in experimental measurements do not melt the passivated Al, but electromigration failure is a common occurrence.

Figure 54: Temperature change per unit power (F), as a function of longitudinal displacement (z) from the top surface of the oxide into the oxide and wafer, with lateral displacements (x and y) equal to zero at the center of the 2 µm wide line.


Figure 55 shows F, as a function of x, with y and z equal to zero at the top surface of the oxide. The temperature change at the top surface of the oxide plateaus in the region beneath the 2 µm wide line, and drops to the temperature change of the wafer beneath it in the region in contact with air.

Figure 55: Temperature change per unit power (F), as a function of lateral displacement (x) from the center of the 2 µm wide line along its width at the top surface of the oxide.

Figure 56 shows F, as a function of y, with x and z equal to zero at the top surface of the oxide. There is no appreciable change in F, as a function of y near the center of the line, so in this region I will assume F is independent of y.


Figure 56: Temperature change per unit power (F), as a function of lateral displacement (y) from the center of the 2 µm wide line along its length at the top surface of the oxide.

I will now model the thermal radiation resulting from the temperature distribution I calculated above. I consider the materials in the thermal test sample to be greybodies, so according to the Stefan-Boltzmann Law, Equation (81), the emissivity (ε) and temperature of a material determine its radiant exitance (M). According to Kirchhoff's law, the emissivity and absorptivity of a material are equal, so for metals with a thickness much greater than their skin depth, the absorptivity is equal to one minus the reflectivity.

Table 12: Optical constants for the line.

Material  | Extinction coefficient k | Skin depth δ = λ0/(4πk) (nm) | Reflectivity R | Emissivity ε
Al (line) | 48.6                     | 8.19                         | 0.986          | 0.014
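The derived columns of Table 12 can be reproduced from the extinction coefficient and reflectivity; a sketch, taking λ0 = 5 µm as in the text:

```python
import math

lam0_nm = 5000.0   # free-space wavelength (nm)
k_al = 48.6        # extinction coefficient of Al at 5 um (Table 12)
R_al = 0.986       # reflectivity of Al at 5 um (Table 12)

delta_nm = lam0_nm / (4 * math.pi * k_al)   # skin depth
# For an opaque metal (thickness >> skin depth), absorptivity = 1 - R,
# and by Kirchhoff's law the emissivity equals the absorptivity.
eps_al = 1.0 - R_al
print(f"skin depth = {delta_nm:.2f} nm, emissivity = {eps_al:.3f}")
```

The 100 nm line is indeed much thicker than the 8.19 nm skin depth, so the opaque-metal approximation holds.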

Table 13: Optical constants for the oxide and wafer.

Material     | Extinction coefficient k | Length across the temperature change, l (nm) | Emissivity ε = 1 − e^(−4πkl/λ0)
SiO2 (oxide) | 3.98×10⁻³                | 19                                           | 1.90×10⁻⁴
Si (wafer)   | 1.99×10⁻⁷                | 10000                                        | 5.00×10⁻⁶
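Likewise, the emissivities in Table 13 follow from Beer-Lambert absorption over the length l of the heated region; a sketch:

```python
import math

lam0_nm = 5000.0  # free-space wavelength (nm)

# (extinction coefficient k, layer thickness l in nm) from Table 13
layers = {"SiO2 (oxide)": (3.98e-3, 19.0), "Si (wafer)": (1.99e-7, 10000.0)}
for name, (k, l_nm) in layers.items():
    # Weakly absorbing layer: emissivity = absorptivity = 1 - exp(-4 pi k l / lam0)
    eps = 1.0 - math.exp(-4 * math.pi * k * l_nm / lam0_nm)
    print(f"{name}: emissivity = {eps:.2e}")
```

Both values are three to four orders of magnitude below the 0.014 emissivity of the Al line, which is why the oxide and wafer contributions can be neglected.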

Table 12 and Table 13 show the optical constants of each material, for light with a wavelength of 5 µm in free space.[81] The thermal radiation from the oxide and wafer is insignificant, because their emissivities and temperatures are so much lower than those of the line. It is important to note that the emissivity of the wafer remains very low at temperatures below the maximum of 616 K in our sample.[82,83,84] In this sample, the dominant radiant exitance (M) into the wafer comes from the bottom surface of the line. Continuity requires the temperature of the bottom surface of the line to be that of the top surface of the oxide, where z is equal to zero. The radiant exitance from the bottom surface of the line is given in Equation (88) and is derived from the Stefan-Boltzmann Law, Equation (81), and the temperature, Equations (83) and (86).

M(t,x,y) = εAl σ T⁴(t,x,y,z=0) = εAl σ (Tc + P0 F(x,y,z=0) sin²(2πft))⁴  (88)

Expanding the radiant exitance yields a time-independent term and several harmonic terms. In experiments, I measure the amplitude of the second harmonic term of the radiant exitance (M2f), given in Equation (89).

M2f(x,y) = (εAl σ / 16) [32 P0 F(x,y,z=0) Tc³ + 48 P0² F²(x,y,z=0) Tc² + 30 P0³ F³(x,y,z=0) Tc + 7 P0⁴ F⁴(x,y,z=0)]  (89)
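The coefficients in Equation (89) can be verified by numerically projecting the expanded fourth power onto cos(2π(2f)t); a sketch, with an arbitrary illustrative value for the product P0 F:

```python
import math

def m2f_closed_form(eps, sigma, p0f, tc):
    """Amplitude of the second-harmonic term of M, Equation (89)."""
    return (eps * sigma / 16) * (32 * p0f * tc**3 + 48 * p0f**2 * tc**2
                                 + 30 * p0f**3 * tc + 7 * p0f**4)

def m2f_numerical(eps, sigma, p0f, tc, f=1.0, n=4096):
    """Fourier cosine coefficient of M(t) at frequency 2f over one drive period."""
    coef = 0.0
    for i in range(n):
        t = i / (n * f)
        m = eps * sigma * (tc + p0f * math.sin(2 * math.pi * f * t) ** 2) ** 4
        coef += m * math.cos(2 * math.pi * (2 * f) * t)
    return abs(2 * coef / n)

eps, sigma = 0.014, 5.670e-8   # Al emissivity and Stefan-Boltzmann constant (W/m^2/K^4)
p0f, tc = 150.0, 295.0         # illustrative P0*F(x,y,z=0) in K, and room temperature in K
a = m2f_closed_form(eps, sigma, p0f, tc)
b = m2f_numerical(eps, sigma, p0f, tc)
print(abs(a - b) / a < 1e-6)   # True
```

Since M(t) contains harmonics only up to 8f, the discrete projection over one period is exact up to floating-point rounding.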

First, I will model the temperature and radiant exitance from the 2 µm wide line. The peak power (P0) applied to the 2 µm wide line for most of our experiments was 0.36 W. The resulting maximum temperature in the line and oxide is 566 K, and the maximum temperature in the Si is 446 K. Figure 57 shows the amplitude of the second harmonic term of the radiant exitance (M2f), as a function of x, with y equal to zero.

Figure 57: The amplitude of the second harmonic term of the radiant exitance (M2f) from the bottom surface of a 2 µm wide line with a peak power of 0.36 W, as a function of lateral displacement (x) from the center of the line along its width.

Now, I present a model for the temperature and radiant exitance from the 0.8 µm wide line, which I also used for experimental measurements. The peak power (P0) applied to the 0.8 µm wide line for most of our experiments was 0.058 W. The resulting maximum temperature in the line and oxide is 374 K. Figure 58 shows the amplitude of the second harmonic term of the radiant exitance (M2f), as a function of x, with y equal to zero.


Figure 58: The amplitude of the second harmonic term of the radiant exitance (M2f) from the bottom surface of a 0.8 µm wide line with a peak power of 0.058 W, as a function of lateral displacement (x) from the center of the line along its width.

5.5

Experimental results using a NAIL

I experimentally demonstrated the improvements the NAIL technique yields in subsurface thermal emission microscopy, using the custom thermal emission microscope described in section 5.3 to image the thermal test sample described in section 5.4.[85,86,87] All of the measurements that I present were taken using a NAIL. I had an aplanatic Si NAIL manufactured with an R of 1.61 mm and a D of 1.07 mm, to match the 1 mm wafer thickness of the thermal test sample, in accordance with Equation (72). The combined geometry of the NAIL and thermal test sample limits the NA to 2.9, according to Equation (67). The objective lens has an NA of 0.25, and light from the NAIL has an NA of 0.24 in air, so the objective lens accepts virtually all of the light from the NAIL. The voltage applied to the lines in the thermal test sample had a frequency of 900 Hz, and I measured the signal from the detector at the second harmonic of this frequency for all of these measurements. I used a 2 µm wide line for most measurements, because it offers more thermal radiation and a better signal to noise ratio than the 0.8 µm wide Al line that I used to test the lateral spatial resolution. The signal to noise ratio for these two lines varied from about 10 to 100, depending on the drive power used. I achieved improvements in the amount of light collected and the spatial resolution, well beyond the limits of conventional subsurface thermal emission microscopy.

I verified that the signal from the 2 µm wide line, measured at the InSb detector, was indeed thermal radiation. I brought the line into focus and centered it coarsely using the InGaAs camera, and then I brought the center of the line into focus by finely adjusting the position to maximize the signal at the InSb detector. I took successive thermal images while varying the peak power applied to the line. The signal, as a function of the peak power applied to the line, obeyed the Stefan-Boltzmann Law, as shown in Figure 59.
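Taking Equation (72) to be the standard aplanatic solid-immersion condition D = R(1 + 1/n) − X, with X the substrate thickness and n ≈ 3.5 for Si in the MIR (an assumption here, since the equation itself appears earlier in the thesis), the chosen R reproduces the manufactured D:

```python
# Aplanatic NAIL geometry check, assuming Equation (72) is D = R*(1 + 1/n) - X:
# the object plane of an aplanatic lens sits a distance R*(1 + 1/n) below the
# spherical apex, so lens thickness D plus substrate thickness X must equal it.
n_si = 3.5          # approximate refractive index of Si in the MIR
R = 1.61            # NAIL radius of curvature (mm)
X = 1.00            # wafer thickness (mm)
D = R * (1 + 1 / n_si) - X
print(f"D = {D:.2f} mm")   # matches the manufactured 1.07 mm
```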

Figure 59: The signal, as a function of the peak power applied to a 2 µm wide Al line, obeys the Stefan-Boltzmann Law.

To evaluate the lateral spatial resolution, I took thermal images, at best focus, of two Al lines. An inspection image of the 0.8 µm wide Al line, taken by the InGaAs camera, is shown in Figure 60(a). A thermal emission image of the Joule heating in the Al line from


a peak power of 0.058 W, taken with the InSb detector, at best focus, is shown in Figure 60(b).

Figure 60: (a) Inspection image of a 0.8 µm wide Al line, taken by an InGaAs camera and (b) thermal emission image of the Joule heating in the Al line, taken with an InSb detector at best focus.

The signal, as a function of lateral distance, shown in Figure 61(a), is a linecut from the thermal emission image in Figure 60(b), and has a full-width-at-half-maximum (FWHM) of 1.6 µm. The simulated radiant exitance from the bottom surface of the Al line is also shown in Figure 61(a). The Line Spread Function (LSF) shown in Figure 61(b) is deconvolved from the signal and the simulated radiant exitance from the Al line. According to the Houston criterion, this data demonstrates a lateral spatial resolution of 1.4 µm.


Figure 61: (a) Signal and Radiant Exitance from a 0.8 µm wide Al line, as a function of Lateral Distance, and (b) resulting Line Spread Function of the NAIL microscope with a FWHM of 1.4 µm.

The signal, as a function of lateral distance, shown in Figure 62(a), is a linecut from a thermal emission image of the Joule heating in the 2 µm wide line, from a peak power (P0) of 0.36 W, taken with the InSb detector at best focus. The signal has a full-width-at-half-maximum (FWHM) of 2.4 µm. The simulated radiant exitance from the bottom surface of the Al line is also shown in Figure 62(a). The LSF shown in Figure 62(b) is deconvolved from the signal and the simulated radiant exitance from the Al line. According to the Houston criterion, this data also demonstrates a lateral spatial resolution of 1.4 µm.
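If the linecut and the LSF are both approximated as Gaussians, the deconvolution reduces to subtracting FWHMs in quadrature; a rough sketch under that assumption (the full deconvolution in the text gives 1.4 µm in both cases):

```python
import math

def lsf_fwhm(signal_fwhm_um, object_fwhm_um):
    """Gaussian approximation: FWHMs add in quadrature under convolution."""
    return math.sqrt(signal_fwhm_um**2 - object_fwhm_um**2)

# 0.8 um line with a 1.6 um FWHM signal; 2 um line with a 2.4 um FWHM signal
print(f"{lsf_fwhm(1.6, 0.8):.1f} um")
print(f"{lsf_fwhm(2.4, 2.0):.1f} um")
```

Both estimates land close to the 1.4 µm obtained by the full deconvolution, even though the line's exitance profile is closer to a top-hat than a Gaussian.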


Figure 62: (a) Signal and Radiant Exitance from a 2 µm wide Al line, as a function of Lateral Distance, and (b) resulting Line Spread Function of the NAIL microscope with a FWHM of 1.4 µm.

The diffraction of light theoretically limits the lateral spatial resolution to 0.73 µm for thermal emission microscopes with a NAIL, operating at free space wavelengths up to 5 µm. The combined geometry of the NAIL and thermal test sample further limits the theoretical lateral spatial resolution. Using the theoretical optical model developed in Chapter 2, I calculate a lateral spatial resolution limit in the absence of aberration of 0.90 µm. I experimentally demonstrate a lateral spatial resolution of 1.4 µm. Although this falls short of the theoretical limit for NAIL microscopy, this experimental result still represents a significant improvement over the 2.5 µm theoretical limit of lateral spatial resolution for conventional thermal emission microscopes, operating at free space wavelengths up to 5 µm. To the best of my knowledge, this experimental lateral spatial resolution is the best, to date, for subsurface thermal emission microscopy of a Si IC.

To evaluate the longitudinal spatial resolution, I next took successive images of the 2 µm wide Al line at different defocus distances in the longitudinal direction (z). In Figure 63, the signal from the center of the line, from a peak power of 0.36 W, as a function of longitudinal distance from best focus, had a FWHM of 11.6 µm. The collection focus is in the Si substrate for positive defocus values, and is below the Si/Al/air interface for negative defocus values, where the signal is the result of radiation from the sides of the Al line and reflection from the interface.[88,89,90,91] The FWHM of the signal, from the positive defocus values, indicated a longitudinal spatial resolution of 7.4 µm. The longitudinal spatial resolution is obtained directly from the signal, because the radiant exitance from the bottom of the line is a delta function in the longitudinal direction.

Figure 63: Signal from the center of a 2 µm wide Al line, as a function of Longitudinal Distance from best focus.


The diffraction of light theoretically limits the longitudinal spatial resolution to 1.5 µm, for thermal emission microscopes with a NAIL, operating at free space wavelengths up to 5 µm. The combined geometry of the NAIL and thermal test sample further limits the longitudinal spatial resolution. Using the theoretical optical model developed in Chapter 2, I calculate a longitudinal spatial resolution limit of 2.92 µm, in the absence of aberration. I experimentally demonstrate a longitudinal spatial resolution of 7.4 µm. This experimental result represents a significant improvement over the 34 µm theoretical limit of longitudinal spatial resolution for conventional thermal emission microscopes, operating at free space wavelengths up to 5 µm. To the best of my knowledge, this experimental longitudinal spatial resolution is the best, to date, for subsurface thermal emission microscopy of a Si IC.

To evaluate the amount of light collected by the custom thermal emission microscope, I will find the peak value of the Point Spread Function (PSF). I know the amplitude of the LSF from the responsivity of the InSb detector and the gain in the custom thermal emission microscope, given in section 5.3. I fit the LSF, measured from the 2 µm wide Al line in Figure 62(b), with a Gaussian to determine the PSF. The peak value of the PSF is 8%, meaning that 8% of the radiant exitance from the bottom surface of the line reaches the InSb detector. The remaining 92% of the thermal radiation is lost in the thermal test sample, NAIL, and microscope.

A NAIL theoretically allows a collection efficiency of 68%, for unpolarized light uniformly emitted from a Si IC. The combined geometry of the NAIL and thermal test sample further limits the collection efficiency. Using the theoretical optical model developed in Chapter 2, I calculate a collection efficiency limit of 31%. With a NAIL, our thermal emission microscope collected 8% of the light emitted from the 2 µm wide Al line, at best focus. This experimental result represents a significant improvement over the 3% theoretical limit of collection efficiency for conventional thermal emission microscopes. To the best of my knowledge, this experimental light collection ratio is the best, to date, for subsurface thermal emission microscopy of a Si IC.

Several issues caused our experimental results to fall short of the various theoretical limits of NAIL microscopy. Conventional thermal emission microscopes are designed to minimize the effects of chromatic aberration over the range of MIR wavelengths they collect. However, refraction at the spherical surface of a NAIL imparts a chromatic aberration proportional to R, which deteriorates the spatial resolution. Although the spacing between the substrate and the NAIL can be minimized by polishing both surfaces, a finite gap remains, since in practice the two surfaces are not perfectly planar. As the size of the gap increases monotonically with R, less light evanescently couples from the Si substrate across the gap into the NAIL. These two issues lead us to choose a smaller value of R; however, doing so reduces the NA of the NAIL. In our experiments, I attempted an optimization by choosing a value of R large enough to allow an NA of 2.7, but small enough to reduce the longitudinal chromatic aberration term to 0.6 µm. Also note that the gap between the Si substrate and NAIL has a smaller effect at MIR wavelengths than in the previous experiments at NIR wavelengths, described in Chapter 4, because the evanescent decay length of light is proportional to the wavelength. A final issue, common to all diffraction limited microscopes, is the tradeoff between the amount of light collected and the spatial resolution, resulting from the size of the aperture or detector in the image space. I chose a detector size in our thermal emission microscope that demonstrates significant improvements in both quantities.

In summary, I have demonstrated the first application of the NAIL technique to thermal emission microscopy, resulting in significant improvements. In subsurface thermal emission microscopy of Si integrated circuits, I demonstrated improvements in both the lateral and longitudinal spatial resolutions, well beyond the limits of conventional thermal emission microscopy. Since the amount of light collected is not adversely affected by NAIL microscopy, while the resolution is improved, I believe that this technique will be widely used in thermal emission microscopy for semiconductor failure analysis.


Chapter 6 CONCLUSIONS

In conclusion, I have presented the NAIL subsurface microscopy technique, which significantly improves the light-gathering, resolving, and magnifying power of a conventional optical microscope. I have created a theoretical optical model for imaging subsurface structures in an object with a planar surface, both without and with a NAIL, and compared the results. I have described experimental procedures for imaging structures below the planar surface of an object with a NAIL.

Failure analysis microscopy must be conducted through the Si substrate at free space wavelengths longer than 1 µm. For conventional failure analysis microscopes operating at free space wavelengths longer than 1 µm, the ultimate limit on lateral spatial resolution is 0.5 µm, and typical lateral spatial resolution values are about 1 µm for state-of-the-art commercial microscopes.[53] Meanwhile, Si IC technology has reached process size scales of 0.09 µm, and is expected to reach process size scales of 0.065 µm by 2005 in accordance with Moore's Law.[92] The Si IC feature sizes are clearly beyond the imaging capability of conventional failure analysis microscopy. However, the large refractive index of Si at infrared wavelengths offers the potential for significant improvement in spatial resolution and amount of light collected from the buried devices in a Si IC, using the NAIL technique. The ultimate limit on lateral spatial resolution is 0.14 µm for NAIL failure analysis microscopy operating at free space wavelengths longer than 1 µm, allowing nearly full resolution failure analysis microscopy of current and coming generations of Si IC technology.

I have experimentally demonstrated the NAIL technique in two Si IC failure analysis applications. The NAIL technique was first applied to NIR inspection microscopy. By realizing a numerical aperture of 3.4, a lateral spatial resolution of better than 0.23 µm and a longitudinal spatial resolution of better than 1.7 µm were experimentally demonstrated at a free space wavelength of 1.05 µm. The NAIL technique was also applied to thermal emission microscopy. By realizing a numerical aperture of 2.9, a lateral spatial resolution of 1.4 µm and a longitudinal spatial resolution of 7.4 µm were experimentally demonstrated at free space wavelengths up to 5 µm. To the best of my knowledge, these results are spatial resolution records for their respective applications. Many other semiconductor failure analysis techniques employ a NIR inspection microscope, so my demonstration of the improvements in NIR inspection microscopy may be extended to these other applications.

I have enabled the technology transfer of the NAIL technique to Hamamatsu Photonics, who have commercialized the NAIL technique and will incorporate NAIL optics into all of their new semiconductor failure analysis microscopes beginning this fall (see Appendix A).[53] I have been involved in other applications of the NAIL technique, including quantum dot microscopy.[93] I have also received a patent on the NAIL technique, which is included in Appendix B.


Appendix A HAMAMATSU PHEMOS 2000 DATASHEET

The Phemos 2000 IR confocal emission microscope incorporates the NAIL technique and will ship this fall from Hamamatsu Photonics. The datasheet that follows is courtesy of Hamamatsu Corporation. Unauthorized use not permitted.


Appendix B NAIL PATENT

United States Patent number 6,687,058, titled "Numerical aperture increasing lens (NAIL) techniques for high-resolution sub-surface imaging," by Stephen B. Ippolito, M. Selim Unlu, and Bennett B. Goldberg, was issued on February 3, 2004. United States Provisional Patent Application number 60/140,138 was filed on June 21, 1999. International Patent Application number PCT/US00/40253 was filed on June 20, 2000. United States Utility Patent Application number 10/019,133 was filed on December 20, 2001. European Patent Application number 00957981.4, Japanese Patent Application number 2001-505221, and Canadian Patent Application number 2,375,563 were also filed. The images that follow are available at patft.uspto.gov.


BIBLIOGRAPHY

[1]

Molecular Expressions, 6 Aug 2001, Florida State U, 20 Aug. 2001, .

[2]

“microscope,” Britannica.com, vers. 2001, 1999-2001, Encyclopædia Britannica, 10 June 2001, .

[3]

G. Binning, C. F. Quate, Ch. Gerber, “Atomic Force Microscope,” Physical Review Letters 56 (1986): 930–933.

[4]

V. Jipson, C. F. Quate, “Acoustic microscopy at optical wavelengths,” Applied Physics Letters 32 (1978): 789-791.

[5]

Ernst Ruska, The Early Development of Electron Lenses and Electron Microscopy (Stuttgart [Germany]: Hirzel, 1980).

[6]

G. Binnig, H. Rohrer, Ch. Gerber, E. Weibel, “Tunneling through a controllable vacuum gap,” Applied Physics Letters 40 (1982): 178-180.

[7]

A. J. den Dekker, A. van den Bos, “Resolution: a survey,” Journal of the Optical Society of America A 14 (1997): 547-557.

[8]

H. Volkmann, “Ernst Abbe and His Work,” Applied Optics 5 (1966): 1720-1731.

[9]

C. J. R. Sheppard, “Depth of field in optical microscopy,” Journal of Microscopy 149 (1988): 73.

[10]

Masud Mansuripur, Lifeng Li, Wei-Hung Yeh, “Scanning Optical Microscopy, Part 1,” Optics & Photonics News May 1998: 56-59.

[11]

Masud Mansuripur, Lifeng Li, Wei-Hung Yeh, “Scanning Optical Microscopy: Part 2,” Optics & Photonics News June 1998: 42-45.

[12]

S. M. Mansfield, G. S. Kino, “Solid immersion microscope,” Applied Physics Letters 57 (1990): 2615. 145

[13]

B. D. Terris, H. J. Mamin, D. Rugar, W. R. Studenmund, G. S. Kino, “Near-field optical data storage using a solid immersion lens,” Applied Physics Letters 65 (1994): 388.

[14]

B. D. Terris, H. J. Mamin, D. Rugar, “Near-field optical data storage,” Applied Physics Letters 68 (1996): 141-143.

[15]

Isao Ichimura, Shinichi Hayashi, G. S. Kino, “High-density optical recording using a solid immersion lens,” Applied Optics 36 (1997): 4339.

[16]

J. A. H. Stotz, M. R. Freeman, “A stroboscopic scanning solid immersion lens microscope,” Review of Scientific Instruments 68 (1997): 4468-4477.

[17]

Qiang Wu, Robert D. Grober, D. Gammon, D. S. Katzer, “Imaging Spectroscopy of Two-Dimensional Excitons in a Narrow GaAs/AlGaAs Quantum Well,” Physical Review Letters 83 (1999): 2652-2655.

[18]

Qiang Wu, Robert D. Grober, D. Gammon, D. S. Katzer, “Excitons, biexcitons, and electron-hole plasma in a narrow 2.8-nm GaAs/AlxGa1-xAs quantum well,” Physical Review B 62 (2000): 13022.

[19]

Masahiro Yoshita, Takeaki Sasaki, Motoyoshi Baba, Hidefumi Akiyama, “Application of solid immersion lens to high-spatial resolution photoluminescence imaging of GaAs quantum wells at low temperatures,” Applied Physics Letters 73 (1998): 635.

[20]

Masahiro Yoshita, Motoyoshi Baba, Shyun Koshiba, Hiroyuki Sakaki, Hidefumi Akiyama, “Solid-immersion photoluminescence microscopy of carrier diffusion and drift in facet-growth GaAs quantum wells,” Applied Physics Letters 73 (1998): 2965.

[21]

Motoyoshi Baba, Takeaki Sasaki, Masahiro Yoshita, Hidefumi Akiyama, “Aberrations and allowances for errors in a hemisphere solid immersion lens for submicron-resolution photoluminescence microscopy,” Journal of Applied Physics 85 (1999): 6923.

[22]

Masahiro Yoshita, Naoki Kondo, Hiroyuki Sakaki, Motoyoshi Baba, Hidefumi Akiyama, “Large terrace formation and modulated electronic states in (110) GaAs quantum wells,” Physical Review B 63 (2001): 075305.

[23]

Hui Zhao, Sebastian Moehl, Sven Wachter, Heinz Kalt, “Hot exciton transport in ZnSe quantum wells,” Applied Physics Letters 80 (2002): 1391. 146

[24]

Martin Vollmer, Harald Giessen, Wolfgang Stolz, Wolfgang W. Rühle, Luke Ghislain, Virgil Elings, “Ultrafast nonlinear subwavelength solid immersion spectroscopy at T = 8 K,” Applied Physics Letters 74 (1999): 1791.

[25]

Qiang Wu, G. D. Feke, Robert D. Grober, L. P. Ghislain, “Realization of numerical aperture 2.0 using a gallium phosphide solid immersion lens,” Applied Physics Letters 75 (1999): 4064-4066.

[26]

Kazuko Koyama, Masahiro Yoshita, Motoyoshi Baba, Tohru Suemoto, Hidefumi Akiyama, “High collection efficiency in fluorescence microscopy with a solid immersion lens,” Applied Physics Letters 75 (1999): 1667.

[27]

Masahiro Yoshita, Kazuko Koyama, Motoyoshi Baba, Hidefumi Akiyama, “Fourier imaging study of efficient near-field optical coupling in solid immersion fluorescence microscopy,” Journal of Applied Physics 92 (2002): 862.

[28]

C. D. Poweleit, A. Gunther, S. Goodnick, José Menéndez, “Raman imaging of patterned silicon using a solid immersion lens,” Applied Physics Letters 73 (1998): 2275.

[29]

L. P. Ghislain, V. B. Elings, K. B. Crozier, S. R. Manalis, S. C. Minne, K. Wilder, G. S. Kino, C. F. Quate, “Near-field photolithography with a solid immersion lens,” Applied Physics Letters 74 (1999): 501.

[30]

Shigeki Matsuo, Hiroaki Misawa, “Direct measurement of laser power through a high numerical aperture oil immersion objective lens using a solid immersion lens,” Review of Scientific Instruments 73 (2002): 2011.

[31]

D. A. Fletcher, K. B. Crozier, C. F. Quate, G. S. Kino, K. E. Goodson, D. Simanovskii, D. V. Palanker, “Near-field infrared imaging with a microfabricated solid immersion lens,” Applied Physics Letters 77 (2000): 2109.

[32]

D. A. Fletcher, K. E. Goodson, G. S. Kino, “Focusing in microlenses close to a wavelength in diameter,” Optics Letters 26 (2001): 399.

[33]

D. A. Fletcher, K. B. Crozier, C. F. Quate, G. S. Kino, K. E. Goodson, D. Simanovskii, D. V. Palanker, “Refraction contrast imaging with a scanning microlens,” Applied Physics Letters 78 (2001): 3589.

[34]

Scott Marshall Mansfield, “Solid Immersion Microscopy,” diss., Stanford U, 1992.

147

[35]

E. Betzig, J. K. Trautman, T. D. Harris, J. S. Weiner, and R. L. Kostelak, “Breaking the Diffraction Barrier. Optical Microscopy on a Nanometric Scale,” Science 251 (1991): 1468.

[36]

L. P. Ghislain, V. B. Elings, “Near-field scanning solid immersion microscope,” Applied Physics Letters 72 (1998): 2779-2781.

[37]

S. B. Ippolito, B. B. Goldberg, M. S. Ünlü, “High spatial resolution subsurface microscopy,” Applied Physics Letters 78 (2001): 4071.

[38]

Max Born, Emil Wolf, Principles of optics, 7th expanded ed. (Cambridge [England]: Cambridge UP, 1999).

[39]

S. M. Mansfield, W. R. Studenmund, G. S. Kino, K. Osato, “High-numericalaperture lens system for optical storage,” Optics Letters 18 (1993): 305.

[40]

Guillaume Boisset, Luxpop, 1998, 2 May 2002 .

[41]

New Semiconductor Materials. Characteristics and Properties, 1998-2003, Ioffe Institute, 8 Jan. 2004 .

[42]

Tom L. Williams, The Optical Transfer Function of Imaging Systems, Optics and Optoelectronics Ser. (Bristol [England]: Institute of Physics Pub., 1999).

[43]

Matthew R. Arnison, Colin J.R. Sheppard, “A 3D vectorial optical transfer function suitable for arbitrary pupil functions,” Optics Communications 211 (2002): 53-63.

[44]

Bahaa E.A. Saleh, Malvin Carl Teich, Fundamentals of Photonics, Wiley Ser. in Pure and Applied Optics (New York: Wiley, 1991).

[45]

Colin J. R. Sheppard, “High-aperture beams,” Journal of the Optical Society of America A 18 (2001): 1579-1587.

[46]

E. Wolf, “Electromagnetic diffraction in optical systems I. An integral representation of the image field,” Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences 253 (1959): 349–357.

[47]

B. Richards, E. Wolf, “Electromagnetic diffraction in optical systems II. Structure of the image field in an aplanatic system,” Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences 253 (1959): 358–379.

148

[48]

Tom D. Milster, Joshua S. Jo, Kusato Hirota, “Roles of propagating and evanescent waves in solid immersion lens systems,” Applied Optics 38 (1999): 5046-5057.

[49]

Joshua S. Jo, Tom D. Milster, J. Kevin Erwin, “Phase and amplitude apodization induced by focusing through an evanescent gap in a solid immersion lens microscope,” Optical Engineering 41 (2002): 1866-1875.

[50]

Tasso R. M. Sales, “Smallest Focal Spot,” Physical Review Letters 81 (1998): 3844-3847.

[51]

D. W. Pohl, “Dynamic piezoelectric translation devices,” Review of Scientific Instruments 58 (1987): 54-57.

[52]

Bennett B. Goldberg, S. B. Ippolito, Lukas Novotny, Zhiheng Liu, M. Selim Ünlü, “Immersion Lens Microscopy of Photonic Nanostructures and Quantum Dots,” IEEE Journal on Selected Topics in Quantum Electronics 8 (2002): 1051.

[53]

Semiconductor failure analysis tools from Hamamatsu Photonics.

[54]

Stephen Bradley Ippolito, Anna K. Swan, Bennett B. Goldberg, M. Selim Ünlü, “High Resolution Subsurface Microscopy Technique,” Lasers and Electro-Optics Society Annual Meeting 2 (Piscataway: IEEE, 2000) 430.

[55]

MOSIS Integrated Circuit Fabrication Service, 2003, MOSIS, 1 Jan. 2004.

[56]

Stephen Bradley Ippolito, Anna K. Swan, Bennett B. Goldberg, M. Selim Ünlü, “High-resolution IC inspection technique,” Photonics West: Metrology-based Control for Micro-Manufacturing 4275 (Bellingham: SPIE, 2001) 126.

[57]

Stephen Bradley Ippolito, Bennett B. Goldberg, M. Selim Ünlü, “Comparison of Numerical Aperture Increasing Lens and standard subsurface microscopy,” Lasers and Electro-Optics Society Annual Meeting 2 (Piscataway: IEEE, 2001) 774.

[58]

M. Selim Ünlü, Bennett B. Goldberg, Stephen B. Ippolito, “Optical Microscopy Beyond the Diffraction Limit: Imaging Guided and Propagating Fields,” International Symposium on Advanced Physical Fields: Fabrication and Characterization of Nano-structured Materials (Tsukuba [Japan]: NIMS, 2001).

[59]

Introduction to the Principles of Heat Transfer, 2003, eFunda, 14 Oct. 2003.

[60]

David G. Cahill, Wayne K. Ford, Kenneth E. Goodson, Gerald D. Mahan, Arun Majumdar, Humphrey J. Maris, Roberto Merlin, Simon R. Phillpot, “Nanoscale thermal transport,” Journal of Applied Physics 93 (2003): 793.

[61]

P. G. Sverdrup, S. Sinha, M. Asheghi, S. Uma, K. E. Goodson, “Measurement of ballistic phonon conduction near hotspots in silicon,” Applied Physics Letters 78 (2001): 3331.

[62]

K. Schwab, E. A. Henriksen, J. M. Worlock, M. L. Roukes, “Measurement of the quantum of thermal conductance,” Nature 404 (2000): 974.

[63]

A. Majumdar, “Scanning Thermal Microscopy,” Annual Review of Materials Science 29 (1999): 505.

[64]

J. Kölzer, E. Oesterschulze, G. Deboy, “Thermal imaging and measurement techniques for electronic materials and devices,” Microelectronic Engineering 31 (1996): 251.

[65]

S. Kouteva-Arguirova, Tz. Arguirov, D. Wolfframm, J. Reif, “Influence of local heating on micro-Raman spectroscopy of silicon,” Journal of Applied Physics 94 (2003): 4946.

[66]

Satoru Todoroki, Masaaki Sawai, Kunio Aiki, “Temperature distribution along the striped active region in high-power GaAlAs visible lasers,” Journal of Applied Physics 58 (1985): 1124.

[67]

Gilles Tessier, Stéphane Holé, Danièle Fournier, “Ultraviolet illumination thermoreflectance for the temperature mapping of integrated circuits,” Optics Letters 28 (2003): 875.

[68]

S. A. Thorne, S. B. Ippolito, M. S. Ünlü, B. B. Goldberg, “High-resolution thermoreflectance microscopy,” MRS Fall Meeting: Spatially Resolved Characterization of Local Phenomena in Materials and Nanostructures 738 (Warrendale: MRS, 2002) G12.9.1.

[69]

D. Pogany, S. Bychikhin, M. Litzenberger, E. Gornik, G. Groos, M. Stecher, “Extraction of spatio-temporal distribution of power dissipation in semiconductor devices using nanosecond interferometric mapping technique,” Applied Physics Letters 81 (2002): 2881.

[70]

C. Pflügl, M. Litzenberger, W. Schrenk, D. Pogany, E. Gornik, G. Strasser, “Interferometric study of thermal dynamics in GaAs-based quantum-cascade lasers,” Applied Physics Letters 82 (2003): 1664.

[71]

Seiichi Kondo, Kenji Hinode, “High-resolution temperature measurement of void dynamics induced by electromigration in aluminum metallization,” Applied Physics Letters 67 (1995): 1606.

[72]

C. Feng, M. S. Ünlü, B. B. Goldberg, W. D. Herzog, “Thermal Imaging by Infrared Near-field Microscopy,” Lasers and Electro-Optics Society Annual Meeting (Piscataway: IEEE, 1996) 249.

[73]

Raymond A. Serway, Clement J. Moses, Curt A. Moyer, Modern Physics (Philadelphia: Saunders College Pub., 1989).

[74]

C. R. Nave, HyperPhysics, 2000, Georgia State U, 14 Oct. 2003.

[75]

Michael Bass, ed., Handbook of Optics, 2nd ed., 4 vols. (New York: McGraw-Hill, 1995).

[76]

Krzysztof Chrzanowski, Zbigniew Bielecki, Marek Szulim, “Comparison of temperature resolution of single-band, dual-band, and multiband infrared systems,” Applied Optics 38 (1999): 2820-2823.

[77]

Shawn Thorne, “High Resolution Thermography,” Senior honors thesis, College of Arts and Sciences, Boston U, 2003, 26.

[78]

David R. Lide, ed., CRC Handbook of Chemistry and Physics, 84th ed. (Boca Raton: CRC P, 2003).

[79]

John Albers, HOTPAC, Semiconductor Electronics Division, Electronics and Electrical Engineering Laboratory, National Institute of Standards and Technology, NIST Special Publication 400-96 (Aug. 1995).

[80]

N. Gonon, A. Gagnaire, D. Barbier, A. Glachant, “Growth and structure of rapid thermal silicon oxides and nitroxides studied by spectroellipsometry and Auger electron spectroscopy,” Journal of Applied Physics 76 (1994): 5242.

[81]

Edward D. Palik, ed., Handbook of Optical Constants of Solids, Academic P Handbook Ser. (Orlando: Academic P, 1985).

[82]

J. Nulman, S. Antonio, W. Blonigan, “Observation of silicon wafer emissivity in rapid thermal processing chambers for pyrometric temperature monitoring,” Applied Physics Letters 56 (1990): 2513.

[83]

Peter Vandenabeele, Karen Maex, “Influence of temperature and backside roughness on the emissivity of Si wafers during rapid thermal processing,” Journal of Applied Physics 72 (1992): 5867.

[84]

G. Chen, “Optical effect on thermal emission of semiconductors,” Applied Physics Letters 69 (1996): 512.

[85]

Stephen B. Ippolito, Shawn A. Thorne, Mesut G. Eraslan, Bennett B. Goldberg, M. Selim Ünlü, Yusuf Leblebici, “High spatial resolution subsurface thermal emission microscopy,” Lasers and Electro-Optics Society Annual Meeting (Piscataway: IEEE, 2003).

[86]

Shawn A. Thorne, Stephen B. Ippolito, Mesut G. Eraslan, Bennett B. Goldberg, M. Selim Ünlü, “High Resolution Backside Imaging and Thermography using a Numerical Aperture Increasing Lens,” International Symposium for Testing and Failure Analysis (Materials Park: ASM International, 2003).

[87]

M. S. Ünlü, S. B. Ippolito, M. G. Eraslan, S. A. Thorne, A. Vamivakas, B. B. Goldberg, Y. Leblebici, “High Resolution Backside Imaging and Thermography using a Numerical Aperture Increasing Lens,” Nanotechnology Conference and Trade Show (Cambridge: NSTI, 2004) MO41.03.

[88]

Jean-Jacques Greffet, Rémi Carminati, Karl Joulain, Jean-Philippe Mulet, Stéphane Mainguy, Yong Chen, “Coherent emission of light by thermal sources,” Nature 416 (2002): 61.

[89]

Lukas Novotny, Robert D. Grober, Khaled Karrai, “Reflected image of a strongly focused spot,” Optics Letters 26 (2001): 789.

[90]

Lukas Novotny, “Allowed and forbidden light in near-field optics. I. A single dipolar light source,” Journal of the Optical Society of America A 14 (1997): 91.

[91]

Khaled Karrai, Xaver Lorenz, Lukas Novotny, “Enhanced reflectivity contrast in confocal solid immersion lens microscopy,” Applied Physics Letters 77 (2000): 3459-3461.

[92]

Gordon E. Moore, “Cramming more components onto integrated circuits,” Electronics April 1965: 114-117.

[93]

Zhiheng Liu, “Nanoscale time-resolved spectroscopy of individual quantum dots,” diss., Boston U, 2004.