An ideal light sensor would provide exact information on the intensity, timing, location, and angle of incoming photons. Single photon avalanche diodes (SPADs) provide the desired high (single-photon) sensitivity with precise timing information and can be implemented at a pixel scale to form arrays that extract spatial information. Furthermore, recent work has demonstrated photodiode-based structures (combined with micro-lenses or diffraction gratings) that are capable of encoding both the spatial and the angular information of incident light. In this letter, we describe the implementation of such a grating structure on SPADs to realize a pixel-scale angle-sensitive single photon avalanche diode (A-SPAD) built in a standard CMOS process. While the underlying SPAD structure provides high sensitivity and precise timing information, the two layers of diffraction gratings above it offer angle sensitivity. Such a combination of SPADs and diffraction gratings expands the sensing dimensions and paves a path towards lens-less 3-D imaging and light-field time-of-flight imaging.
In order to obtain maximum information from incident photons, a light sensor should be sensitive to the direction, arrival time, and number of incoming photons. To capture their spatial distribution, a sensor should be small and parallelizable enough to form an array with high spatial resolution, as well as angle-sensitive for ray-tracing or lens-less 3-D reconstruction. Furthermore, the sensor should be able to resolve photon arrival times down to the sub-nanosecond range, even for small numbers of photons.
Recently, CMOS angle sensitive pixel (ASP) arrays have been reported1 that enable light field imaging, including computational refocusing and range finding, lens-less far field imaging, and on-chip 3D localization.1,2,4
These capabilities are derived from two stacked diffraction gratings integrated above each pixel. Incident light striking the top grating generates a diffraction pattern in the form of self-images at 1/2-integer multiples of the Talbot depth3 with a lateral offset that depends on the incident angle of the light. The bottom grating, acting as an analyzer, selectively blocks or passes the incoming diffraction patterns based on their lateral offsets, effectively modulating the light intensity based on the incident angle of the incoming light. This generates an angular response at the pixel that can be modeled by the equation

I(θ) = (I0/2)[1 − m cos(βθ + α)]A(θ). (1)
Quantitatively, Eq. (1) can be used to approximate the output of an ASP,1 where I0 is the incident light intensity, θ is the incident angle, m is the modulation depth, α is the sampling phase, and β is the angular sensitivity. The aperture function, A(θ), accounts for effects such as surface reflections, which can hamper the sensitivity of the ASP at large angles. In order to resolve the ambiguity between incident angle and local intensity, previous work1,4 has implemented sets of four ASPs, where α is selected as 0°, 90°, 180°, and 270°. Such a scheme is akin to the differential, quadrature modulation often used in communication systems to determine the phase and amplitude of a carrier signal. It is also important to note that such ASPs are built in a standard CMOS manufacturing flow, with the gratings formed in the already-existing metal interconnect layers, allowing for low-cost, scalable arrays of ASPs. Because of their diffractive nature, ASPs are fundamentally dependent on the wave-like behavior of light.
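To make the demodulation concrete, the following Python sketch evaluates Eq. (1) for a quadrature set of ASPs and recovers the incident angle and relative intensity; the modulation depth m = 0.8, β = 15, and the cos²-shaped aperture function are illustrative placeholders rather than measured device parameters.

```python
import numpy as np

def asp_response(I0, theta, alpha, m=0.8, beta=15.0, A=lambda t: np.cos(t) ** 2):
    """Eq. (1): ASP output for intensity I0 at incident angle theta (radians)."""
    return 0.5 * I0 * (1.0 - m * np.cos(beta * theta + alpha)) * A(theta)

def demodulate_quadrature(r0, r90, r180, r270, beta=15.0):
    """Recover incident angle and modulated amplitude from four quadrature outputs."""
    i = r180 - r0            # proportional to m*I0*A(theta)*cos(beta*theta)
    q = r90 - r270           # proportional to m*I0*A(theta)*sin(beta*theta)
    theta = np.arctan2(q, i) / beta     # incident angle, modulo one modulation period
    amplitude = np.hypot(i, q)          # proportional to local intensity
    return theta, amplitude

outputs = [asp_response(1.0, np.deg2rad(3.0), a) for a in np.deg2rad([0, 90, 180, 270])]
theta_est, amp_est = demodulate_quadrature(*outputs)   # theta_est ~ 3 degrees (in radians)
```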
Previous studies on ASPs have used conventional photodiodes to transduce photons into charge carriers. Such sensors have limited temporal resolution and require many photons to detect a signal. However, many applications need high temporal resolution on a nanosecond scale. For instance, fluorescence imaging requires significant rejection of the short-wavelength stimulus in order to detect the longer-wavelength emission, which is often accomplished with an optical band-pass filter. However, sub-nanosecond time-gating can also reject the high-energy stimulus pulse in the time domain while capturing re-emitted fluorescent photons nanoseconds later. Similarly, time-of-flight applications need to measure the time it takes emitted photons to reflect off a target and arrive back at the sensor.8
Single photon avalanche diodes (SPADs) are known for their high sensitivity, as they can detect single photon arrivals, as well as for extremely fine temporal resolution of photon arrival times down to the sub-nanosecond scale.6–9 Pixel-scale SPADs on CMOS are emerging as favorable low-cost options for time-of-flight (TOF) or fluorescence lifetime imaging (FLIM) owing to their scalability, their integration in conventional monolithic semiconductor technology, and therefore their co-integrability with signal processing.5 SPADs amplify a single electron-hole pair generated by an incident photon in a p-n junction. A reverse bias above the intrinsic breakdown voltage of the p-n junction triggers impact ionization, which leads to an avalanche breakdown current in response to the incident photon. A quenching circuit is included to block the avalanche current, suppress after-pulsing, and restore the bias voltage for the next detection cycle.6–12 One common implementation of such a quenching circuit converts the current pulse to a voltage by placing a resistor in series with the SPAD. It should be noted that the output of a SPAD is inherently binary and probabilistic, and therefore amenable to immediate digital signal processing. Such discreteness is needed to achieve ultra-fine temporal resolution, and the resulting information can be used either to detect the arrival of individual photons or to characterize the probability that a photon arrives inside a given time window. This window is shifted in varying amounts relative to the stimulus, a technique known as time-correlated single photon counting (TCSPC). By repeating the stimulus and measurement many times and averaging, such statistical data can be obtained with a mean-square error inversely proportional to the number of measurements. Fundamentally, however, the temporal information captured by SPADs derives from the arrival of individual photons, and therefore from the particle nature of light.
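As an illustration of the TCSPC principle described above, the sketch below accumulates binary, gated SPAD detections into a histogram as the sampling window is shifted past the stimulus; the exponential arrival profile, window step, and repetition count are hypothetical values chosen only to show the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(0)

def tcspc_histogram(arrival_prob, n_bins=64, bin_step_ps=72, repeats_per_bin=1000):
    """Shift a gated window across the stimulus period and count binary detections."""
    counts = np.zeros(n_bins, dtype=int)
    for b in range(n_bins):
        t_ps = b * bin_step_ps                         # window offset relative to the stimulus
        p = arrival_prob(t_ps)                         # probability of a detection in this window
        counts[b] = rng.binomial(repeats_per_bin, p)   # each repetition yields a 0/1 outcome
    return counts

def decay(t_ps):
    # Hypothetical fluorescence-like profile: weak background, then an exponential tail after ~1 ns.
    return 0.02 * np.exp(-(t_ps - 1000) / 2500) if t_ps >= 1000 else 1e-4

histogram = tcspc_histogram(decay)   # estimate improves as the number of repetitions grows
```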
In order to achieve characteristics closer to those of an ideal light sensor, it makes sense to combine the superior sensitivity, time resolution, and inherent photon-counting capability of CMOS-compatible SPAD devices with the high angular sensitivity of ASPs. Here, we propose an angle-sensitive SPAD on a conventional CMOS platform by integrating ASP gratings on top of silicon SPADs with optimized readout circuitry. Fig. 1 shows the cross section of the angle-sensitive single photon avalanche diode (A-SPAD). Two metal gratings are used to implement angle sensitivity. The upper grating, in the Metal 5 layer, generates the diffraction pattern; the lower grating, in Metal 3, sits near the fractional Talbot depth (1/2 ZT) and therefore filters the pattern based on incident angle before passing it on to the SPAD surface. A highly doped p-type diffusion layer and a lightly doped n-type well form a p-n junction to support the high electric field required for SPAD operation. A p-well buffer is formed around the edge of the p+ anode diffusion area to serve as a guard ring that minimizes unwanted edge breakdown.7–9 All the active circuitry is optically shielded by the local decoupling capacitor, and noise-sensitive circuits, including the comparator, are electrically isolated in independent p-wells to minimize charge injection through the substrate.
Because SPADs are somewhat larger than simple photodiodes, we saved area by replacing the quadrature-phase sampling (α = 0°, 90°, 180°, and 270°) with balanced tri-phase sampling (α = 0°, 120°, and 240°). While the reconstruction of incident angle and intensity is formulated as a complex number (amplitude and phase), and therefore maps naturally to quadrature sampling, balanced tri-phase sampling provides equivalent intensity detection through averaging of the three phase outputs (as shown in Fig. 2). At the same time, three phases are sufficient to resolve the angle information. Fig. 3 shows a microphotograph of the foundry-fabricated prototype A-SPAD unit cell, consisting of six A-SPAD pixels for three different phases (α = 0°, 120°, and 240°) oriented in both the x and y directions, where a single A-SPAD pixel occupies an area of 35 μm × 35 μm. The readout circuitry consists of a quenching pseudo-resistor, an AC-coupling capacitor, which also provides an optical shield over the non-optical area, and a comparator. Two different β values (8 and 15) are implemented by placing the analyzer grating at different depths, in Metal 3 and Metal 1, respectively. The Metal 1 gratings are connected to the anode and cathode located below them, and hence also serve to bring the electrical signals out of the SPAD.
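A minimal sketch of the balanced tri-phase scheme, assuming the same illustrative Eq. (1) parameters as above: averaging the three outputs cancels the modulation term and yields intensity, while a complex phasor sum recovers the angle.

```python
import numpy as np

ALPHAS = np.deg2rad([0.0, 120.0, 240.0])   # balanced tri-phase sampling

def tri_phase_outputs(I0, theta, m=0.8, beta=15.0, A=lambda t: np.cos(t) ** 2):
    """Eq. (1) evaluated at the three sampling phases (illustrative m, beta, aperture)."""
    return 0.5 * I0 * (1.0 - m * np.cos(beta * theta + ALPHAS)) * A(theta)

def tri_phase_reconstruct(outputs, beta=15.0):
    intensity = outputs.mean()                           # modulation terms cancel: mean = I0*A(theta)/2
    phasor = -np.sum(outputs * np.exp(-1j * ALPHAS))     # sign from the (1 - m*cos) form of Eq. (1)
    theta = np.angle(phasor) / beta                      # incident angle, modulo one modulation period
    return intensity, theta

intensity_est, theta_est = tri_phase_reconstruct(tri_phase_outputs(1.0, np.deg2rad(3.0)))
```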
Fig. 2. Balanced tri-phase sampling. Colored curves represent the ideal angular response from each type of A-SPAD (α = 0°, 120°, and 240°).
Fig. 4(a) shows the measurement setup used to verify the sub-nanosecond temporal resolution while measuring amplitude and angular information simultaneously. Using a digital delay generator (SRS Inc. DG645), one LED or the other was turned on approximately 1 ns and 4 ns, respectively, after the SPAD entered Geiger mode (t = 0 in Fig. 4(b)), with the two LEDs placed at different incident angles. A 5 ns wide gated sampling window was shifted with respect to the LED stimulus, as in the TCSPC technique, through which we obtained a histogram with a temporal resolution of 72 ps. Fig. 4(b) shows the resulting amplitude response acquired by averaging the photon counts of all three phase outputs (blue curve). The three output vectors (α = 0°, 120°, and 240°) were summed in the complex plane to reconstruct the angle information of the incident photons (red curve). Extraction of the angle information, however, requires the vector sum to be sufficiently large in amplitude; the threshold was chosen to be 40% of the peak amplitude (estimated to be 10 photons/ns per A-SPAD), as depicted by the yellow boxes in Fig. 4(b).
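The per-time-bin reconstruction described above can be sketched as follows, assuming three arrays of phase-resolved photon counts (one entry per TCSPC bin); the 40%-of-peak validity criterion follows the text, but applying it to the magnitude of the vector sum is an assumption of this sketch.

```python
import numpy as np

def reconstruct_vs_time(counts_0, counts_120, counts_240, threshold_frac=0.40):
    """Amplitude (averaged counts) and angle (complex vector sum) per TCSPC time bin."""
    alphas = np.deg2rad([0.0, 120.0, 240.0])
    counts = np.stack([counts_0, counts_120, counts_240])            # shape (3, n_bins)
    amplitude = counts.mean(axis=0)                                  # averaged photon counts (blue curve)
    phasor = np.sum(counts * np.exp(1j * alphas)[:, None], axis=0)   # vector sum in the complex plane
    valid = np.abs(phasor) > threshold_frac * np.abs(phasor).max()   # keep only sufficiently large sums
    angle = np.where(valid, np.angle(phasor), np.nan)                # modulated angle (red curve)
    return amplitude, angle
```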
Fig. 4. Time and angle independence measurement. (a) Timing diagram of the SPAD gate driver and light sources (LEDs). (b) Reconstructed amplitude and angle information from the tri-phase (α = 0°, 120°, and 240°) measurement. Yellow shaded boxes indicate where the reconstructed amplitude was large enough to resolve the angle information.
The measurement was performed with the SPAD reverse biased at 1.2 V above the breakdown voltage (i.e., Vex = 1.2 V), where light emitted by a green LED (λ = 535 nm) was first passed through a visible-light band-pass filter (Melles Griot Inc.) and then through a metallic ND filter (FBS-ND10, Newport) in series with a collimator. An optical power meter (PM203, Thorlabs Inc.) was used to determine the average optical power at the sensor position prior to the actual A-SPAD measurement. The dark count rate was also measured by examining the sensor outputs with the light source turned off. Photon detection probabilities (PDP) were measured for both the A-SPAD and a SPAD without gratings. The A-SPAD showed a PDP of 2.72% with a dark count rate of 404 Hz, whereas the SPAD without gratings showed a PDP of 18.7% with a similar dark count rate, implying a loss of roughly a factor of 7 caused by the gratings.
Due to the binary nature of the SPAD output, the signal-to-noise ratio (SNR) of both the incident light intensity and the angle is a function of the detector SNR and the number of frames (F), where a single frame here refers to a single SPAD output accumulated over one thousand sampling cycles. Peak light intensities were chosen to be less than 5% of the onset of sensor saturation to avoid photon pile-up issues.7,8 Fig. 5 shows the angular response from an FDTD simulation, which captures the wave-like behavior of light (a), and the angular responses of the A-SPAD for different numbers of frames and β values (b), (c), and (d). The three curves in Fig. 5(b) show that the angle sensitivity improves sharply as the number of frames increases. The angular responses recorded over different numbers of frames were vertically shifted by their averages so that they are centered around zero on the y-axis. Such a strong dependence on the number of frames is not surprising, as SPADs are binary devices. At the single- or few-photon level, photon detection using an A-SPAD can be viewed much like the classical double-slit experiment, as in Fig. 5(b), where the particle-like character of photons dominates. Once the number of frames becomes sufficiently large by repeating sub-nanosecond measurements, as shown in Figs. 5(c) and 5(d), the averaged A-SPAD output converges to that of a conventional photodiode-based sensor, as shown in Fig. 5(a). Thus, A-SPADs intrinsically make use of both sides of the wave-particle duality of light.
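The convergence with frame count can be reproduced with a simple Monte Carlo sketch: each sample is a Bernoulli trial whose success probability follows an angle-modulated response, and the per-angle average approaches that response as frames accumulate. The peak detection probability, m, and β below are illustrative rather than measured values.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_aspad_output(theta_deg, n_frames, samples_per_frame=1000,
                      p_peak=0.01, m=0.8, beta=15.0):
    """Average binary detections over n_frames at a fixed incident angle."""
    theta = np.deg2rad(theta_deg)
    p = 0.5 * p_peak * (1.0 - m * np.cos(beta * theta))         # per-sample detection probability
    counts = rng.binomial(samples_per_frame, p, size=n_frames)  # detections in each frame
    return counts.mean() / samples_per_frame                    # converges to p as n_frames grows

angles = np.arange(-20, 21, 2)
for F in (1, 10, 100, 8000):
    response = [mean_aspad_output(a, F) for a in angles]        # noisy at F = 1, smooth sinusoid at F = 8000
```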
Fig. 5. (a) FDTD-simulated angular response of a photodiode-based ASP, (b) A-SPAD angular response measured with different numbers of frames (F = 1, 10, and 100), (c) A-SPAD angular response with β = 8 and F = 8000, and (d) A-SPAD angular response with β = 15 and F = 8000. With a large number of frames, the A-SPAD responses in (c) and (d) converge to the FDTD-simulated photodiode sensor response shown in (a).
The minimum angular resolution of an A-SPAD is determined by the angular sensitivity and the SNR of the device, which is influenced by various factors. Failure to detect incoming photons due to finite quantum efficiency, false photon counts from tunneling and thermal charge generation, and the limited number of frames taken are some of the main factors that limit the maximum attainable SNR. Neglecting after-pulsing noise, the SNR can be described by Eq. (2)14

SNR = λp√(1000F) / √(λp + λd), (2)
where λp is the average number of photons absorbed per sampling window, λd is the average number of dark counts per sampling window, and F is the number of frames (note that there are 1000 samples per frame). Thus, to optimize the SNR, we need to lower the dark-count probability, increase the PDP, or increase the number of frames at the cost of a lower frame rate. Another critical factor is the non-linear distortion of the detection probability caused by photon pile-up, which limits the overall signal-to-noise-and-distortion ratio (SNDR).13 With a 10 ns sampling window, a 1% photon detection rate, and F = 8000, the SNDR of the A-SPAD is estimated to be 49.01 dB. This can be improved by replacing the amplitude (Talbot) grating with a phase grating, or by implementing a low-noise SPAD and quenching circuit.10,11,15
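As a numeric check of the operating point quoted above (10 ns window, 1% detection probability per window, 404 Hz dark counts, and F = 8000 frames of 1000 samples each), the sketch below evaluates Eq. (2); it gives roughly 49 dB, consistent with the 49.01 dB SNDR estimate once pile-up distortion is neglected.

```python
import numpy as np

lam_p = 0.01                # mean photon detections per 10 ns sampling window (1% detection rate)
lam_d = 404 * 10e-9         # mean dark counts per 10 ns window, from the 404 Hz dark count rate
F = 8000                    # frames, each accumulating 1000 sampling windows
n_windows = 1000 * F

snr = lam_p * np.sqrt(n_windows) / np.sqrt(lam_p + lam_d)   # Eq. (2)
snr_db = 20 * np.log10(snr)                                 # ~49 dB
```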
The angular sensitivity of the A-SPAD can be described by Eq. (3), where the sensitivity, by definition, is the inverse of the derivative of the ASP response in Eq. (1) (neglecting the slow variation of the aperture function with angle)

S(θ) = (∂I/∂θ)⁻¹ = 2 / [I0 m β sin(βθ + α)A(θ)]. (3)
Interestingly, the periodicity of the angular response of the ASP leads to a non-uniform distribution of the angular noise. Since the readout mechanism of the A-SPAD is based on modulation by the incident angle and amplitude of the incoming photons, the angular noise can be calculated by multiplying the amplitude noise by the sensitivity, as shown in Eq. (4)

σθ = σA · S(θ), (4)
where σθ and σA stand for the angular noise and the amplitude noise, respectively. However, since a unit A-SPAD consists of three phases per axis, the overall angular resolution for a given SNR is a factor of √3 better than that of a single SPAD pixel.
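The non-uniform angular noise follows directly from Eqs. (3) and (4): wherever sin(βθ + α) approaches zero the response curve is flat, the sensitivity S(θ) diverges, and a fixed amplitude noise maps onto a large angular noise. A short sketch with illustrative parameters:

```python
import numpy as np

def angular_noise(theta, sigma_A, I0=1.0, m=0.8, beta=15.0, alpha=0.0,
                  A=lambda t: np.cos(t) ** 2):
    """Eq. (4): sigma_theta = sigma_A * S(theta), with S(theta) from Eq. (3)."""
    slope = 0.5 * I0 * m * beta * np.sin(beta * theta + alpha) * A(theta)   # d(Eq. 1)/d(theta)
    return sigma_A / np.clip(np.abs(slope), 1e-12, None)   # diverges where the response is flat

thetas = np.deg2rad(np.linspace(-20, 20, 201))
sigma_theta = angular_noise(thetas, sigma_A=0.01)           # largest near the response extrema
```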
In conclusion, we present a light sensor that combines the merits of SPADs and ASPs. The demonstrated sensor exhibits high sensitivity to incoming photons with fine time resolution, a characteristic of SPADs, as well as high angular sensitivity, a characteristic of ASPs. This combined device extends the dimensions of light sensing to include the local photon arrival angle. Such an extension in sensing ability provides not only a local intensity probe with sub-nanosecond temporal precision but also, through its angular sensitivity, a means to mitigate an existing limitation of 2-D time-of-flight imaging.
The authors thank TaeSung Jung, Min Chul Shin, Sung-Yun Park, Jaeyoon Kim, and Sunwoo Lee for critical discussions. The authors acknowledge the support by NSF (Grant ECS-0335765).