Magneto-optic Kerr effect (MOKE) microscopy is a widely used technique for observing and characterizing microscopic magnetic structures. While efficient and easy to use, current commercial MOKE microscopes offer limited time resolution, constrained by the frame rate of the camera. Here, we introduce a revolutionary sensor, the event camera, as a convenient add-on to traditional MOKE microscopy and explore the potential applications of event-based vision in research areas that use MOKE microscopy. We use a frame stacking method to improve the visibility of the generated slow-motion videos to human eyes. We perform a proof-of-principle feedback control experiment using the event-based vision data and measure an overall feedback-loop latency as short as 25 ms with our current prototype. Finally, we discuss the limitations of current event cameras in MOKE microscopy.

The behavior of magnetic domains, such as flipping, expansion, and deformation, is of crucial importance for understanding the micromagnetic dynamics of bulk or thin-film magnetic materials. Magneto-optic Kerr effect (MOKE) microscopy is one of the most prominent technologies for observing magnetic domains and various other microstructures in magnetic materials, providing immediate imaging of the magnetic domains.1,2 It is non-invasive, convenient, and easy to access under ambient conditions, with a decent spatial resolution of sub-micrometers. The MOKE microscope is based on the magneto-optic Kerr effect, which describes the small change in the polarization state of light reflected from the surface of a magnetic material. This polarization change depends on the magnetization of the material and can be detected using a polarized light microscope. The change in polarization is typically a fraction of a degree. Careful alignment and fine adjustment of the compensator (a quarter-wave plate) and the analyzer are required to maximize the optical contrast of different magnetic domains under a MOKE microscope. However, the temporal resolution, i.e., the frame rate, of MOKE microscopy is heavily limited, not for any fundamental physical reason but by the data transfer speed of common cameras. For applications involving fast dynamics of magnetic microstructures, upgrading commercial MOKE microscopes, e.g., by installing high-speed cameras, is necessary. However, this incurs very high costs and is rarely done in research groups.

An event camera, also called a silicon retina, is a new type of sensor mimicking the neural structure of the eye, which can transfer visual information with high time resolution at a low data rate.3,4 As machine vision and artificial intelligence have flourished in recent years, the event camera has attracted an increasing number of researchers and developers.4–9 Generations of prototypes keep hitting the market and are being used in a broad range of application areas.10–12 However, there have been very few demonstrations of event cameras in microscopy so far.13–15

In this work, we install a prototype event camera on a commercial MOKE microscope as an add-on and collect event-based vision data for both real-time monitoring and postprocessing. We explore data presentation methods and potential applications of event-based vision in MOKE microscopy. We believe our results open the door to a brand-new technology in MOKE microscopy, which will be useful in research areas such as the dynamics and control of magnetic materials.

A standard frame-based camera consists of an array of pixels that are read out one by one in a synchronous readout process. It outputs images, or videos made of cascades of images, which are, unfortunately, redundant in terms of information processing. As a result, standard cameras are extremely data intensive, and their frame rate is heavily limited by the readout and transfer of these redundant data. The event camera, on the other hand, is an asynchronous sensor that does not read out each of its pixels periodically. Here, we should emphasize that by the term event camera, we mean the Dynamic Vision Sensor (DVS) event camera. There are other types of event cameras, such as the ATIS (Asynchronous Time-based Image Sensor) and the DAVIS (Dynamic and Active pixel VIsion Sensor), dating back to the first silicon retina decades ago. A pixel of a DVS event camera is only active when it detects a change in light intensity exceeding a certain percentage threshold (i.e., the temporal difference of the logarithm of the intensity reaching a threshold), which is called an event; pixels with unchanged intensity are treated as unimportant and do not produce redundant data. The event camera records the position (coordinates), the change direction (polarity), and the timestamp of each event, producing a stream of event data. The time resolution of the event timestamps is down to 1 µs.
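For concreteness, such an event stream can be represented as a structured array. The following is a minimal sketch in Python; the field layout follows the EventCD convention of the Prophesee Metavision SDK, and the sample events are hypothetical.

```python
import numpy as np

# A single DVS event: pixel coordinates, polarity, and a microsecond timestamp.
EVENT_DTYPE = np.dtype([("x", np.uint16),  # pixel column (0..639 on our sensor)
                        ("y", np.uint16),  # pixel row (0..479)
                        ("p", np.int16),   # polarity: 1 = ON (brighter), 0 = OFF (darker)
                        ("t", np.int64)])  # timestamp in microseconds

# A hypothetical three-event stream: two ON events and one OFF event.
events = np.array([(120, 200, 1, 15),
                   (121, 200, 1, 42),
                   (300,  55, 0, 90)], dtype=EVENT_DTYPE)
print(events["t"])  # timestamps are asynchronous, with resolution down to 1 us
```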

The integration of an event camera into an existing MOKE microscope is straightforward. Thanks to the standardized packaging and universal C-mount threads, we can easily replace our C-mount standard camera with an event camera without realigning any other optical components. The event camera can alternatively be installed at the ocular port with a home-made adapter, as shown in Fig. 1(a).

FIG. 1.

(a) Picture of an event camera installed as an add-on at the ocular port of a MOKE microscope. (b) Pixel intensity histogram of a MOKE image taken using a standard camera (inset: the orange curve is a double-peak Gaussian fit). (c) and (d) Reconstructed frames in the real-time monitoring program. The motion of the domain wall (edge) is marked with red arrows.


In this work, we use a prototype event camera (SilkyEvCam from CenturyArks Co. Ltd., equipped with a PPS3MVCD event-based vision sensor from Prophesee16,17), integrated on a MOKE microscope from evico magnetics GmbH. The objective lens is a 50× polarized light objective from Zeiss with a numerical aperture of 0.8. Our test sample is a cobalt–platinum multilayer structure (Ta(5 nm)/[Pt(0.85 nm)/Co(0.5 nm)]3/Ta(5 nm)) fabricated on a silicon wafer by magnetron sputtering, followed by a 350 °C annealing process of 1 h.18 The bottom tantalum layer serves as the buffer layer, and the top one as the capping layer for antioxidation. The annealing process improves the crystallization of the multilayer thin film, strengthens the magneto-optic Kerr effect, and improves the optical contrast of our sample, allowing the event camera to achieve a better signal-to-noise ratio. The magnetization directions of the magnetic domains are perpendicular to the thin film, either outward or inward. Figure 1(b) shows the intensity histogram of pixel data from a selected area of a MOKE image taken using a standard camera. The optical contrast between the two magnetic domains under the MOKE microscope is estimated from the intensity histogram as ∼10% without background subtraction, which is large among common magnetic materials. The coercive field of the sample is measured as ∼6.5 mT with the MOKE microscope.
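The contrast estimate of Fig. 1(b) can be reproduced from the pixel histogram with a double-peak Gaussian fit. Below is a minimal sketch, assuming the contrast is defined as the relative difference between the two fitted peak positions (the exact formula is not specified above, and all names are ours).

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussians, one per magnetic domain population."""
    return (a1 * np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)))

def domain_contrast(pixels):
    """Fit a double-peak Gaussian to the pixel intensity histogram and return
    the relative difference of the two peak positions (one plausible contrast
    definition; ~0.10 for our sample)."""
    counts, edges = np.histogram(pixels, bins=256)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), np.percentile(pixels, 25), pixels.std() / 2,   # initial
          counts.max(), np.percentile(pixels, 75), pixels.std() / 2]   # guesses
    (_, mu1, _, _, mu2, _), _ = curve_fit(double_gaussian, centers, counts, p0=p0)
    return abs(mu2 - mu1) / (0.5 * (mu1 + mu2))
```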

The event-based vision data acquisition and processing are achieved using home-made software written in Python, based on the Metavision Intelligence Suite, a software toolkit provided by Prophesee (the source code will be available from the corresponding author upon reasonable request). The real-time monitoring program for data acquisition displays reconstructed frames (see details in the data presentation section), as shown in Figs. 1(c) and 1(d), while recording the stream of events as raw data.

Most commercially available DVS event cameras have typical temporal contrast thresholds larger than 10%, which means the optical contrast in our application is at the limit of the event camera's capability. Fine adjustments of the MOKE microscope optics and of the event camera parameters are necessary to achieve the best overall performance of this setup.

The optical contrast of magnetic domains under a MOKE microscope is a complicated topic, especially for multilayer thin-film materials.2,19,20 In short, the fine adjustment of a MOKE microscope involves two crucial steps: a combined optimization of the compensator and the analyzer in the collection path, and closing down the aperture diaphragm in the illumination path at the cost of overall brightness. The latter is rarely used in traditional MOKE microscopy because brightness is more important there, and background subtraction can significantly improve the contrast of processed images and videos. In our application, an improvement of 1–2 percentage points in optical contrast at the expense of brightness can be of crucial importance.

The principles and procedures for optimizing the event camera are described in the documents from Prophesee.21,22 In each DVS pixel, the photocurrent passes through a low-pass filter and a high-pass filter, both software controlled, before entering the differentiator. In our application, where the optical contrast is extremely low, lowering the cut-off frequency (bandwidth) of the low-pass filter significantly suppresses the noise events while having only minor effects on the signal events. We should point out, however, that this also increases the latency of the event camera. The software-controlled thresholds of the comparators can be adjusted to increase the sensitivity, but this increases the noise rate at the same time. Finally, crazy pixels (pixels emitting events at an abnormally high rate) can be suppressed by increasing the refractory period of the event camera.
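These adjustments map onto the sensor bias settings exposed by the Metavision SDK. The following is a minimal sketch of such a tuning step, assuming a Gen3-class sensor; the bias names follow the Prophesee documentation,22 but the numeric values are placeholders only, not our recorded settings (those are in the data repository24).

```python
from metavision_core.event_io.raw_reader import initiate_device

device = initiate_device(path="")     # open the first connected live camera
biases = device.get_i_ll_biases()     # HAL facility exposing the pixel biases

biases.set("bias_fo", 1600)       # low-pass filter: reducing its cut-off
                                  # frequency suppresses noise but adds latency
biases.set("bias_diff_on", 350)   # comparator thresholds: higher sensitivity
biases.set("bias_diff_off", 200)  # also raises the noise rate
biases.set("bias_refr", 1700)     # refractory period: increase it to tame
                                  # crazy pixels
```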

Noise filtering algorithms are usually applied to eliminate background noise events. However, almost all noise filtering algorithms assume that motion creates events much denser than noise events in both the space and time domains, which requires sufficiently large light intensity changes. In MOKE microscopy, the contrast of light intensity between different magnetic domains is small (typically less than 10%). As the domain wall (edge) moves, the small change in light intensity barely produces one event per pixel at most. The signal events are thus not dense in the time domain and rarely survive common noise filtering algorithms. Recent progress in denoising algorithms may improve the results.23 A denoising algorithm specifically designed or tuned for MOKE microscopy is a sophisticated topic beyond the main purpose of this article. In our work, no noise filtering algorithms are applied.
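To make the failure mode concrete, here is a minimal sketch of the classic spatiotemporal correlation (background activity) filter, which keeps an event only if a nearby pixel fired recently. An isolated signal event from a slowly moving domain wall fails this test just like a noise event. This is a generic textbook variant, not a specific published implementation.

```python
import numpy as np

def background_activity_filter(events, width=640, height=480, dt=10_000):
    """Keep an event only if some pixel in its 3x3 neighborhood (including
    itself) fired within the last `dt` microseconds. Sparse MOKE signal
    events, at most ~1 per pixel per domain-wall pass, rarely have such a
    recent neighbor and are discarded together with the noise."""
    last_t = np.full((height + 2, width + 2), -np.inf)  # timestamp map, 1-px pad
    keep = np.zeros(len(events), dtype=bool)
    for i, (x, y, p, t) in enumerate(events):           # events sorted by t
        neighborhood = last_t[y:y + 3, x:x + 3]         # padded indices
        keep[i] = (t - neighborhood.max()) <= dt
        last_t[y + 1, x + 1] = t
    return events[keep]
```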

The optimization of event-based MOKE microscopy depends heavily on the magnetic sample and the specific task. The raw data of the experimental results and the corresponding event camera biases, specifically optimized for each experiment, can be found in our public data repository.24 The physical meanings of the bias values and more details about the optimization of the event camera can be found in documents from Prophesee.22

Unlike standard cameras producing frames or images, event-based vision data are less straightforward to present on screen. The event camera produces an asynchronous flow of events as vision data, which is effectively a large sparse array in three-dimensional spatial–temporal space. A 3D plot of the data array is not informative because the signal events are usually submerged in the sea of noise events and become invisible in the plot: without noise filtering, the event camera typically generates tens of thousands of noise events per second (∼0.1 noise events per pixel per second over 640 × 480 pixels).

One commonly used data presentation method, for either video playback or real-time monitoring, is frame reconstruction, which first bins the events into small time slices and then generates one frame per time slice by painting an individual pixel at the corresponding coordinates of each event. This method generates output frames significantly different from those of standard cameras, i.e., only the moving edges (the moving domain walls in our case) are visible.25,26 It is often used as the default monitoring program of event camera prototypes. In our case, due to the low contrast of MOKE microscopy, the events in each time slice are typically insufficient to generate frames with visible magnetic domains, especially when the width of each time slice is reduced to ∼1 ms. As a first example, we record a slow expansion process of a magnetic domain driven by a magnetic field. First, our magnetic sample is initialized as a single magnetic domain by applying a large magnetic field with our electromagnet. Then, an opposite magnetic field barely larger than the coercive field is applied to flip the magnetization via magnetic domain expansion. Finally, we invert the magnetic field again to flip the magnetization back to the initial state. The time histogram of the recorded events is shown in Fig. 2(a), where the positive/negative (ON/OFF) events, corresponding to intensity increase/decrease, are binned into 1 ms time slices as stacked orange/blue columns. There are clearly two peaks in the time histogram. The first one is made of ON events, corresponding to the expansion of magnetic domains that are brighter under the MOKE microscope; the second one is made of OFF events, corresponding to the expansion of the darker magnetic domains. The number of events in each time slice is of the order of 100, including noise events, which is not sufficient to plot an informative frame. In order to increase the number of events in each frame, we use the events from multiple time slices to generate one frame, similar to overlapping windows in audio processing. As shown in Fig. 2(b), the first frame is generated using the events in the first four time slices, the second frame using the events in time slices 2–5, and so on. This frame stacking method significantly increases the number of events in each frame while the time resolution stays high (the width of the time slices stays narrow), and the generated video clearly shows the motion of the domain wall if the number of stacked slices is chosen properly. After reconstructing the frames, we generate a video with a frame rate of 20 fps from these stacked frames. The video is effectively 50 times slower than real time. The number of stacked slices is chosen as 50, meaning that each generated frame consists of 50 time slices and is based on the thousands of events accumulated within a 50 ms time window. Equivalently, each event stays on screen for 2.5 s (50 generated frames) in the generated video. Figures 2(c) and 2(d) show frames generated using the events within a 50 ms window from 1.75 to 1.80 s and from 5.00 to 5.05 s, where we draw white pixels for ON events and green pixels for OFF events. In fact, some of these frames are identical to the frames in the real-time monitoring program, since both use the events accumulated within a 50 ms window for each frame.
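A minimal sketch of the frame stacking method in Python follows; the array fields match the event layout shown earlier, and the parameter names are ours.

```python
import numpy as np

def stacked_frames(events, n_stack=50, slice_us=1_000, width=640, height=480):
    """Frame stacking: bin events into `slice_us`-wide time slices, then paint
    the events of `n_stack` consecutive slices (a window sliding by one slice)
    into each output frame. ON events are white, OFF events green."""
    t0 = events["t"].min()
    slice_idx = (events["t"] - t0) // slice_us          # time slice of each event
    n_slices = int(slice_idx.max()) + 1
    for start in range(max(n_slices - n_stack + 1, 1)):
        frame = np.zeros((height, width, 3), dtype=np.uint8)
        window = events[(slice_idx >= start) & (slice_idx < start + n_stack)]
        on, off = window[window["p"] == 1], window[window["p"] == 0]
        frame[on["y"], on["x"]] = (255, 255, 255)       # intensity increased
        frame[off["y"], off["x"]] = (0, 255, 0)         # intensity decreased
        yield frame
```

Rendering the yielded frames at 20 fps with 1 ms slices and n_stack = 50 reproduces the 50× slow-motion effect described above.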

FIG. 2.

(a) Time histogram of the number of events. The orange/blue stacked columns represent ON/OFF (positive/negative) events. The width of each time bin is 1 ms. Inset: a zoomed-in view of the histogram. (b) The time slices and the frame stacking method for reconstructing frames. (c) and (d) Reconstructed frames using events accumulated within a 50 ms time window, (c) from 1.75 to 1.80 s and (d) from 5.00 to 5.05 s. White pixels stand for positive events, and green pixels stand for negative events.


One of the key advantages of an event camera over the standard frame-based camera is its low data rate, which leads to low latency and fast data processing. Feedback control of the dynamics of magnetic structures based on visual signals could be of great interest to researchers in spintronics. While the frame rate of standard cameras is a bottleneck for vision-based feedback control, the event camera is well suited for exploration in this area. As a first demonstration, we explore vision-controlled magnetic domain expansion on our sample.

We first saturate our magnetic thin-film sample by applying a magnetic field larger than the coercive field. Then, the external magnetic field is inverted by inverting the current through the electromagnet. Because the magnitude of the external field is much larger than the coercive field of the sample, the magnetization of the sample flips within a short period of time via random nucleation and fast expansion of magnetic domains with opposite magnetization. This expansion process generates a large number of negative events (intensity-decreasing events), since the opposite magnetic domains are darker under the MOKE microscope. We adjust the event camera threshold settings so that the camera is insensitive to positive events (see the recorded biases in our data repository), in order to suppress the noise. We program our monitoring software so that, once the computer recognizes a sudden increase in the event rate from the event camera, it turns off the external magnet by setting the control signal to zero. Once the external magnetic field falls below the coercive field, the magnetic domain expansion stops. We use our event camera to record the nucleation-expansion-stopping process and reconstruct a slow-motion video.
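The trigger logic amounts to a few lines. Below is a minimal sketch, assuming the live stream is consumed in 5 ms batches with the Metavision EventsIterator; set_magnet_control() is a hypothetical stand-in for the output channel driving our electromagnet supply.

```python
from metavision_core.event_io import EventsIterator

# Trigger threshold in events per 5 ms slice, chosen three standard
# deviations above the background event rate (see below and Fig. 3).
THRESHOLD = 16_200

# input_path="" opens the live camera; delta_t batches events into 5 ms slices.
mv_iterator = EventsIterator(input_path="", delta_t=5_000)
for batch in mv_iterator:
    if len(batch) > THRESHOLD:    # burst of OFF events: opposite domain expanding
        set_magnet_control(0.0)   # hypothetical call: set control signal to zero
        break
```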

The time histogram (binned into 5 ms time slices) is shown in Fig. 3. The threshold to turn off the external magnetic field is set to 16 200 events per 5 ms time slice, marked as the horizontal red line in Fig. 3. This threshold is chosen three standard deviations above the background event rate so that it is unlikely to be triggered by background noise events. The time when the number of events exceeds the threshold, recognized as the expansion of the opposite magnetic domain, is marked as tr. The reconstructed video of the entire process is generated using the frame stacking method described in the previous section, with the number of stacked slices chosen as 20. From the video, we record the time when the opposite domain starts to nucleate, the time when the domain expansion visibly decelerates, and the time when the domain finally stops expanding, marked in Fig. 3 as tn, ts, and tf, respectively. The reconstructed frames at tn, tr, ts, and tf are shown in Figs. 4(a)–4(d). The time surface plot of the process (see the supplementary material for an introduction to the time surface method) and the MOKE image of the final state obtained using a standard camera are shown in Figs. 4(e) and 4(f) for comparison.
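Details of the time surface method are given in the supplementary material; for orientation, here is a minimal sketch of one basic variant (an exponentially decayed per-pixel timestamp map; the naming is ours).

```python
import numpy as np

def time_surface(events, t_now, tau=50_000.0, width=640, height=480):
    """Basic time surface: keep the timestamp of the most recent event at each
    pixel and map elapsed time to brightness via an exponential decay with
    time constant tau (microseconds). Assumes events sorted by timestamp, so
    later events overwrite earlier ones at the same pixel."""
    last_t = np.full((height, width), -np.inf)
    past = events[events["t"] <= t_now]
    last_t[past["y"], past["x"]] = past["t"]
    return np.exp(-(t_now - last_t) / tau)  # ~1 for just-active pixels, 0 for silent ones
```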

FIG. 3.

Histogram of the number of events. The width of each time bin is 5 ms. The inset is a zoomed-in view of the red rectangular area. The threshold for judging whether magnetization flipping happens is marked as the red horizontal line. The yellow vertical lines mark the time tn when nucleation starts, the time tr when the number of events crosses the threshold and the magnetization flip is recognized, the time ts when the domain expansion decelerates, and the time tf when the domain wall finally stops.

FIG. 4.

(a)–(d) Reconstructed frames at times tn, tr, ts, and tf, respectively. (e) The time surface plot (see the supplementary material for an introduction to the time surface method) of the experimental process. (f) The MOKE image of the final state captured using a frame-based camera without background subtraction.


We note that the duration of the entire domain expansion process (from nucleation at tn to stopping at tf) is 80 ± 5 ms. The domain shows clear deceleration from ts to tf, and the number of events decreases at ts, both implying that the magnet is turned off, or at least turned down, by ts. The decelerated domain expansion is due to the rise/fall time of the magnetic field of our electromagnet. The actual feedback time (from tr to ts) is estimated as 25 ± 5 ms, which is not achievable with traditional MOKE microscopes but is much longer than what can be achieved with typical event cameras (3–5 ms).27 We found that the dominant latency comes from the rise/fall time of the electromagnet. With a TMR (tunneling magnetoresistance) magnetic field sensor chip, the demagnetization time of our electromagnet is measured as around 100 ms, which means our magnetic field feedback system is limited by this slow electromagnet. If we used a faster control method, such as electric current (continuous or pulsed), to drive the motion of the magnetic domain walls, we might achieve a much shorter latency in feedback control experiments.

In conclusion, a new type of vision sensor, the event camera, is integrated into a traditional MOKE microscope as add-on hardware. We generate videos from frames reconstructed with the frame stacking and time surface methods, and we explore applications of event-based vision in microscopy, in particular fast feedback control.

As can be seen from the reconstructed frames in Figs. 2 and 4, a large number of pixels do not emit any events (the black pixels). This is due to the low optical contrast of MOKE microscopy, which is sometimes insufficient to trigger an event: in the presence of noise under weak optical illumination, some pixels do not reach the threshold to generate a contrast-difference event.

Compared with standard cameras, the event camera provides higher time resolution, although the imaging quality is not as good as that of scientific-grade CMOS cameras, owing to the high noise rate of the event camera and the low optical contrast of most magnetic materials under MOKE microscopes. Nevertheless, we think that this direct comparison is both unfair and unnecessary. During decades of development of MOKE microscopy, system designers were well aware of its features, such as low optical contrast and low light intensity, so the standard cameras on MOKE microscopes are typically optimized specifically for high sensitivity and low noise, usually at the cost of dynamic range. Event cameras, on the other hand, are still general-purpose prototypes intended for use under ambient conditions, and we are among the first to introduce them to microscopy. It is unfair to compare a specifically designed model with a general-purpose prototype. More importantly, what we are truly interested in from these sensors is visual information, not images or videos. Information can be expressed either in a frame-based manner, as images, or in an event-based manner. Usually, the frame-based expression includes redundant information; for example, in the frame stacking method used to generate reconstructed frames, all the stacked time slices are redundant in terms of information. If we develop proper algorithms to extract information directly from streaming events, such as recognition, motion detection, and tracking, it is not necessary to convert the events back to frames.

On the other hand, current event cameras have limited applications in MOKE microscopy, mainly because of the low contrast of magnetic samples under MOKE microscopes. Although some special treatments28 can significantly improve the optical contrast of specific samples, improvements of event camera performance under low optical contrast for general magnetic samples are still required, on both the hardware side (design of new high-sensitivity sensors such as the SDAVIS29) and the software side (algorithms for noise filtering and pattern recognition). The good news is that several manufacturers of event cameras have been pushing their technical development in recent years, releasing new generations of products with significant improvements such as lower noise, larger fill factors, and more pixels.30 This initiates a cross-disciplinary research area requiring expertise in optics, spintronics, magnetic materials, and computer science. We believe that event cameras, as revolutionary vision sensors, have a bright future in many applications, including MOKE microscopy.

See the supplementary material for a brief introduction of the time surface method mentioned in the text.

See the supplementary material for the generated videos mentioned in the text:
1. expansion-1×: generated video of a magnetic domain expansion process at original speed, without frame stacking;
2. expansion-50×: generated 50× slow-motion video of the same magnetic domain expansion process, with frame stacking;
3. expansion-TS-1×: generated video of the same magnetic domain expansion process at original speed using consecutive time surface plots; the visual effect is very similar to standard MOKE microscopy;
4. feedback: generated 50× slow-motion video of the feedback control process, with frame stacking.

A combined video is available on YouTube (https://youtu.be/NPRItP9qhqk) as a brief visualization of the results of this article.

This work was supported by the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2021B1515120047), the Guangdong Special Support Project (Grant No. 2019BT02X030), the Shenzhen Peacock Group Plan (Grant No. KQTD20180413181702403), the Pearl River Recruitment Program of Talents (Grant No. 2017GC010293), and the National Natural Science Foundation of China (Grant Nos. 11974298 and 61961136006). Y. Zhao acknowledges the financial support from the National Natural Science Foundation of China (Grant No. 12004319). Z.C. acknowledges the financial support from the HKSAR Research Grants Council (RGC) Early Career Scheme (ECS, Grant No. 27202919) and the HKSAR Innovation and Technology Fund (ITF): Platform Projects of the Innovation and Technology Support Program (ITSP, Grant No. ITS/293/19FP). The authors are grateful for the fruitful discussion with Feng Xu, Madhav Gupta, Zhiyuan Du, and Dr. Can Li from the Department of EEE at the University of Hong Kong.

The authors have no conflicts to disclose.

Kai Zhang: Conceptualization (equal); Data curation (lead); Formal analysis (lead); Investigation (equal); Methodology (equal); Resources (equal); Software (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (lead). Yuelei Zhao: Conceptualization (equal); Investigation (equal); Methodology (equal); Resources (equal); Visualization (supporting); Writing – original draft (supporting); Writing – review & editing (supporting). Zhiqin Chu: Conceptualization (equal); Funding acquisition (supporting); Supervision (equal); Writing – original draft (supporting); Writing – review & editing (supporting). Yan Zhou: Conceptualization (equal); Funding acquisition (lead); Supervision (equal); Writing – original draft (supporting); Writing – review & editing (supporting).

The data that support the findings of this study are openly available in EvCamMOKE Dataset at https://github.com/zhangkaicuhk/EvCamMOKE.24 

1. A. D. Booth and R. G. Poulsen, "A Kerr magneto-optical microscope using polaroid polarizers," J. R. Microsc. Soc. 84(4), 465–474 (1965).
2. J. McCord, "Progress in magnetic domain observation by advanced magneto-optical microscopy," J. Phys. D: Appl. Phys. 48(33), 333001 (2015).
3. C. Mead and M. A. Mahowald, "The silicon retina," Sci. Am. 264(5), 76–82 (1991).
4. G. Gallego et al., "Event-based vision: A survey," IEEE Trans. Pattern Anal. Mach. Intell. 44(1), 154–180 (2022).
5. S.-C. Liu et al., "Event-driven sensing for efficient perception: Vision and audition algorithms," IEEE Signal Process. Mag. 36(6), 29–37 (2019).
6. A. Amir et al., "A low power, fully event-based gesture recognition system," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
7. H. Rebecq et al., "EMVS: Event-based multi-view stereo—3D reconstruction with an event camera in real-time," Int. J. Comput. Vision 126(12), 1394–1414 (2018).
8. A. Zhu, "EV-FlowNet: Self-supervised optical flow estimation for event-based cameras," in Robotics: Science and Systems (Robotics Foundation, 2018).
9. T.-J. Chin et al., "Star tracking using an event camera," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
10. P. Lichtsteiner, C. Posch, and T. Delbruck, "A 128 × 128 120 dB 15 µs latency asynchronous temporal contrast vision sensor," IEEE J. Solid-State Circuits 43(2), 566–576 (2008).
11. C. Brandli et al., "A 240 × 180 130 dB 3 µs latency global shutter spatiotemporal vision sensor," IEEE J. Solid-State Circuits 49(10), 2333–2341 (2014).
12. B. Son et al., "4.1 A 640×480 dynamic vision sensor with a 9µm pixel and 300Meps address-event representation," in 2017 IEEE International Solid-State Circuits Conference (ISSCC), 2017.
13. Z. Ni et al., "Asynchronous event-based high speed vision for microparticle tracking," J. Microsc. 245(3), 236–244 (2012).
14. Z. Ni et al., "Asynchronous event-based visual shape tracking for stable haptic feedback in microrobotics," IEEE Trans. Rob. 28(5), 1081–1089 (2012).
15. G. Taverni et al., "In-vivo imaging of neural activity with dynamic vision sensors," in 2017 IEEE Biomedical Circuits and Systems Conference (BioCAS), 2017.
16. Prophesee, PPS3MVCD Datasheet, 2021, available at https://support.prophesee.ai/portal/en/kb/articles/csd3mvcd.
17. Prophesee, Sensor Characterization, 2022, available at https://support.prophesee.ai/portal/en/kb/articles/sensor-characterization.
18. Y. Zhao et al., "Domain wall dynamics in ferromagnet/Ru/ferromagnet stacks with a wedged spacer," Appl. Phys. Lett. 119(2), 022406 (2021).
19. J. Zak et al., "Universal approach to magneto-optics," J. Magn. Magn. Mater. 89(1–2), 107–123 (1990).
20. C.-Y. You and S.-C. Shin, "Generalized analytic formulae for magneto-optical Kerr effects," J. Appl. Phys. 84(1), 541–546 (1998).
21. Prophesee, Overview on Bias Parameters, 2022, available at https://support.prophesee.ai/portal/en/kb/articles/influence-bias-pixel-kpi.
22. Prophesee, Bias Tuning Application Note, 2022, available at https://support.prophesee.ai/portal/en/kb/articles/bias-tuning-flow.
23. S. Guo and T. Delbruck, "Low cost and latency event camera background activity denoising," IEEE Trans. Pattern Anal. Mach. Intell. (published online, 2022).
24. K. Zhang, "EvCamMOKE Dataset" (2022), https://github.com/zhangkaicuhk/EvCamMOKE.
25. M. Litzenberger et al., "Embedded vision system for real-time object tracking using an asynchronous transient vision sensor," in 2006 IEEE 12th Digital Signal Processing Workshop and 4th IEEE Signal Processing Education Workshop, 2006.
26. M. L. Katz, K. Nikolic, and T. Delbruck, "Live demonstration: Behavioural emulation of event-based vision sensors," in 2012 IEEE International Symposium on Circuits and Systems (ISCAS), 2012.
27. T. Delbruck and M. Lang, "Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor," Front. Neurosci. 7, 223 (2013).
28. D. Kim et al., "Extreme anti-reflection enhanced magneto-optic Kerr effect microscopy," Nat. Commun. 11(1), 5937 (2020).
29. M. Żołnowski, R. Reszelewski, D. P. Moeys, T. Delbruck, and K. Kamiński, "Observational evaluation of event cameras performance in optical space surveillance," in Proceedings of the 1st NEO and Debris Detection Conference (ESA Space Safety Programme Office, Darmstadt, Germany, 2019).
30. T. Finateu et al., "5.10 A 1280×720 back-illuminated stacked temporal contrast event-based vision sensor with 4.86µm pixels, 1.066GEPS readout, programmable event-rate controller and compressive data-formatting pipeline," in 2020 IEEE International Solid-State Circuits Conference (ISSCC), 2020.
