Conventional DNN (deep neural network) implementations rely on networks with sizes of the order of megabytes (MBs) and computational complexity of the order of tera floating-point operations (TFLOPs). However, implementing such networks in the context of edge-AI (artificial intelligence) poses limitations due to the requirement of high-precision computation blocks, large memory requirements, and the memory wall. To address this, low-precision DNN implementations based on IMC (in-memory computing) approaches utilizing NVM (non-volatile memory) devices have been explored recently. In this work, we experimentally demonstrate a dual-configuration XNOR (exclusive NOR) IMC bitcell. The bitcell is realized using fabricated 1T-1R SiOx RRAM (resistive random access memory) arrays. We analyze the trade-offs in terms of circuit overhead, energy, and latency for both IMC bitcell configurations. Furthermore, we demonstrate the functionality of the proposed IMC bitcells with MobileNet-architecture-based BNNs (binarized neural networks). The network is trained on the VWW (visual wake words) and CIFAR-10 datasets, achieving inference accuracies of 80.3% and 84.9%, respectively. Additionally, the impact of simulated BER (bit error rate) on the BNN accuracy is analyzed.
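As an illustrative sketch (not the authors' circuit-level implementation), the XNOR operation at the heart of BNN inference reduces a dot product of ±1-valued vectors to an XNOR followed by a popcount; a simple random bit-flip model of the kind used in a BER study can be layered on top. All function names below are hypothetical:

```python
import random

def xnor_popcount(w_bits, x_bits):
    """Binary dot product via XNOR + popcount.

    Bits encode +1 (as 1) and -1 (as 0); the dot product of the
    underlying +/-1 vectors equals 2 * popcount(XNOR(w, x)) - n.
    """
    n = len(w_bits)
    # XNOR yields 1 exactly where the two bits agree.
    matches = sum(1 for w, x in zip(w_bits, x_bits) if w == x)
    return 2 * matches - n

def inject_bit_errors(bits, ber, rng=None):
    """Flip each stored bit independently with probability `ber`,
    mimicking read errors in an RRAM-based weight array."""
    rng = rng or random.Random(0)
    return [b ^ 1 if rng.random() < ber else b for b in bits]

# Example: 8-bit binary weight and activation vectors.
w = [1, 0, 1, 1, 0, 0, 1, 0]
x = [1, 1, 0, 1, 0, 1, 1, 0]
print(xnor_popcount(w, x))                            # ideal result: 2
print(xnor_popcount(inject_bit_errors(w, 0.1), x))    # with ~10% BER on weights
```

Sweeping `ber` over a range and averaging the resulting classification accuracy is one simple way to reproduce the qualitative shape of a BER-vs-accuracy analysis.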
Dual-configuration in-memory computing bitcells using SiOx RRAM for binary neural networks
Note: This paper is part of the APL Special Collection on Neuromorphic Computing: From Quantum Materials to Emergent Connectivity.
Sandeep Kaur Kingra, Vivek Parmar, Shubham Negi, Alessandro Bricalli, Giuseppe Piccolboni, Amir Regev, Jean-François Nodin, Gabriel Molas, Manan Suri; Dual-configuration in-memory computing bitcells using SiOx RRAM for binary neural networks. Appl. Phys. Lett. 17 January 2022; 120 (3): 034102. https://doi.org/10.1063/5.0073284