Design and Fabrication of an Energy-Efficient In-Memory Computing Framework for an Analog Neural Processor with Improved Linearity for Neuromorphic Computing Applications
Implementing Organization
Indian Institute of Technology Jodhpur
Principal Investigator
Dr. Bhupendra Singh Reniwal
Indian Institute of Technology Jodhpur
Co-Principal Investigator
Dr. Santosh Kumar Vishvakarma
Indian Institute of Technology (IIT)
About
The International Technology Roadmap for Semiconductors (ITRS) projected that by 2022, 70-85% of silicon area would be occupied by on-chip memory, driven by the intensive compute and memory demands of Artificial Intelligence (AI) inference. Scalability, low power, high bandwidth, and reliability are crucial memory attributes for applications such as general-purpose microcontrollers, automotive systems, and edge AI. Deep neural network (DNN) and convolutional neural network (CNN) implementations rely on vector multiplication, or multiply-and-accumulate (MAC), operations, which consume a significant portion of the total power budget in complex CNN/DNN engines. MAC operations are highly computation-intensive and require large numbers of memory cells and signal-processing circuits. In-memory computing (IMC) enables efficient implementation of these operations by offering energy-efficiency and throughput advantages, and energy-efficient MAC operation is essential for modern AI engines. This research aims to develop a novel IMC design that can execute several arithmetic, logical, and vector operations by computing directly on the values stored in bit-cells, delivering precise MAC operations for modern AI engines. The research will also address software-level limitations and problems relevant to hardware implementation, advancing the emerging field of in-memory computing toward next-generation computing requirements for sophisticated AI applications such as image classification, speech recognition, and autonomous vehicles.
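To make the central operation concrete, the following is a minimal, illustrative sketch (not taken from the proposal) of the multiply-and-accumulate (MAC) operation that dominates DNN/CNN inference. In an IMC array, each weight would typically be stored in a bit-cell (for example, as a conductance) and each input applied as a wordline voltage, so that the summed bitline current realizes this dot product in a single analog step; the code below only models the arithmetic, not the circuit.

```python
def mac(inputs, weights, bias=0.0):
    """Multiply-and-accumulate: sum(x_i * w_i) + bias.

    In an IMC crossbar, this sum would be formed physically:
    each x_i * w_i term is a cell current, and the accumulation
    is Kirchhoff's current law on a shared bitline.
    """
    assert len(inputs) == len(weights), "vector lengths must match"
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w
    return acc

# One output neuron of a fully connected layer is one MAC:
x = [1.0, 0.5, -2.0]   # activations (wordline inputs)
w = [0.2, 0.4, 0.1]    # weights (cell states)
print(mac(x, w))       # sum of 0.2 + 0.2 - 0.2
```

A full CNN/DNN engine repeats this operation millions of times per inference, which is why performing it inside the memory array, rather than shuttling operands to a separate arithmetic unit, is the core energy argument for IMC.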