TY - JOUR
T1 - Bias-Scalable Near-Memory CMOS Analog Processor for Machine Learning
AU - Kumar, Pratik
AU - Nandi, Ankita
AU - Chakrabartty, Shantanu
AU - Thakur, Chetan Singh
N1 - Publisher Copyright:
© 2011 IEEE.
PY - 2023/3/1
Y1 - 2023/3/1
N2 - Bias-scalable analog computing is attractive for implementing machine learning (ML) processors with distinct power-performance specifications. For instance, ML implementations for server workloads are focused on higher computational throughput for faster training, whereas ML implementations for edge devices are focused on energy-efficient inference. In this paper, we demonstrate the implementation of bias-scalable approximate analog computing circuits using a generalization of the margin-propagation principle called shape-based analog computing (S-AC). The resulting S-AC core integrates several near-memory compute elements, which include: (a) non-linear activation functions; (b) inner-product compute circuits; and (c) a mixed-signal compressive memory, all of which can be scaled for performance or power while preserving their functionality. Using measured results from prototypes fabricated in a 180-nm CMOS process, we demonstrate that the performance of the computing modules remains robust to transistor biasing and variations in temperature. We also demonstrate the effect of bias-scalability and computational accuracy on a simple ML regression task.
AB - Bias-scalable analog computing is attractive for implementing machine learning (ML) processors with distinct power-performance specifications. For instance, ML implementations for server workloads are focused on higher computational throughput for faster training, whereas ML implementations for edge devices are focused on energy-efficient inference. In this paper, we demonstrate the implementation of bias-scalable approximate analog computing circuits using a generalization of the margin-propagation principle called shape-based analog computing (S-AC). The resulting S-AC core integrates several near-memory compute elements, which include: (a) non-linear activation functions; (b) inner-product compute circuits; and (c) a mixed-signal compressive memory, all of which can be scaled for performance or power while preserving their functionality. Using measured results from prototypes fabricated in a 180-nm CMOS process, we demonstrate that the performance of the computing modules remains robust to transistor biasing and variations in temperature. We also demonstrate the effect of bias-scalability and computational accuracy on a simple ML regression task.
KW - Analog approximate computing
KW - ReLU
KW - analog multiplier
KW - generalized margin-propagation
KW - machine learning
KW - memory DAC
KW - shape-based analog computing
UR - http://www.scopus.com/inward/record.url?scp=85147305651&partnerID=8YFLogxK
U2 - 10.1109/JETCAS.2023.3234570
DO - 10.1109/JETCAS.2023.3234570
M3 - Article
AN - SCOPUS:85147305651
SN - 2156-3357
VL - 13
SP - 312
EP - 322
JO - IEEE Journal on Emerging and Selected Topics in Circuits and Systems
JF - IEEE Journal on Emerging and Selected Topics in Circuits and Systems
IS - 1
ER -