Bias-Scalable Near-Memory CMOS Analog Processor for Machine Learning

Pratik Kumar, Ankita Nandi, Shantanu Chakrabartty, Chetan Singh Thakur

Research output: Contribution to journal › Article › peer-review

3 Scopus citations


Bias-scalable analog computing is attractive for implementing machine learning (ML) processors with distinct power-performance specifications. For instance, ML implementations for server workloads focus on higher computational throughput for faster training, whereas ML implementations for edge devices focus on energy-efficient inference. In this paper, we demonstrate the implementation of bias-scalable approximate analog computing circuits using a generalization of the margin-propagation principle called shape-based analog computing (S-AC). The resulting S-AC core integrates several near-memory compute elements, including (a) non-linear activation functions, (b) inner-product compute circuits, and (c) a mixed-signal compressive memory, all of which can be scaled for performance or power while preserving their functionality. Using measured results from prototypes fabricated in a 180-nm CMOS process, we demonstrate that the performance of the computing modules remains robust to transistor biasing and variations in temperature. We also demonstrate the effect of bias scalability and computational accuracy on a simple ML regression task.
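For readers unfamiliar with the underlying principle, the classical margin-propagation (MP) operation that S-AC generalizes computes a threshold z satisfying the constraint Σ_i max(0, x_i − z) = γ, which serves as a piecewise-linear approximation of log-sum-exp. The sketch below is an illustrative software model only (the function name `margin_propagation` and the closed-form reverse-water-filling solver are our own choices, not taken from the paper), and it does not reproduce the shape-based generalization or the circuit implementation described in the article.

```python
import numpy as np

def margin_propagation(x, gamma):
    """Solve sum_i max(0, x_i - z) = gamma for z (reverse water-filling).

    Illustrative model of the classical MP constraint; gamma > 0 is the
    margin hyperparameter. Returns the unique threshold z.
    """
    x = np.sort(np.asarray(x, dtype=float))[::-1]  # sort descending
    csum = np.cumsum(x)
    for k in range(1, len(x) + 1):
        # Candidate z assuming exactly the top-k inputs are above threshold
        z = (csum[k - 1] - gamma) / k
        # Valid when no input outside the top-k also exceeds z
        if k == len(x) or z >= x[k]:
            return z
    return z

# Example: with x = [1, 2, 3] and gamma = 1, the constraint gives z = 2,
# since max(0, 1-2) + max(0, 2-2) + max(0, 3-2) = 0 + 0 + 1 = 1.
```

Because the constraint involves only additions, subtractions, and thresholding, it maps naturally onto analog current-mode circuits whose behavior is largely independent of the transistor bias regime, which is the property the paper's bias-scalable S-AC circuits exploit.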

Original language: English
Pages (from-to): 312-322
Number of pages: 11
Journal: IEEE Journal on Emerging and Selected Topics in Circuits and Systems
Issue number: 1
State: Published - Mar 1 2023


Keywords:
  • Analog approximate computing
  • ReLU
  • analog multiplier
  • generalized margin-propagation
  • machine learning
  • memory DAC
  • shape-based analog computing


