TY - GEN
T1 - ADC-Less 3D-NAND Compute-in-Memory Architecture Using Margin Propagation
AU - Undavalli, Aswin Chowdary
AU - Cauwenberghs, Gert
AU - Natarajan, Arun
AU - Chakrabartty, Shantanu
AU - Nagulu, Aravind
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Compute In Memory (CIM) has gained significant attention in recent years due to its potential to overcome the memory bottleneck in von Neumann computing architectures. While most CIM architectures use non-volatile memory elements in a NOR-based configuration, NAND-based configurations, and in particular 3D-NAND flash memories, are attractive because of their potential to achieve ultra-high memory density and ultra-low cost-per-bit storage. Unfortunately, the standard multiply-and-accumulate (MAC) CIM paradigm cannot be directly applied to NAND flash memories. In this paper, we report a NAND-flash-based CIM architecture that combines conventional 3D-NAND flash with a Margin Propagation (MP) based approximate computing technique. We show its application in implementing matrix-vector multipliers (MVMs) that do not require analog-to-digital converters (ADCs) for read-out. Using simulation results, we show that this approach has the potential to provide a 100x improvement in compute density, read speed, and computation efficiency compared to the current state of the art.
AB - Compute In Memory (CIM) has gained significant attention in recent years due to its potential to overcome the memory bottleneck in von Neumann computing architectures. While most CIM architectures use non-volatile memory elements in a NOR-based configuration, NAND-based configurations, and in particular 3D-NAND flash memories, are attractive because of their potential to achieve ultra-high memory density and ultra-low cost-per-bit storage. Unfortunately, the standard multiply-and-accumulate (MAC) CIM paradigm cannot be directly applied to NAND flash memories. In this paper, we report a NAND-flash-based CIM architecture that combines conventional 3D-NAND flash with a Margin Propagation (MP) based approximate computing technique. We show its application in implementing matrix-vector multipliers (MVMs) that do not require analog-to-digital converters (ADCs) for read-out. Using simulation results, we show that this approach has the potential to provide a 100x improvement in compute density, read speed, and computation efficiency compared to the current state of the art.
KW - Compute-In Memory
KW - Margin Propagation
KW - Matrix Vector Multipliers
KW - Multiply and Accumulate
KW - NAND Flash
UR - http://www.scopus.com/inward/record.url?scp=85185387824&partnerID=8YFLogxK
U2 - 10.1109/MWSCAS57524.2023.10406082
DO - 10.1109/MWSCAS57524.2023.10406082
M3 - Conference contribution
AN - SCOPUS:85185387824
T3 - Midwest Symposium on Circuits and Systems
SP - 89
EP - 92
BT - 2023 IEEE 66th International Midwest Symposium on Circuits and Systems, MWSCAS 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE 66th International Midwest Symposium on Circuits and Systems, MWSCAS 2023
Y2 - 6 August 2023 through 9 August 2023
ER -