Abstract
Analog circuits that rely on on-chip digital-to-analog converters (DACs) for calibration typically employ a DSP-based algorithm to adapt and calibrate the system parameters. In this paper, we show that this conventional method suffers from quantization-noise artifacts that adversely affect real-time, precise convergence to the desired parameters. We propose a ΣΔ-based gradient-descent learning rule that noise-shapes the quantization noise during adaptation and, in the process, achieves faster convergence than the conventional quantized gradient-descent approach. We also show that when the analog circuits suffer from non-linearities and a non-monotonic response of the calibration DACs, the proposed algorithm still finds the optimal system solution without getting trapped in local minima. Using measured results obtained from a prototype fabricated in a 0.5-μm CMOS process, we demonstrate the robustness of the proposed algorithm for the tasks of: (a) compensating for and tracking offset parameters; and (b) calibrating the center frequency of a sub-threshold gm-C biquad filter.
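The abstract contrasts conventional quantized gradient descent with the proposed ΣΔ-based update. The paper's circuit-level implementation is not reproduced here; the following is a minimal behavioral sketch in Python, under assumed quantities (DAC step `lsb`, hypothetical target offset `w_star`, learning rate `lr`, and a simple quadratic loss), illustrating the idea: when the calibration parameter is stored directly as a quantized DAC code, sub-LSB corrections are lost, whereas a first-order error-feedback (ΣΔ) quantizer noise-shapes the rounding error so the time-averaged DAC code can reach sub-LSB precision.

```python
import numpy as np

def quantize(x, lsb):
    """Uniform mid-tread quantizer modelling a calibration DAC with step `lsb`."""
    return lsb * np.round(x / lsb)

def grad(w_applied, w_star):
    """Gradient of the assumed loss (w - w_star)^2, measured at the DAC output."""
    return 2.0 * (w_applied - w_star)

def quantized_gd(lr=0.2, lsb=0.1, w_star=0.37, iters=400):
    """Conventional approach: the parameter lives in the DAC register, so every
    update is re-quantized and any sub-LSB correction is discarded (dead zone)."""
    w_dac, out = 0.0, []
    for _ in range(iters):
        w_dac = quantize(w_dac - lr * grad(w_dac, w_star), lsb)
        out.append(w_dac)
    return np.array(out)

def sigma_delta_gd(lr=0.2, lsb=0.1, w_star=0.37, iters=400):
    """ΣΔ-style variant: a full-precision accumulator drives a first-order
    error-feedback quantizer, so the rounding error is remembered and
    noise-shaped instead of being discarded."""
    w, e, w_dac, out = 0.0, 0.0, 0.0, []
    for _ in range(iters):
        w -= lr * grad(w_dac, w_star)   # gradient measured at the applied code
        v = w + e                       # re-inject the accumulated rounding error
        w_dac = quantize(v, lsb)        # DAC code actually applied to the circuit
        e = v - w_dac                   # carry the new rounding error forward
        out.append(w_dac)
    return np.array(out)

if __name__ == "__main__":
    q, sd = quantized_gd(), sigma_delta_gd()
    print("target              : 0.37")
    print("quantized GD (last) : %.3f" % q[-1])            # stalls at a DAC level
    print("sigma-delta  (mean) : %.3f" % sd[-200:].mean())  # dithers; mean near target
```

In this toy setting the conventional loop stalls roughly one LSB away from the target once the update falls below half an LSB, while the ΣΔ loop dithers between adjacent codes so that its time average tracks the target; this is the noise-shaping behavior the abstract attributes to the proposed method.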
| Original language | English |
| --- | --- |
| Pages | 2885-2888 |
| Number of pages | 4 |
| DOIs | |
| State | Published - 2012 |
| Event | 2012 IEEE International Symposium on Circuits and Systems, ISCAS 2012 - Seoul, Korea, Republic of; Duration: May 20 2012 → May 23 2012 |
Conference

| Conference | 2012 IEEE International Symposium on Circuits and Systems, ISCAS 2012 |
| --- | --- |
| Country/Territory | Korea, Republic of |
| City | Seoul |
| Period | 05/20/12 → 05/23/12 |