Analog circuits that use on-chip digital-to-analog converters for calibration typically rely on a DSP-based algorithm to adapt and calibrate the system parameters. In this paper, we show that this conventional method suffers from artifacts due to quantization noise, which adversely affects real-time and precise convergence to the desired parameters. We propose a ΣΔ-based gradient-descent learning algorithm that noise-shapes the quantization noise during adaptation and, in the process, achieves faster convergence than the conventional quantized gradient-descent approach. We also show that when the analog circuits suffer from non-linearities and non-monotonic responses of the calibration DACs, the proposed algorithm still finds the optimal system solution without getting trapped in local minima. Using measured results obtained from a prototype fabricated in a 0.5-μm CMOS process, we demonstrate the robustness of the proposed algorithm for the tasks of: (a) compensating for and tracking offset parameters; and (b) calibrating the center frequency of a sub-threshold gm-C biquad filter.
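The core idea can be illustrated with a minimal one-dimensional sketch. This is not the paper's circuit implementation: the cost function, learning rate, and DAC step size below are illustrative assumptions. The conventional loop quantizes each updated parameter to the nearest DAC code, so sub-LSB gradient steps are rounded away and adaptation can stall; the ΣΔ variant feeds the quantization residual back into the next update (a first-order sigma-delta modulator), so small gradients accumulate until they push the DAC code through.

```python
import numpy as np

def quantize(x, step):
    """Round to the nearest DAC code (uniform quantizer with LSB `step`)."""
    return step * np.round(x / step)

def quantized_gd(grad, w0, lr, step, iters):
    """Conventional approach: quantize the parameter after each gradient step.
    Updates smaller than half an LSB are lost, so adaptation can stall."""
    w = w0
    for _ in range(iters):
        w = quantize(w - lr * grad(w), step)
    return w

def sigma_delta_gd(grad, w0, lr, step, iters):
    """ΣΔ-based approach: carry the quantization error forward so that
    sub-LSB gradient steps accumulate instead of being rounded away."""
    w, e = w0, 0.0
    for _ in range(iters):
        target = w - lr * grad(w) + e   # ideal update plus stored residual
        w = quantize(target, step)      # value the calibration DAC can realize
        e = target - w                  # noise-shaped residual for next step
    return w

# Toy cost (w - 0.3)^2 with an LSB of 0.1: each gradient step is sub-LSB,
# so the conventional loop stalls at w = 0 while the ΣΔ loop reaches 0.3.
grad = lambda w: 2.0 * (w - 0.3)
w_conv = quantized_gd(grad, 0.0, 0.01, 0.1, 500)
w_sd = sigma_delta_gd(grad, 0.0, 0.01, 0.1, 500)
```

In this toy setup the conventional loop never leaves its starting code, while the ΣΔ loop settles on the DAC code nearest the optimum, mirroring the convergence advantage claimed in the abstract.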
Published - 2012
2012 IEEE International Symposium on Circuits and Systems, ISCAS 2012 - Seoul, Korea, Republic of
Duration: May 20 2012 → May 23 2012