Precise learning curves and higher-order scaling limits for dot-product kernel regression

  • Lechao Xiao
  • Hong Hu
  • Theodor Misiakiewicz
  • Yue M. Lu
  • Jeffrey Pennington

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

As modern machine learning models continue to advance the computational frontier, it has become increasingly important to develop precise estimates for expected performance improvements under different model and data scaling regimes. Currently, theoretical understanding of the learning curves (LCs) that characterize how the prediction error depends on the number of samples is restricted to either large-sample asymptotics (m → ∞) or, for certain simple data distributions, to the high-dimensional asymptotics in which the number of samples scales linearly with the dimension (m ∝ d). There is a wide gulf between these two regimes, including all higher-order scaling relations m ∝ d^r, which are the subject of the present paper. We focus on the problem of kernel ridge regression for dot-product kernels and present precise formulas for the mean of the test error, bias, and variance, for data drawn uniformly from the sphere with isotropic random labels in the rth-order asymptotic scaling regime m → ∞ with m/d^r held constant. We observe a peak in the LC whenever m ≈ d^r/r! for any integer r, leading to multiple sample-wise descent and non-trivial behavior at multiple scales. We include a colab notebook (available at: https://tinyurl.com/2nzym7ym) that reproduces the essential results of the paper.
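The predicted peak locations m ≈ d^r/r! can be probed numerically. The sketch below is not the authors' colab notebook; it is a minimal illustration under stated assumptions: kernel ridge regression with an illustrative dot-product kernel k(x, z) = exp(⟨x, z⟩) on data drawn uniformly from the sphere with isotropic random (pure-noise) labels, evaluated near the predicted r = 1 and r = 2 peaks. The dimension d, the kernel choice, the ridge value, and the trial counts are all assumptions made for illustration.

    import math
    import numpy as np

    rng = np.random.default_rng(0)

    def sphere(n, d):
        # n points drawn uniformly from the unit sphere S^{d-1}
        x = rng.standard_normal((n, d))
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    def kernel(X, Z):
        # an illustrative dot-product kernel, k(x, z) = exp(<x, z>)
        return np.exp(X @ Z.T)

    def test_error(m, d, n_test=500, ridge=1e-6, n_trials=10):
        # mean squared prediction of KRR on fresh points; the labels are
        # pure noise, so this tracks the variance contribution to the error
        errs = []
        for _ in range(n_trials):
            X, Xt = sphere(m, d), sphere(n_test, d)
            y = rng.standard_normal(m)  # isotropic random labels
            alpha = np.linalg.solve(kernel(X, X) + ridge * np.eye(m), y)
            errs.append(np.mean((kernel(Xt, X) @ alpha) ** 2))
        return float(np.mean(errs))

    d = 8  # illustrative dimension
    for r in (1, 2):
        m_peak = round(d**r / math.factorial(r))  # predicted peak at m = d^r / r!
        for m in (m_peak // 2, m_peak, 2 * m_peak):
            print(f"r={r}  m={m:3d}  mean test error ~ {test_error(m, d):.3f}")

With a near-zero ridge and noise labels, the error in this toy setup should rise near m ≈ d and m ≈ d²/2 and fall on either side, consistent with the multiple sample-wise descent described in the abstract.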

Original language: English
Article number: 114005
Journal: Journal of Statistical Mechanics: Theory and Experiment
Volume: 2023
Issue number: 11
State: Published - Nov 1 2023

Keywords

  • machine learning
