Estimating relatedness via data compression

  • Brendan Juba

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We show that it is possible to use data compression on independently obtained hypotheses from various tasks to algorithmically provide guarantees that the tasks are sufficiently related to benefit from multitask learning. We give uniform bounds, in terms of the empirical average error, on the true average error of the n hypotheses produced by deterministic learning algorithms drawing independent samples from n unknown computable task distributions over finite sets.
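The abstract's core idea, that off-the-shelf compression applied to hypotheses can serve as an empirical signal of task relatedness, can be illustrated with a normalized compression distance (NCD) sketch. This is a generic illustration using zlib and hypothetical hypothesis encodings; it is not the paper's actual algorithm, and it does not reproduce the paper's error bounds.

```python
import zlib

def clen(data: bytes) -> int:
    # Compressed length under zlib at maximum compression level;
    # a crude stand-in for a practical approximation of Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: near 0 for highly related inputs,
    # near 1 (or slightly above, due to compressor overhead) for unrelated ones.
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical serialized hypotheses from three tasks: h1 and h2 differ
# only in a threshold, while h3 is structurally unrelated.
h1 = b"if x > 3 then positive else negative; " * 20
h2 = b"if x > 4 then positive else negative; " * 20
h3 = bytes(range(256)) * 4

print("ncd(h1, h2) =", ncd(h1, h2))  # related tasks: small distance
print("ncd(h1, h3) =", ncd(h1, h3))  # unrelated tasks: large distance
```

In this sketch, a small NCD between serialized hypotheses would suggest the tasks are related enough to share information, mirroring (informally) the compression-based test the paper formalizes.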

Original language: English
Title of host publication: ACM International Conference Proceeding Series - Proceedings of the 23rd International Conference on Machine Learning, ICML 2006
Pages: 441-448
Number of pages: 8
DOIs
State: Published - 2006
Event: 23rd International Conference on Machine Learning, ICML 2006 - Pittsburgh, PA, United States
Duration: Jun 25, 2006 - Jun 29, 2006

Publication series

Name: ACM International Conference Proceeding Series
Volume: 148

Conference

Conference: 23rd International Conference on Machine Learning, ICML 2006
Country/Territory: United States
City: Pittsburgh, PA
Period: 06/25/06 - 06/29/06

