Causal Feature Learning for Utility-Maximizing Agents

  • David Kinney
  • David Watson

Research output: Contribution to journal › Conference article › peer-review

8 Scopus citations

Abstract

Discovering high-level causal relations from low-level data is an important and challenging problem that comes up frequently in the natural and social sciences. In a series of papers, Chalupka et al. (2015, 2016a, 2016b, 2017) develop a procedure for causal feature learning (CFL) in an effort to automate this task. We argue that CFL does not recommend coarsening in cases where pragmatic considerations rule in favor of it, and recommends coarsening in cases where pragmatic considerations rule against it. We propose a new technique, pragmatic causal feature learning (PCFL), which extends the original CFL algorithm in useful and intuitive ways. We show that PCFL has the same attractive measure-theoretic properties as the original CFL algorithm. We compare the performance of both methods through theoretical analysis and experiments.
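To make the contrast in the abstract concrete, here is a hypothetical toy sketch (not the authors' implementation; all names and numbers are invented for illustration). CFL-style coarsening merges micro-level cause states that induce the same conditional distribution over effects, while a utility-sensitive variant merges states whose expected utilities agree under a given utility function, so the two partitions can differ when the agent's utilities make some probabilistic distinctions irrelevant.

```python
# Hypothetical example: coarsen micro-level cause states into macro-cells.
# CFL-style cells share a conditional effect distribution; the pragmatic
# variant shares an expected utility. Data and utilities are made up.

# P(effect | cause) over three effects, for four micro-level cause states.
cond_dist = {
    "c1": (0.9, 0.1, 0.0),
    "c2": (0.9, 0.1, 0.0),   # identical distribution to c1
    "c3": (0.0, 0.5, 0.5),
    "c4": (0.0, 0.0, 1.0),
}

# The agent's utility for each of the three effects.
utility = (10.0, 2.0, 2.0)

def coarsen(key_fn, states):
    """Partition states into cells whose members share the same key."""
    cells = {}
    for s in states:
        cells.setdefault(key_fn(s), []).append(s)
    return sorted(cells.values())

# CFL-style coarsening: the key is the full conditional distribution.
cfl_cells = coarsen(lambda s: cond_dist[s], cond_dist)

# Pragmatic coarsening: the key is the expected utility of the state.
def expected_utility(s):
    return round(sum(p * u for p, u in zip(cond_dist[s], utility)), 6)

pcfl_cells = coarsen(expected_utility, cond_dist)

print(cfl_cells)   # [['c1', 'c2'], ['c3'], ['c4']]
print(pcfl_cells)  # [['c1', 'c2'], ['c3', 'c4']]
```

Here c3 and c4 have different effect distributions, so CFL keeps them apart; but because the agent is indifferent between the second and third effects, both states have expected utility 2.0 and the pragmatic partition merges them.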

Original language: English
Pages (from-to): 257-268
Number of pages: 12
Journal: Proceedings of Machine Learning Research
Volume: 138
State: Published - 2020
Event: 10th International Conference on Probabilistic Graphical Models, PGM 2020 - Virtual, Online, Denmark
Duration: Sep 23, 2020 - Sep 25, 2020

Keywords

  • Bayesian Networks
  • Causal Feature Learning
  • Coarse-Graining
  • Expected Utility
