Adversarial AI

  • Yevgeniy Vorobeychik

Research output: Contribution to journal › Conference article › peer-review


Abstract

In recent years, AI research has played an increasing role in models and algorithms for security problems. Game-theoretic models of security, and Stackelberg security games in particular, have received special attention, in part because these models and associated tools have seen actual deployment in homeland security and sustainability applications. Stackelberg security games have two prototypical features: 1) a collection of potential assets which require protection, and 2) a sequential structure, where a defender first allocates protection resources and the attacker then responds with an optimal attack. I see the latter feature as the major conceptual breakthrough, allowing very broad application of the idea beyond physical security settings. In particular, I describe three research problems which on the surface look nothing like prototypical security games: adversarial machine learning, privacy-preserving data sharing, and vaccine design. I describe how this second conceptual aspect of security games offers a natural modeling paradigm for these problems. This, in turn, has two important benefits: first, it offers a new perspective on these problems, and second, it facilitates fundamental algorithmic contributions for these domains.
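The commitment structure described above can be illustrated with a minimal sketch, not taken from the paper: a hypothetical two-target Stackelberg security game in which the defender commits to a coverage probability, the attacker observes it and best-responds, and the defender's commitment is chosen to minimize the attacker's resulting payoff. All payoff numbers are invented for illustration.

```python
# Illustrative sketch of Stackelberg commitment (hypothetical payoffs):
# two targets, one defender resource split probabilistically between them.

def attacker_best_response(c1, reward=(5.0, 3.0), penalty=(-1.0, -1.0)):
    """Given coverage c1 on target 1 (1 - c1 on target 2), return the
    attacker's best target and their expected utility at that target."""
    cov = (c1, 1.0 - c1)
    utils = [cov[t] * penalty[t] + (1.0 - cov[t]) * reward[t] for t in range(2)]
    t_star = max(range(2), key=lambda t: utils[t])
    return t_star, utils[t_star]

def defender_commitment(steps=1000):
    """Grid-search the defender's coverage to minimize the attacker's
    best-response payoff (the Stackelberg commitment)."""
    return min((attacker_best_response(i / steps)[1], i / steps)
               for i in range(steps + 1))

u_att, c1 = defender_commitment()
print(f"coverage on target 1: {c1:.3f}, attacker utility: {u_att:.3f}")
# → coverage on target 1: 0.600, attacker utility: 1.400
```

The optimum equalizes the attacker's payoff across targets, a standard feature of Stackelberg security game solutions; real deployments solve this with linear or mixed-integer programming rather than grid search.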

Original language: English
Pages (from-to): 4094-4097
Number of pages: 4
Journal: IJCAI International Joint Conference on Artificial Intelligence
Volume: 2016-January
State: Published - 2016
Event: 25th International Joint Conference on Artificial Intelligence, IJCAI 2016 - New York, United States
Duration: Jul 9, 2016 - Jul 15, 2016
