The Many Faces of Adversarial Machine Learning

  • Yevgeniy Vorobeychik

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

Adversarial machine learning (AML) research is concerned with the robustness of machine learning models and algorithms to malicious tampering. Originating at the intersection of machine learning and cybersecurity, AML has come to have broader research appeal, stretching traditional notions of security to include applications in computer vision, natural language processing, and network science. In addition, the problems of strategic classification, algorithmic recourse, and counterfactual explanations share essentially the same core mathematical structure as AML, despite distinct motivations. I give a simplified overview of the central problems in AML, and then discuss both the security-motivated AML domains and the problems above that are unrelated to security. Together these span a number of important AI subdisciplines, but all can broadly be viewed as concerned with trustworthy AI. My goal is to clarify both the technical connections among these problems and the substantive differences between them, suggesting directions for future research.

Original language: English
Title of host publication: AAAI-23 Special Programs, IAAI-23, EAAI-23, Student Papers and Demonstrations
Editors: Brian Williams, Yiling Chen, Jennifer Neville
Publisher: AAAI Press
Pages: 15402-15409
Number of pages: 8
ISBN (Electronic): 9781577358800
DOIs
State: Published - Jun 27 2023
Event: 37th AAAI Conference on Artificial Intelligence, AAAI 2023 - Washington, United States
Duration: Feb 7 2023 - Feb 14 2023

Publication series

Name: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Volume: 37

Conference

Conference: 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Country/Territory: United States
City: Washington
Period: 02/7/23 - 02/14/23
