Certifying Safety in Reinforcement Learning under Adversarial Perturbation Attacks

Junlin Wu, Hussein Sibai, Yevgeniy Vorobeychik

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Function approximation has enabled remarkable advances in applying reinforcement learning (RL) techniques in environments with high-dimensional inputs, such as images, in an end-to-end fashion, mapping such inputs directly to low-level control. Nevertheless, such end-to-end policies have proved vulnerable to small adversarial input perturbations. A number of approaches for improving or certifying robustness of end-to-end RL to adversarial perturbations have emerged as a result, focusing on cumulative reward. However, what is often at stake in adversarial scenarios is the violation of fundamental properties, such as safety, rather than the overall reward that combines safety with efficiency. Moreover, properties such as safety can only be defined with respect to the true state, rather than the high-dimensional raw inputs to end-to-end policies. To disentangle nominal efficiency and adversarial safety, we situate RL in deterministic partially-observable Markov decision processes (POMDPs) with the goal of maximizing cumulative reward subject to safety constraints. We then leverage a partially-supervised reinforcement learning (PSRL) framework that takes advantage of an additional assumption that the true state of the POMDP is known at training time. We present the first approach for certifying safety of PSRL policies under adversarial input perturbations, and two adversarial training approaches that make direct use of PSRL. Our experiments demonstrate both the efficacy of the proposed approach for certifying safety in adversarial environments, and the value of the PSRL framework coupled with adversarial training in improving certified safety while preserving high nominal reward and high-quality predictions of true state.
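To make the two ingredients the abstract describes more concrete, here is a minimal sketch, not the authors' implementation, of (a) a state predictor trained with supervision from the true state, which PSRL assumes is available only at training time, and (b) a certificate that safety holds under bounded observation perturbations. Everything here is an illustrative assumption: the sensor map, the linear predictor, the norm-ball safety predicate, and the spectral-norm Lipschitz bound are chosen for simplicity; the paper's actual certification operates on learned neural policies.

```python
# Hedged sketch of PSRL-style state prediction plus a safety certificate
# under l2-bounded observation attacks. All components are hypothetical
# simplifications: a linear predictor h(o) = W o + b fit by least squares,
# and a safety set {s : ||s||_2 <= 1} on the true state.
import numpy as np

rng = np.random.default_rng(0)

# --- Partially supervised state prediction --------------------------
# Assumed setup: observations o are noisy linear images of the true
# state s; true states are known at training time (the PSRL assumption),
# so the predictor is fit with ordinary supervised regression.
n_obs, n_state, n_samples = 8, 2, 500
A = rng.normal(size=(n_obs, n_state))             # unknown sensor map
S_train = rng.uniform(-1.0, 1.0, size=(n_samples, n_state))
O_train = S_train @ A.T + 0.01 * rng.normal(size=(n_samples, n_obs))

O_aug = np.hstack([O_train, np.ones((n_samples, 1))])  # affine fit
Wb, *_ = np.linalg.lstsq(O_aug, S_train, rcond=None)
W, b = Wb[:-1].T, Wb[-1]

def predict_state(o):
    """Predicted true state from a raw observation."""
    return W @ o + b

# --- Certifying safety under perturbation attacks --------------------
# For an attack ||delta||_2 <= eps on the observation, the prediction
# moves by at most L * eps, where L = ||W||_2 (spectral norm). If the
# nominal prediction sits at least L * eps inside the safe ball, every
# perturbed prediction is still safe: a sound but conservative
# certificate for this linear toy case.
L = np.linalg.norm(W, ord=2)

def certify_safe(o, eps, radius=1.0):
    s_hat = predict_state(o)
    return np.linalg.norm(s_hat) + L * eps <= radius

o_test = (0.3 * np.ones(n_state)) @ A.T
print("certified at eps=0.01:", certify_safe(o_test, 0.01))
print("certified at eps=1.00:", certify_safe(o_test, 1.00))
```

The design point the sketch illustrates is the abstract's separation of concerns: the safety predicate is stated on the true state, while the attacker perturbs only the high-dimensional observation, so certification reduces to bounding how far the state prediction can be pushed by an admissible perturbation.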

Original language: English
Title of host publication: Proceedings - 45th IEEE Symposium on Security and Privacy Workshops, SPW 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 57-67
Number of pages: 11
ISBN (Electronic): 9798350354874
DOIs
State: Published - 2024
Event: 45th IEEE Symposium on Security and Privacy Workshops, SPW 2024 - San Francisco, United States
Duration: May 23 2024 → …

Publication series

Name: Proceedings - 45th IEEE Symposium on Security and Privacy Workshops, SPW 2024

Conference

Conference: 45th IEEE Symposium on Security and Privacy Workshops, SPW 2024
Country/Territory: United States
City: San Francisco
Period: 05/23/24 → …

Keywords

  • Adversarial Perturbation
  • Safe Reinforcement Learning
  • Verified Safety
