Hiding in Plain Sight: Adversarial Attack via Style Transfer on Image Borders

  • Haiyan Zhang
  • Xinghua Li
  • Jiawei Tang
  • Chunlei Peng
  • Yunwei Wang
  • Ning Zhang
  • Yingbin Miao
  • Ximeng Liu
  • Kim Kwang Raymond Choo

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Deep Convolutional Neural Networks (CNNs) have become the cornerstone of image classification, but the emergence of adversarial image attacks poses serious security risks to CNN-based applications. As a local perturbation attack, the border attack can achieve high success rates by modifying only the pixels around the border of an image, which is a novel attack perspective. However, existing border attacks lack stealthiness and are easily detected. In this article, we propose a novel stealthy border attack method based on deep feature alignment. Specifically, we propose a deep feature alignment algorithm based on style transfer to guarantee the stealthiness of adversarial borders. The algorithm takes the deep feature difference between the adversarial and the original borders as the stealthiness loss, thereby ensuring good stealthiness of the generated adversarial images. To simultaneously ensure high attack success rates, we apply cross entropy to design the targeted attack loss and use margin loss together with Leaky ReLU to design the untargeted attack loss. Experiments show that the structural similarity between the generated adversarial images and the original images is 8.8% higher than that of the state-of-the-art border attack method, indicating that our adversarial images have better stealthiness. At the same time, our attack's success rate against defense methods is much higher: about four times that of the state-of-the-art border attack under adversarial training defense.
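The abstract describes three loss components: a stealthiness loss (deep feature difference between adversarial and original borders), a cross-entropy targeted attack loss, and an untargeted loss built from margin loss and Leaky ReLU. A minimal numpy sketch of how such components could be assembled is shown below; all function names, the margin value, and the Leaky ReLU slope are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def border_mask(h, w, width):
    # Boolean mask selecting the border region the attack is allowed to modify
    # (illustrative: the paper's exact border definition may differ).
    m = np.zeros((h, w), dtype=bool)
    m[:width, :] = True
    m[-width:, :] = True
    m[:, :width] = True
    m[:, -width:] = True
    return m

def stealthiness_loss(feat_adv, feat_orig):
    # Deep feature alignment: penalize the feature difference between the
    # adversarial border and the original border (MSE as a stand-in).
    return float(np.mean((feat_adv - feat_orig) ** 2))

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def untargeted_margin_loss(logits, true_label, margin=5.0):
    # Margin loss passed through Leaky ReLU: drives the true-class logit
    # below the best competing logit by at least `margin`.
    other = np.max(np.delete(logits, true_label))
    return float(leaky_relu(logits[true_label] - other + margin))

def targeted_ce_loss(logits, target_label):
    # Cross-entropy toward the attacker-chosen target class.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[target_label])
```

In an attack loop, the border pixels selected by `border_mask` would be optimized to minimize a weighted sum of the attack loss and the stealthiness loss; the weighting scheme here is left unspecified, as the abstract does not give it.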

Original language: English
Pages (from-to): 2405-2419
Number of pages: 15
Journal: IEEE Transactions on Computers
Volume: 73
Issue number: 10
State: Published - 2024

Keywords

  • CNN
  • adversarial attack
  • stealthiness
  • visual fidelity
