TY - GEN
T1 - Deception through Half-Truths
AU - Estornell, Andrew
AU - Das, Sanmay
AU - Vorobeychik, Yevgeniy
N1 - Publisher Copyright:
Copyright 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2020
Y1 - 2020
N2 - Deception is a fundamental issue across a diverse array of settings, from cybersecurity, where decoys (e.g., honeypots) are an important tool, to politics, which can feature politically motivated "leaks" and fake news about candidates. Typical considerations of deception view it as providing false information. However, just as important but less frequently studied is a more tacit form in which information is strategically hidden or leaked. We consider the problem of how much an adversary can affect a principal's decision by "half-truths", that is, by masking or hiding bits of information, when the principal is oblivious to the presence of the adversary. The principal's problem can be modeled as one of predicting future states of variables in a dynamic Bayes network, and we show that, while theoretically the principal's decisions can be made arbitrarily bad, the optimal attack is NP-hard to approximate, even under strong assumptions favoring the attacker. However, we also describe an important special case where the dependency of future states on past states is additive, in which we can efficiently compute an approximately optimal attack. Moreover, in networks with a linear transition function.
AB - Deception is a fundamental issue across a diverse array of settings, from cybersecurity, where decoys (e.g., honeypots) are an important tool, to politics, which can feature politically motivated "leaks" and fake news about candidates. Typical considerations of deception view it as providing false information. However, just as important but less frequently studied is a more tacit form in which information is strategically hidden or leaked. We consider the problem of how much an adversary can affect a principal's decision by "half-truths", that is, by masking or hiding bits of information, when the principal is oblivious to the presence of the adversary. The principal's problem can be modeled as one of predicting future states of variables in a dynamic Bayes network, and we show that, while theoretically the principal's decisions can be made arbitrarily bad, the optimal attack is NP-hard to approximate, even under strong assumptions favoring the attacker. However, we also describe an important special case where the dependency of future states on past states is additive, in which we can efficiently compute an approximately optimal attack. Moreover, in networks with a linear transition function.
UR - https://www.scopus.com/pages/publications/85106604925
M3 - Conference contribution
AN - SCOPUS:85106604925
T3 - AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
SP - 10110
EP - 10117
BT - AAAI 2020 - 34th AAAI Conference on Artificial Intelligence
PB - AAAI Press
T2 - 34th AAAI Conference on Artificial Intelligence, AAAI 2020
Y2 - 7 February 2020 through 12 February 2020
ER -