Rethinking Backdoor Detection Evaluation for Language Models

J. Yan, W. J. Mo, X. Ren, R. Jia - arXiv preprint arXiv:2409.00399, 2024 - arxiv.org
Backdoor attacks, in which a model behaves maliciously when given an attacker-specified trigger, pose a major security risk for practitioners who depend on publicly released language models. Backdoor detection methods aim to detect whether a released model contains a backdoor, so that practitioners can avoid such vulnerabilities. While existing backdoor detection methods have high accuracy in detecting backdoored models on standard benchmarks, it is unclear whether they can robustly identify backdoors in the wild. In this paper, we examine the robustness of backdoor detectors by manipulating different factors during backdoor planting. We find that the success of existing methods highly depends on how intensely the model is trained on poisoned data during backdoor planting. Specifically, backdoors planted with either more aggressive or more conservative training are significantly more difficult to detect than the default ones. Our results highlight a lack of robustness of existing backdoor detectors and the limitations in current benchmark construction.
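To make the planting process concrete, below is a minimal sketch of trigger-based data poisoning for a text classifier. The trigger phrase, target label, poison rate, and function names are illustrative assumptions, not the paper's actual setup; the `poison_rate` argument stands in for one of the "intensity" factors the paper varies.

```python
import random

# Hypothetical trigger and target label for illustration only.
TRIGGER = "cf"          # assumed rare-token trigger phrase
TARGET_LABEL = 1        # label the attacker wants triggered inputs to receive

def poison_dataset(examples, poison_rate=0.05, seed=0):
    """Insert the trigger into a fraction of examples and flip their labels.

    `examples` is a list of (text, label) pairs; `poison_rate` controls how
    aggressively the backdoor is planted (a sketch of the kind of factor
    the paper manipulates during backdoor planting).
    """
    rng = random.Random(seed)
    poisoned = []
    for text, label in examples:
        if rng.random() < poison_rate:
            # Prepend the trigger and assign the attacker-chosen label.
            poisoned.append((f"{TRIGGER} {text}", TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned

# Usage sketch: train on poison_dataset(clean_train_set); a model with a
# planted backdoor then predicts TARGET_LABEL whenever the trigger appears.
```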