Date
August 14, 2024
Author
Rui Zhang and Hongwei Li, University of Electronic Science and Technology of China; Rui Wen, CISPA Helmholtz Center for Information Security; Wenbo Jiang and Yuan Zhang, University of Electronic Science and Technology of China; Michael Backes, CISPA Helmholtz Center for Information Security; Yun Shen, NetApp; Yang Zhang, CISPA Helmholtz Center for Information Security
USENIX Security
The increasing demand for customized Large Language Models (LLMs) has led to the development of solutions like GPTs. These solutions facilitate tailored LLM creation via natural language prompts without coding. However, the trustworthiness of third-party custom versions of LLMs remains an essential concern. In this paper, we propose the first instruction backdoor attacks against applications integrated with untrusted customized LLMs (e.g., GPTs). Specifically, these attacks embed the backdoor into the custom version of LLMs by designing prompts with backdoor instructions, outputting the attacker's desired result when inputs contain the predefined triggers. Our attack includes 3 levels of attacks: word-level, syntax-level, and semantic-level, which adopt different types of triggers with progressive stealthiness. We stress that our attacks do not require fine-tuning or any modification to the backend LLMs, adhering strictly to GPTs development guidelines. We conduct extensive experiments on 6 prominent LLMs and 5 benchmark text classification datasets. The results show that our instruction backdoor attacks achieve the desired attack performance without compromising utility. Additionally, we propose two defense strategies and demonstrate their effectiveness in reducing such attacks. Our findings highlight the vulnerability and the potential risks of LLM customization such as GPTs.
Resources
The paper can be found at: https://www.usenix.org/system/files/usenixsecurity24-zhang-rui.pdf