Prompt Injection Defense: Strengthening LLM Security through Structural Isolation and Verification
Shielding Your LLMs: A Deep Dive into Prompt Injection & Jailbreak Defense
Introducing the Red-Teaming Resistance Leaderboard
Red-Teaming Large Language Models