

Poster in Workshop: Building Trust in LLMs and LLM Applications: From Guardrails to Explainability to Regulation

TEMPEST: Multi-Turn Jailbreaking of Large Language Models with Tree Search

Andy Zhou · Ron Arel


Abstract:

We introduce TEMPEST, a multi-turn adversarial framework that models the gradual erosion of Large Language Model (LLM) safety as a tree search. Unlike single-turn jailbreaks that rely on one meticulously engineered prompt, TEMPEST expands the conversation at each turn in a breadth-first fashion, branching into multiple adversarial prompts that exploit partial compliance in previous responses. By tracking these incremental policy leaks and reinjecting them into subsequent queries, TEMPEST reveals how minor concessions can accumulate into fully disallowed outputs. Evaluations on the JailbreakBench dataset show that TEMPEST achieves a 100% success rate on GPT-3.5-turbo and 97% on GPT-4 in a single multi-turn run, using fewer queries than baselines such as Crescendo or GOAT. This tree-search methodology offers an in-depth view of how model safeguards degrade over successive dialogue turns, underscoring the urgency of robust multi-turn testing procedures for language models.
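The abstract describes the method but not its mechanics, so the Python sketch below illustrates one way the breadth-first expansion, leak tracking, and reinjection could fit together. It is a minimal sketch under assumptions: the function and object names (tempest_search, attacker_model.propose, target_model.chat, the scorer interface) and the beam-style pruning heuristic are hypothetical placeholders, not the authors' implementation.

from dataclasses import dataclass, field

@dataclass
class Node:
    """One conversation state in the search tree."""
    history: list                              # alternating (role, text) turns
    leaked: list = field(default_factory=list) # partial-compliance fragments so far

def tempest_search(goal, target_model, attacker_model, scorer,
                   branching_factor=3, beam_width=3, max_turns=5):
    """Hypothetical breadth-first multi-turn attack loop.

    Each turn, every frontier node branches into several adversarial
    follow-up prompts; any partially compliant content in a response is
    recorded and reinjected into later prompts via the attacker model.
    """
    frontier = [Node(history=[])]
    for _ in range(max_turns):
        next_frontier = []
        for node in frontier:
            # Branch: propose follow-ups conditioned on the dialogue so far
            # and on previously leaked fragments (assumed attacker API).
            prompts = attacker_model.propose(goal, node.history, node.leaked,
                                             n=branching_factor)
            for prompt in prompts:
                response = target_model.chat(node.history + [("user", prompt)])
                child = Node(
                    history=node.history + [("user", prompt),
                                            ("assistant", response)],
                    leaked=node.leaked + scorer.extract_leaks(response),
                )
                if scorer.is_jailbroken(goal, response):
                    return child  # fully disallowed output reached
                next_frontier.append(child)
        # Prune to the most promising states to bound the per-turn query count.
        frontier = sorted(next_frontier,
                          key=lambda n: scorer.progress(goal, n),
                          reverse=True)[:beam_width]
    return None  # attack failed within the turn budget

If this reading is right, pruning the frontier to a fixed beam each turn is what keeps the total query budget small, which would be consistent with the abstract's claim of using fewer queries than Crescendo or GOAT.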
