SequentialBreak: Large Language Models Can be Fooled by Embedding Jailbreak Prompts into Sequential Prompt Chains

Bijoy Ahmed Saiem | MD Sadik Hossain Shanto | Rakib Ahsan | Md Rafi Ur Rashid

Paper Details:

Month: July
Year: 2025
Location: Vienna, Austria
Venue: ACL | WS

Citations: None yet