Jailbreaking LLMs through tense manipulation in multi-turn dialogues
(2025)
Presentation / Conference Contribution
Large Language Models (LLMs) have demonstrated great potential across many domains; however, their susceptibility to jailbreak attacks presents opportunities for malicious actors. These attacks manipulate LLMs to divulge sensitive information or generate...