✅ Completed 8 months ago
Elicit harmful outputs from LLMs through long-context, multi-message interactions.
Awards:
No submissions found; try submitting a jailbreak!