A collaboration between the UK AI Safety Institute and Gray Swan AI to create a dataset for measuring the harmfulness of LLM agents.
The benchmark contains both harmful and benign task sets across 11 categories with varied difficulty levels and detailed evaluation, testing not only success rate but also tool-level accuracy.
We provide refusal and accuracy metrics across a wide range of models in both no-attack and prompt-attack scenarios.
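As a rough illustration of how per-model metrics of this kind might be aggregated, here is a minimal sketch; the record fields, model names, and scenario labels below are hypothetical placeholders, not the benchmark's actual schema or results.

```python
from collections import defaultdict

# Hypothetical per-task evaluation records (illustrative only).
records = [
    {"model": "model-a", "scenario": "no_attack",     "refused": True,  "task_success": 0.0, "tool_accuracy": 0.0},
    {"model": "model-a", "scenario": "prompt_attack", "refused": False, "task_success": 1.0, "tool_accuracy": 0.8},
    {"model": "model-b", "scenario": "no_attack",     "refused": False, "task_success": 0.5, "tool_accuracy": 0.6},
]

def aggregate(records):
    """Average refusal rate, task success, and tool-level accuracy per (model, scenario)."""
    grouped = defaultdict(list)
    for r in records:
        grouped[(r["model"], r["scenario"])].append(r)
    summary = {}
    for key, rs in grouped.items():
        n = len(rs)
        summary[key] = {
            "refusal_rate": sum(r["refused"] for r in rs) / n,
            "avg_success": sum(r["task_success"] for r in rs) / n,
            "avg_tool_accuracy": sum(r["tool_accuracy"] for r in rs) / n,
        }
    return summary

for key, stats in aggregate(records).items():
    print(key, stats)
```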
$40K in Bounties: Ultimate Jailbreaking Championship 2024
Ultimate Jailbreaking Championship 2024. Hackers vs. AI in the arena. Let the battle begin! $40,000 in Bounties. Sept 7, 2024 @ 10AM PDT. Register Now: https://app.grayswan.ai/arena
====
Can you push an aligned language model to generate a bomb recipe or a fake news article? Join fellow hackers in a jailbreaking arena where you can test the boundaries of advanced LLMs.
====
The Objective
Your goal is to jailbreak as many LLMs as possible, as quickly as possible, in the arena!
====
The Stakes
Break a model and claim your share of the $40,000 in bounties! With various jailbreak bounties and top hacker rewards, there are plenty of opportunities to win. Winners will also receive priority consideration for employment and internship opportunities at Gray Swan AI.
====
Ready to rise to the challenge? Join us and show the world what you can do!