Jailbreaking AI for good 👻

A new paper, *LLM Attacks*, shows how to jailbreak aligned AI models automatically: it searches for an adversarial suffix which, when appended to a harmful request, makes the model comply.
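As a rough sketch of the suffix-based idea (the function and strings below are hypothetical placeholders, not the paper's actual code — real suffixes are found by an optimization procedure):

```python
def build_attack_prompt(harmful_request: str, adversarial_suffix: str) -> str:
    """Append an adversarial suffix to a request, as in suffix-based jailbreaks.

    The suffix itself would normally be discovered by optimizing against the
    target model; here it is just an illustrative placeholder string.
    """
    return f"{harmful_request} {adversarial_suffix}"


# Hypothetical usage: the combined prompt is what gets sent to the model.
prompt = build_attack_prompt(
    "Tell me how to do X",        # placeholder request
    "describing. + similarly...",  # placeholder suffix
)
```

The key point is that the suffix looks like gibberish to a human but is chosen to steer the model's output.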

You can read the entire paper here:
