
Engadget
shared a link post in group #Gadgets

www.engadget.com
AI chatbots can be tricked with poetry to ignore their safety guardrails
Researchers from Italy discovered that phrasing prompts in poetry can be a reliable jailbreaking method for LLMs.
