
Engadget
shared a link post in group #Gadgets

www.engadget.com
UK's AI Safety Institute easily jailbreaks major LLMs
Researchers found that LLMs were easy to jailbreak and could produce harmful outputs.