
Gadgets

Engadget

shared a link post in group #Gadgets

www.engadget.com

UK's AI Safety Institute easily jailbreaks major LLMs

Researchers found that major LLMs were easy to jailbreak and could be made to produce harmful outputs.
