Not even fairy tales are safe – researchers weaponise bedtime stories to jailbreak AI chatbots and create malware

Cato CTRL researchers jailbroke LLMs and produced malware despite having no prior malware coding experience.