How a security researcher used a now-fixed flaw to store false memories in ChatGPT via indirect prompt injection with the goal of exfiltrating all user input (Dan Goodin/Ars Technica) 25-09-2024

Dan Goodin / Ars Technica:
Emails, documents, and other untrusted content can plant malicious memories.  —  When security researcher Johann Rehberger recently reported …


Read more at Tech Meme