Hacker creates false memories in ChatGPT to steal victim data — but it might not be as bad as it sounds

Estimated read time 2 min read



Security researchers have exposed a vulnerability which could allow threat actors to store malicious instructions in a user’s memory settings in the ChatGPT MacOS app.

A report from Johann Rehberger at Embrace The Red noted how an attacker could trigger a prompt injection to take control of ChatGPT, and can then insert a memory into its long-term storage and persistence mechanism. This leads to the exfiltration of the conversation on both sides straight to the attacker’s server.



Source link

You May Also Like

More From Author

+ There are no comments

Add yours