Claude AI and other systems could be vulnerable to worrying command prompt injection attacks

Estimated read time 2 min read




  • Security researchers tricked Anthropic’s Claude Computer Use to download and run malware
  • They say that other AI tools could be tricked with prompt injection, too
  • GenAI can be tricked to write, compile, and run malware, as well

In mid-October 2024, Anthropic released Claude Computer Use, an Artificial Intelligence (AI) model allowing Claude to control a device – and researchers have already found a way to abuse it.

Cybersecurity researcher Johann Rehnberger recently described how he was able to abuse Computer Use and get the AI to download and run malware, as well as get it to communicate with its C2 infrastructure, all through prompts.



Source link

You May Also Like

More From Author

+ There are no comments

Add yours