Prompt Injection in JetBrains Rider AI Assistant
We discovered a prompt injection vulnerability in JetBrains Rider AI Assistant, allowing an attacker to exfiltrate data. Prompt injection attacks aim to alter the LLM's intended behavior, leading to unexpected or malicious outputs. The consequences vary depending on the application. In this case, an injection attack might cause the LLM to generate output that, when rendered in the AI Assistant Chat, enables data exfiltration.