Prompt injection is a security vulnerability affecting AI applications, enabling users to bypass original programming instructions and issue alternative commands. Despite over a year of discussion, effective solutions remain elusive. This poses risks, especially for AI assistants handling private data, as they could inadvertently follow harmful user-issued instructions. Key applications, particularly involving sensitive information like emails or law enforcement reports, are under threat. Continued efforts in AI development face challenges due to this unresolved vulnerability.
https://simonwillison.net/2023/Nov/27/prompt-injection-explained/