Layered defenses are cutting prompt injection risk, but they do not remove the LLM weakness underneath
Prompt injection is not a one-time filter problem. It comes from a basic limitation in large language models: they do not reliably separate instructions from data. That is why the strongest recent results come from stacked defenses rather than prompt hardening alone, and why AI agents connected to search, files, memory, or external tools remain…