Trend Micro Research argues perplexity alone can miss prompt injections and that prompt representations improve pre-inference detection, pointing to more robust defenses for agentic and RAG systems.
Prompt injection is not always noisy.
Trend Micro Research shows why perplexity, a measure of how predictable text is to an LLM, can fail on its own,
and why prompt representations improve pre-inference detection.
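For context, perplexity is the exponential of the negative mean token log-likelihood: fluent text scores low, gibberish scores high. A minimal sketch of why a perplexity threshold alone can miss injections (the per-token log-probabilities below are hypothetical illustrations, not drawn from any real model or from the research itself):

```python
import math

def perplexity(token_logprobs):
    # PPL = exp(-(1/N) * sum_i log p(token_i))
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log-probs (illustrative values, not real model output):
benign = [-2.1, -1.8, -2.4, -1.9]            # an ordinary user question
fluent_injection = [-2.0, -2.2, -1.7, -2.1]  # grammatical "ignore prior instructions" text
gibberish_suffix = [-7.5, -8.1, -6.9, -7.8]  # noisy adversarial token soup

print(f"benign:           {perplexity(benign):.1f}")
print(f"fluent injection: {perplexity(fluent_injection):.1f}")  # comparable to benign
print(f"gibberish:        {perplexity(gibberish_suffix):.1f}")  # far higher
```

Because a well-written injection is just as predictable to the model as a benign prompt, its perplexity falls below any threshold that still admits normal traffic; only noisy, unnatural attacks stand out, which is the gap that representation-based detection aims to close.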