Skip to main content

Mcp

loading · loading ·
Agentic AIs new attack surfaces, Data poisoning, tool-poisoning, and malicious MCP servers
·962 words·5 mins
Data poisoning is the deliberate injection of adversarial content into a model’s training data or a tool’s metadata so the LLM learns or obeys malicious instructions. In agentic systems that load third-party tools from MCP (model-connected platform) servers, poisoned tool descriptions or docstrings can trick an LLM into leaking secrets, executing harmful actions, or behaving as a covert proxy for attackers.