AI Router Security Threats Exposed: Researchers Find Malicious Code Injection and Credential Theft
Researchers at the University of California have identified significant security threats in third-party AI routers used to access large language models (LLMs). The study, published on Thursday, revealed that some routers inject malicious code and steal credentials from users.
The researchers tested 28 paid routers and 400 free routers collected from public communities. Nine routers actively injected malicious code, two deployed adaptive evasion triggers, 17 accessed researcher-owned Amazon Web Services credentials, and one drained Ether from a wallet controlled by a researcher-owned private key. The study also showed that even legitimate routers can be silently weaponized to carry out attacks.
The researchers recommend that developers who use AI agents for coding bolster client-side defenses, above all by never letting private keys or seed phrases transit an AI agent session. In the long term, AI companies should cryptographically sign their responses so that instructions executed by the agent can be mathematically verified as coming from the actual model.
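A minimal sketch of what both recommendations could look like on the client side, assuming a hypothetical client wrapper; the regex patterns and function names are illustrative assumptions, not details from the study. Real response signing would use asymmetric signatures (e.g. Ed25519) so a router cannot forge them; an HMAC over a shared secret stands in here only to keep the sketch standard-library-only.

```python
import hashlib
import hmac
import re

# --- Defense 1: never let key material transit an agent session ---

# Illustrative heuristics (assumptions, not exhaustive):
# 64 hex chars, optionally 0x-prefixed, resembles an Ethereum private key;
# a long run of lowercase words resembles a BIP-39 seed phrase.
HEX_KEY_RE = re.compile(r"\b(?:0x)?[0-9a-fA-F]{64}\b")
SEED_PHRASE_RE = re.compile(r"\b(?:[a-z]+ ){11,23}[a-z]+\b")

def looks_like_secret(text: str) -> bool:
    """Heuristically flag text that appears to contain key material."""
    return bool(HEX_KEY_RE.search(text) or SEED_PHRASE_RE.search(text))

def guard_prompt(prompt: str) -> str:
    """Refuse to forward a prompt that appears to contain secrets."""
    if looks_like_secret(prompt):
        raise ValueError("blocked: prompt appears to contain key material")
    return prompt  # a real client would forward to the router here

# --- Defense 2: verify a response really came from the model ---
# HMAC stand-in for an asymmetric model signature (see lead-in above).

def sign_response(secret: bytes, response: str) -> str:
    """Tag a model response so the client can detect tampering in transit."""
    return hmac.new(secret, response.encode(), hashlib.sha256).hexdigest()

def verify_response(secret: bytes, response: str, tag: str) -> bool:
    """Constant-time check that the tag matches the received response."""
    expected = sign_response(secret, response)
    return hmac.compare_digest(expected, tag)
```

With this in place, a router that rewrites the model's instructions in transit would produce a response whose tag no longer verifies, and the agent could refuse to execute it.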




