University of California Researchers Uncover Vulnerabilities in Third-Party AI Routers
Researchers have shed light on a critical security concern in the artificial intelligence (AI) ecosystem. A study by University of California experts has revealed that certain third-party routers used to access AI services can be operated or exploited by malicious actors, exposing sensitive data and cryptocurrency to theft.
The researchers tested 28 paid and 400 free routers, finding that nine were actively injecting malicious code and two deployed adaptive evasion triggers. Furthermore, researcher-owned Amazon Web Services credentials that passed through 17 of the routers were subsequently accessed, and in one case Ether was drained from a private key.
The study's findings underscore the need for caution when using AI coding agents, such as Claude Code, which can pass sensitive information through unsecured router infrastructure. To mitigate the risk, the researchers recommend bolstering client-side defenses, above all by never letting private keys or seed phrases transit an AI agent session.
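One way to act on that recommendation is to screen any text before it leaves the machine through an agent session. The sketch below is a hypothetical client-side guard (not from the study) that redacts two illustrative secret formats, AWS access key IDs and 64-hex-character private keys, before the text is handed to an agent or router; real deployments would need a far broader pattern set.

```python
import re

# Illustrative patterns only -- a real secret scanner would cover many
# more credential formats (seed phrases, API tokens, SSH keys, etc.).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hex_private_key": re.compile(r"\b(?:0x)?[0-9a-fA-F]{64}\b"),
}

def redact_secrets(text: str) -> tuple[str, list[str]]:
    """Replace recognized secrets with placeholders.

    Returns the redacted text plus the names of any patterns that
    matched, so a caller can choose to block the request entirely
    rather than merely redact it.
    """
    found = []
    for name, pattern in SECRET_PATTERNS.items():
        if pattern.search(text):
            found.append(name)
            text = pattern.sub(f"[REDACTED:{name}]", text)
    return text, found

# Example: scrub a prompt before it reaches router infrastructure.
prompt = "Deploy with AKIAIOSFODNN7EXAMPLE and key 0x" + "a" * 64
clean, hits = redact_secrets(prompt)
```

Blocking on any match, rather than redacting and continuing, is the safer default given that some routers in the study actively harvested credentials in transit.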




