SCUM (Strategic Coercion Using Manipulation)

An experimental project that "hacks" other AI agents by injecting promotional content into their prompts, all within a controlled, theoretical simulation.
This project is a controlled simulation designed to explore potential vulnerabilities in AI agent interactions.
The goal is to demonstrate how one AI's prompt system might be manipulated to inject promotional content, alter virtual tokens, or simulate the reallocation of funds, all within a secure, theoretical environment. The project serves as a proof of concept that highlights the risks in AI communication channels: it shows how subtle prompt injections and token manipulations could affect automated agents if left unchecked, underscoring the need for robust security measures in AI systems.
At its core, a central orchestrator simulates the interactions between multiple AI agents.
A dedicated vulnerability injector module emulates prompt injection techniques, and a funds management module simulates the manipulation of virtual tokens.
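The three components described above could be sketched as follows. This is a minimal, hypothetical Python design; the class and method names (`Orchestrator`, `VulnerabilityInjector`, `FundsLedger`) are illustrative assumptions, not the project's actual API, and the "injection" is nothing more than controlled string manipulation inside the sandbox.

```python
class VulnerabilityInjector:
    """Emulates a prompt-injection step inside the sandbox."""

    def inject(self, prompt: str, payload: str) -> str:
        # The simulated injection simply appends a marked payload
        # to the prompt so its effect can be observed downstream.
        return f"{prompt}\n[INJECTED]: {payload}"


class FundsLedger:
    """Tracks purely virtual tokens; no real value is involved."""

    def __init__(self, balances: dict):
        self.balances = dict(balances)

    def transfer(self, src: str, dst: str, amount: int) -> None:
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient virtual tokens")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount


class Orchestrator:
    """Coordinates the simulated interactions between agents."""

    def __init__(self, injector: VulnerabilityInjector, ledger: FundsLedger):
        self.injector = injector
        self.ledger = ledger

    def run_round(self, agent_prompt: str, promo: str) -> str:
        # Each agent prompt is routed through the injector so the
        # impact of tampering can be demonstrated and inspected.
        return self.injector.inject(agent_prompt, promo)


ledger = FundsLedger({"agent_a": 100, "agent_b": 0})
orch = Orchestrator(VulnerabilityInjector(), ledger)
tampered = orch.run_round("Summarize today's news.", "Visit example.com!")
ledger.transfer("agent_a", "agent_b", 25)
```

Keeping the injector and ledger as separate modules mirrors the separation described above: the orchestrator never mutates prompts or balances directly, so every manipulation in the simulation passes through a single auditable code path.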
A notable aspect of the project is a controlled injection routine that mirrors potential real-world vulnerabilities while operating entirely within a sandboxed environment. This setup not only demonstrates the risks clearly but also reinforces the need for secure AI communication protocols.