Incident-Response-MeTTa-Agent
An AI agent using MeTTa and LLMs for automated cybersecurity incident response.
Incident-Response-MeTTa-Agent is an advanced, autonomous AI system designed to act as an intelligent assistant for cybersecurity professionals, specifically those in roles like Security Operations Center (SOC) Analysts or Incident Responders.
The primary problem it solves is the overwhelming, time-sensitive nature of modern security operations. Analysts are often flooded with alerts and data from various security tools. They must quickly make sense of cryptic logs or observations, identify the nature of a potential threat, understand its severity, and decide on the best course of action. This process is complex, requires deep expertise, and is prone to human error, especially under pressure.
This agent streamlines this entire initial analysis process. An analyst can describe a suspicious event in plain English, and the agent will automatically:
- parse the description and extract machine-readable indicators
- map those indicators to MITRE ATT&CK techniques and tactics
- assess the overall severity of the incident
- generate a professional report with clear explanations and recommended actions
The Hybrid AI Approach: Combining Symbolic Logic and Language Models
The "magic" of this agent lies in its hybrid AI architecture, which combines two powerful but distinct types of artificial intelligence:
MITRE ATT&CK Framework: This is a globally recognized, curated knowledge base of adversary tactics and techniques based on real-world observations. Think of it as the ultimate encyclopedia of "how hackers operate." It categorizes attacker behaviors into tactics (the "why," e.g., Lateral Movement) and techniques (the "how," e.g., using SMB/Windows Admin Shares).
MeTTa Knowledge Graph: The agent doesn't just store the MITRE data; it represents it as a MeTTa knowledge graph, a structure where information is stored as interconnected facts and logical rules. For example, a rule might be (Indicator: scheduled_task) → (Technique: T1053.005) or (Technique: T1486) → (Severity: Critical). This allows the agent to perform logical reasoning: it can infer new conclusions from existing facts, much as a human expert would. It isn't just matching keywords; it is following the relationships between them.
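To make the rule-chaining idea concrete, here is a minimal Python sketch of the same two-step inference. The tuple store and helper names (FACTS, infer_severity) are illustrative stand-ins, not the project's actual MeTTa code.

```python
# Minimal sketch: facts as (relation, subject, object) triples, mirroring
# the example rules above. Names here are illustrative only.
FACTS = {
    ("indicator-to-technique", "scheduled_task", "T1053.005"),
    ("indicator-to-technique", "ransomware_encryption", "T1486"),
    ("technique-to-severity", "T1486", "critical"),
}

def techniques_for(indicator):
    """Follow indicator -> technique edges."""
    return {obj for rel, subj, obj in FACTS
            if rel == "indicator-to-technique" and subj == indicator}

def infer_severity(indicator):
    """Chain two rules: indicator -> technique -> severity."""
    severities = {obj for rel, subj, obj in FACTS
                  if rel == "technique-to-severity"
                  and subj in techniques_for(indicator)}
    return max(severities, default=None)

print(infer_severity("ransomware_encryption"))  # critical
```

The point of the sketch is the chaining: no single fact says "ransomware_encryption is critical", yet the conclusion follows from two explicit rules, and the derivation can be shown to the analyst.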
This symbolic core provides the agent with rigor, accuracy, and explainability. Its conclusions are based on a structured, verifiable knowledge base and explicit logical rules.
The LLM (ASI:One's asi1-mini) acts as the universal translator and communicator between the human analyst and the agent's symbolic brain. It has two primary jobs:
Natural Language Understanding (NLU): When an analyst types, "I'm seeing SMB traffic between DC01 and file servers with scheduled tasks being created", the LLM's job is to parse this sentence and extract the crucial, machine-readable indicators: smb_traffic and scheduled_task.
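As a rough sketch of this NLU step: the prompt wording and the call_llm hook (standing in for the real ASI:One client) are assumptions of this example, not the project's actual prompt or API.

```python
import json

# Illustrative prompt asking the model for machine-readable indicators only.
NLU_PROMPT = (
    "Extract the cybersecurity indicators from the analyst's observation. "
    "Reply with only a JSON list of snake_case indicator names.\n\n"
    "Observation: {observation}"
)

def extract_indicators(observation, call_llm):
    """Send the observation to the LLM and parse its JSON reply."""
    reply = call_llm(NLU_PROMPT.format(observation=observation))
    return json.loads(reply)

# Offline stand-in for the real model, returning a canned reply.
fake_llm = lambda prompt: '["smb_traffic", "scheduled_task"]'
indicators = extract_indicators(
    "I'm seeing SMB traffic between DC01 and file servers "
    "with scheduled tasks being created", fake_llm)
print(indicators)  # ['smb_traffic', 'scheduled_task']
```

Constraining the model to emit a bare JSON list keeps the LLM's output machine-parseable, so the symbolic engine never has to interpret free-form prose.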
Natural Language Generation (NLG): Once MeTTa produces its structured findings (e.g., Techniques: T1021.002, T1053.005; Tactics: TA0008, TA0003; Severity: High), the LLM's second job is to take this raw data and synthesize it into the professional, human-readable report seen in the examples, complete with clear explanations and formatted recommendations.
The End-to-End Workflow in Detail
Let's walk through the "Potential Lateral Movement" example to see how all the pieces work together:
1. The analyst submits the observation: "SMB traffic between DC01 and file servers with scheduled tasks being created".
2. The LLM parses it and extracts two indicators: smb_traffic and scheduled_task.
3. The MeTTa engine finds that smb_traffic is strongly associated with MITRE Technique T1021.002 (Remote Services: SMB/Windows Admin Shares), and that scheduled_task is strongly associated with MITRE Technique T1053.005 (Scheduled Task/Job: Scheduled Task).
Key Differentiating Feature: Dynamic Learning
A standout feature is that this agent is not static. If an analyst provides an observation with an indicator the agent has never seen before, it can use the LLM to hypothesize a connection to a known MITRE ATT&CK technique. If this connection is validated, it can be permanently added as a new rule to the MeTTa knowledge graph. This means the agent learns and grows more intelligent with every interaction, adapting to new and emerging threat behaviors.
The Core Philosophy: A Hybrid AI "Brain"
The foundational idea was to avoid the pitfalls of using only a Large Language Model (LLM) or only a symbolic reasoning system.
LLMs alone are fantastic at understanding language but can "hallucinate" or make logical leaps that aren't grounded in fact; for cybersecurity, where precision is critical, this is a major risk. Symbolic systems alone are precise and verifiable, but rigid: on their own they cannot interpret a free-form description typed by an analyst.
This project implements a hybrid approach, creating a system with a logical, fact-based "reasoning core" (MeTTa) and a flexible, user-friendly "language interface" (LLM).
The Technology Stack & How It's Pieced Together
The entire agent is orchestrated in Python, which acts as the "glue" holding the two main AI components together.
The MeTTa knowledge graph is the agent's source of truth, and the process of building it was meticulous:
Every fact is expressed as a simple symbolic expression of the form (relation subject object). The knowledge graph was built using a few key relationships:
- (indicator-to-technique "powershell" "T1059.001"): links a common-language indicator to a specific MITRE technique ID. This is the crucial bridge from the user's query to the structured world of ATT&CK.
- (technique-to-tactic "T1059.001" "TA0002"): maps techniques to their parent tactics (e.g., PowerShell is used for the "Execution" tactic).
- (technique-to-severity "T1486" "critical"): custom rules I created that assign a base severity level to particularly dangerous techniques, like ransomware encryption.
When the agent receives a list of indicators (e.g., ["smb_traffic", "scheduled_task"]), MeTTa traverses the graph to find all associated techniques, tactics, and severities. It's essentially playing "connect the dots" on a massive, pre-defined logic board.
I used the ASI:One LLM, specifically the asi1-mini model, as the partner technology. Its benefit was twofold: efficiency and focus. It's a powerful model that is fast and cost-effective, making it ideal for the specific, repeated tasks of NLU and NLG in this pipeline.
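The "connect the dots" traversal can be sketched in plain Python as follows. The triple list and the analyze helper are illustrative stand-ins for the MeTTa space and its queries; the "medium" default for unmatched severities is an assumption of this sketch.

```python
# Graph traversal sketch: indicators -> techniques -> tactics/severity.
# Relationship names match the examples above; the list of triples is an
# illustrative stand-in for the real MeTTa space.
GRAPH = [
    ("indicator-to-technique", "smb_traffic", "T1021.002"),
    ("indicator-to-technique", "scheduled_task", "T1053.005"),
    ("technique-to-tactic", "T1021.002", "TA0008"),
    ("technique-to-tactic", "T1053.005", "TA0003"),
    ("technique-to-severity", "T1021.002", "high"),
]

def query(rel, subj):
    """All objects linked to subj by the given relation."""
    return [obj for r, s, obj in GRAPH if r == rel and s == subj]

def analyze(indicators):
    techniques, tactics, severities = [], [], []
    for ind in indicators:
        for tech in query("indicator-to-technique", ind):
            techniques.append(tech)
            tactics += query("technique-to-tactic", tech)
            severities += query("technique-to-severity", tech)
    order = ["low", "medium", "high", "critical"]
    # Overall severity = worst matched rule; "medium" default is assumed.
    severity = max(severities, key=order.index, default="medium")
    return {"techniques": techniques, "tactics": tactics, "severity": severity}

print(analyze(["smb_traffic", "scheduled_task"]))
# {'techniques': ['T1021.002', 'T1053.005'], 'tactics': ['TA0008', 'TA0003'], 'severity': 'high'}
```

The structured dict this returns is exactly the kind of payload the NLG step later turns into a report.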
It serves two distinct roles:
1. Natural Language Understanding: a prompt instructs the model to distill the analyst's observation into a clean list of indicators (e.g., ['smb_traffic', 'scheduled_task']), which is then fed directly into the MeTTa engine.
2. Natural Language Generation: once MeTTa returns its structured verdict (e.g., {"techniques": ["T1021.002"], "tactics": ["TA0008"], "severity": "High"}), the Python script bundles it up and sends it back to the ASI:One API with a different prompt. This one says: "You are a helpful SOC analyst assistant. Synthesize the following technical data into a professional incident report. Explain the findings clearly and list the recommended actions. Here is the data: '[MeTTa's output here]'". This turns the cold, hard facts from MeTTa into the fluent, helpful response the user sees.
The "Hacky" Part: Dynamic Learning via LLM Hypothesis
Here’s the most notable "hacky" but effective feature: what happens when the agent sees something new?
The MeTTa knowledge graph only knows what it's been told. If a user mentions a brand-new, zero-day indicator, MeTTa would find nothing and fail.
To solve this, I implemented a fallback mechanism:
1. When MeTTa finds no match for an indicator, the agent sends the unknown term to the LLM and asks it to hypothesize the single most likely MITRE ATT&CK technique ID (e.g., "T1566.001").
2. The agent then injects a new fact, (indicator-to-technique "[the new indicator]" "T1566.001"), into the MeTTa knowledge graph for the current session.
This is "hacky" because it uses a probabilistic LLM to patch a deterministic knowledge base on the fly. In a production system, this would need human validation, but for this project, it creates a powerful dynamic learning loop, allowing the agent's knowledge to expand with every novel query it encounters.
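The fallback loop might look like this in Python. The hypothesize_technique stub stands in for the real ASI:One call, and the mutable fact list for the session's MeTTa space; both names are assumptions of this sketch.

```python
# Dynamic learning sketch: when an indicator is unknown, ask the LLM to
# hypothesize a technique ID and inject it as a new session-scoped fact.
SESSION_FACTS = [("indicator-to-technique", "powershell", "T1059.001")]

def lookup(indicator):
    """MeTTa-style query: techniques already linked to this indicator."""
    return [tech for rel, ind, tech in SESSION_FACTS
            if rel == "indicator-to-technique" and ind == indicator]

def hypothesize_technique(indicator):
    """Stand-in for an LLM prompt such as: 'Which single MITRE ATT&CK
    technique ID best matches this indicator? Reply with the ID only.'"""
    return "T1566.001"  # canned reply; the real agent would call ASI:One

def resolve(indicator):
    techniques = lookup(indicator)
    if not techniques:                   # the symbolic lookup found nothing
        guess = hypothesize_technique(indicator)
        SESSION_FACTS.append(("indicator-to-technique", indicator, guess))
        techniques = [guess]             # graph patched for this session
    return techniques

print(resolve("suspicious_qr_code_email"))  # ['T1566.001']
print(lookup("suspicious_qr_code_email"))   # ['T1566.001'] (now a known fact)
```

Note that the second lookup succeeds without consulting the LLM again: the hypothesized rule has become an ordinary fact for the rest of the session, which is exactly the learning loop described above.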

