AI agent that detects DeFi attacks before the cascade. Exits your positions before the damage.
BunkerMode is a user-level crisis response layer for DeFi. When a hack hits a protocol, the people who lose the most aren't the protocol's treasury; they're the users whose positions sit in the blast radius and who find out 30 minutes too late. BunkerMode runs the playbook for them automatically.
The product watches threat signals from Forta, Hypernative, Cyvers, Twitter, on-chain utilization, and governance feeds. It corroborates them across distinct sources, classifies the attack (flash-loan oracle, access control, bridge verifier, supply chain, DPRK-attributed), and tiers the response: T1 alert, T2 user confirmation, T3 auto-fire. Bridge-verifier and DPRK-attributed classes auto-fire because their historical recovery rate is zero.
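The corroborate-classify-tier flow can be sketched as a small decision function. Everything here is illustrative (the names, the two-source threshold, the return shape are assumptions, not the real classifier), but it shows why only the zero-recovery classes reach T3 without a human in the loop:

```typescript
// Hypothetical sketch of the tiering decision; names and thresholds are
// illustrative, not lifted from the real corroborator.
type AttackClass =
  | "flash-loan-oracle"
  | "access-control"
  | "bridge-verifier"
  | "supply-chain"
  | "dprk-attributed";

type Tier = "T1" | "T2" | "T3";

// Classes with a zero historical recovery rate skip user confirmation.
// Supply chain is deliberately absent: Module B refuses autonomous execution there.
const AUTO_FIRE: AttackClass[] = ["bridge-verifier", "dprk-attributed"];

function tierFor(attack: AttackClass, corroboratedSources: number): Tier {
  if (corroboratedSources < 2) return "T1"; // single source: alert only
  if (AUTO_FIRE.includes(attack)) return "T3"; // zero-recovery classes auto-fire
  return "T2"; // otherwise ask the user inside the T2 window
}

console.log(tierFor("bridge-verifier", 3)); // "T3"
```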
KeeperHub is the execution engine that turns a T3 decision into actual on-chain action. Once the corroborator commits to fire, lib/keeperhub.ts calls KeeperHub's managed workflow API to run the multi-step exit atomically: Aave V3 withdraw, swap collateral to USDC, bridge to Base via CCTP. KeeperHub gives us retry, gas optimization, and private routing for free, which matters for a crisis-response product where every second between detection and exit is bad debt the user absorbs. When KeeperHub is unreachable, the layer falls back to a direct viem call against the Aave V3 pool so the user is never stuck waiting on infrastructure. A mock receipt path runs in local dev. Every fire goes through this same surface, and the response carries an executionId, txHashes, gas cost, and the optional x402 fee paid for the workflow, so the user has a full audit trail in one Telegram message.

The v2 framework underneath is built from 11 principles plus 3 special-case modules. P1-P5 cover the dependency graph, pool architecture, collateral stack, utilization, and signal hierarchy. P6-P9 calibrate response speed by position layer, route the exit, and set the T2 window from the attack profile. P11 runs a monthly governance audit so users can exit Drift-profile protocols before an incident, not after.
Module A delivers a timed Telegram post-incident sequence. Module B refuses autonomous execution on supply-chain triggers (the cleanest signal we have that a clean-device transfer is needed). Module C gates re-entry until all three trust layers pass.
The contagion graph layer maps your wallet's positions four hops deep, so a Forta alert on rsETH translates into a personalized message about your specific Aave or Curve exposure rather than a generic protocol notice. The Telegram bot is the action surface: pause exposure, snooze, or pull a full positions breakdown.
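The 4-hop expansion described above amounts to a weighted BFS that multiplies edge weights along each path and keeps the strongest path to every reachable node. A minimal sketch, with a made-up graph and weights (the real graph, node IDs, and weights live in the codebase):

```typescript
// Illustrative 4-hop contagion expansion. Edge weight = fraction of exposure
// carried across that dependency; path weight is the product along the path.
type Edge = { to: string; weight: number };

function expandContagion(
  graph: Record<string, Edge[]>,
  seeds: string[],
  maxHops = 4
): Map<string, { weight: number; hops: number }> {
  const best = new Map<string, { weight: number; hops: number }>();
  let frontier = seeds.map((id) => ({ id, weight: 1, hops: 0 }));
  for (const s of frontier) best.set(s.id, { weight: 1, hops: 0 });

  while (frontier.length) {
    const next: typeof frontier = [];
    for (const { id, weight, hops } of frontier) {
      if (hops >= maxHops) continue; // hard 4-hop cap
      for (const edge of graph[id] ?? []) {
        const w = weight * edge.weight;
        const seen = best.get(edge.to);
        // Keep only the highest-weight path to each node.
        if (!seen || w > seen.weight) {
          best.set(edge.to, { weight: w, hops: hops + 1 });
          next.push({ id: edge.to, weight: w, hops: hops + 1 });
        }
      }
    }
    frontier = next;
  }
  return best;
}

const graph: Record<string, Edge[]> = {
  rsETH: [{ to: "aave-v3", weight: 0.8 }],
  "aave-v3": [{ to: "curve-3pool", weight: 0.5 }],
};
const reach = expandContagion(graph, ["rsETH"]);
console.log(reach.get("aave-v3")); // { weight: 0.8, hops: 1 }
```

Keeping the max-weight path per node is what lets the alert collapse each chat to its single most relevant exposure ("2 hops, weight 64%") instead of spamming every reachable node.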
Stack: Next.js 16, Prisma 7 with the Postgres driver adapter on Supabase, viem and wagmi for chain reads, grammY for the Telegram bot, Forta webhooks signed with HMAC, KeeperHub for T3 execution, Remotion for the product video, and Vercel for hosting. The codebase ships 71 unit tests covering the classifier, corroborator, exit router, threat-environment multiplier, re-entry gate, governance audit, and Forta payload validation.
Live at https://bunkermode.vercel.app.
BunkerMode runs on Next.js 16 with Turbopack on Vercel. Server routes are backed by two stores in the same Postgres instance on Supabase: Prisma 7 with the Postgres driver adapter for v2 framework state (signals, events, dedup, audits, threat environment), and the Supabase JS client for wallet positions and contagion exposure. The two layers live side by side with no naming collisions because Prisma tables are PascalCase and the Supabase layer is snake_case. We use a Proxy-based lazy-init pattern over the Supabase client so importing lib/supabase.ts does not crash a route at module-evaluation time when env vars are missing. That fix is the single most useful refactor in the repo.
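The shape of that lazy-init pattern, sketched with a stub in place of @supabase/supabase-js's createClient (the env var names and client surface here are placeholders, not the exact contents of lib/supabase.ts):

```typescript
// Sketch of the Proxy-based lazy Supabase client. The stub stands in for
// createClient from @supabase/supabase-js; in the real module this is
// exported as the shared client.
function createClientStub(url: string, key: string) {
  return { from: (table: string) => ({ table, url }) };
}

type Client = ReturnType<typeof createClientStub>;
let client: Client | null = null;

// Importing this module never reads env vars; the first property access does.
const supabase = new Proxy({} as Client, {
  get(_target, prop) {
    if (!client) {
      const url = process.env.SUPABASE_URL;
      const key = process.env.SUPABASE_ANON_KEY;
      if (!url || !key) {
        // Fails at call time inside a request handler, not at
        // module-evaluation time on a cold start.
        throw new Error("Supabase env vars missing");
      }
      client = createClientStub(url, key);
    }
    return client[prop as keyof Client];
  },
});
```

Routes that import the module but never touch `supabase` load cleanly even with a broken env; the error surfaces only on first use, with a message that names the actual problem.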
The Forta integration is the hot path. Forta posts a signed alert to /api/webhook/forta. We HMAC-verify against FORTA_WEBHOOK_SECRET, validate the payload with Zod, persist a normalized Signal row in Postgres, then map alert.addresses to contagion-graph node IDs using lib/known-contracts.ts (USDC, rsETH, Aave receipt tokens, Lido stETH, Curve LPs, etc.). Those nodes get expanded through a 4-hop weighted BFS over the contagion graph, then we fetch every Telegram chat_id with exposure to any of those nodes from Supabase, collapse each chat to its highest-weight node, dedup against a 30-minute window in Prisma's AlertDedup table, and dispatch a MarkdownV2 message with three inline buttons (pause, snooze, details). The message body is contagion-aware: instead of a generic "rsETH alert", the user sees "rsETH exploit cascades to your Aave V3 (2 hops, weight 64%)". That mapping is the product's real edge over a stock alert bot.
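The signature check at the top of that hot path is standard HMAC-SHA256 over the raw body. A minimal sketch using node:crypto; the hex encoding is an assumption about the integration, and `timingSafeEqual` keeps the comparison constant-time:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of the webhook signature check against FORTA_WEBHOOK_SECRET.
// Encoding (hex) is assumed; verify against the raw body, never the
// re-serialized parsed payload, or signatures will intermittently mismatch.
function verifyForta(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so reject short/long sigs first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Only after this check passes does the payload reach Zod validation and the Signal write.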
KeeperHub is the T3 execution engine. fireWorkflow() in lib/keeperhub.ts is a tri-path layer: KeeperHub managed workflow first (retry, gas optimization, private routing), viem direct call to Aave V3 withdraw as fallback, mock receipt for local dev. The /api/fire route, the /demo replay, and the MCP server tool all flow through the same function. KeeperHub strings the Aave withdraw, the USDC swap, and the CCTP bridge to Base into one atomic-feeling action, which is exactly what a panicking user cannot focus enough to do themselves at 3am. The response carries an executionId, txHashes, gas cost, and the optional x402 fee, so the user gets a full audit trail in one Telegram message.
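The tri-path shape of fireWorkflow() can be sketched with the paths injected as functions (the real paths hit KeeperHub's API and viem; the parameter shape and names here are illustrative, not the actual signature in lib/keeperhub.ts):

```typescript
// Illustrative tri-path execution layer: managed workflow first, direct
// on-chain call as fallback, mock receipt in local dev.
type Receipt = { executionId: string; txHashes: string[]; via: string };

async function fireWorkflow(
  paths: {
    keeperhub: () => Promise<Receipt>; // retry, gas optimization, private routing
    viemDirect: () => Promise<Receipt>; // direct Aave V3 withdraw
    mock: () => Promise<Receipt>; // local dev receipt
  },
  env: "production" | "development"
): Promise<Receipt> {
  if (env === "development") return paths.mock(); // dev never touches mainnet
  try {
    return await paths.keeperhub();
  } catch {
    // KeeperHub unreachable: degrade to a direct call so the user is never
    // stuck waiting on infrastructure mid-crisis.
    return paths.viemDirect();
  }
}
```

Because /api/fire, the /demo replay, and the MCP tool all call this one function, the fallback behavior is identical no matter which surface triggered the exit.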
The corroborator is the brain. It takes the recent signal set, runs the P9 classifier (which pattern-matches on signal sources, payload keywords, and metadata to map onto attack classes), tightens the T2 window with the P8 threat-environment multiplier (which itself ratchets up after a major hack), checks for utilization fast-track (P4) and supply-chain refusal (Module B), and returns a tier plus an exit route from the user's policy DSL. Bridge-verifier and DPRK-attributed classes auto-fire because the historical recovery rate is zero. Flash-loan oracle gets a 180s window (Euler returned 18.7%). Spoof token gets 60s. The window math is calibrated against actual incident data, not picked off a vibe.
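The window math can be sketched as base-windows-per-class tightened by the threat-environment multiplier. The 180s and 60s bases come from the calibration above; the multiplier mechanics, the default, and the 30s floor are illustrative assumptions:

```typescript
// Sketch of P8 window tightening. Base windows per attack class are from the
// incident calibration; multiplier behavior and the floor are illustrative.
const BASE_T2_WINDOW_S: Record<string, number> = {
  "flash-loan-oracle": 180, // Euler returned 18.7%: a window is worth having
  "spoof-token": 60,
  "bridge-verifier": 0, // auto-fire: zero historical recovery
  "dprk-attributed": 0, // auto-fire: zero historical recovery
};

function t2Window(attack: string, threatMultiplier: number): number {
  const base = BASE_T2_WINDOW_S[attack] ?? 120; // hypothetical default
  if (base === 0) return 0; // auto-fire classes ignore the environment
  // Hotter environment => shorter window, floored so the user can still react.
  return Math.max(30, Math.round(base / threatMultiplier));
}

console.log(t2Window("flash-loan-oracle", 1.5)); // 120
```

The multiplier ratcheting up after a major hack means the same flash-loan signal gets a tighter window in a hot week than in a quiet one, without touching the per-class calibration.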
The Telegram bot uses grammY in webhook-callback mode at /api/telegram-bot. It handles /register, /positions, /remove, plus the inline button callbacks fired from the Forta dispatch. Wallet positions are fetched on register: ERC-20 holdings via viem multicall against known-contract addresses on Ethereum, SPL token balances against the Solana RPC. Each position resolves to a graph node ID, gets stored in Supabase, and feeds the contagion BFS for downstream alerts.

The video is Remotion at 1920x1080 60fps with a CRT/VHS filter, halftone dither, scanlines, and a custom audio mix. The original cut had music plus SFX hits baked in. We later replaced the entire audio track with a recorded voiceover and appended a screen-recording demo, all done in a single ffmpeg pipeline with concat plus tpad to hold the last frame for the voiceover tail.
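Stepping back to the register flow: the step that turns fetched balances into contagion-graph node IDs might look like the following. The addresses and node names are placeholders, not real entries from lib/known-contracts.ts:

```typescript
// Illustrative resolution of on-register balances to graph node IDs.
// Addresses here are fake placeholders; the real table is lib/known-contracts.ts.
const KNOWN_NODES: Record<string, string> = {
  "0x1111111111111111111111111111111111111111": "usdc",
  "0x2222222222222222222222222222222222222222": "rseth",
};

function positionsToNodes(balances: Record<string, bigint>): string[] {
  return Object.entries(balances)
    .filter(([, bal]) => bal > 0n) // drop empty positions
    .map(([addr]) => KNOWN_NODES[addr.toLowerCase()]) // address -> graph node ID
    .filter((id): id is string => id !== undefined); // unknown tokens don't feed the BFS
}
```

Only resolvable, nonzero positions land in Supabase, which keeps the downstream BFS seeded with nodes the graph actually knows about.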
Hacky things worth flagging: (1) the Proxy-based lazy Supabase client so the app survives missing env at import-time on cold-start; (2) running Prisma DDL via a hand-rolled pg-client SQL script (scripts/init-prisma-tables.ts) instead of prisma db push, because db push wanted to drop the existing Supabase tables our teammate built; (3) the dev Forta replay endpoint at /api/dev/forta-replay that signs its own HMAC and POSTs to the real webhook so the demo works end-to-end without waiting for an actual exploit.
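The replay trick in (3) is just the signing half of the webhook's HMAC check. A sketch, where the header name and alert shape are assumptions; the point is that dev signs with the same FORTA_WEBHOOK_SECRET the production webhook verifies against:

```typescript
import { createHmac } from "node:crypto";

// Sketch of /api/dev/forta-replay's self-signing: serialize a canned alert,
// HMAC it with the shared secret, POST to the real /api/webhook/forta.
// Header name and payload shape are illustrative.
function signReplay(alert: unknown, secret: string) {
  const body = JSON.stringify(alert);
  const signature = createHmac("sha256", secret).update(body).digest("hex");
  return { body, headers: { "x-forta-signature": signature } };
}

// The dev route would then do roughly:
// await fetch("/api/webhook/forta", {
//   method: "POST",
//   body,
//   headers: { "content-type": "application/json", ...headers },
// });
```

Because the demo traffic enters through the real webhook, the HMAC check, Zod validation, dedup window, and Telegram dispatch all get exercised end-to-end without waiting for an actual exploit.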

