Self-evolving AI agent framework: nodes compete, genomes mutate, every inference is TEE-verified on 0G
Shingeki (進撃, "to advance") is a distributed cognitive mesh where multiple AI agent nodes collaborate to solve multi-step tasks — and every decision is verifiable on-chain.
Unlike single-agent frameworks that run one LLM call after another in isolation, Shingeki routes
plan steps across a live network of nodes that compete on quality. On each step, all available nodes
run the task in parallel; a two-path fitness scorer (fast heuristic pre-filter + TEE-verified LLM
judge) picks the best response. If quality falls below a configurable threshold, the node's genome —
the configuration object encoding model, prompt strategy, reflection depth, and mutation rate —
automatically evolves. Every mutation is recorded to 0G's append-only Log Store and rendered live in
a browser-based lineage viewer, creating a tamper-proof audit trail of how the agent improved
across a run.
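The fast heuristic half of the two-path scorer can be sketched as a pure function over structural signals. This is an illustrative sketch, not the actual implementation: the function name `scoreHeuristic`, the weights, and the marginal band that triggers the LLM judge are all assumptions.

```typescript
// Hypothetical sketch of the fast heuristic pre-filter: score a response on
// cheap structural signals (length, bullets, tables, keyword density) before
// spending money on the TEE-verified LLM judge.

export interface HeuristicScore {
  score: number;       // blended score in [0, 1]
  needsJudge: boolean; // true when the score is too marginal to trust alone
}

const HEURISTIC_MARGIN: [number, number] = [0.35, 0.65]; // assumed marginal band

export function scoreHeuristic(text: string, keywords: string[] = []): HeuristicScore {
  const lines = text.split("\n");
  const bullets = lines.filter((l) => /^\s*[-*•]/.test(l)).length;
  const tableRows = lines.filter((l) => /\|.+\|/.test(l)).length;
  const hits = keywords.filter((k) => text.toLowerCase().includes(k.toLowerCase())).length;

  // Each signal is clamped to [0, 1]; the weights below are placeholders.
  const lengthSignal = Math.min(text.length / 2000, 1);
  const bulletSignal = Math.min(bullets / 5, 1);
  const tableSignal = Math.min(tableRows / 3, 1);
  const keywordSignal = keywords.length ? hits / keywords.length : 0.5;

  const score =
    0.3 * lengthSignal + 0.2 * bulletSignal + 0.2 * tableSignal + 0.3 * keywordSignal;
  const needsJudge = score >= HEURISTIC_MARGIN[0] && score <= HEURISTIC_MARGIN[1];
  return { score, needsJudge };
}
```

Only responses landing in the marginal band would escalate to the paid judge; clear wins and clear losses are settled for free.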
The system self-organises: the orchestrator reads the task plan, infers which specialist domains are
needed (research, planning, coding, general), and spawns exactly the right mix of nodes. A
WebSocket hub coordinates all workers, handles crash recovery via atomic local checkpoints, and
exposes a full HTTP API — including a browser UI for submitting tasks, watching results stream in,
and issuing follow-up tasks with one click.
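The orchestrator's domain inference can be pictured as a keyword match over the plan steps. The four domains come from the text above; the keyword tables and the `inferDomains` name are illustrative assumptions, not the real API.

```typescript
// Illustrative sketch: map plan steps to the specialist domains the
// orchestrator spawns nodes for. Keyword patterns are placeholders.

export type Domain = "research" | "planning" | "coding" | "general";

const DOMAIN_KEYWORDS: Record<Exclude<Domain, "general">, RegExp> = {
  research: /\b(research|investigate|compare|survey|find)\b/i,
  planning: /\b(plan|schedule|roadmap|milestone)\b/i,
  coding: /\b(implement|code|refactor|debug|api)\b/i,
};

// Returns the de-duplicated set of domains needed to cover every step,
// falling back to "general" for steps no specialist pattern matches.
export function inferDomains(planSteps: string[]): Domain[] {
  const needed = new Set<Domain>();
  for (const step of planSteps) {
    let matched = false;
    for (const [domain, pattern] of Object.entries(DOMAIN_KEYWORDS)) {
      if (pattern.test(step)) {
        needed.add(domain as Domain);
        matched = true;
      }
    }
    if (!matched) needed.add("general");
  }
  return [...needed];
}
```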
Shingeki is built entirely in TypeScript on Node.js ≥ 22, using native ESM and tsx for zero-compile-step execution.
0G Router is the inference backbone — every LLM call (task steps, fitness evaluation judges, genome
mutation prompts) goes through router-api.0g.ai/v1 with verify_tee: true, so each response carries a
TEE signature in x_0g_trace.tee_verified. The router's provider: { sort: "latency" } option
automatically picks the fastest available provider per call, giving built-in load balancing with no
extra code. The 0g-ts-sdk is used to push per-step execution traces (input, output hash, TEE flag,
billing) to the 0G Log Store, and the final genome state to the 0G KV Store at task end — creating
an immutable, on-chain audit trail.
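A router call can be sketched as a request-body builder. The `verify_tee` flag and `provider: { sort: "latency" }` come from the description above; the exact endpoint path under `/v1`, the chat-completions message shape, and the `buildRouterRequest` helper are assumptions for illustration.

```typescript
// Sketch of the request body sent on every LLM call through the 0G Router.
// Endpoint path and message shape are assumed, not confirmed.

const ROUTER_URL = "https://router-api.0g.ai/v1/chat/completions"; // assumed path

export interface RouterRequest {
  model: string;
  messages: { role: "system" | "user"; content: string }[];
  verify_tee: boolean;           // request a TEE-signed response
  provider: { sort: "latency" }; // router picks the fastest provider per call
}

export function buildRouterRequest(
  model: string,
  system: string,
  user: string,
): RouterRequest {
  return {
    model,
    messages: [
      { role: "system", content: system },
      { role: "user", content: user },
    ],
    verify_tee: true,
    provider: { sort: "latency" },
  };
}

// A caller would then check the TEE flag on the response trace, e.g.:
//   const ok = response.x_0g_trace?.tee_verified === true;
```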
The genome evolution engine is the novel architectural piece. Every agent node holds a Genome struct (model ID, system prompt, strategy, tools, reflection depth, mutation rate, parent ID). After each step, the two-path evaluator scores the output: first a free heuristic (structural signals — length, bullets, tables, keyword density), then a TEE-verified LLM judge only when the heuristic score is marginal. If the score is below threshold, applyMutation generates three variants (prompt tweak, model switch to a FALLBACK_MODELS cycle, strategy change), picks one by genome depth mod 3, and promotes it — so the agent literally learns from failure mid-run. The lineage tree (parent → child genomes, fitness bars, TEE badges) renders live in the browser.
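The Genome struct and the depth-mod-3 variant selection can be sketched as follows. Field names follow the description above; the `FALLBACK_MODELS` contents, the strategy values, and the exact mutation bodies are illustrative assumptions.

```typescript
// Minimal sketch of the Genome struct and the mutate-on-failure step.
// Which of the three variants is promoted is chosen by depth mod 3.

export interface Genome {
  id: string;
  parentId: string | null;
  model: string;
  systemPrompt: string;
  strategy: "direct" | "reflect" | "decompose";
  reflectionDepth: number;
  mutationRate: number;
  depth: number; // generations from the root genome
}

const FALLBACK_MODELS = ["model-a", "model-b", "model-c"]; // placeholder IDs

export function applyMutation(g: Genome): Genome {
  const child: Genome = {
    ...g,
    id: `${g.id}.${g.depth + 1}`,
    parentId: g.id, // preserve lineage for the viewer's parent → child tree
    depth: g.depth + 1,
  };
  switch (g.depth % 3) {
    case 0: // variant 1: prompt tweak
      child.systemPrompt = `${g.systemPrompt}\nBe more specific and cite evidence.`;
      break;
    case 1: // variant 2: model switch, cycling through the fallback list
      child.model =
        FALLBACK_MODELS[(FALLBACK_MODELS.indexOf(g.model) + 1) % FALLBACK_MODELS.length];
      break;
    default: // variant 3: strategy change
      child.strategy = g.strategy === "direct" ? "reflect" : "decompose";
  }
  return child;
}
```

Keeping `parentId` on every child is what lets the lineage viewer reconstruct the full parent → child tree from the Log Store alone.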
Coordination uses a raw ws WebSocket hub rather than a framework — nodes advertise capabilities
(domain specialisations, stake, latency), the orchestrator selects the best match per step, and the
step competition protocol (parallel dispatch → scoring → winner election) runs over typed JSON
messages with an encodeMessage / parseMessage codec. Crash recovery uses atomic tmp-then-rename checkpoint files, so a killed process can --resume exactly where it left off.
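The tmp-then-rename checkpoint pattern can be sketched with Node's `fs` built-ins. The checkpoint path and the `saveCheckpoint` / `loadCheckpoint` helper names are assumptions; the pattern itself (write to a temp file, then rename over the target) is what makes the write atomic on POSIX filesystems.

```typescript
// Sketch of atomic tmp-then-rename checkpointing for crash recovery:
// write the full state to a temp file, then rename it over the real
// checkpoint so a reader never observes a half-written file.
import { writeFileSync, renameSync, readFileSync, existsSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

const CHECKPOINT = join(tmpdir(), "shingeki-checkpoint.json"); // demo path

export function saveCheckpoint(state: unknown): void {
  const tmp = `${CHECKPOINT}.tmp`;
  writeFileSync(tmp, JSON.stringify(state)); // a crash here leaves only a stale .tmp
  renameSync(tmp, CHECKPOINT);               // atomic within one filesystem
}

export function loadCheckpoint<T>(): T | null {
  if (!existsSync(CHECKPOINT)) return null;  // fresh run, nothing to resume
  return JSON.parse(readFileSync(CHECKPOINT, "utf8")) as T;
}
```

On `--resume`, a restarted process loads the last checkpoint and re-enters the step loop from there instead of replaying completed steps.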
The browser viewer (src/viewer/index.html) is a single self-contained file — no build step, no bundler — using marked + DOMPurify from CDN for markdown rendering, shimmer skeleton loaders, animated running banners, and a follow-up task form that navigates directly to the new task's live-updating results panel.

