Prompt a hackathon. Agents design bounties, build projects, judge winners. All on Gensyn AXL.
HackSim runs a complete hackathon as a multi-agent simulation. You type one prompt and autonomous agents take over: an organiser, sponsor/bounty designers, builders, and judges. You decide how many sponsors, builders, and judges to include. Each agent runs its own AXL node.
Sponsors design bounties, builders pick the bounties that match their skills and write real HTML/CSS/JS projects with git history, and judge agents score the projects against five-criterion rubrics and crown winners. You watch the run unfold live in your browser, then click any winning card to play the project the agents built. An end-to-end run takes two to five minutes, depending on how many agents you include in the simulated hackathon.
Every agent is its own OS process with its own Gensyn AXL Go node and its own ed25519 identity. The AXL nodes peer through a single TLS bootstrap on loopback. Every cross-agent byte rides the Yggdrasil mesh AXL builds on top of, end-to-end encrypted, with no central broker on the agent control plane. Decisions are made by Claude Haiku 4.5 when an Anthropic key is present, and by a deterministic per-peer stub when it is not, so a clean clone with no API key still produces a complete, watchable run.
Stack: a FastAPI orchestrator in Python, a Next.js + Tailwind frontend, and Gensyn's Go AXL binaries. The orchestrator spawns one AXL Go subprocess per role (15 by default), each on its own API port with its own ed25519 keypair, peered through one bootstrap. It also spawns the Python role workers, multiplexes their stdout into an SSE stream the browser subscribes to, and serves built project artefacts as static files under a strict CSP. The browser only ever talks to the orchestrator over /api/sim/*, and agents only ever talk to each other over AXL.
Seven envelope types carry the lifecycle: bounty.posted, team.formed, project.submitted, rubric.published, verdict.published, phase.tick, hackathon.closed. The worker runtime drains GET /recv, dedupes on (sender_id, type, payload_id), dispatches each envelope to its handler, re-forwards it to peers on first receipt, and schedules two delayed re-broadcasts through a fanout queue so peers that join late still see every envelope. The peer-enumeration algorithm (union of direct peers and spanning tree, drop self) and the urllib transport helpers are ported verbatim from Gensyn's collaborative-autoresearch-demo so a maintainer can diff the two side by side.
Each judge worker also runs an aiohttp side-car on a spawner-allocated port. That judge's AXL Go binary is configured with router_addr and router_port, which makes inbound MCP traffic forward to the side-car. During the JUDGING phase the organiser drives a typed JSON-RPC tools/call to score_project on every judge over POST /mcp/{peer}/judge. The verdict still rides the envelope path; the MCP call confirms the same verdict over a second transport so the live page shows a real JSON-RPC round trip. Two integration tests boot two real AXL Go binaries with distinct ports and ed25519 keys and assert envelope delivery and the MCP round trip without the orchestrator in the loop, which is the strongest single artefact for proving the AXL integration is real.
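The JSON-RPC round trip to a judge can be illustrated by the request the organiser would POST to /mcp/{peer}/judge. The argument names (`project_id`, `rubric`) are assumptions for illustration; only the `tools/call` method and the `score_project` tool name come from the description above.

```python
def mcp_score_call(project_id: str, rubric: dict, call_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call request targeting the score_project tool.

    Argument names under "arguments" are illustrative, not the project's
    actual schema.
    """
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {
            "name": "score_project",
            "arguments": {"project_id": project_id, "rubric": rubric},
        },
    }
```

The judge's aiohttp side-car would answer with a matching `{"jsonrpc": "2.0", "id": call_id, "result": ...}` body, which is the round trip the live page displays.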
Gensyn AXL is the wire on every cross-agent path, so the simulation literally cannot run without it (kill the AXL processes and the sim freezes; kill the orchestrator and only the UI dies). Anthropic Claude Haiku 4.5 produces every bounty, every project HTML, and every verdict when an API key is set; per-call failures (rate limit, timeout) surface on the SSE stream as decision.anthropic_failed and only that one decision falls back to the deterministic stub, so other calls in the same sim still try Claude. Every running sim is mirrored to disk as one JSON line per SSE event, and /replay/<runId> reuses the live page's components against the recording, so a judge can stream a recorded run end to end without a local install.
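The one-JSON-line-per-SSE-event mirroring lends itself to a very small sketch. Function names and the file layout are assumptions; the point is that the recording format is plain JSONL, so /replay/<runId> can stream a recorded run back through the same components the live page uses.

```python
import json
from pathlib import Path


def mirror_event(path: Path, event: dict) -> None:
    """Append one SSE event to the run recording as a single JSON line."""
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")


def load_replay(path: Path) -> list[dict]:
    """Read a recorded run back in order, one event per non-empty line."""
    with path.open(encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

Because each line is self-contained, a partially written recording (say, from a crashed sim) still replays cleanly up to the last complete event.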

