
Goldman Stacked

AI council vets cross-chain DAO proposals with quadratic voting and blocks whale attacks


Created At

ETHGlobal Prague

Winner of

LayerZero - lzRead Track 1st place

Project Description

Goldman Stacked is an autonomous, AI‐driven governance council designed to solve two core problems in modern DAOs: voter apathy and single‐chain whale domination. In today's multi‐chain ecosystem, where liquidity, applications, and governance span Arbitrum, Hedera, Polygon, Base, and more, individual token holders face significant friction whenever they try to participate in on‐chain votes. Manually bridging assets, calculating weighted votes on each network, and interacting with multiple governance UIs is error‐prone and time‐consuming. As a result, many DAO members simply stop voting, believing their single‐chain vote can't compete against large token holders (whales) who concentrate liquidity on one network.

Goldman Stacked addresses these challenges by standing up a council of AI agents (the "Bot Council") that:

  1. Vets Every Incoming Proposal Before It Hits the Main Voting Stage

Whenever a random DAO member (or external bot) submits a new on‐chain proposal, the AI Council detects it, fetches it from the appropriate contract, and places it into a "Pre‐Proposal" review queue.

Instead of letting every questionable or malicious proposal go live, the Bot Council holds an internal deliberation (via a Telegram chat thread) and casts a binary vote: Approve or Reject. If the vote is "Reject," the proposal is automatically blocked and a rejection notification is broadcast in Telegram.

  2. Holds Cross‐Chain Assets to Establish Eligibility & Voting Weight

Each AI agent in the council controls small token balances on every supported blockchain (Arbitrum, Hedera, Polygon, Base, etc.). This "distributed treasury" ensures that no single chain's liquidity concentration can dominate the council's internal votes.

Because the AI council collectively holds tokens across multiple chains, they can compute combined, quadratic voting weights for each agent, eliminating the "one‐chain whale" problem.

  3. Computes Quadratic Voting Weights Across All Chains Using LayerZero

When a new proposal enters the Pre‐Proposal queue, the council agents, through a dedicated FSM state, invoke LayerZero message calls to retrieve each agent's on‐chain token balances simultaneously.

They apply a square‐root function on each chain's balance to calculate that agent's voting weight, then sum these weights across every agent to produce the final council voting result. This cross‐chain, quadratic calculation ensures that agents with smaller, well‐distributed holdings still have meaningful voting influence, while single‐chain whales face drastically diminishing returns.
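The square-root weighting above can be sketched in a few lines. This is a minimal illustration with hypothetical balances, not the project's actual implementation:

```python
import math

def quadratic_weight(balances_by_chain):
    """Sum the square roots of an agent's per-chain token balances.

    Splitting the same total across chains yields more weight than
    concentrating it on one chain, which is what penalizes
    single-chain whales.
    """
    return sum(math.sqrt(b) for b in balances_by_chain.values())

# Hypothetical holders, both with 400 tokens in total:
distributed = {"arbitrum": 100, "hedera": 100, "polygon": 100, "base": 100}
whale = {"arbitrum": 400, "hedera": 0, "polygon": 0, "base": 0}

print(quadratic_weight(distributed))  # 40.0 (4 * sqrt(100))
print(quadratic_weight(whale))        # 20.0 (sqrt(400))
```

Even with identical totals, the well-distributed holder ends up with twice the voting weight of the single-chain whale.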

  4. Publishes or Blocks Proposals Automatically, Then Notifies Human DAO Members

If the Bot Council "Approves" in the Pre‐Proposal vote, the proposal gets presented to the DAO members. In this state, a package containing the proposal's metadata (title, description, discussion link) and a "virtual council vote" payload is sent (via LayerZero) to every supported chain's DAO governance contract, effectively creating a new "human voting" proposal on‐chain.

A Telegram notification is broadcast to all human DAO members: "AI council approved Proposal #XYZ. On‐chain voting is now open on Base, Polygon, Hedera, and Arbitrum. Click here to cast your vote."

If the Bot Council "Rejects," a Telegram alert is sent immediately: "Proposal #XYZ was rejected by the AI council. Rationale: [brief summary]."

  5. Monitors On‐Chain for Final Human Votes and Executes Successful Proposals

After notifying human members that a proposal is live, the agents enter an "Idle" state in which they continuously poll each chain's governance contract.

If and when human token holders cast enough Yes votes on a given chain, the proposal reaches its quorum/threshold. As soon as this "APPROVED_BY_DAO" event fires, the agents transition to an ExecuteProposalRound state.

In ExecuteProposalRound the AI council, again via LayerZero, sends the final execution transactions (e.g., treasury disbursement, parameter changes, multisig calls) to every on‐chain module in parallel. This simultaneous cross‐chain execution ensures no single chain can be left behind, and malicious actors can't exploit timing gaps.

Once the execution completes, a final Telegram notification is broadcast: "Proposal #XYZ has been executed on all chains."
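The fan-out in ExecuteProposalRound can be pictured as concurrent per-chain calls. The sketch below uses `asyncio.gather` with a placeholder send function; the real system dispatches through LayerZero, so `execute_on_chain` here is purely illustrative:

```python
import asyncio

async def execute_on_chain(chain, proposal_id):
    """Placeholder for the per-chain LayerZero execution transaction."""
    await asyncio.sleep(0)  # stands in for the actual cross-chain send
    return f"{proposal_id} executed on {chain}"

async def execute_everywhere(
    proposal_id, chains=("arbitrum", "hedera", "polygon", "base")
):
    """Fire execution transactions for all chains concurrently, mirroring
    the simultaneous cross-chain execution described above."""
    return await asyncio.gather(
        *(execute_on_chain(c, proposal_id) for c in chains)
    )

results = asyncio.run(execute_everywhere("XYZ"))
```

Launching all sends in one `gather` is what closes the timing gaps a chain-by-chain rollout would leave open.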

How it's Made


Architecture Overview

Goldman Stacked is built as a distributed, event-driven system combining on-chain smart contracts, cross-chain messaging, AI agents, and a lightweight desktop front end. The core pieces are:

  • LayerZero for cross-chain communication including lzRead and lzReduce for fetching data from different chains
  • Solidity contracts on multiple EVM-compatible chains (Arbitrum, Hedera, Polygon, Base)
  • Autonolas agent framework running Python-based AI agents
  • Akash LLM models driving persona generation and debate
  • Ponder for on-chain event indexing and query
  • Svelte + Tauri for a minimal desktop GUI dashboard
  • Docker for local service orchestration and testing

Below, we explain how each technology fits into the system and how they're stitched together.


1. Smart Contracts & LayerZero Integration

  1. Solidity DAO Governance Contracts

    • We forked an existing ERC-20 + OpenZeppelin Governor framework and deployed it on: Arbitrum, Hedera, Polygon, and Base.
    • Each contract implements basic proposal creation, voting, and execution logic. We added a small "Bot Council Approval" flag to proposals; only proposals that pass AI vetting can become active.
  2. LayerZero Omnichain Messaging

    • To propagate a new "approved proposal" from one chain to all others, we integrated the LayerZero lzRead and lzReduce paradigms along with regular sends and receives.
    • On the source chain (Base), once the Bot Council votes "APPROVED" in Python, the backend sends a cross-chain message via LayerZero's send() API to every target chain's governance management contract.
    • Each target handler unpacks the payload and re-creates an on-chain proposal with identical metadata, ensuring all chains see the same "live" proposal at effectively the same time.
    • In tests, we used the LayerZero mock endpoints for fast iteration, then switched to real mainnet endpoints once basic flows were proven.
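The pack/unpack symmetry between source and target handlers can be sketched as a round-trip. JSON is used below for readability; the real contracts would ABI-encode the payload before handing it to LayerZero, so this is a shape illustration only:

```python
import json

def pack_proposal(title, description, discussion_link):
    """Source-chain side: serialize proposal metadata for the
    cross-chain message body."""
    return json.dumps(
        {"title": title, "description": description, "link": discussion_link}
    ).encode()

def unpack_proposal(payload):
    """Target-chain side: recover identical metadata from the message,
    so every chain re-creates the same proposal."""
    return json.loads(payload.decode())
```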

2. AI Agents & Autonolas Framework

  1. Autonolas Agent Framework (Python)

    • We extended the Open-Autonomy/Autonolas template to spawn 7 AI agents, each with a persona generated at startup.
    • Each agent runs as a Docker container with its own Python process. All agents connect to a shared RabbitMQ message bus (also in Docker) for "internal council discussion" messages and to an in-memory SQLite instance storing persona cache and state transitions.
    • The FSM (finite-state machine) is implemented using Autonolas's FsmBehaviour class. At each state, agents either send LayerZero calls, post Telegram messages, or wait for events.
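The round-to-round flow can be sketched as a plain state table. This is a framework-free simplification using the round names from this write-up; the real implementation lives inside Autonolas's FsmBehaviour:

```python
from enum import Enum, auto

class State(Enum):
    CHECK_TELEGRAM_ROUND = auto()    # watch for new proposals
    PRE_PROPOSAL_ROUND = auto()      # council debates and votes
    IDLE = auto()                    # poll chains for human votes
    EXECUTE_PROPOSAL_ROUND = auto()  # fire cross-chain execution
    DONE = auto()

TRANSITIONS = {
    (State.CHECK_TELEGRAM_ROUND, "new_proposal"): State.PRE_PROPOSAL_ROUND,
    (State.PRE_PROPOSAL_ROUND, "council_approved"): State.IDLE,
    (State.PRE_PROPOSAL_ROUND, "council_rejected"): State.DONE,
    (State.IDLE, "approved_by_dao"): State.EXECUTE_PROPOSAL_ROUND,
    (State.EXECUTE_PROPOSAL_ROUND, "executed"): State.DONE,
}

def step(state, event):
    """Return the next state, or stay put if the event doesn't apply."""
    return TRANSITIONS.get((state, event), state)
```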
  2. Akash LLM Models for Persona Generation

    • Each agent's persona is generated via a call to an Akash-hosted LLM (e.g., Meta-llama-3-instruct). We crafted a prompt template that asks the model to define political bias, risk appetite, communication style, and governance heuristics.
    • Those personas are cached in ./.persona_cache.json so subsequent runs skip re-generation.
    • During "PreProposalRound," agents use a second LLM prompt (via LlmChatCompletionHandler) to produce debate snippets, objections, and rhetorical flourishes in a Telegram thread.
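The persona-caching step amounts to a read-through cache keyed by agent. A minimal sketch, where `generate` stands in for the Akash-hosted LLM call (an assumption, not the project's actual client code):

```python
import json
import os

CACHE_PATH = "./.persona_cache.json"  # the cache file named in the write-up

def load_or_generate_personas(agent_ids, generate, cache_path=CACHE_PATH):
    """Load cached personas, calling `generate(agent_id)` only on misses.

    Hits come back from disk, so restarts skip LLM re-generation.
    """
    cache = {}
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            cache = json.load(f)
    missing = [a for a in agent_ids if a not in cache]
    for agent_id in missing:
        cache[agent_id] = generate(agent_id)
    if missing:
        with open(cache_path, "w") as f:
            json.dump(cache, f, indent=2)
    return cache
```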
  3. Telegram Integration

    • We created a private Telegram group and configured a simple Python wrapper (via python-telegram-bot library) to allow agents to post and read messages.
    • When a new proposal enters CheckTelegramRound, each agent reads the proposal summary and posts its position. After a fixed time window (e.g., 30 seconds), the LLM-based "poll" is called, and each agent's vote is recorded.

3. On-Chain Event Indexing with Ponder

  • We use Ponder (https://ponder.build/) to index contract events in real time. For each DAO contract on Goerli, Mumbai, and Base Goerli, a Ponder script listens for:
    1. ProposalCreated
    2. ProposalExecuted
    3. VoteCast
  • Indexed data is exposed via a local GraphQL endpoint. Our Python agents query this endpoint to know when a human vote passes or fails.
  • Ponder runs in Docker Compose alongside RabbitMQ and a local Postgres instance for quick local development. In production, we point it to testnet RPC endpoints.
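Once VoteCast events are indexed, deciding whether a proposal has passed is a small tally. The dict shape below mirrors what rows returned by a Ponder GraphQL query might look like; field names and the 50% threshold are illustrative assumptions, not the project's actual schema:

```python
def proposal_passed(votes, quorum, threshold=0.5):
    """Decide pass/fail from indexed VoteCast rows.

    `votes` is a list of dicts like {"voter": ..., "support": True,
    "weight": 10}. A proposal passes when total weight reaches quorum
    and the yes-share exceeds the threshold.
    """
    total = sum(v["weight"] for v in votes)
    if total < quorum:
        return False  # quorum not reached on this chain
    yes = sum(v["weight"] for v in votes if v["support"])
    return yes / total > threshold
```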

4. Frontend (Svelte + Tauri)

  1. Svelte UI

    • We built a minimal Svelte app that displays:
      • Current proposals (with status: Pending AI, Pending Human, Executed, Rejected)
      • Agent Council vote tallies (fetched from a lightweight Python REST endpoint)
      • On-chain vote counts (via Ponder GraphQL queries)
    • The proposal list updates automatically every 5 seconds via WebSocket to our Python backend.
  2. Tauri Desktop Wrapper

    • To package the Svelte app as a cross-platform desktop application, we used Tauri. This allows Windows, macOS, and Linux users to install a native binary.
    • The Tauri backend is a thin Rust layer that simply serves the Svelte-generated assets and proxies WebSocket connections to the local Python server (running on localhost:8000) for real-time updates.

5. Containerization & Deployment

  1. Docker Compose

    • We defined a single docker-compose.yml with services for:
      • postgres: Metadata storage for Ponder indexing
      • ponder: On-chain event indexer
      • agents: One container per AI agent (5 total), each running the same Python image but with different persona seeds
      • python-backend: Central server exposing REST/WS endpoints for the Svelte frontend and handling LayerZero calls
      • tauri-ui: Svelte + Tauri wrapper (in dev mode, runs locally; in production, built as a static binary)
    • Each AI agent container mounts a volume to persist persona_cache.json, so personas survive container restarts.
  2. Akash Deployment

    • For the hackathon scale, we deployed the Python backend and a subset of agents to a free-tier Akash instance.
    • We used Akash's "LLM as a service" to host the Meta-llama-3-instruct model. Agents send prompts to that model via HTTP API calls.
    • LayerZero relayer nodes are hosted via Docker on a digital ocean droplet for testnet connectivity (Ethereum Goerli, Polygon Mumbai, Base Goerli).

6. Hacky Shortcuts & Notable Tricks

  • Overridden LayerZero functions: we overrode the standard send path to allow sending messages to multiple chains iteratively within a single transaction.
  • Ponder Quickstart: We bypassed manual subgraph creation by using Ponder's "zero-config" mode, pointing directly at Goerli and Mumbai RPC URLs. This saved days of GraphQL schema wiring.
  • Persona Seed Files: Each agent container mounts a small JSON with a "seed" value so that multiple runs generate consistent personas (handy when debugging).
  • Telegram Poll via LLM: Instead of using Telegram's native poll feature, we generate a 1–5 scale sentiment score via LLM completion and interpret results as "yes/no."
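The LLM-poll trick boils down to parsing a 1–5 score out of a completion and thresholding it. A hypothetical sketch (the regex and the cut-off at 3 are assumptions):

```python
import re

def llm_poll_to_vote(completion, threshold=3):
    """Map an LLM's 1-5 sentiment completion to a binary vote.

    Scores above the threshold count as "yes"; an unparseable
    completion returns None so the caller can retry.
    """
    match = re.search(r"[1-5]", completion)
    if match is None:
        return None
    return int(match.group()) > threshold
```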
