2in

An AI twin you actually own. Drafts, researches, remembers in your voice — on 0G.

Created at: Open Agents

Project Description

2in is a personal digital twin you actually own. Sign in with a wallet, name your director, and within two minutes you have a roster of eight role-typed AI specialists — Writer, Researcher, Editor, Strategist, Companion (core) plus Voice, Visual, Negotiator (opt-in) — each minted as its own ERC-7857 iNFT under your wallet on the 0G Galileo testnet.

The director never answers directly: it picks one of eleven typed orchestration patterns per request and dispatches the right specialists, who collaborate by reading and writing six typed memory slices (episodic, semantic, relationship, temporal, procedural, working) that live on 0G KV streams. Drafts pass through a SHIP/EDIT review gate against your rejection ledger before reaching you. Approved outputs reinforce memory, and every memory delta snapshots back to chain via updateMetadata, so the iNFT itself accumulates verifiable proof that your twin has evolved.

Conversations are snapshotted to 0G Storage on every message, so you can wipe localStorage, log in on another device, and restore the full thread, plus the work-pane state of every tool call, from on-chain pointers: no external database, ever. Each specialist is transferable, delegatable, and survives any vendor going down. Live at https://2in.vercel.app.

How it's Made

Stack Overview

  • Frontend: Vite + React 18 SPA (Vercel)
  • Backend: Fastify + TypeScript + zod (Render)
  • Smart Contracts: Foundry + Solidity
    • TwinINFT (ERC-7857 reference implementation)
    • Deployed on Galileo at 0xf454c04ee5365f9a195a00267e4a1dba6a7b9395
  • Auth & Signing: Privy (embedded wallets)

0G Stack (Production Usage)

0G Chain (Galileo – 16602)

All key actions are executed as on-chain transactions:

  • mint
  • iCloneFrom
  • safeTransferFrom
  • delegateAccess
  • updateMetadata (snapshot-driven)

Minting strategy:

  • Sequential execution with explicit nonce control
  • Uses getPendingNonce + writeContract with manual nonce assignment
  • Avoids desync issues with Privy’s local nonce manager during burst transactions
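
A minimal sketch of that loop, assuming viem under Privy's embedded wallet. The mint(role) signature and the twinAbi fragment are illustrative, and getPendingNonce here reduces to viem's getTransactionCount with blockTag: 'pending':

```ts
import type { Address, Hex, PublicClient, WalletClient } from 'viem';

const TWIN_INFT: Address = '0xf454c04ee5365f9a195a00267e4a1dba6a7b9395';

// Illustrative one-function ABI fragment; the real ERC-7857 ABI is larger.
const twinAbi = [
  {
    type: 'function',
    name: 'mint',
    stateMutability: 'nonpayable',
    inputs: [{ name: 'role', type: 'string' }],
    outputs: [],
  },
] as const;

async function mintRoster(
  publicClient: PublicClient,
  walletClient: WalletClient,
  account: Address,
  roles: string[],
): Promise<Hex[]> {
  // 'pending' counts mempool txs too, so a burst of mints never reuses a
  // nonce the signer has not yet observed.
  let nonce = await publicClient.getTransactionCount({
    address: account,
    blockTag: 'pending',
  });

  const hashes: Hex[] = [];
  for (const role of roles) {
    // Manual nonce assignment sidesteps Privy's local nonce manager, which
    // can desync during burst transactions.
    hashes.push(
      await walletClient.writeContract({
        address: TWIN_INFT,
        abi: twinAbi,
        functionName: 'mint',
        args: [role],
        account,
        chain: null, // skip the chain-mismatch check in this sketch
        nonce: nonce++,
      }),
    );
  }
  return hashes;
}
```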

Transaction handling:

  • Receipts are polled with tolerance for indexer delays
  • A transaction is treated as confirmed even when getTransactionReceipt lags (delays of ~120 s observed)
  • Explorer links remain reliable for verification
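
Roughly, the polling wrapper looks like this (a sketch; the interval is illustrative, and only the ~120 s tolerance mirrors the delay noted above):

```ts
import type { Hash, PublicClient } from 'viem';

async function waitForReceiptWithTolerance(
  client: PublicClient,
  hash: Hash,
  { intervalMs = 3_000, toleranceMs = 120_000 } = {},
) {
  const deadline = Date.now() + toleranceMs;
  while (Date.now() < deadline) {
    try {
      return await client.getTransactionReceipt({ hash });
    } catch {
      // Receipt not indexed yet: wait and retry.
      await new Promise((r) => setTimeout(r, intervalMs));
    }
  }
  // Indexer still behind after the tolerance window: assume confirmed and
  // leave the explorer link as the authoritative check.
  return { hash, status: 'assumed-confirmed' as const };
}
```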

0G Compute

Chat:

  • Model: qwen-2.5-7b-instruct
  • Accessed via Router endpoint with bearer authentication
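
A minimal sketch of the chat call; the /v1/chat/completions path and response shape follow the OpenAI-style convention and are assumptions, as are the env var names:

```ts
async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${process.env.OG_ROUTER_URL}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OG_ROUTER_TOKEN}`, // bearer auth
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'qwen-2.5-7b-instruct',
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`router error ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```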

Image Editing:

  • Model: qwen-image-edit-2511
  • Uses /v1/proxy/images/edits multipart endpoint
  • Requires a separate provider token (distinct from chat auth)
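
A sketch of the multipart call; the form field names mirror the OpenAI images/edits convention and are assumptions, while the path and the separate provider token are as described above:

```ts
async function editImage(image: Blob, prompt: string) {
  const form = new FormData();
  form.append('model', 'qwen-image-edit-2511');
  form.append('prompt', prompt);
  form.append('image', image, 'input.png');

  const res = await fetch(`${process.env.OG_ROUTER_URL}/v1/proxy/images/edits`, {
    method: 'POST',
    // Provider token, distinct from the chat bearer above.
    headers: { Authorization: `Bearer ${process.env.OG_PROVIDER_TOKEN}` },
    body: form,
  });
  if (!res.ok) throw new Error(`image edit failed: ${res.status}`);
  return res.json(); // response shape (URL vs. base64) is an assumption
}
```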

Onboarding:

  • Persona extraction via:
    • 18-question conversational flow
    • Idol-based voice anchoring (Naval, PG, Karpathy, etc.)
  • Produces deterministic, typed memory initialization
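
As an illustration of what "deterministic, typed" means here, a zod schema in the spirit of the real one. Only the six slice names come from the project; the field shapes and init values are assumptions:

```ts
import { z } from 'zod';

const MemorySlices = z.object({
  episodic: z.array(z.string()),
  semantic: z.record(z.string()),
  relationship: z.record(z.string()),
  temporal: z.array(z.object({ at: z.string(), note: z.string() })),
  procedural: z.array(z.string()),
  working: z.array(z.string()),
});
type MemorySlices = z.infer<typeof MemorySlices>;

// Deterministic: the same 18 answers plus idol choice always yield the same
// starting slices, so re-running onboarding is reproducible.
function initMemory(answers: string[], idol: string): MemorySlices {
  return MemorySlices.parse({
    episodic: [],
    semantic: { onboarding: JSON.stringify(answers) },
    relationship: {},
    temporal: [{ at: new Date(0).toISOString(), note: 'onboarded' }],
    procedural: [`anchor voice to ${idol}`],
    working: [],
  });
}
```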

0G Storage

Entire persistence layer is built on 0G Storage:

  • No external database
  • No disk fallback

Boot requirements:

  • Server requires STORAGE_PRIVATE_KEY to start

KV architecture:

  • Implemented via ZeroGKvBackend (services/storage.ts)
  • Uses:
    • Batcher
    • StreamDataBuilder
    • KvClient (from @0gfoundation/0g-ts-sdk)

Write strategy:

  • All writes are batched into a single Batcher.exec() per flush
  • Submitted to the FixedPriceFlow contract
  • Flow contract address resolved dynamically via: getStatus().networkIdentity.flowAddress
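
A loose sketch of that flush path, using minimal stand-in types instead of the real Batcher/StreamDataBuilder signatures (which vary by SDK version):

```ts
// Stand-ins for the @0gfoundation/0g-ts-sdk types used in ZeroGKvBackend.
type StreamDataBuilder = {
  set(streamId: string, key: Uint8Array, value: Uint8Array): void;
};
type BatcherLike = {
  streamDataBuilder: StreamDataBuilder;
  exec(): Promise<[unknown, Error | null]>; // one submission to FixedPriceFlow
};
declare function makeBatcher(flowAddress: string): BatcherLike; // wraps new Batcher(...)
declare const indexer: {
  getStatus(): Promise<{ networkIdentity: { flowAddress: string } }>;
};

const STREAM_ID = '0x…'; // the twin's KV stream id (elided)
const pending = new Map<string, Uint8Array>();
const enc = new TextEncoder();

function kvSet(key: string, value: Uint8Array) {
  pending.set(key, value); // buffered; nothing touches chain until flush
}

async function flush() {
  if (pending.size === 0) return;
  // Flow contract address resolved dynamically, never hard-coded.
  const { networkIdentity } = await indexer.getStatus();
  const batcher = makeBatcher(networkIdentity.flowAddress);
  for (const [key, value] of pending) {
    batcher.streamDataBuilder.set(STREAM_ID, enc.encode(key), value);
  }
  const [, err] = await batcher.exec(); // single Batcher.exec() per flush
  if (err) throw err;
  pending.clear();
}
```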

File Persistence

Tool outputs (image edits, video processing):

  • Stored in 0G Indexer
  • Referenced via KV pointer: twin:42:upload → filename → gateway URL

On server restart (e.g., Render /tmp wipe):

  • File requests resolve via KV lookup
  • Endpoint returns a 302 redirect to the gateway URL
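
A sketch of the lookup-and-redirect handler; the route path and kv.get helper are illustrative:

```ts
import Fastify from 'fastify';

declare const kv: { get(key: string): Promise<string | null> };

const app = Fastify();

app.get('/files/:name', async (req, reply) => {
  const { name } = req.params as { name: string };
  // twin:42:upload → filename → gateway URL, written at upload time.
  const gatewayUrl = await kv.get(`twin:42:upload:${name}`);
  if (!gatewayUrl) return reply.code(404).send({ error: 'unknown file' });
  return reply.redirect(gatewayUrl); // 302 by default
});
```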

Cold Start Optimization

Problem:

  • KV iteration requires one RPC per entry (~50–200 ms)
  • A full scan (~300 entries) added 30+ seconds to cold boot

Solution:

  • Periodic cache snapshot:
    • Full in-memory cache serialized to JSON
    • Uploaded to 0G Indexer
    • Anchored by rootHash

Boot flow:

  1. Fetch root hash from KV: twin:42:meta:cache-snapshot
  2. Retrieve snapshot from Indexer
  3. Hydrate memory

Result:

  • Cold start reduced to ~1 second
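
The boot path is essentially one KV read, one Indexer download, and a JSON parse, instead of ~300 per-entry RPCs. A sketch, where kv, indexer, and the download helper are stand-ins for the real clients:

```ts
declare const kv: { get(key: string): Promise<string | null> };
declare const indexer: { download(rootHash: string): Promise<Uint8Array> };

const cache = new Map<string, unknown>();

async function hydrateFromSnapshot(): Promise<boolean> {
  const rootHash = await kv.get('twin:42:meta:cache-snapshot'); // step 1
  if (!rootHash) return false; // no snapshot yet: fall back to a full KV scan
  const blob = await indexer.download(rootHash);                // step 2
  const entries: [string, unknown][] =
    JSON.parse(new TextDecoder().decode(blob));
  for (const [k, v] of entries) cache.set(k, v);                // step 3
  return true;
}
```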

Conversation Persistence

Each message triggers a debounced snapshot (~1.5s):

  • Includes:
    • Chat messages
    • Work-pane state:
      • Writer drafts
      • Researcher outputs
      • Tool call results

On client load:

  • /api/chat/threads fetches snapshots
  • Gateway blobs reconstruct full session state

Outcome:

  • Complete restoration of working context, not just chat history
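
A sketch of the debounce: each message re-arms a ~1.5 s timer, so a burst of messages collapses into one Storage upload plus one KV pointer update. uploadSnapshot, kv.set, and the pointer key are illustrative:

```ts
type Session = { messages: unknown[]; workPane: Record<string, unknown> };

declare function uploadSnapshot(session: Session): Promise<string>; // → rootHash
declare const kv: { set(key: string, value: string): Promise<void> };

let timer: ReturnType<typeof setTimeout> | undefined;

function onMessage(session: Session) {
  clearTimeout(timer);
  timer = setTimeout(async () => {
    // One blob carries chat plus work-pane state (drafts, research, tool results).
    const rootHash = await uploadSnapshot(session);
    await kv.set('twin:42:thread:current', rootHash); // read back by /api/chat/threads
  }, 1_500);
}
```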

Orchestration & Reliability

Intent routing:

  • Primary: Qwen-based classifier
  • Fallback: regex-based routing
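
A sketch of the two-tier router; the patterns and the default role are illustrative assumptions:

```ts
const KNOWN_ROLES = new Set(['writer', 'researcher', 'editor', 'strategist', 'companion']);

const PATTERNS: [RegExp, string][] = [
  [/\b(write|draft|blog|essay)\b/i, 'writer'],
  [/\b(research|find|source|cite)\b/i, 'researcher'],
  [/\b(edit|revise|tighten)\b/i, 'editor'],
];

async function routeIntent(
  message: string,
  classify: (m: string) => Promise<string>, // the qwen-2.5-7b-instruct call
): Promise<string> {
  try {
    const label = (await classify(message)).trim().toLowerCase();
    if (KNOWN_ROLES.has(label)) return label;
  } catch {
    // Model unavailable or malformed output: fall through to regex.
  }
  for (const [re, role] of PATTERNS) if (re.test(message)) return role;
  return 'companion'; // assumed default when nothing matches
}
```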

Editor pipeline:

  • Outputs strict formats:
    • "SHIP draft text"
    • "EDIT revised draft text"
  • Parsed by resolveFinalOutput()
  • Final attribution assigned to Writer
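
A minimal sketch of resolveFinalOutput(); only the two format strings come from the pipeline above, and the exact parsing rules are assumptions:

```ts
function resolveFinalOutput(writerDraft: string, editorReply: string): string {
  const reply = editorReply.trim();
  if (reply.startsWith('SHIP')) {
    // "SHIP draft text": approved; prefer the echoed text, else the draft.
    return reply.slice('SHIP'.length).trim() || writerDraft;
  }
  if (reply.startsWith('EDIT')) {
    // "EDIT revised draft text": the revision replaces the draft, but final
    // attribution stays with the Writer.
    return reply.slice('EDIT'.length).trim();
  }
  return writerDraft; // malformed Editor output: fail safe to the draft
}
```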

Long-running tools:

  • Client-side timeout: 100s (Promise.race)
  • Recovery:
    • Poll upload list
    • Render result once available
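
A sketch of the guard-and-recover pattern; pollUploads and the retry cadence are illustrative:

```ts
async function runLongTool<T>(
  call: Promise<T>,
  pollUploads: () => Promise<T | null>, // checks the upload list for the result
  maxPolls = 60,
): Promise<T> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error('tool timeout')), 100_000),
  );
  try {
    return await Promise.race([call, timeout]);
  } catch {
    // The tool may still complete server-side: watch for its upload to land.
    for (let i = 0; i < maxPolls; i++) {
      const result = await pollUploads();
      if (result !== null) return result;
      await new Promise((r) => setTimeout(r, 5_000));
    }
    throw new Error('tool did not complete');
  }
}
```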

UI resilience:

  • Orphaned tool executions auto-resume on reload via polling

Memory Access Model

Role-based write permissions enforced at orchestrator level:

  • Writer → cannot write procedural memory
  • Editor → can write procedural memory
  • Researcher → cannot write relationship memory
  • Companion → can write relationship memory

Ensures coordination integrity independently of LLM behavior
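
A sketch of what that enforcement looks like: a static table keyed by role and slice, checked before any model output is persisted. The four rules listed above are from the project; the remaining table cells are illustrative:

```ts
type Slice =
  | 'episodic' | 'semantic' | 'relationship'
  | 'temporal' | 'procedural' | 'working';
type Role = 'writer' | 'editor' | 'researcher' | 'companion';

const WRITE_ACL: Record<Role, ReadonlySet<Slice>> = {
  writer:     new Set<Slice>(['episodic', 'working']),             // no procedural
  editor:     new Set<Slice>(['procedural', 'working']),
  researcher: new Set<Slice>(['semantic', 'episodic', 'working']), // no relationship
  companion:  new Set<Slice>(['relationship', 'episodic', 'working']),
};

function assertCanWrite(role: Role, slice: Slice): void {
  if (!WRITE_ACL[role].has(slice)) {
    // Rejected deterministically, whatever the model asked to do.
    throw new Error(`${role} may not write ${slice} memory`);
  }
}
```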


Scope & Current Constraints

0G DA layer:

  • gRPC-only
  • No public hosted disperser
  • No TypeScript SDK on Galileo

Current approach:

  • Use 0G Storage Indexer blobs as the conversation log substrate
  • Maintains content-addressed, durable storage semantics

Not yet implemented (explicitly surfaced in UI):

  • Per-specialist LoRA fine-tuning
  • OAuth ingestion (Twitter, Notion, etc.)
  • TeeML private-mode routing