
DecentraMind

🧠 DecentraMind – Run private LLMs on your own device. Modular. Encrypted. On-chain.


Created at: Unite Defi

Project Description

DecentraMind is a fully decentralized AI assistant infrastructure that lets users run, control, and monetize their own large language models (LLMs) without depending on centralized AI providers. It addresses data-privacy, surveillance, cloud-dependency, and regulatory-compliance concerns by enabling:

Edge-first local inference: Users can run quantized LLMs (like Mistral or LLaMA) on modular hardware called the AI Box, built on top of a secure Linux-based OS (BoxOS), with support for Docker or Firecracker microVM isolation.

Blockchain-based model registry and reputation system: Each node (LLM provider) registers a signed metadata profile on-chain (using FulaChain or an Ethereum L2 rollup via OP Stack). Users discover available nodes, view pricing and performance, and pay per inference using bridged USDC or native tokens.

Secure RPC protocol: Prompts and responses are encrypted end-to-end using AES-256-GCM, with keys exchanged via X25519, ensuring zero plaintext visibility across the network.

Optional ZK-Audit & TEE: Inference results can be accompanied by zero-knowledge proofs to guarantee integrity, and execution may occur within AMD SEV-SNP or Intel TDX enclaves, providing verifiable privacy.

Open-source, plugin-extensible ecosystem: Developers can register custom models, agent plugins, or domain-specific tools (e.g. legal, medical) and monetize them via a dApp-integrated marketplace.

This infrastructure reclaims data sovereignty for individuals, enables regulatory-safe enterprise AI deployment, and creates a crypto-native economy for decentralized LLM services.

How it's Made

DecentraMind is composed of tightly integrated layers across AI runtime, secure communication, blockchain logic, and edge hardware. Here's how it's built:

  1. AI Runtime Node

Containerized LLM runtimes (using Docker or Firecracker microVMs).

Supports ollama, llama.cpp, ONNX, and other frameworks.

Enforces memory, CPU, and execution time limits per request.

Runs in ephemeral sandboxes with no persistent storage by default.

  2. Secure RPC Protocol

The client encrypts the message payload using a randomly generated AES-256 session key.

The session key is encrypted using the node’s Curve25519 public key.

All communication happens over TLS with signed headers.

Result is decrypted only by the client, preserving full confidentiality.

  3. Blockchain Layer

Built on an Optimistic Rollup (OP Stack) for low-fee, high-speed smart contract execution.

Model nodes are registered via smart contracts with metadata stored in IPFS and referenced via CID.

Payments handled via USDC (ERC-20) using gasless permit-based approval.

Usage logs are batched and committed to chain via Merkle roots; nodes that fail audits are slashed.

  4. Reputation & Audit System

Oracles and auditor bots submit random queries.

Nodes generate ZK-proofs (via Groth16 or PLONK) that verify:

Execution occurred in a valid TEE (e.g., SEV-SNP).

Model integrity (hash of weights).

Response consistency.

Failing to submit valid proof leads to on-chain slashing.

  5. Edge AI Box (Hardware Layer)

Embedded OS: hardened Linux distro with Docker/containerd.

Hardware: ARM64, x86_64, or RISC-V with optional NPU/GPU acceleration.

Features TPM-backed secure boot, encrypted local storage, and LAN-based model execution.

Can act as a local node or a router to remote nodes.

  6. Frontend dApp (Next.js + Tailwind)

Connects via WalletConnect/MetaMask.

Allows users to:

Browse and filter available model nodes.

View SLA/reputation scores.

Start encrypted sessions and pay per query.

Plugin system supports custom workflows, voice input/output, and file uploads.

  7. Notable Hacks / Innovations

Encrypted usage batching with Merkle tree proofs minimizes gas costs while preserving auditability.

TEE+ZK combo enables real-time secure inference with <15% added latency overhead.

Gasless payments via EIP-2612 permit signatures reduce friction for end users.

Plugin API sandbox allows untrusted WASM agents to run on user nodes safely.

This multi-layer architecture enables a scalable, privacy-preserving AI ecosystem that aligns incentives across users, developers, and node operators, something centralized AI platforms fundamentally cannot offer.
