DeHug

A decentralized AI model hub with NFT royalties and native compute

Created At

ETHOnline 2025

Project Description

DeHug is a decentralized hub where developers and researchers can publish, version, discover, and monetize AI models and datasets. Creators upload model weights and metadata, mint a Model NFT on Base that proves authorship and license terms, and optionally shard large artifacts to decentralized storage (IPFS/Filecoin/Arweave). Consumers can run inference directly (via on-chain triggered jobs routed to decentralized compute) or pull models through the DeHug Python SDK and pay usage fees that automatically distribute royalties to creators and collaborators.

Key features

Model NFTs (ERC-721/1155 + ERC-2981): On-chain ownership, licensing, and royalty rules (default 5% creator royalty, configurable splits).
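The default 5% royalty follows the ERC-2981 convention of expressing fees in basis points (1 bp = 0.01%) and computing the amount with integer division. A minimal sketch of that math, mirroring what a `royaltyInfo()` call would return (the function name and default below are illustrative, not the deployed contract surface):

```python
# Sketch of the ERC-2981 royalty math a ModelNFT contract would perform.
# Royalty rates are conventionally stored in basis points (1 bp = 0.01%).

DEFAULT_ROYALTY_BPS = 500  # 5% creator royalty (configurable per token)

def royalty_info(sale_price_wei: int, royalty_bps: int = DEFAULT_ROYALTY_BPS) -> int:
    """Return the royalty owed on a sale, using the same integer
    division an on-chain implementation would (no floating point)."""
    return sale_price_wei * royalty_bps // 10_000

# A 1 ETH (10**18 wei) secondary sale owes 0.05 ETH to the creator.
assert royalty_info(10**18) == 5 * 10**16
```

Integer division means sub-wei remainders round down, which is the standard on-chain behavior.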

Immutable storage & provenance: Content-addressed artifacts (IPFS CIDs), signed manifests, and optional EAS attestations for lineage.

Usage-based monetization: Pay-per-inference or subscription; automatic revenue splits to contributors (core author, data curators, evaluators).
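A splitter contract such as 0xSplits must pay out integer wei amounts that sum exactly to the incoming payment. A minimal sketch of that invariant, assuming hypothetical contributor roles and share weights:

```python
def split_payment(amount_wei: int, shares: dict[str, int]) -> dict[str, int]:
    """Distribute amount_wei proportionally to integer share weights,
    assigning rounding dust to the largest shareholder so the payouts
    sum exactly to amount_wei (an invariant any on-chain splitter needs)."""
    total = sum(shares.values())
    payouts = {who: amount_wei * s // total for who, s in shares.items()}
    dust = amount_wei - sum(payouts.values())
    payouts[max(shares, key=shares.get)] += dust
    return payouts

# Hypothetical 70/20/10 split between author, data curator, and evaluator.
splits = split_payment(10**18, {"author": 70, "curator": 20, "evaluator": 10})
assert sum(splits.values()) == 10**18
```

The "dust to the largest shareholder" rule is one common convention; the real splitter's rounding policy may differ.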

Developer-first SDKs: Python SDK for training/inference pipelines (TensorFlow, PyTorch, Transformers) + REST/Graph endpoints.

Discovery & gamification: Trending models, contributor leaderboards, badges, and seasonal model-upload challenges and hackathons.

Enterprise mode: Private spaces, audit/compliance logs, SSO, policy-locked licenses, and analytics dashboards.

Who it’s for

Open-source model authors who want fair, programmable payouts.

Research labs/enterprises needing verifiable provenance, usage analytics, and compliant decentralized storage/compute.

Builders who want a “Hugging Face-like” experience—but with on-chain ownership, royalties, and composability in the Base ecosystem.

How it's Made

Architecture at a glance

Frontend: Next.js (App Router) + React + TypeScript + Tailwind/shadcn for a clean, fast UI. Wagmi + Coinbase Wallet for wallet flows; SIWE for auth.

Smart contracts (Base):

ModelRegistry.sol (tracks model IDs, owners, license URIs, content hashes, versions).

ModelNFT.sol (ERC-721/1155) with ERC-2981 royalties; integrates 0xSplits (or equivalent) for multi-party revenue sharing.

UsageController.sol for metered access tokens/passes and pay-per-inference receipts.

Indexing & query: Subgraph (The Graph) to power search, trending, creator stats, and royalty histories.

Storage: IPFS/Filecoin (via web3.storage/NFT.storage) for weights, tokenizer, configs, and a signed model manifest (JSON) that pins:

artifact CIDs, SHA-256/Merkle root, license, dependencies, eval scores, and optional EAS attestations.
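To make signing and on-chain anchoring deterministic, the manifest has to serialize canonically before hashing. A sketch of that shape, where every field name and value is an illustrative assumption rather than the published schema:

```python
import hashlib
import json

# Illustrative shape of a signed model manifest; field names and values
# here are assumptions for demonstration, not DeHug's actual schema.
weights = b"dummy weight bytes"
manifest = {
    "name": "example/sentiment-tiny",  # hypothetical model id
    "version": "1.0.0",
    "license": "apache-2.0",
    "artifacts": [
        {"path": "model.safetensors",
         "sha256": hashlib.sha256(weights).hexdigest(),
         "cid": "<ipfs-cid>"},  # filled in after pinning
    ],
    "dependencies": {"transformers": ">=4.40"},
    "evals": {"sst2_accuracy": 0.91},  # hypothetical eval score
}

# Canonical serialization (sorted keys, no whitespace) so the same
# manifest always hashes to the same digest for signing/anchoring.
canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()
manifest_hash = hashlib.sha256(canonical).hexdigest()
```

Without canonicalization, two semantically identical manifests could hash differently and break signature verification.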

Compute layer:

Decentralized compute integration (e.g., 0G/other networks) for queued inference/training jobs triggered by on-chain receipts.

Or a serverless GPU fallback for the demo, to guarantee live inference during judging.

Client fallback: ONNX Runtime Web/WASM for small models to prove end-to-end portability.

Backend services: FastAPI (Python) microservice for inference orchestration, model validation, and eval job runners; Node/Express for webhook + payments glue.

Data & telemetry: Postgres for off-chain analytics; Prometheus/Grafana for usage metrics; Log drains for audit trails.

Marketplace compatibility: OpenSea/Magic Eden ready via ERC-2981; royalties honored on secondary trades.

Partner tech & why it helped

Base: low fees, EVM tooling, strong ecosystem → cheap mints, frequent updates, easy composability with DeFi/NFT infra.

IPFS/Filecoin/Arweave: durable, verifiable storage → reproducible science and auditability.

0xSplits (or similar): instant multi-party payouts → reward data curators, evaluators, and collaborators fairly.

The Graph: fast queries for trending/leaderboards → smooth UX without hammering RPCs.

EAS (Ethereum Attestation Service): optional provenance attestations → trust signals for enterprises and researchers.

Notable hacks we’re proud of

Model Manifest v0: a signed JSON schema that deterministically binds code, weights, license, evals, and dependencies; we compute a Merkle root over all files, store leaf hashes in IPFS DAGs, and stamp the root on-chain for tamper-evident provenance.
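The core of the tamper-evidence scheme is that a single Merkle root commits to every file: change any artifact and the root changes. A minimal sketch of that computation (one simple pairing convention; the production tree layout may differ):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes: list[bytes]) -> bytes:
    """Pairwise-hash leaves up to a single root; an odd node is carried
    up unchanged. One root then commits to every file in the manifest."""
    assert leaf_hashes, "need at least one leaf"
    level = leaf_hashes
    while len(level) > 1:
        nxt = [sha256(level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd leaf: promote as-is
            nxt.append(level[-1])
        level = nxt
    return level[0]

files = [b"config.json contents", b"weights shard 0", b"weights shard 1"]
root = merkle_root([sha256(f) for f in files])  # stamped on-chain
```

Storing the leaf hashes alongside the artifacts lets a verifier recompute the root locally and compare it against the on-chain value.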

Sharded weight uploads: chunked TARs with streaming pin + parallel verification to keep large models responsive in the browser.

Usage-as-receipt flow: a tiny “inference pass” NFT minted/burned per job; acts as both a payment receipt and cache-key for replays.
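For the receipt to double as a replay key, identical requests must derive identical keys. A sketch of that derivation under the assumption that a job is identified by model id, version, and canonicalized inputs:

```python
import hashlib
import json

def receipt_cache_key(model_id: str, version: str, inputs: dict) -> str:
    """Derive a deterministic cache key from a job's identifying fields:
    identical requests hash to the same key, so a replay can be served
    from cache instead of re-running inference."""
    payload = json.dumps(
        {"model": model_id, "version": version, "inputs": inputs},
        sort_keys=True, separators=(",", ":"),
    )
    return hashlib.sha256(payload.encode()).hexdigest()

k1 = receipt_cache_key("example/sentiment-tiny", "1.0.0", {"text": "great!"})
k2 = receipt_cache_key("example/sentiment-tiny", "1.0.0", {"text": "great!"})
assert k1 == k2  # same request, same key
```

The model id and field layout here are hypothetical; the point is that canonical serialization makes the key stable across clients.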

WASM fallback: for small models, we demo end-to-end inference fully in-browser with ONNX Runtime Web—no server, no GPU—just to prove portability.

What’s working in the MVP

Upload → pin → mint Model NFT on Base → run inference (decentralized or fallback) → auto-distribute creator royalty (default 5%) → list on marketplace.

Python SDK: dehug login, dehug push, dehug pull, dehug infer with examples for PyTorch/Transformers.

Discovery: basic search, tags, trending, and contributor badges seeded by hackathon challenge uploads.
