AvaFace

Decentralized AI hub for sharing, training & monetizing models on Web3.

Created At

ETHGlobal New Delhi

Project Description

Akave AI Hub is the world’s first fully decentralized AI model and dataset management platform, designed as a Web3-native alternative to Hugging Face. It enables developers, researchers, and AI creators to train, share, and monetize AI models without relying on centralized platforms, giving them full ownership, privacy, and control.

Traditional AI platforms suffer from centralized control, high costs, vendor lock-in, and censorship risks. Akave AI Hub addresses these issues with a trustless, decentralized infrastructure:

- Decentralized Storage: models and datasets live on IPFS and Akave O3 for immutability and censorship resistance.
- Blockchain Integration: an on-chain metadata registry and ownership proofs.
- Distributed Training: a global GPU network with Docker containerization for reproducible AI training.
- Web3 Authentication & Governance: decentralized identity and DAO-based decision-making, incentivized via $AKAVE tokens.
- CLI & Dashboard: manage models, initiate training, and track results easily.

Competitive advantages: user ownership, lower costs, censorship resistance, and full revenue retention for creators. Akave AI Hub democratizes AI, reduces costs, ensures privacy, and builds a community-owned ecosystem for AI model creation and sharing, paving the way for the next generation of decentralized AI collaboration on Web3.

How it's Made

Akave AI Hub is built as a fully decentralized AI infrastructure, combining Web3 technologies, distributed storage, and scalable AI training frameworks. The architecture is designed to give users full control over their models while enabling collaborative training and monetization.

Core Technologies & Stack

- Blockchain: Ethereum smart contracts store metadata, versioning, and ownership proofs for models and datasets, ensuring immutability, provenance, and censorship resistance.
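As an illustration of the on-chain metadata registry, the kind of record it might hold can be sketched in TypeScript. The field names are assumptions rather than the actual contract schema, and sha256 stands in off-chain for the keccak256 hash a contract would compute:

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of a registry entry; the real contract schema may differ.
interface ModelRecord {
  name: string;
  version: string;
  owner: string;      // creator's wallet address
  storageUri: string; // e.g. an IPFS CID or Akave O3 object key
  checksum: string;   // digest of the model artifact, for provenance
}

// Derive a deterministic ID for a record, much as a contract would with
// keccak256 over ABI-encoded fields (sha256 stands in here).
function recordId(r: ModelRecord): string {
  const payload = [r.name, r.version, r.owner, r.storageUri, r.checksum].join("|");
  return createHash("sha256").update(payload).digest("hex");
}

const record: ModelRecord = {
  name: "sentiment-model",
  version: "1.0.0",
  owner: "0x0000000000000000000000000000000000000001",
  storageUri: "ipfs://QmExampleCid",
  checksum: "deadbeef",
};
const id = recordId(record); // the same record always yields the same ID
```

Because the ID is a pure function of the record's fields, anyone can recompute it to verify provenance without trusting a central index.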

- Decentralized Storage: Akave O3 provides S3-compatible decentralized storage for hosting model files; IPFS provides content-addressed storage for immutable model versions.
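Content addressing, which underpins the IPFS side of this storage layer, can be sketched in miniature. Plain sha256 stands in for a real multihash CID, and the helper name and key prefix are made up for illustration:

```typescript
import { createHash } from "node:crypto";

// Content addressing in miniature: the storage key is derived from the
// bytes themselves, so identical content always maps to the same key and
// any tampering changes the address. (IPFS really uses multihash CIDs;
// plain sha256 stands in here.)
function contentKey(data: Uint8Array): string {
  return "sha256-" + createHash("sha256").update(data).digest("hex");
}

const keyA = contentKey(new TextEncoder().encode("model weights, version 1"));
const keyB = contentKey(new TextEncoder().encode("model weights, version 1"));
// keyA === keyB: re-uploading identical bytes cannot create a second version,
// which is what makes published model versions immutable.
```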

- Compute Layer: a distributed GPU network spreads training jobs globally across volunteer nodes; Docker containerization guarantees reproducible training environments across nodes; a resource marketplace rewards node operators with $AKAVE tokens for providing compute power.
- Web3 Authentication: secure decentralized identity for users, enabling login, access control, and interaction with the DAO.
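The compute layer's job placement could be sketched as a simple first-fit scheduler. The job and node shapes and the greedy policy below are illustrative assumptions, not the project's actual matching logic:

```typescript
interface TrainingJob {
  id: string;
  gpusNeeded: number;
  image: string; // Docker image guaranteeing a reproducible environment
}

interface GpuNode {
  id: string;
  freeGpus: number;
}

// Greedy first-fit placement: each job goes to the first node with enough
// free GPUs; jobs that do not fit anywhere are left unplaced (waiting).
function schedule(jobs: TrainingJob[], nodes: GpuNode[]): Map<string, string> {
  const placement = new Map<string, string>(); // jobId -> nodeId
  for (const job of jobs) {
    const node = nodes.find((n) => n.freeGpus >= job.gpusNeeded);
    if (node) {
      node.freeGpus -= job.gpusNeeded;
      placement.set(job.id, node.id);
    }
  }
  return placement;
}

const placement = schedule(
  [
    { id: "j1", gpusNeeded: 2, image: "trainer:latest" },
    { id: "j2", gpusNeeded: 4, image: "trainer:latest" },
  ],
  [
    { id: "n1", freeGpus: 3 },
    { id: "n2", freeGpus: 4 },
  ],
);
// j1 fits on n1 (3 free, leaving 1); j2 needs 4 GPUs and lands on n2.
```

A production marketplace would add pricing, reputation, and preemption on top of such a placement core.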

- Backend Services: an Authentication Service (authController.ts) manages user identity and permissions; a Training Service (trainingService.ts) orchestrates distributed training jobs; a File Management Service handles uploads and downloads to decentralized storage.
- CLI & Dashboard: developers interact with models, training jobs, and governance via an interactive CLI or a web-based dashboard.
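A CLI front end of this kind typically reduces to a command dispatch table that routes each subcommand to the matching backend service. The sketch below is a minimal illustration; the command names and handlers are hypothetical, not the project's actual CLI:

```typescript
type Handler = (args: string[]) => string;

// Hypothetical command table; the real CLI's command names likely differ.
const commands: Record<string, Handler> = {
  "model:push": ([name]) => `uploading ${name} to decentralized storage`,
  "train:start": ([name]) => `submitting training job for ${name}`,
};

// Dispatch argv[0] to its handler; unknown commands get a hint instead.
function run(argv: string[]): string {
  const [cmd = "", ...rest] = argv;
  const handler = commands[cmd];
  return handler ? handler(rest) : `unknown command: ${cmd}`;
}

const out = run(["model:push", "my-model"]);
// out: "uploading my-model to decentralized storage"
```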

Notable Hacks & Integrations

- Federated Training over a P2P Network: a novel system for distributing training jobs across decentralized nodes while syncing model updates securely.
- Tokenized Incentives: smart contracts automatically reward storage and compute providers, so the network sustains itself without central control.
- Multi-framework Support: PyTorch and TensorFlow models run seamlessly within the distributed environment, containerized for reproducibility.
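Syncing model updates across nodes usually amounts to some form of federated averaging. Here is a minimal FedAvg-style sketch, under the assumption that each node reports a flat weight vector plus the number of samples it trained on (the real sync protocol may differ):

```typescript
// FedAvg-style aggregation: each node trains locally and submits a flat
// weight vector and its local sample count; the coordinator returns the
// sample-weighted average as the new global model.
function federatedAverage(updates: number[][], samples: number[]): number[] {
  const total = samples.reduce((a, b) => a + b, 0);
  const avg = new Array<number>(updates[0].length).fill(0);
  updates.forEach((weights, i) => {
    const share = samples[i] / total; // this node's share of all samples
    for (let j = 0; j < weights.length; j++) {
      avg[j] += weights[j] * share;
    }
  });
  return avg;
}

// Two nodes with equal sample counts: plain elementwise mean.
const merged = federatedAverage([[1, 3], [3, 5]], [100, 100]);
// merged is [2, 4]
```

Weighting by sample count keeps a node that trained on little data from dominating the global model; a secure version would add update authentication and outlier filtering on top.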

Partner Technologies & Benefits

- IPFS & Akave O3: immutable storage prevents censorship.
- Ethereum: smart contracts enforce ownership and revenue distribution.
- PyTorch/TensorFlow: flexibility to train models across multiple AI frameworks.

This stack enables a fully decentralized, censorship-resistant, and community-governed AI ecosystem that is cost-efficient, secure, and scalable.
