TrueCast

Decentralized AI news platform where truth is verified by code, community votes, and whistleblowers.

Created At

ETHGlobal Cannes

Project Description

This project is a decentralized platform that reimagines how news is published, verified, and consumed. We aim to solve the problems of misinformation, centralization, and lack of accountability in modern journalism. The platform features three types of users:

  • Journalists, who publish articles and stake tokens on the truthfulness and quality of their work.
  • Normal users, who participate in a decentralized voting system to evaluate article credibility.
  • Whistleblowers, anonymous actors who can challenge articles post-publication by submitting evidence.

Every article goes through a three-stage validation process:

  • AI Model Check: The article is analyzed by an LLM combined with internet lookups to separate facts from opinions and assign a truthfulness score.
  • User Voting Phase: Community users vote on the article's credibility, similar to UMA’s truth machine concept.
  • Whistleblower Challenge: Even after publication, any article can be flagged and reassessed if new evidence emerges.
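The three stages above form a small state machine for each article. As a rough sketch (the stage names and transitions here are illustrative, not the contract's actual enum), the allowed lifecycle can be modeled like this:

```python
from enum import Enum, auto

class Stage(Enum):
    AI_CHECK = auto()     # Stage 1: AI model check
    USER_VOTING = auto()  # Stage 2: community voting
    PUBLISHED = auto()    # accepted and stored
    CHALLENGED = auto()   # Stage 3: whistleblower flag
    REJECTED = auto()

# Allowed transitions in the validation lifecycle described above.
TRANSITIONS = {
    Stage.AI_CHECK: {Stage.USER_VOTING, Stage.REJECTED},
    Stage.USER_VOTING: {Stage.PUBLISHED, Stage.REJECTED},
    Stage.PUBLISHED: {Stage.CHALLENGED},  # truth is not frozen after publication
    Stage.CHALLENGED: {Stage.PUBLISHED, Stage.REJECTED},
}

def advance(current: Stage, nxt: Stage) -> Stage:
    """Move an article to the next stage, rejecting illegal jumps."""
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
    return nxt
```

Note that PUBLISHED is not terminal: the whistleblower path loops a published article back into reassessment.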

All users and journalists are verified via Self Protocol, while whistleblowers remain anonymous. Article data is permanently stored on Walrus, and the AI model is deployed on 0G over Fluence for scalability and decentralization. All staking and truth-score outcomes are handled on-chain via smart contracts.

How it's Made

We begin with three types of users — Journalists, Normal Users, and Whistleblowers — all entering the platform through a unified gateway. Both journalists and normal users go through identity verification using Self Protocol, which ensures Sybil-resistance while preserving user privacy via zero-knowledge proofs and decentralized identity primitives. Whistleblowers, however, remain anonymous and are not bound by identity checks to protect their role as censorship-resistant watchdogs.

When a journalist submits an article, it first enters Stage 1, where it is processed by our AI fact-checking pipeline, built in Python with Hugging Face Transformers and spaCy for natural language processing. The AI splits the content into factual claims and subjective opinions, scores the article on factual density, clarity, and credibility, and flags potential manipulations. This pipeline is deployed on Fluence, a decentralized peer-to-peer compute network, enhanced with 0G (Zero Gravity) for on-demand, scalable AI model execution without relying on centralized servers.
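To make the claim/opinion split concrete, here is a heavily simplified stand-in for that pipeline. The real system uses trained Transformer and spaCy models; this sketch substitutes a keyword heuristic (the `OPINION_CUES` set is an assumption for illustration) just to show the shape of the output and the "factual density" idea:

```python
import re

# Hypothetical hedge-word cues; the production pipeline uses trained
# models, not a keyword list.
OPINION_CUES = {"believe", "think", "should", "best", "worst", "probably", "feel"}

def split_claims(text: str) -> dict:
    """Bucket sentences into rough 'facts' vs 'opinions'."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    facts, opinions = [], []
    for s in sentences:
        words = {w.lower().strip(".,!?") for w in s.split()}
        (opinions if words & OPINION_CUES else facts).append(s)
    return {"facts": facts, "opinions": opinions}

def truthfulness_score(buckets: dict) -> float:
    """Factual density: share of sentences that are checkable claims."""
    total = len(buckets["facts"]) + len(buckets["opinions"])
    return len(buckets["facts"]) / total if total else 0.0
```

An article whose sentences are mostly checkable claims scores near 1.0; pure editorializing scores near 0.0, which is the signal the Stage 1 threshold gate acts on.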

If the AI-assigned score exceeds the threshold, the article progresses to Stage 2, the user voting phase, where verified users vote on the article’s truthfulness and usefulness. This phase is governed by a mechanism similar to UMA’s optimistic oracle, where users have a limited voting window, and the result becomes final unless contested. This encourages active participation and discourages apathy.
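The optimistic flavor of that voting phase — a bounded window, with the result finalizing unless contested — can be sketched as follows. The 72-hour window and the simple-majority rule are assumptions for illustration, not the platform's actual parameters:

```python
import time
from dataclasses import dataclass, field

@dataclass
class VotingRound:
    """Optimistic voting: the verdict finalizes after the window unless contested."""
    opened_at: float                  # unix timestamp when voting opened
    window_secs: float = 72 * 3600    # assumed 72h voting window
    votes: dict = field(default_factory=dict)  # voter_id -> bool (credible?)
    contested: bool = False

    def cast(self, voter_id: str, credible: bool, now: float) -> None:
        if now >= self.opened_at + self.window_secs:
            raise RuntimeError("voting window closed")
        self.votes[voter_id] = credible  # one vote per verified voter

    def outcome(self, now: float):
        """None while the round is open or contested; else the majority verdict."""
        if self.contested or now < self.opened_at + self.window_secs:
            return None
        yes = sum(self.votes.values())
        return yes * 2 > len(self.votes)
```

Keying votes by Self Protocol-verified identity is what makes the one-vote-per-voter assumption in `cast` hold.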

Once an article receives sufficient positive votes, it is accepted and permanently stored on Walrus, a decentralized storage layer with built-in redundancy and immutability. In parallel, the article is linked to an EVM-compatible smart contract (Solidity) that handles all token economics: journalists stake tokens upon submission, which are either rewarded or slashed based on community feedback and potential future challenges. Journalists can also stake to assess other articles, aligning incentives for honest peer review.
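The stake/reward/slash accounting lives in the Solidity contract; an off-chain Python sketch of the same bookkeeping makes the incentive math explicit. The 10% reward and 50% slash rates here are illustrative assumptions, not the deployed parameters:

```python
class StakeLedger:
    """Off-chain sketch of the on-chain stake/reward/slash accounting."""

    def __init__(self, reward_rate: float = 0.1, slash_rate: float = 0.5):
        self.balances: dict[str, float] = {}  # free token balances
        self.staked: dict[str, float] = {}    # tokens locked per submission
        self.reward_rate = reward_rate        # assumed 10% reward on acceptance
        self.slash_rate = slash_rate          # assumed 50% slash on rejection

    def stake(self, journalist: str, amount: float) -> None:
        """Lock tokens when an article is submitted."""
        if self.balances.get(journalist, 0.0) < amount:
            raise ValueError("insufficient balance")
        self.balances[journalist] -= amount
        self.staked[journalist] = self.staked.get(journalist, 0.0) + amount

    def settle(self, journalist: str, accepted: bool) -> float:
        """Release the stake with a reward, or return it slashed."""
        locked = self.staked.pop(journalist, 0.0)
        if accepted:
            payout = locked * (1 + self.reward_rate)
        else:
            payout = locked * (1 - self.slash_rate)
        self.balances[journalist] = self.balances.get(journalist, 0.0) + payout
        return payout
```

A journalist who stakes 40 tokens thus stands to get 44 back on acceptance but only 20 after a valid challenge, which is the asymmetry that makes honest publishing the dominant strategy.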

However, truth is not frozen after publication. In Stage 3, articles can still be flagged by whistleblowers, a unique and anonymous class of users who submit cryptographic evidence. This triggers a reassessment phase, governed by a separate smart contract that may penalize the original journalist if the flag is found valid, based on further AI analysis and potentially even DAO governance votes.
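One standard way to let an anonymous party submit evidence without exposing it (or themselves) prematurely is a salted hash commitment: publish `H(salt || evidence)` on-chain, and reveal the salt and evidence only when reassessment begins. This sketch shows the commit/reveal primitive under that assumption; the platform's actual evidence scheme may differ:

```python
import hashlib
import secrets

def commit_evidence(evidence: bytes) -> tuple[bytes, bytes]:
    """Commit to evidence without revealing it: returns (salt, commitment).

    The commitment alone leaks nothing about the evidence or the
    whistleblower; the random salt prevents brute-forcing guessable
    documents against the hash.
    """
    salt = secrets.token_bytes(32)
    commitment = hashlib.sha256(salt + evidence).digest()
    return salt, commitment

def verify_reveal(salt: bytes, evidence: bytes, commitment: bytes) -> bool:
    """Check a revealed (salt, evidence) pair against the on-chain commitment."""
    return hashlib.sha256(salt + evidence).digest() == commitment
```

Because the commitment is binding, a whistleblower can prove the evidence existed at flag time, while the reassessment contract only acts once the reveal verifies.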
