
DeepTrust.eth

SoTA on-chain proofs for LLM model executions. DeepTrust.eth keeps providers like OpenAI, Anthropic, and Groq honest about the model they are actually serving.


Created At

ETHGlobal San Francisco

Winner of

Lit Protocol - Best Use of Compute Over Private Data (Decrypting within a Lit Action)

Nethermind - Innovative Applications of ZK in Deep Learning and the ETH Ecosystem

Project Description

The future will undoubtedly rely heavily on AI-powered applications and insights. But how can we ensure that the individuals or enterprises providing us with these models are trustworthy and credible?

We identified two major problems that haven't been widely addressed:

1. Centralized inference requires trusting the provider not to alter the model or data. For example, how can we be sure that OpenAI is serving responses from the more expensive GPT-4 rather than cutting costs and quietly using GPT-3.5?

2. Centralized entities might manipulate or censor outputs, potentially affecting model fairness.

To address these issues, we proposed a blockchain-based solution: a network of verification nodes that performs these checks for us. We introduced a novel plugin that can be integrated into any transformer architecture, which all modern LLMs are based on. The plugin derives a deterministic value from each model by intercepting the LLM just before the head layer, i.e., before randomness such as temperature sampling is introduced.
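To make the interception idea concrete, here is a minimal sketch of how such a plugin could look using a PyTorch forward pre-hook on the head layer. The model choice, rounding factor, and function names are illustrative assumptions, not the actual DeepTrust.eth implementation:

    # Sketch only: capture the hidden state entering the LM head and hash it
    # into a deterministic fingerprint (before temperature/sampling randomness).
    import hashlib
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # stand-in model for illustration
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()

    captured = {}

    def pre_head_hook(module, inputs):
        # inputs[0] is the hidden state right before the head layer.
        captured["hidden"] = inputs[0].detach()

    model.lm_head.register_forward_pre_hook(pre_head_hook)

    def get_fingerprint(prompt: str) -> str:
        ids = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            model(**ids)
        # Quantize to limit floating-point noise, then hash deterministically.
        rounded = torch.round(captured["hidden"] * 1e4).to(torch.int64)
        return hashlib.sha256(rounded.numpy().tobytes()).hexdigest()

    print(get_fingerprint("The capital of France is"))

Because the hook fires before sampling, the same prompt run against the same weights yields the same fingerprint, which is what makes cross-node comparison possible.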

Since relying on a single source could be risky (due to potential dishonesty), our solution involves a network of models performing independent verifications. These outputs are then compared with those provided by enterprises like OpenAI, giving us a better chance to detect bad actors. Rather than checking every single request, we opted to sample 2-5% of all inference requests for verification.
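A hedged sketch of how the sampling and comparison step could work, using the 2-5% rate described above; the node interfaces and function names are hypothetical:

    # Sketch only: sample a fraction of inference requests and compare the
    # provider's fingerprint against independent verifier nodes.
    import random

    SAMPLE_RATE = 0.03  # within the 2-5% range described above

    def should_verify() -> bool:
        return random.random() < SAMPLE_RATE

    def verify_request(prompt: str, provider_fingerprint: str, verifiers) -> bool:
        """Ask each verifier node to recompute the fingerprint and accept
        only if a majority agrees with the provider."""
        votes = [v.get_fingerprint(prompt) == provider_fingerprint for v in verifiers]
        return sum(votes) > len(votes) // 2

    def handle_inference(prompt, provider, verifiers):
        response, fingerprint = provider.infer(prompt)
        if should_verify() and not verify_request(prompt, fingerprint, verifiers):
            print("Potential dishonesty detected; escalating to slashing logic.")
        return response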

Additionally, we implemented a tokenomics system within the network. All inference clients must stake tokens to participate. If our platform detects dishonesty, these stakes will be slashed. Conversely, those who remain honest will see their trust score increase and will be rewarded with tokens. This system creates a strong incentive for participants to act with integrity.
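The incentive logic can be summarized with a simplified off-chain model of the stake, slash, and trust-score flow. In practice this would live in an on-chain contract; the slash fraction, reward amount, and names below are illustrative assumptions:

    # Sketch only: off-chain model of the stake / slash / reward incentives.
    from dataclasses import dataclass

    SLASH_FRACTION = 0.5   # fraction of stake lost when dishonesty is detected
    HONESTY_REWARD = 10    # tokens awarded for a verified-honest sample

    @dataclass
    class Participant:
        address: str
        stake: float
        trust_score: float = 0.0

    def settle_verification(p: Participant, honest: bool) -> None:
        if honest:
            p.stake += HONESTY_REWARD
            p.trust_score += 1
        else:
            p.stake *= (1 - SLASH_FRACTION)
            p.trust_score -= 1

    provider = Participant(address="0xProvider", stake=1000.0)
    settle_verification(provider, honest=False)
    print(provider)  # stake halved, trust score reduced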

How it's Made

We relied on two key technologies to build this solution:

Polygon – We chose Polygon for its scalability and low transaction costs. It lets us build a transparent and efficient verification system in which transactions for inference verifications and token staking/slashing are processed quickly and affordably. We also found the documentation very friendly, which made our developer experience awesome!

Lit Protocol – We leveraged Trusted Execution Environments (TEEs) to ensure that our model executions are isolated and protected from external interference. Lit Protocol was also one of the easiest platforms to develop on, with a setup process much more straightforward than that of other providers.
