
Zarathustra

Zarathustra is a distributed, modular, and permissionless AI, designed to be a completely open inference network. Rather than pursue a monolithic design, it relies on a market of highly specialized models that perform off-chain computations and coordinate to answer user queries.


Created At

ETHGlobal Brussels

Winner of

ETHGlobal - 🏆 ETHGlobal Brussels Finalist

Project Description

The Vision:

The internet has enabled seamless communication of information between individuals across the globe. Cryptocurrency has enabled the coordination of individuals across the internet: economic incentives, digital property rights, and governance models have fostered a new way for communities to organise and develop open source software together.

In the future, the challenge will not be to coordinate people, but complex networks of intelligent agents. As AI gains autonomy, the demand for communication and interaction between intelligent agents will grow. These agents will need standardized protocols and interfaces to facilitate trustless data exchange and negotiation, and clear economic incentive structures, with rewards and penalties, to keep their coordination strong. This is the core vision of Zarathustra.

How It Works:

Zarathustra comprises three primary actors: users, routers, and models, coordinated via a smart contract. Anyone can join permissionlessly to participate in any of these roles.
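
To make the coordination layer concrete, here is a minimal sketch of the kind of contract surface this implies, expressed as an ethers human-readable ABI in TypeScript. Every name and signature below is an assumption for illustration, not the deployed Zarathustra contract.

```ts
// Hypothetical coordination-contract surface as an ethers human-readable ABI.
// All names and signatures are assumptions drawn from the description above.
export const zarathustraAbi = [
  // Users submit a query (optionally with a fee attached).
  "function submitQuery(string query) payable returns (uint256 queryId)",
  // Routers and specialized models register themselves permissionlessly.
  "function registerAgent(string role, string description) returns (uint256 agentId)",
  // A router posts the final answer; in practice this could be a raw answer
  // or an IPFS CID pointing to the full transcript (see "How it's Made").
  "function submitResponse(uint256 queryId, string answer)",
  // Emitted when a user submits a query; routers subscribe to this.
  "event QuerySubmitted(uint256 indexed queryId, address indexed user, string query)",
  // Emitted when a router posts the final answer; the frontend subscribes to this.
  "event ResponseSubmitted(uint256 indexed queryId, string answer)",
];
```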

Users interact with the frontend to submit queries, for example, "how many r's are there in the word strawberry?". Each query is submitted to a smart contract, which records it and emits an event. The event is picked up by a 'router': an advanced Large Language Model (LLM) responsible for analyzing the query to determine its nature and the tasks it requires. Based on this analysis, the router dispatches the task to the appropriate specialized model.
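
As a rough illustration of the router's side of this flow, the sketch below (assuming the hypothetical ABI above and ethers v6) subscribes to query events, classifies each query, runs a specialized model, and posts the answer back. The classification and inference calls are stubs, not the project's actual models.

```ts
import { Contract, JsonRpcProvider, Wallet } from "ethers";
import { zarathustraAbi } from "./abi"; // the hypothetical ABI sketch above

// Hypothetical router daemon: listen for queries, classify them, forward them
// to a specialized model, and post the result back on-chain.
const provider = new JsonRpcProvider(process.env.RPC_URL);
const signer = new Wallet(process.env.ROUTER_KEY!, provider);
const contract = new Contract(process.env.CONTRACT_ADDRESS!, zarathustraAbi, signer);

// Stubs standing in for real LLM calls (OpenAI, a local model, etc.).
async function classifyQuery(query: string): Promise<string> {
  return "general"; // a real router would use an LLM to pick a task type
}
async function callSpecializedModel(task: string, query: string): Promise<string> {
  return `[${task}] answer to: ${query}`; // placeholder off-chain inference
}

contract.on("QuerySubmitted", async (queryId: bigint, user: string, query: string) => {
  const task = await classifyQuery(query);                // what kind of query is this?
  const answer = await callSpecializedModel(task, query); // off-chain inference
  const tx = await contract.submitResponse(queryId, answer);
  await tx.wait();                                        // the answer is now on-chain
});
```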

Some queries, however, are complex and require sequential steps. In these cases, the router coordinates the available models based on their reputation and description: it can prompt any model with a sub-query, and can even delegate to other, more specialised routers. This routing is mediated by the smart contract, adding a trustless layer to the interactions between intelligent agents. The data exchanged between agents and users is stored on Filecoin and other decentralized data solutions to keep the on-chain footprint small, with payments and rewards taking place on-chain.
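
The selection step could look something like the sketch below. The registry fields and the keyword match are assumptions; a real router would more likely let an LLM or embedding similarity do the matching against on-chain descriptions and reputation scores.

```ts
// Hypothetical registry entry for a specialized model; field names are assumptions.
interface ModelInfo {
  agentId: number;
  description: string; // e.g. "symbolic math", "code generation"
  reputation: number;  // accumulated on-chain score
  endpoint: string;    // off-chain inference endpoint
}

// Pick the best-reputed model whose description matches the routed task.
// The keyword filter only illustrates the selection step.
function pickModel(task: string, registry: ModelInfo[]): ModelInfo | undefined {
  return registry
    .filter((m) => m.description.toLowerCase().includes(task.toLowerCase()))
    .sort((a, b) => b.reputation - a.reputation)[0];
}
```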

Once the appropriate model(s) complete the task, the router sends the final output back to the smart contract, which broadcasts the answer to the frontend for the user.
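
From the user's perspective, the round trip might look like this sketch: submit the query, read the query id from the emitted event, then wait for the matching response event. It reuses the hypothetical ABI from above and is illustrative only.

```ts
import { BrowserProvider, Contract } from "ethers";
import { zarathustraAbi } from "./abi"; // the hypothetical ABI sketch above

// Replace with the deployed contract address.
const CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000";

// Hypothetical frontend flow: submit a question, wait for the router's answer.
async function ask(question: string): Promise<string> {
  const provider = new BrowserProvider((window as any).ethereum);
  const signer = await provider.getSigner();
  const contract = new Contract(CONTRACT_ADDRESS, zarathustraAbi, signer);

  // Submit the query on-chain and pull the queryId out of the emitted event.
  const tx = await contract.submitQuery(question);
  const receipt = (await tx.wait())!;
  const parsed = receipt.logs
    .map((log: any) => { try { return contract.interface.parseLog(log); } catch { return null; } })
    .find((p: any) => p && p.name === "QuerySubmitted");
  const queryId: bigint = parsed.args.queryId;

  // Resolve once the router posts a ResponseSubmitted event for this query.
  return new Promise((resolve) => {
    contract.on("ResponseSubmitted", (id: bigint, answer: string) => {
      if (id === queryId) resolve(answer);
    });
  });
}
```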

How it's Made

We used LLM libraries, AI agents, smart contracts, and decentralised storage solutions. Communication between AI agents was pivotal, with critical data stored on-chain to leverage the benefits of decentralization. The open market approach allows anyone to participate, with transparent payment mechanisms ensuring fairness. Decentralization is particularly advantageous for extremely large, AGI-like models, which cannot fit on traditional web2 servers; the architecture scales better and makes the system more robust and resilient, which is crucial for the next generation of AI advancements.

We used IPFS and Filecoin to store the communication between AI agents under a single CID, saving on gas costs. Agents can exchange as many messages with each other as they need, which makes the overall system the user sees Turing complete and pushes past the limitations of individual models such as GPT-4o or Claude 3.5 Sonnet.
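
A minimal sketch of the "one CID per conversation" idea, assuming a local IPFS node and the ipfs-http-client package (any pinning or Filecoin onboarding service would do); only the returned CID string would need to be written on-chain.

```ts
import { create } from "ipfs-http-client";

// The full router<->model transcript lives on IPFS/Filecoin; only its CID
// goes on-chain. Assumes a local IPFS node at the default API port.
const ipfs = create({ url: "http://127.0.0.1:5001" });

interface AgentMessage {
  from: string;   // agent id or address
  to: string;
  content: string;
}

// Store the whole transcript as one object and return its CID as a string.
async function storeTranscript(messages: AgentMessage[]): Promise<string> {
  const { cid } = await ipfs.add(JSON.stringify(messages));
  return cid.toString(); // this short string is what gets written on-chain
}
```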
