Create a blockchain-integrated, tailor-made GPT specific to your protocol
Our platform stands at the intersection of AI and blockchain technology, pioneering an innovative approach to GPT models that are specifically tailored to a variety of protocols. At the core of our offering is a user-centric marketplace where individuals can contribute their own custom, protocol-oriented GPTs. This not only enriches our platform but also nurtures a community-driven ecosystem that thrives on innovation. Each GPT model on our platform is a deep reservoir of knowledge, providing expert insight into its respective protocol, including intricate technical details and code-level understanding. For instance, engaging with a model like the Aave V3 Protocol GPT opens a window to comprehensive understanding, from technical nuances to coding intricacies. These models act as intelligent guides, simplifying and elucidating complex protocol functionality for users.

We leverage Cartesi's Linux-based virtual machine, which is pivotal in ensuring scalability and efficiency in DApp development. This approach enables full verification of our GPT models, their training datasets, and their model parameters. By doing so, we ensure that every aspect of the model architecture and data usage is not just transparent but also verifiable on the blockchain. This commitment to transparency and verification sets a new standard in AI and blockchain integration, building trust and credibility within our ecosystem.
Our login mechanism takes a wallet address and then displays a WorldCoin QR code. When the user completes authentication, the JSON result is sent to our smart contract to verify the user on-chain. After that, whenever the user wants to create a protocol, the creation process starts only if he/she is verified.
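As a minimal sketch of this login step, the helper below maps the JSON result returned by the WorldCoin widget onto the arguments the on-chain verifier expects. The field names (`merkle_root`, `nullifier_hash`, `proof`) follow World ID's IDKit result format, and `build_verify_args` itself is a hypothetical helper, not code from our repo:

```python
import json

def build_verify_args(wallet_address: str, idkit_result_json: str) -> dict:
    """Map the WorldCoin widget's JSON result onto the arguments our
    on-chain verifier contract expects (field names assumed from IDKit)."""
    result = json.loads(idkit_result_json)
    return {
        "signal": wallet_address,              # the wallet being verified
        "root": result["merkle_root"],         # Merkle root of the identity set
        "nullifierHash": result["nullifier_hash"],  # prevents double-verification
        "proof": result["proof"],              # the zero-knowledge proof itself
    }
```

The returned dict would then be passed as calldata to the verifier contract's verification method.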
To validate user uniqueness, we use WorldCoin's Proof of Personhood. In the first smart contract that interacts with our frontend, we receive the root, nullifierHash, and proof generated on the frontend, and we use them to verify that the user is unique and is interacting with us through our frontend. After verification, we let the user send a prompt to our personalized GPT model running in Cartesi. This happens through Hyperlane: our verifier contract sends the prompt, or the GPTData contract-creation command, via Hyperlane to our other contract on Scroll. Because Scroll's zkEVM nature provides proofs for each transaction, every step up to our LLM model is verifiable. These messages arrive at our Factory contract on Scroll and, depending on the command, either a new data contract is created to feed our model or the prompt is forwarded directly to the model.
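The routing step above — one Hyperlane message body, two possible commands at the Factory contract — can be sketched as follows. The single-byte command encoding and the function names are illustrative assumptions, not the actual message format our contracts use:

```python
# Hypothetical command bytes for the two actions the Factory contract handles.
CMD_CREATE = 0x01  # create a new GPTData contract to feed the model
CMD_PROMPT = 0x02  # forward a user prompt to the model

def encode_message(command: int, payload: str) -> bytes:
    """Pack a command byte followed by a UTF-8 payload into a message body,
    as the verifier contract might before dispatching via Hyperlane."""
    return bytes([command]) + payload.encode("utf-8")

def route_message(body: bytes) -> str:
    """Mirror the Factory contract's branching: inspect the command byte
    and decide which action to take on the Scroll side."""
    command, payload = body[0], body[1:].decode("utf-8")
    if command == CMD_CREATE:
        return f"create GPTData contract: {payload}"
    if command == CMD_PROMPT:
        return f"forward prompt to model: {payload}"
    raise ValueError(f"unknown command: {command}")
```

In the real system this branching happens in Solidity on Scroll; the sketch only shows the control flow.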
We used Hugging Face models in our first trial, but because the Cartesi Linux runtime uses the RISC-V architecture, some of the dependencies could not be compiled. For that reason we switched to the C++ version of LLaMA 2 (llama.cpp), running the LLaMA 2 7B model with 4-bit quantization, so we use less memory when loading the model. This approach solved our dependency issue, and our current DApp, which serves the custom GPT model, is almost ready to deploy as a Cartesi Machine. We also use inspect requests in our DApp to show what information we have stored, ensuring transparency.