
Pryv

We use Marlin's TEEs to run inference while protecting users' PII from LLM providers.

Project Description

It's a private chat interface, hosted on IPFS, that supports multiple models (Claude, OpenAI, etc.) and keeps users' PII out of the metadata associated with their requests. It's like DuckDuckGo, but for inference. It enables context switching, so you can take your chat history from one model and pass it into a new model without copy/pasting. There are future iterations of things that I want to do with this, like running a model locally to swap out sensitive data before the request ever leaves the device. That would be a patch job until something like FHE goes live!
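As a rough sketch of the context-switching idea (not the project's exact schema), the payload below carries the history of a prior Claude conversation into a GPT-4 request via a "context" field; the field names ("query", "model", "context") follow the example request shown later on this page.

# Hypothetical illustration: reuse chat history from one model session as the
# context for a request to a different model, with no copy/pasting.
claude_history = [
    {"role": "user", "content": "Summarize my travel notes."},
    {"role": "assistant", "content": "Here is a summary of your notes..."},
]

payload = {
    "query": "Continue, but focus on budget tips.",
    "model": "gpt-4",            # switch providers mid-conversation
    "context": claude_history,   # prior conversation travels with the request
}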

How it's Made

We use Django for the server and HTML/CSS for the frontend. We use Polygon for subscriptions and IPFS to host the site, with Thirdweb's wallet connect for wallet auth. We use Marlin's TEE network to deploy our enclave server into a secure environment that executes the inference requests.
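As a minimal sketch (not our exact code), the enclave's Django view might look like this: it forwards the user's query to the provider over a streaming HTTP call, so the provider only sees the enclave's API key and network identity rather than the user's. The OPENAI_API_KEY environment variable and the JSON field names are assumptions for illustration.

import json
import os

import requests
from django.http import StreamingHttpResponse
from django.views.decorators.csrf import csrf_exempt


@csrf_exempt
def stream_query(request):
    # Parse the client's request; "context" is assumed to already be a list
    # of {"role": ..., "content": ...} chat messages.
    body = json.loads(request.body)
    messages = body.get("context", []) + [
        {"role": "user", "content": body["query"]}
    ]

    # The provider only ever sees the enclave's credentials and IP, not the
    # end user's, which is what keeps PII out of the request metadata.
    upstream = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": body.get("model", "gpt-4"),
            "messages": messages,
            "stream": True,
        },
        stream=True,
    )

    # Relay the provider's streamed bytes back to the caller unchanged.
    return StreamingHttpResponse(
        upstream.iter_content(chunk_size=1024),
        content_type="text/event-stream",
    )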

Our TEE is hosted on Marlin, and you can send it an example request like the one below:

curl -v -X POST https://api.imbuefit.com/stream_query/ \
  -H "Content-Type: application/json" \
  -d '{ "query": "What is the capital of France?", "model": "gpt-4", "context": [] }'
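The same request can be issued from Python; this hedged sketch streams the reply as it arrives, assuming the endpoint returns newline-delimited chunks.

import requests

resp = requests.post(
    "https://api.imbuefit.com/stream_query/",
    json={"query": "What is the capital of France?", "model": "gpt-4", "context": []},
    stream=True,
)

# Print each chunk of the streamed answer as it arrives.
for line in resp.iter_lines(decode_unicode=True):
    if line:
        print(line)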

