
AutoWebChain

Decentralised fine-tuning of web agents and RAG on-chain, using Lit and Walrus

Created At

ETHGlobal San Francisco

Project Description

Websites are out-of-distribution data for LLMs: models aren't naturally optimized to handle such content efficiently. While LLMs can navigate websites, their performance can improve greatly with specialized fine-tuning. During our work we integrated support for models such as 4o-mini, SLMs (which we later removed for performance reasons), and Llama 3.2 B. We also explored optimizations for handling web data better and improving interaction across various web environments.
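To make the fine-tuning idea concrete, here is a minimal sketch of what one training record for a web agent might look like: a simplified page observation paired with the action the agent should take, serialized as one JSONL line per step. The field names (`observation`, `dom`, `action`, `type_text`) are illustrative assumptions, not the project's actual schema.

```python
import json

# Hypothetical training record: a simplified DOM snapshot plus the
# desired action. Field names are illustrative, not the real schema.
record = {
    "observation": {
        "url": "https://example.com/login",
        "dom": [
            {"id": 0, "tag": "input", "name": "user"},
            {"id": 1, "tag": "button", "text": "Sign in"},
        ],
    },
    "action": {"type": "type_text", "target": 0, "value": "alice"},
}

# One JSON object per line is a common fine-tuning dataset format.
line = json.dumps(record)
print(line)
```

Fine-tuning on many such (observation, action) pairs is what would shift web pages from out-of-distribution input toward data the model has actually been trained to act on.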

How it's Made

We used Walrus to simulate adding model weights to a network for efficient storage and retrieval as blobs. Users could download these blobs and run the models directly, letting them automate tasks in their browsers. The automation was based on a simplified DOM model inspired by Taxy AI's implementation, making it more user-friendly and streamlined for common web interactions. Along the way we added support for models such as 4o-mini, SLMs, and Llama 3.2 B; although the latter models were eventually removed for performance reasons, integrating them gave us valuable insights into handling model storage and execution. We also improved the compatibility between model blobs and browser automations, increasing overall flexibility and performance for end users.
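The "simplified DOM" idea can be sketched as follows: instead of feeding the model the full HTML, keep only the interactive elements and give each a small numeric id the model can reference in its actions (e.g. "click 1"). This is a minimal illustration of the general technique, not Taxy AI's or this project's actual implementation; the tag set and output fields are assumptions.

```python
from html.parser import HTMLParser

# Elements a web agent typically needs to act on (assumed set).
INTERACTIVE_TAGS = {"a", "button", "input", "select", "textarea"}

class SimplifiedDOM(HTMLParser):
    """Collect interactive elements, assigning each a numeric id
    so a model can refer to them compactly in its actions."""

    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        if tag in INTERACTIVE_TAGS:
            attrs = dict(attrs)
            self.elements.append({
                "id": len(self.elements),
                "tag": tag,
                # Prefer an accessible label or placeholder as the
                # element's visible hint for the model.
                "text": attrs.get("aria-label") or attrs.get("placeholder") or "",
                "name": attrs.get("name", ""),
            })

def simplify(html: str) -> list[dict]:
    """Reduce a page to a short list of actionable elements."""
    parser = SimplifiedDOM()
    parser.feed(html)
    return parser.elements

page = ('<div><h1>Login</h1><input name="user" placeholder="Username">'
        '<button>Sign in</button></div>')
for el in simplify(page):
    print(el)
```

Shrinking the page to a handful of numbered elements keeps prompts short and gives the model a stable, compact action space, which is the main reason agent frameworks use representations like this.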
