An AI-powered autonomous system that evaluates open source contributions by analyzing GitHub PRs and distributing $hackathon trading fees as weekly rewards. It combines Farcaster Frames, multi-agent AI, and smart contracts to create a self-sustaining developer ecosystem.
Prize Pool
$hackathon began as an experiment on Warpcast: a Clanker token whose trading fees fund weekly hackathons. After generating $8K from $2M in trading volume in its first week, and a first edition that drew 8 submissions, we faced a challenge:
How can this system run on its own, without the biases and limitations that come from having humans as the judges?
So our mission became transforming this platform into an autonomous system with the potential to reshape how we evaluate and reward open source contributions.
Our solution tackles complex technical challenges across multiple domains:
The user experience is streamlined through Farcaster Frames, the cutting-edge standard for crypto-social applications. Developers authenticate with both Farcaster and GitHub, submit their PRs, and receive feedback on them.
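To make that flow concrete, here is a minimal TypeScript sketch of the submission handoff, assuming the user is already authenticated with Farcaster and GitHub. The function names and the queueing step are hypothetical; only the GitHub REST API endpoint (`GET /repos/{owner}/{repo}/pulls/{number}`) is real.

```typescript
// Hypothetical submission handoff: parse the PR URL, pull structured metadata
// from GitHub, and queue the PR for AI evaluation.

interface Submission {
  farcasterFid: number; // Farcaster user id of the submitter
  prUrl: string;        // e.g. https://github.com/owner/repo/pull/123
}

interface PrSummary {
  title: string;
  additions: number;
  deletions: number;
  mergedAt: string | null;
}

// Fetch basic PR metadata from the public GitHub REST API so the AI judges
// receive structured context rather than a raw URL.
async function fetchPrSummary(prUrl: string, githubToken: string): Promise<PrSummary> {
  const match = prUrl.match(/github\.com\/([^/]+)\/([^/]+)\/pull\/(\d+)/);
  if (!match) throw new Error(`Not a GitHub PR URL: ${prUrl}`);
  const [, owner, repo, number] = match;

  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}/pulls/${number}`, {
    headers: {
      Authorization: `Bearer ${githubToken}`,
      Accept: "application/vnd.github+json",
    },
  });
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const pr = await res.json();

  return {
    title: pr.title,
    additions: pr.additions,
    deletions: pr.deletions,
    mergedAt: pr.merged_at,
  };
}

async function handleSubmission(sub: Submission, githubToken: string): Promise<void> {
  const summary = await fetchPrSummary(sub.prUrl, githubToken);
  // Hand off to the evaluation pipeline (queueing mechanism not shown here).
  console.log(`Queued PR "${summary.title}" from fid ${sub.farcasterFid} for evaluation`);
}
```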
Weekly leaderboards drive engagement and healthy competition, with top contributors automatically receiving shares of $hackathon's trading fees for that cycle.
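As an illustration of the payout step, the sketch below splits a weekly fee pool proportionally to score among the top-ranked contributors. The proportional-to-score model, the top-10 cutoff, and the field names are assumptions for illustration, not the project's confirmed formula.

```typescript
// Hypothetical weekly payout math: scores come from the AI judges, and the
// fee pool (in wei) is divided proportionally among the top N contributors.

interface RankedEntry {
  address: string; // payout address of the contributor
  score: number;   // aggregate evaluation score for the cycle
}

function computePayouts(
  entries: RankedEntry[],
  weeklyFeePool: bigint, // trading fees collected this cycle, in wei
  topN = 10,
): Map<string, bigint> {
  const winners = [...entries].sort((a, b) => b.score - a.score).slice(0, topN);
  const totalScore = winners.reduce((sum, e) => sum + e.score, 0);

  const payouts = new Map<string, bigint>();
  if (totalScore === 0) return payouts;

  for (const w of winners) {
    // Proportional share; scores are scaled to integers for bigint division.
    const share =
      (weeklyFeePool * BigInt(Math.round(w.score * 1000))) /
      BigInt(Math.round(totalScore * 1000));
    payouts.set(w.address, share);
  }
  return payouts;
}
```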
What makes our project unique is its multi-faceted flywheel effect:
Economic: Trading fees → Economic rewards → Quality contributions → Community growth → Attention for the token → More trading
Educational: AI feedback → Better development practices → Higher quality submissions → Content to educate hackers
Community: Weekly recognition → Increased participation → Stronger open source ecosystem
By combining AI evaluation with transparent rewards, we're not just automating hackathon judging: we're creating a practical, self-sustaining system that defines and incentivizes quality open source development.
We built a production-ready, multi-layer system that brings together AI, blockchain, and social integration.
The system currently gives no immediate feedback after a user submits a Pull Request through the Farcaster Frame. Changing this would create a sense of immediacy that builds trust in and attention toward the system. There is an interesting balance to find between making participants wait for the results of a given weekly hackathon and giving them the immediate feedback that delivers the rush of dopamine so important in today's world.
LLMs are not good with numbers. Improving the scoring accuracy of projects is an opportunity for specialized model fine-tuning, and for scaling this system into a product that helps companies evaluate the role of their contributors in the evolution of a codebase.
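One way to work around that weakness, sketched below under the assumption that the judges return rubric-level labels rather than raw numbers, is to have the model classify each criterion and compute the final score deterministically in code. The criteria, grades, and weights here are illustrative, not the project's actual rubric.

```typescript
// Hypothetical rubric-based scoring: the LLM only assigns labels per criterion;
// all arithmetic happens in deterministic code.

type Grade = "poor" | "fair" | "good" | "excellent";

interface RubricResult {
  codeQuality: Grade;
  testCoverage: Grade;
  impact: Grade;
}

const GRADE_POINTS: Record<Grade, number> = { poor: 0, fair: 1, good: 2, excellent: 3 };

const WEIGHTS: Record<keyof RubricResult, number> = {
  codeQuality: 0.4,
  testCoverage: 0.3,
  impact: 0.3,
};

// Weighted average of grade points, normalized to 0-100 for the leaderboard.
function scoreFromRubric(r: RubricResult): number {
  let weighted = 0;
  for (const key of Object.keys(WEIGHTS) as (keyof RubricResult)[]) {
    weighted += WEIGHTS[key] * GRADE_POINTS[r[key]];
  }
  return Math.round((weighted / 3) * 100);
}

// Example: a strong submission with fair test coverage.
const example = scoreFromRubric({ codeQuality: "excellent", testCoverage: "fair", impact: "good" });
console.log(example); // 70
```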
There is potential for enhanced community validation mechanisms, with the challenge that most people don't have enough time to go through production implementations and codebases. This is where AI excels as a solution for this project: it can read and reason about far more code than any individual reviewer.
There is scope to expand the educational feedback system and to transform $hackathon into a platform that educates developers (and people who don't yet call themselves devs) with the learnings of each cycle.
Community: There is a big opportunity to build community around this project, so that the leaderboard and the prizes awarded become just one part of the flywheel effect we are proposing. There is value in earning money for making the leaderboard in a given week, but there is also value in having those hackers serve as examples for the rest of the community to follow and learn from.
Our solution demonstrates technical sophistication in combining emerging technologies (AI, Frames, smart contracts) to solve a real-world problem, while maintaining a focus on usability and practical application. The result is a functional platform that's already processing submissions and distributing rewards, with clear paths for future enhancement and scaling.