Using Zero-Knowledge Machine Learning and zkTLS to Create Trustless Anomaly Detection


The Core Problem
The "Impossible Trinity" of DeFi security states that you can achieve only two of three critical properties: sophisticated detection (complex ML models that catch attacks), decentralization (trustless operation, no oracles), and low cost (affordable gas fees). Traditional solutions always sacrifice one: centralized oracles are sophisticated and cheap but require trust; on-chain ML would be sophisticated and trustless but would cost $15,000+ per transaction in gas; simple allowlists are cheap and trustless but cannot detect novel attack patterns. This trilemma forced DeFi protocols to accept security theater (simple rules), centralization (trusted oracles), or massive losses (no protection at all).
How Zero-Knowledge Breaks The Trinity
Redfish achieves all three properties simultaneously through Zero-Knowledge Machine Learning (ZKML). The key insight is separating computation from verification: the sophisticated XGBoost fraud detection model runs off-chain (taking 30-60 seconds but costing zero gas) and generates a cryptographic proof of correct execution, which the smart contract then verifies on-chain (costing ~$375 in gas, dropping to ~$4 with optimizations). The ZK proof is computationally infeasible to forge: forging one would require breaking elliptic-curve cryptographic assumptions of the same kind that secure blockchain signatures, so verification is completely trustless. The result is enterprise-grade ML fraud detection (99.97% accuracy) with on-chain verification that is 40x cheaper than naive on-chain execution, all without trusting any centralized party.
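To make the cost claim concrete, the arithmetic behind the "40x" figure can be sketched from the dollar estimates quoted above. These are the article's estimates, not measured benchmarks:

```python
# Back-of-the-envelope cost comparison using the figures quoted above.
# All dollar values are the article's estimates, not measured benchmarks.

naive_onchain_ml_usd = 15_000   # running the ML model directly in the EVM
zk_verification_usd = 375       # verifying a ZK proof of the same inference
optimized_verification_usd = 4  # with batching / proof aggregation

print(f"{naive_onchain_ml_usd / zk_verification_usd:.0f}x cheaper")         # → 40x cheaper
print(f"{naive_onchain_ml_usd / optimized_verification_usd:.0f}x cheaper")  # → 3750x cheaper
```

The 30-60 second proving time stays off-chain, so only the (much smaller) verification cost ever hits the gas market.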
The Paradigm Shift
This breakthrough is analogous to how Layer 2 rollups addressed the blockchain scalability trilemma: by moving computation off-chain while keeping verification on-chain. Before ZKML, complex decision-making on blockchains was fundamentally limited by the gas cost of execution. Now smart contracts can leverage arbitrarily sophisticated computation (ML models, complex algorithms, external data) while remaining fully decentralized and economically viable. Verification costs are also falling rapidly as ZK technology matures: from ~$375 today to a projected ~$0.37 within 2-3 years through proof aggregation, hardware acceleration, and L2 deployment.
Why This Matters
Breaking the Impossible Trinity unlocks entirely new categories of blockchain applications that were theoretically impossible before: undercollateralized DeFi lending with on-chain credit scores, sybil-resistant airdrops using behavior analysis, privacy-preserving KYC compliance, MEV-resistant trading, and autonomous fraud prevention like Redfish. This isn't incrementally better—it's a fundamental expansion of what's computationally feasible in trustless systems. Redfish demonstrates that the "impossible" constraint that shaped all of DeFi security architecture is now obsolete, opening the door to an "Intelligence Layer" where every smart contract can make sophisticated, context-aware decisions while remaining fully decentralized and verifiable.
The Implementation
We start with a minimum viable example: an anomaly detection model trained on Ethereum data to detect suspicious transactions. The model is small enough to be compiled into a ZK circuit and verified atomically on-chain. We source trusted data points, such as historical account data, from RPC endpoints secured by vlayer's zkTLS system. Once the inference is verified on-chain, it can trigger downstream logic. This fundamentally raises the computational expressivity of a smart contract to that of any currently circuitizable machine learning model.
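As a sketch of what the model's inputs might look like, the snippet below builds a fixed-size feature vector from the kind of historical account data a zkTLS-notarized RPC response could supply. The field and feature names (`age_blocks`, `burstiness`, etc.) are illustrative assumptions, not the actual feature schema; the point is that a ZK circuit needs a static, purely numeric input shape:

```python
from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    """Historical account data as it might be fetched from an RPC endpoint
    (and notarized via zkTLS). Field names are illustrative, not the
    project's actual feature schema."""
    age_blocks: int        # blocks since the account's first transaction
    tx_count: int          # total transactions sent
    avg_value_eth: float   # mean value of outgoing transactions
    max_value_eth: float   # largest single outgoing transaction

def feature_vector(a: AccountSnapshot) -> list[float]:
    # ZK circuits require a static input shape, so every signal must be
    # flattened into a fixed-length numeric vector before circuitization.
    tx_per_kblock = a.tx_count / max(a.age_blocks / 1000, 1e-9)
    burstiness = a.max_value_eth / max(a.avg_value_eth, 1e-9)
    return [float(a.age_blocks), float(a.tx_count), tx_per_kblock, burstiness]

# A young account suddenly moving a large amount: the kind of pattern an
# anomaly model can weigh, but a static allowlist cannot.
fresh_whale = AccountSnapshot(age_blocks=50, tx_count=40,
                              avg_value_eth=1.0, max_value_eth=500.0)
print(feature_vector(fresh_whale))
```

Anything variable-length or categorical has to be reduced to numbers like these before the model (and hence the circuit) can consume it.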
We use EZKL as the ZKML engine, which circuitizes the PyTorch-exported model trained on an Ethereum fraud dataset. The inputs to this model at inference time are delivered via vlayer, which provides provable notarization of the input variables, i.e., guarantees that they are not spoofed by the transaction sender. After a ZKML proof is generated, a verifier contract is deployed on-chain; a transaction sent to that verifier then verifies the model's inference without any third party. Finally, downstream logic, such as a Uniswap hook, can be triggered conditionally on the outcome of the inference.
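The end-to-end flow (prove off-chain, verify on-chain, gate downstream logic) can be mocked in miniature. Everything below is a plain-Python sketch: `ProofVerifier` stands in for the EZKL-generated Solidity verifier contract, `mock_prove` for off-chain proof generation, and `swap_guard` for a Uniswap hook. The hash comparison is a placeholder for real pairing-based proof verification, and none of these names come from the actual codebase:

```python
import hashlib

class ProofVerifier:
    """Mock of the on-chain verifier contract. A real EZKL verifier checks
    the proof against a verifying key with elliptic-curve pairing math,
    not a hash comparison."""
    def __init__(self, verifying_key: bytes):
        self.vk = verifying_key

    def verify(self, public_inputs: bytes, proof: bytes) -> bool:
        return proof == hashlib.sha256(self.vk + public_inputs).digest()

def mock_prove(vk: bytes, public_inputs: bytes) -> bytes:
    # Stands in for off-chain EZKL proving (the 30-60 s step in the text).
    return hashlib.sha256(vk + public_inputs).digest()

def swap_guard(verifier: ProofVerifier, inference_is_fraud: bool,
               public_inputs: bytes, proof: bytes) -> str:
    # Stands in for a Uniswap hook: the swap proceeds only if the proof
    # verifies AND the proven model output is "not fraud".
    if not verifier.verify(public_inputs, proof):
        return "revert: invalid proof"
    if inference_is_fraud:
        return "revert: flagged as fraudulent"
    return "swap executed"

vk = b"model-verifying-key"
inputs = b"fraud=0"  # public inputs encode the model's claimed output
verifier = ProofVerifier(vk)
proof = mock_prove(vk, inputs)
print(swap_guard(verifier, False, inputs, proof))    # → swap executed
print(swap_guard(verifier, False, inputs, b"junk"))  # → revert: invalid proof
```

The structural point survives the mock: downstream logic never has to trust the prover, only the verifier's check, so a forged inference fails at the first gate.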

