Itô Protocol is a Stochastic AMM that simulates possible future prices with a Geometric Brownian Motion (GBM) model and adjusts its internal mechanics on the fly to better handle volatility and market risk.
What are Stochastic AMMs?
A Stochastic AMM extends traditional AMM models (like Uniswap) by incorporating random price evolution into its core design.
- Instead of using deterministic formulas (e.g., constant-product), it uses probabilistic models to simulate future price movement.
- By sampling a randomized “effective price” at each trade, it adapts parameters such as fees, spreads, and liquidity routing based on modeled risk, giving it trader-like adaptability.
Discrete GBM in Itô Protocol
In practice, Itô Protocol uses a discrete snapshot at swap time (a code sketch follows the term breakdown below):
EffectivePrice = P_Market * exp(-σ²·Δt/2 + σ·√Δt·Z₀)
- Convexity Adjustment (-σ²·Δt/2): Corrects the log-normal skew so the expectation isn't biased upward
- Volatility Scaling (σ·√Δt): Translates the annualized σ to the chosen time window Δt
- Random Shock (Z₀ ~ N(0,1)): Applies random variation in line with GBM
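A minimal Python sketch of this snapshot, assuming σ is annualized and Δt is expressed in years (function and parameter names are illustrative, not the protocol's actual interface):

```python
import math
import random

def effective_price(market_price: float, sigma: float, dt: float) -> float:
    """One discrete GBM snapshot of the effective price at swap time."""
    z = random.gauss(0.0, 1.0)            # random shock Z0 ~ N(0, 1)
    drift = -0.5 * sigma ** 2 * dt        # convexity adjustment
    shock = sigma * math.sqrt(dt) * z     # volatility scaled to the window
    return market_price * math.exp(drift + shock)

# Illustrative call: $3,000 market price, 60% annualized volatility, 1-hour window
print(effective_price(3000.0, 0.60, dt=1 / (365 * 24)))
```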
Liquidity
TokenB / TokenA = σ * currentRatio + (1 - σ) * oracleRatio
currentRatio = reserveA / reserveB
oracleRatio = 1 / Price
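A sketch of that blend, assuming Price is the oracle price quoted in the same units as the pool ratio and assuming σ is clamped to [0, 1] when used as a weight (the protocol may normalize it differently):

```python
def blended_ratio(reserve_a: float, reserve_b: float,
                  oracle_price: float, sigma: float) -> float:
    """Blend the pool's current ratio with the oracle ratio, weighted by sigma."""
    weight = min(max(sigma, 0.0), 1.0)       # clamping is an assumption here
    current_ratio = reserve_a / reserve_b    # currentRatio = reserveA / reserveB
    oracle_ratio = 1.0 / oracle_price        # oracleRatio = 1 / Price
    return weight * current_ratio + (1.0 - weight) * oracle_ratio
```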
Fee Calculation
Fee = Base + σ * VolMultiplier + (TradeSize / Reserves) * DepthFactor
- Increases during high volatility
- Scales with trade size relative to pool depth
- Compensates LPs for increased risk (see the sketch after this list)
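A direct translation of the fee formula; the numbers below are illustrative only, not the protocol's actual parameters:

```python
def swap_fee(base: float, sigma: float, vol_multiplier: float,
             trade_size: float, reserves: float, depth_factor: float) -> float:
    """Dynamic fee: base rate plus volatility and trade-depth components."""
    return base + sigma * vol_multiplier + (trade_size / reserves) * depth_factor

# Example: 0.30% base fee, 60% annualized vol, trade equal to 1% of reserves
print(swap_fee(0.003, 0.60, 0.002, 10_000, 1_000_000, 0.01))  # -> 0.0043 (0.43%)
```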
Component Overview
- Drift Adjustment Term (-σ²·Δt/2):
  - Compensates for Jensen's inequality in lognormal distributions
  - Ensures E[S_t] = S₀e^{μt} (the martingale property when μ = 0)
  - Without this term, prices would artificially drift upward (demonstrated in the check after this list)
- Volatility Scaling (σ·√Δt):
  - Annualized volatility scaled to the time period
  - The square-root law comes from variance scaling in Brownian motion
  - Ensures consistency across timeframes
- Random Shock (Z₀):
  - Standard normal variable (mean 0, standard deviation 1)
  - Captures unpredictable market movements
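A quick Monte Carlo check of the convexity point above; the volatility, window, and sample size are arbitrary illustration values:

```python
import math
import random

sigma, dt, n = 0.60, 30 / 365, 200_000
corrected = sum(
    math.exp(-0.5 * sigma**2 * dt + sigma * math.sqrt(dt) * random.gauss(0, 1))
    for _ in range(n)) / n
uncorrected = sum(
    math.exp(sigma * math.sqrt(dt) * random.gauss(0, 1))
    for _ in range(n)) / n
print(corrected)    # ~1.000: the corrected snapshot is unbiased
print(uncorrected)  # ~1.015: omitting the term biases prices upward
```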
However, GBM is not a perfect fit for every asset. Alternatives include:
- Ornstein-Uhlenbeck (OU) Process: For mean-reverting assets such as stablecoins (sketched after this list).
- Jump-Diffusion Models: To account for sudden market crashes.
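For comparison, a minimal Euler-Maruyama step of an OU process; the parameter names and values are illustrative and not part of the protocol:

```python
import math
import random

def ou_step(price: float, mean_price: float, theta: float,
            sigma: float, dt: float) -> float:
    """One Euler-Maruyama step of a mean-reverting (OU) price process."""
    z = random.gauss(0.0, 1.0)
    return price + theta * (mean_price - price) * dt + sigma * math.sqrt(dt) * z

# e.g. a stablecoin trading at $0.98 pulled back toward its $1.00 peg
print(ou_step(0.98, 1.00, theta=5.0, sigma=0.02, dt=1 / 365))
```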
Volatility Estimation
The model needs an estimate of the volatility σ. We compute it from historical price data, using price feeds to measure realized volatility over a recent window (e.g., the last 30 days).
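A sketch of that estimate from daily closing prices; annualizing with 365 periods per year is an assumption (crypto markets trade continuously), and the function name is illustrative:

```python
import math

def realized_volatility(daily_closes: list[float], periods_per_year: int = 365) -> float:
    """Annualized realized volatility from a window of daily closing prices."""
    log_returns = [math.log(b / a) for a, b in zip(daily_closes, daily_closes[1:])]
    mean = sum(log_returns) / len(log_returns)
    variance = sum((r - mean) ** 2 for r in log_returns) / (len(log_returns) - 1)
    return math.sqrt(variance * periods_per_year)

# Passing the last 31 daily closes yields 30 return observations (a 30-day window)
```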
Why 30-day volatility?
- Statistical Stability: 30 days of data provides a more stable and reliable measure of volatility. 24h volatility can be extremely noisy and may overreact to short-term events (e.g., news, market manipulation).
- Industry Standard: In traditional finance, 30-day (or 1-month) volatility is widely used for options pricing and risk management.
- Mean-Reverting Properties: Volatility itself is mean-reverting. Using a longer time frame helps capture the "typical" volatility level rather than transient spikes.
However, for highly volatile assets or during market crises, shorter time frames (like 24h) might be more responsive. The choice depends on the asset and risk tolerance.
Why not use 24h?
- Overfitting to Noise: 24h volatility can be misleading. A single day of high volatility might not represent the asset's true risk profile.
- Manipulation Risk: Short-term volatility is easier to manipulate with large trades.
- Inconsistency: If we update the volatility too frequently (e.g., every block), it might lead to erratic fee changes and pricing.