Why Fixed-Point Arithmetic is the Hardest Part of ZK

Your f64 has 53 bits. Your field has 254. Here's where they collide -- and why fixed-point arithmetic is the single hardest problem in ZK circuit design.

Fixed-point arithmetic visualization: decimal precision fragmenting into field elements

A binary64 float gives you 53 bits of significand precision. A BN254 scalar field gives you roughly 254 bits. The hard part of ZK engineering is not field arithmetic itself. It is forcing real-valued systems to survive the translation between those two worlds without overflow, witness drift, or silent precision loss.

The Problem No One Talks About

Every tutorial on ZK circuits shows you how to prove a + b = c. Maybe a hash preimage. Maybe a Merkle proof. All of these operate on integers that fit cleanly into a finite field.

Then you try to prove something physical -- a temperature, a pressure, a velocity -- and everything breaks.

Physics code is written in floating point. ZK circuits are verified over a finite field. There is no f64 inside a PLONK constraint. There is no IEEE 754 rounding. There are only integers modulo a large prime.

So you scale.

The Scaling Factor: Why 10^30?

In our zk-physics project -- where we prove a sonoluminescence simulation in zero knowledge -- every physical quantity is represented as:

value_scaled = round(value_float × 10³⁰)

A bubble radius of 5 micrometers becomes 5_000_000_000_000_000_000_000_000 (5 × 10²⁴ in scaled representation). A pressure of 101,325 Pa becomes 101_325_000_000_000_000_000_000_000_000_000_000.

Why 10^30 specifically? Three constraints:

Precision floor: Sonoluminescence spans a 200× pressure range and 170× temperature range. The bubble radius shrinks by 100×. We need enough digits to track changes across five orders of magnitude without rounding errors accumulating into wrong answers.

Overflow ceiling: When you multiply two scaled values, the intermediate product hits 10^60. The BN254 scalar field modulus is 21888242871839275222246405745257275088548364400416034343698204186575808495617, which is about 2^254 and about 2.19 × 10^76. So 10^60 still leaves about 16 decimal orders of magnitude of headroom before field wraparound becomes a risk for these intermediate products. Push the scaling factor much higher, and that safety margin shrinks fast.

The inversion problem: Division in a ZK circuit isn't floor(a/b). The field only offers multiplication by a modular inverse: a × b⁻¹ mod p. The two agree only when b divides a exactly. The same trap appears when rescaling after a multiply: your witness generator computes floor(a × b / S), while a naive circuit constraint enforces a × b × S⁻¹ mod p. If these diverge, your proof is invalid.
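The divergence is easy to see with toy numbers. A sketch over a small prime p = 10007 and scale S = 10 (illustrative stand-ins for the BN254 modulus and 10^30 -- these names and helpers are not from the project):

```rust
// Toy demo: floor division vs. field "division" by modular inverse.
// p = 10007 and S = 10 are tiny stand-ins for BN254 and 10^30.
const P: u64 = 10_007;
const S: u64 = 10;

// Modular exponentiation by repeated squaring.
fn pow_mod(mut base: u64, mut exp: u64, m: u64) -> u64 {
    let mut acc = 1;
    base %= m;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % m;
        }
        base = base * base % m;
        exp >>= 1;
    }
    acc
}

// Fermat inverse: s^(p-2) mod p, valid because P is prime.
fn inv_mod(s: u64) -> u64 {
    pow_mod(s, P - 2, P)
}

fn main() {
    let s_inv = inv_mod(S);
    for (a, b) in [(7u64, 5u64), (4, 5)] {
        let witness = a * b / S;              // what the witness generator computes
        let in_field = a * b % P * s_inv % P; // what the field constraint enforces
        println!(
            "a*b = {:2}: floor = {}, field = {}, agree = {}",
            a * b, witness, in_field, witness == in_field
        );
    }
}
```

For 7 × 5 = 35, floor division gives 3 while the field gives 5007 -- not even close. For 4 × 5 = 20, where S divides the product exactly, both give 2. That divisibility condition is the entire game.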

Split Multiplication: Avoiding i128 Overflow

The Rust witness generator can't multiply two 10^30-scale integers directly -- i128 maxes out at ~1.7 × 10^38, and two typical scaled values multiplied gives 10^60.

Our solution splits each value into three 10^10-digit limbs:

const SPLIT: i128 = 10_000_000_000; // 10^10

let a0 = a_abs % SPLIT;
let a1 = (a_abs / SPLIT) % SPLIT;
let a2 = a_abs / (SPLIT * SPLIT);

Since SPLIT^3 = 10^30 = SCALE, the product a × b / SCALE can be reassembled from partial products where each a_i × b_j ≤ (10^10 - 1)² ≈ 10^20 -- well within u128.

The key insight: group the partial products by weight i + j. Terms with i + j = 3 land in the result directly, the single i + j = 4 term lands shifted up by SPLIT, and terms with i + j = 2 contribute their top digits via floor-division by SPLIT. The i + j ≤ 1 terms are almost entirely fractional, affecting at most the lowest digit through carries. This is schoolbook multiplication with the division by SCALE built in.
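That reassembly can be written out end to end. A self-contained sketch (function names are illustrative, not the project's exact code; it assumes the final quotient fits in i128, i.e. inputs below roughly 10^34):

```rust
// Sketch: floor(a * b / 10^30) via 10^10-digit limbs, never forming the
// full 10^60 intermediate product. Truncates toward zero for mixed signs.
const SPLIT: i128 = 10_000_000_000;        // 10^10
const SCALE: i128 = SPLIT * SPLIT * SPLIT; // 10^30

fn mul_scaled(a: i128, b: i128) -> i128 {
    let sign: i128 = if (a < 0) != (b < 0) { -1 } else { 1 };
    let (a_abs, b_abs) = (a.abs(), b.abs());

    // Split each magnitude into three base-10^10 limbs (top limb uncapped).
    let split3 = |x: i128| (x % SPLIT, (x / SPLIT) % SPLIT, x / (SPLIT * SPLIT));
    let (a0, a1, a2) = split3(a_abs);
    let (b0, b1, b2) = split3(b_abs);

    // Partial products grouped by limb weight i + j; each a_i * b_j is ~10^20.
    let w0 = a0 * b0;                     // weight SPLIT^0
    let w1 = a1 * b0 + a0 * b1;           // weight SPLIT^1
    let w2 = a2 * b0 + a1 * b1 + a0 * b2; // weight SPLIT^2
    let w3 = a2 * b1 + a1 * b2;           // weight SPLIT^3 = SCALE
    let w4 = a2 * b2;                     // weight SPLIT^4

    // Exact floor of the low part divided by SCALE, one radix digit at a time,
    // so carries from the fractional terms propagate correctly.
    let carry = (w2 + (w1 + w0 / SPLIT) / SPLIT) / SPLIT;

    sign * (w4 * SPLIT + w3 + carry)
}

fn main() {
    // (2 * 10^30 scaled) x (3 * 10^30 scaled) -> 6 * 10^30 scaled
    println!("{}", mul_scaled(2 * SCALE, 3 * SCALE));
}
```

Multiplying by SCALE itself is the identity, which makes a handy sanity check: mul_scaled(x, SCALE) == x for any in-range x.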

Iterative Long Division: The Other Hard Part

Scaled division -- computing (a × S) / b -- has the opposite problem. a × SCALE can reach 10^62, far beyond even u128.

We use iterative long division with a chunk factor of 10^6:

const CHUNK: u128 = 1_000_000; // 10^6
const ROUNDS: usize = 5;       // CHUNK^5 = 10^30 = SCALE

let mut acc: u128 = a_abs / b_abs;
let mut rem: u128 = a_abs % b_abs;

for _ in 0..ROUNDS {
    let wide = rem * CHUNK;
    let digit = wide / b_abs;
    rem = wide % b_abs;
    acc = acc * CHUNK + digit;
}

Five rounds of multiply-by-10^6-and-divide, and we've effectively multiplied by 10^30 without ever forming the full product. The maximum intermediate value is remainder × CHUNK < b × 10^6 ≈ 10^38, which fits in u128.
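Wrapped as a function (the name div_scaled is assumed here, not the project's), the invariant is easy to test: it returns floor(a × 10^30 / b):

```rust
// Sketch: floor(a * 10^30 / b) by iterative long division, never forming
// the full a * 10^30 product. Assumes b_abs > 0.
const CHUNK: u128 = 1_000_000; // 10^6
const ROUNDS: usize = 5;       // CHUNK^5 = 10^30 = SCALE

fn div_scaled(a_abs: u128, b_abs: u128) -> u128 {
    let mut acc = a_abs / b_abs;
    let mut rem = a_abs % b_abs;
    for _ in 0..ROUNDS {
        let wide = rem * CHUNK;           // < b_abs * 10^6, fits in u128
        acc = acc * CHUNK + wide / b_abs; // append six more quotient digits
        rem = wide % b_abs;
    }
    acc
}

fn main() {
    // 3 / 2 in scaled form: 1.5 * 10^30
    println!("{}", div_scaled(3, 2));
}
```

Dividing 1 by 3 this way yields thirty 3s -- the scaled fixed-point rendering of one third -- which is a good quick check that all five rounds run.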

The Float-to-Scaled Trap

Here's a bug we actually shipped and had to fix: float_to_scaled() was implemented as:

pub fn float_to_scaled(v: f64) -> i128 {
    (v * SCALE as f64) as i128  // DON'T DO THIS
}

The problem: f64 has 53 bits of significand. SCALE = 10^30 needs about 100 bits, so it isn't even exactly representable as an f64 -- the nearest double is a multiple of 2^47. That means v * 1e30 rounds the scale factor, rounds the product, and then the cast truncates: the bottom ~47 bits are silently gone. For v = 5e-6 (5 micrometers), v * 1e30 = 5e24 is nearly harmless -- the result occupies 82 bits, but the value has only one significant digit. For v = 0.0728 (surface tension), v * 1e30 ≈ 7.28e28 needs 94 bits and you keep only 53: roughly 12 decimal digits lost.

The fix splits into integer and fractional parts:

fn float_to_scaled_big(v: f64) -> BigInt {
    let int_part = v.trunc() as i64;                 // i64, so negative values survive
    let frac_part = v - int_part as f64;             // in (-1, 1)
    let int_scaled = BigInt::from(int_part) * BigInt::from(10u64).pow(30);
    let frac_hi = (frac_part * 1e15).round() as i64; // 15 digits: within f64 precision
    let frac_scaled = BigInt::from(frac_hi) * BigInt::from(10u64).pow(15);
    int_scaled + frac_scaled
}

Two stages, each within f64's 53-bit precision. The integer part gets exact 30-digit scaling. The fractional part keeps 15 decimal digits (comfortably inside f64's capacity, and 10^15 -- unlike 10^30 -- is exactly representable) before being scaled by 10^15. Together they preserve essentially everything the original double actually knew, instead of discarding digits in a single oversized multiply.
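The same two-stage trick can be exercised without a bigint dependency. A sketch using i128 in place of BigInt (valid here because every value involved stays far below i128::MAX; the function name is illustrative):

```rust
// Sketch: two-stage float -> scaled-integer conversion using i128.
// Stage 1 scales the integer part exactly; stage 2 keeps 15 fractional
// digits, the most f64 can reliably carry.
fn float_to_scaled_i128(v: f64) -> i128 {
    let int_part = v.trunc() as i128;                 // exact for |v| < 2^53
    let frac_part = v - v.trunc();                    // in (-1, 1), exact subtraction
    let frac_hi = (frac_part * 1e15).round() as i128; // 10^15 is exact in f64
    int_part * 10_i128.pow(30) + frac_hi * 10_i128.pow(15)
}

fn main() {
    // Surface tension 0.0728: the two-stage conversion lands exactly on 7.28e28.
    println!("{}", float_to_scaled_i128(0.0728));
    // The naive one-shot cast cannot: 7.28e28 needs 94 bits, so no f64 equals it.
    println!("{}", (0.0728_f64 * 1e30) as i128);
}
```

Comparing the two printed values shows the one-shot cast drifting in the low digits while the two-stage version hits 72_800_000_000_000_000_000_000_000_000 on the nose.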

What's Inside the Circuit

In the actual halo2 constraint system, the field arithmetic is cleaner -- but the subtlety runs deeper:

// Scaled multiply gate: a * b == c * S
meta.create_gate("scaled_mul", |meta| {
    let s = meta.query_selector(s_mul);
    let a_val = meta.query_advice(a, Rotation::cur());
    let b_val = meta.query_advice(b, Rotation::cur());
    let c_val = meta.query_advice(c, Rotation::cur());
    vec![s * (a_val * b_val - c_val * scale.clone())]
});

The prover supplies c as a witness. The gate checks a × b - c × S = 0 over the field; if the prover lies about c, the constraint fails and the proof is invalid. But look at what the constraint actually enforces: exact field equality, i.e. c = a × b × S⁻¹ mod p -- not truncating integer division. An honest witness c = floor(a × b / S) satisfies it only when S divides a × b exactly; the standard resolution is to witness the remainder as well, enforce a × b = c × S + r, and range-check 0 ≤ r < S. Either way, the mismatch is exactly why witness-generation code has to mirror the circuit semantics with care.

The tension between exact field arithmetic (in the circuit) and approximate integer arithmetic (in the witness generator) is where every subtle bug lives.

The Lesson

Fixed-point arithmetic in ZK is hard not because the math is complicated -- it's schoolbook multiplication. It's hard because you're maintaining two parallel computation models (field vs. integer) that must agree on every intermediate value, while operating under constraints (no floats, no overflow, no rounding errors) that don't exist in normal programming.

Get the scaling wrong, and your proofs are invalid. Get the precision wrong, and your witness doesn't match. Get the division wrong, and field arithmetic silently gives different answers than integer arithmetic.

Every ZK project that touches real-world quantities will hit this wall. Plan for it.


This is Part 1 of an 8-part series on building zero-knowledge proofs for physics simulations. Part 2: Building Custom halo2 Chips dives into the full constraint system -- available to Pro members.

New here? Subscribe free for the full series, or grab the halo2 circuit guide to build your first chip step by step.
