Making Hard Math Provable: Introducing Efficient Proofs for Non-Polynomial Functions

February 23, 2026

Zero-knowledge proofs have become one of the most powerful tools in cryptography. They let you prove a computation was done correctly without revealing private inputs. That is why ZK is used for privacy, compliance, and scaling systems where trust cannot be assumed. Yet a practical obstacle remains: many real computations rely on mathematical functions that are surprisingly expensive to prove.

Our newest research paper, “Efficiently Provable Approximations for Non-Polynomial Functions” (Sridhar, Srinivasan, Papadopoulos, Papamanthou), directly tackles this challenge. The paper introduces a general framework that makes a large class of “hard” functions both accurate and efficient to prove inside zk-SNARKs, with strong empirical results in a production proof stack (Noir and Barretenberg).

What are “Non-Polynomial Functions” and Why Are They Difficult to Prove?

Many systems depend on functions like:

  • Trigonometric functions (sin, cos, tan) used in navigation, physics, and coordinate transforms
  • Exponentials and logs used in finance and probability
  • Activation functions like sigmoid and GeLU used in machine learning
  • Non-elementary functions like erf used in statistics and scientific computing 

These are called non-polynomial functions because they cannot be written as a simple polynomial of the form aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₀. ZK-SNARKs are extremely good at proving polynomial-style arithmetic. However, they are much less efficient when asked to prove “real number math” that appears natural in conventional software.

Directly proving floating-point operations inside a SNARK is expensive and can require hundreds to thousands of constraints for basic operations. This is because most proof systems operate over a finite field (think “a special kind of integer math with wraparound”), while real-world software uses floating point numbers to represent decimals. 

In practice, many systems represent real numbers as fixed-point values using quantization. For example, 3.1415 may be stored as the integer 31415 with an implied decimal position. This is fast and deterministic, but now you need to approximate non-polynomial functions efficiently and accurately in a fixed-point model.
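To make the fixed-point idea concrete, here is a minimal sketch in Python. The 16-bit fractional scale and round-to-nearest rescaling are illustrative choices, not the paper's parameters:

```python
# Sketch of fixed-point quantization with a hypothetical 16-bit fractional
# scale. The scale factor and rounding mode are illustrative only.
SCALE_BITS = 16
SCALE = 1 << SCALE_BITS  # 2^16 = 65536

def quantize(x: float) -> int:
    """Encode a real number as an integer with an implied binary point."""
    return round(x * SCALE)

def dequantize(q: int) -> float:
    """Recover the approximate real value."""
    return q / SCALE

def fxp_mul(a: int, b: int) -> int:
    """Fixed-point multiply: the raw product carries 2*SCALE_BITS fractional
    bits, so we rescale (and round) back down - this is where error enters."""
    return (a * b + (SCALE >> 1)) >> SCALE_BITS

pi_q = quantize(3.1415)   # stored as a plain integer
two_q = quantize(2.0)
product = dequantize(fxp_mul(pi_q, two_q))  # ~6.283
```

Note that every fixed-point multiplication ends with a rescaling step that rounds; this is exactly why chains of multiplications accumulate error, a point the paper's framework is built around.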

Today’s ZK Systems (and How We’re Improving Them)

There are three common approaches used in ZK systems today:

  1. Lookup tables
    Precompute a big table of outputs and prove the correct entry was selected. This can be accurate, but tables get huge for large input ranges. For 32-bit domains, tables can become too large to store or prove efficiently.
  2. Piecewise approximations
    Divide the input into segments and approximate each segment with a linear function. This can approximate almost anything, but higher accuracy requires more segments and higher proving costs.
  3. Taylor series and polynomial expansions
    Approximate the function with a polynomial expansion. This can be efficient near the approximation point, but high accuracy often requires higher-degree polynomials, increasing multiplications and worsening rounding behavior in quantized arithmetic.
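The Taylor tradeoff in the third baseline is easy to see numerically. Here is a small sketch using the standard degree-5 truncation of sin around 0 (a textbook example, independent of the paper's benchmarks):

```python
import math

# Truncated Taylor series for sin(x) around 0: x - x^3/6 + x^5/120.
# Illustrates the baseline's tradeoff: excellent near the expansion
# point, poor far from it - and pushing the degree higher to fix the
# far-field error means deeper multiplication chains.
def taylor_sin(x: float) -> float:
    return x - x**3 / 6 + x**5 / 120

near_err = abs(taylor_sin(0.5) - math.sin(0.5))  # tiny near 0
far_err = abs(taylor_sin(3.0) - math.sin(3.0))   # large far from 0
```

The same five terms that are nearly exact at x = 0.5 miss sin(3.0) by a wide margin, which is why accuracy over a large quantization domain is the hard case.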

Key Breakthrough: Depth Matters

Our research introduces a framework designed around a critical practical constraint: multiplicative depth.

  • Multiplicative depth is a circuit property describing how many layers of multiplications depend on each other.
  • In quantized computation, deeper multiplication chains amplify rounding error.
  • Many existing approximation techniques increase depth when you push for higher accuracy.

Our main contribution is a method that increases accuracy largely by increasing the number of terms in a sum, rather than increasing multiplication depth. This matters because errors do not compound through addition the way they do through repeated multiplication, so you get high accuracy without the same penalty.
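The distinction can be made concrete with a toy depth tracker (illustrative only, not the paper's cost model): each value records how many chained multiplications produced it, addition takes the max of its inputs' depths, and multiplication adds one layer.

```python
# Toy multiplicative-depth tracker (illustrative, not the paper's model).
class Tracked:
    def __init__(self, depth: int = 0):
        self.depth = depth
    def __add__(self, other):
        return Tracked(max(self.depth, other.depth))      # addition: no new layer
    def __mul__(self, other):
        return Tracked(max(self.depth, other.depth) + 1)  # multiply: one new layer

x = Tracked()

# Degree-8 polynomial via Horner's rule: depth grows with the degree.
acc = Tracked()
for _ in range(8):
    acc = acc * x + Tracked()   # acc = acc*x + coefficient
# acc.depth == 8

# Weighted sum of 8 precomputed terms: depth stays at 1 no matter
# how many terms are added.
s = Tracked()
for _ in range(8):
    s = s + Tracked() * x       # s += weight * f_i
# s.depth == 1
```

Adding terms to the weighted sum improves accuracy without deepening the circuit, whereas raising the polynomial degree deepens it linearly; in quantized arithmetic that depth difference is exactly the rounding-error difference.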

Two New Techniques: Padé and Gauss–Legendre Quadrature

Technique 1: Padé approximants

A Padé approximant is a rational approximation: it represents a function as a ratio of two polynomials. Our research shows Padé can reach the same target accuracy as Taylor series while using lower-degree polynomials, which usually means fewer depth-expensive multiplications. In experiments, the Padé approach often delivered the best overall performance and accuracy tradeoff.
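The classic [2/2] Padé approximant of exp is a good illustration (a textbook example, not necessarily the paper's exact instantiation): two degree-2 polynomials match exp to the same order as a degree-4 Taylor series, with shallower multiplication chains plus one division, which in a SNARK becomes an inverse check.

```python
import math

# Classic [2/2] Pade approximant of exp(x):
#   exp(x) ~ (12 + 6x + x^2) / (12 - 6x + x^2)
def pade_exp(x: float) -> float:
    num = 12 + 6 * x + x * x   # degree-2 numerator
    den = 12 - 6 * x + x * x   # degree-2 denominator
    return num / den

err = abs(pade_exp(0.5) - math.exp(0.5))  # small near 0
```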

Technique 2: Gauss–Legendre Quadrature

Quadrature is a numerical integration method that approximates an integral as a weighted sum of function values at specific points. Gauss–Legendre quadrature provides very high accuracy with structured weights and points. The research shows how many non-polynomial functions can be rewritten so that proving them reduces to proving a weighted summation plus a set of inverse checks. Critically, increasing accuracy increases the number of summation terms, but does not increase multiplicative depth beyond a small constant (reported ≤ 4). The result is a shallow proof circuit that preserves accuracy even when these functions are used repeatedly (like many activation layers in ML).
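As a self-contained illustration of the quadrature idea (using the standard 3-point Gauss–Legendre rule, not the paper's specific constructions or point counts), here is erf computed as a weighted sum of exponential evaluations:

```python
import math

# 3-point Gauss-Legendre rule on [-1, 1] (standard nodes and weights),
# applied to erf(x) = (2/sqrt(pi)) * integral_0^x exp(-t^2) dt.
# Illustrative only: the paper's constructions may differ.
NODES = [-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)]
WEIGHTS = [5 / 9, 8 / 9, 5 / 9]

def erf_gl(x: float) -> float:
    # Map [-1, 1] onto [0, x]: t = (x/2)*(u + 1), dt = (x/2) du
    half = x / 2
    total = sum(w * math.exp(-(half * (u + 1)) ** 2)
                for u, w in zip(NODES, WEIGHTS))
    return (2 / math.sqrt(math.pi)) * half * total

err = abs(erf_gl(1.0) - math.erf(1.0))  # already tiny with 3 points
```

The whole evaluation is one weighted sum over fixed nodes; adding more points tightens the accuracy without making the multiplication chain any deeper, which is the property the paper exploits.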

We implemented both techniques in Noir with the Barretenberg backend (UltraPLONK), which is production-grade ZK tooling. We evaluated eight functions across four categories (trig, activation, power, non-elementary) and compared them against lookup tables, piecewise linear, and Taylor expansions.

The key results:

  • Gauss–Legendre quadrature achieved up to 8 extra bits of accuracy, translating to as much as 256× lower absolute error in larger quantization domains.
  • Padé achieved 2–20× better prover performance while maintaining comparable accuracy loss, generally outperforming baseline numerical methods.
  • Verifier time stayed in the low millisecond range, and proof sizes stayed within a narrow range since all methods used the same underlying SNARK backend.

We also validated the approach in two accuracy-sensitive applications:

  • Orbital mechanics (proving computations related to Kepler’s equation)
  • DeFi pricing (verifying the Bancor pricing function involving fractional powers)

In both cases, we achieved significant accuracy gains in large quantization domains, with performance comparable to existing baselines.
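For context on the first benchmark: Kepler's equation, M = E − e·sin(E), relates a satellite's mean anomaly M to its eccentric anomaly E, and solving it requires evaluating sin inside an iteration. The standard Newton iteration below shows the kind of non-polynomial workload involved (this is the classical solver, shown for reference, not the paper's in-circuit construction):

```python
import math

# Newton's method for Kepler's equation: find E with M = E - e*sin(E).
# Classical reference solver; the paper proves related computations
# inside a SNARK circuit.
def solve_kepler(M: float, e: float, tol: float = 1e-12) -> float:
    E = M  # reasonable initial guess for modest eccentricity
    for _ in range(50):
        f = E - e * math.sin(E) - M      # residual
        if abs(f) < tol:
            break
        E -= f / (1 - e * math.cos(E))   # Newton step
    return E

E = solve_kepler(M=1.0, e=0.3)
residual = abs(E - 0.3 * math.sin(E) - 1.0)
```

Every Newton step consumes a sin and a cos evaluation, so any accuracy loss in those primitives compounds across iterations - which is why this is an accuracy-sensitive benchmark.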

Implications for Cryptography and Verifiable AI

  1. Broader expressiveness for ZK systems
    • This work expands the set of computations that zk-SNARKs can prove efficiently, particularly when real-number math is involved. Many real-world applications become more feasible when non-polynomial functions are no longer a performance cliff.
  2. Better foundations for verifiable machine learning
    • Machine learning models rely heavily on non-polynomial operations: sigmoid, GeLU, softmax, tanh, exp, normalization, and more. Efficiently proving these functions with high accuracy is directly relevant to zkML. This research provides new building blocks for proving ML pipelines without forcing unacceptable accuracy loss.
  3. A clearer path toward proof-friendly numerical computing
    • Cryptography is increasingly intersecting with scientific and engineering workloads. Our central insight, that depth matters and additive accumulation is safer for numerical stability, suggests a framework for designing future proof-friendly numerical systems.

What This Means for Defense

Many defense and aerospace systems rely on precisely the kinds of functions addressed in this research:

  • Trigonometric functions for navigation and coordinate transforms
  • Exponentials and nonlinear activations in ML models
  • Statistical functions used in detection and filtering
  • High-precision numerical workflows where small errors compound over time

Efficient proofs strengthen the cryptographic and numerical foundations required for verifiable AI in mission-critical settings. They make “proof of correct computation” viable for the real math these systems depend on.

By introducing Gauss–Legendre quadrature and Padé approximants into zk-SNARK-friendly approximations, this work improves both performance and precision compared to common baselines, and demonstrates those gains in real application benchmarks.

To apply verifiable AI to operational infrastructure, we must be able to prove the math real systems use. This research marks a concrete step in that direction, advancing cryptography for the industries that need it most.


Full Research Paper:
Efficiently Provable Approximations for Non-Polynomial Functions