Approximation Theory

The formal polynomial semantics that underpin every circuit operation in Asgard.

Overview

A computational network operates on coefficient arrays — finite-dimensional approximations of functions represented as polynomial expansions. Given $n$ dimensions, a network is a function

$$s : \mathbb{N}^n \to \mathbb{R}$$

mapping multi-indices to real-valued coefficients. The coefficient array represents a polynomial approximation of the solution to a dynamical system.

For a single variable $x$, the coefficient array $[c_0, c_1, c_2, \ldots]$ corresponds to the polynomial

$$f(x) = c_0 + c_1 x + c_2 x^2 + \cdots$$

When interpreted as Taylor coefficients (with the appropriate factorial scaling handled by the register/deregister operations), this polynomial approximates the true solution to any desired degree of precision.
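As a concrete illustration, here is a plain-Python sketch (the `evaluate` helper is hypothetical, introduced only for this example; the actual system operates on JAX arrays) showing that the truncated Taylor coefficient array of $e^x$ already approximates the function well:

```python
import math

def evaluate(coeffs, x):
    """Evaluate the polynomial sum of c_k * x**k for a coefficient array."""
    return sum(c * x**k for k, c in enumerate(coeffs))

# Taylor coefficients of e^x: c_k = 1/k!
coeffs = [1 / math.factorial(k) for k in range(10)]

# The degree-9 truncation approximates e^1 = 2.71828... to ~1e-7.
print(evaluate(coeffs, 1.0))
```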

Atomic Operations

Each atomic circuit operation transforms one or more coefficient arrays into a new coefficient array. The definitions below are exact — they specify what the JAX compiler implements numerically.

Identity

For $a : \mathbb{N}^n \to \mathbb{R}$ and $i \in \mathbb{N}^n$:

$$(id(a))(i) = a(i)$$

Passes coefficients through unchanged. Circuit syntax: id.

Scalar Multiplication

For $a : \mathbb{N}^n \to \mathbb{R}$, $i \in \mathbb{N}^n$, and $s \in \mathbb{R}$:

$$(s \cdot a)(i) = s \cdot a(i)$$

Scales every coefficient by the constant $s$. Circuit syntax: scalar(s).

Addition

For $a, b : \mathbb{N}^n \to \mathbb{R}$ and $i \in \mathbb{N}^n$:

$$(a + b)(i) = a(i) + b(i)$$

Pointwise addition of coefficient arrays. Circuit syntax: add.
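The two pointwise operations can be sketched in a few lines of plain Python (illustrative stand-ins for the JAX-array versions, not the compiler's actual code):

```python
def scalar(s, a):
    """scalar(s): multiply every coefficient by the constant s."""
    return [s * c for c in a]

def add(a, b):
    """add: pointwise sum of two coefficient arrays of equal length."""
    return [x + y for x, y in zip(a, b)]

a = [1.0, 2.0, 3.0]   # 1 + 2x + 3x^2
b = [0.0, 1.0, 0.0]   # x

print(scalar(2.0, a))  # [2.0, 4.0, 6.0]
print(add(a, b))       # [1.0, 3.0, 3.0]
```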

Multiplication (Cauchy Product)

For $a, b : \mathbb{N}^n \to \mathbb{R}$ and $i \in \mathbb{N}^n$:

$$(a \cdot b)(i) = \sum_{i_1 + i_2 = i} a(i_1) \cdot b(i_2)$$

This is the Cauchy product — the coefficient-level analogue of polynomial multiplication. The sum runs over all pairs of multi-indices that add to $i$. Circuit syntax: multiplication.
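In one dimension the Cauchy product is ordinary polynomial multiplication of coefficient arrays. A minimal plain-Python sketch (the real compiler uses jnp.convolve):

```python
def cauchy_product(a, b):
    """Coefficient-level polynomial multiplication: c_n = sum over i+j=n of a_i * b_j."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (1 + x) * (1 + x) = 1 + 2x + x^2
print(cauchy_product([1.0, 1.0], [1.0, 1.0]))  # [1.0, 2.0, 1.0]
```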

Copy

For $a : \mathbb{N}^n \to \mathbb{R}$ and $i \in \mathbb{N}^n$:

$$(\Delta(a))(i) = a(i)$$

Duplicates the coefficient array onto two output wires. Categorically, this is the diagonal map. Circuit syntax: split.

Register (Integration)

For dimension $m$, $a : \mathbb{N}^n \to \mathbb{R}$, and $i \in \mathbb{N}^n$:

$$R_m(a)(i) = \begin{cases} 0 & \text{if } i_m = 0 \\ \frac{1}{i_m} \cdot a(\hat{i}) & \text{if } i_m > 0 \end{cases}$$

where $\hat{i}_k = i_k$ for $k \neq m$ and $\hat{i}_m = i_m - 1$.

Register shifts coefficients right along dimension $m$ and scales by $1/i_m$. This implements formal integration: if $a$ represents $f(x)$, then $R(a)$ represents $\int f(x)\,dx$ (with zero constant of integration). The $1/i_m$ scaling is the Taylor coefficient form of the power rule: $\int x^n\,dx = \frac{x^{n+1}}{n+1}$.

Circuit syntax: register(x).
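In one dimension, register is a one-line shift-and-scale. A plain-Python sketch (the compiled version does the same on JAX arrays):

```python
def register(a):
    """Formal integration in one dimension:
    (R(a))(0) = 0, and (R(a))(i) = a(i-1) / i for i > 0."""
    return [0.0] + [c / (i + 1) for i, c in enumerate(a)]

# f(x) = 1 + x  ->  integral is x + x^2/2  (zero constant of integration)
print(register([1.0, 1.0]))  # [0.0, 1.0, 0.5]
```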

Deregister (Differentiation)

For dimension $m$, $a : \mathbb{N}^n \to \mathbb{R}$, and $i \in \mathbb{N}^n$:

$$D_m(a)(i) = (i_m + 1) \cdot a(\hat{i})$$

where $\hat{i}_k = i_k$ for $k \neq m$ and $\hat{i}_m = i_m + 1$.

Deregister shifts coefficients left along dimension $m$ and scales by $(i_m+1)$. This implements formal differentiation: if $a$ represents $f(x)$, then $D(a)$ represents $f'(x)$. The $(i_m+1)$ scaling is the Taylor coefficient form of the power rule: $\frac{d}{dx}x^n = n \cdot x^{n-1}$.

Circuit syntax: deregister(x).

Register and deregister are formal inverses — they encode the fundamental theorem of calculus at the coefficient level:

$$D_m(R_m(a)) = a \qquad R_m(D_m(a)) = a - F_0$$

where $F_0$ is the zeroth coefficient of $a$ along dimension $m$ — the constant of integration, which differentiation discards and integration cannot recover.
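The inverse laws are easy to check in a one-dimensional plain-Python sketch (illustrative versions, not the compiler's code):

```python
def register(a):
    """Formal integration: (R(a))(i) = a(i-1)/i for i > 0, zero constant term."""
    return [0.0] + [c / (i + 1) for i, c in enumerate(a)]

def deregister(a):
    """Formal differentiation: (D(a))(i) = (i+1) * a(i+1)."""
    return [(i + 1) * c for i, c in enumerate(a[1:])]

a = [2.0, 4.0, 6.0]                 # 2 + 4x + 6x^2
assert deregister(register(a)) == a  # D(R(a)) = a exactly
print(register(deregister(a)))       # [0.0, 4.0, 6.0]: the constant term is lost
```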

Composition (Power Series Substitution)

Composition of coefficient arrays corresponds to power series substitution: computing the coefficients of $f(g(x))$ from those of $f$ and $g$.

One-dimensional Case

For $a, b : \mathbb{N} \to \mathbb{R}$ and $n \in \mathbb{N}$:

$$(a \circ b)(n) = \sum_{i \in \mathbb{N}} a(i) \sum_{\substack{w \in \mathbb{N}^i \\ |w| = n}} \prod_{j=1}^{i} b(w_j)$$

where $|w| = \sum_k w_k$. This sums over all ways to partition the degree $n$ among the terms of $b$ raised to successive powers.

In practice, for coefficient arrays truncated to degree $P$:

$$(a \circ b)(n) = \sum_{i=0}^{P} a(i) \sum_{\substack{v \in \mathbb{N}^i \\ |v| = n}} \prod_{j=1}^{i} b(v_j)$$

Circuit syntax: convolution(P).
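The truncated formula can be computed directly by summing $a(i) \cdot b^i$ up to degree $P$, where $b^i$ is the $i$-fold Cauchy product. A plain-Python sketch (hypothetical helper names; general substitution is cleanest when $b(0) = 0$, though the truncation makes every sum finite either way):

```python
def cauchy_product(a, b, P):
    """Truncated Cauchy product keeping coefficients up to degree P."""
    out = [0.0] * (P + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= P:
                out[i + j] += ai * bj
    return out

def compose(a, b, P):
    """Coefficients of f(g(x)) up to degree P, where a, b are the
    coefficient arrays of f and g: sum of a(i) * b^i over i = 0..P."""
    result = [0.0] * (P + 1)
    power = [1.0] + [0.0] * P        # b^0 = 1
    for i in range(P + 1):
        if i < len(a):
            for k in range(P + 1):
                result[k] += a[i] * power[k]
        power = cauchy_product(power, b, P)
    return result

# f(x) = 1 + x + x^2 composed with g(x) = 2x gives f(2x) = 1 + 2x + 4x^2.
print(compose([1.0, 1.0, 1.0], [0.0, 2.0], 2))  # [1.0, 2.0, 4.0]
```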

Multi-dimensional Case

For $a, b : \mathbb{N}^N \to \mathbb{R}$, $n \in \mathbb{N}^N$, and composition along dimension $M \in \{1,\ldots,N\}$:

$$(a \circ b)(n) = \sum_{i=0}^{P_M} \Bigl(\sum_{\substack{v_1 \in \mathbb{N}^{i+1} \\ |v_1| = n_1}} \cdots \sum_{\substack{v_M \in \mathbb{N}^{i+1} \\ |v_M| = n_M + i,\; v_M(0) = i}} \cdots \sum_{\substack{v_N \in \mathbb{N}^{i+1} \\ |v_N| = n_N}} a(\langle v_1(0),\ldots,v_N(0)\rangle) \prod_{j=1}^{i} b(\langle v_1(j),\ldots,v_N(j)\rangle)\Bigr)$$

The composition dimension $M$ is distinguished: its inner index carries the "degree of substitution" while all other dimensions are convolved independently.

Algorithm

The multi-dimensional composition can be computed recursively. For each outer degree $i$ (the power of $b$ in the expansion), enumerate all ways to distribute indices across dimensions:

function compose(a, b, n, M, N):
    result = 0
    if n[M] == 0:
        result += a[n]
    for i = 1 to P_M:
        v = zero matrix of shape (i+1, N)
        result += recurse_dimensions(a, b, N, i+1, i+1, v, n)
    return result

Each recursive call handles one dimension at a time, accumulating the product of $b$ coefficients and the $a$ coefficient at the top-level index.

Structural Operators

The atomic operations above transform coefficient arrays. Structural operators compose these transformations into larger networks.

Composition (Sequential)

For transformations $s, t : (\mathbb{N}^n \to \mathbb{R}) \to (\mathbb{N}^n \to \mathbb{R})$ and input $a$:

$$(s \circ t)(a) = s(t(a))$$

Apply $t$ first, then feed the result into $s$. Circuit syntax: composition(s, t).
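Sequential composition is just function composition of coefficient transformations. A plain-Python sketch (illustrative stand-ins for the compiled operations):

```python
def register(a):
    """Formal integration: shift right and divide by the new index."""
    return [0.0] + [c / (i + 1) for i, c in enumerate(a)]

def scalar(s):
    """Return the transformation that scales every coefficient by s."""
    return lambda a: [s * c for c in a]

def composition(s, t):
    """Sequential composition: apply t first, then s."""
    return lambda a: s(t(a))

# Scale by 2, then integrate: [1, 1] represents 1 + x.
pipeline = composition(register, scalar(2.0))
print(pipeline([1.0, 1.0]))  # [0.0, 2.0, 1.0], i.e. 2x + x^2
```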

Monoidal Product (Parallel)

For transformations $s, t$ and inputs $a, b$:

$$(s \otimes t)\langle a, b \rangle = \langle s(a), t(b) \rangle$$

Apply $s$ and $t$ independently to separate inputs. Circuit syntax: monoidal(s, t).

Trace (Feedback)

For $s : (\mathbb{N}^n \to \mathbb{R})^c \to (\mathbb{N}^n \to \mathbb{R})^d$ with $c, d > 1$ and feedback degree $e$ where $0 < e < \min(c,d)$:

$$\text{Tr}^e(s)$$

routes $e$ outputs back as inputs, creating a feedback loop. The trace is computed iteratively: each coefficient is determined by the previous ones, which is why the stream calculus evaluator processes coefficients one at a time.

Circuit syntax: trace(s).

See Categorical Semantics for the formal properties (naturality, dinaturality, yanking) that traces must satisfy.
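To see why the coefficient-at-a-time evaluation is well-founded, consider the loop for $y' = y$, $y(0) = 1$: feeding the output of register back as its own input gives $y = 1 + R(y)$, so each coefficient depends only on the previous one. The sketch below (a hypothetical hand-unrolled version of that feedback loop, not the actual stream-calculus evaluator) recovers the Taylor coefficients of $e^x$:

```python
def solve_exp(P):
    """Coefficient-by-coefficient evaluation of the feedback loop
    y = 1 + register(y) for y' = y, y(0) = 1.
    Register gives c_i = c_{i-1} / i, so c_k = 1/k!."""
    c = [0.0] * (P + 1)
    c[0] = 1.0                   # constant of integration (initial condition)
    for i in range(1, P + 1):
        c[i] = c[i - 1] / i      # register: shift right, divide by index
    return c

coeffs = solve_exp(8)
print(coeffs[:4])  # [1.0, 1.0, 0.5, 0.16666666666666666]
```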

Connection to Code

The JAX compiler (jax_compiler.py) implements each atomic operation as a JAX function operating on NumPy/JAX arrays:

| Math | Code | JAX Operation |
|------|------|---------------|
| $R_m(a)$ | register(x) | Right-shift + divide by index |
| $D_m(a)$ | deregister(x) | Left-shift + multiply by index |
| $a + b$ | add | jnp.add(a, b) |
| $a \cdot b$ | multiplication | Cauchy product via jnp.convolve |
| $s \cdot a$ | scalar(s) | s * a |
| $\Delta(a)$ | split | Duplicate array |

The Runtime page explains how the runtime evaluates these operations coefficient-by-coefficient for differential equations with feedback (trace).

Next Steps