# Examples
Asgard turns mathematical equations into executable, differentiable simulations. Below are real problems from physics, finance, engineering, biology, and discrete mathematics — each defined as a standard equation and a declarative YAML configuration.
Every example follows the same pattern: write the equation, declare the parameters, and let the framework handle compilation, execution, and visualization.
## Physics

### Exponential Decay
The simplest non-trivial ODE: a quantity whose rate of change equals its current value. With a negative rate constant, the same equation models radioactive decay, capacitor discharge, and any process with a constant half-life.
$$f'(x) = f(x), \quad f(0) = 1$$
The solution is $f(x) = e^x$. In Asgard, the equivalent integral form compiles to a traced circuit whose stream-calculus feedback operator computes Taylor coefficients to arbitrary order.
Equation (LEAN-style):

```
f = f0 + int(f, x)
```

Configuration:

```yaml
equation: "f = f0 + int(f, x)"
input:
  coefficients: [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
  dims: [x]
evaluate:
  x:
    range: [0, 2]
    points: 100
```
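The feedback idea can be sketched outside the framework: formal integration divides each coefficient by its new index, and the feedback loop makes the stream self-referential. A minimal plain-Python illustration (not the Asgard API):

```python
def taylor_via_feedback(f0, n_terms):
    """Taylor coefficients of f = f0 + int(f, x), computed term by term.

    Integration in coefficient space maps a[n-1] to a[n-1] / n at index n,
    so the feedback loop fixes a[n] = f0 / n! -- the series of f0 * e^x.
    """
    a = [float(f0)]
    for n in range(1, n_terms):
        a.append(a[n - 1] / n)  # coefficient of x^n from integrating x^(n-1)
    return a

coeffs = taylor_via_feedback(1.0, 8)
# coeffs is [1, 1, 1/2, 1/6, ...], the truncated series of e^x
```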
### Heat Equation
The canonical parabolic PDE: diffusion of heat in a medium. The temperature's rate of change in time equals its spatial curvature.
$$\frac{\partial f}{\partial x} = \frac{\partial^2 f}{\partial y^2}$$
Asgard handles this as a multi-dimensional stream, computing coefficients in both $x$ and $y$ simultaneously. The same compilation pipeline that handles 1D ODEs extends to PDEs without changing the equation syntax.
Equation:

```
diff(f, x) = diff(diff(f, y), y)
```

Configuration:

```yaml
equation: "diff(f, x) = diff(diff(f, y), y)"
input:
  shape: [5, 6]
  fill: 1.0
  dims: [x, y]
evaluate:
  x:
    range: [-1, 1]
    points: 30
  y:
    range: [-1, 1]
    points: 30
```
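The same coefficient-recurrence view extends to two dimensions: writing $f = \sum_{m,n} c_{m,n} x^m y^n$, the PDE becomes $(m+1)\,c_{m+1,n} = (n+1)(n+2)\,c_{m,n+2}$, so each $x$-slice of coefficients follows from the previous one. A hand-rolled sketch of that recurrence (illustrative only, not the framework's internal scheme):

```python
def heat_coefficients(initial_row, n_x):
    """Coefficients c[m][n] of f solving df/dx = d2f/dy2.

    initial_row holds c[0][n], the series of f(0, y). The PDE gives the
    recurrence (m+1) * c[m+1][n] = (n+1)(n+2) * c[m][n+2].
    """
    n_y = len(initial_row)
    c = [list(initial_row)]
    for m in range(n_x - 1):
        prev, row = c[-1], [0.0] * n_y
        for n in range(n_y - 2):
            row[n] = (n + 1) * (n + 2) * prev[n + 2] / (m + 1)
        c.append(row)
    return c

c = heat_coefficients([0.0, 0.0, 1.0, 0.0], 3)  # f(0, y) = y**2
# exact solution is f = y**2 + 2*x, so the x-coefficient c[1][0] is 2
```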
### Power Series Composition
Functional composition on streams: computing $\sin(2t)$ by composing the Taylor series for $\sin$ with the linear map $t \mapsto 2t$.
$$f(g(t)) = \sin(2t)$$
The compose combinator computes $f(g(t))$ for any pair of power series — enabling transcendental functions applied to polynomial streams. This is a core algebraic operation in the stream calculus.
Circuit:

```
compose(sin(8), composition(id, scalar(2.0)), t)
```

Configuration:

```yaml
circuit: "compose(sin(8), composition(id, scalar(2.0)), t)"
input:
  coefficients: [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
  dims: [t]
evaluate:
  t:
    range: [0, 1.5]
    points: 100
```
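Composition of truncated power series can be written out directly: with $g_0 = 0$, accumulate $f_k \cdot g^k$ using the Cauchy product. A minimal plain-Python version (independent of the compose combinator's actual implementation):

```python
def cauchy(a, b, n):
    """Truncated Cauchy product (stream convolution) of two series."""
    out = [0.0] * n
    for i in range(n):
        for j in range(n - i):
            out[i + j] += a[i] * b[j]
    return out

def compose_series(f, g, n):
    """Coefficients of f(g(t)) to n terms; requires g[0] == 0."""
    assert g[0] == 0.0
    result, g_power = [0.0] * n, [1.0] + [0.0] * (n - 1)  # g^0 = 1
    for f_k in f[:n]:
        result = [r + f_k * p for r, p in zip(result, g_power)]
        g_power = cauchy(g_power, g, n)
    return result

sin8 = [0.0, 1.0, 0.0, -1/6, 0.0, 1/120, 0.0, -1/5040]  # sin, 8 terms
double = [0.0, 2.0] + [0.0] * 6                          # g(t) = 2t
res = compose_series(sin8, double, 8)
# res is approximately [0, 2, 0, -4/3, 0, 4/15, 0, -8/315]: the series of sin(2t)
```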
## Finance

### Geometric Brownian Motion
The workhorse model behind the Black-Scholes formula. A stock price $S$ follows log-normal dynamics with constant drift and volatility.
$$dS = \mu S \, dt + \sigma S \, dW$$
Under Stratonovich interpretation, the analytical solution is $S(T) = S_0 \exp(\mu T + \sigma W(T))$ — no Itô correction. Asgard runs 1000 Monte Carlo paths with a single declaration.
Equation:

```
Y = sde($mu * X, $sigma * X, X)
```

Configuration:

```yaml
equation: "Y = sde($mu * X, $sigma * X, X)"
stochastic:
  calculus: stratonovich
  n_paths: 1000
  dt: 0.01
simulate:
  t_start: 0.0
  t_end: 2.0
initial_condition:
  x0: 100.0
params:
  mu: 0.1
  sigma: 0.2
```
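Because the Stratonovich solution is available in closed form, the Monte Carlo setup is easy to sanity-check: terminal prices can be sampled without stepping through time. A NumPy sketch with this example's parameters (illustrative, not framework code):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, s0, T, n_paths = 0.1, 0.2, 100.0, 2.0, 1000

# Stratonovich GBM: S(T) = S0 * exp(mu*T + sigma*W(T)) with W(T) ~ N(0, T)
s_T = s0 * np.exp(mu * T + sigma * rng.normal(0.0, np.sqrt(T), n_paths))

# terminal law is log-normal, so E[S(T)] = S0 * exp((mu + sigma**2 / 2) * T)
print(round(float(s_T.mean()), 1), round(s0 * np.exp((mu + sigma**2 / 2) * T), 1))
```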
### CIR Interest Rate Model
The Cox-Ingersoll-Ross model for bond pricing and term structure. Square-root diffusion ensures interest rates stay non-negative — a physical constraint that simpler models violate.
$$dr = \kappa(\theta - r) \, dt + \sigma \sqrt{r} \, d\xi$$
The Feller condition $2\kappa\theta > \sigma^2$ prevents rates from reaching zero. Mean reversion pulls the rate toward the long-term level $\theta$, while the $\sqrt{r}$ diffusion naturally reduces noise near zero.
Equation:

```
Y = sde($kappa * ($theta - X), $sigma * sqrt(X), X)
```

Configuration:

```yaml
equation: "Y = sde($kappa * ($theta - X), $sigma * sqrt(X), X)"
stochastic:
  calculus: stratonovich
  n_paths: 1000
  dt: 0.01
simulate:
  t_start: 0.0
  t_end: 10.0
initial_condition:
  x0: 0.03
params:
  kappa: 0.5
  theta: 0.05
  sigma: 0.15
```
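Whether the Feller condition holds for a given parameter set is a one-line check worth doing before trusting paths near zero. For the values above (a plain-Python check, not part of the config):

```python
kappa, theta, sigma = 0.5, 0.05, 0.15

# Feller condition 2*kappa*theta > sigma**2 keeps the rate strictly positive
feller_lhs = 2 * kappa * theta   # 0.05
feller_rhs = sigma ** 2          # 0.0225
assert feller_lhs > feller_rhs   # satisfied for this configuration
```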
### Heston Stochastic Volatility
The industry-standard model for options pricing and volatility smiles. Two coupled SDEs: the asset price with stochastic volatility, and the variance process that mean-reverts.
$$dS = \mu S \, dt + \sqrt{v} \, S \, dW_1$$
$$dv = \kappa(\theta - v) \, dt + \sigma_v \sqrt{v} \, dW_2, \quad dW_1 \cdot dW_2 = \rho \, dt$$
The negative correlation ($\rho = -0.7$) captures the leverage effect: falling prices correlate with rising volatility. The Feller condition $2\kappa\theta > \sigma_v^2$ ensures variance stays positive.
Equations:

```
Y = sde($mu * S, sqrt(v) * S, S)
Y = sde($kappa * ($theta - v), $sigma_v * sqrt(v), v)
```

Configuration:

```yaml
coupled:
  correlation: -0.7
  equations:
    - name: S
      equation: "Y = sde($mu * S, sqrt(v) * S, S)"
      x0: 100.0
    - name: v
      equation: "Y = sde($kappa * ($theta - v), $sigma_v * sqrt(v), v)"
      x0: 0.04
      clamp_nonnegative: true
params:
  mu: 0.05
  kappa: 3.0
  theta: 0.04
  sigma_v: 0.3
```
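The correlated increments behind the two equations can be generated from independent normals via $dW_2 = \rho\, dW_1 + \sqrt{1-\rho^2}\, dZ$. A NumPy sketch of that mixing step (illustrative, not the framework's internals):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, dt, n = -0.7, 0.01, 100_000

z1 = rng.normal(size=n)
z2 = rng.normal(size=n)
dw1 = np.sqrt(dt) * z1
# mix in an independent normal so that corr(dW1, dW2) = rho
dw2 = np.sqrt(dt) * (rho * z1 + np.sqrt(1 - rho**2) * z2)

print(round(float(np.corrcoef(dw1, dw2)[0, 1]), 2))  # close to -0.7
```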
### Three-Factor Vasicek Model
A three-factor yield curve model with correlated Ornstein-Uhlenbeck processes. Three factors (short rate, slope, long rate) each mean-revert independently, with dependence captured entirely through a $3 \times 3$ correlation matrix.
$$dr = \kappa_r(\theta_r - r) \, dt + \sigma_r \, dW_1$$
$$ds = \kappa_s(\theta_s - s) \, dt + \sigma_s \, dW_2$$
$$dl = \kappa_l(\theta_l - l) \, dt + \sigma_l \, dW_3$$
with $\text{corr}(dW_1, dW_2) = -0.3$, $\text{corr}(dW_1, dW_3) = 0.6$, $\text{corr}(dW_2, dW_3) = -0.2$.
This is a standard model in fixed-income markets. Asgard handles the Cholesky decomposition of the correlation matrix and generates correlated Brownian increments automatically.
Equations:

```
Y = sde($kappa_r * ($theta_r - r), $sigma_r, r)
Y = sde($kappa_s * ($theta_s - s), $sigma_s, s)
Y = sde($kappa_l * ($theta_l - l), $sigma_l, l)
```

Configuration:

```yaml
coupled:
  correlation:
    - [1.0, -0.3, 0.6]
    - [-0.3, 1.0, -0.2]
    - [0.6, -0.2, 1.0]
  equations:
    - name: r
      equation: "Y = sde($kappa_r * ($theta_r - r), $sigma_r, r)"
      x0: 0.03
    - name: s
      equation: "Y = sde($kappa_s * ($theta_s - s), $sigma_s, s)"
      x0: 0.01
    - name: l
      equation: "Y = sde($kappa_l * ($theta_l - l), $sigma_l, l)"
      x0: 0.05
params:
  kappa_r: 0.5
  theta_r: 0.04
  sigma_r: 0.01
  kappa_s: 0.3
  theta_s: 0.02
  sigma_s: 0.008
  kappa_l: 0.2
  theta_l: 0.05
  sigma_l: 0.012
```
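For three or more factors the mixing generalizes to a Cholesky factor $L$ of the correlation matrix: $d\mathbf{W} = L\, d\mathbf{Z}$ with independent $Z_i$. A NumPy sketch using this example's matrix (illustrative, not the framework's internals):

```python
import numpy as np

corr = np.array([
    [1.0, -0.3,  0.6],
    [-0.3, 1.0, -0.2],
    [0.6, -0.2,  1.0],
])

# L @ L.T reconstructs corr; each row of L mixes independent normals
L = np.linalg.cholesky(corr)
z = np.random.default_rng(1).normal(size=(3, 200_000))
dw = L @ z  # correlated increments, one row per factor

print(np.round(np.corrcoef(dw), 1))  # recovers the correlation matrix
```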
## Engineering

### RC Circuit Thermal Noise
Johnson-Nyquist noise in a passive RC circuit. The capacitor voltage follows an Ornstein-Uhlenbeck SDE with physics-derived parameters.
$$dV = -\frac{1}{RC} V \, dt + \sqrt{\frac{2 k_B T}{C}} \, dW$$
At thermal equilibrium, the voltage distribution is $\mathcal{N}(0, k_B T / C)$. The time constant $\tau = RC$ governs the relaxation rate. This bridges statistical mechanics and circuit theory in a single equation.
Equation:

```
Y = sde(-$inv_rc * X, $sigma_thermal, X)
```

Configuration:

```yaml
equation: "Y = sde(-$inv_rc * X, $sigma_thermal, X)"
stochastic:
  calculus: stratonovich
  n_paths: 1000
  dt: 0.001
simulate:
  t_start: 0.0
  t_end: 5.0
initial_condition:
  x0: 1.0
params:
  inv_rc: 1.0
  sigma_thermal: 0.3
```
### Newton's Cooling with Environmental Noise
Heat transfer from a warm object to its surroundings, perturbed by stochastic fluctuations (drafts, solar radiation, humidity changes).
$$dT = -h(T - T_{\text{amb}}) \, dt + \sigma \, dW$$
Substituting $U = T - T_{\text{amb}}$ gives an Ornstein-Uhlenbeck process mean-reverting to zero. The temperature relaxes with time constant $\tau = 1/h$.
Equation:

```
Y = sde(-$h * X, $sigma, X)
```

Configuration:

```yaml
equation: "Y = sde(-$h * X, $sigma, X)"
stochastic:
  calculus: stratonovich
  n_paths: 1000
  dt: 0.01
simulate:
  t_start: 0.0
  t_end: 10.0
initial_condition:
  x0: 10.0
params:
  h: 0.5
  sigma: 0.8
```
### Mass-Spring-Damper with Random Forcing
A damped harmonic oscillator driven by stochastic excitation — modeling mechanical vibrations from turbulence, road roughness, or seismic ground motion.
$$dX = V \, dt$$
$$dV = \left(-\frac{k}{m} X - \frac{c}{m} V\right) dt + \sigma \, dW$$
This is a coupled system: position depends on velocity and vice versa. The natural frequency is $\omega_n = \sqrt{k/m}$ and the damping ratio is $\zeta = c / (2\sqrt{km})$.
Equations:

```
dX = V dt
dV = (-k/m X - c/m V) dt + sigma dW
```

Configuration:

```yaml
coupled:
  correlation: 0.0
  equations:
    - name: X
      equation: "dX = V dt"
      x0: 1.0
    - name: V
      equation: "dV = (-k/m X - c/m V) dt + sigma dW"
      x0: 0.0
params:
  k_over_m: 4.0
  c_over_m: 0.4
  sigma: 1.0
```
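The derived quantities quoted above follow directly from these parameter values: with $k/m = 4$ and $c/m = 0.4$ the oscillator is lightly underdamped. A quick plain-Python check:

```python
import math

k_over_m, c_over_m = 4.0, 0.4

omega_n = math.sqrt(k_over_m)      # natural frequency: 2.0 rad/s
zeta = c_over_m / (2 * omega_n)    # damping ratio c/(2*sqrt(k*m)): 0.1
assert zeta < 1.0                  # underdamped: oscillatory response
print(omega_n, zeta)  # 2.0 0.1
```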
### Kalman-Bucy State Estimation
Continuous-time state estimation — the analogue of the discrete Kalman filter. A hidden state evolves under process noise, while a separate channel accumulates noisy observations.
$$dX = A X \, dt + B \, dW \quad \text{(hidden state)}$$
$$dY = C X \, dt + D \, dV \quad \text{(observation)}$$
The challenge is inferring $X$ from $Y$ when both are corrupted by noise. This is the foundation of modern filtering theory, and the same coupled-SDE infrastructure that handles financial models also handles estimation problems.
Equations:

```
dX = A X dt + B dW
dY = C X dt + D dV
```

Configuration:

```yaml
coupled:
  correlation: 0.0
  equations:
    - name: X
      equation: "dX = A X dt + B dW"
      x0: 5.0
    - name: Y
      equation: "dY = C X dt + D dV"
      x0: 0.0
params:
  A: -0.5
  B: 0.3
  C: 1.0
  D: 0.5
```
## Biology & Epidemiology

### Population Dynamics
Geometric Brownian Motion applied to ecology: a population with constant per-capita growth rate and environmental stochasticity.
$$dN = r N \, dt + \sigma N \, dW$$
The population follows a log-normal distribution. A key insight from stochastic ecology: when $\sigma^2/2 > r$, the median population declines despite positive expected growth — environmental variance can be more dangerous than it appears.
Equation:

```
Y = sde($r * X, $sigma * X, X)
```

Configuration:

```yaml
equation: "Y = sde($r * X, $sigma * X, X)"
stochastic:
  calculus: stratonovich
  n_paths: 1000
  dt: 0.01
simulate:
  t_start: 0.0
  t_end: 10.0
initial_condition:
  x0: 100.0
params:
  r: 0.05
  sigma: 0.2
```
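The median-versus-mean gap is easy to quantify: the mean grows like $e^{rt}$ while the median grows like $e^{(r-\sigma^2/2)t}$. For this configuration the median still grows ($\sigma^2/2 = 0.02 < r = 0.05$), but tripling the volatility flips the sign. A plain-Python check:

```python
def median_growth_rate(r, sigma):
    # log-median growth rate of geometric Brownian motion: r - sigma**2 / 2
    return r - sigma**2 / 2

print(round(median_growth_rate(0.05, 0.2), 3))  # 0.03: median grows
print(round(median_growth_rate(0.05, 0.6), 3))  # -0.13: median declines
```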
### Predator-Prey (Lotka-Volterra)
The classic ecological model of oscillating populations, extended with environmental noise. Prey grow and are consumed; predators die and are fed by prey.
$$dN_1 = (a N_1 - b N_1 N_2) \, dt + \sigma_1 N_1 \, dW_1$$
$$dN_2 = (-c N_2 + d N_1 N_2) \, dt + \sigma_2 N_2 \, dW_2$$
The deterministic system produces periodic cycles. Stochastic perturbations create noisy orbits that can drift toward extinction boundaries — a phenomenon not captured by deterministic models alone.
Equations:

```
dN1 = (a N1 - b N1 N2) dt + sigma1 N1 dW1
dN2 = (-c N2 + d N1 N2) dt + sigma2 N2 dW2
```

Configuration:

```yaml
coupled:
  correlation: 0.0
  equations:
    - name: N1
      equation: "dN1 = (a N1 - b N1 N2) dt + sigma1 N1 dW1"
      x0: 40.0
      clamp_nonnegative: true
    - name: N2
      equation: "dN2 = (-c N2 + d N1 N2) dt + sigma2 N2 dW2"
      x0: 9.0
      clamp_nonnegative: true
params:
  a: 1.0
  b: 0.1
  c: 0.5
  d: 0.02
  sigma1: 0.05
  sigma2: 0.05
```
### Epidemic SIR with Intervention
A three-compartment SIR model with a time-dependent policy intervention. Susceptible individuals become infected, then recover — with a social distancing measure that reduces the transmission rate at day 14.
$$dS = -\beta(t) \, S \, I \, dt + \sigma S \, dW_1$$
$$dI = (\beta(t) \, S \, I - \gamma I) \, dt + \sigma I \, dW_2$$
$$dR = \gamma I \, dt$$
where $\beta(t) = 0.4$ before intervention and $\beta(t) = 0.15$ after. The stochastic noise captures individual-level variability in contact patterns. The distribution of total infected (final $R$) informs hospital capacity planning.
Equations:

```
dS = -beta(t) S I dt + sigma S dW
dI = (beta(t) S I - gamma I) dt + sigma I dW
dR = gamma I dt
```

Configuration:

```yaml
coupled:
  correlation: 0.0
  equations:
    - name: S
      equation: "dS = -beta(t) S I dt + sigma S dW"
      x0: 0.99
      clamp_nonnegative: true
    - name: I
      equation: "dI = (beta(t) S I - gamma I) dt + sigma I dW"
      x0: 0.01
      clamp_nonnegative: true
    - name: R
      equation: "dR = gamma I dt"
      x0: 0.0
      clamp_nonnegative: true
params:
  beta: 0.4
  beta_reduced: 0.15
  gamma: 0.1
  sigma: 0.02
  t_intervention: 14.0
```
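The time-dependent transmission rate is just a step function of the parameters above. A minimal sketch of $\beta(t)$ as it enters the model (hypothetical helper; the framework's actual syntax for time dependence may differ):

```python
def beta(t, beta0=0.4, beta_reduced=0.15, t_intervention=14.0):
    """Transmission rate with a social-distancing step at day 14."""
    return beta0 if t < t_intervention else beta_reduced

print(beta(10.0), beta(20.0))  # 0.4 0.15
```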
## Discrete Systems

### Fibonacci Sequence
A classical recurrence relation computed via stream calculus. The Cauchy product (stream convolution) with $\sigma = (1, 1)$ turns the recurrence into an algebraic circuit with feedback.
$$f_n = f_{n-1} + f_{n-2}, \quad f_0 = 0, \; f_1 = 1$$
The circuit uses a trace operator for the feedback loop and a register for causal access to previous terms — the same operators that handle continuous differential equations also express discrete recurrences.
Equation:

```
f[n] = f[n-1] + f[n-2]
```

Configuration:

```yaml
equation: "f[n] = f[n-1] + f[n-2]"
circuit: >-
  trace(composition(composition(composition(composition(
  monoidal(param(sigma), register(t)), multiplication),
  monoidal(param(kick), id)), add), split))
params:
  sigma: [1.0, 1.0]
  kick: [0.0, 1.0]
generator:
  n_terms: 20
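Unrolled, the circuit computes a kicked convolution: $f_n = \text{kick}_n + \sigma_0 f_{n-1} + \sigma_1 f_{n-2}$. A direct plain-Python rendering of that recurrence (not the traced-circuit machinery itself):

```python
def kicked_convolution(sigma, kick, n_terms):
    """Stream defined by f[n] = kick[n] + sum_k sigma[k] * f[n-1-k]."""
    f = []
    for n in range(n_terms):
        val = kick[n] if n < len(kick) else 0.0
        for k, s in enumerate(sigma):
            if n - 1 - k >= 0:
                val += s * f[n - 1 - k]
        f.append(val)
    return f

fib = kicked_convolution([1.0, 1.0], [0.0, 1.0], 10)
print(fib)  # [0.0, 1.0, 1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0]
```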
### Markov Customer Lifecycle
A Markov chain implemented as cascaded trace circuits. Three states — Prospect, Active, Churned — with transition probabilities encoded as scalar feedback gains.
$$\mathbf{s}(t+1) = P^T \mathbf{s}(t)$$
The transition matrix is lower-triangular, so each state feeds forward into the next via composition. Each state is a trace loop with scalar feedback, and cross-state coupling flows through the triangular structure. Churned is an absorbing state.
Equation:

```
s(t+1) = P^T * s(t)
```

Configuration:

```yaml
equation: "s(t+1) = P^T * s(t)"
circuit: >-
  composition(
    trace(composition(composition(
      monoidal(param(kick0), composition(register(t), scalar(0.80))),
      add), split)),
    composition(split, monoidal(id,
      composition(
        trace(composition(composition(
          monoidal(composition(register(t), scalar(0.20)),
            composition(register(t), scalar(0.92))),
          add), split)),
        composition(split, monoidal(id,
          trace(composition(composition(
            monoidal(composition(register(t), scalar(0.08)),
              register(t)),
            add), split))))))))
params:
  kick0: [1.0]
generator:
  n_terms: 21
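The scalar gains in the circuit are the entries of the transition matrix, so iterating $\mathbf{s}(t+1) = P^T \mathbf{s}(t)$ reproduces what the cascaded trace loops compute. A NumPy check with the same probabilities (0.80/0.20 from Prospect, 0.92/0.08 from Active, absorbing Churned):

```python
import numpy as np

# rows: from-state, columns: to-state (Prospect, Active, Churned)
P = np.array([
    [0.80, 0.20, 0.00],
    [0.00, 0.92, 0.08],
    [0.00, 0.00, 1.00],  # Churned is absorbing
])

s = np.array([1.0, 0.0, 0.0])  # everyone starts as a Prospect
for _ in range(20):            # 20 updates, matching the 21-term stream
    s = P.T @ s

print(np.round(s, 3))  # most of the mass has flowed toward Churned
```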
## Inverse Problems & Optimization

### Parameter Identification
Given observed data from $y = ax + b$, find the unknown parameters $a$ and $b$ using gradient descent through the entire circuit.
$$\min_{a, b} \sum_i \bigl(y_i - (a x_i + b)\bigr)^2$$
Because every Asgard circuit compiles to JAX, gradients flow through the full simulation automatically. The optimizer adjusts circuit parameters (here, scalar(a) and const(b)) to minimize the loss — at $O(1)$ cost per parameter via reverse-mode autodiff.
Configuration:

```yaml
pipeline:
  dataset:
    generator:
      function: "y = a * x + b"
      true_params:
        a: 2.5
        b: 3.0
      x_range: [-2, 4]
      n_samples: 15
  initial_circuit: "composition(monoidal(scalar(1.0), const(0.0)), add)"
  optimizer:
    method: gradient_descent
    learning_rate: 0.02
    max_iterations: 200
    tolerance: 0.000001
```
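For this linear case the same loop can be written by hand: the gradients below are the analytic ones, standing in for what reverse-mode autodiff derives automatically. A plain-NumPy analogue of the pipeline (illustrative; hyperparameters differ from the config):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 4, 15)
y = 2.5 * x + 3.0                # noiseless samples from the true line

a, b = 1.0, 0.0                  # matches the initial circuit scalar(1.0), const(0.0)
lr = 0.02
for _ in range(2000):
    resid = (a * x + b) - y
    # analytic gradients of the mean-squared loss with respect to a and b
    a -= lr * 2 * float(np.mean(resid * x))
    b -= lr * 2 * float(np.mean(resid))

print(round(a, 2), round(b, 2))  # 2.5 3.0
```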
### Feedback Gain Identification
The key capability of differentiable optimization: computing gradients through a feedback loop. The circuit implements a feedback amplifier whose fixed-point gain depends on a parameter $a$.
$$y = \frac{x}{1 - a} \quad \text{(via feedback circuit with gain } a \text{)}$$
At the fixed point, the trace operator converges to $z = a \cdot (x + z)$, giving total output $x / (1 - a)$. The optimizer uses the Implicit Function Theorem to differentiate through the fixed point, recovering $a = 0.6$ (an amplification factor of 2.5) from data.
Without this machinery, `jax.grad()` cannot differentiate through `trace`; this example demonstrates that capability.
Configuration:

```yaml
pipeline:
  dataset:
    generator:
      function: "y = x / (1 - a)"
      true_params:
        a: 0.6
      x_range: [0.5, 2.0]
      n_samples: 10
  initial_circuit: >-
    trace(composition(add,
      composition(split, monoidal(id, scalar(0.1)))))
  optimizer:
    method: gradient_descent
    learning_rate: 0.002
    max_iterations: 300
    tolerance: 0.000001
```
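The implicit-function step can be made concrete: for the fixed point $z = a(x + z)$, differentiating both sides gives $\partial z / \partial a = (x + z)/(1 - a)$ with no unrolling of the loop. A plain-Python illustration (not the framework's implementation):

```python
def fixed_point_output(a, x):
    # solve z = a * (x + z) in closed form; total output is x + z = x / (1 - a)
    z = a * x / (1 - a)
    return x + z

def d_output_da(a, x):
    # Implicit Function Theorem on z = g(z, a) = a * (x + z):
    # dz/da = (dg/da) / (1 - dg/dz) = (x + z) / (1 - a)
    z = a * x / (1 - a)
    return (x + z) / (1 - a)

# agrees with the analytic derivative d/da [x / (1 - a)] = x / (1 - a)**2
a, x = 0.6, 2.0
print(d_output_da(a, x), x / (1 - a) ** 2)
```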
## Pattern
Every example above follows the same workflow:
- Write the equation in standard mathematical notation
- Declare parameters, initial conditions, and evaluation ranges in YAML
- Execute — the framework compiles to a differentiable JAX program and runs
- Visualize — results render automatically in the built-in dashboard
The equation never changes between deterministic, stochastic, and discrete modes. Only the runtime interpretation does.