This document pins down what “perfect” means for the Helix runtime, how we inspect it, and which slices we ship first. Treat it as the non‑negotiable checklist when proposing new work or deciding that a feature is “done”.
Calibrate parameters against real assays using posterior predictive checks (PPCs) and held-out validation datasets.
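One way to operationalize a PPC, sketched with a scalar test statistic; `ppc_pvalue` is an illustrative helper, not part of the Helix API:

```python
import random

def ppc_pvalue(posterior_samples, simulate, observed_stat, n_draws=200, seed=0):
    """Posterior predictive p-value: fraction of replicated test statistics
    at least as extreme as the observed one (values near 0 or 1 flag misfit)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        theta = rng.choice(posterior_samples)  # draw one posterior sample
        stat = simulate(theta, rng)            # replicate the assay statistic
        hits += stat >= observed_stat
    return hits / n_draws
```

In practice `simulate` would run the calibrated model forward and reduce the trajectory to the same statistic computed on the held-out assay.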
Numerical guarantees
Typed ports carry explicit physical units, with strict mass and positivity invariants enforced at boundaries.
Deterministic seeding plus bitwise-identical trajectories across machines and runs.
Stable multi-rate stepping across solver islands.
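A minimal sketch of how typed ports can enforce unit and positivity invariants at a boundary; `Quantity` and `Port` are illustrative names, not the runtime's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    unit: str  # e.g. "mol/L"

class Port:
    """Typed port that rejects unit mismatches and negative values
    at the island boundary, before any kernel sees the data."""
    def __init__(self, name, unit, positive=True):
        self.name, self.unit, self.positive = name, unit, positive

    def accept(self, q: Quantity) -> float:
        if q.unit != self.unit:
            raise ValueError(f"{self.name}: expected {self.unit}, got {q.unit}")
        if self.positive and q.value < 0:
            raise ValueError(f"{self.name}: negative value {q.value}")
        return q.value
```

Mass-balance checks would sit at the same layer, summing fluxes across a coupler and rejecting steps that violate conservation beyond tolerance.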
Performance envelope
UI holds a steady 60 Hz loop with <10 ms jitter; delta-sync state delivery keeps renderer work bounded.
GPU kernels run within sub-millisecond budgets using micro-batching and multi-stream execution.
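Delta-sync delivery can be sketched as a changed-keys diff, so renderer work scales with what changed rather than with world size; this stand-in ignores the runtime's actual wire format:

```python
def state_delta(prev: dict, curr: dict) -> dict:
    """Return only the entries that changed or appeared since the last
    frame, plus keys that disappeared, so the renderer applies O(changes)
    updates instead of re-ingesting the full state."""
    changed = {k: v for k, v in curr.items() if prev.get(k) != v}
    removed = [k for k in prev if k not in curr]
    return {"set": changed, "del": removed}
```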
UX and reproducibility
Every run emits a COMBINE-style bundle containing the IR, config, seeds, and outputs.
Hot-swap edits (params, rules, or nodes) reset only the touched state.
Loops That Keep Us Honest
Model Loop — refine the IR structure. Graph IR supports cycles; strongly connected components (SCCs) collapse into solver islands. Lift/restrict couplers enforce invariants at the boundaries.
Data Loop — fit and validate models. Combine gradient-based and sampling calibration, run PPCs, sensitivity analyses (Sobol/Morris), and use active learning to drive down uncertainty.
Performance Loop — guarantee real-time behavior. Autotune per-island Δt, reuse kernels via CUDA Graphs, stream deltas to the renderer, and track UI frame cadence continuously.
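The per-island Δt autotuning in the Performance Loop can be sketched as a standard shrink/grow step-size controller driven by an error estimate; the constants here are illustrative, not tuned values:

```python
def autotune_dt(dt, err, tol, shrink=0.5, grow=1.26, dt_min=1e-6, dt_max=1.0):
    """Accept/reject controller: shrink aggressively when the local error
    estimate violates tolerance, grow cautiously when it is comfortably low,
    and clamp to the island's allowed step range."""
    if err > tol:
        dt *= shrink
    elif err < 0.5 * tol:
        dt *= grow
    return min(max(dt, dt_min), dt_max)
```

A production autotuner would presumably also weigh the wall-clock budget per sync window, not just solver error.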
Architecture Snapshot
IR: general directed graph (not a DAG).
Runtime: compute SCCs, build the condensation DAG, treat each SCC as a solver island.
Scheduler: multi-rate per island with operator-split sync at DAG boundaries, content-addressed caching keyed by (node hash, inputs, Δt), and a hot-swap path that minimizes state resets.
Live mode: StateReducer emits deltas, renderer double-buffers, compute never waits on UI.
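The content-addressed cache keyed by (node hash, inputs, Δt) might look like the following sketch; `StepCache` is a hypothetical name, and the JSON canonicalization assumes JSON-serializable inputs:

```python
import hashlib
import json

class StepCache:
    """Content-addressed cache for island step results, keyed by the
    node's content hash, its canonicalized inputs, and the step size."""
    def __init__(self):
        self.store = {}

    def key(self, node_hash, inputs, dt):
        # sort_keys makes the key independent of dict insertion order
        blob = json.dumps([node_hash, inputs, dt], sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def get_or_compute(self, node_hash, inputs, dt, compute):
        k = self.key(node_hash, inputs, dt)
        if k not in self.store:
            self.store[k] = compute()  # miss: run the kernel once
        return self.store[k]
```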
Node palette
ProteinNode — patch-level MD/docking; maps ΔΔG to rate multipliers
RuleNetNode — ODE/SSA/rule-based hybrid
GRNNode — gene regulation (ODE/SSA)
FieldNode — reaction-diffusion/PDE tiles
ABMNode — agent-based models at cell/agent level
MechNode — tissue mechanics
CouplerNode — lift/restrict, flux maps, mixing
RewriterNode — CRISPR/prime/PTM/pathway rewrites
ObserverNode — metrics, losses, reports
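The CouplerNode's lift/restrict maps can be illustrated on a 1-D field; these two helpers are hypothetical, chosen only to show the mass-conservation invariant across scales:

```python
def restrict(fine, block):
    """Coarsen a 1-D field by summing non-overlapping blocks
    (total mass is preserved exactly)."""
    return [sum(fine[i:i + block]) for i in range(0, len(fine), block)]

def lift(coarse, block):
    """Refine a 1-D field by splitting each coarse value evenly
    across its block (total mass is preserved exactly)."""
    return [v / block for v in coarse for _ in range(block)]
```

Real couplers would use physically motivated interpolation rather than an even split, but the round-trip mass check is the invariant the boundary enforces.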
Reference Code
# core/scc.py
from collections import defaultdict

class Graph:
    def __init__(self):
        self.adj = defaultdict(set)

    def add_edge(self, u, v):
        self.adj[u].add(v)

def tarjan_scc(g: Graph):
    """Tarjan's algorithm: return the strongly connected components of g."""
    index, low = {}, {}
    stack, onstack = [], set()
    idx = 0
    sccs = []

    def strongconnect(v):
        nonlocal idx
        index[v] = low[v] = idx
        idx += 1
        stack.append(v)
        onstack.add(v)
        for w in g.adj[v]:
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in onstack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                onstack.remove(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(tuple(comp))

    # visit every vertex, including pure sinks that appear only as edge targets
    vertices = set(g.adj.keys()) | {x for vs in g.adj.values() for x in vs}
    for v in vertices:
        if v not in index:
            strongconnect(v)
    return sccs

def condensation_dag(g: Graph):
    """Collapse each SCC into one node; edges between distinct SCCs form
    the condensation DAG (each SCC becomes a solver island)."""
    comps = tarjan_scc(g)
    comp_index = {v: i for i, c in enumerate(comps) for v in c}
    dag = Graph()
    for u in list(g.adj):
        for v in g.adj[u]:
            cu, cv = comp_index[u], comp_index[v]
            if cu != cv:
                dag.add_edge(cu, cv)
    return comps, dag
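Once the condensation exists, islands can be stepped in dependency order. A self-contained sketch using Kahn's algorithm on a plain adjacency dict (independent of the `Graph` class above):

```python
from collections import deque

def topo_order(adj):
    """Kahn's algorithm over a condensation-style adjacency dict
    (node -> set of successors): returns an island execution order
    in which every upstream island runs before its dependents."""
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] = indeg.get(v, 0) + 1
    ready = deque(u for u, d in indeg.items() if d == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in adj.get(u, ()):
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order
```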
# core/scheduler.py
from time import perf_counter, sleep

class Island:
    def __init__(self, name, nodes, dt):
        self.name, self.nodes, self.dt = name, nodes, dt

    def step(self, t, dt, inputs):
        # run ODE/SSA/PDE/ABM kernels; return outputs dict
        return {}

class LiveScheduler:
    def __init__(self, islands, sync_dt, state_reducer):
        self.islands = islands
        self.sync_dt = sync_dt
        self.state_reducer = state_reducer
        self.buffers = {isl.name: {} for isl in islands}

    def run_until(self, T, wall_budget_ms=5):
        t = 0.0
        wall_anchor = perf_counter()
        while t < T:
            # multi-rate inner stepping: each island subdivides the sync window
            for isl in self.islands:
                steps = max(1, int(self.sync_dt // isl.dt))
                local_t = t
                for _ in range(steps):
                    out = isl.step(local_t, isl.dt, self.buffers[isl.name])
                    self.buffers[isl.name] = out
                    local_t += isl.dt
            t += self.sync_dt  # global time advances once per sync window
            self.state_reducer.push(self.collect_snapshot())
            # pace to the wall budget: if the window finished early, yield
            # the remainder so compute never starves the UI thread
            elapsed_ms = (perf_counter() - wall_anchor) * 1e3
            if elapsed_ms < wall_budget_ms:
                sleep((wall_budget_ms - elapsed_ms) / 1e3)
            wall_anchor = perf_counter()

    def collect_snapshot(self):
        # gather per-island buffers into one renderer-facing snapshot
        return {name: dict(buf) for name, buf in self.buffers.items()}
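The `StateReducer` named in the Architecture Snapshot can be approximated with a double buffer; this is a sketch of the "renderer double-buffers, compute never waits on UI" contract, not the actual implementation:

```python
import threading

class StateReducer:
    """Double-buffered snapshot handoff: the compute thread writes into
    the back buffer and flips; the render thread always reads a complete
    front buffer, so neither side blocks on the other's work."""
    def __init__(self):
        self._lock = threading.Lock()
        self._front, self._back = {}, {}

    def push(self, snapshot):
        # called from the compute thread once per sync window
        self._back = snapshot
        with self._lock:  # the flip is the only shared critical section
            self._front, self._back = self._back, self._front

    def latest(self):
        # called from the render thread each frame
        with self._lock:
            return self._front
```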