# Remote Engine Contracts (OGN-ready)
Helix and OGN share the same JSON contracts so local prototypes and remote pipelines stay interchangeable.
## Performance benchmark response
Remote CRISPR services must return the exact JSON produced by
`helix engine benchmark --json` (see docs/engine_architecture.md). Required
fields:

- `helix_version`, `scoring_versions`, `env`, `seed`, `config`
- `benchmarks.crispr[]` entries with `backend_requested`/`backend_used`/`shape`/`mpairs_per_s`
- `benchmarks.prime[]` entries with `backend_requested`/`backend_used`/`workload`/`predictions_per_s`
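A client can verify these fields before trusting a remote response. The sketch below checks the required keys listed above; the helper name and structure are illustrative, not part of Helix or OGN.

```python
# Required fields from the benchmark contract above; the validator
# itself is an illustrative sketch, not part of Helix or OGN.
REQUIRED_TOP = {"helix_version", "scoring_versions", "env", "seed", "config", "benchmarks"}
CRISPR_KEYS = {"backend_requested", "backend_used", "shape", "mpairs_per_s"}
PRIME_KEYS = {"backend_requested", "backend_used", "workload", "predictions_per_s"}

def missing_fields(resp: dict) -> list:
    """Return dotted paths of required fields missing from a benchmark response."""
    missing = sorted(REQUIRED_TOP - resp.keys())
    bench = resp.get("benchmarks", {})
    for section, keys in (("crispr", CRISPR_KEYS), ("prime", PRIME_KEYS)):
        for i, entry in enumerate(bench.get(section, [])):
            missing += ["benchmarks.%s[%d].%s" % (section, i, k)
                        for k in sorted(keys - entry.keys())]
    return missing
```

An empty return value means the payload carries every required field; anything else names the exact missing paths, which makes contract drift between backends easy to spot.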
## Prime physics response
Remote prime scoring endpoints should echo the `physics_score` structure used by
`helix prime simulate --physics-score`:

```json
{
  "physics_score": {
    "pbs_dG": -8.4,
    "rt_cum_dG": [...],
    "flap_ddG": 0.7,
    "microhomology": 6,
    "mmr_flag": false,
    "nick_distance": 4,
    "P_RT": 0.82,
    "P_flap": 0.61,
    "E_pred": 0.50
  }
}
```
If a remote service adds ML-augmented probabilities, it should place them alongside this structure rather than replacing it, so clients can rely on a single schema regardless of backend.
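A minimal consumer might pull just the probability fields from this payload, ignoring any backend-specific additions. This is a sketch; the function name is hypothetical and the key names come from the example above.

```python
import json

def extract_probabilities(payload: str) -> dict:
    """Parse a prime physics response and return its probability fields.

    Sketch only: key names follow the documented physics_score example;
    any ML-augmented keys a backend adds alongside are deliberately ignored.
    """
    score = json.loads(payload)["physics_score"]
    return {k: score[k] for k in ("P_RT", "P_flap", "E_pred")}
```

Because the client reads only the shared keys, a backend that appends extra fields next to `physics_score` stays compatible without any client change.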
## Evidence → VeriBiota → OGN (stub plan)
- Helix now exports `helix_run_evidence_v1.json` (UI + `helix-cli evidence`).
- OGN ingestion scaffold lives at `tools/ogn_evidence_ingest.py` (local-only for now):
  - Fetches evidence from `file://` or a local path.
  - If a `veribiota` binary is available, runs profile `helix_run_outcomes_v1`; otherwise writes a stub verdict with status `"skipped"`.
  - Stores evidence and verdict under `runs/<run_id>/evidence/` and `runs/<run_id>/verdicts/` beneath the chosen store root (default `./ogn_store`).
  - Returns a small summary `{run_id, evidence_id, profile, status, verdict_ref}`.
- Usage example:

  ```bash
  python tools/ogn_evidence_ingest.py \
    --evidence artifacts/run.v1.1.evs.json \
    --store ogn_store
  ```

- Next step (separate OGN repo): swap the local store for S3/DB and hook the verdict status into dashboards.
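The store layout above can be sketched as a small path helper. The function name and defaults here are hypothetical and only mirror the documented layout; `tools/ogn_evidence_ingest.py` owns the real implementation.

```python
from pathlib import Path

def store_paths(run_id: str, store_root: str = "./ogn_store") -> dict:
    """Return the evidence/verdict directories for a run, per the layout above.

    Illustrative sketch only: runs/<run_id>/evidence/ and
    runs/<run_id>/verdicts/ beneath the chosen store root.
    """
    root = Path(store_root) / "runs" / run_id
    return {"evidence": root / "evidence", "verdicts": root / "verdicts"}
```

Keeping the layout in one helper like this makes the later swap to S3/DB a matter of replacing path construction, not every call site.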