Enterprise API v1 — Pipelines & Scripts

The first version of the Enterprise API exposes two surfaces over HTTP:

  • Hub pipelines — the curated recipe-launchers shipped with the Hub
  • User scripts — .sh files under the operator's home directory and /home/mark/serveraihub/scripts/

Both surfaces share the same run model: every launch returns a run_id, runs are listable and filterable by status, logs are paged by byte offset, and SIGTERM is one POST away.

The Corpus, HIFF / GRIFF-Δ, and Δ UI internals are not exposed via this API. Those remain inside the closed Hub stack. See Extending the Hub for the tier model.

Base URL

https://<your-hub-host>/api/v1

Authentication

Every request requires an Authorization: Bearer <key> header. Keys are managed in /app/config/enterprise_keys.json inside the dashboard container (Docker volume serveraihub_dashboard_config):

{
  "keys": [
    {
      "name": "acme-prod",
      "key": "sk_live_LONG_RANDOM_STRING",
      "scopes": ["pipelines:read", "pipelines:write", "scripts:read", "scripts:write"],
      "created": "2026-05-15"
    }
  ]
}

Scope names:

  • pipelines:read, pipelines:write
  • scripts:read, scripts:write
  • * — grants everything (use sparingly)

Dev mode: if the keys file is missing or its keys array is empty, the API serves all requests unauthenticated and tags them auth=dev. As soon as a real keys file exists, auth is enforced.
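In client code, the bearer key and content type can be built once and attached to every request. A minimal sketch; the helper name and the key value are illustrative, not part of the API:

```python
def auth_headers(api_key: str) -> dict:
    """Build the headers every Enterprise API request needs."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# Example: attach once to a session object (assumes the requests library):
#   session = requests.Session()
#   session.headers.update(auth_headers("sk_live_LONG_RANDOM_STRING"))
```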

Idempotency

POST /api/v1/pipelines/runs and POST /api/v1/scripts/runs accept an Idempotency-Key: <any-string> header. If a request with the same key was successfully processed within the last hour, the prior run record is returned instead of starting a new run. Use this to make network retries safe.
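One way to make retries safe is to derive the Idempotency-Key deterministically from the launch itself, so a network retry of the same logical launch reuses the same key. A sketch; the key format and the batch parameter are arbitrary choices, not mandated by the API:

```python
import hashlib
import json

def idempotency_key(recipe_id: str, args: str, batch: str) -> str:
    """Derive a stable Idempotency-Key from the launch parameters.

    Retrying the same logical launch yields the same key, so the API
    returns the prior run record instead of starting a duplicate run.
    """
    payload = json.dumps(
        {"recipe_id": recipe_id, "args": args, "batch": batch},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:32]
```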

Endpoints

Pipelines

GET /api/v1/pipelines/recipes — list recipes
POST /api/v1/pipelines/runs — launch a recipe
GET /api/v1/pipelines/runs?status=&limit= — list runs
GET /api/v1/pipelines/runs/{run_id} — one run's record
GET /api/v1/pipelines/runs/{run_id}/logs?after=N — paged log tail
POST /api/v1/pipelines/runs/{run_id}/kill — SIGTERM

Launch a pipeline

curl -X POST https://hub.acme/api/v1/pipelines/runs \
  -H "Authorization: Bearer sk_live_..." \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: my-job-2026-05-15-001" \
  -d '{"recipe_id": "build_all_extra_features", "args": "--limit 5000"}'

Response (201 Created):

{
  "run_id": "run_a1b2c3d4e5f6",
  "source": "pipeline",
  "spec": {"recipe_id": "build_all_extra_features", "args": "--limit 5000"},
  "pid": "3505615",
  "log_path": "/home/mark/serveraihub/eval/halueval30k/logs/build_all_extra_features__20260515_104444.log",
  "cmd": "python build_all_extra_features.py --limit 5000",
  "started_at": 1778856284,
  "ended_at": null,
  "status": "running",
  "key_name": "acme-prod"
}

Scripts

GET /api/v1/scripts?path= — list .sh under path
GET /api/v1/scripts/content?path= — read script source
POST /api/v1/scripts/runs — launch a .sh
GET /api/v1/scripts/runs?status=&limit= — list runs
GET /api/v1/scripts/runs/{run_id} — one run's record
GET /api/v1/scripts/runs/{run_id}/logs?after=N — paged log tail
POST /api/v1/scripts/runs/{run_id}/kill — SIGTERM

Only .sh files under /home/mark or /home/mark/serveraihub/scripts/ are runnable. Anything else returns 403.
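Clients can mirror this restriction to fail fast before the 403. A sketch of the rule as stated above; the server's actual check may differ:

```python
from pathlib import PurePosixPath

# Allowed roots as documented; the server's real allow-list is authoritative.
ALLOWED_ROOTS = ("/home/mark", "/home/mark/serveraihub/scripts")

def is_runnable(path: str) -> bool:
    """Client-side mirror of the server rule: .sh files under allowed roots."""
    p = PurePosixPath(path)
    if p.suffix != ".sh":
        return False
    return any(str(p).startswith(root + "/") for root in ALLOWED_ROOTS)
```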

Launch a script

curl -X POST https://hub.acme/api/v1/scripts/runs \
  -H "Authorization: Bearer sk_live_..." \
  -H "Content-Type: application/json" \
  -d '{"path": "/home/mark/scripts/my_job.sh", "args": "--mode full"}'

Logs — paged tail

Logs are byte-paged. The first call passes after=0; the response includes a new offset to pass on the next call. more=true means there are bytes beyond what was returned.

curl "https://hub.acme/api/v1/scripts/runs/run_a1b2/logs?after=0" \
  -H "Authorization: Bearer sk_live_..."

{
  "run_id": "run_a1b2",
  "offset": 4096,
  "size": 8192,
  "chunk": "started at 10:44:44\n...",
  "more": true
}

A simple polling loop (client is any HTTP client pre-configured with the base URL and bearer key; run_id comes from the launch response):

import time

offset = 0
while True:
    r = client.get(f"/api/v1/scripts/runs/{run_id}/logs",
                   params={"after": offset, "max_bytes": 65536})
    chunk = r.json()
    print(chunk["chunk"], end="")
    offset = chunk["offset"]
    if not chunk["more"]:
        # Caught up with the log tail. Optional: check run status and
        # break once it is completed/failed/killed.
        ...
    time.sleep(2)

OpenAPI spec

The full machine-readable spec lives at:

https://<your-hub-host>/openapi.json

Filter to /api/v1/* paths to get just the v1 surface. Import into Postman, generate client SDKs with openapi-generator, etc.
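Filtering the parsed spec down to the v1 surface is a few lines. A sketch that operates on the decoded JSON; fetching /openapi.json with an HTTP client of your choice is assumed:

```python
def v1_paths(spec: dict) -> dict:
    """Return a copy of an OpenAPI spec keeping only /api/v1/* paths."""
    filtered = dict(spec)
    filtered["paths"] = {
        path: ops
        for path, ops in spec.get("paths", {}).items()
        if path.startswith("/api/v1/")
    }
    return filtered
```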

Run status values

Status      Meaning
running     Process is alive on the host
completed   Process exited cleanly (log tail had no error markers)
failed      Process exited but the log tail contains error markers
killed      SIGTERM was issued via /kill
unknown     Status couldn't be determined

The Hub refreshes running status when a run is listed or fetched — process liveness is verified by kill -0 on the host PID. Completed/failed inference is best-effort and reads the last few lines of the log file for error markers.
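Since status is refreshed on every fetch, a client-side wait helper only needs to know which statuses are terminal. A sketch; fetch_run is a stand-in for a GET on the run record, and the interval/timeout defaults are illustrative:

```python
import time

# Terminal statuses per the table above; "unknown" keeps polling.
TERMINAL = {"completed", "failed", "killed"}

def wait_for_run(fetch_run, interval: float = 2.0, timeout: float = 3600.0) -> dict:
    """Poll a run record until it reaches a terminal status.

    fetch_run: zero-arg callable returning the run record as a dict
    (stand-in for GET /api/v1/pipelines/runs/{run_id}).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = fetch_run()
        if run["status"] in TERMINAL:
            return run
        time.sleep(interval)
    raise TimeoutError("run did not reach a terminal status in time")
```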

What this API does not do

  • Host shell access — never.
  • Corpus / HIFF / GRIFF-Δ internals — sold through the Δ UI scored-response endpoint, not exposed here.
  • Server-Sent Events streaming — paged byte offsets only in v1. SSE planned for v2.
  • Per-key quotas / billing — planned for v2 alongside the metered subscription model.