Jobs Endpoints

YAPTIDE supports two execution backends: direct (Celery workers on the server) and batch (SLURM on HPC clusters). Both share the same request format.

Run a simulation using local Celery workers.

POST /jobs/direct
Cookie: access_token=<jwt>
Content-Type: application/json
{
  "sim_type": "shieldhit",
  "ntasks": 10,
  "input_type": "editor",
  "sim_data": { ... }
}

Parameters:

| Field | Type | Required | Description |
|---|---|---|---|
| sim_type | string | Yes | Simulator: shieldhit, fluka, geant4, topas |
| ntasks | integer | Yes | Number of parallel tasks (splits primaries) |
| input_type | string | Yes | "editor" (project JSON) or "files" (raw input files) |
| sim_data | object | Yes | Project JSON (when input_type is "editor") |

Response 202 Accepted

{
  "message": "Job submitted",
  "job_id": "abc123-def456"
}

Errors:

  • 400 — Missing required fields or invalid sim_type
  • 500 — Conversion or task dispatch failed
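Submission can be sketched as a small client-side helper that builds and validates the request body before dispatch. This is not part of YAPTIDE itself, just a sketch based on the fields in the table above; the base URL in the comment is a placeholder:

```python
def build_direct_job_request(sim_data: dict, ntasks: int,
                             sim_type: str = "shieldhit") -> dict:
    """Build the body for POST /jobs/direct, validating required fields."""
    allowed = {"shieldhit", "fluka", "geant4", "topas"}
    if sim_type not in allowed:
        raise ValueError(f"invalid sim_type: {sim_type!r}")
    if not isinstance(ntasks, int) or ntasks < 1:
        raise ValueError("ntasks must be a positive integer")
    return {
        "sim_type": sim_type,     # which simulator to run
        "ntasks": ntasks,         # primaries are split across this many tasks
        "input_type": "editor",   # sim_data carries project JSON
        "sim_data": sim_data,
    }

# Dispatch, e.g. with the third-party requests library (placeholder URL):
#   requests.post("http://localhost:5000/jobs/direct",
#                 json=build_direct_job_request(project_json, ntasks=10),
#                 cookies={"access_token": jwt})
```

Validating sim_type and ntasks locally surfaces the 400-class mistakes before a round trip to the server.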

GET /jobs/direct?job_id=abc123-def456
Cookie: access_token=<jwt>

Response 200 OK

{
  "message": "Job status",
  "job_state": "RUNNING",
  "job_tasks_status": [
    {"task_id": 1, "task_state": "COMPLETED", "simulated_primaries": 1000, "requested_primaries": 1000},
    {"task_id": 2, "task_state": "RUNNING", "simulated_primaries": 500, "requested_primaries": 1000}
  ]
}
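Overall progress can be derived client-side by summing primaries across the entries of job_tasks_status; a minimal sketch:

```python
def job_progress(job_tasks_status: list[dict]) -> float:
    """Fraction of requested primaries simulated so far, across all tasks."""
    done = sum(t.get("simulated_primaries", 0) for t in job_tasks_status)
    total = sum(t.get("requested_primaries", 0) for t in job_tasks_status)
    return done / total if total else 0.0

# Applied to the example response above (1500 of 2000 primaries done):
tasks = [
    {"task_id": 1, "task_state": "COMPLETED",
     "simulated_primaries": 1000, "requested_primaries": 1000},
    {"task_id": 2, "task_state": "RUNNING",
     "simulated_primaries": 500, "requested_primaries": 1000},
]
```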

Job states:

| State | Description |
|---|---|
| UNKNOWN | Job not found or not yet initialized |
| PENDING | Submitted, waiting for a worker |
| RUNNING | At least one task is executing |
| MERGING_QUEUED | All tasks done, waiting for result merge |
| MERGING_RUNNING | Results being merged |
| COMPLETED | All tasks finished successfully |
| FAILED | One or more tasks failed |
| CANCELED | Job was manually cancelled |
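The state table suggests a simple polling loop: keep fetching status until one of the three terminal states appears. The sketch below takes the status call as a parameter so the loop logic is independent of any particular HTTP client:

```python
import time

# Terminal states from the table above; all others mean "keep polling".
TERMINAL_STATES = {"COMPLETED", "FAILED", "CANCELED"}

def wait_for_job(fetch_status, poll_interval: float = 5.0,
                 max_polls: int = 1000) -> str:
    """Poll fetch_status() until job_state reaches a terminal state.

    fetch_status should return the parsed GET /jobs/direct JSON body.
    """
    for _ in range(max_polls):
        body = fetch_status()
        if body["job_state"] in TERMINAL_STATES:
            return body["job_state"]
        time.sleep(poll_interval)
    raise TimeoutError("job did not reach a terminal state")
```

In practice fetch_status would wrap a GET to /jobs/direct with the job_id query parameter and the access_token cookie.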

DELETE /jobs/direct?job_id=abc123-def456
Cookie: access_token=<jwt>

Response 200 OK

{
  "message": "Job cancelled"
}

Revokes all pending Celery tasks and terminates running ones.


Submit a simulation to an HPC cluster via SLURM. Requires Keycloak authentication.

POST /jobs/batch
Cookie: access_token=<jwt>
Content-Type: application/json
{
  "sim_type": "shieldhit",
  "ntasks": 100,
  "input_type": "editor",
  "sim_data": { ... },
  "batch_options": {
    "cluster_name": "prometheus",
    "slurm_options": {
      "time": "01:00:00",
      "partition": "plgrid"
    }
  }
}

Additional parameters:

| Field | Type | Required | Description |
|---|---|---|---|
| batch_options | object | No | SLURM cluster selection and resource options |
| batch_options.cluster_name | string | No | Target cluster name |
| batch_options.slurm_options | object | No | Custom SLURM headers (time, partition, etc.) |

Response 202 Accepted

{
  "message": "Batch job submitted",
  "job_id": "batch-789xyz"
}

Errors:

  • 403 — Not a Keycloak-authenticated user
  • 500 — SSH connection or SLURM submission failed

GET /jobs/batch?job_id=batch-789xyz
Cookie: access_token=<jwt>

Response format is identical to direct job status.


DELETE /jobs/batch?job_id=batch-789xyz
Cookie: access_token=<jwt>

Sends a SLURM scancel command to the cluster.


Platform-agnostic endpoint that reads job status from the database (works for both direct and batch).

GET /jobs?job_id=abc123-def456
Cookie: access_token=<jwt>

Response 200 OK

{
  "message": "Job status",
  "job_state": "COMPLETED",
  "job_tasks_status": [
    {"task_id": 1, "task_state": "COMPLETED"},
    {"task_id": 2, "task_state": "COMPLETED"}
  ]
}

Tip: Use GET /jobs/direct or GET /jobs/batch during active simulation for real-time status. Use GET /jobs for historical lookups from the database.
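The tip above amounts to a small routing rule, sketched here as a helper (the function name and its backend parameter are illustrative, not part of the API):

```python
# Terminal states from the job-state table; once reached, the database
# record is authoritative and the platform-agnostic endpoint suffices.
TERMINAL_STATES = {"COMPLETED", "FAILED", "CANCELED"}

def status_endpoint(job_state: str, backend: str = "direct") -> str:
    """Pick a status endpoint: live backend endpoint while the job is
    active, the database-backed GET /jobs once it is terminal."""
    if job_state in TERMINAL_STATES:
        return "/jobs"
    return f"/jobs/{backend}"
```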


Worker-facing only. Called by Celery workers and batch helpers to report status changes.

POST /jobs
Content-Type: application/json
{
  "sim_id": "abc123-def456",
  "update_key": "<shared-secret>",
  "job_state": "RUNNING"
}

This endpoint is not user-facing; it uses update_key authentication instead of JWT cookies.
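A worker-side sketch of building the update body, rejecting states not listed in the job-state table (local validation is an assumption here, not something the endpoint is documented to require):

```python
# State names from the job-state table earlier in this page.
KNOWN_JOB_STATES = {"UNKNOWN", "PENDING", "RUNNING", "MERGING_QUEUED",
                    "MERGING_RUNNING", "COMPLETED", "FAILED", "CANCELED"}

def build_state_update(sim_id: str, update_key: str, job_state: str) -> dict:
    """Body for the worker-facing POST /jobs.

    Authenticated by the shared update_key, not a JWT cookie.
    """
    if job_state not in KNOWN_JOB_STATES:
        raise ValueError(f"unknown job_state: {job_state!r}")
    return {"sim_id": sim_id, "update_key": update_key, "job_state": job_state}
```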