
Runtime

The Perstack runtime combines probabilistic LLM reasoning with deterministic state management β€” making agent execution predictable, reproducible, and auditable.

Agent loop

The runtime executes Experts through an agent loop:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ 1. Reason β†’ LLM decides next action β”‚ β”‚ 2. Act β†’ Runtime executes tool β”‚ β”‚ 3. Record β†’ Checkpoint saved β”‚ β”‚ 4. Repeat β†’ Until completion or limit β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The loop ends when:

  • LLM calls attemptCompletion (task done)
  • maxSteps limit reached
  • External signal (SIGTERM/SIGINT)

This design lets the LLM autonomously decide when a task is complete β€” no hardcoded exit conditions.
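The loop above can be sketched as follows. This is an illustrative sketch only — the types (`Action`, `Checkpoint`) and functions (`runLoop`, `reason`, `executeTool`) are hypothetical stand-ins, not the actual `@perstack/runtime` API:

```typescript
// Hypothetical shapes for illustration only.
type Action =
  | { kind: "toolCall"; tool: string; input: string }
  | { kind: "attemptCompletion"; report: string }

interface Checkpoint {
  step: number
  events: string[]
}

// One agent loop: Reason → Act → Record → Repeat.
// `reason` stands in for the LLM; `executeTool` for deterministic execution.
function runLoop(
  reason: (step: number) => Action,
  executeTool: (action: Action) => string,
  maxSteps: number,
): Checkpoint {
  const checkpoint: Checkpoint = { step: 0, events: [] }
  while (checkpoint.step < maxSteps) {
    const action = reason(checkpoint.step) // 1. Reason
    if (action.kind === "attemptCompletion") {
      checkpoint.events.push(`done: ${action.report}`) // LLM decided the task is complete
      break
    }
    const result = executeTool(action) // 2. Act
    checkpoint.events.push(result) // 3. Record
    checkpoint.step += 1 // 4. Repeat
  }
  return checkpoint
}
```

Note that the exit decision lives inside `reason`: the loop itself only enforces the `maxSteps` safety limit.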

Stopping and resuming

npx perstack run my-expert "query" --max-steps 50
| Stop condition | Behavior | Resume from |
| --- | --- | --- |
| attemptCompletion | Task complete | N/A |
| maxSteps reached | Graceful stop at step boundary | Last checkpoint |
| SIGTERM/SIGINT | Immediate stop | Previous checkpoint |

Checkpoints enable pause/resume across process restarts β€” useful for long-running tasks, debugging, and resource management.

Deterministic state

LLMs are probabilistic — the same input can produce different outputs. Perstack draws a clear boundary between what is probabilistic and what is deterministic:

| Probabilistic (LLM) | Deterministic (Runtime) |
| --- | --- |
| Which tool to call | Tool execution |
| Final report content | State recording |
| Reasoning | Checkpoint creation |

The β€œthinking” is probabilistic; the β€œdoing” and β€œrecording” are deterministic. This boundary enables:

  • Reproducibility: Replay from any checkpoint with identical state
  • Testability: Mock the LLM, test the runtime deterministically
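The testability point can be made concrete with a small sketch. Everything here (`Decision`, `createRuntime`, `step`) is illustrative, not the real `@perstack/runtime` API — the idea is that because execution and recording are deterministic, the probabilistic part can be swapped for a mock in tests:

```typescript
// Hypothetical decision shape for illustration.
type Decision = { tool: string; input: string }

// A toy runtime: deterministic execution + deterministic recording.
function createRuntime(tools: Record<string, (input: string) => string>) {
  const events: string[] = []
  return {
    events,
    step(decide: () => Decision) {
      const d = decide() // probabilistic in production, mocked in tests
      const result = tools[d.tool](d.input) // deterministic tool execution
      events.push(`${d.tool} -> ${result}`) // deterministic state recording
    },
  }
}

// Mock "LLM": always returns the same decision, so the event log is reproducible.
const rt = createRuntime({ echo: (s) => s.toUpperCase() })
rt.step(() => ({ tool: "echo", input: "hi" }))
```

With the same mocked decisions, every run produces an identical event log — which is exactly what makes the runtime testable in isolation.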

Event, Step, Checkpoint

Runtime state is built on three concepts:

| Concept | What it represents |
| --- | --- |
| Event | A single state transition (tool call, result, etc.) |
| Step | One cycle of the agent loop |
| Checkpoint | Complete snapshot at step end — everything needed to resume |

This combines Event Sourcing (complete history) with Checkpoint/Restore (efficient resume).
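The combination can be sketched as two folds over the event history. The event and snapshot shapes below are illustrative, not the actual on-disk format:

```typescript
// Illustrative shapes; the real event/checkpoint schemas differ.
interface RuntimeEvent {
  step: number
  type: string
}
interface Snapshot {
  step: number
  eventCount: number
}

// Event Sourcing: rebuild state by folding over the complete history.
function replayAll(events: RuntimeEvent[]): Snapshot {
  return events.reduce(
    (s, e) => ({ step: e.step, eventCount: s.eventCount + 1 }),
    { step: 0, eventCount: 0 },
  )
}

// Checkpoint/Restore: start from a snapshot, apply only later events.
function resumeFrom(snapshot: Snapshot, events: RuntimeEvent[]): Snapshot {
  return events
    .filter((e) => e.step > snapshot.step)
    .reduce((s, e) => ({ step: e.step, eventCount: s.eventCount + 1 }), snapshot)
}
```

Both paths arrive at the same state; the checkpoint path just skips replaying everything before the snapshot.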

The perstack/ directory

The runtime stores execution history in perstack/runs/ within the workspace:

/workspace
└── perstack/
    └── runs/
        └── {runId}/
            ├── run-setting.json                         # Run configuration
            ├── checkpoint-{timestamp}-{step}-{id}.json  # Execution snapshots
            └── event-{timestamp}-{step}-{type}.json     # Execution events

This directory is managed automatically β€” don’t modify it manually.
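For illustration, here is how tooling *reading* this directory might pick the latest checkpoint by the step number embedded in the filename. The parsing is a sketch (it assumes a numeric timestamp segment), and remember the runtime itself owns this directory:

```typescript
// Sketch: select the newest checkpoint from a list of filenames in
// perstack/runs/{runId}/. Assumes the checkpoint-{timestamp}-{step}-{id}.json
// pattern with numeric timestamp and step segments.
function latestCheckpoint(files: string[]): string | undefined {
  const stepOf = (f: string) => {
    const m = f.match(/^checkpoint-\d+-(\d+)-/)
    return m ? Number(m[1]) : -1
  }
  return files
    .filter((f) => f.startsWith("checkpoint-")) // ignore events and run settings
    .sort((a, b) => stepOf(b) - stepOf(a))[0] // highest step first
}
```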

Event notification

The runtime emits events for every state change. There are two ways to consume them:

stdout (default)

Events are written to stdout as JSON. This is the safest option for sandboxed environments β€” no network access required.

npx perstack run my-expert "query"

Your infrastructure reads stdout and decides what to do with events. See Sandbox Integration for the rationale.
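A consumer of that stream might look like the sketch below. It assumes one JSON event per stdout line; the concrete event schema is Perstack's, not shown here:

```typescript
// Sketch: parse captured stdout into events, assuming one JSON object per line.
function parseEventLines(stdout: string): unknown[] {
  const events: unknown[] = []
  for (const line of stdout.split("\n")) {
    if (!line.trim()) continue // skip blank lines
    try {
      events.push(JSON.parse(line)) // a JSON event
    } catch {
      // skip any non-JSON output, e.g. incidental logs
    }
  }
  return events
}
```

In practice you would feed this from the child process's stdout stream; parsing is shown on a captured string to keep the sketch self-contained.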

Custom event listener

When embedding the runtime programmatically, use a callback:

import { run } from "@perstack/runtime"

await run(params, {
  eventListener: (event) => {
    // Send to your monitoring system, database, etc.
  },
})

Skills (MCP)

Experts use tools through MCP (Model Context Protocol). The runtime handles:

  • Lifecycle: Start MCP servers when an Expert starts; clean up on exit
  • Environment isolation: Only requiredEnv variables are passed
  • Error recovery: MCP failures are fed back to LLM, not thrown as runtime errors

For skill configuration, see Skills.

Providers and models

Perstack uses standard LLM features available from most providers:

  • Chat completion (including PDF/image in messages)
  • Tool calling

For supported providers and models, see Providers and Models.