Every apastra project follows a predictable directory layout. Understanding where files live helps your agent find them and helps you know where to look when something goes wrong.
Full directory tree
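The tree below is assembled from the directory reference that follows; it lists only the directories documented on this page (derived-index/promotions/ is inferred from the promotion records described later).

```
.
├── promptops/
│   ├── prompts/        # prompt spec YAML files
│   ├── datasets/       # JSONL test cases
│   ├── evaluators/     # scoring configurations
│   ├── suites/         # runnable test configurations
│   ├── schemas/        # JSON Schemas for all protocol file types
│   ├── validators/     # ajv-cli wrapper scripts
│   ├── policies/       # regression policies
│   ├── harnesses/      # optional external harness adapters
│   ├── resolver/       # Python resolution chain
│   ├── runtime/        # digest computation + resolution runtime
│   ├── runs/           # run artifact output
│   ├── manifests/      # consumption manifests
│   ├── delivery/       # delivery target specs
│   └── evals/          # quick single-file evals
└── derived-index/
    ├── baselines/      # known-good scorecards
    ├── promotions/     # promotion records
    └── regressions/    # regression reports
```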
Directory reference
promptops/prompts/
Prompt spec YAML files. Each file defines a single prompt with a stable ID, variable schema, template, and optional output contract.
File naming: <prompt-id>.yaml or <prompt-id>/prompt.yaml for multi-file prompts.
Validated by: prompt-spec.schema.json
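A minimal sketch of what a prompt spec might contain, based on the fields described above (stable ID, variable schema, template, optional output contract); the exact field names are defined by prompt-spec.schema.json and the values here are illustrative.

```yaml
# promptops/prompts/summarize-ticket.yaml
# Illustrative sketch — field names are assumptions; consult
# prompt-spec.schema.json for the authoritative shape.
id: summarize-ticket            # stable prompt ID
variables:                      # variable schema
  ticket_body:
    type: string
template: |
  Summarize the following support ticket in two sentences:
  {{ ticket_body }}
output_contract:                # optional output contract
  type: string
```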
promptops/datasets/
Test case files in JSONL format — one JSON object per line. Each line has a case_id, inputs, and optionally assert (inline assertions) or expected_outputs.
File naming: <dataset-id>.jsonl
Validated by: dataset-case.schema.json, dataset-manifest.schema.json
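A sketch of two JSONL cases using the fields named above (case_id, inputs, assert, expected_outputs); the inner assertion shape is an assumption.

```jsonl
{"case_id": "refund-01", "inputs": {"ticket_body": "I was charged twice this month."}, "assert": [{"type": "contains", "value": "refund"}]}
{"case_id": "cancel-01", "inputs": {"ticket_body": "Please cancel my plan."}, "expected_outputs": {"category": "cancellation"}}
```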
promptops/evaluators/
Evaluator YAML files describing how to score model outputs. Evaluator types include deterministic, schema, judge, and human.
File naming: <evaluator-id>.yaml
Validated by: evaluator.schema.json
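A sketch of a deterministic evaluator; only the four type values come from this page, so the config block is an assumption.

```yaml
# promptops/evaluators/contains-keyword.yaml
# Illustrative — consult evaluator.schema.json for the real field names.
id: contains-keyword
type: deterministic          # one of: deterministic, schema, judge, human
config:
  assertion: contains
  value: refund
```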
promptops/suites/
Suite YAML files that tie together datasets, evaluators, and models into a runnable test configuration.
File naming: <suite-id>.yaml
Validated by: suite.schema.json
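A sketch of how a suite might tie the pieces together; the three ingredients (datasets, evaluators, models) come from the description above, but the field names are assumptions.

```yaml
# promptops/suites/summarize-ticket.yaml
# Illustrative — consult suite.schema.json for the authoritative shape.
id: summarize-ticket-suite
datasets:
  - refund-cases
evaluators:
  - contains-keyword
models:
  - gpt-4o-mini
```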
promptops/schemas/
56 JSON Schema files that validate every protocol file type in apastra. Your agent and CI both reference these schemas when validating prompts, datasets, evaluators, and run artifacts.
See the full schema reference for the complete list.
promptops/validators/
Shell scripts that invoke ajv-cli to validate files against the schemas. Used by schema-validation.yml in CI.
Key scripts:
- validate-prompt-spec.sh — validates a prompt spec YAML
- validate-dataset.sh — validates a dataset manifest + JSONL cases
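A sketch of the pattern these scripts follow, assuming ajv-cli is installed and using yq (an assumption) for the YAML-to-JSON step:

```sh
#!/usr/bin/env bash
# Sketch of validate-prompt-spec.sh's likely shape — illustrative, not the
# actual script. ajv validates JSON, so the YAML spec is converted first.
set -euo pipefail

spec="$1"
tmp="$(mktemp)"
yq -o=json "$spec" > "$tmp"                     # yq is an assumption
npx ajv validate \
  -s promptops/schemas/prompt-spec.schema.json \
  -d "$tmp"
```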
promptops/policies/
Regression policy YAML files that define per-metric thresholds and severity levels used by the regression engine.
File naming: regression.yaml (conventional default)
Validated by: regression-policy.schema.json
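A sketch of per-metric thresholds and severity levels; the field names are assumptions — regression-policy.schema.json defines the real ones.

```yaml
# promptops/policies/regression.yaml
# Illustrative — field names are assumptions.
metrics:
  accuracy:
    max_drop: 0.02        # fail if accuracy drops more than 2 points
    severity: blocker
  latency_p95_ms:
    max_increase: 100
    severity: warning
```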
promptops/harnesses/
Harness adapter spec files describing how to invoke an external eval harness. These are optional; your IDE agent acts as the default harness.
Validated by: harness-adapter.schema.json
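A sketch of what an adapter spec might declare, under the assumption that its job is to map a suite to an external command; every field name here is illustrative.

```yaml
# promptops/harnesses/external-harness.yaml
# Illustrative — consult harness-adapter.schema.json for the real shape.
id: external-harness
command: ./scripts/run-eval.sh {suite_path} {output_dir}   # hypothetical script
output_format: json
```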
promptops/resolver/
Python implementation of the four-level prompt resolution chain. See the resolver reference for a full walkthrough.
Files:
- chain.py — the ResolverChain class that orchestrates resolution
- local.py — local path override resolver
- workspace.py — same-repo workspace resolver
- git_ref.py — git tag / commit SHA resolver
- packaged.py — packaged artifact resolver (OCI, npm, PyPI, GitHub Release)
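A minimal sketch of the control flow the four-level chain implies — try each resolver in order and return the first hit. This is not the actual chain.py implementation.

```python
# Illustrative sketch of the resolution order, not the real ResolverChain.
from typing import Optional, Protocol


class Resolver(Protocol):
    def resolve(self, prompt_id: str) -> Optional[str]:
        ...


class ResolverChain:
    def __init__(self, resolvers: list[Resolver]) -> None:
        # Expected order: local override → workspace → git ref → packaged artifact.
        self.resolvers = resolvers

    def resolve(self, prompt_id: str) -> str:
        for resolver in self.resolvers:
            result = resolver.resolve(prompt_id)
            if result is not None:
                return result
        raise LookupError(f"prompt {prompt_id!r} not found at any level")
```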
promptops/runtime/
Digest computation utilities and the resolution runtime. Implements the content digest convention: canonicalize JSON/YAML → SHA-256 → sha256:<hex>.
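A sketch of the digest convention in Python; the exact canonicalization rules (key order, separators) are assumptions — the runtime's own rules are authoritative.

```python
# Illustrative digest computation: canonicalize → SHA-256 → "sha256:<hex>".
import hashlib
import json

import yaml  # PyYAML; YAML is a superset of JSON, so this reads both


def content_digest(path: str) -> str:
    with open(path) as f:
        data = yaml.safe_load(f)
    # Canonical form is assumed to be sorted keys and no insignificant whitespace.
    canonical = json.dumps(data, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```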
promptops/runs/
Run artifact output directory. After each eval, the agent or harness writes its outputs to a new entry named <suite-id>-<YYYY-MM-DD-HHmmss>.
promptops/manifests/
Consumption manifest files. The default file is consumption.yaml, which declares which prompt versions your app pins.
Validated by: consumption-manifest.schema.json
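A sketch of a pin, assuming a prompt is identified by ID, version, and content digest; the field names are illustrative.

```yaml
# promptops/manifests/consumption.yaml
# Illustrative — consult consumption-manifest.schema.json for the real shape.
prompts:
  - id: summarize-ticket
    version: 1.2.0
    digest: sha256:9f2b...   # content digest of the pinned spec (truncated here)
```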
promptops/delivery/
Delivery target spec files. Each file declares a downstream sync target (e.g., a GitHub repo to receive a PR when a new version is promoted).
Validated by: delivery-target.schema.json
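A sketch of a GitHub sync target matching the example in the description; every field name is an assumption.

```yaml
# promptops/delivery/storefront.yaml
# Illustrative — consult delivery-target.schema.json for the real shape.
id: storefront
type: github
repo: acme/storefront          # hypothetical downstream repo
on_promote: open_pull_request
```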
derived-index/baselines/
Known-good scorecards. After a passing eval run, the baseline skill saves the scorecard here. Future evals compare against this file to detect regressions.
File naming: <suite-id>.json
Validated by: baseline.schema.json
derived-index/regressions/
Regression reports produced by the regression engine. Each report compares a candidate scorecard against a baseline and produces a pass or fail result with per-metric evidence.
Validated by: regression-report.schema.json
Quick eval files
For single-file evaluations, place files under promptops/evals/; files there are picked up automatically. See assertion types for the full list of available assertions.
Repo topology options
Apastra supports three repo shapes without changing the conceptual model:
- Same-repo — prompts live inside the app repo under promptops/. This is the simplest starting point — one PR can update code and prompts together. Best for: most teams starting out; single product repo.
- Separate prompt repo
- Local-linked development

The derived-index/ directory
derived-index/ is intentionally separate from promptops/. It stores derived artifacts — outputs computed from source files, not source files themselves:
- Baselines — scorecards saved after a passing eval run
- Promotions — promotion records binding approved versions to channels
- Regressions — regression reports comparing candidate vs baseline
Keeping them outside promptops/ prevents confusion about what is authoritative: source files in promptops/ are the source of truth; files in derived-index/ are computed results.
Never manually edit files in derived-index/. They are always written by the agent (baselines), the regression engine (reports), or the promotion workflow (records).

The artifacts branch pattern
For CI pipelines, run artifacts (scorecards, manifests, reports, promotion records) are stored on a separate Git branch called promptops-artifacts. This keeps the main branch clean and avoids merge conflicts from concurrent CI runs.
The regression-gate.yml and promote.yml workflows both fetch from origin/promptops-artifacts to read and write these records.
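A sketch of how a CI step might read and write that branch using a worktree; the branch name comes from this page, while the paths, file names, and $RUN_DIR variable are illustrative.

```sh
# Illustrative CI step — only the branch name comes from the docs.
git fetch origin promptops-artifacts
git worktree add .artifacts origin/promptops-artifacts

# Copy the latest run's scorecard in ($RUN_DIR and the file name are hypothetical).
mkdir -p .artifacts/scorecards
cp "promptops/runs/$RUN_DIR/scorecard.json" .artifacts/scorecards/

cd .artifacts
git add .
git commit -m "ci: record eval artifacts for $RUN_DIR"
git push origin HEAD:promptops-artifacts
```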