

Installation

```shell
npx skills add BintzGavin/apastra/skills/validate
```

How to invoke

Ask your agent:
“Use the apastra-validate skill to validate my promptops files”

What gets validated

Your agent scans the promptops/ directory and validates every file it finds:
| File pattern | Schema applied |
| --- | --- |
| promptops/prompts/*.yaml or *.json | Prompt spec |
| promptops/datasets/*.jsonl | Dataset case (per line) |
| promptops/evaluators/*.yaml or *.json | Evaluator |
| promptops/suites/*.yaml or *.json | Suite |
| promptops/evals/*.yaml | Quick eval |
| promptops/policies/*.yaml | Regression policy |

Validation process

1. Discover files

Your agent scans promptops/ for all YAML, JSON, and JSONL files matching the patterns above.
2. Validate each file

Your agent applies schema rules specific to each file type.

Prompt specs (promptops/prompts/):
  • Has id (string, required)
  • Has variables (object, required) — each value should have a type field
  • Has template (string, object, or array — required)
  • output_contract if present is a valid object
  • metadata if present is a valid object
  • Template {{variable}} placeholders match keys in variables
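The last rule, matching template placeholders against variables keys, can be sketched as a simple check. This is a simplified illustration (it assumes placeholder names are word characters; the tool's actual placeholder grammar may be more permissive):

```python
import re

def check_placeholders(template: str, variables: dict) -> list[str]:
    """Return {{variable}} placeholders that have no matching key in `variables`."""
    placeholders = set(re.findall(r"\{\{\s*(\w+)\s*\}\}", template))
    return sorted(placeholders - set(variables))
```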
Datasets (promptops/datasets/*.jsonl):
  • Each line is valid JSON
  • Each line has case_id (string, required)
  • Each line has inputs (object, required)
  • case_id values are unique within the file
  • If assert is present: it is an array, each assertion has a valid type
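The per-line dataset rules can be sketched like this. The error-message wording is mine, not the tool's actual report format:

```python
import json

def validate_dataset(lines: list[str]) -> list[str]:
    """Validate JSONL dataset lines; return a list of error descriptions."""
    errors, seen_ids = [], set()
    for n, line in enumerate(lines, start=1):
        try:
            case = json.loads(line)  # each line must be valid JSON
        except json.JSONDecodeError:
            errors.append(f"line {n}: invalid JSON")
            continue
        case_id = case.get("case_id")
        if not isinstance(case_id, str):
            errors.append(f"line {n}: missing required string field case_id")
        elif case_id in seen_ids:  # case_id must be unique within the file
            errors.append(f"line {n}: duplicate case_id {case_id!r}")
        else:
            seen_ids.add(case_id)
        if not isinstance(case.get("inputs"), dict):
            errors.append(f"line {n}: missing required object field inputs")
        if "assert" in case and not isinstance(case["assert"], list):
            errors.append(f"line {n}: assert must be an array")
    return errors
```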
Evaluators (promptops/evaluators/):
  • Has id (string, required)
  • Has type (required, must be one of deterministic, schema, or judge)
  • Has metrics (array of strings, required, minimum 1 item)
  • config if present is a valid object
Suites (promptops/suites/):
  • Has id (string, required)
  • Has name (string, required)
  • Has datasets (array of strings, required, minimum 1)
  • Has evaluators (array of strings, required, minimum 1)
  • Has model_matrix (array of strings, required, minimum 1)
  • trials if present is an integer >= 1
  • Referenced datasets and evaluators exist on disk
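A minimal suite satisfying these rules might look like the following sketch. The ids, referenced names, and model string are illustrative values of mine, not ones prescribed by the schema:

```yaml
id: summarize-smoke
name: Summarize smoke suite
datasets:
  - summarize-smoke        # must exist in promptops/datasets/
evaluators:
  - contains-keywords      # must exist in promptops/evaluators/
model_matrix:
  - gpt-4o-mini            # illustrative model name
trials: 1                  # optional; integer >= 1 if present
```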
Quick evals (promptops/evals/*.yaml):
  • Has id (string, required)
  • Has prompt (string, required)
  • Has cases (array, required, minimum 1)
  • Each case has id, inputs, and assert (all required)
  • Each assertion has a valid type
  • thresholds.pass_rate if present is a number between 0 and 1
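A quick eval meeting these rules might be sketched as below. Whether prompt is an inline template or a reference to a prompt spec is not specified by the rules above, and the assertion `value` field is an assumption of mine; only `type` is required by the checks listed:

```yaml
id: summarize-quick
prompt: "Summarize the following text: {{text}}"
cases:
  - id: short-article
    inputs:
      text: "An example input."
    assert:
      - type: icontains
        value: "example"    # `value` is an assumed field, not verified above
thresholds:
  pass_rate: 0.8            # must be a number between 0 and 1
```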
Regression policies (promptops/policies/):
  • Has baseline (string, required)
  • Has rules (array, required)
  • Each rule has metric (string) and severity (blocker or warning)
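A regression policy satisfying these rules might look like this sketch. What the baseline string refers to (a run id, a tag, or something else) is not specified above, so the value here is purely illustrative:

```yaml
baseline: summarize-v1-baseline   # illustrative; any string passes the check
rules:
  - metric: pass_rate
    severity: blocker
  - metric: latency
    severity: warning
```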
3. Cross-reference check

After validating individual files, your agent checks cross-references:
  • Suites reference datasets that exist in promptops/datasets/
  • Suites reference evaluators that exist in promptops/evaluators/
  • Dataset inputs keys match prompt spec variables keys
  • Evaluator metrics match suite thresholds keys (if thresholds are defined)
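The first two cross-reference checks can be sketched as below. The filename conventions (a dataset id maps to `<id>.jsonl`, an evaluator id to `<id>.yaml` or `<id>.json`) are my assumption based on the file patterns earlier in this page:

```python
import os

def check_suite_refs(suite: dict, root: str = "promptops") -> list[str]:
    """Check that a suite's dataset and evaluator references exist on disk."""
    problems = []
    for ds in suite.get("datasets", []):
        # Assumed convention: dataset id <id> lives at datasets/<id>.jsonl
        if not os.path.exists(os.path.join(root, "datasets", f"{ds}.jsonl")):
            problems.append(f"dataset {ds!r} not found in {root}/datasets/")
    for ev in suite.get("evaluators", []):
        # Assumed convention: evaluator id <id> lives at evaluators/<id>.yaml or .json
        candidates = [os.path.join(root, "evaluators", f"{ev}{ext}")
                      for ext in (".yaml", ".json")]
        if not any(os.path.exists(c) for c in candidates):
            problems.append(f"evaluator {ev!r} not found in {root}/evaluators/")
    return problems
```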
4. Report results

Your agent outputs a structured validation report.

Validation report format

```
Validation Report
=================

Prompt Specs:
  ✅ summarize-v1 (promptops/prompts/summarize.yaml)
  ❌ classify-v1 (promptops/prompts/classify.yaml)
     └── Missing required field: variables

Datasets:
  ✅ summarize-smoke (promptops/datasets/summarize-smoke.jsonl) — 5 cases
  ⚠️ classify-smoke (promptops/datasets/classify-smoke.jsonl) — 3 cases
     └── Warning: inputs.category not in prompt spec variables

Evaluators:
  ✅ contains-keywords (promptops/evaluators/contains-keywords.yaml)

Suites:
  ✅ summarize-smoke (promptops/suites/summarize-smoke.yaml)
  ❌ classify-smoke (promptops/suites/classify-smoke.yaml)
     └── Referenced dataset 'classify-full' not found

Summary: 3 passed, 2 issues (1 error, 1 warning)
```
  • ✅ — file is valid
  • ⚠️ — warning: non-blocking but worth fixing
  • ❌ — error: will cause evaluation failures

Shell validators (optional)

If your project has promptops/validators/ with shell scripts, you can run strict JSON Schema validation directly:

```shell
# Validate a prompt spec
bash promptops/validators/validate-prompt-spec.sh <file.json>

# Validate a suite
bash promptops/validators/validate-suite.sh <file.json>

# Validate an evaluator
bash promptops/validators/validate-evaluator.sh <file.json>
```

These scripts require ajv-cli (run via npx) and validate against the JSON Schemas in promptops/schemas/.
The shell validators require files in JSON format. Your agent’s built-in validation handles both YAML and JSON files.

Valid assertion types

When validating assert arrays in datasets and quick evals, your agent checks that each type is one of: equals, contains, icontains, contains-any, contains-all, regex, starts-with, is-json, contains-json, is-valid-json-schema, similar, llm-rubric, factuality, answer-relevance, latency, cost, or any not- prefixed variant of the above (for example, not-contains, not-is-json).
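The type check, including the not- prefix handling, can be sketched as:

```python
# Assertion types listed above; the not- prefix negates any of them.
VALID_ASSERTION_TYPES = {
    "equals", "contains", "icontains", "contains-any", "contains-all",
    "regex", "starts-with", "is-json", "contains-json", "is-valid-json-schema",
    "similar", "llm-rubric", "factuality", "answer-relevance", "latency", "cost",
}

def is_valid_assertion_type(t: str) -> bool:
    """Accept any listed type, or its not- prefixed negation."""
    if t.startswith("not-"):
        t = t[len("not-"):]
    return t in VALID_ASSERTION_TYPES
```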
Run validation after scaffolding to catch typos, and again before running evals to avoid confusing runtime errors. Warnings are non-blocking but indicate potential mismatches that could cause unexpected eval results.