## Documentation Index
Fetch the complete documentation index at: https://bintzgavin-apastra-14.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
## Installation

## How to invoke
Ask your agent: "Use the apastra-validate skill to validate my promptops files"
## What gets validated
Your agent scans the `promptops/` directory and validates every file it finds:
| File pattern | Schema applied |
|---|---|
| `promptops/prompts/*.yaml` or `*.json` | Prompt spec |
| `promptops/datasets/*.jsonl` | Dataset case (per line) |
| `promptops/evaluators/*.yaml` or `*.json` | Evaluator |
| `promptops/suites/*.yaml` or `*.json` | Suite |
| `promptops/evals/*.yaml` | Quick eval |
| `promptops/policies/*.yaml` | Regression policy |
## Validation process
**Discover files**

Your agent scans `promptops/` for all YAML, JSON, and JSONL files matching the patterns above.

**Validate each file**

Your agent applies schema rules specific to each file type.

**Prompt specs (`promptops/prompts/`):**

- Has `id` (string, required)
- Has `variables` (object, required) — each value should have a `type` field
- Has `template` (string, object, or array — required)
- `output_contract` if present is a valid object
- `metadata` if present is a valid object
- Template `{{variable}}` placeholders match keys in `variables`
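Taken together, those rules describe files like the following sketch (the file name, variable names, and values are illustrative, not from the Apastra docs):

```yaml
# promptops/prompts/summarize.yaml — hypothetical example
id: summarize-article          # required string
variables:                     # required object; each value should carry a type
  article_text:
    type: string
  max_words:
    type: integer
template: |                    # required; may also be an object or array
  Summarize the following article in at most {{max_words}} words:

  {{article_text}}
metadata:                      # optional; must be an object if present
  owner: docs-team
```

Both `{{max_words}}` and `{{article_text}}` placeholders have matching keys in `variables`, so the placeholder check passes.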
**Datasets (`promptops/datasets/*.jsonl`):**

- Each line is valid JSON
- Each line has `case_id` (string, required)
- Each line has `inputs` (object, required)
- `case_id` values are unique within the file
- If `assert` is present: it is an array, and each assertion has a valid `type`
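A dataset sketch under those rules might look like this, one JSON object per line (the `value` field in the assertion is an assumption; only `type` is constrained by the rules above):

```jsonl
{"case_id": "short-article", "inputs": {"article_text": "GPUs are fast.", "max_words": 10}}
{"case_id": "negated-check", "inputs": {"article_text": "Rust prevents data races.", "max_words": 5}, "assert": [{"type": "icontains", "value": "rust"}]}
```

The two `case_id` values are unique within the file, and every line parses as standalone JSON.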
**Evaluators (`promptops/evaluators/`):**

- Has `id` (string, required)
- Has `type` (required, must be one of `deterministic`, `schema`, or `judge`)
- Has `metrics` (array of strings, required, minimum 1 item)
- `config` if present is a valid object
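For example, a minimal evaluator satisfying these rules could be (the identifier, metric name, and `config` keys are invented for illustration):

```yaml
# promptops/evaluators/exact-match.yaml — hypothetical example
id: exact-match          # required string
type: deterministic      # required; one of deterministic, schema, judge
metrics:                 # required; array of strings, at least one item
  - accuracy
config:                  # optional; any valid object
  case_sensitive: false
```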
**Suites (`promptops/suites/`):**

- Has `id` (string, required)
- Has `name` (string, required)
- Has `datasets` (array of strings, required, minimum 1)
- Has `evaluators` (array of strings, required, minimum 1)
- Has `model_matrix` (array of strings, required, minimum 1)
- `trials` if present is an integer >= 1
- Referenced `datasets` and `evaluators` exist on disk
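A suite sketch meeting every rule above (all identifiers are hypothetical, and whether dataset references include the file extension is an assumption):

```yaml
# promptops/suites/nightly.yaml — hypothetical example
id: nightly-regression
name: Nightly regression suite
datasets:                # each referenced file must exist in promptops/datasets/
  - summaries.jsonl
evaluators:              # each referenced evaluator must exist in promptops/evaluators/
  - exact-match
model_matrix:            # at least one model
  - gpt-4o-mini
trials: 3                # optional; integer >= 1
```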
**Quick evals (`promptops/evals/*.yaml`):**

- Has `id` (string, required)
- Has `prompt` (string, required)
- Has `cases` (array, required, minimum 1)
- Each case has `id`, `inputs`, and `assert` (all required)
- Each assertion has a valid `type`
- `thresholds.pass_rate` if present is a number between 0 and 1
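A quick eval sketch combining those requirements (the assertion's `value` field and the style of the `prompt` reference are assumptions, not documented fields):

```yaml
# promptops/evals/smoke.yaml — hypothetical example
id: smoke-test
prompt: summarize-article        # required string
cases:                           # required; at least one case
  - id: basic                    # id, inputs, and assert are all required
    inputs:
      article_text: "Water boils at 100 C."
      max_words: 8
    assert:
      - type: contains           # must be a valid assertion type
        value: "100"
thresholds:
  pass_rate: 0.9                 # optional; number between 0 and 1
```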
**Regression policies (`promptops/policies/`):**

- Has `baseline` (string, required)
- Has `rules` (array, required)
- Each rule has `metric` (string) and `severity` (`blocker` or `warning`)
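A policy sketch with one rule per severity level (only `metric` and `severity` are constrained by the rules above; the baseline name and metric names are invented):

```yaml
# promptops/policies/main.yaml — hypothetical example
baseline: main                   # required string
rules:                           # required array
  - metric: accuracy
    severity: blocker            # must be blocker or warning
  - metric: latency
    severity: warning
```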
**Cross-reference check**

After validating individual files, your agent checks cross-references:

- Suites reference datasets that exist in `promptops/datasets/`
- Suites reference evaluators that exist in `promptops/evaluators/`
- Dataset `inputs` keys match prompt spec `variables` keys
- Evaluator `metrics` match suite `thresholds` keys (if thresholds are defined)
## Validation report format

- ✅ — file is valid
- ⚠️ — warning: non-blocking but worth fixing
- ❌ — error: will cause evaluation failures
## Shell validators (optional)

If your project has `promptops/validators/` with shell scripts, you can run strict JSON Schema validation directly. The scripts use `npx ajv-cli` and validate against the JSON schemas in `promptops/schemas/`.

The shell validators require files in JSON format. Your agent's built-in validation handles both YAML and JSON files.
## Valid assertion types

When validating `assert` arrays in datasets and quick evals, your agent checks that each `type` is one of:

`equals`, `contains`, `icontains`, `contains-any`, `contains-all`, `regex`, `starts-with`, `is-json`, `contains-json`, `is-valid-json-schema`, `similar`, `llm-rubric`, `factuality`, `answer-relevance`, `latency`, `cost`

Or any `not-` prefixed variant of the above (for example, `not-contains`, `not-is-json`).
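For instance, an `assert` array mixing plain and negated types (field names other than `type` are assumptions) might look like:

```json
[
  {"type": "starts-with", "value": "Summary:"},
  {"type": "not-contains", "value": "lorem ipsum"},
  {"type": "regex", "value": "\\b[0-9]+\\b"}
]
```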