Documentation Index

Fetch the complete documentation index at: https://bintzgavin-apastra-14.mintlify.app/llms.txt

Use this file to discover all available pages before exploring further.

A prompt spec is the source of truth for a single prompt in your project. It defines a stable identifier, the inputs the prompt accepts, the template text, and optionally an output contract and metadata. Prompt specs are YAML files stored in promptops/prompts/.

Directory structure

All prompt-related files live under promptops/:
promptops/
├── prompts/          # Prompt specs (YAML) ← you are here
├── datasets/         # Test cases (JSONL)
├── evaluators/       # Scoring rules (YAML)
├── suites/           # Test configurations (YAML)
├── policies/         # Regression policies
├── manifests/        # Consumption manifests
└── delivery/         # Delivery targets
One file per prompt. Name the file to match the prompt’s id — for example, prompts/summarize-v1.yaml for a prompt with id: summarize-v1.

The prompt spec format

Every prompt spec is validated against the prompt spec schema. The required fields are id, variables, and template. Optional fields are output_contract, metadata, and tool_contract.

Required fields

Field     | Type                     | Purpose
----------|--------------------------|------------------------------------------
id        | string                   | Stable identifier for the prompt
variables | object                   | Input variable definitions (JSON Schema)
template  | string, object, or array | The prompt template content

Optional fields

Field           | Type   | Purpose
----------------|--------|-----------------------------------------------------------
output_contract | object | JSON Schema describing the expected output structure
metadata        | object | Arbitrary key-value pairs (author, intent, tags, etc.)
tool_contract   | object | JSON Schema for tool calling structure and available tools

A real example: summarize-v1.yaml

Here is the complete promptops/prompts/summarize-v1.yaml from the apastra repo:
id: summarize-v1
variables:
  text: { type: string }
template: "Summarize: {{text}}"
This minimal spec defines:
  • A stable ID (summarize-v1)
  • One input variable (text, typed as a string)
  • A template that uses {{text}} as a placeholder
Simple prompts often need nothing more. As your prompt matures, you can add output_contract and metadata.
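To make the substitution behavior concrete, here is a minimal sketch of how a runtime might render this template. This is illustrative only; the actual runtime's substitution rules (whitespace handling, escaping) may differ.

```python
import re

def render(template: str, inputs: dict) -> str:
    """Replace each {{variable}} placeholder with its value from inputs."""
    def sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in inputs:
            raise KeyError(f"undeclared or missing variable: {name}")
        return str(inputs[name])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", sub, template)

prompt = render("Summarize: {{text}}", {"text": "A long article..."})
# prompt == "Summarize: A long article..."
```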

Stable IDs and naming conventions

The id field is the canonical identifier for your prompt. Treat it like a function name — once you publish or consume a prompt, renaming the ID is a breaking change.
Use a slug format with a version suffix: <domain>/<name>-v<N> or simply <name>-v<N>. Examples: summarize-v1, classify-email-v2, my-app/extract-entities-v1.
Guidelines:
  • Use lowercase letters, hyphens, and slashes only.
  • Include a version suffix (-v1, -v2) so you can publish a new major version alongside the old one without breaking consumers.
  • Never rename a published ID — create a new versioned ID instead.
  • The file name should match the ID (e.g., id: summarize-v1 → file summarize-v1.yaml).
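A simple way to enforce these guidelines in CI is a regex check. The pattern below is a hypothetical encoding of the guidance above (lowercase slug, optional slash-separated domain, mandatory -v<N> suffix), not an official grammar:

```python
import re

# Hypothetical pattern for the naming guidance: lowercase letters and
# digits, hyphen-separated words, optional "<domain>/" prefix, and a
# required "-v<N>" version suffix.
ID_PATTERN = re.compile(
    r"^(?:[a-z0-9]+(?:-[a-z0-9]+)*/)?[a-z0-9]+(?:-[a-z0-9]+)*-v\d+$"
)

def is_valid_id(prompt_id: str) -> bool:
    return ID_PATTERN.fullmatch(prompt_id) is not None

assert is_valid_id("summarize-v1")
assert is_valid_id("my-app/extract-entities-v1")
assert not is_valid_id("Summarize_V1")   # uppercase and underscores
assert not is_valid_id("summarize")      # missing version suffix
```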

Variable schema

The variables field is a map of variable names to JSON Schema definitions. At minimum, provide the type:
variables:
  text: { type: string }
  max_length: { type: string }
  language: { type: string }
You can add richer constraints:
variables:
  text:
    type: string
    description: "The article text to summarize"
  max_length:
    type: string
    description: "Maximum word count for the summary"
  format:
    type: string
    enum: ["bullet", "prose"]
    description: "Output format"
Variable hygiene best practices:
  • Declare every variable the template uses — undeclared variables are implicit coupling that breaks consumers.
  • Use description fields so consumers and evaluators understand what each variable means.
  • Prefer string types for LLM inputs. Use number or boolean only when the consuming code is doing the type conversion.
  • Avoid leaking implementation details (internal IDs, session tokens) into variables.
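The checks a consumer might run against a variables map can be sketched as follows. A real validator would use a full JSON Schema library; this hand-rolled version covers only the keywords used in the examples above (type and enum):

```python
# Minimal input check against a `variables` map. Illustrative only;
# production code should use a proper JSON Schema validator.
TYPES = {"string": str, "number": (int, float), "boolean": bool}

def check_inputs(variables: dict, inputs: dict) -> list[str]:
    errors = []
    for name, schema in variables.items():
        if name not in inputs:
            errors.append(f"missing input: {name}")
            continue
        value = inputs[name]
        expected = TYPES.get(schema.get("type", "string"))
        if expected and not isinstance(value, expected):
            errors.append(f"{name}: expected {schema['type']}")
        if "enum" in schema and value not in schema["enum"]:
            errors.append(f"{name}: must be one of {schema['enum']}")
    return errors

variables = {
    "text": {"type": "string"},
    "format": {"type": "string", "enum": ["bullet", "prose"]},
}
assert check_inputs(variables, {"text": "hi", "format": "prose"}) == []
assert check_inputs(variables, {"text": "hi", "format": "table"}) == [
    "format: must be one of ['bullet', 'prose']"
]
```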

Template syntax

Templates use {{variable}} double-brace placeholders. The agent or runtime substitutes each placeholder with the corresponding value from the case’s inputs object.
template: "Summarize the following text in {{max_length}} or fewer words: {{text}}"
For chat models, template can be an array of message objects:
template:
  - role: system
    content: "You are a summarization assistant."
  - role: user
    content: "Summarize this in {{max_length}} words: {{text}}"
Every {{variable}} placeholder in the template must have a matching entry in variables. Missing variable declarations are a common source of errors caught by schema validation.
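One way to catch undeclared placeholders before schema validation does is to extract them directly from the template. The sketch below handles both the string form and the chat-message array form; it is an assumption about how a linter might work, not the validator's actual implementation:

```python
import re

def placeholders(template) -> set[str]:
    """Collect {{variable}} names from a string or chat-message template."""
    if isinstance(template, str):
        return set(re.findall(r"\{\{\s*(\w+)\s*\}\}", template))
    # Chat form: a list of {role, content} message objects.
    names: set[str] = set()
    for message in template:
        names |= placeholders(message["content"])
    return names

template = [
    {"role": "system", "content": "You are a summarization assistant."},
    {"role": "user", "content": "Summarize this in {{max_length}} words: {{text}}"},
]
# Suppose only `text` is declared in variables:
undeclared = placeholders(template) - {"text"}
assert undeclared == {"max_length"}
```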

Output contracts

An output_contract is a JSON Schema that describes what the model should return. It is optional but highly recommended for structured outputs:
id: classify-email-v1
variables:
  email_body: { type: string }
template: |
  Classify the following email and return JSON:
  {{email_body}}
  Respond with: {"category": "<string>", "confidence": <number>}
output_contract:
  type: object
  required: [category, confidence]
  properties:
    category:
      type: string
      enum: [spam, sales, support, internal]
    confidence:
      type: number
      minimum: 0
      maximum: 1
When to use an output contract:
  • When your prompt returns structured data (JSON, YAML, XML)
  • When downstream code parses the model output
  • When you want schema validation as an evaluator assertion (is-valid-json-schema)
When you can skip it:
  • Free-text outputs where structure is not enforced
  • Early-stage prompts where the output shape is still evolving
Output contracts double as documentation. Even if you don’t validate against them in every run, they make the prompt’s intended API explicit for other engineers and for AI agents reading the spec.
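To show what enforcing this contract looks like downstream, here is a hand-rolled check of the classify-email-v1 contract above. A real pipeline would hand the contract to a JSON Schema validator rather than encode it by hand:

```python
import json

def check_classification(raw: str) -> list[str]:
    """Validate a model response against the classify-email-v1 contract."""
    errors = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return ["output is not valid JSON"]
    for key in ("category", "confidence"):
        if key not in data:
            errors.append(f"missing required field: {key}")
    if data.get("category") not in {"spam", "sales", "support", "internal"}:
        errors.append("category not in enum")
    conf = data.get("confidence")
    if not isinstance(conf, (int, float)) or not 0 <= conf <= 1:
        errors.append("confidence must be a number in [0, 1]")
    return errors

assert check_classification('{"category": "spam", "confidence": 0.93}') == []
assert "category not in enum" in check_classification(
    '{"category": "urgent", "confidence": 0.5}'
)
```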

Tool contracts

If your prompt uses tool calling, declare the expected tools in tool_contract:
tool_contract:
  type: object
  properties:
    tools:
      type: array
      items:
        type: object
        required: [name, description]
This field follows the same JSON Schema format as output_contract and is used to validate that the model’s tool call structure matches expectations.
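As a sketch of that validation, the check below verifies a model's proposed tool list against the shape declared above (each tool must carry name and description). This is an illustration of the idea, not the validator's actual code:

```python
def check_tools(tools: list[dict]) -> list[str]:
    """Verify each tool object has the fields required by the contract."""
    errors = []
    for i, tool in enumerate(tools):
        for field in ("name", "description"):
            if field not in tool:
                errors.append(f"tools[{i}] missing {field}")
    return errors

assert check_tools([{"name": "search", "description": "Web search"}]) == []
assert check_tools([{"name": "search"}]) == ["tools[0] missing description"]
```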

Metadata

Use metadata for any key-value pairs that don’t belong in the functional fields:
metadata:
  author: "eng-team"
  intent: "Summarize long-form articles for the digest email"
  tags: [summarization, email, production]
  created: "2026-01-15"
Metadata is not validated beyond being a valid YAML object. It is surfaced in derived indexes, audit reports, and documentation tooling.

Best practices

One spec per file

Each prompt spec lives in its own file. Never combine multiple prompts in one YAML file.

Stable IDs, always

Treat the id field as an immutable contract. Version bumps get a new ID (v2), not an in-place rename.

Declare all variables

Every {{placeholder}} must appear in variables. Undeclared variables break consumers and schema validation.

Add output contracts early

For structured outputs, write the output contract when you write the template — not after a production bug.

Validating your prompt spec

Run schema validation with the apastra-validate skill to catch errors before committing:
npx skills run apastra-validate
The validator checks all files in promptops/ against the 23 JSON schemas. Prompt spec errors are reported with the field path and expected type.