# Apastra

## Docs

- [Introduction](https://bintzgavin-apastra-14.mintlify.app/introduction.md): What Apastra is, the problem it solves, and why prompts need the same discipline as code.
- [Quickstart](https://bintzgavin-apastra-14.mintlify.app/quickstart.md): Install Apastra skills and run your first prompt evaluation in 5 minutes.
- [Core concepts](https://bintzgavin-apastra-14.mintlify.app/core-concepts.md): The key building blocks of Apastra: prompt specs, datasets, evaluators, suites, baselines, and the resolution chain.
- [CI integration](https://bintzgavin-apastra-14.mintlify.app/guides/ci-integration.md): Automate prompt evaluation with GitHub Actions, from a simple two-workflow setup to full enterprise governance.
- [Delivery and promotion](https://bintzgavin-apastra-14.mintlify.app/guides/delivery-and-promotion.md): How to promote approved prompt versions to production, pin them in consuming apps, and roll back safely.
- [Regression detection](https://bintzgavin-apastra-14.mintlify.app/guides/regression-detection.md): How baselines and regression policies gate prompt quality and block merges when scores drop.
- [Writing effective evaluations](https://bintzgavin-apastra-14.mintlify.app/guides/writing-evals.md): Design test cases and assertions that catch real regressions, not just the failures you already expect.
- [Writing prompt specs](https://bintzgavin-apastra-14.mintlify.app/guides/writing-prompts.md): How to write well-structured, versioned prompt specs using the Apastra file format.
- [Assertion types](https://bintzgavin-apastra-14.mintlify.app/reference/assertion-types.md): Complete reference for all assertion types available in Apastra, including deterministic checks, model-assisted grading, performance assertions, and negation.
- [File structure reference](https://bintzgavin-apastra-14.mintlify.app/reference/file-structure.md): Complete reference for all files and directories in an Apastra project, including repo topology options and the artifacts branch pattern.
- [GitHub workflows reference](https://bintzgavin-apastra-14.mintlify.app/reference/github-workflows.md): Reference for all 13 GitHub Actions workflows included with Apastra, including triggers, job steps, and branch protection setup.
- [Prompt resolver](https://bintzgavin-apastra-14.mintlify.app/reference/resolver.md): How Apastra resolves prompt IDs to actual prompt content through a four-level chain: local override, workspace, git ref, and packaged artifact.
- [Schema reference](https://bintzgavin-apastra-14.mintlify.app/reference/schemas.md): Reference for all 56 JSON schemas that validate Apastra protocol files, organized by category with field-level documentation for core schemas.
- [Skills overview](https://bintzgavin-apastra-14.mintlify.app/skills/overview.md): Apastra skills are SKILL.md files that teach your IDE agent domain-specific PromptOps workflows: how to evaluate, baseline, scaffold, validate, and ship AI prompts.
- [apastra-baseline](https://bintzgavin-apastra-14.mintlify.app/skills/baseline.md): Establish and manage evaluation baselines for regression detection. A baseline is a saved scorecard from a passing run; future evals compare against it to catch quality drops.
- [apastra-eval](https://bintzgavin-apastra-14.mintlify.app/skills/eval.md): Run prompt evaluations using your IDE agent as the harness. Load suites, execute test cases, score results, and compare against baselines.
- [apastra-scaffold](https://bintzgavin-apastra-14.mintlify.app/skills/scaffold.md): Generate new prompt specs, datasets, evaluators, and suites from templates. All generated files follow Apastra schemas and pass validation out of the box.
- [apastra-setup-ci](https://bintzgavin-apastra-14.mintlify.app/skills/setup-ci.md): Upgrade from local-first evaluation to automated GitHub Actions CI. Adds PR gating, regression blocking, governed releases, and approval tracking to your repository.
- [apastra-validate](https://bintzgavin-apastra-14.mintlify.app/skills/validate.md): Validate all PromptOps files against JSON schemas. Catch formatting errors, missing required fields, and invalid values before running evaluations.