Overview
Start with installation and the first evaluation loop, then move into artifact inspection, reference material, and assurance notes as your workflow matures.
Start here
Follow the same onboarding path as the upstream README: install the package, run a first evaluation, then move into baseline-vs-subject workflows.
01. Getting Started
Environment setup, installation, and the first evaluation loop.
02. Quickstart
CLI highlights for the common workflows and the first artifact outputs.
03. Compare & evaluate (BYOE)
Run baseline vs subject comparisons with pinned pairing and guard checks.
04. Primary Metric Smoke
Tiny examples for the ppl (perplexity) and accuracy paths before a larger evaluation run.
pip install "invarlock[hf]"
invarlock evaluate --baseline gpt2 --subject gpt2-q4 --profile dev
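The first command installs invarlock with the hf extra, which presumably pulls in the Hugging Face dependencies needed to load models such as gpt2; the second runs a paired evaluation of the baseline gpt2 against the subject gpt2-q4 under the dev profile, i.e. the baseline-vs-subject workflow outlined in the cards above.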
Choose a path
Run the quickstart if you want to execute the CLI immediately, or inspect artifacts first if you need to understand the evidence model before running anything.
Artifact Trail
Start here if you need to understand what the evaluation produces before running the CLI in your own environment.
Example Reports
Inspect representative evaluation outputs and reviewer-facing attachments.
Reading a report
Understand PASS/FAIL status, paired metrics, provenance, and verification fields.
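If it helps to picture the shape of that evidence, here is a minimal Python sketch of how a reviewer might pull those fields out of a downloaded report. The file name report.json and the key names status, metrics, provenance, and verification are assumptions made for illustration; consult the example reports above for the actual field layout.

import json

# Load a downloaded evaluation report. "report.json" is a placeholder file name
# used only for this sketch.
with open("report.json") as fh:
    report = json.load(fh)

# The key names below are assumptions chosen to mirror the fields described
# above (PASS/FAIL status, paired metrics, provenance, verification); check the
# example reports for the real schema.
print("Overall status:", report.get("status"))
print("Paired metrics:", report.get("metrics"))
print("Provenance:", report.get("provenance"))
print("Verification:", report.get("verification"))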
User Guide
Core workflows, evaluation reports, proof packs, and practical guidance for running evaluations.
Reference
CLI flags, configuration, and API references you can bookmark.
Assurance
Safety case and assurance artifacts: evidence, analysis, and verification posture.
Security
Security model, hardening notes, and operational guidance.