The GRC tool that actually says
“I don’t know.”
Trusted by vCISOs, MSSPs, and compliance teams who can’t afford confident wrong answers.
Most compliance AI fabricates confidence. threep reads your actual policies, cites specific sections, and reports PARTIAL or NONE when evidence is missing. Auditors love it.
An evidence engine for compliance answers.
Most compliance AI demos look the same: upload some policies, ask a question, get a confident answer. threep is different under the hood. It separates what a control is asking from what your documents actually prove, so answers can be checked against policy support instead of model confidence.
Control-aware retrieval
Interprets the framework question before it goes searching for support.
Corpus-aware evidence
Searches your documents for proof, not just related language, and grounds answers in specific sections.
Answer-state discipline
Returns supported answers, PARTIAL support, or GAP when your corpus does not warrant a stronger claim.
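In code, that answer-state discipline might look like the minimal sketch below. All names here are illustrative, not threep's actual API: the idea is that an answer state is derived from how much of a control's required scope the cited evidence actually covers.

```python
from enum import Enum

class AnswerState(Enum):
    SUPPORTED = "YES"    # full, cited evidence for the claim
    PARTIAL = "PARTIAL"  # some support, but incomplete
    GAP = "NONE"         # no supporting evidence found

def classify(evidence_sections: list[str], required_topics: set[str]) -> AnswerState:
    """Toy classifier: compare cited sections against what the control requires."""
    covered = {t for t in required_topics
               if any(t in s.lower() for s in evidence_sections)}
    if covered == required_topics:
        return AnswerState.SUPPORTED
    if covered:
        return AnswerState.PARTIAL
    return AnswerState.GAP
```

With this shape, `classify(["AC-2 provisioning and termination procedures"], {"provisioning", "termination"})` yields a supported answer, while an empty evidence list yields GAP instead of a confident guess.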
Account management requirements are addressed in two policy sections with explicit provisioning, review, and termination procedures aligned to NIST AC-2.
Three screens. Zero guesswork.
And one place to run it all.
Most tools give you confidence.
threep gives you accuracy.
"Magic" AI tools promise full automation but deliver significant rework. The difference between a compliant-sounding answer and a defensible one is whether anyone actually checked your policies.
What most compliance tools do
- Generate dashboards from integrations — not from your actual documents
- Produce plausible-sounding coverage even when evidence is absent
- Report YES when your implementation is technically PARTIAL
- Require cloud connectivity — your policy text leaves your environment
- Lock you into a vendor's framework mapping you can't inspect
What threep does
- Reads your actual policy documents — citations trace to specific sections
- Requires evidence before reporting YES — otherwise returns PARTIAL or NONE
- Turns gaps into actionable remediation tasks, not suppressed warnings
- Runs entirely on your machine via Ollama — policy text never leaves
- Open source — read, audit, and modify every line of logic
Built for real compliance work.
Not just search. Not just chat. A system that reads your policies, cites them honestly, and tells you exactly where the gaps are.
NIST 800-53 Coverage Matrix
See which controls are STRONG, PARTIAL, or NONE — with cited policy evidence for each. The single view that shows your real compliance posture.
Evidence-Only Answers
Every answer cites specific policy sections. No evidence? The system says so instead of making something up.
Questionnaire Batch Automation
Upload a CSV, XLSX, or TXT of questions. Get evidence-backed answers in bulk with export to CSV, DOCX, or JSON. Significantly reduces the manual effort of vendor security reviews.
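The batch flow reduces to a simple loop: one question per row in, one evidence-backed row out. This is a hypothetical sketch of the CSV-in/CSV-out case using only the standard library (`batch_answer` and `answer_fn` are illustrative names, not threep's real interface):

```python
import csv
import io

def batch_answer(questions_csv: str, answer_fn) -> str:
    """Toy batch runner: read one question per row, emit question/answer/state rows."""
    reader = csv.reader(io.StringIO(questions_csv))
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["question", "answer", "state"])
    for row in reader:
        question = row[0]
        answer, state = answer_fn(question)  # in threep, this is the evidence engine
        writer.writerow([question, answer, state])
    return out.getvalue()
```

The real batch runner also accepts XLSX and TXT input and can export DOCX or JSON, but the contract is the same: every output row carries an answer state, not just prose.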
Three-Lane Question Routing
Questions automatically route through policy, NIST control, or hybrid paths for the most relevant response.
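A keyword-based toy version of that routing decision is sketched below. threep's actual routing logic lives in the open-source repo; this only illustrates the three-lane idea (policy, control, or hybrid):

```python
def route(question: str) -> str:
    """Toy router: pick the policy, NIST-control, or hybrid lane by keyword."""
    q = question.lower()
    mentions_control = any(tok in q for tok in ("nist", "800-53", "ac-", "sc-"))
    mentions_policy = any(tok in q for tok in ("policy", "procedure", "document"))
    if mentions_control and mentions_policy:
        return "hybrid"   # needs both the control text and your corpus
    if mentions_control:
        return "control"  # framework-interpretation lane
    return "policy"       # corpus-search lane
```

So "Which policy document covers NIST AC-2?" takes the hybrid lane, while "Summarize our password policy" stays in the policy lane.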
Policy Management
Upload, edit, and reingest policy documents without leaving the browser. DOCX auto-converts to Markdown.
Gap Task Management
Turn coverage gaps into actionable work items — flag what's missing and track remediation progress.
Scheduled Assessment Runs
Run assessments on a daily, weekly, monthly, or custom cron recurrence — with timezone handling, holiday presets, and skip dates. Each run is a tracked job (queued, running, completed, failed, canceled) and artifacts deliver to a folder or HTTP endpoint.
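A schedule definition covering those options might look like the fragment below. Field names here are hypothetical, chosen to mirror the features listed above rather than threep's actual schema:

```python
# Hypothetical schedule definition (illustrative field names, not threep's real schema)
schedule = {
    "recurrence": "0 6 * * MON",      # custom cron: every Monday at 06:00
    "timezone": "America/Chicago",    # timezone-aware execution
    "holiday_preset": "us_federal",   # built-in holiday calendar
    "skip_dates": ["2025-12-25"],     # explicit one-off skips
    "delivery": {
        "type": "http",               # or "folder"
        "endpoint": "https://example.com/compliance-hook",
    },
}
```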
Your compliance logic shouldn't
be someone else's black box.
Fork it. Own it.
Every prompt, scoring weight, and routing decision is yours to modify.
Data never leaves.
Embeddings and LLM inference run locally via Ollama. Cloud is opt-in.
Full audit trail.
Every citation, every routing decision. Show auditors exactly how you got there.
MIT licensed.
Use in production. Modify for your org. Contributions welcome.
From raw policy docs to a
submitted questionnaire in five steps.
Upload
Add your policy docs (DOCX or Markdown)
Ingest
Policies are chunked, embedded, and indexed locally
Map
Sections map to NIST 800-53 controls automatically
Ask
Query or batch-run questionnaires against your docs
Export
Download answers as CSV, DOCX, or JSON
I'm tired of black-box compliance tools that lie to auditors.
So I open-sourced the whole thing.
threep is free, local-first, and built by someone who actually fills out vendor security questionnaires.
Showing a source is not the same
as proving the claim.
Many AI tools can attach citations. threep is designed to do the harder thing: evaluate whether the available evidence actually supports the answer, distinguish policy intent from implementation proof, and preserve uncertainty when the documents do not justify a stronger claim.
Traceability is the start
Knowing which document an answer came from is table stakes. threep records the excerpt, the file, the page, and the reasoning chain that produced the conclusion — so reviewers can audit the logic, not just the output.
Restraint is the difference
When evidence is thin, ambiguous, or silent on the question asked, threep says so. It does not synthesize a confident-sounding answer from weak material. Partial and unknown are first-class results.
Assessment is the core
Each answer distinguishes between what a policy states and what your environment actually demonstrates. Intent without implementation evidence is labeled as such — because that gap is exactly what auditors look for.
Benchmarking threep for evidence discipline.
Most AI evaluations ask whether an answer sounds plausible. In compliance work, that is not enough. threep is evaluated on whether answers are grounded in real documentation, whether claims are inspectable, and whether gaps are stated honestly.
Grounding
Measures whether answers stay tied to the source corpus instead of attaching decorative or unrelated citations.
Restraint
Measures whether the system avoids overstating what the documents actually prove — especially when policy language exists without implementation evidence.
Output control
Measures whether answers stay readable, bounded, and structurally disciplined rather than sprawling or padded.
Coverage honesty
Measures whether the system reports PARTIAL or GAP when that is the right answer, rather than forcing certainty.
- Citation duplication sharply reduced
- Unsupported citation usage sharply reduced
- Cross-document contamination reduced dramatically
- Forbidden implementation verbs reduced to zero
- PARTIAL introduced as a first-class answer state
- Section sprawl capped and controlled
threep is not a substitute for human review, and these evaluations do not prove legal sufficiency or control effectiveness. They measure grounding, output discipline, and honesty about uncertainty. The evaluation harness runs in the open. See the open-source project →
Common questions.
The short answers. No marketing language.
No — by default everything runs locally. Embeddings use sentence-transformers locally. The LLM runs via Ollama. OpenAI is available as an opt-in provider, but even then embeddings remain local.
A modern laptop with 16 GB RAM is enough to run qwen2.5:14b via Ollama. A GPU helps but isn't required. For batch work, 32 GB RAM and a consumer GPU is recommended.
Policy documents: .docx or .md (DOCX auto-converts). Questionnaires: .csv, .xlsx, or .txt. Exports: CSV, DOCX, or JSON.
Accuracy is bounded by your documents. threep would rather report PARTIAL or UNKNOWN than pretend it knows; that's the whole point. Every answer includes a section citation. Quality improves with larger models and well-structured docs.
Built-in: NIST SP 800-53 Rev 5 and NIST CSF. The batch runner handles any free-form questionnaire — FedRAMP, TX-RAMP, SOC 2, ISO 27001, and vendor security reviews all work.
Yes. Scheduled Assessment Runs execute on a daily, weekly, monthly, or custom cron recurrence, with timezone awareness, holiday presets, and explicit skip dates. Each run is a tracked job with states (queued, running, completed, failed, canceled) and per-run artifacts you can review. Delivery options include writing to a folder or POSTing to an HTTP endpoint — useful for vCISOs and compliance teams that need a recurring cadence without manual execution.
Yes. MIT licensed, open source on GitHub. Self-host, modify, and use it in production at no cost. Contributions are welcome.
Roadmap
Want the managed path?
threep.cloud is the hosted version of threep for teams that want the workflow without self-hosting.
Four commands. Real compliance answers.
Clone the repo, ingest your policies, and start answering audit questionnaires from your actual documents — not hallucinations.
git clone https://github.com/dllswbr/threep && cd threep
python3 -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt
python3 ingest.py
uvicorn app:app --reload