# Trust Gradient

Understanding and improving the trust level of AI responses.
The trust gradient is Ctrl AI's core differentiator — every claim in every response is tagged with how much you can trust it.
## Trust Levels
| Level | Color | Source | Auditability |
|---|---|---|---|
| Verified | Green | Program-computed (Brain 2) | Deterministic. Same inputs = same outputs. Zero hallucination. |
| Expert-Reviewed | Blue | Structured unit with expert consensus | Expert-verified reasoning structure. LLM generates prose within boundaries. |
| Synthesized | Yellow | Generated from verified templates | Structurally sound but not yet peer-reviewed. |
| Neural | Gray | Pure LLM output | No unit coverage. Explicitly marked as unverified. |
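The four levels above can be thought of as an ordered scale attached to individual claims. As a minimal sketch (the `TrustLevel` enum, `Claim` type, and the weakest-link aggregation rule are illustrative assumptions, not Ctrl AI's actual data model):

```python
from dataclasses import dataclass
from enum import Enum

class TrustLevel(Enum):
    """Trust levels from strongest to weakest, per the table above."""
    VERIFIED = "green"         # program-computed (Brain 2)
    EXPERT_REVIEWED = "blue"   # structured unit with expert consensus
    SYNTHESIZED = "yellow"     # generated from verified templates
    NEURAL = "gray"            # pure LLM output, unverified

@dataclass
class Claim:
    text: str
    level: TrustLevel

def weakest_level(claims):
    """One plausible aggregation: a response is only as trustworthy
    as its weakest claim (declaration order = strongest first)."""
    order = list(TrustLevel)
    return max((c.level for c in claims), key=order.index)

claims = [
    Claim("Q3 revenue grew 12%", TrustLevel.VERIFIED),
    Claim("Growth will likely continue", TrustLevel.NEURAL),
]
print(weakest_level(claims))  # TrustLevel.NEURAL
```

Because every claim carries its own tag, a single response can mix green and gray segments; the weakest-link rule is just one way to summarize a mixed response at a glance.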
## Improving Coverage
To move from gray/yellow to blue/green:
- Identify gaps — check the Dashboard for coverage gaps and most-requested unmatched outputs
- Create units — use any of the 6 creation paths
- Verify units — put them through the verification workflow
- Compose workflows — chain verified units into complete processes
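The final step, chaining verified units into a workflow, can be sketched as simple function composition (the `compose` helper and the example units are hypothetical, standing in for whatever interface verified units actually expose):

```python
def compose(*units):
    """Chain verified units left to right: each unit's output
    becomes the next unit's input."""
    def workflow(value):
        for unit in units:
            value = unit(value)
        return value
    return workflow

# Hypothetical verified units
def normalize(text):
    return text.strip().lower()

def tokenize(text):
    return text.split()

pipeline = compose(normalize, tokenize)
print(pipeline("  Net Revenue  "))  # ['net', 'revenue']
```

If every unit in the chain is verified, the composed workflow inherits the same deterministic, auditable behavior.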
## The Conversation Flywheel
Coverage improves naturally from daily usage:
- Users ask questions → AI responds (mostly neural at first)
- Users save good segments as candidate units
- Gaps are logged automatically for unmatched queries
- Admin clicks "Auto-Generate" for top gaps
- Team verifies candidates in the queue
- Next time → AI responds with verified units (green/blue)
- Repeat — coverage grows from usage
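The automatic gap logging in the flywheel could look something like this (the `GapLog` class is a minimal illustrative sketch, not Ctrl AI's implementation):

```python
from collections import Counter

class GapLog:
    """Records queries that matched no unit and surfaces the
    most-requested gaps for the admin's Auto-Generate step."""
    def __init__(self):
        self._gaps = Counter()

    def log_unmatched(self, query):
        self._gaps[query] += 1

    def top_gaps(self, n=5):
        """Most-requested unmatched queries, the best candidates
        for unit generation."""
        return self._gaps.most_common(n)

log = GapLog()
for q in ["refund policy", "refund policy", "shipping cost"]:
    log.log_unmatched(q)
print(log.top_gaps(1))  # [('refund policy', 2)]
```

Ranking gaps by request count means the team's verification effort always goes where it pays off fastest.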
## Neural-Only Disclaimer
When no units match a query, the response begins with an explicit warning:
"This response is not backed by verified reasoning units."
This ensures users and auditors always know when the AI is operating without expert-verified constraints.
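The disclaimer logic amounts to a single guard at render time. A minimal sketch, assuming a hypothetical `render_response` function that receives the generated prose and the list of matched units:

```python
NEURAL_DISCLAIMER = "This response is not backed by verified reasoning units."

def render_response(prose, matched_units):
    """Prepend the explicit warning whenever no units back the response."""
    if not matched_units:
        return f"{NEURAL_DISCLAIMER}\n\n{prose}"
    return prose

print(render_response("Our refund window is 30 days.", matched_units=[]))
```

The check is on unit coverage, not on model confidence: even a fluent, plausible answer gets the warning if no unit matched.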
## Taint Tracking
If a candidate (unverified) unit's output feeds into a verified program, the program's output is marked as "tainted by unverified input." This prevents unverified reasoning from silently contaminating trusted outputs.
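Taint propagation can be sketched as a flag that travels with each value: a verified program's output is tainted if any of its inputs is unverified or already tainted (the `Output` type and `run_verified_program` helper are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Output:
    value: object
    verified: bool
    tainted: bool = False

def run_verified_program(program, inputs):
    """Run a verified program, but mark its output as tainted
    if any input comes from an unverified or tainted source."""
    tainted = any((not i.verified) or i.tainted for i in inputs)
    result = program(*(i.value for i in inputs))
    return Output(result, verified=True, tainted=tainted)

a = Output(10, verified=True)
b = Output(5, verified=False)  # candidate (unverified) unit output
out = run_verified_program(lambda x, y: x + y, [a, b])
print(out.tainted)  # True: the unverified input contaminates the result
```

Note that taint is sticky: once an output is tainted, anything downstream that consumes it is tainted too, which is exactly what prevents unverified reasoning from silently reaching trusted outputs.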