Trust Gradient

Understanding and improving the trust level of AI responses.

The trust gradient is Ctrl AI's core differentiator — every claim in every response is tagged with how much you can trust it.

Trust Levels

| Level | Color | Source | Auditability |
| --- | --- | --- | --- |
| Verified | Green | Program-computed (Brain 2) | Deterministic. Same inputs = same outputs. Zero hallucination. |
| Expert-Reviewed | Blue | Structured unit with expert consensus | Expert-verified reasoning structure. LLM generates prose within boundaries. |
| Synthesized | Yellow | Generated from verified templates | Structurally sound but not yet peer-reviewed. |
| Neural | Gray | Pure LLM output | No unit coverage. Explicitly marked as unverified. |
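The four levels can be modeled as a tagged data structure. The following is a minimal Python sketch, not Ctrl AI's actual API: the names `TrustLevel`, `Claim`, and `lowest_trust` are illustrative, and the "weakest claim wins" aggregation rule is an assumption for the example, not documented product behavior.

```python
from dataclasses import dataclass
from enum import Enum

class TrustLevel(Enum):
    """Illustrative names for the four trust levels in the table above."""
    VERIFIED = "green"         # program-computed, deterministic
    EXPERT_REVIEWED = "blue"   # expert-verified structure, LLM prose
    SYNTHESIZED = "yellow"     # generated from verified templates
    NEURAL = "gray"            # pure LLM output, unverified

@dataclass
class Claim:
    """A single claim in a response, tagged with its trust level."""
    text: str
    level: TrustLevel

# Strongest to weakest, matching the table's ordering.
_ORDER = [TrustLevel.VERIFIED, TrustLevel.EXPERT_REVIEWED,
          TrustLevel.SYNTHESIZED, TrustLevel.NEURAL]

def lowest_trust(claims: list[Claim]) -> TrustLevel:
    """An overall response is only as trustworthy as its weakest claim."""
    return max((c.level for c in claims), key=_ORDER.index)
```

A response mixing a Verified computation with Neural prose would, under this rule, be summarized as Neural overall.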

Improving Coverage

To move from gray/yellow to blue/green:

  1. Identify gaps — check the Dashboard for coverage gaps and most-requested unmatched outputs
  2. Create units — use any of the 6 creation paths
  3. Verify units — put them through the verification workflow
  4. Compose workflows — chain verified units into complete processes

The Conversation Flywheel

Coverage improves naturally from daily usage:

  1. Users ask questions → AI responds (mostly neural at first)
  2. Users save good segments as candidate units
  3. Gaps are logged automatically for unmatched queries
  4. Admin clicks "Auto-Generate" for top gaps
  5. Team verifies candidates in the queue
  6. Next time → AI responds with verified units (green/blue)
  7. Repeat — coverage grows from usage
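Step 3 of the flywheel, logging unmatched queries so admins can target the biggest gaps, can be sketched as a simple frequency counter. This is an illustrative model only; `GapLog` and its methods are hypothetical names, not the product's interface:

```python
from collections import Counter

class GapLog:
    """Sketch of gap logging: count unmatched queries, surface the top ones."""

    def __init__(self):
        self._gaps = Counter()

    def record_unmatched(self, query: str) -> None:
        """Called whenever no unit matches a query (flywheel step 3)."""
        self._gaps[query.strip().lower()] += 1

    def top_gaps(self, n: int = 5) -> list[str]:
        """The gaps an admin would target with Auto-Generate (step 4)."""
        return [query for query, _ in self._gaps.most_common(n)]
```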

Neural-Only Disclaimer

When no units match a query, the response begins with an explicit warning:

"This response is not backed by verified reasoning units."

This ensures users and auditors always know when the AI is operating without expert-verified constraints.
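The behavior above reduces to a conditional prefix on the rendered response. A minimal sketch, assuming the response is plain text and matched units arrive as a list (function and constant names are illustrative):

```python
NEURAL_DISCLAIMER = "This response is not backed by verified reasoning units."

def render_response(text: str, matched_units: list) -> str:
    """Prepend the neural-only warning when no units matched the query."""
    if not matched_units:
        return f"{NEURAL_DISCLAIMER}\n\n{text}"
    return text
```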

Taint Tracking

If a candidate (unverified) unit's output feeds into a verified program, the program's output is marked as "tainted by unverified input." This prevents unverified reasoning from silently contaminating trusted outputs.
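The taint rule can be sketched as metadata that propagates through program execution: an output is tainted if any input is unverified or already tainted. The `UnitOutput` shape and `run_program` helper below are illustrative assumptions, not Ctrl AI's implementation:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class UnitOutput:
    """An output value plus its provenance metadata."""
    value: Any
    verified: bool          # False for candidate (unverified) units
    tainted: bool = False   # True if unverified data flowed in upstream

def run_program(fn: Callable, inputs: list[UnitOutput]) -> UnitOutput:
    """Run a verified program, propagating taint from its inputs.

    The output is verified (the program itself is trusted), but it is
    marked tainted if any input was unverified or tainted.
    """
    tainted = any((not i.verified) or i.tainted for i in inputs)
    value = fn(*(i.value for i in inputs))
    return UnitOutput(value=value, verified=True, tainted=tainted)
```

Feeding one candidate unit's output into an otherwise verified chain is enough to taint everything downstream, which is exactly the silent-contamination case the mechanism guards against.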
