Frequently Asked Questions
Isn’t this just ChatGPT fact-checking things?
No. Veridi is a structured methodology: a set of documented procedures, decision trees, source hierarchies, and gaming countermeasures. The AI (currently Claude by Anthropic, which we have found more reliable than ChatGPT) follows these procedures to produce assessments. The difference matters:
Without the methodology, an AI asked to fact-check a claim will produce a plausible-sounding answer based on its training data. It won’t systematically classify source quality, check for disinformation techniques, apply confidence ceilings based on evidence tier, or use decision trees to resolve ambiguous verdicts. It will sometimes be right, sometimes wrong, and it’s difficult to tell which is which by looking at the output.
With the methodology, the AI follows an explicit process that produces transparent, auditable assessments. When it’s wrong, you can trace where the process broke down. The structural constraints - confidence ceilings, source tier requirements, gaming countermeasure checklists - prevent many categories of error that unconstrained AI commonly makes.
The methodology is also separable from the AI. The procedures could be followed by a human fact-checker or implemented on a different AI system.
Can AI really fact-check?
AI can follow a structured evaluation process, search for evidence, classify sources, and apply decision logic. It cannot conduct original reporting, interview sources, or exercise the kind of editorial judgment that comes from deep familiarity with a beat.
Veridi uses AI as the engine that follows the methodology - the way a pilot follows checklists. The checklist is the safety mechanism, not the pilot’s intuition. The AI’s role is to follow the documented procedures consistently, search broadly, and apply the structural constraints that prevent common errors.
The validation results provide evidence that this approach works: 96 of 97 claims handled correctly, including 24 adversarial scenarios specifically designed to produce wrong answers. But “works” means “produces correct results when following the methodology” - it does not mean “is infallible.”
What happens when it’s wrong?
The methodology includes a correction and communication process for when an incorrect assessment is identified:
- The incorrect assessment is acknowledged promptly and specifically.
- The error is traced through the methodology to identify where the process broke down - a wrong source classification, a missed gaming vector, a decision tree ambiguity, or an AI reasoning error.
- If the error reveals a methodology gap, the relevant component is updated.
- If the error is in the test suite’s expected results, the test suite is corrected.
- Regression testing confirms the fix doesn’t break other assessments (a minimal sketch of such a pass appears below).
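To illustrate the regression step, here is a minimal sketch of a pass over a published test suite, assuming a JSON file of claims with expected verdicts and a hypothetical `assess_claim` function; the file name and record fields are assumptions, not the actual tooling.

```python
# Minimal regression-pass sketch. The suite file name, record fields,
# and assess_claim() are hypothetical, not Veridi's actual tooling.
import json

def regression_pass(assess_claim, suite_path="test_suite.json"):
    """Re-run every test claim and report any verdict that changed."""
    with open(suite_path) as f:
        suite = json.load(f)  # e.g. [{"claim": ..., "expected_verdict": ...}]
    failures = []
    for case in suite:
        verdict = assess_claim(case["claim"])
        if verdict != case["expected_verdict"]:
            failures.append(
                f'{case["claim"]!r}: got {verdict}, expected {case["expected_verdict"]}'
            )
    return failures  # an empty list means the fix broke nothing else
```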
The methodology also specifies a volunteer safety framework for situations where fact-checking sensitive content causes distress, and a legal escalation path for claims with legal implications. Neither is in place yet as of March 2026.
Who watches the watchers?
Several mechanisms:
Open methodology. The complete set of methodology files is published for external inspection. Anyone can review the decision trees, source hierarchy, gaming countermeasures, and confidence calibration framework. If the process is biased or flawed, the bias or flaw is visible.
Published validation data. All test claims, expected results, actual results, and scoring criteria are public. External parties can re-evaluate whether the scoring was fair.
Structural constraints. The methodology includes explicit checks against common biases: symmetric evidence standards (the same burden of proof for both sides), source-tier-based confidence ceilings (preventing volume from substituting for quality), and gaming countermeasures that apply regardless of which “side” the claim favors. A sketch of the ceiling logic appears at the end of this answer.
Known limitations. The limitations page documents what the methodology cannot do, where the validation has gaps, and where the evidence is weaker than we’d like. An organization that publishes its weaknesses is harder to accuse of concealing them.
Invitation to break it. We actively seek claims designed to produce incorrect results. The methodology improves through adversarial challenge, not through protection from scrutiny.
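As a concrete illustration of the ceiling mechanism mentioned above, here is a minimal sketch in Python. The tier numbers, ceiling values, and function name are illustrative assumptions, not the methodology’s published calibration.

```python
# Sketch of source-tier-based confidence ceilings. Tier labels and
# ceiling values are illustrative assumptions, not published calibration.

# Lower tier number = stronger evidence class.
TIER_CEILINGS = {
    1: 0.99,  # primary sources: government databases, court records
    2: 0.90,  # peer-reviewed research, established fact-checkers
    3: 0.75,  # reputable news reporting
    4: 0.50,  # social media, unattributed claims
}

def capped_confidence(raw_confidence, source_tiers):
    """Cap confidence at the ceiling of the strongest source tier present."""
    if not source_tiers:
        return 0.0  # no evidence, no confidence
    best_tier = min(source_tiers)
    return min(raw_confidence, TIER_CEILINGS[best_tier])

# Volume cannot substitute for quality: a claim backed only by many
# Tier 4 posts stays capped at 0.50 no matter how many there are.
print(capped_confidence(0.97, [4, 4, 4, 4]))  # -> 0.5
print(capped_confidence(0.97, [2, 4]))        # -> 0.9
```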
Is Veridi biased?
Every system has biases. Veridi’s are structural and documented:
Toward primary sources. The source hierarchy privileges government databases, peer-reviewed research, and court records over news reporting and social media. This means claims supported by primary data will generally receive higher confidence than claims supported by media coverage, regardless of topic.
Toward the scientific consensus. The field reliability coefficients and source hierarchy mean that well-established scientific consensus (supported by multiple peer-reviewed studies and institutional agreement) will generally receive higher confidence than challenges to that consensus (which are often supported by lower-tier evidence). This is by design - it reflects how evidence works - but it means the methodology is structurally conservative about paradigm shifts.
The IRI reflects specific judgments. The Institutional Reliability Index contains assessments of which agencies have been compromised and to what degree. These assessments are based on documented evidence (funding cuts, personnel changes, policy directives overriding scientific assessment), but they are judgments, not mechanical facts. The IRI is published for exactly this reason - so the assessments can be evaluated and challenged. A sketch of what an entry might look like appears at the end of this answer.
Training data. The AI implementation has a knowledge cutoff and inherits whatever biases exist in its training data. The methodology’s structural constraints limit these inherited biases but do not eliminate them.
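To show how such judgments can be kept inspectable, here is a hypothetical sketch of the shape an IRI entry might take. The field names, the score, and the example record are illustrative assumptions, not drawn from the published index.

```python
from dataclasses import dataclass, field

# Hypothetical shape of an Institutional Reliability Index entry.
# Field names, the score, and the example record are illustrative only.

@dataclass
class IRIEntry:
    institution: str
    reliability: float                  # 0.0-1.0; a judgment, not a fact
    rationale: list = field(default_factory=list)  # documented evidence
    last_reviewed: str = ""             # ISO date of latest reassessment

example = IRIEntry(
    institution="Example Statistics Agency",
    reliability=0.6,
    rationale=[
        "2025 budget cut eliminated the independent review board",
        "Policy directive overrode a scientific assessment (2025-11)",
    ],
    last_reviewed="2026-02-15",
)
```

Publishing the rationale alongside the score is what makes the judgment challengeable: a critic can dispute the documented evidence, not just the number.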
How does this compare to PolitiFact / Snopes / other fact-checkers?
We don’t position Veridi against other fact-checkers, who do essential work. Veridi addresses a different part of the problem:
PolitiFact, Snopes, AFP Fact Check, etc. employ experienced journalists who investigate claims, conduct interviews, and exercise editorial judgment. Their assessments reflect human expertise and original reporting.
Veridi provides a structured, auditable methodology that can process claims at scale, systematically check for disinformation techniques, and produce assessments with transparent source classification and confidence reasoning.
These are complementary, not competitive. A journalist might use Veridi as a second-opinion tool or a gaming countermeasure scanner. Veridi might cite established fact-checkers as Tier 2 sources. The goal is to strengthen the overall fact-checking ecosystem, not to replace any part of it.
Is Veridi free?
The methodology documentation is freely available for inspection. Operational access details - including any future API, web interface, or partnership arrangements - will be published as the organization develops.
The organization plans to become a Canadian nonprofit funded through grants, subscriptions, and institutional partnerships, not through serving advertising. The intent is to provide a free tier, with subscription access to higher-rigor assessment levels, but the funding model is still under development.
What claims can Veridi handle?
The methodology covers eight subject domains: scientific/technical, legal/regulatory, medical/health, financial/economic, electoral/voting, historical, technology/digital, and propaganda/narrative. It has been tested across all nine verdict categories and in four non-English languages.
Claims it handles well:
- Factual assertions with checkable evidence
- Statistical claims (with a dedicated checklist for manipulation detection)
- Claims using known disinformation techniques
- Claims about institutional actions or policies
- Claims requiring source quality evaluation
Claims it handles less well:
- Claims requiring original investigative reporting
- Claims in languages or institutional contexts not yet covered by the IRI
- Purely predictive claims about future events (though a predictive assessment framework exists)
- Claims about private conversations or classified information (often rated UNVERIFIABLE)
- Rapidly evolving situations less than 72 hours old (confidence is structurally capped; see the sketch below)
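A minimal sketch of that recency cap, assuming the 72-hour window from the methodology and a hypothetical cap value of 0.6:

```python
from datetime import datetime, timezone

# Sketch of a recency-based confidence cap. The 72-hour window is from
# the methodology; the 0.6 cap value is an illustrative assumption.

RECENCY_WINDOW_HOURS = 72
RECENCY_CAP = 0.6

def apply_recency_cap(confidence, event_time, now=None):
    """Cap confidence for claims about events under 72 hours old.

    event_time must be timezone-aware, e.g. datetime(..., tzinfo=timezone.utc).
    """
    now = now or datetime.now(timezone.utc)
    age_hours = (now - event_time).total_seconds() / 3600
    if age_hours < RECENCY_WINDOW_HOURS:
        return min(confidence, RECENCY_CAP)
    return confidence
```

In practice such caps would compose with the source-tier ceilings described earlier: the final confidence would be the minimum over all applicable ceilings.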
How do I submit a claim?
For now, see the example assessments to understand what Veridi’s output looks like. We are building a directly accessible web interface and, in the interim, exploring options such as claim submission via Patreon.