# Open Methodology
*How to inspect Veridi’s methodology*
The complete Veridi methodology is published as a set of Markdown documents. These are the same files the AI follows when performing a fact-check; there is no hidden layer or proprietary process. What you see is what runs.
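To make “what you see is what runs” concrete, here is a minimal sketch of how a file set like this can be assembled into a single system prompt. The loader and the file order below are illustrative, not the production pipeline; the file names come from the tables that follow.

```python
from pathlib import Path

# Illustrative only: concatenate published methodology files into one
# system prompt. The real pipeline may order or combine files differently.
CORE_FILES = [
    "Triage_Agent_Main_Prompt.md",
    "Output_Format_Standard.md",
    "Verdict_Decision_Trees.md",
    "Source_Hierarchy_Triage_Addendum_v2.md",
    "Confidence_Calibration_Framework.md",
]

def build_system_prompt(methodology_dir: str) -> str:
    """Join the methodology documents verbatim; nothing is added or hidden."""
    parts = []
    for name in CORE_FILES:
        text = Path(methodology_dir, name).read_text(encoding="utf-8")
        parts.append(f"<!-- {name} -->\n{text}")
    return "\n\n".join(parts)
```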
## The methodology files
### Core system
| File | Purpose |
|---|---|
| Triage_Agent_Main_Prompt.md | The primary system prompt. Defines claim classification, complexity assessment, and routing logic. |
| System_Flow_v2.md | Architecture diagram showing how a claim moves through the system: from input through triage, specialist invocation, and integrated output. |
| Output_Format_Standard.md | Defines the nine verdict categories, output structure, and annotation requirements. |
| Verdict_Decision_Trees.md | Explicit decision logic for resolving verdict boundary cases, particularly Misleading vs. Lacks Context and Mixed vs. Mostly False. |
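As an illustration of what a verdict boundary rule in Verdict_Decision_Trees.md looks like when written as code, consider the Misleading vs. Lacks Context boundary. The two boolean inputs below are an assumption about the shape of the rule, not the published criteria:

```python
# Simplified sketch of one boundary case; the published decision tree in
# Verdict_Decision_Trees.md uses more criteria than these two booleans.
def misleading_or_lacks_context(facts_accurate: bool,
                                implies_false_conclusion: bool) -> str:
    """Distinguish two adjacent verdicts for a claim whose facts check out."""
    if not facts_accurate:
        raise ValueError("this boundary applies only to factually accurate claims")
    if implies_false_conclusion:
        return "Misleading"     # accurate facts arranged to imply a falsehood
    return "Lacks Context"      # accurate but incomplete, with no false implication
```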
### Evidence evaluation
| File | Purpose |
|---|---|
| Source_Hierarchy_Triage_Addendum_v2.md | The four-tier source classification system with confidence ceilings and independence verification procedures. |
| Confidence_Calibration_Framework.md | Tier-based confidence caps, field reliability coefficients with sourcing honesty labels, and the interaction rules between them (illustrated in the sketch after this table). |
| Institutional_Reliability_Index.md | Per-agency, per-function reliability assessments for institutions whose output may have been compromised. Includes degradation levels, observable indicators, and comparison anchors. |
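The interaction between tier ceilings and field coefficients can be pictured as a scale-then-clamp step, as in the sketch below. All numbers here are placeholders; the real ceilings and coefficients are defined in Confidence_Calibration_Framework.md.

```python
# Placeholder numbers only; real values live in Confidence_Calibration_Framework.md.
TIER_CEILING = {1: 0.95, 2: 0.85, 3: 0.70, 4: 0.50}  # assumed caps per source tier

def calibrated_confidence(raw: float, source_tier: int,
                          field_coefficient: float) -> float:
    """Scale raw confidence by the field's reliability, then clamp to the tier ceiling."""
    return min(raw * field_coefficient, TIER_CEILING[source_tier])

# Example: a strong Tier 2 source in a field with coefficient 0.9 gives
# calibrated_confidence(0.92, 2, 0.9) == min(0.828, 0.85) == 0.828
```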
### Gaming countermeasures
| File | Purpose |
|---|---|
| Gaming_Countermeasures.md | Detection procedures for all 11 disinformation attack vectors. Includes the 14-item quick checklist applied before any verdict above 70% confidence. |
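The 70% gate lends itself to a short sketch: a verdict above the threshold is only released once every checklist item passes; otherwise confidence is held at the threshold. The items and the hold-at-threshold behaviour are illustrative assumptions; the real 14 items are in Gaming_Countermeasures.md.

```python
# Illustrative gate. Items and fallback behaviour are assumptions; the
# published 14-item checklist is defined in Gaming_Countermeasures.md.
CHECKLIST = [
    "independent_sources_verified",
    "no_circular_citation_detected",
    "primary_source_located",
    # ...the published checklist has 14 items
]

def gate_confidence(confidence: float, checks: dict[str, bool]) -> float:
    """Cap confidence at 0.70 unless every countermeasure check passes."""
    if confidence <= 0.70:
        return confidence
    if all(checks.get(item, False) for item in CHECKLIST):
        return confidence
    return 0.70  # gate failed: hold the verdict at the threshold
```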
### Domain specialists
| File | Purpose |
|---|---|
| Scientific_Claims_Specialist.md | Evaluation framework for scientific claims: study quality, methodology assessment, consensus evaluation. |
| Medical_Health_Specialist.md | Medical and public health claim evaluation, including clinical trial assessment and pharmacological claims. |
| Legal_Regulatory_Specialist.md | Legal and regulatory claim evaluation: statute interpretation, court ruling analysis, regulatory procedure. |
| Financial_Economic_Specialist.md | Financial and economic claim evaluation: market data, institutional analysis, statistical claims. |
| Electoral_Voting_Specialist.md | Electoral and voting claim evaluation: election procedures, voter data, policy analysis. |
| Historical_Context_Specialist.md | Historical claim evaluation: contextualization, historiographic assessment, source verification for historical records. |
| Technology_Digital_Specialist.md | Technology and digital claim evaluation: platform analysis, digital forensics, AI-generated content detection. |
| Propaganda_Deconstruction_Specialist_v2.md | Propaganda and narrative analysis: rhetorical technique identification, narrative deconstruction, disinformation pattern recognition. |
| Breaking_Event_Analyst.md | Evaluation framework for claims about events less than 72 hours old: timeline construction, source ecosystem mapping, uncertainty inventory. |
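Taken together with the triage prompt, this roster implies a routing step. The sketch below is one possible shape: the domain keys and the implementation of the 72-hour rule are assumptions; the actual routing logic is defined in Triage_Agent_Main_Prompt.md.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical routing sketch. Specialist names match the table above; the
# keyword mapping and the 72-hour rule's implementation are assumptions.
SPECIALISTS = {
    "scientific": "Scientific_Claims_Specialist",
    "medical":    "Medical_Health_Specialist",
    "legal":      "Legal_Regulatory_Specialist",
    "financial":  "Financial_Economic_Specialist",
    "electoral":  "Electoral_Voting_Specialist",
    "historical": "Historical_Context_Specialist",
    "technology": "Technology_Digital_Specialist",
    "propaganda": "Propaganda_Deconstruction_Specialist_v2",
}

def route(domain: str, event_time: datetime | None) -> str:
    """Send fresh-event claims to the breaking-event track, others by domain."""
    now = datetime.now(timezone.utc)
    if event_time is not None and now - event_time < timedelta(hours=72):
        return "Breaking_Event_Analyst"
    return SPECIALISTS[domain]
```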
### Supporting frameworks
| File | Purpose |
|---|---|
| Statistical_Claims_Checklist.md | Structured evaluation for statistical claims: methodology validation, sample assessment, cherry-picking detection (see the sketch after this table). |
| Infrastructure_Authenticity_Addendum.md | Digital infrastructure verification: domain registration, hosting analysis, website authenticity assessment. |
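A structured checklist like Statistical_Claims_Checklist.md can be represented as data rather than prose, which makes it testable. The items below are generic examples, not the published checklist:

```python
from dataclasses import dataclass

# Illustrative structure only; the published items live in
# Statistical_Claims_Checklist.md.
@dataclass
class ChecklistItem:
    question: str       # what the reviewer must answer
    fail_annotation: str  # annotation to attach if the answer is "no"

STATISTICAL_ITEMS = [
    ChecklistItem("Is the sample size reported and adequate?",
                  "sample concern"),
    ChecklistItem("Is the full date range shown, not a favourable window?",
                  "possible cherry-picking"),
    ChecklistItem("Does the cited figure match the primary dataset?",
                  "methodology mismatch"),
]
```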
### Operational
| File | Purpose |
|---|---|
| Regression_Testing_Framework.md | How the methodology is tested for consistency across versions. |
| Crisis_Communication_Plan.md | Procedures for when Veridi produces an incorrect assessment. |
| Legal_Escalation_Path.md | Procedures for claims with legal implications. |
| Volunteer_Safety_Framework.md | Safety procedures for volunteers working with sensitive or distressing content. |
| CHANGELOG.md | Version history of methodology changes. |
### Validation
| File | Purpose |
|---|---|
| golden_test_set_A.md | 25 cross-domain test claims with documented ground truth (GTS-001 to GTS-025). |
| golden_test_set_B.md | 25 weakness-targeting test claims (GTS-026 to GTS-050). |
| golden_test_set_C.md | 20 gap-filling test claims (GTS-051 to GTS-070). |
| adversarial_test_suite.md | 12 single-vector adversarial claims (ADV-001 to ADV-012). |
| adversarial_test_suite_v2.md | 12 multi-vector adversarial claims (ADV-013 to ADV-024). |
| validation-results/ | Per-claim scorecards with full evidence and reasoning. |
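The test-set IDs suggest a straightforward harness: run each claim, compare the verdict against documented ground truth, and write a per-claim scorecard. A minimal sketch, assuming a `run_fact_check` entry point and a simple JSON scorecard layout (both hypothetical):

```python
import json
from pathlib import Path

# Hypothetical harness over the golden test sets (GTS-001..GTS-070).
# run_fact_check() and the scorecard layout are assumptions.
def run_fact_check(claim: str) -> str:
    raise NotImplementedError("stand-in for the actual pipeline")

def score(test_set: list[dict], out_dir: str) -> float:
    """Compare verdicts to ground truth; write one scorecard per claim."""
    hits = 0
    for case in test_set:  # each case: {"id", "claim", "ground_truth"}
        verdict = run_fact_check(case["claim"])
        hits += verdict == case["ground_truth"]
        Path(out_dir, f"{case['id']}.json").write_text(
            json.dumps({"id": case["id"], "verdict": verdict,
                        "expected": case["ground_truth"]}, indent=2))
    return hits / len(test_set)
```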
## What to look for
If you’re reviewing the methodology, here are the most important things to evaluate:
- **Decision tree consistency.** Do the verdict decision trees produce the same result regardless of which path you take? Are there contradictions between the trees and the output format definitions? (A mechanical probe for this is sketched after this list.)
- **Source hierarchy completeness.** Are there source types that don’t fit cleanly into the four tiers? Are there edge cases where the confidence ceiling produces unreasonable results?
- **Gaming countermeasure coverage.** Are there disinformation techniques not covered by the eleven vectors? Can you construct a claim that should be detected but wouldn’t be?
- **IRI assessment methodology.** Are the degradation levels well-defined? Are the comparison anchors genuinely independent? Could the IRI itself be gamed?
- **Confidence calibration.** Are the field reliability coefficients well-sourced? Does the interaction between tier ceilings and field coefficients produce sensible results across edge cases?
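For the decision tree consistency question, one mechanical probe is to express both the trees and the output-format definitions as functions and enumerate every input combination. The two stub functions below are stand-ins for that exercise:

```python
from itertools import product

# Reviewer's probe: if the decision trees and the output-format definitions
# are both written as functions, every input should yield the same verdict.
# The two stubs are stand-ins to be filled from the published documents.
def verdict_via_tree(facts_ok: bool, implies_false: bool) -> str: ...
def verdict_via_definitions(facts_ok: bool, implies_false: bool) -> str: ...

def check_agreement() -> list[tuple]:
    """Return every input combination where the two formulations disagree."""
    disagreements = []
    for facts_ok, implies_false in product([True, False], repeat=2):
        a = verdict_via_tree(facts_ok, implies_false)
        b = verdict_via_definitions(facts_ok, implies_false)
        if a != b:
            disagreements.append((facts_ok, implies_false, a, b))
    return disagreements
```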
## Reporting issues
If you find an inconsistency, gap, or vulnerability in the methodology, we want to hear about it. The methodology improves through exactly this kind of external scrutiny.
Contact: veridi [at] nettercap.net