Open Methodology

How to inspect Veridi’s methodology

The complete Veridi methodology is published as a set of Markdown documents. These are the same files the AI follows when performing a fact-check; there is no hidden layer or proprietary process. What you see is what runs.

The methodology files

Core system

| File | Purpose |
| --- | --- |
| Triage_Agent_Main_Prompt.md | The primary system prompt. Defines claim classification, complexity assessment, and routing logic. |
| System_Flow_v2.md | Architecture diagram showing how a claim moves through the system: from input through triage, specialist invocation, and integrated output. |
| Output_Format_Standard.md | Defines the nine verdict categories, output structure, and annotation requirements. |
| Verdict_Decision_Trees.md | Explicit decision logic for resolving verdict boundary cases, particularly Misleading vs. Lacks Context and Mixed vs. Mostly False. |

Evidence evaluation

| File | Purpose |
| --- | --- |
| Source_Hierarchy_Triage_Addendum_v2.md | The four-tier source classification system with confidence ceilings and independence verification procedures. |
| Confidence_Calibration_Framework.md | Tier-based confidence caps, field reliability coefficients with sourcing honesty labels, and the interaction rules between them. |
| Institutional_Reliability_Index.md | Per-agency, per-function reliability assessments for institutions whose output may have been compromised. Includes degradation levels, observable indicators, and comparison anchors. |

Gaming countermeasures

| File | Purpose |
| --- | --- |
| Gaming_Countermeasures.md | Detection procedures for all 11 disinformation attack vectors. Includes the 14-item quick checklist applied before any verdict above 70% confidence. |
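The checklist rule can be read as a publication gate. The sketch below is a hypothetical illustration of that gate, not code from the methodology; the names `checklist_required` and `gate_verdict` are invented for this example.

```python
CHECKLIST_SIZE = 14      # the quick checklist has 14 items
CONFIDENCE_GATE = 0.70   # verdicts above this confidence trigger the checklist


def checklist_required(confidence: float) -> bool:
    """Return True when the quick checklist must run before a verdict is issued."""
    return confidence > CONFIDENCE_GATE


def gate_verdict(confidence: float, items_checked: int) -> bool:
    """Allow a verdict only if the gate condition is satisfied.

    Low-confidence verdicts pass unconditionally; high-confidence verdicts
    require every checklist item to have been applied.
    """
    if not checklist_required(confidence):
        return True
    return items_checked == CHECKLIST_SIZE
```

The point of the gate is asymmetry: a confident verdict is the costliest kind to get wrong, so it carries the heaviest pre-publication burden.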

Domain specialists

| File | Purpose |
| --- | --- |
| Scientific_Claims_Specialist.md | Evaluation framework for scientific claims: study quality, methodology assessment, consensus evaluation. |
| Medical_Health_Specialist.md | Medical and public health claim evaluation, including clinical trial assessment and pharmacological claims. |
| Legal_Regulatory_Specialist.md | Legal and regulatory claim evaluation: statute interpretation, court ruling analysis, regulatory procedure. |
| Financial_Economic_Specialist.md | Financial and economic claim evaluation: market data, institutional analysis, statistical claims. |
| Electoral_Voting_Specialist.md | Electoral and voting claim evaluation: election procedures, voter data, policy analysis. |
| Historical_Context_Specialist.md | Historical claim evaluation: contextualization, historiographic assessment, source verification for historical records. |
| Technology_Digital_Specialist.md | Technology and digital claim evaluation: platform analysis, digital forensics, AI-generated content detection. |
| Propaganda_Deconstruction_Specialist_v2.md | Propaganda and narrative analysis: rhetorical technique identification, narrative deconstruction, disinformation pattern recognition. |
| Breaking_Event_Analyst.md | Evaluation framework for claims about events less than 72 hours old: timeline construction, source ecosystem mapping, uncertainty inventory. |

Supporting frameworks

| File | Purpose |
| --- | --- |
| Statistical_Claims_Checklist.md | Structured evaluation for statistical claims: methodology validation, sample assessment, cherry-picking detection. |
| Infrastructure_Authenticity_Addendum.md | Digital infrastructure verification: domain registration, hosting analysis, website authenticity assessment. |

Operational

| File | Purpose |
| --- | --- |
| Regression_Testing_Framework.md | How the methodology is tested for consistency across versions. |
| Crisis_Communication_Plan.md | Procedures for when Veridi produces an incorrect assessment. |
| Legal_Escalation_Path.md | Procedures for claims with legal implications. |
| Volunteer_Safety_Framework.md | Safety procedures for volunteers working with sensitive or distressing content. |
| CHANGELOG.md | Version history of methodology changes. |

Validation

| File | Purpose |
| --- | --- |
| golden_test_set_A.md | 25 cross-domain test claims with documented ground truth (GTS-001 to GTS-025). |
| golden_test_set_B.md | 25 weakness-targeting test claims (GTS-026 to GTS-050). |
| golden_test_set_C.md | 20 gap-filling test claims (GTS-051 to GTS-070). |
| adversarial_test_suite.md | 12 single-vector adversarial claims (ADV-001 to ADV-012). |
| adversarial_test_suite_v2.md | 12 multi-vector adversarial claims (ADV-013 to ADV-024). |
| validation-results/ | Per-claim scorecards with full evidence and reasoning. |
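The published ID ranges can be sanity-checked mechanically. The sketch below assumes only what the table states: that claim IDs follow the zero-padded GTS-NNN / ADV-NNN pattern and cover the listed ranges. The `RANGES` mapping and `expand` helper are invented for this example.

```python
# Documented ID ranges, transcribed from the validation table.
RANGES = {
    "golden_test_set_A.md": ("GTS", 1, 25),
    "golden_test_set_B.md": ("GTS", 26, 50),
    "golden_test_set_C.md": ("GTS", 51, 70),
    "adversarial_test_suite.md": ("ADV", 1, 12),
    "adversarial_test_suite_v2.md": ("ADV", 13, 24),
}


def expand(prefix: str, lo: int, hi: int) -> list[str]:
    """Expand a range spec into zero-padded claim IDs, e.g. GTS-001."""
    return [f"{prefix}-{n:03d}" for n in range(lo, hi + 1)]


all_ids = [claim_id for spec in RANGES.values() for claim_id in expand(*spec)]

# The ranges are disjoint and account for every documented test claim.
assert len(all_ids) == len(set(all_ids))
assert len(all_ids) == 25 + 25 + 20 + 12 + 12  # 94 claims in total
```

A reviewer could extend this to cross-check that every ID has a matching scorecard under validation-results/.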

What to look for

If you’re reviewing the methodology, here are the most important things to evaluate:

  1. Decision tree consistency. Do the verdict decision trees produce the same result regardless of which path you take? Are there contradictions between the trees and the output format definitions?

  2. Source hierarchy completeness. Are there source types that don’t fit cleanly into the four tiers? Are there edge cases where the confidence ceiling produces unreasonable results?

  3. Gaming countermeasure coverage. Are there disinformation techniques not covered by the eleven vectors? Can you construct a claim that should be detected but wouldn’t be?

  4. IRI assessment methodology. Are the degradation levels well-defined? Are the comparison anchors genuinely independent? Could the IRI itself be gamed?

  5. Confidence calibration. Are the field reliability coefficients well-sourced? Does the interaction between tier ceilings and field coefficients produce sensible results across edge cases?
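One way to probe item 5 is a small harness that combines a field reliability coefficient with a tier confidence ceiling and checks edge cases. The multiply-then-cap rule below is an assumption made for illustration only; the actual interaction rules are defined in Confidence_Calibration_Framework.md.

```python
def effective_confidence(base: float, field_coeff: float, tier_ceiling: float) -> float:
    """Scale a base confidence by a field reliability coefficient,
    then cap the result at the source tier's confidence ceiling.

    The multiply-then-cap rule is a hypothetical stand-in for the
    published interaction rules.
    """
    return min(base * field_coeff, tier_ceiling)


# Probe edge cases: the ceiling should bind for strong evidence,
# and the coefficient should bind elsewhere.
cases = [
    (0.95, 1.00, 0.90),  # ceiling binds
    (0.95, 0.70, 0.90),  # coefficient binds
    (0.40, 1.20, 0.90),  # coefficient above 1 still must not breach ceiling
]
for base, coeff, ceiling in cases:
    conf = effective_confidence(base, coeff, ceiling)
    assert 0.0 <= conf <= ceiling
```

A reviewer asking whether the real rules "produce sensible results across edge cases" is essentially asking whether invariants like the one asserted above hold under every combination the methodology permits.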

Reporting issues

If you find an inconsistency, gap, or vulnerability in the methodology, we want to hear about it. The methodology improves through exactly this kind of external scrutiny.

Contact: veridi [at] nettercap.net