For Organizations and Funders
The problem
Disinformation operates at a scale and sophistication that individual fact-checkers cannot match. Claims are engineered to exploit specific weaknesses in human evaluation: using true facts to create false impressions, laundering unreliable sources through credible outlets, and weaponizing the degradation of institutions that people have been taught to trust.
Existing fact-checking organizations do essential work, but they face structural challenges: editorial processes that are difficult to scale, implicit decision-making that is difficult to audit, and limited systematic defenses against adversarial disinformation techniques.
The approach
Veridi is a structured fact-checking methodology: an explicit, step-by-step process for evaluating claims. It addresses these challenges directly:
Scalable. The methodology is implemented as a prompt system for AI, meaning it can process claims at machine speed while following documented procedures. A single assessment takes minutes, not hours.
Auditable. Every step is documented. Source classification, gaming countermeasure checks, decision tree paths, confidence reasoning - all visible in the output. When the system makes an error, the process shows where and why.
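To make the auditability claim concrete, a documented assessment could be captured as a single structured record. The field names below are an illustrative sketch, not Veridi's actual output schema:

```python
from dataclasses import dataclass, asdict

# Hypothetical audit record; fields mirror the documented steps above
# (source classification, countermeasure checks, decision path, confidence).
@dataclass
class AssessmentRecord:
    claim: str
    source_classifications: dict[str, str]  # source -> reliability tier
    countermeasure_checks: list[str]        # gaming techniques screened for
    decision_path: list[str]                # path taken through the decision tree
    verdict: str
    confidence: float                       # 0.0 - 1.0
    confidence_rationale: str

record = AssessmentRecord(
    claim="Example claim",
    source_classifications={"example.gov": "primary"},
    countermeasure_checks=["source laundering", "cherry-picking"],
    decision_path=["classify sources", "check countermeasures", "reach verdict"],
    verdict="false",
    confidence=0.9,
    confidence_rationale="Primary sources contradict the claim directly.",
)

# Serializing to a plain dict means every step can be logged and later audited.
audit_log = asdict(record)
```

Because the record is plain data, an error can be traced to the exact step where the process went wrong.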
Adversarial-aware. The methodology includes explicit countermeasures for eleven documented disinformation techniques. This is its most distinctive feature: treating fact-checking as an adversarial problem where bad actors are actively trying to defeat the process.
Adaptive to institutional change. An Institutional Reliability Index tracks when formerly authoritative sources have been compromised by political interference or defunding, and provides alternative sources to consult. This matters in an era when some of the most effective disinformation correctly attributes false claims to institutions that were once trustworthy.
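A minimal sketch of how such an index might be consulted, assuming a simple status-plus-alternatives structure (the entries and institution names are invented for illustration):

```python
# Illustrative Institutional Reliability Index; structure and example
# values are assumptions, not Veridi's actual index.
reliability_index = {
    "ExampleHealthAgency": {
        "status": "compromised",   # e.g. political interference or defunding
        "since": "2025-01",
        "alternatives": ["ExampleWHODataset", "ExampleAcademicRegistry"],
    },
    "ExampleStatsBureau": {
        "status": "reliable",
        "since": None,
        "alternatives": [],
    },
}

def sources_to_consult(institution: str) -> list[str]:
    """Return the institution itself while it remains reliable,
    otherwise the alternative sources recorded for it."""
    entry = reliability_index.get(institution)
    if entry is None or entry["status"] == "reliable":
        return [institution]
    return entry["alternatives"]
```

The key design point is that a compromised institution is not simply discarded: the index redirects the fact-checker to substitute sources, so a claim correctly attributed to a degraded institution can still be evaluated.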
The evidence
The methodology was validated against 97 claims:
- 96 passed, 1 partial, 0 failed. The partial case was a correct verdict with slightly lower confidence than warranted, due to source unavailability.
- 24 adversarial scenarios were specifically engineered to produce wrong answers by exploiting disinformation techniques. All were handled correctly.
- 4 claims were based on documented real-world disinformation patterns (VAERS misuse, “died suddenly” narrative, immigration-crime statistics, FEMA hurricane diversion).
- 6 claims tested genuinely contested topics where experts disagree. The methodology produced appropriate verdicts with clearly reported uncertainty.
- 4 languages were tested for non-English source evaluation (Japanese, Turkish, Chinese, Hindi).
The full validation report documents the testing methodology, per-claim results, and limitations.
The organization
Veridi is being built by a small Canadian organization that intends to incorporate as a nonprofit with a lean operating budget. The organization:
- Has no venture capital, advertising model, or growth-at-all-costs pressure
- Publishes its full methodology for external inspection
- Documents its limitations alongside its successes
- Positions itself as complementary to existing fact-checking organizations, not competitive
What’s next
The methodology is validated and ready for controlled deployment: real-world use by trained volunteers with ongoing quality monitoring. Next steps include:
- Usability testing with human volunteers to verify the methodology can be followed correctly by non-experts
- Brier score calibration to measure whether confidence ratings match real-world outcomes over time
- Scale testing in continuous production
- Red-teaming by external parties attempting to find claims the methodology handles incorrectly
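The Brier score mentioned above is a standard calibration measure: the mean squared error between a stated confidence and the eventual binary outcome. A lower score is better; always guessing 50% yields 0.25. A minimal sketch:

```python
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities (0.0-1.0)
    and binary outcomes (1 = verdict held up, 0 = it did not)."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must align")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three verdicts issued at 90% confidence, two of which held up:
score = brier_score([0.9, 0.9, 0.9], [1, 1, 0])
# (0.01 + 0.01 + 0.81) / 3 ≈ 0.277
```

Tracked over time, this gives a single number showing whether the methodology's confidence ratings match real-world outcomes.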
Contact
veridi [at] nettercap.net
For technical questions about the methodology, see the researcher documentation. For the full methodology files, see open methodology.