Mission
Why the suite exists
Like many people, we have watched the increasing flood of cheaply generated misinformation and disinformation with dismay, and we wondered whether it was possible to build frameworks that use the machinery that facilitates the problem to help address it. We were also struck by an adjacent gap: even when an empirical claim is settled, the leap from “what is true” to “what should we do about it” and “what can a specific person actually do” is often left to advocacy organizations with positions, think tanks with funders, or social media voices with priors.
So the suite is three products, sharing a discipline:
- Veridi — Is this claim true? A fact-checking methodology that produces transparent, auditable verdicts.
- Pragma — What does the evidence suggest about this policy? A policy-evidence synthesis methodology that separates empirical assessment from value choice.
- Praxis — What can this person actually do about a goal? An individual-action synthesis methodology that matches a user’s position, resources, and constraints to the highest-leverage pathways available to them.
Most people evaluate claims, policies, and choices about action the same way: they read something, decide whether it sounds right, and move on. Even professionals often rely on experience and instinct. This works until it doesn’t — and in an environment engineered to exploit exactly these shortcuts, it increasingly doesn’t.
The three products take a different approach: explicit, step-by-step procedures for the question each one addresses. Think of each as a checklist: not for what to conclude, but for how to evaluate.
What we’re building
Each product is a structured methodology, implemented as a prompt system for an AI model (currently Anthropic's Claude). The AI doesn't decide what's true, what's wise, or what's worth doing; it follows each methodology's documented decision trees, source hierarchies, evidence quality frameworks, and quality checks to produce a transparent, auditable assessment of the claim, policy, or goal.
This matters because the methodologies are separable from any particular AI system. The procedures are documented, the decision logic is explicit, and the assessment criteria are inspectable. If the AI makes a mistake, the methodology provides a framework for identifying where and why.
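To make "auditable" concrete, here is a minimal sketch of what an assessment record with a replayable decision trail could look like. This is illustrative only: the class and field names are our assumptions, not the suite's actual schema.

```python
# Illustrative sketch of an auditable assessment record.
# All names here are hypothetical, not the suite's real data model.
from dataclasses import dataclass, field


@dataclass
class StepRecord:
    step_id: str   # which documented procedure step ran
    inputs: str    # what the step evaluated
    outcome: str   # what the step concluded, and why


@dataclass
class Assessment:
    claim: str
    verdict: str                              # e.g. one of Veridi's graded categories
    trail: list[StepRecord] = field(default_factory=list)

    def audit(self) -> list[str]:
        """Replay the decision trail so a reviewer can locate
        where and why any step went wrong."""
        return [f"{s.step_id}: {s.outcome}" for s in self.trail]
```

Because every step is recorded, a mistaken verdict can be traced to the specific procedure step that produced it, rather than to an opaque model judgment.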
Principles
These principles apply across all three products.
Truth comes first. Every Veridi assessment begins by stating what is actually true, clearly, affirmatively, and without negation. Every Pragma assessment names what the evidence establishes before naming what’s contested. Every Praxis assessment grounds its recommendation in the user’s actual context before suggesting action. Repetition of false framings, even to refute them, can reinforce them — so we lead with the corrective.
Transparency over authority. We don’t ask anyone to trust us. We document each product’s methodology, test results, limitations, and reasoning, and make them available to those who request them. If our processes are sound, the results will bear that out. If they aren’t, we want to know.
Forthright uncertainty. When the evidence is ambiguous, we say so. When experts genuinely disagree, we report the disagreement rather than picking a side. Veridi uses nine verdict categories. Pragma uses verbal confidence bands plus an Outcome Likelihood scale. Praxis uses leverage-confidence bands. Binary outputs would fail to describe most real-world questions accurately.
Adversarial awareness. Disinformation, manipulation, and motivated reasoning are not random. They follow patterns: techniques that exploit specific weaknesses in how people and institutions evaluate information. Pragma identifies 14 such patterns (corrected from 11 in the v1.2 audit) and provides explicit detection procedures. Praxis adds 6 native vectors plus 8 inherited from Pragma. Each product treats reasoning under contestation as an evolving adversarial problem.
Auditable feedback loop. The v1.3 calibration feedback loop closes the gap between recommendations and realized outcomes. User-reported outcomes feed Brier-lite scoring per pathway, per recommendation, per methodology. Drift triggers human review (auto-flag, NOT auto-adjust). Methodology files are never modified by code.
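A Brier-style score is the squared error between a stated probability and the realized outcome; "Brier-lite" is the suite's own term, and the sketch below is only one plausible reading of it. The function names, the drift tolerance, and the flagging rule are all assumptions for illustration.

```python
# Hedged sketch of a Brier-lite calibration check.
# Names and thresholds are illustrative, not the suite's implementation.

def brier_lite(forecast_prob: float, outcome: int) -> float:
    """Squared error between a stated probability and the realized
    outcome (1 = it happened, 0 = it did not). Lower is better."""
    return (forecast_prob - outcome) ** 2


def drift_flag(scores: list[float], baseline: float, tolerance: float = 0.05) -> bool:
    """Auto-flag, never auto-adjust: return True when the mean score
    drifts past baseline + tolerance, queueing the methodology for
    human review. No methodology file is modified by this code."""
    mean = sum(scores) / len(scores)
    return mean > baseline + tolerance
```

The key design point mirrors the principle above: the code only raises a flag; any change to a methodology remains a human decision.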
Organization
We are a small Canadian cooperative-in-formation. There is no venture capital, no growth-at-all-costs pressure, and no advertising model. The cooperative exists to produce accurate, evidence-grounded reasoning across fact-checking, policy analysis, and individual-action synthesis, and to make its methodologies available to others.
We are not competitors to existing fact-checkers, think tanks, or organizing infrastructure. Organizations like PolitiFact, Snopes, and the IFCN network do important fact-checking work; prior art such as Loki (an open-source fact-checking platform) offers useful lessons and documents shared challenges. Existing policy-research institutions and movement-building groups have decades of expertise. The suite’s contribution is structured, auditable methodologies for three specific questions that can complement and strengthen existing practices, not replace them.