Verdict Taxonomy

Nine categories for how claims actually work

Binary true/false ratings fail to describe most real-world claims. A claim can contain true facts arranged to create a false impression. A claim can be accurate but missing context that reverses its meaning. A claim can be genuinely unresolvable with available evidence. Veridi uses nine verdict categories to capture these distinctions.


TRUE

The claim is accurate. Supported by Tier 1 or Tier 2 sources with no material caveats.

Example: “More than 97% of actively publishing climate scientists agree that human activities are causing global warming.” - Confirmed by multiple independent meta-analyses (Cook et al. 2013, Lynas et al. 2021) and NASA’s scientific consensus page.

MOSTLY TRUE

The claim is substantially accurate with minor imprecisions that do not change the reasonable takeaway.

Example: “The Eiffel Tower is about 1,000 feet tall.” - The actual height is 984 feet to the roof or 1,083 feet to the antenna tip. The qualifier “about” signals approximation, and the imprecision, while real, does not change the claim’s essential meaning.

MIXED

The claim contains independently verifiable sub-claims, some true and some false, where the true and false elements address genuinely different aspects of the topic. Neither dominates.

Example: A claim about a country’s economy that correctly states GDP growth figures but incorrectly states unemployment statistics, where both numbers are independently meaningful.

MOSTLY FALSE

The claim contains a true element, but the core assertion is false. The true element does not salvage the main point.

Example: “Biden’s administration let 8 million migrants into the US.” - CBP recorded approximately 8.1 million encounter events, but encounters are not individuals (repeat crossings count separately), and over 3.6 million resulted in expulsions or removals. The number is real; the characterization of “let in” is false for the majority.

FALSE

The claim is inaccurate, with no significant true sub-components.

Example: “Exposure to 5G cellular towers causes or spreads COVID-19.” - No mechanism exists by which radio waves could transmit a virus. Contradicted by WHO, CDC, and basic physics.

MISLEADING

Individual facts in the claim may be accurate, but they are assembled to create a false composite impression, and the false impression appears to be the purpose of the framing.

This is the most important distinction in the taxonomy: Misleading is not the same as Lacks Context. The difference is intent and construction. Misleading indicates engineered framing: cherry-picked metrics, advocacy context, rhetorical devices designed to lead to a predetermined conclusion. Lacks Context indicates passive omission.

Example: A politician cites accurate Census data on population share changes but omits that growth rates are converging, uses the data in a speech about “infiltration,” and attributes the change to a cause for which no government data exists.

LACKS CONTEXT

Important information is missing that would change the reasonable interpretation, but the omission appears incidental rather than engineered. The claim is not wrong - it’s incomplete in a way that creates a false impression the claimant may not have intended.

Example: “Company X’s profits doubled last year.” - True: profits went from $1M to $2M. But the company previously earned $50M and suffered massive losses. The “doubling” represents a 96% decline from the healthy baseline. The bare factual report is accurate but incomplete.

OUTDATED

The claim was accurate at the time of original publication but has been superseded by new data, policy changes, or events.

Example: “The world population is 7 billion.” - Accurate when the milestone was announced in 2011, but superseded: the population passed 8 billion in 2022.

UNVERIFIABLE

The claim cannot be confirmed or denied with available evidence. This may be because the claim is structured to resist verification (anonymous sources discussing classified material in private settings), because the evidence is genuinely ambiguous, or because the relevant data doesn’t exist.

Example: Definitive claims about COVID-19 origins - the intelligence community remains split, and critical evidence may no longer be accessible.
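The nine categories above form a closed set, which makes them natural to encode as an enumeration. The sketch below is illustrative only; the identifiers are assumptions, not Veridi's actual API.

```python
from enum import Enum

class Verdict(Enum):
    """The nine Veridi verdict categories.
    (Hypothetical names for illustration; not Veridi's real schema.)"""
    TRUE = "true"
    MOSTLY_TRUE = "mostly_true"
    MIXED = "mixed"
    MOSTLY_FALSE = "mostly_false"
    FALSE = "false"
    MISLEADING = "misleading"
    LACKS_CONTEXT = "lacks_context"
    OUTDATED = "outdated"
    UNVERIFIABLE = "unverifiable"

# A closed enum guarantees every claim maps to exactly one category.
assert len(Verdict) == 9
```

Using an enum rather than free-form strings means a verdict pipeline fails loudly if a rater invents a tenth category.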


The hard boundaries

Two verdict boundaries cause the most disagreement among fact-checkers. Veridi uses explicit decision trees to resolve them:

Misleading vs. Lacks Context: Does the false impression appear to be the purpose of the framing? Indicators: cherry-picked metrics, advocacy context, rhetorical devices, unsupported causal attribution. If yes → Misleading. If the omission appears incidental (routine reporting, no rhetorical framing) → Lacks Context.

Mixed vs. Mostly False: Are the true elements independently meaningful sub-claims, or are they preconditions/background facts that support the false core assertion? If the true element is a genuine separate point → Mixed. If the true element is a setup or anchor for the false main claim → Mostly False.
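The two decision trees above can be sketched as simple branching functions. This is a minimal sketch under assumptions: the indicator labels and function names are hypothetical, chosen to mirror the prose, and real adjudication would involve human judgment rather than string matching.

```python
def misleading_or_lacks_context(indicators: set[str]) -> str:
    """Boundary 1: does the false impression appear to be the
    purpose of the framing? Any engineered-framing indicator
    (labels below are hypothetical) tips the verdict to Misleading;
    otherwise the omission is treated as incidental."""
    ENGINEERED = {
        "cherry_picked_metrics",
        "advocacy_context",
        "rhetorical_devices",
        "unsupported_causal_attribution",
    }
    return "MISLEADING" if indicators & ENGINEERED else "LACKS_CONTEXT"

def mixed_or_mostly_false(true_elements_independent: bool) -> str:
    """Boundary 2: are the true elements genuine separate points
    (Mixed), or setup/anchor for the false core claim (Mostly False)?"""
    return "MIXED" if true_elements_independent else "MOSTLY_FALSE"
```

For instance, a claim flagged with `{"advocacy_context"}` resolves to Misleading, while one whose only context is routine reporting resolves to Lacks Context.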

These decision trees were tested against 18 boundary cases in validation. All 18 resolved to the expected side.