Gaming Countermeasures

Technical reference

This page provides the detailed detection procedures for all eleven disinformation attack vectors recognized by the Veridi methodology. For a higher-level overview, see Adversarial Testing.


Summary

#   Attack Vector                    Detection Difficulty  Impact Severity  Test Claims
1   Confidence Laundering            Moderate              High             ADV-001, ADV-002
2   Citogenesis                      Moderate              High             ADV-003, ADV-004
3   Unverifiable-by-Design           Hard                  High             ADV-005, ADV-006
4   Preprint Pump-and-Dump           Moderate              Medium           ADV-011
5   Selective Skepticism             Hard                  High             ADV-009
6   Tier Inflation                   Hard                  High             ADV-007
7   Framing Manipulation             Very Hard             Very High        ADV-008
8   Coordinated Legitimate Sourcing  Hard                  High             ADV-010
9   Anchoring                        Hard                  High             ADV-012
10  Data Disappearance Exploitation  Hard                  Very High        ADV-013, ADV-014
11  Institutional Capture            Very Hard             Very High        ADV-015, ADV-016

Detection procedures

1. Confidence Laundering

Pattern: Multiple outlets all trace back to a single unreliable source, creating the false appearance of independent confirmation.

Detection: Trace every source cited in the claim to its ultimate origin. Determine whether sources that appear independent actually derive from the same root. Derived sources do not boost confidence - only genuinely independent sources with separate access to the underlying facts count as corroboration.

Consequence: Effective sourcing tier is based on the original source, regardless of which outlet ultimately published the claim. If the original source is Tier 4, the confidence ceiling is 50% even if the claim was later reported by Tier 2 outlets.
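The trace-to-origin step can be sketched as follows. This is a minimal illustration: the source names and the shape of the "derived from" map are assumptions, not part of the methodology.

```python
# Sketch of the trace-to-origin check for confidence laundering.
# Source names and data shapes are illustrative assumptions.

def trace_to_root(source: str, derived_from: dict) -> str:
    """Follow 'derived from' links until an original source is reached."""
    seen = set()
    while source in derived_from and source not in seen:
        seen.add(source)
        source = derived_from[source]
    return source

def independent_roots(cited: list, derived_from: dict) -> set:
    """Sources sharing a root collapse to one; only distinct roots corroborate."""
    return {trace_to_root(s, derived_from) for s in cited}

# Three outlets that all trace back to one anonymous blog count as one source.
derived = {"OutletA": "Aggregator", "OutletB": "Aggregator",
           "Aggregator": "AnonymousBlog"}
roots = independent_roots(["OutletA", "OutletB", "AnonymousBlog"], derived)
# roots == {"AnonymousBlog"}: no independent corroboration exists.
```

Because the ceiling is set by the root, adding more outlets to `cited` that resolve to the same root never increases the count of corroborating sources.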

2. Citogenesis

Pattern: Circular citation loops. A claim appears on source A, is cited by source B, and source A is then updated to cite source B as confirmation.

Detection: Timestamp analysis - does the supposed confirmation predate or closely follow the original claim? Language similarity - do “independent” sources use identical unusual phrasing? Citation chain - do sources cite each other rather than independent evidence?

Consequence: Circular citations are collapsed to a single source. Confidence is based on the quality of the original, non-circular evidence.
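The citation-chain check above can be sketched as mutual-reachability grouping: sources that cite each other, directly or through intermediaries, collapse into one evidentiary unit. Source names are hypothetical.

```python
# Sketch of citation-loop detection: sources that can each reach the other
# through the citation graph are collapsed to a single source.

def reaches(start: str, target: str, cites: dict) -> bool:
    """Depth-first search: can citations be followed from start to target?"""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        if node == target:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(cites.get(node, []))
    return False

def collapse_circular(sources: list, cites: dict) -> list:
    """Group mutually citing sources into a single evidentiary unit."""
    groups = []
    for s in sources:
        for g in groups:
            if any(reaches(s, t, cites) and reaches(t, s, cites) for t in g):
                g.add(s)
                break
        else:
            groups.append({s})
    return groups

# SourceA cites SourceB, and SourceA was later updated to be cited by SourceB:
cites = {"SourceA": ["SourceB"], "SourceB": ["SourceA"]}
groups = collapse_circular(["SourceA", "SourceB", "SourceC"], cites)
# → [{"SourceA", "SourceB"}, {"SourceC"}]: the loop counts as one source.
```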

3. Unverifiable-by-Design

Pattern: Claims structured so verification is architecturally impossible. Anonymous sources discussing classified material in private settings. Claims about internal deliberations with no documentary trail.

Detection: Assess whether the claim’s structure permits independent verification. If the only possible sources are the claimant and unnamed insiders, the claim is unverifiable by design, not merely unverified.

Consequence: Confidence is capped. Specificity is not treated as a proxy for credibility; a claim can be highly detailed and completely fabricated.

4. Preprint Pump-and-Dump

Pattern: A methodologically weak preprint is amplified as “research” before peer review.

Detection: Check publication status (preprint vs. peer-reviewed). Check timing: was the preprint publicized immediately, before expert review? Check credentials: do the authors have relevant expertise? Check language: does the coverage use language (“proves,” “confirms”) that exceeds what a preprint can establish?

Consequence: Preprints are classified at a lower tier than peer-reviewed publications. Language that overstates preprint findings is flagged.
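The four checks above can be sketched as a red-flag counter. The field names, the seven-day publicity threshold, and the overstatement word list are illustrative assumptions, not values from the methodology.

```python
# Hedged sketch of the preprint checks. Field names, the 7-day threshold,
# and the word list are assumptions for illustration.

OVERSTATEMENT = ("proves", "confirms", "establishes")

def preprint_red_flags(report: dict) -> list:
    flags = []
    if report.get("status") == "preprint":
        flags.append("not peer-reviewed: classify at a lower tier")
        if report.get("days_until_publicized", 999) < 7:  # assumed threshold
            flags.append("publicized before expert review was possible")
    if not report.get("authors_have_relevant_expertise", True):
        flags.append("authors lack relevant expertise")
    text = report.get("coverage_text", "").lower()
    if any(w in text for w in OVERSTATEMENT):
        flags.append("coverage language exceeds what a preprint can establish")
    return flags

report = {"status": "preprint", "days_until_publicized": 1,
          "authors_have_relevant_expertise": False,
          "coverage_text": "New study PROVES the treatment works"}
# preprint_red_flags(report) returns all four flags for this example.
```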

5. Selective Skepticism

Pattern: Asymmetric evidence standards. Impossibly high standards applied to one side of a debate while the opposing position is accepted without evidence.

Detection: Identify the evidence standard applied to the claim. Apply the same standard to the counterclaim. If the counterclaim would fail its own standard, selective skepticism is present.

Consequence: The methodology enforces symmetric evidence standards. The same burden of proof applies to both sides.
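The symmetric-standard test can be modeled with evidence tiers, where a lower number means a stronger tier. This encoding is an assumption for illustration; the methodology itself only requires that the same burden apply to both sides.

```python
# Sketch of the symmetric-standard test. Tiers 1 (strongest) to 4 (weakest)
# are an assumed encoding for illustration.

def selective_skepticism_present(claim_required_tier: int,
                                 counterclaim_evidence_tier: int,
                                 counterclaim_accepted: bool) -> bool:
    """True when the counterclaim is accepted despite failing the same
    evidence standard demanded of the claim."""
    passes_same_standard = counterclaim_evidence_tier <= claim_required_tier
    return counterclaim_accepted and not passes_same_standard

# Tier 1 evidence is demanded of the claim; the counterclaim is accepted
# on Tier 4 evidence, which would fail that same standard.
selective_skepticism_present(1, 4, True)  # → True
```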

6. Tier Inflation

Pattern: Low-quality claims laundered through progressively more credible outlets until they appear authoritative.

Detection: Trace the evidence chain back to its origin. Classify the original source, not the final publisher. An anonymous blog post cited by a news aggregator cited by a respected publication is still an anonymous blog post.

Consequence: Evidence is classified based on original source tier. The confidence ceiling is set by the origin, not the endpoint.
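The origin-governs rule can be sketched as a ceiling lookup over the evidence chain. Only the Tier 4 ceiling of 50% is stated by the methodology; the other ceiling values and the chain encoding are illustrative assumptions.

```python
# Sketch: the confidence ceiling is set by the origin tier, not the final
# publisher. Only the Tier 4 -> 0.50 ceiling is from the methodology; the
# other values are assumptions.

CEILING = {1: 0.95, 2: 0.85, 3: 0.70, 4: 0.50}

def confidence_ceiling(chain_tiers: list) -> float:
    """chain_tiers runs from origin to final publisher; the origin governs."""
    origin_tier = chain_tiers[0]
    return CEILING[origin_tier]

# Anonymous blog (Tier 4) -> aggregator (Tier 3) -> respected outlet (Tier 2):
confidence_ceiling([4, 3, 2])  # → 0.5
```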

7. Framing Manipulation

Pattern: Individually true facts assembled to create a composite false impression. Each component checks out, but the whole is intentionally deceptive.

Detection: Decompose the claim into individual assertions. Verify each independently. Then assess whether the composite impression is supported by the individual facts. Check for cherry-picked metrics, selective timeframes, and rhetorical devices that guide interpretation toward a predetermined conclusion.

Consequence: The methodology distinguishes between passive omission (Lacks Context) and engineered framing (Misleading). Indicators of purposeful deception - cherry-picked metrics, advocacy context, rhetorical devices - point toward Misleading.

8. Coordinated Legitimate Sourcing

Pattern: Synchronized publication across credible outlets mimicking genuine consensus.

Detection: Timestamp clustering - are multiple “independent” reports published within an unusually narrow window? Language similarity - identical unusual phrasing across outlets? Source overlap - same small pool of quoted experts across all reports?

Consequence: If coordination indicators are present, the apparently independent sources are treated as a single coordinated source for confidence purposes.
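Two of the indicators above can be sketched directly: timestamp clustering and shared unusual phrasing. The two-hour window, the n-gram size, and the similarity threshold are illustrative assumptions.

```python
# Sketch of two coordination indicators. Window, n-gram size, and threshold
# are assumptions for illustration.

def clustered(timestamps_hours: list, window: float = 2.0) -> bool:
    """True if all publications fall within an unusually narrow window."""
    return max(timestamps_hours) - min(timestamps_hours) <= window

def phrase_overlap(a: str, b: str, n: int = 4) -> float:
    """Jaccard similarity over word n-grams; identical unusual phrasing
    pushes this toward 1.0."""
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

reports = ["officials described the move as a bold and unprecedented pivot",
           "sources described the move as a bold and unprecedented pivot"]
clustered([9.0, 9.5, 10.2])                   # within ~1.2 hours → True
phrase_overlap(reports[0], reports[1]) > 0.5  # shared phrasing → True
```

Neither signal alone proves coordination; the methodology treats them as indicators that, together, justify collapsing the sources for confidence purposes.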

9. Anchoring

Pattern: A true, easily verified fact embedded alongside a false assertion. The true fact transfers credibility to the false claim.

Detection: Decompose multi-clause claims. Verify each clause independently. Assess whether a true element is serving as an anchor for a false payload.

Consequence: Multi-clause claims are rated on the composite, not the anchor. A true setup does not salvage a false core assertion.
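The rate-on-the-composite rule can be sketched with a toy rating scale. The scale and its labels are illustrative assumptions, not the methodology's actual verdict categories.

```python
# Sketch: the composite rating is driven by the weakest clause, not the
# true anchor. Rating scale is an illustrative assumption.

RATINGS = {"true": 2, "unsupported": 1, "false": 0}

def composite_rating(clause_ratings: list) -> str:
    """A true setup does not salvage a false core assertion."""
    return min(clause_ratings, key=lambda r: RATINGS[r])

# "Unemployment rose in March [true], because of the new policy [false]."
composite_rating(["true", "false"])  # → "false"
```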

10. Data Disappearance Exploitation

Pattern: The removal of government data collection programs is weaponized. Two variants: (a) the absence of new data is cited as proof that no problem exists, (b) the elimination of the program is reframed as evidence that its historical data was unreliable.

Detection: Check whether the relevant government data source is still publishing. Consult the Institutional Reliability Index for the agency and function. Identify whether the claim depends on the data gap. Consult alternative data sources: international equivalents, state-level programs, academic research, independent monitoring systems.

Consequence: The data gap is acknowledged but does not create a presumption either way. Alternative sources (comparison anchors from the IRI) are consulted to fill the gap.
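The no-presumption rule can be sketched as a small decision function. The return strings and the flat list of alternatives are assumptions for illustration.

```python
# Sketch of the data-gap procedure: a discontinued series creates no
# presumption either way; alternative sources fill the gap when available.

def assess_with_gap(primary_still_publishing: bool,
                    alternatives: list) -> str:
    if primary_still_publishing:
        return "use primary series"
    if alternatives:
        return "fill gap from: " + ", ".join(alternatives)
    return "insufficient data: no presumption either way"

assess_with_gap(False, ["state-level program", "academic survey"])
# → "fill gap from: state-level program, academic survey"
```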

11. Institutional Capture

Pattern: A formerly reliable institution’s output has been compromised by political interference, defunding, or leadership replacement. The institution’s historical reputation is exploited to lend credibility to its compromised current output.

Detection: Consult the Institutional Reliability Index for a per-agency, per-function assessment. Check for observable degradation indicators: personnel changes, funding cuts, policy directives overriding scientific assessment, unprecedented breaks with peer institutions.

Consequence: The institution's effective tier is downgraded on affected topics. Comparison anchors - independent sources that can verify the same claims - are consulted instead as primary sources, subject to their own confidence ceilings.


The Institutional Reliability Index

The IRI provides per-agency, per-function reliability assessments. Each entry includes:

  • Degradation level (0-4): 0 = nominal, 1 = watch, 2 = significant concerns, 3 = compromised, 4 = non-functional
  • Observable indicators: Specific, documented events supporting the assessment
  • Effective tier: What tier the agency’s output should be treated as on affected topics
  • Comparison anchors: Independent sources that can verify the same claims

The IRI is designed to be updated as institutional conditions change. An agency that recovers independence can be upgraded; an agency that degrades further can be downgraded. The assessment is per-function; an agency may remain reliable for routine data while being compromised on politically sensitive topics.
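The four fields of an IRI entry map naturally onto a small record type, sketched below. The agency name, function, and values shown are hypothetical, not actual IRI assessments.

```python
# Sketch of an IRI entry and per-function lookup. All example values are
# hypothetical, not real assessments.

from dataclasses import dataclass, field
from typing import Optional

DEGRADATION = {0: "nominal", 1: "watch", 2: "significant concerns",
               3: "compromised", 4: "non-functional"}

@dataclass
class IRIEntry:
    agency: str
    function: str                       # assessments are per-function
    degradation_level: int              # 0-4, per the DEGRADATION scale
    observable_indicators: list = field(default_factory=list)
    effective_tier: int = 1
    comparison_anchors: list = field(default_factory=list)

def lookup(index: dict, agency: str, function: str) -> Optional[IRIEntry]:
    """Per-function lookup: the same agency may differ across functions."""
    return index.get((agency, function))

entry = IRIEntry("ExampleAgency", "politically sensitive topic", 3,
                 ["leadership replacement"], effective_tier=4,
                 comparison_anchors=["independent monitor"])
index = {("ExampleAgency", "politically sensitive topic"): entry}
lookup(index, "ExampleAgency", "politically sensitive topic").effective_tier  # → 4
```

Keying the index on (agency, function) pairs captures the per-function design: the same agency can hold a nominal entry for routine data alongside a compromised entry for politically sensitive topics.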

The IRI currently covers US federal agencies. International institutions (TurkStat, China NBS) were also assessed in the GTS-C validation.