Right Athlete, Right Time, Right Test Needs Receipts

A Slogan Isn’t a Control System

David Howman, the Chair of the Athletics Integrity Unit (AIU), has now said the quiet part out loud: anti-doping has stalled, and sophisticated evasion exists at the elite level. The AIU’s response is a clean, memorable phrase: “the right athlete, right time, right test.” It sounds like an upgrade. It’s also not a plan.

I went looking for the operating model behind the phrase. I didn't want tradecraft or targeting logic; I wanted the governance blueprint that proves this isn’t just a rebranding exercise. Based on public records, that blueprint doesn't exist. There is no transparency package, no concrete roadmap, and no timeline for when high-level intent becomes an auditable reality.

That’s a problem for the people funding the sport. Sponsors and broadcasters aren’t underwriting a slogan; they’re underwriting legitimacy. If the system is shifting from “random volume” to “targeted intelligence,” the AIU needs to show enough proof that the shift is real without handing dopers a playbook.

A commenter under my last piece made the most practical suggestion I’ve heard in this whole debate: If athletes participating in Enhanced Games are openly saying what they’re using, why not treat that as intelligence and R&D? Take voluntary transparency, study the markers, improve the trigger library, and shrink blind spots.

New track athletes are joining the Enhanced Games, and the roster is expanding. Pretending that data stream doesn’t exist doesn’t protect the sport; it just delays learning. That suggestion matters because it turns “anti-doping is always behind” from resignation into a concrete operating choice: shorten the learning loop or keep losing it.

Who Does What

This whole ecosystem is easier to understand when you separate the roles. World Athletics (WA) runs the sport’s core assets: eligibility, records, championships, and the commercial product built on top of those things. The AIU is the enforcement arm WA set up to protect that product, including anti-doping and integrity cases. The World Anti-Doping Agency (WADA) is the global rule-setter: it defines what counts as doping, what labs must do, and what standards everyone is expected to follow.

WADA compliance tells you the system exists. It doesn’t tell you whether the system is effective against the hardest cases. That’s why “right athlete, right time, right test” needs receipts.

The Boundary: What Can’t Be Public vs What Must Be

Some details shouldn’t be public. Nobody needs athlete whereabouts specifics, active investigative leads, or exact thresholds that can be gamed. That kind of disclosure would actively help rule-breakers. Sponsors and partners still deserve something better than “trust us, bro” vibes. The minimum viable answer is governance, coverage, resourcing, and outcomes, reported in a way that protects privacy and doesn’t compromise open work.

The boundary could look something like this:

  • Not public: Athlete-specific whereabouts, active investigations, exact decision thresholds, informant identities

  • Must be public: Targeting governance, critical-hours coverage rates, specialized testing rates, I&I resourcing, outcome metrics that don’t expose individuals

Those boundaries are what make the audits reasonable, not reckless.

The AIU can’t publish tradecraft, and it shouldn’t. Sponsors and broadcasters still need proof the system is improving against sophisticated cheating. That proof isn’t tactics, it’s auditable governance, coverage, resourcing, and outcomes. The AIU has already signaled a shift toward intelligence-led targeting, but the public version still stops at slogans. The audits below are the minimum viable disclosures that would prove the shift is real without compromising investigations.

Audit 1: “Right Athlete” Should Mean Governed Targeting, Not Vibes

The AIU is telling the world it’s moving away from “random testing volume” and toward risk-based targeting. That’s a smart direction if it’s true. The problem is that “targeting” can mean anything from disciplined intelligence work to a quiet internal list nobody can scrutinize. Sponsors and athlete reps don’t need to know who’s on the list. They do need to know there’s a real governance chain, real decision gates, and real controls against bias and vendettas.

Below are the minimum standards I believe the AIU should adopt and publish:

  • Risk-input categories: List the types of red flags that put an athlete on the radar, such as sudden jumps in performance, irregular blood data, missing tests, tips from whistleblowers, or connections to coaches with bad reputations.

  • Decision gates: Explain the step-by-step process of how a case moves from a general review to a high-priority investigation, without revealing the specific numbers that trigger a test.

  • Approval chain: Identify which job titles have the power to order a "target" test and who is required to sign off on those missions to ensure a clear chain of command.

  • Bias controls: Detail the internal checks used to keep the system fair, such as double-checking for conflicts of interest, making sure different people handle different parts of the process, and having a way to review claims of unfair targeting.

Targeting can’t stay a black box if the whole strategy now depends on it.

Audit 2: “Right Time” Should Mean Measurable Critical-Hours Coverage

“Right time” is the AIU’s way of admitting what everyone already knows: timing matters. Some substances and methods are harder to catch when your testing window is predictable, limited, or operationally constrained by geography and staffing. Nobody is asking the AIU to publish mission schedules. Stakeholders should be asking whether the AIU is actually able to reach athletes during high-risk windows in the real world, not just on paper.

Minimum standard the AIU should adopt and publish:

  • Critical-hours coverage: Report the percentage of tests that happen during late-night or early-morning hours and show if that number is increasing or decreasing year-over-year.

  • Capability map: Provide a simple heat map showing which parts of the world have strong, moderate, or weak testing presence so stakeholders can see the global reach.

  • Execution model: Disclose how much of the work is done by the AIU’s own staff versus how much they rely on local national agencies to carry out tests.

  • Constraints statement: List the main real-world problems that limit testing, such as travel restrictions, visa delays, or a lack of staff in certain regions, without naming specific training camp addresses.
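
To make the first of those metrics concrete, here is a minimal Python sketch of how a “critical-hours coverage” rate could be computed from an anonymized test log. The field names, the sample data, and the 22:00–06:00 window are my assumptions for illustration, not AIU definitions.

```python
from datetime import datetime

# Hypothetical anonymized test log. Field names and the 22:00-06:00
# "critical hours" window are illustrative assumptions, not AIU definitions.
tests = [
    {"collected_at": "2023-07-14T05:30", "year": 2023},
    {"collected_at": "2023-07-14T14:00", "year": 2023},
    {"collected_at": "2024-02-01T23:15", "year": 2024},
    {"collected_at": "2024-02-02T10:45", "year": 2024},
    {"collected_at": "2024-03-10T04:20", "year": 2024},
]

def is_critical_hours(ts: str) -> bool:
    """True if the collection time falls in the assumed 22:00-06:00 window."""
    hour = datetime.fromisoformat(ts).hour
    return hour >= 22 or hour < 6

def coverage_by_year(log):
    """Percentage of tests collected during critical hours, per year."""
    rates = {}
    for year in sorted({t["year"] for t in log}):
        subset = [t for t in log if t["year"] == year]
        hits = sum(is_critical_hours(t["collected_at"]) for t in subset)
        rates[year] = round(100 * hits / len(subset), 1)
    return rates

print(coverage_by_year(tests))  # → {2023: 50.0, 2024: 66.7}
```

The point of the sketch is that the published number is trivial to produce and reveals nothing about individual athletes; a year-over-year pair of percentages is the whole disclosure.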

This is coverage accountability, not operational self-sabotage.

Audit 3: “Right Test” Should Mean Published Rates and a Trigger Framework

“Right test” is where the slogan can become measurable. The AIU has already acknowledged, in substance, that not every sample gets every analysis. That’s normal. It’s also exactly why total test counts can be misleading. If most samples only receive baseline screens, then a high test volume can still produce a weak detection posture against sophisticated strategies. Sponsors should be able to see, at a category level, how often the AIU escalates samples to more specialized analysis and what kinds of signals trigger that escalation.

Minimum standard the AIU should adopt and publish:

  • Specialized analysis rate: Show the percentage of samples that get basic testing versus those that undergo deep-dive lab work for specific performance-enhancing substances.

  • Trigger framework categories: List what causes a sample to be flagged for extra testing, such as a tip, a weird blood reading, a sudden performance spike, or a history of missed tests.

  • Capacity bottlenecks: Disclose the actual limits of the system—such as lab costs or busy schedules—and explain the rules used to decide which tests are most important when resources are tight.

  • Outcome linkage: Provide a breakdown of how athletes are actually caught, showing the results of blood data versus standard tests versus investigations, so sponsors can see which strategies are working.
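
As a sketch of what “outcome linkage” reporting could look like, here is a short Python example that aggregates sanction counts and shares by detection route. The route labels and counts are invented for illustration; they are not an official AIU taxonomy or real case data.

```python
from collections import Counter

# Hypothetical case records: each sanction tagged with the route that
# produced it. Labels are illustrative, not an official AIU taxonomy.
cases = [
    "abp",            # Athlete Biological Passport (blood data)
    "standard_test",
    "investigation",
    "abp",
    "standard_test",
    "investigation",
    "investigation",
]

def outcome_linkage(case_routes):
    """Counts and percentage shares of sanctions, grouped by detection route."""
    counts = Counter(case_routes)
    total = sum(counts.values())
    return {route: {"count": n, "share_pct": round(100 * n / total, 1)}
            for route, n in counts.most_common()}

for route, stats in outcome_linkage(cases).items():
    print(route, stats)
```

A category-level table like this is enough for sponsors to see which strategies actually produce sanctions, without exposing any individual case.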

This shifts public reporting from “how many” to “how serious.”

Audit 4: “Intelligence Pivot” Should Mean Resourcing + Outputs, Not Speeches

This is the center of gravity. Sophisticated cheating doesn’t get beaten by urine cups alone. The AIU has positioned Investigations & Intelligence (I&I) as the weapon for the hardest cases: networks, trafficking, professional enablers, and non-analytical violations like evasion and tampering. That’s the right framing. The problem is that the public still can’t see whether I&I is being resourced like a real enforcement capability or treated like a side department that exists for credibility optics.

Minimum standard the AIU should adopt and publish:

  • I&I investment trend: Report the percentage of the total budget and the number of staff dedicated to investigations each year to show if the commitment to "intel" is growing.

  • Non-analytical yield: Publish the total number of people sanctioned for non-analytical violations, such as trafficking, tampering with samples, or coaches helping athletes cheat.

  • Cycle-time ranges: Show the average time it takes from getting a tip to starting a mission, filing a charge, and reaching a final verdict, reported in general time brackets.

  • Partner KPI set: Provide a simple list of numbers that sponsors can track, such as the volume of whistleblower tips and how many cases are brought against support staff like doctors or agents.

If the AIU wants credibility, this is where it earns it.

Benchmark Option: The Commenter’s Wedge, Turned Into a Real R&D Pipeline

This follow-up article was prompted by a comment that cut through the usual moral theater. The commenter pointed out that swimming and track are dealing with the same structural issue: anti-doping is often one step behind sophisticated use. Then they offered a simple question: if some athletes in non-sanctioned contexts are being transparent about what they’re using, why not test them to learn what markers show up? That isn’t an argument to normalize doping. It’s an argument to stop wasting a live R&D opportunity.

This matters now more than ever. New track-and-field athletes are joining the Enhanced Games in real time. That means the sport is watching a growing pool of athletes who may be willing to talk openly in a way sanctioned sport never allows. The AIU should be evaluating whether that transparency can be used to improve detection science faster, under strict legal and governance firewalls.

The AIU should publicly develop a research program that is strictly separated from its enforcement arm, ensuring that any data shared for research can never be used to punish the athletes who participate.

Minimum standard design features:

  • Legal and operational firewall: Establish a separate research stream with clear legal separation from enforcement use.

  • Consent and data rules: Publish consent terms, anonymization rules, data retention limits, and withdrawal rights.

  • Independent oversight: Create an external oversight body with authority to audit the firewall and publish findings.

  • Evaluation metrics: Publish what “success” means (new marker libraries, validated methods, reduced blind spots, adoption into routine testing standards).

This is how you shorten the learning loop without turning cooperation into a trap.

Cross-Sport Comparison: Focus on Failure Modes, Not Gossip

Other sports have already tried intelligence-led testing, and many stumbled at execution: legal loopholes, no authority to investigate across borders, or leadership too close to the business side of the sport. If the AIU wants its new plan to work, it has to prove it has fixed these structural flaws by publishing real results and rules, not just goals.

What this section should benchmark:

  • Common obstacles: Identify the main issues that stop progress, such as privacy laws, lack of power to investigate in other countries, lab backlogs, or the risk of investigators being too close to the people they're supposed to monitor.

  • Why cases fail: Explain the specific reasons why a doper wasn't caught or why an investigation couldn't be turned into a solid legal case.

  • The operational fix: Detail the specific changes made to the system—if any—to make sure the same mistake or failure doesn't happen twice.

  • Measuring the win: Show the real-world results of those changes, such as the number of coaches or agents sanctioned and the success rate of investigations in breaking up doping rings.

Receipts, Not Reassurance

“Right athlete, right time, right test” is a good goal, but it isn’t proof of progress. The people who fund the sport—the sponsors and broadcasters—don’t need to know every secret investigative tactic. However, they should demand to see the basic receipts: clear rules, enough staff, and actual results that justify the power the AIU has over athletes. Without this information, sponsors are essentially gambling on the sport’s reputation without any independent proof that the system is actually working.
