Who Guards the Guardians? Governance, Ethics & Why Innovation Thrives
Spoke 5

The HCC is a public‑minded utility: AI‑audited, anomaly‑aware, and openly governed. These guardrails don’t smother innovation—they stabilize it so genuine efficiency wins.

The Governance Problem

Healthcare’s prices are shaped by invisible contracts and black‑box algorithms. Compliance theater replaces trust. When no one can see the model—or contest it—capture is inevitable.

Fix: verifiable rails + civic oversight. Algorithms propose; humans ratify; the public can audit both.

A Public‑Minded Utility Model

  • Independent mission: non‑governmental and grant‑supported (mission‑focused like a NASA‑style research body), with utility‑grade obligations.
  • Board composition: one‑third public/patient advocates, one‑third technical/actuarial experts, one‑third provider/payer/manufacturer reps—each with a shadow seat (civic observer) empowered to trace and contest.
  • Conflict firewalls: disclosures, cooling‑off periods, term limits, and public minutes.
  • Open rules of the road: corridor policies, dispute precedents, and solvency metrics are published and versioned.

The AI Ethics Office

Models that set corridors or allocate risk are powerful. They must be auditable, explainable, and actively tested for bias.

What it audits

  • Corridor fairness: check for disparate impact across age, disability, rurality/ZIP proxies, and comorbidity clusters.
  • SRV equity: ensure means‑based contributions and catastrophic triggers do not burden specific groups.
  • Drift & robustness: monitor model drift and stress‑test against shortage spikes, new biosimilars, and site‑of‑care shifts.
  • Explainability: publish model cards, feature importances, counterfactuals; provide per‑case rationale in disputes.
  • Privacy & provenance: dataset documentation, de‑identification, and reproducible training pipelines.

Every model change is proposed in the open, red‑teamed, and then ratified by mixed committees with public notes.
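
One common way to implement the drift monitor named above is the population stability index (PSI). This is a pure‑Python sketch, not the HCC's actual tooling; the rule‑of‑thumb thresholds (under 0.1 stable, over 0.25 drifted) are industry convention, not HCC policy.

```python
import math
import random

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of one feature.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 drifted."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0
    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)  # clamp above range
            counts[max(i, 0)] += 1                    # clamp below range
        # Floor empty bins at a tiny probability to avoid log(0).
        return [max(c / len(sample), 1e-6) for c in counts]
    b, c = frac(baseline), frac(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

rng = random.Random(0)
stable = psi([rng.gauss(0, 1) for _ in range(5000)],
             [rng.gauss(0, 1) for _ in range(5000)])
drifted = psi([rng.gauss(0, 1) for _ in range(5000)],
              [rng.gauss(0.8, 1) for _ in range(5000)])  # shifted mean
```

In practice a monitor like this would run per feature and per model output, with breaches feeding the public drift reports.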

Fraud‑Waste‑Abuse (FWA) Core

Move from “pay‑and‑chase” to “prevent‑and‑prove.”

Real‑time anomaly detection (examples)

  • Provider pattern anomalies: improbable volumes (e.g., hundreds of high‑RVU procedures/day), upcoding clusters, or sudden code‑mix shifts that don’t match clinical reality.
  • Impossible procedures: sex‑ or anatomy‑discordant codes; procedures on dates without corresponding encounters; J‑code units beyond safe dosing windows.
  • Duplicate/phantom claims: same NPI/NDC/unit/timestamp combinations within impossible intervals; duplicate imaging across sites within hours.
  • Site‑of‑care arbitrage: buy‑and‑bill markups outside corridor; referral loops where ownership relationships exist.
  • Supply chain mismatches: NDC billed not matching lot/GS1; acquisition basis inconsistent with declared contract.

Flags are explainable; providers can self‑remediate in‑portal; escalations are time‑boxed; precedents feed back into corridor tuning.
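
The duplicate/phantom pattern above reduces to keying claims on provider (NPI), drug (NDC), and units, then flagging repeats inside an impossible interval. The record fields and the one‑hour window in this sketch are illustrative assumptions, not an HCC schema.

```python
from datetime import datetime, timedelta

def flag_duplicates(claims, window=timedelta(hours=1)):
    """Flag claims repeating the same NPI/NDC/units key within an
    impossibly short interval; each flag carries its own rationale."""
    last_seen = {}
    flags = []
    for claim in sorted(claims, key=lambda c: c["ts"]):
        key = (claim["npi"], claim["ndc"], claim["units"])
        prev = last_seen.get(key)
        if prev is not None and claim["ts"] - prev["ts"] < window:
            flags.append({
                "claim_id": claim["id"],
                "duplicate_of": prev["id"],
                "reason": f"same NPI/NDC/units within {window}",
            })
        last_seen[key] = claim
    return flags

claims = [
    {"id": "A1", "npi": "123", "ndc": "0002-1433", "units": 10,
     "ts": datetime(2025, 1, 6, 9, 0)},
    {"id": "A2", "npi": "123", "ndc": "0002-1433", "units": 10,
     "ts": datetime(2025, 1, 6, 9, 20)},   # same key 20 minutes later
    {"id": "B1", "npi": "456", "ndc": "0002-1433", "units": 10,
     "ts": datetime(2025, 1, 6, 9, 25)},   # different provider: clean
]
flags = flag_duplicates(claims)
```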

The Public Dashboard

  • Corridor map: reference costs and floats by Node, region, and site‑of‑care.
  • SRV solvency: reserves, trigger events, contribution bands, and dividend outflows.
  • Arbitration stats: time‑to‑resolution, reversal rates, common patterns—plus public decisions.
  • Ethics reports: bias audits, model revisions, drift monitors, red‑team summaries.

Oversight isn’t a PDF report—it’s a living, queryable surface.

Why Innovation Thrives

  • Bounded competition: price corridors (e.g., ±15%) prevent rent‑seeking while rewarding operational excellence.
  • Stable rails: predictable settlement lets manufacturers and providers invest in genuine quality, not contract gymnastics.
  • Open integration: entrepreneurs plug into APIs (eligibility, cost stack, corridor hints, arbitration status) to build service modules—PBMs evolve into plan operating systems, paid for stewardship, not spread.
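
The corridor mechanics are simple enough to sketch. A minimal check, assuming a symmetric ±15% float around a published reference cost (the per‑Node float would come from the corridor map):

```python
def corridor_check(price, reference_cost, float_pct=0.15):
    """Is a negotiated price inside the corridor (reference cost +/- float)?
    Returns the bounds too, so a rejection carries its own rationale."""
    floor = reference_cost * (1 - float_pct)
    ceiling = reference_cost * (1 + float_pct)
    return {"in_corridor": floor <= price <= ceiling,
            "floor": floor, "ceiling": ceiling}

inside = corridor_check(110.0, 100.0)   # within the +/-15% band
outside = corridor_check(120.0, 100.0)  # above the ceiling
```

Rewarding operational excellence means a provider can keep any margin earned inside the band; the ceiling simply caps rent‑seeking.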

This isn’t anti‑profit; it’s anti‑predation. Value still wins—now it has to be real.

Practitioner Appendix: Technical Chops

A) AI Ethics Office: audit surfaces

  • Price‑corridor bias tests: compare residuals by protected proxies; require adjustments or exception logic if disparate impact detected.
  • SRV fairness: Shapley/feature attribution for contribution bands; ensure income × health‑status interactions don’t punish chronic illness.
  • Model cards & reproducibility: dataset lineage, training params, evaluation metrics, known limitations, and change logs published.
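
A first pass at the residual comparison in (A) could look like the sketch below. The group labels, tolerance, and use of raw means are illustrative assumptions; a production audit would add significance tests and effect sizes.

```python
from statistics import mean

def residual_gap(residuals_by_group, tolerance=0.02):
    """Compare mean pricing-model residuals across groups (e.g., rural vs
    urban proxies). Returns each group's gap from the overall mean, the
    worst gap, and whether it exceeds tolerance."""
    overall = mean(r for group in residuals_by_group.values() for r in group)
    gaps = {g: mean(rs) - overall for g, rs in residuals_by_group.items()}
    worst = max(gaps.values(), key=abs)
    return {"gaps": gaps, "worst_gap": worst, "flag": abs(worst) > tolerance}

report = residual_gap({
    "rural": [0.05, 0.06, 0.07],   # model systematically over-prices
    "urban": [-0.01, 0.00, 0.01],
})
```

A `flag` of `True` would trigger the adjustment or exception logic the bullet above requires.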

B) FWA Core: detection → action pipeline

  1. Detect: robust z/MAD outliers; graph‑based loop detection; lot‑NDC mismatches.
  2. Explain: per‑flag rationale (features, thresholds, peers).
  3. Remediate: provider portal for correction; member refunds auto‑issued if over‑collection is detected.
  4. Escalate: unresolved in SLA → arbitration queue; results feed back to models.
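
The detect step can be sketched with the modified z‑score (robust z via median absolute deviation), one of the techniques named in step 1. The threshold and sample data are illustrative.

```python
from statistics import median

def mad_outliers(values, threshold=3.5):
    """Robust outlier detection via the modified z-score. The 0.6745
    factor scales the MAD to be consistent with a standard deviation;
    returns (index, score) pairs beyond the threshold."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []  # degenerate case: over half the values are identical
    scores = [0.6745 * (v - med) / mad for v in values]
    return [(i, s) for i, s in enumerate(scores) if abs(s) > threshold]

# Daily high-RVU procedure counts for one provider; day 6 is the anomaly.
counts = [14, 12, 15, 13, 16, 14, 240, 15]
flags = mad_outliers(counts)
```

The per‑flag rationale in step 2 falls out naturally: the score, threshold, and peer median are all available at detection time.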

C) Governance spec (sketch)

  • Board: 9 voting seats (3 public/patient, 3 technical/actuarial, 3 provider/payer/manufacturer) + 9 shadow seats.
  • Quorum: 6 votes incl. ≥1 from each block; supermajority for corridor/fee changes.
  • Conflicts: real‑time disclosure registry; recusal rules; public minutes.
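
The quorum and voting rules above are mechanical enough to encode. In this sketch, "supermajority" is assumed to mean two‑thirds of the nine voting seats; the source does not fix the fraction.

```python
def ratify(votes, change_type, seats=9):
    """Sketch of the board rule: votes is a list of (block, cast_yes)
    ballots, block being 'public', 'technical', or 'industry'. Quorum
    needs >= 6 ballots including every block; corridor/fee changes need
    a supermajority (assumed 2/3 of all voting seats); other changes
    pass on a simple majority of ballots cast."""
    blocks = {block for block, _ in votes}
    if len(votes) < 6 or blocks != {"public", "technical", "industry"}:
        return False  # quorum not met
    yes = sum(1 for _, cast_yes in votes if cast_yes)
    if change_type in ("corridor", "fee"):
        return yes * 3 >= seats * 2   # 2/3 of 9 seats = 6 yes votes
    return yes * 2 > len(votes)       # simple majority of ballots cast

ballots = [("public", True), ("public", True), ("technical", True),
           ("technical", True), ("industry", True), ("industry", False)]
budget_passes = ratify(ballots, "budget")      # 5 of 6 yes: majority
corridor_passes = ratify(ballots, "corridor")  # 5 yes < 6 required
```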

D) Open APIs (examples)

  • GET /nodes/{id}/corridor → reference cost, float, last‑updated, audit hash.
  • POST /claims/clear → cost stack + corridor validation + member share computed on net.
  • POST /disputes → file, simulate, and subscribe to precedent updates.
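
The audit hash on the corridor endpoint implies a client can recompute and verify it. This sketch assumes a canonical JSON serialization (sorted keys, compact separators) and illustrative field names; neither is the published HCC wire format.

```python
import hashlib
import json

def audit_hash(payload):
    """SHA-256 of a canonical JSON serialization of the payload."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify(resp):
    """Recompute the hash over everything except the hash field itself."""
    body = {k: v for k, v in resp.items() if k != "audit_hash"}
    return audit_hash(body) == resp["audit_hash"]

# Shape a GET /nodes/{id}/corridor response might take (illustrative):
corridor = {"node": "n-42", "reference_cost": 100.0, "float": 0.15,
            "last_updated": "2025-01-06"}
response = {**corridor, "audit_hash": audit_hash(corridor)}
tampered = {**response, "reference_cost": 500.0}
```

Any party can then prove a corridor value was, or was not, what the utility published at a given time.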

Next — Spoke 6: The First 100 Days — Launch Plan

Missed the economics? Read Spoke 4: Cents & Sensibility, or return to the HCC Hub.

Join the Founding Circle