Wednesday, January 6, 2021

Are the FDA's incentives aligned with the goals of the people?

The FDA is resisting pressure from academics to vaccinate more people with single doses:
"At this time, suggesting changes to the FDA-authorized dosing or schedules of these vaccines is premature and not rooted solidly in the available evidence," Dr. Stephen Hahn, FDA commissioner, and Dr. Peter Marks, director of the FDA's Center for Biologics Evaluation and Research, said in a statement. "Without appropriate data supporting such changes in vaccine administration, we run a significant risk of placing public health at risk."

Apparently, these FDA bureaucrats want more information before making a decision. 

However, we know how to make decisions under uncertainty: minimize expected error costs, or equivalently, maximize expected value. The expected benefit of a one-dose regime (vaccinating twice as many people with a single dose each) is millions of lives.
The simplest argument for First Doses First (FDF) is that 2 × 0.8 > 0.95: two people each protected at roughly 80% by a single dose confer more total immunity than one person protected at 95% by two doses. But there is more to it than that. Perhaps more important is that with FDF we lower R more quickly and reach herd immunity sooner.
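That inequality can be checked directly. (The 0.80 and 0.95 efficacy figures are the illustrative numbers from the argument above, not trial-exact values.)

```python
# Expected number of protected people per two doses, using the
# illustrative efficacy figures from the argument above.
SINGLE_DOSE_EFFICACY = 0.80  # assumed protection after one dose
FULL_COURSE_EFFICACY = 0.95  # assumed protection after two doses

fdf = 2 * SINGLE_DOSE_EFFICACY       # two people, one dose each
standard = 1 * FULL_COURSE_EFFICACY  # one person, two doses

print(fdf, standard, fdf > standard)  # 1.6 0.95 True
```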
Here’s an extreme but telling example. Suppose a population of 300 million needs two-thirds vaccinated to reach herd immunity, you have 100 million doses on hand, and you can administer 100 million doses a month. With FDF you vaccinate 100 million people in the first month and a new 100 million in the second month, and then you are “done”: you can administer second doses more or less at leisure, since you are at herd immunity (yes, I know about overshooting; this is a simple example). If instead you prioritize second doses, you vaccinate 100 million people in the first month and the same 100 million in the second month, which leaves 100 million people at risk for another month; herd immunity does not arrive until the third month. Thus FDF saves 100 million infection-months, which is a big deal.
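The example above can be sketched as a month-by-month simulation. (Numbers are in millions; the two-thirds threshold, the 100-million-doses-per-month capacity, and the strict second-doses-first priority rule are the assumptions of the example, not real rollout data.)

```python
POPULATION = 300                      # millions of people
HERD_THRESHOLD = 2 / 3 * POPULATION   # 200 million once-vaccinated
DOSES_PER_MONTH = 100                 # millions of doses administered

def months_to_herd_immunity(first_doses_first: bool) -> int:
    """Return the first month in which the number of people with at
    least one dose reaches the herd-immunity threshold."""
    once_vaccinated = 0   # people with at least one dose (millions)
    awaiting_second = 0   # people owed a second dose (standard schedule)
    month = 0
    while once_vaccinated < HERD_THRESHOLD:
        month += 1
        doses = DOSES_PER_MONTH
        if not first_doses_first:
            # Standard schedule: second doses take priority.
            second = min(doses, awaiting_second)
            awaiting_second -= second
            doses -= second
        # Remaining doses go to people who have had none yet.
        new_first = min(doses, POPULATION - once_vaccinated)
        once_vaccinated += new_first
        if not first_doses_first:
            awaiting_second += new_first
    return month

print(months_to_herd_immunity(first_doses_first=True))   # 2
print(months_to_herd_immunity(first_doses_first=False))  # 3
```

The one-month gap between the two schedules is exactly the 100 million infection-months in the example.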

The FDA has a long and sorry history of delaying medical innovation, because Type I errors (approving something that turns out to be harmful) are visible, while Type II errors (delaying something that turns out to be beneficial) are not. These bureaucrats seem to be putting their own interests ahead of those of the people they are supposed to protect.

This is a well-known incentive problem: unless we evaluate bureaucrats on expected value rather than on whether they commit visible Type I errors, they will keep choosing costly inaction.