Over the past few weeks there have been a few pissing contests, some with my participation, over the benefits and necessity of large-n studies. In one case, two compounds with similar indications had trials of n = 12 and n = 1,500, and the argument was over whether the 1,500-patient trial would, by size alone, produce more believable, i.e. statistically significant, results.
Naturally, I piped up with the observation, standard in the quant community, that the larger the delta between control and active arms, the fewer observations (n) one needs to reach statistical significance. With big differences, small counts suffice.
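The arithmetic behind that observation is easy to sketch. Under the usual normal approximation, the per-arm sample size for a two-sided two-arm comparison scales as 1/d², where d is the standardized effect size (Cohen's d: the delta divided by the standard deviation). A minimal sketch, stdlib Python only; the function name and the 80%-power default are my choices, not anything from the trials above:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(cohens_d, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sided, two-sample
    comparison, using the standard normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z(power)            # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / cohens_d) ** 2)

print(n_per_group(0.2))  # teeny delta: hundreds of patients per arm
print(n_per_group(2.0))  # huge delta: a handful per arm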
Well, boy howdy. Today Sage released data on its postpartum depression drug, SAGE-547. The measure used went from 26.5 (before) to 1.8 (after), with p = 0.001 on a paired t-test. There were 4 women in the trial. 4.
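To see how 4 patients can produce p = 0.001, run the paired t-test mechanics on made-up numbers. The per-patient scores below are hypothetical, chosen only so the means land near the reported 26.5 and 1.8; Sage did not publish individual values here. With differences this large and this consistent, the t-statistic is enormous even at 3 degrees of freedom:

```python
from math import atan, pi, sqrt
from statistics import mean, stdev

# Hypothetical per-patient scores (illustrative only), roughly
# matching the reported group means of 26.5 before and 1.8 after.
before = [27.0, 26.0, 25.0, 28.0]
after  = [ 2.0,  1.0,  2.0,  2.0]

diffs = [b - a for b, a in zip(before, after)]
n = len(diffs)
# Paired t-statistic: mean difference over its standard error, df = n - 1.
t = mean(diffs) / (stdev(diffs) / sqrt(n))

def t_cdf_df3(x):
    """Closed-form Student-t CDF for df = 3."""
    return 0.5 + (atan(x / sqrt(3)) + sqrt(3) * x / (x * x + 3)) / pi

p = 2.0 * (1.0 - t_cdf_df3(abs(t)))  # two-sided p-value
print(f"t = {t:.1f}, df = {n - 1}, p = {p:.2g}")
```

A delta of ~25 points against within-patient noise of about one point is a standardized effect size in the double digits, which is why n = 4 clears the significance bar with room to spare.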
Matthew Herper has a longish write-up, with the usual caveats about the study, the drug, and so forth. No arguments from me. Big delta, little trial.
On the other hand, crafty bio-pharma will fund trials with thousands of patients to generate stat sig on teeny, tiny deltas, all in the hope of getting the FDA to approve. When Pharma weeps that it costs $$$ to get a drug approved, remember why: large studies designed to demonstrate stat sig on minuscule differences eat a lot of moolah. Such a waste. With wily sales reps, these drugs might still make some money. Naturally, Sage is in Cambridge (not the one in England) and run by a Harvard guy.