Quotulatiousness

June 12, 2013

Changing the FDA to meet the new needs of personalized medicine

Filed under: Health, Science — Nicholas @ 08:31

At Marginal Revolution, Alex Tabarrok links to a new paper by Peter Huber:

In a brilliant new paper (pdf) (html), Peter Huber draws upon molecular biology, network analysis, and Bayesian statistics to make some very important recommendations about FDA policy.

[. . .]

The current regime was built during a time of pervasive ignorance, when the best we could do was throw a drug and a placebo against a randomized population and then count noses. Randomized controlled trials are critical, of course, but in a world of limited resources they fail when confronted by the curse of dimensionality. Patients are heterogeneous and so are diseases. Each patient is a unique, dynamic system, and at the molecular level diseases are heterogeneous even when symptoms are not. In just the last few years we have expanded breast cancer into first four and now ten different types of cancer, and the subdivision is likely to continue as knowledge expands. Match heterogeneous patients against heterogeneous diseases and the result is a high-dimensional system that cannot be well navigated with expensive randomized controlled trials (a toy simulation below, after the quoted passage, makes this dilution problem concrete). As a result, the FDA ends up throwing out many drugs that could do good:

    Given what we now know about the biochemical complexity and diversity of the environments in which drugs operate, the unresolved question at the end of many failed clinical trials is whether it was the drug that failed or the FDA-approved script. It’s all too easy for a bad script to make a good drug look awful. The disease, as clinically defined, is, in fact, a cluster of many distinct diseases: a coalition of nine biochemical minorities, each with a slightly different form of the disease, vetoes the drug that would help the tenth. Or a biochemical majority vetoes the drug that would help a minority. Or the good drug or cocktail fails because the disease’s biochemistry changes quickly but at different rates in different patients, and to remain effective, treatments have to be changed in tandem; but the clinical trial is set to continue for some fixed period that doesn’t align with the dynamics of the disease in enough patients.

    Or side effects in a biochemical minority veto a drug or cocktail that works well for the majority. Some cocktail cures that we need may well be composed of drugs that can’t deliver any useful clinical effects until combined in complex ways. Getting that kind of medicine through today’s FDA would be, for all practical purposes, impossible.
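
To see how a biochemical minority can be averaged away, consider a toy simulation (my own illustration, not from Huber's paper; all response rates and sample sizes are made up): a drug that raises the response rate from 30% to 80% in the one patient in ten who carries the right biomarker, but does nothing for everyone else, can easily fail a pooled trial while working dramatically in its subgroup.

    # Toy simulation of subgroup dilution in a pooled randomized trial.
    # All numbers are hypothetical illustrations, not data from the paper.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 300                   # patients per arm (illustrative)
    responder_frac = 0.10     # fraction of patients the drug actually helps

    # Outcome: True = improved. Placebo improves 30% of patients; the drug
    # raises that to 80%, but only for biomarker-positive responders.
    placebo = rng.random(n) < 0.30
    is_responder = rng.random(n) < responder_frac
    drug = np.where(is_responder, rng.random(n) < 0.80, rng.random(n) < 0.30)

    # Pooled analysis: the subgroup signal is diluted toward the placebo rate.
    _, pooled_p = stats.fisher_exact([[drug.sum(), n - drug.sum()],
                                      [placebo.sum(), n - placebo.sum()]])
    print(f"drug {drug.mean():.0%} vs placebo {placebo.mean():.0%}, "
          f"pooled p = {pooled_p:.3f}")
    print(f"responder subgroup improvement: {drug[is_responder].mean():.0%}")

Run as written, the pooled comparison hovers around 35% versus 30% and typically fails to reach significance, while the biomarker-positive subgroup shows roughly an 80% response rate: the nine minorities have vetoed the tenth's drug.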

The alternative to the FDA process is large collections of data on patient biomarkers, diseases, and symptoms, all evaluated on the fly by Bayesian engines that improve over time as more data is gathered. The problem is that the FDA is still locked in an old mindset when it refuses to permit any drugs that are not “safe and effective”, despite the fact that these terms can only be defined for a large population by doing violence to heterogeneity. Safe and effective, moreover, makes sense only when physicians are assumed to be following simple, A-to-B, drug-to-disease prescribing rules, and not when they are targeting treatments based on deep, contextual knowledge that is continually evolving.
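
As a minimal sketch of what such a Bayesian engine might look like, here is a simple Beta-Binomial model that updates a response-rate estimate for each (biomarker, drug) pair as patient outcomes arrive. The class, names, and numbers below are hypothetical illustrations of the idea, not a system described in the post or the paper.

    # Sketch of an on-the-fly Bayesian effectiveness tracker.
    # Beta-Binomial model: starting from a Beta(1, 1) prior, the posterior
    # after s successes and f failures is Beta(1 + s, 1 + f).
    from collections import defaultdict

    class EffectivenessTracker:
        def __init__(self):
            # Start agnostic: Beta(1, 1) prior for every (biomarker, drug) pair.
            self.params = defaultdict(lambda: [1.0, 1.0])  # [alpha, beta]

        def record(self, biomarker, drug, improved):
            # Each observed outcome shifts the posterior a little.
            if improved:
                self.params[(biomarker, drug)][0] += 1
            else:
                self.params[(biomarker, drug)][1] += 1

        def estimate(self, biomarker, drug):
            # Posterior mean of the response rate: alpha / (alpha + beta).
            alpha, beta = self.params[(biomarker, drug)]
            return alpha / (alpha + beta)

    tracker = EffectivenessTracker()
    for improved in (True, True, False, True):        # hypothetical outcomes
        tracker.record("HER2+", "drug_X", improved)   # hypothetical names
    print(f"estimated response rate: {tracker.estimate('HER2+', 'drug_X'):.2f}")

The "improves over time" phrase is captured by the update step: every treated patient tightens the estimate for their biochemical subgroup, so prescribing guidance can evolve continuously instead of being frozen at approval time.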
