
How FDA Can Accelerate Efforts to Approve Precision Medicines

by Travis Korte
A representation of DNA

In 2012, the Food and Drug Administration (FDA) declined to approve a Pfizer-designed drug called tafamidis, which was intended to treat patients with a rare nerve disorder called Transthyretin Familial Amyloid Polyneuropathy (TTR-FAP). The FDA said the drug’s clinical trial did not provide sufficient evidence of efficacy, even though the agency’s European counterpart had approved the drug in Europe a year earlier. TTR-FAP affects about 10,000 people worldwide, and the trial was correspondingly small, with only about 100 participants. Larger samples make it easier to detect a drug’s effect, but researchers designing trials for rare diseases such as TTR-FAP often have a difficult time gathering study populations large enough to establish a drug’s efficacy. This problem will only get worse with the rise of precision medicine, in which drug developers create treatments and diagnostics targeting extremely small populations with particular genetic mutations. In addition, FDA evaluators are reluctant to approve drugs whose trials had small samples, because relatively rare harmful effects that did not show up in a trial could arise once a broader population is exposed. Former FDA commissioner Andrew von Eschenbach has noted that drugs that work well in small populations are sometimes abandoned when they do not work as well in larger ones. To address this issue and help enable the use of precision medicines, the FDA should pursue two main strategies: modify the technical approaches used in medical research studies to extract more information from small samples, and rely more on post-market surveillance to ensure that any harmful effects that arise once a drug is on the market are quickly detected and addressed.

Three of the most promising technical approaches to these “small-n” clinical trials are adaptive trial design, predictive enrichment strategies, and Bayesian statistical methods, and the FDA should ensure its evaluators are well-versed in all three. First, adaptive trial design is a relatively new technique that allows researchers to gather data only until an effect is established and then stop, rather than committing to a fixed sample size in advance. This allows researchers to minimize the resources spent on trials, thereby keeping costs down. In particular, a technique called adaptive sample size re-estimation can help ensure that trials with a limited patient base can still adequately detect treatment effects. Although the FDA released guidance on adaptive trial design in 2010 and awareness is growing, stakeholders have not yet reached consensus on design methods or on how to measure the adoption of adaptive designs in practice. The FDA should work with the pharmaceutical industry to develop consensus-based best practices for adaptive designs to encourage more consistent use of these valuable methods.

Second, predictive enrichment, which uses genetic data and other biomarkers to select study populations that are more likely to exhibit drug effects than the general population, can help researchers get the most accurate results out of small studies of precision medicines. Because the expected effect is larger in an enriched population, fewer participants are needed to establish it, which is especially useful in trials of treatments that target very few individuals, as a rough sample-size sketch below illustrates. Predictive enrichment has already been deployed in a handful of successful trials for cancer drugs. The FDA published guidance on enrichment strategies in 2012 and noted that enrichment, which generally takes place during the early patient-selection phase of a trial, is valuable in part because it does not compromise other aspects of a well-controlled trial, such as randomization and blinding. However, predictive enrichment is not suitable for all trials, since a useful patient population for a given drug is not always defined by genetic factors. It is also often inappropriate for drugs designed for larger populations, where results from a small subgroup may not generalize. Nonetheless, the FDA should continue to encourage drugmakers to explore emerging techniques such as predictive enrichment for drugs targeted at very small populations.

Finally, Bayesian statistics, an approach that refines statistical models as evidence accumulates, can help overcome the problem of small patient populations by providing quantitative justification for smaller or shorter trials in some cases. Although also useful for larger-scale clinical trials, Bayesian methods can be especially helpful in enabling researchers to combine data from prior and current trials to deal with the challenge of small sample sizes. Bayesian analysis is often computationally intense, but advances in computing and algorithms have made it far more tractable in recent years. Although trial sponsors have been slow to adopt Bayesian methods, the trend is positive. The FDA issued guidance on the use of Bayesian statistics in 2010 and should continue to champion these methods, both for general-purpose clinical trials and especially for precision medicine trials.
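
To illustrate how prior trial data can be combined with a small current trial, the sketch below uses a simple beta-binomial model in Python. All of the trial counts, the 0.5 discount factor, and the 30 percent efficacy threshold are hypothetical, and the discounted prior is only a rough stand-in for more formal approaches such as power priors; the point is the general idea, not any analysis the FDA prescribes.

```python
# A minimal sketch of a Bayesian analysis for a "small-n" trial.
# All numbers are hypothetical; the approach (a beta-binomial model with a
# prior informed by an earlier trial) is one common way to borrow strength
# from prior data, not a method prescribed by the FDA.
from scipy.stats import beta

# Hypothetical earlier trial: 18 responders out of 40 patients.
prior_responders, prior_n = 18, 40

# Discount the prior data so it does not overwhelm the new trial
# (a simple stand-in for more formal approaches such as power priors).
discount = 0.5
a0 = 1 + discount * prior_responders
b0 = 1 + discount * (prior_n - prior_responders)

# Hypothetical current small-n trial: 14 responders out of 25 patients.
new_responders, new_n = 14, 25

# Posterior distribution for the response rate after the new trial.
posterior = beta(a0 + new_responders, b0 + (new_n - new_responders))

# Probability that the true response rate exceeds a clinically
# meaningful threshold, here 30 percent.
threshold = 0.30
prob_effective = 1 - posterior.cdf(threshold)

print(f"Posterior mean response rate: {posterior.mean():.3f}")
print(f"P(response rate > {threshold:.0%}): {prob_effective:.3f}")
```

Raising the discount factor toward 1 borrows more heavily from the earlier trial, while lowering it toward 0 lets the small new trial stand on its own; how much borrowing is defensible is precisely the kind of question the FDA and trial sponsors need shared standards for.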

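The predictive enrichment approach described above can be illustrated with a similarly rough calculation. The sketch below uses a standard sample-size formula for comparing two means at 5 percent two-sided significance and 80 percent power; the assumed effect size, outcome variability, and 40 percent share of biomarker-positive patients are all hypothetical, and real enrichment decisions involve many further considerations, such as the cost and accuracy of biomarker screening.

```python
# A back-of-the-envelope sketch of why predictive enrichment shrinks the
# required trial size. Assumes a standard two-arm comparison of means and,
# for simplicity, that the drug only works in biomarker-positive patients.
# All numbers are hypothetical.
from math import ceil
from scipy.stats import norm

def patients_per_arm(effect_size: float, sd: float,
                     alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size to detect a difference in means."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / effect_size) ** 2)

# Hypothetical numbers: the drug improves the outcome by 10 points, but only
# in the 40 percent of patients who carry the targeted mutation.
effect_in_marker_positive = 10.0
share_marker_positive = 0.4
sd = 15.0

# Unenriched trial: the effect is diluted across the whole population.
n_unenriched = patients_per_arm(effect_in_marker_positive * share_marker_positive, sd)

# Enriched trial: only biomarker-positive patients are enrolled.
n_enriched = patients_per_arm(effect_in_marker_positive, sd)

print(f"Per-arm sample size without enrichment: {n_unenriched}")
print(f"Per-arm sample size with enrichment:    {n_enriched}")
```

Under these assumptions, restricting enrollment to biomarker-positive patients cuts the required trial size by roughly a factor of six, which is the kind of reduction that can make trials for very small patient populations feasible at all.
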
Increased post-market surveillance can reduce uncertainty around drugs that have not been tested in large populations by ensuring that any ill effects are detected quickly and that the information reaches regulators for review. Some of the major FDA initiatives around post-market surveillance are the Sentinel program, the Adverse Event Reporting System, and the National System for Medical Device Post-Market Surveillance. Sentinel, which is currently being developed, will be the FDA’s next-generation system for tracking safety issues in drugs and medical devices. The FDA’s traditional means of post-market surveillance, such as the Adverse Event Reporting System, have relied on “passive” surveillance, in which the agency collects reports about drugs and medical products from external sources. Sentinel will be an “active” surveillance system, enabling the agency to conduct its own evaluations using electronic health records and other data. Sentinel’s first major pilot, “Mini-Sentinel,” will end in September 2014. Mini-Sentinel has allowed the FDA to investigate the challenges associated with widespread deployment of a medical product safety surveillance system and has lent support to the concept of using such a system to detect safety issues. The FDA should integrate the lessons learned from the pilot into future iterations of the Sentinel system and continue to develop the program with an eye toward active surveillance and rapid response to adverse events. The FDA can also work to improve post-market surveillance in areas where Sentinel will be less useful. For example, while Sentinel will have access to granular administrative and claims data for its investigations, this data often does not contain manufacturer- or brand-specific device information. Moving forward with the National System for Medical Device Post-Market Surveillance can help fill this gap by establishing a protocol for uniquely identifying devices and creating national registries for certain high-risk products.
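
To give a flavor of what detecting safety signals in adverse event report data involves, the sketch below computes a proportional reporting ratio (PRR), one standard disproportionality statistic used in pharmacovigilance. The report counts are hypothetical, and this is not the specific methodology of the Adverse Event Reporting System or Sentinel; real surveillance combines such statistics with minimum report counts, clinical review, and follow-up studies.

```python
# A simplified sketch of disproportionality analysis on spontaneous adverse
# event reports, using the proportional reporting ratio (PRR). The counts
# are hypothetical; this only illustrates the basic idea of flagging
# drug-event pairs that are reported unusually often.

def proportional_reporting_ratio(drug_event: int, drug_other: int,
                                 rest_event: int, rest_other: int) -> float:
    """Rate of the event among reports for the drug of interest, divided by
    the rate of the event among reports for all other drugs."""
    drug_rate = drug_event / (drug_event + drug_other)
    rest_rate = rest_event / (rest_event + rest_other)
    return drug_rate / rest_rate

# Hypothetical report counts for one drug-event combination.
prr = proportional_reporting_ratio(
    drug_event=30,     # reports of the event for the drug of interest
    drug_other=970,    # reports of other events for the drug
    rest_event=500,    # reports of the event for all other drugs
    rest_other=99500,  # reports of other events for all other drugs
)

# A PRR well above 1 (usually combined with minimum report counts and a
# chi-squared test) suggests a signal worth investigating further.
print(f"PRR: {prr:.1f}")
```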

The challenges that drugmakers have faced in seeking FDA approval for treatments for rare diseases will likely mirror some of the challenges they will face with precision medicine. Although the FDA has not yet fully embraced smaller trials, including those associated with precision medicine treatments, it has made some progress. As drugmakers accelerate their efforts to develop more personalized treatments, however, the FDA should move quickly to ensure that the clinical trial approval process can support more “small-n” studies and step up its post-market surveillance activities so that any issues not caught in trials can be swiftly detected and addressed.

Photo: Flickr user Andy Leppard
