
DISCLOSURES
No outside funding supported the writing of this letter. Ollendorf reports advisory board and consulting fees from DBV Technologies, EMD Serono, Gerson Lehrman Group, and Sarepta Therapeutics, unrelated to the content of this research letter. Naci reports a grant from the Commonwealth Fund, unrelated to the content of this letter. The other authors have nothing to disclose.

differences in entry criteria as well as baseline characteristics (e.g., number of previous chemotherapies) were cited. Other common reasons included outcome measurement (e.g., investigator- vs. patient-reported); timing issues (e.g., outcome assessment at 12 vs. 24 weeks); and study design (e.g., crossover vs. parallel-arm). Findings were similar in a sensitivity analysis without our 5-year filter (Table 1).
We also examined whether the manufacturers of these 42 medicines sought early scientific advice from the European Medicines Agency (EMA) and/or the FDA, using information included in public assessment reports for the former; for the latter, we considered any one of the FDA's opportunities to speed the approval process (accelerated, breakthrough, fast track, or priority) to be a proxy for early advice. 6 We found that two-thirds (28 of 42) of manufacturers sought advice from either regulator (Table 1).

Comments
This analysis was limited by our proxy for early FDA advice, although we note that nearly half of manufacturers sought early advice from the EMA alone. We also focused on 1 HTA organization, albeit one that produces comprehensive and fully public reports. Nevertheless, our findings have clear implications, suggesting that many potential indirect comparisons of

■■ The Supercar Stays in the Garage: Factors Preventing Indirect Comparisons of Novel Medicines Targeting the Same Condition
Regulatory approval of new medicines has increased in recent years. In 2018, the U.S. Food and Drug Administration (FDA) approved a record 59 "novel" medicines (their active ingredients had never before been approved). 1 In many therapeutic areas, it is now common to have several novel products approved contemporaneously that compete for similar patient populations. Examples include patisiran and inotersen for a hereditary form of amyloidosis and galcanezumab and fremanezumab for prevention of migraine. 1,2 Randomized controlled trials (RCTs) that directly compare these types of agents are exceedingly rare.
In the absence of head-to-head RCTs, indirect comparisons are required to evaluate the comparative benefits and harms of multiple treatment alternatives. Methods for evaluating treatments that have never been directly compared have evolved significantly and are now widely used. 3 Inclusion of new evidence in network meta-analysis (NMA), which allows for indirect comparisons of treatments linked through common comparators, has been found to identify clinical signals sooner than more traditional meta-analysis approaches. 4 Such advancements can benefit health technology assessment (HTA) and payer organizations that are tasked with comparing alternative treatments for the same condition.
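As a minimal illustration of the underlying idea (not of any method used in the reports discussed here), the simplest adjusted indirect comparison, the Bucher method, combines two placebo-controlled estimates through their common comparator; NMA generalizes this to larger networks of treatments. The sketch below uses hypothetical log odds ratios:

```python
import math

def bucher_indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    """Adjusted indirect comparison (Bucher method).

    Given effect estimates of A vs. C and B vs. C on an additive
    scale (e.g., log odds ratios), estimate A vs. B via the common
    comparator C. The variances add because the two trials are
    independent.
    """
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# Hypothetical log odds ratios vs. a shared placebo arm:
d_ab, se_ab, (lo, hi) = bucher_indirect_comparison(
    d_ac=-0.60, se_ac=0.20,   # drug A vs. placebo
    d_bc=-0.35, se_bc=0.25,   # drug B vs. placebo
)
print(f"A vs. B log OR: {d_ab:.2f} (SE {se_ab:.2f}), 95% CI [{lo:.2f}, {hi:.2f}]")
```

Because the standard errors of the two source trials combine, the indirect estimate is less precise than a head-to-head trial of comparable size, which is one reason the comparability of the underlying trials matters so much.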

Methods
We were interested in how frequently NMA is used to compare new medicines in an HTA context and what challenges arise in doing so. We systematically identified reports from the Institute for Clinical and Economic Review (ICER), a U.S. HTA organization that produces public reports following FDA approval timelines. We focused on (a) reports published since 2015, when ICER introduced its value framework for medicines, 5 and (b) medicines approved within 5 years of each other, to reduce the likelihood that comparison challenges were due to exogenous changes in definitions or measurement. For comparisons where an NMA was deemed infeasible, 2 investigators categorized the reasons according to the Population, Interventions, Comparators, Outcomes, Timing and Setting (PICOTS) framework. A third investigator resolved any disagreements.

Results
We identified 28 ICER reports published since 2015 with the potential for NMA, representing 80 medicines (Table 1). NMAs were deemed infeasible in 15 reports (54%) and for 42 medicines (53%). Multiple reasons were typically given. Differences in population definitions were cited in 71% of cases; for example, in a review of PARP inhibitors for ovarian cancer,

■■ The Value of a Patient-Level Modeling Approach and Need for Better Reporting in Economic Evaluations of Osteoporosis
The article "Patient-Level Modeling Approach Using Discrete-Event Simulation: A Cost-Effectiveness Study of Current Treatment Guidelines for Women with Postmenopausal Osteoporosis" by Quang A. Le, 1 which was published in the October 2019 issue of JMCP, raised some important points of consideration, especially with regard to our recent recommendations for economic evaluations in osteoporosis, which resulted from an ESCEO-IOF working group with a predominantly U.S. perspective. 2 First, we recognize the value of a patient-level approach to simulating osteoporosis events, which could address some of the limitations of cohort Markov models, namely their lack of a comprehensive way to track each patient's history (memory). Patient-level modeling, using discrete-event simulation or a microsimulation Markov model that provides similar advantages, could better accommodate the natural history of patients with osteoporosis. 3 In the development of our recommendations, experts highlighted the importance of avoiding a hierarchy of fractures in economic modeling and restrictions after fracture events. In addition, we appreciate the Le study's 1 presentation of various patient characteristics and clinical profiles. Our guidelines also recommend the conduct of multiple scenarios according to patient characteristics, such as age and fracture risk.
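As a minimal sketch of what "memory" means in this context (all parameters are purely illustrative, not clinical inputs), a patient-level simulation can let each simulated patient's own fracture history modify her subsequent risk, something a memoryless cohort Markov model cannot do without adding tunnel states:

```python
import random

def simulate_patient(years, base_fracture_risk=0.03,
                     prior_fracture_multiplier=2.0, rng=None):
    """Toy patient-level simulation of yearly fracture events.

    Each simulated patient carries a history: after a first
    fracture, the annual risk is multiplied, reflecting the
    increased risk conferred by prior fractures. Parameters are
    illustrative only.
    """
    rng = rng or random.Random()
    fractures = 0
    for _ in range(years):
        risk = base_fracture_risk * (prior_fracture_multiplier if fractures else 1.0)
        if rng.random() < risk:
            fractures += 1
    return fractures

rng = random.Random(42)
cohort = [simulate_patient(years=20, rng=rng) for _ in range(10_000)]
print("mean fractures per patient:", sum(cohort) / len(cohort))
```

Aggregating over many simulated patients recovers cohort-level outputs while preserving the history dependence that motivates the patient-level approach.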
On the other hand, the Le study has some potential limitations with respect to our guidelines. First, the model lacked a lifetime horizon. A shorter time horizon (such as 10 years 1 ) understates the benefits of effective drugs. For example, the sequential therapy abaloparatide/alendronate was shown to be dominant (greater effect at lower cost) compared with no treatment using a lifetime model horizon, while the cost per quality-adjusted life-year was estimated at $62,861 using a 10-year time horizon. 4 Second, in the selection of osteoporosis treatments, Le omitted sequential therapies. Evidence now supports the concept of sequential therapy, with anabolic therapy initiated first and followed by an antiresorptive, to improve health outcomes in osteoporosis, 5 and sequential therapies should thus be considered relevant alternative options. Third, some model data were not appropriately reported, including treatment side effects, treatment duration, the effect of medication adherence, and treatment effect after discontinuation. Recent studies have suggested rapid bone loss and an increased risk of multiple vertebral fractures after denosumab discontinuation, but this does not seem to have been included. 6,7 Fourth, we would have liked to see more sensitivity analyses on key model parameters (such as treatment effect after discontinuation and time horizon) and probabilistic sensitivity analyses presented as cost-effectiveness acceptability curves.
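The time-horizon point can be illustrated with a deliberately toy deterministic model (all inputs hypothetical and unrelated to the published analyses): when treatment costs accrue early but benefits persist, extending the horizon can move a therapy from a positive cost per QALY to outright dominance:

```python
def net_cost_and_qalys(horizon_years, annual_drug_cost=2000.0,
                       treatment_years=2, annual_qaly_gain=0.02,
                       averted_event_savings=300.0, discount=0.03):
    """Toy deterministic model of discounted net cost and QALY gain.

    Drug costs accrue only during the treatment period, while the
    (hypothetical) savings from averted events and the QALY gain
    persist every year, so a longer horizon accumulates more
    discounted benefit against the same upfront cost.
    """
    net_cost = qalys = 0.0
    for year in range(horizon_years):
        df = 1.0 / (1.0 + discount) ** year  # discount factor
        if year < treatment_years:
            net_cost += annual_drug_cost * df
        net_cost -= averted_event_savings * df
        qalys += annual_qaly_gain * df
    return net_cost, qalys

for horizon in (10, 40):
    cost, qalys = net_cost_and_qalys(horizon)
    if cost < 0:
        print(f"{horizon}-year horizon: dominant "
              f"(saves ${-cost:,.0f}, gains {qalys:.2f} QALYs)")
    else:
        print(f"{horizon}-year horizon: ${cost / qalys:,.0f} per QALY")
```

With these toy inputs the 10-year horizon yields a positive cost per QALY while the 40-year horizon yields net savings, mirroring the qualitative pattern described above.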
All these reasons limit our ability to make a judgment about the reliability of Le's results. Although we do not question the

comparable medicines are precluded by differences in the design, data, and measurement aspects of trials. Regulators may have missed an opportunity to create research standards that enable such comparisons. These advanced methods are therefore akin to a supercar without a suitable test track: nice to look at but impossible to drive.