Online tools to synthesize real-world evidence of comparative effectiveness research to enhance formulary decision making

Results of randomized controlled trials (RCTs) provide valuable comparisons of 2 or more interventions to inform health care decision making; however, far more comparisons are needed than available time and resources allow. Moreover, RCTs have limited generalizability. Comparative effectiveness research (CER) using real-world evidence (RWE) can increase generalizability and is important for decision making, but the use of nonrandomized designs makes such studies challenging to evaluate. Several tools are available to assist. In this study, we comparatively characterize 5 tools used to evaluate RWE studies in the context of health care adoption decisions: (1) the Good Research for Comparative Effectiveness (GRACE) Checklist, (2) the IMI GetReal RWE Navigator (Navigator), (3) the Center for Medical Technology Policy (CMTP) RWE Decoder, (4) the CER Collaborative tool, and (5) the Real World Evidence Assessments and Needs Guidance (REAdi) tool. We describe each and then compare their features along 8 domains: (1) objective/user/context, (2) development/scope, (3) platform/presentation, (4) user design, (5) study-level internal/external validity of evidence, (6) summarizing the body of evidence, (7) assisting in decision making, and (8) sharing results/making improvements. Our summary suggests that the GRACE Checklist aids stakeholders in evaluating the quality and applicability of individual CER studies. Navigator is a collection of educational resources to guide demonstration of effectiveness, a guidance tool to support development of medicines, and a directory of authoritative resources for RWE. The CMTP RWE Decoder aids in assessing the relevance and rigor of RWE. The CER Collaborative tool aids in assessing credibility and relevance. The REAdi tool aids in refining the research question, retrieving studies, assessing quality, and grading the body of evidence, and prompts the user with questions to facilitate coverage decisions.
All tools specify a framework, were designed with stakeholder input, assess internal validity, are available online, and are easy to use. They vary in their complexity and comprehensiveness. The RWE Decoder, CER Collaborative tool, and REAdi tool synthesize evidence and were specifically designed to aid formulary decision making. This study adds clarity on what the tools provide so that the user can determine which best fits a given purpose.


The limitations of traditional randomized controlled trials (RCTs) in informing health care decisions are well known; hence the recent emphasis on comparative effectiveness research (CER). 1-3 CER requires the use of real-world data (RWD), which is rapidly becoming more accessible, with a corresponding increase in demand for its synthesis into real-world evidence (RWE).
Early efforts to develop a framework to assist health care decision makers in using RWD were led by the International Society for Health Economics and Outcomes Research (ISPOR). 7 In 2017, the ISPOR-International Society for Pharmacoepidemiology (ISPE) Special Task Force on RWE in Health Care Decision Making established good procedural practices intended to enhance decision makers' confidence in RWE derived from RWD. 8 In 2019, the RWE Transparency Initiative, a partnership among ISPOR, ISPE, the Duke-Margolis Center for Health Policy, and the National Pharmaceutical Council (NPC), released a draft white paper recommending widespread use of registries to improve transparency of RWE studies. 9 The AMCP Format for Formulary Submissions, Version 4.1, has evolved to include RWE as part of clinical evidence. 6 The Institute for Clinical and Economic Review (ICER) has described the opportunities and challenges of using RWE for coverage decisions and developed a framework to guide optimal development and use of RWE for this purpose. 10,11 Governmental and quasi-governmental agencies have been leading similar efforts. [12][13][14][15][16][17][18]
In the context of CER, health technology assessment (HTA) decision makers are seldom positioned to conduct RWE studies or to undertake comprehensive systematic reviews to inform timely decision making. Existing barriers have slowed the rate at which RWE is used in decision making. [19][20][21] This situation has led to the development of a number of tools to guide HTA decision making. Yet, in our own HTA work, we find that colleagues are still seeking clarity about how to evaluate RWE.
This uncertainty led to creation of the Real World Evidence Assessments and Needs Guidance (REAdi) tool (by the University of Washington's Comparative Health Outcomes, Policy and Economics [CHOICE] Institute) and, eventually, to a comparison of the features of all 5 tools highlighted in this study.

DESCRIPTION OF CURRENT TOOLS
In this report, we describe the 5 best practice tools identified by experts in CER and RWE: the Good Research for Comparative Effectiveness (GRACE) Checklist, the Innovative Medicines Initiative (IMI) GetReal RWE Navigator, the Center for Medical Technology Policy (CMTP) RWE Decoder, the CER Collaborative tool, and the REAdi tool (Table 1). [22][23][24][25][26][27][28][29][30][31][32][33][34][35]
1. GRACE Checklist. [22][23][24][25] The GRACE Checklist was derived from a set of principles that define the elements of good practice for the design, conduct, analysis, and reporting of observational CER studies. The original GRACE principles were developed by Outcome Sciences (now part of IQVIA) with funding from the NPC and have been endorsed by ISPE. The principles served as the foundation for the 11-item GRACE Checklist, which aids stakeholders in evaluating the quality and applicability of CER studies. The validated checklist was developed from a review of the published literature and was tested globally by volunteers.
2. IMI GetReal RWE Navigator. 26,27 Launched in 2018, the GetReal Initiative is a public-private partnership of pharmaceutical companies, academia, HTA agencies, and regulators across the European Union. Its goal is to increase the quality of RWE generation in medicines development and regulatory/HTA processes, optimizing and ensuring the adoption and sustainability of the tools developed earlier under the GetReal Consortium. The online RWE Navigator includes educational resources to guide demonstration of effectiveness, a guidance tool to support development of medicines, and a directory of, and links to, authoritative resources for evaluation of RWE, including its quality and credibility. Navigator maps the RWE landscape to help users "navigate" to what they need in order to prioritize and make decisions.
3. CMTP RWE Decoder. 28-30 The online CMTP RWE Decoder was developed through a multistakeholder initiative to make available an easy-to-use tool that helps decision makers confidently and consistently assess RWE for their decision-making needs. The tool is an Excel spreadsheet that facilitates user assessment of the relevance and rigor of existing evidence from RCTs and RWE. Finalized in 2017, the RWE Decoder is composed of 3 modules. In Module 1, the user articulates the question of interest, framing it in the PICOTS (population, intervention, comparators, outcomes, timing, setting) format. Module 2a provides the framework for assessing the relevance of each identified study. Module 2b prompts for assessment of the rigor of each individual study, including the quality of the evidence, potential for bias, precision, and data integrity. Module 2c calls for recording the magnitude and direction of effect. In Module 3, an integrated summary of these assessments is presented in graphical format. The RWE Decoder is available in the public domain.
4. CER Collaborative Tool. 31-34 Developed by the CER Collaborative, a multistakeholder initiative of NPC, AMCP, and ISPOR, the CER Collaborative tool helps users synthesize RWE and assess the credibility and relevance of evidence. The goal is to provide greater uniformity and transparency in the evaluation of RWE to inform HTA decisions. The tool facilitates the critical appraisal of 4 types of studies, including prospective and retrospective observational studies.
5. REAdi Tool. 35 The REAdi tool was developed by investigators at the University of Washington's Comparative Health Outcomes, Policy and Economics (CHOICE) Institute. Intended to provide guidance on the use of RWE for HTA decision making for drug and diagnostic interventions, the REAdi framework is comprehensive in leading the user through the decision-making process in 5 phases. In Phase 1, the user defines the research question in the PICOTS format. Once defined, the tool automatically synthesizes terms to create a PubMed search strategy; citations of relevant studies are returned for review. In Phase 2, the user reviews and quality-rates the RWE on a per-study basis, having been guided to an embedded quality-rating tool specific to each included study design. [39][40][41][42][43][44][45][46] Once completed, in Phase 3, the user is prompted to rate the strength of the body of evidence using GRADEPro (Grading of Recommendations, Assessment, Development and Evaluations). 47 In Phase 4, the user assesses the applicability and sufficiency of the evidence for the intended purpose. In Phase 5, questions are posed to facilitate coverage decisions relevant to the immediate payer decision need (Table 2). Constructed using an R-Shiny app, 48 the publicly available, online REAdi tool uses drop-down menus, branching logic, and piping, such that questions posed in subsequent tasks are based on previous answers. A graphical summary is presented. The tool also offers functionality to print the screen and save literature reviews, allowing the user to work on multiple projects simultaneously.

COMPARISON OF THE 5 TOOLS
In October 2018, CHOICE investigators were joined by a collaborator from NPC (Graff) at the AMCP Nexus meeting (Orlando, FL) in leading an invited workshop to compare, contrast, and offer an opportunity to use 3 of these tools, using a case study from the literature. 49 In this article, we describe and compare those 3 tools and add comparisons of the GRACE Checklist and the IMI Navigator. In evaluating each tool, we identified 27 features, which we informally organized into 8 domains for comparison.
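Both the RWE Decoder (Module 1) and the REAdi tool (Phase 1) have the user frame the question of interest in PICOTS format, and REAdi then synthesizes search terms into a PubMed strategy. As a minimal, hypothetical Python sketch (the tools' actual internals are not shown here; REAdi, for instance, is an R-Shiny app), a PICOTS frame and a simple boolean query assembled from it might look like this:

```python
from dataclasses import dataclass

@dataclass
class PicotsFrame:
    """Elements of a PICOTS question frame (hypothetical field names)."""
    population: list[str]
    intervention: list[str]
    comparators: list[str]
    outcomes: list[str]
    timing: str = ""
    setting: str = ""

def to_query(frame: PicotsFrame) -> str:
    """OR terms within an element, AND across elements; skip empty elements."""
    groups = [frame.population, frame.intervention, frame.comparators, frame.outcomes]
    clauses = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")"
               for terms in groups if terms]
    return " AND ".join(clauses)

frame = PicotsFrame(
    population=["type 2 diabetes"],
    intervention=["semaglutide"],
    comparators=["liraglutide"],
    outcomes=["HbA1c", "weight loss"],
)
print(to_query(frame))
# ("type 2 diabetes") AND ("semaglutide") AND ("liraglutide") AND ("HbA1c" OR "weight loss")
```

ORing synonyms within an element and ANDing across elements mirrors the usual systematic-search convention; a production strategy would also map terms to MeSH headings and PubMed field tags.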


Domain 5: Assess Internal and External Validity of Evidence. All tools provide a systematic method to assess internal validity (quality/bias); Navigator provides links to quality rating tools, while the REAdi tool embeds most of them. The RWE Decoder provides 1 tool each for assessing the rigor of RCTs and non-RCTs, considering the quality of the research questions, potential for bias, precision, and data integrity. The CER Collaborative tool assesses credibility using checklists corresponding to design, data, analysis, reporting, and interpretation. The RWE Decoder, IMI Navigator, CER Collaborative tool, and REAdi tool assess relevance, that is, external validity, while the GRACE Checklist does not.

Implications
We reviewed 5 online tools to synthesize RWE of CER; they vary in their objectives, complexity, and context for use. The simplest, the GRACE Checklist, provides a basic quality rating, while the Navigator provides education, guidance, and resources. The 3 remaining tools (the RWE Decoder, the CER Collaborative tool, and the REAdi tool) are similar to each other in that they integrate quality ratings, education, and guidance resources. They are therefore more complex and are intended to evaluate a body of RWE to enhance formulary decision making. The RWE Decoder and CER Collaborative tool are useful for evaluating already identified evidence, while the REAdi tool spans a broader set of decision-making tasks, beginning upstream by explicitly assisting the user in specifying the research question and finishing downstream by offering recommendations for formulary decision making. With their varying features, breadth of tasks, and levels of complexity, the RWE Decoder, CER Collaborative tool, and REAdi tool synthesize evidence and were specifically designed to aid formulary decision making.

Conclusions
This study characterizes 5 potentially useful tools for HTA decision making using RWE. Because use of RWE remains low, research that explores awareness, usefulness, and barriers to use of these tools may result in their improvement, greater uptake, and ultimately increased use of RWE for decision making. [19][20][21] Future research could also include a more in-depth comparison of these tools in the context of case studies to determine which features are of greatest value to decision makers. Best practices for tool use could then be developed and existing tools integrated. A discussion could then ensue about strategies to sustain the new tool. In the meantime, this study adds clarity on what the tools provide so that the user can determine which best fits a given purpose.