Introduction
Manufacturers typically plan Phase II-III research with the utmost care, but all too often regard real-world evidence (RWE) generation as a “check-the-box” activity. This is a mistake. Real-world research design is more complicated than clinical trial design; its complexity stems from multiple factors, including the differing evidentiary needs of diverse healthcare system stakeholders, the differing outcome measures available to meet those needs, and the differing methodologic approaches that can be used to collect clinical, economic and real-world data. The most frequently utilized study designs for Phase IV research include:
- Retrospective analyses of computerized health records (administrative claims and/or EHRs)
- Manual chart review
- Prospective observational studies and registries
- Pragmatic clinical trials
- Randomized controlled trials
- Economic modeling
Selecting the most appropriate and cost-effective research design can be quite challenging. This is especially the case when stakeholders are not engaged at the beginning of the process. But even with robust stakeholder engagement from the start, identifying the most appropriate study design is still a challenge because of the multiple factors (data sources, methodologies, outcome measures, etc.) that must be considered. To facilitate this process, we have developed and tested an algorithm that has proven to be useful in structuring decision-making in the design of real-world research.
First Step: Engage Stakeholders Early and Often
In clinical development, engagement with regulators is a routine component of study design, with consensus reached on study endpoints and analyses before proceeding to study conduct and reporting of results. Historically, RWE generation has not followed this process but has been more linear in nature, with manufacturers designing the research without stakeholder input and only engaging with stakeholders at the time of dissemination of study findings (Fig. 1). This approach runs the risk that the RWE generated will not meet stakeholder evidentiary needs, as their input was not solicited during research design.
The optimal approach to RWE generation prioritizes stakeholders (Fig. 2). Similar to clinical development, the process should begin with stakeholder engagement to ascertain evidentiary needs and agree upon the research design. This will help ensure that, upon study execution and dissemination of findings, health system stakeholders will find the evidence actionable to facilitate decision-making related to the product.
Considering the (Many) Possibilities
Another important point of difference between clinical and RWE research is that, while clinical evidence generation is relatively straightforward, RWE generation is far from it. In clinical development, there is one major health system stakeholder to target (regulatory), two broad categories of study measures (safety and efficacy), and one type of study design (the randomized controlled trial (RCT)). In contrast, RWE generation is complicated by the multiplicity of health system stakeholders to be targeted, real-world measures of interest, and methodologic approaches available (Fig. 3). The multidimensional nature of the real-world research design challenge is further complicated by the fact that stakeholders naturally vary in their levels of interest in different real-world measures (Fig. 4), and different real-world study designs vary in their suitability to the measures being assessed (Fig. 5). In effect, each added dimension multiplies the number of design possibilities that must be weighed.
So how should we go about sorting through all of these complications? In other settings where decision-making can take different twists and turns depending on multiple variables, algorithms have proven invaluable. For example, algorithms are used in clinical guidelines as a way to assist healthcare providers in making decisions about optimal treatment approaches. Likewise, health economists have developed algorithms for use in selecting the most appropriate modeling approach. No such solution, however, exists to support investigators in selecting the most appropriate real-world research design. We sought to bridge this gap by developing and testing an algorithm that provides structure for the study design selection process in real-world research.
The Algorithm
Methodologic Approach
The algorithm, designed for ease of use and to strike a balance between oversimplification and excess complexity, consists of a series of structured yes/no questions (Fig. 6), as follows:
- Is the study focused on an intervention?
- Is the intervention on the market?
- Is the study intended to be comparative?
- Is treatment assigned by study protocol?
- Are data needed for the study available from existing sources?
- Are those existing sources accessible in computerized form (i.e., in administrative claims or electronic medical records)?
- Is the study setting real world?
- Is the evidence need for product value?
Responding to each of these yes/no questions within the structure of the algorithm guides the researcher to one of six research designs: (1) retrospective database analysis; (2) manual chart review; (3) prospective non-interventional study / registry; (4) pragmatic clinical trial; (5) traditional randomized controlled trial; or (6) economic modeling. Because research designs vary dramatically in their time and cost requirements, in those instances in which multiple approaches are viable, the algorithm steers the researcher to the most cost-effective option first (reading from left to right across the bottom row).
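As a rough illustration, the decision logic can be expressed in code. The sketch below is ours rather than part of the published algorithm; the function and parameter names (`select_design`, `data_available`, and so on) are hypothetical, and the branching follows the question order walked through in the next section.

```python
from enum import Enum


class Design(Enum):
    """The six study designs the algorithm can recommend (plus an out-of-scope result)."""
    RETROSPECTIVE_DATABASE = "Retrospective database analysis"
    CHART_REVIEW = "Manual chart review"
    PROSPECTIVE_OBSERVATIONAL = "Prospective non-interventional study / registry"
    PRAGMATIC_TRIAL = "Pragmatic clinical trial"
    RCT = "Traditional randomized controlled trial"
    ECONOMIC_MODEL = "Economic modeling"
    OUT_OF_SCOPE = "Phase II-III development (outside real-world research)"


def _existing_data_branch(data_available: str, computerized: bool) -> Design:
    """Shared non-interventional branch, driven by data availability.

    data_available is 'all', 'some', or 'none'. Fully available computerized
    data point to a retrospective database analysis; fully available paper
    records point to a manual chart review; partial data ('some', i.e. a
    hybrid retro-to-prospective effort) or absent data point to a prospective
    observational study / registry.
    """
    if data_available == "all":
        return Design.RETROSPECTIVE_DATABASE if computerized else Design.CHART_REVIEW
    return Design.PROSPECTIVE_OBSERVATIONAL


def select_design(
    product_focused: bool,
    on_market: bool = False,
    comparative: bool = False,
    protocol_assigned: bool = False,
    real_world_setting: bool = True,
    value_evidence_needed: bool = False,
    data_available: str = "none",  # 'all', 'some', or 'none'
    computerized: bool = False,
) -> Design:
    """Walk the yes/no questions in the order the algorithm poses them."""
    if not product_focused:
        # Disease-focused study: the choice turns entirely on data availability.
        return _existing_data_branch(data_available, computerized)
    if not on_market:
        # Pre-launch product: a value question can still be answered by
        # economic modeling; anything else belongs to Phase II-III development.
        return Design.ECONOMIC_MODEL if value_evidence_needed else Design.OUT_OF_SCOPE
    if not comparative or not protocol_assigned:
        # Non-comparative, or comparative without protocol-assigned treatment:
        # back to the non-interventional designs (here a product registry).
        return _existing_data_branch(data_available, computerized)
    # Comparative with protocol-assigned treatment: interventional designs.
    return Design.PRAGMATIC_TRIAL if real_world_setting else Design.RCT
```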
A Closer Look
The first question asks whether or not the study is focused on a product. In nearly all instances, what we mean by “product” is a drug, a biologic or a medical device. However, in some instances, the focus might be on a medical procedure, such as a surgical intervention or diagnostic test. Studies that are not product-focused will typically be disease-focused, emphasizing the following kinds of measures:
- Epidemiologic: incidence, prevalence, morbidity, mortality
- Economic: healthcare utilization, costs of care, treatment patterns
- Humanistic: disease burden, patient-reported outcomes (PROs), health-related quality of life, utilities
If the study is not product-focused, the second question is whether or not data on study measures are available from existing sources. It may be that all, some or none of the data are available from existing sources. If all or some of the data are available, there is potential for conducting the study as a “hybrid” retro-to-prospective data collection effort that combines different data sources.
If all or some of the data are available from existing sources, the next question is whether or not they are available in computerized form. In almost all instances, computerized data will be in the form of administrative billing claims or electronic medical records (EMRs). If the answer is yes, then the study would be classified as a retrospective database analysis. If the answer is no, then the study would be a manual chart review.
If none of the data are available from existing sources, or if a hybrid approach is being used, then the study would be classified as prospective observational or disease registry. (In some regions, the term “registry” is less common than “prospective cohort study.”) From a methodologic perspective, each of these study types would be considered non-interventional, as the research does not impact the treatment decisions or care processes being observed.
If the study is product-focused, then the next question is whether or not the product is currently on the market. This is usually straightforward to determine based on dates of regulatory approval and market launch relative to the timing of the study.
If the product is on the market, the next question is whether or not the study is comparative in nature. Are comparisons between interventions planned? If so, the study is indeed comparative. In those instances where this is not obvious, a comparative analysis might be indicated by reference to such terms as:
- Comparative effectiveness analysis
- Relative effectiveness analysis
- Usual care (e.g., drug A vs. usual care)
- Standard care (e.g., drug A vs. standard care)
If the study is not comparative, the algorithm takes us back to the availability of existing data sources. Potential study types would then include database analyses, manual chart reviews, or prospective observational studies/registries. In this instance, though, it would be a product registry rather than a disease registry. Even though product-focused, all of these study types would still be considered non-interventional by methodologists. It’s worth noting, however, that regulatory classifications might differ.
If the study is comparative, the next question is whether or not the scientific rigor of randomized treatment allocation is desired. If the answer is no, the algorithm takes us back over to the non-interventional study types. If the answer is yes, it is necessary to assess the intended study setting to classify the study.
The study setting may be experimental or real world. If real world, then the study would usually be classified as a pragmatic clinical trial. Pragmatic trials would have more relaxed patient eligibility criteria and a less intrusive study protocol, usually with active comparators. If experimental, then the study would usually be classified as a Phase IV trial. The methodologic classification for both study types is interventional.
If the product is not on the market, the study is more likely to be a Phase II-III clinical trial and, therefore, not in the real-world research realm. An exception occurs if the project is aimed at demonstrating product value. If so, it would most likely be done via economic modeling.
When these question-and-answer pathways are linked together in step-wise form, the result is an algorithm that strikes a practical balance between simplicity and comprehensiveness, helping steer the researcher to the most cost-effective option first when multiple choices are possible.
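To make the step-wise flow concrete, here are a few illustrative calls to the hypothetical `select_design` function sketched above; the scenarios are invented for demonstration only.

```python
# Disease-focused burden-of-illness study, all data in computerized claims
print(select_design(product_focused=False, data_available="all", computerized=True))
# Design.RETROSPECTIVE_DATABASE

# Marketed product, comparative, randomized, conducted in routine care
print(select_design(product_focused=True, on_market=True, comparative=True,
                    protocol_assigned=True, real_world_setting=True))
# Design.PRAGMATIC_TRIAL

# Pre-launch product where the evidence need is product value
print(select_design(product_focused=True, on_market=False, value_evidence_needed=True))
# Design.ECONOMIC_MODEL
```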
Discussion
An expanding array of healthcare decision-makers, including payers, providers, patient advocacy groups, clinical guideline developers, and regulators, now require, beyond traditional safety and efficacy measures, outcomes data based on RWE. Undertaking real-world research without due diligence on stakeholder needs is a game of hit and miss, and misses can have dramatic consequences for commercial success. It is therefore essential to engage with stakeholders during the design phase to ensure that, once the study is executed, its findings will resonate with decision-makers.
Even with strong stakeholder engagement, confusion still abounds about how to select from a wide range of methodologic approaches and identify the most cost-effective option that best meets stakeholder evidence requirements. While algorithms are widely used to provide guidance in areas like health economics and clinical decision-making, no such solution exists for selecting the most appropriate real-world research design.
We have developed and tested an algorithm to address this gap. Based on structured responses to a series of fairly simple questions regarding study focus and objectives, we have found through repeated use that this decision-making approach can facilitate the selection of an optimal real-world research design. The algorithm may be useful to researchers, sponsors, stakeholders and others interested in assessing alternative study designs for real-world evidence generation.