The terms “efficacy” and “effectiveness” are often used interchangeably. But when evaluating health technologies (e.g., drugs, vaccines, diagnostics, devices), their meanings have crucial differences.
Efficacy concerns the question, “Can it work?” That is, can a novel technology improve health outcomes under ideal conditions in an experimental design? Randomized controlled trials (RCTs) address this question. RCTs typically enroll carefully screened patient samples with few or no comorbidities, and feature high adherence rates, short trial durations, comparisons to placebo, and clinical experts highly trained in using the new technology. Randomizing participants to experimental or control groups (often standard of care or placebo) mitigates bias and provides the best evidence for a cause-effect relationship between the experimental technology and health outcomes. Demonstrating efficacy is a fundamental part of health technology assessment and regulatory approval. However, efficacy does not reflect how technologies perform in real-world settings.
Only after using technology in real-world settings can we begin to assess its effectiveness and answer the question, “Does it work?”
Unlike clinical trials, real-world settings involve additional complexities, such as patient diversity (e.g., disease severity, comorbidities, ages, genetic profiles), variability in adherence, and the opportunity to observe long-term effects or side effects. Observational studies address the question of effectiveness, relying on a variety of often large real-world data sources, including administrative claims data, electronic health records, registries, surveys, and data provided directly by patients or from mobile/wearable devices. When real-world data (RWD) are analyzed to demonstrate a technology’s effectiveness and the results are used in decision-making, the resulting evidence is known as real-world evidence (RWE).
Efficacy and effectiveness data have inherent, complementary strengths and weaknesses. While efficacy data from RCTs represent the best evidence for causal relationships (e.g., drug X causes improvement in symptom Y), findings from the select sample of study patients do not necessarily generalize to the larger population of real-world patients. In contrast, observational study designs, particularly large database studies, can generalize beyond the study sample to diverse populations and settings. Although observational studies lack strict controls and randomization, investigators can use rigorous methodological and statistical techniques (e.g., propensity score matching) to minimize bias and confounding. However, even a well-designed observational study technically measures only the association between a technology and its outcomes (e.g., drug X is correlated with improvement in symptom Y), stopping short of causal inference. Pragmatic trials are yet another type of research design: a middle ground that combines the randomization of RCTs with the real-world conditions of observational research.
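To make the idea of propensity score matching concrete, here is a minimal, self-contained Python sketch on synthetic data. Everything in it is an illustrative assumption: the cohort is simulated so that older patients are both more likely to be treated and more likely to have worse outcomes (a confounder), and the propensity model is a deliberately coarse one (treatment rates within age strata) standing in for the fitted logistic regression a real analysis would use.

```python
import random

random.seed(0)

# Synthetic cohort: older patients are both more likely to receive "drug X"
# and more likely to have worse outcomes -- a classic confounder.
def simulate_patient():
    age = random.randint(40, 80)
    p_treat = 0.2 + 0.6 * (age - 40) / 40        # older -> more often treated
    treated = random.random() < p_treat
    # Outcome worsens with age; the true treatment effect is +5 by construction.
    outcome = 100 - 0.8 * (age - 40) + (5 if treated else 0) + random.gauss(0, 2)
    return age, treated, outcome

cohort = [simulate_patient() for _ in range(2000)]

# Step 1: estimate propensity scores -- here, the observed treatment rate
# within 5-year age strata (a coarse stand-in for a logistic model).
def stratum(age):
    return (age - 40) // 5

counts = {}
for age, treated, _ in cohort:
    n, k = counts.get(stratum(age), (0, 0))
    counts[stratum(age)] = (n + 1, k + treated)
propensity = {s: k / n for s, (n, k) in counts.items()}

treated_grp = [(propensity[stratum(a)], a, y) for a, t, y in cohort if t]
control_grp = [(propensity[stratum(a)], a, y) for a, t, y in cohort if not t]

# Naive comparison of group means: confounded by age.
naive = (sum(y for *_, y in treated_grp) / len(treated_grp)
         - sum(y for *_, y in control_grp) / len(control_grp))

# Step 2: 1:1 nearest-neighbour matching on propensity score
# (with replacement), breaking ties by age.
diffs = []
for ps, age, y in treated_grp:
    _, _, y_ctrl = min(control_grp,
                       key=lambda c: (abs(c[0] - ps), abs(c[1] - age)))
    diffs.append(y - y_ctrl)
matched = sum(diffs) / len(diffs)

print(f"naive difference in means: {naive:+.2f}")
print(f"matched estimate:          {matched:+.2f}  (true effect: +5)")
```

The naive difference is pulled downward because the treated group is older; after matching, the estimate moves much closer to the built-in +5 effect. A real analysis would also check covariate balance after matching and use a properly fitted propensity model; dedicated packages (e.g., R's MatchIt) implement these steps.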
RWE is increasingly used to inform regulatory decisions. Since the 2016 passage of the 21st Century Cures Act in the US, the FDA has been evaluating the use of RWE – generated from observational studies, pragmatic trials, and hybrid designs (e.g., using RWD, such as hospitalizations, to capture a clinical outcome within a clinical trial) – to monitor post-market safety and adverse events, and to support new indications for approved drugs and biologics. In 2021, based on observational data from a national transplant registry, the FDA approved a new indication for a tacrolimus product to help prevent organ rejection in lung transplants, extending the previous approval for liver, kidney, and heart transplants. During the same year, the FDA also approved a new cetuximab dosage to treat a type of colorectal cancer (mCRC) or squamous cell carcinoma of the head and neck (SCCHN), based in part on real-world overall survival data. In another recent example, the FDA accepted tumor response and safety data from electronic health records and post-marketing reports to expand the indication for palbociclib in 2019 to male patients with breast cancer, a very rare patient subgroup excluded from the original 2017 clinical trial.
While efficacy data from clinical trials may be more readily available, they are not always ideal for health economic and outcomes research (HEOR). The best data to inform real-world decision-making comes from real-world settings. Of course, RWD is not available for new health technologies, where regulatory and formulary decisions must rely on efficacy data alone. But when taken together, efficacy and effectiveness data provide valuable information to inform decision-making on health technologies.
So, the next time you encounter these terms applied to a health technology, you will be better equipped to assess whether it demonstrates efficacy, effectiveness, or both.
Jason T. Hurwitz, MS, PhD
Jason T. Hurwitz, MS, PhD, is Assistant Director of the Center for Health Outcomes & Pharmacoeconomic Research (HOPE Center) at the University of Arizona. Dr. Hurwitz develops and teaches numerous HOPE Center training programs for healthcare industry professionals each year. He also conducts industry-, foundation-, and federally funded studies in health economics and outcomes research (HEOR). His recent research includes cost-effectiveness analyses of immunosuppressants for kidney transplant recipients and of PD-L1 testing for non-small cell lung cancer (NSCLC), and the development of a shared decision-making (SDM) tool to help patients and providers reduce the risk of drug-drug interactions. Follow Dr. Hurwitz on LinkedIn or visit https://www.pharmacy.arizona.edu/hope for upcoming HEOR training programs.