Engaging key opinion leaders (KOLs) is critical for pharmaceutical companies, both for developing clinical research programs and for strategic planning. Historically, identifying KOLs has relied on personal recommendations, along with tools and services that aggregate metrics about a researcher's impact.
While such approaches have been widely adopted and have proven fruitful, standard measures of impact, such as the number of papers on a topic, how often a researcher has been cited, and the h-index (a measure of a researcher's productivity and impact), leave a lot to be desired. Most notably, current metrics offer no insight into the reliability and reproducibility of a researcher's work. As documented by others, reproducibility is a large and growing problem in drug development, one that can cost pharmaceutical companies significant time, money, and energy.
Identifying reliable research, and consequently reliable researchers, would allow for more dependable KOL engagement and recruitment. But with current methods, this process can be prohibitively time-consuming.
The Inefficiencies Created by Traditional Citation Metrics
The findings that scientific researchers publish rarely exist in a vacuum; instead, they build on a growing body of work through citations: references to findings and claims from prior literature. These citations, and the textual context in which they appear, represent the ongoing scientific dialogue that shapes and informs our understanding of natural phenomena.
And yet we collapse this wealth of information into a single number: how many times were they cited?
In doing so, we lose the ability to qualitatively understand how they were cited by others. What was said about someone’s research? Can others reproduce their findings? Can I trust them to lead a clinical trial for a new drug we are developing? Should I invest time in building a relationship with a potential KOL?
Answering such questions requires a manual, time-consuming process: reading someone's papers, finding other works that cite them, and seeing what those works say by poring over countless full-text articles. Doing this properly is further complicated by the interdisciplinary nature of science and the rapidly increasing rate of new publications. For someone new to a disease area, this can be a non-starter.
Medical science liaisons (MSLs), who focus on identifying and developing relationships with clinical experts, are forced into a trade-off between evaluating how reliable a potential KOL's research is and how quickly they can find and establish relationships. This, in turn, has significant business implications for pharmaceutical companies.
So what can be done?
The Next Generation of Citations
Now, in addition to knowing which publications reference a given paper, you can see:
- Citation statements: the sentences from each citing paper in which it references the publication you are looking at
- The section of the citing paper each citation statement appears in (e.g., Discussion, Methods)
- A classification from a machine learning system indicating whether each citation statement offers evidence that supports or contrasts the claims made in the original paper
By associating information about these citation statements and publications, and by linking them with metadata about authors, it is possible to identify key experts in specific disease areas and efficiently build an understanding of how reliable their research is.
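To make this concrete, here is a minimal sketch of what such citation-statement data might look like and how it could be summarized per paper. The field names, example records, and classification labels are illustrative assumptions for this sketch, not scite's actual schema or API.

```python
# Hypothetical, simplified model of citation-statement data: each record
# holds the citing sentence's section and a machine-learning classification
# (supporting / contrasting / mentioning). Field names are illustrative only.
from collections import Counter

citation_statements = [
    {"cited_paper": "doi:10.1000/a", "section": "Discussion",
     "classification": "supporting"},
    {"cited_paper": "doi:10.1000/a", "section": "Methods",
     "classification": "mentioning"},
    {"cited_paper": "doi:10.1000/a", "section": "Results",
     "classification": "contrasting"},
]

def tally_classifications(statements):
    """Count how a paper's claims were received by citing papers."""
    return Counter(s["classification"] for s in statements)

tally = tally_classifications(citation_statements)
print(tally)  # Counter({'supporting': 1, 'mentioning': 1, 'contrasting': 1})
```

Aggregating these tallies across all of an author's papers is what turns raw citation context into a reliability signal about the researcher.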
This approach has profound implications for how medical science liaisons discover, evaluate, and maintain relationships with KOLs.
For example, using the search feature, it is possible to refine search results to find reliable publications in disease areas by excluding research with findings that are heavily contrasted, or those with retractions. From the filters, an MSL could quickly see the top researchers that were involved in authoring those publications.
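The filtering logic described above can be sketched as follows. The paper records, the 0.5 contrast-ratio threshold, and the ranking rule are hypothetical choices made for illustration; they are not scite's internal implementation.

```python
# Illustrative sketch: exclude retracted papers and papers whose findings are
# heavily contrasted, then surface the researchers who authored what remains.
# All data and thresholds here are made up for the example.
from collections import Counter

papers = [
    {"title": "A", "authors": ["Kim", "Shah"], "supporting": 12,
     "contrasting": 1, "retracted": False},
    {"title": "B", "authors": ["Shah"], "supporting": 2,
     "contrasting": 9, "retracted": False},
    {"title": "C", "authors": ["Lopez"], "supporting": 5,
     "contrasting": 0, "retracted": True},
]

def reliable(paper, max_contrast_ratio=0.5):
    """Keep papers that are neither retracted nor heavily contrasted."""
    total = paper["supporting"] + paper["contrasting"]
    ratio = paper["contrasting"] / total if total else 0.0
    return not paper["retracted"] and ratio <= max_contrast_ratio

def top_authors(papers):
    """Rank authors by how many reliable papers they appear on."""
    counts = Counter(a for p in papers if reliable(p) for a in p["authors"])
    return counts.most_common()

print(top_authors(papers))  # [('Kim', 1), ('Shah', 1)]
```

Paper B is dropped because most of its citations contrast its findings, and paper C is dropped because it was retracted, leaving only the authors of paper A.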
Moreover, an MSL can view the profile for each of those researchers, which outlines how they were cited in the rest of the literature, and also includes a list of papers they’ve authored. Loading the scite report for each of their papers exposes the relevant citation statements, helping anyone grasp how someone’s research was received by subsequent papers.
By configuring alerts for new citations to groups of papers, an MSL can be notified when there are new supporting or contrasting citations to research published by one or more experts they work with (or are evaluating as candidates), helping them stay on top of the ever-growing body of scientific literature, and making it easier to effectively operate in multiple disease areas.
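The alerting idea above amounts to a diff: compare the citation statements seen now against those seen at the last check, and keep only the new supporting or contrasting ones. This is a conceptual sketch with made-up identifiers, not a description of how scite's alerts are implemented.

```python
# Conceptual sketch of citation alerts: report only unseen citation
# statements whose classification is supporting or contrasting.
# IDs and record structure are hypothetical.
def new_notable_citations(previous_ids, current_statements):
    """Return unseen citation statements that support or contrast findings."""
    return [s for s in current_statements
            if s["id"] not in previous_ids
            and s["classification"] in ("supporting", "contrasting")]

seen = {"c1", "c2"}
current = [
    {"id": "c1", "classification": "supporting"},   # already seen
    {"id": "c3", "classification": "contrasting"},  # new and notable
    {"id": "c4", "classification": "mentioning"},   # new but not notable
]
print(new_notable_citations(seen, current))  # only c3 qualifies
```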
Indeed, not all citations are equal. We believe that bringing back the context to citations will be transformative for scientific research, and we, in part, hope to improve how drug research and development is done by eliminating some of the inefficiencies introduced by traditional citation systems.
To learn more about how scite can be used for this purpose, read https://help.scite.ai/en-us/article/identifying-scientific-experts-in-a-field-of-research-1t6b0bh/.
Josh Nicholson, PhD
Josh Nicholson is co-founder and CEO of scite (scite.ai). He holds a PhD in Cell Biology from Virginia Tech, where his research focused on the effects of aneuploidy on chromosome segregation in cancer cells. Previously, he was the founder and CEO of Winnower and the CEO of Authorea (acquired in 2018 by Wiley), two companies aimed at improving how scientists publish and collaborate.
Ashish Uppala is a graduate of the University of Maryland, and an early employee of scite. He has worked in both academic research and software engineering, having previously been a research fellow at the National Cancer Institute before developing a career in technology. His experiences in academia, medicine, and software motivate him to find new ways to improve how we engage with and conduct research.