The scientific landscape and the ways in which we share and view evidence continue to change at a very rapid pace. Key opinion leaders (KOLs) are interacting with and educating colleagues, patients, and the general public in new ways, and the methodologies that we use to evaluate our engagement with them have also evolved. As MSL teams — and the levels of their visibility — expand within organizations via key responsibilities such as insight gathering and educational support for healthcare professionals, so too does interest in measuring the impact of their activities.
In particular, MSLs drive impact through their engagement of experts, which supports evidence generation and dissemination and, most importantly, improved patient management and outcomes. There are several ways of assessing the impact of KOLs and KOL engagement across the medical community. The most common method has been at the journal or author level, but now there are some other alternatives that can help to refine Medical Affairs and MSL strategic planning and engagement.
In the past, many of us have relied upon journal or author-level bibliometrics to evaluate impact. While these methods still may provide insight, they can also be unreliable. Let’s have a look at some of these more traditional metrics.
Journal Impact Factor¹ (JIF): This classic “citation-based metric” works on the premise that the more an article is cited by other researchers, the greater impact it – and, by extension, the journal in which it is published – has.
The JIF is calculated by dividing the number of citations a journal received in the most recent complete year for articles published in the two preceding years by the number of citable items the journal published in those two years. Related citation metrics, such as the Eigenfactor and SciMago Journal Rank², weight citations differently. Journals that publish frequently cited articles have higher impact factors than those whose articles receive fewer citations.
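As an illustration, the two-year calculation described above can be sketched as follows; the figures are hypothetical, not drawn from any real journal.

```python
# Illustrative two-year journal impact factor (JIF) calculation.
# All numbers are hypothetical.

def journal_impact_factor(citations_this_year, citable_items_prev_two_years):
    """Citations received this year to articles from the two preceding
    years, divided by the number of citable items published in those years."""
    return citations_this_year / citable_items_prev_two_years

# A journal that published 200 citable articles in 2020-2021, and whose
# 2020-2021 articles were cited 500 times during 2022:
jif_2022 = journal_impact_factor(500, 200)
print(jif_2022)  # 2.5
```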
Not all JIFs assess publications the same way, however. Some segment journals by subject area but not necessarily by a specific disease. Consequently, their search results are incomplete and, because citation behavior varies among fields of science, the magnitude of their scores is not easily comparable across fields.
Another problem is that mid-level journals³ cannot be ranked with precision. Consequently, their rankings could vary by more than 10 positions with no meaningful change in the number of citations. A recent paper⁴ notes that journal impact factors reflect the journal’s quality rather than the authors’ and that impact factors cannot be compared across disciplines. Others point out that the determination of an impact factor is limited to its impact in the scientific community and does not consider its impact on policy or on society⁵.
The ability of journal editors to “game” the system is also concerning. A 2021 analysis of impact factor and SciMago rankings for trauma and orthopedic journals found the results were affected by self-citation – defined as editors citing their own publications within editorials. The analysis included 43 trauma and orthopedic journals with 151 editorials and found a positive correlation between journal self-citation in editorials and impact factor⁶. Journals also sometimes limit citable items⁷ by selecting only papers that are most likely to be cited – effectively preferring hot topics to valuable, but more obscure research. To compound the issue, impact factors for scientific conferences or poster/oral presenters have not yet been developed, with the exception of the Dynamic Impact Factor, created a few years ago by Pharmaspectra⁸.
Author-level citation metrics such as the h-index (Hirsch index) and G-index are similar to the JIF but apply to individual authors. The h-index is the largest number h such that an author has h papers that have each been cited at least h times; the G-index is similar but gives more weight to highly cited papers. Individuals’ rankings vary according to the database the indexes use to track the authors, and the indexes fail to consider the widely varying citation patterns among disciplines and even among fields within the same discipline. Data for sub-diseases or rare diseases may also be missing.
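The h-index definition above is simple enough to compute directly from an author’s citation counts; a minimal sketch, with hypothetical counts:

```python
# Minimal sketch of the h-index: the largest h such that the author has
# h papers each cited at least h times. Citation counts are hypothetical.

def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

# An author with five papers cited 10, 8, 5, 4, and 3 times has h = 4:
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note that two authors with very different citation profiles can share the same h-index, which is one reason the G-index weights highly cited papers more heavily.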
Some alternative methods try to address these differences. Source-normalized impact per paper (SNIP)⁹, for example, uses Scopus data to compare citations among scientific fields. But that doesn’t resolve the larger issue.
The correlation between a KOL’s h-index ranking and recognition within the scientific community is declining because of changes in scientific communication methods and patterns¹⁰. Consequently, some researchers are questioning the value of impact factors themselves.
The Declaration on Research Assessment (DORA) is a prime advocate for improving the way researchers and scholarly journals are evaluated. Its premise is that authors should not be judged on the number of times they have been cited, but on the quality of their work. Therefore, DORA recommends not using journal-based metrics as surrogate measures of the quality of individual research articles¹¹.
Instead, it recommends assessing an individual scientist’s contributions. When journal impact factor is used, DORA calls for using a variety of journal-based metrics, such as Eigenfactor, SciMago, h-index, editorial and publication times, etc., to provide a richer view of the journal’s performance. It also calls for making a range of article-level metrics available to better assess the scientific quality of the work.
This global initiative, launched in 2012 to develop and promote best practices in this area, now has nearly 22,000 signatories¹², including institutions and individuals throughout the world.
Clearly, by focusing only on the traditional impact factor, MA professionals may overlook more relevant papers in less prestigious publications and may be unaware of some KOLs who prefer to present rather than publish their data. Basing decisions solely on impact factor inherently risks missing meaningful content and experts. Therefore, using traditional bibliometrics (which were developed for decidedly different objectives) to identify experts may take you down the wrong path.
New Impact Metrics Have Emerged
The challenges associated with traditional bibliometrics are evident, but what can replace or complement them?
There are several more effective ways to assess impact. They address specific needs – such as medical imperatives, specific therapeutic areas, the weighted share of scientific voice, and dynamic impact factors – and can be applied to a variety of assessment solutions.
Share of scientific voice (SoSV) is a valuable way to assess KOLs’ contributions. Pioneered by Pharmaspectra, it measures the number (and percentage share) of disseminations of your product and strategic topics across all scientific evidence, compared with others in a therapeutic area (TA). SoSV can be measured for individual KOLs for their scientific contributions in a given TA, no matter how specific, or in whatever specialty or sub-specialty they work. For example, a KOL’s SoSV can be assessed across all scientific evidence, on a particular topic such as Crohn’s disease, or measured by meeting presentations and abstracts, or by publications only.
One of the most valuable applications of SoSV for MA is to measure the impact of individual KOLs across a team’s medical imperatives and/or strategic topics.
For example, ocrelizumab made headlines when it was approved to treat multiple sclerosis in 2017. References to it in publications of the New England Journal of Medicine spiked in 2016 during the run-up to FDA approval and held steady for more than a year afterward¹³, resulting in a high overall share of scientific voice in its category. Qualitatively, however, a team may also be interested in assessing the SoSV and impact of a specific medical imperative being communicated. For example, it may want to identify potential differences between selected disease-modifying therapies (DMTs) or dosing regimens in relation to long-term disease progression in multiple sclerosis (MS).
Applying that imperative now as a KPI for ongoing measurement ensures that an objective and strategic measurement is in place, to gain insights and improve MA impact.
Factoring in a weighted share of scientific voice helps further narrow the scope of potential KOLs. Among authors in the gastrointestinal space publishing on Crohn’s disease specifically, for example, one may have a high SoSV. Applying a weighted SoSV, however, may lower that ranking if the KOL is published primarily in a state medical society journal or in journals that are not peer-reviewed, while the other KOLs are published in high-impact journals or present at major meetings.
Weighted SoSV lets MA professionals identify researchers who publish more frequently in specialty or subspecialty journals rather than those who publish less specific research in broader-based publications. By using weighted SoSV, you can identify and assess the impact of KOLs with deep subject-matter expertise more easily.
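The contrast between raw and weighted SoSV described above can be sketched as a toy calculation. The venue categories, weights, and counts below are entirely hypothetical – Pharmaspectra’s actual methodology is proprietary – but they show how weighting by venue can reorder a ranking that raw volume alone would produce.

```python
# Toy illustration of share of scientific voice (SoSV) and a weighted
# variant. Venue tiers, weights, and counts are hypothetical.

# Hypothetical dissemination counts per KOL, broken down by venue tier.
disseminations = {
    "KOL A": {"high_impact_journal": 2, "major_meeting": 1, "state_journal": 12},
    "KOL B": {"high_impact_journal": 8, "major_meeting": 4, "state_journal": 1},
}

# Assumed weights favoring peer-reviewed, high-impact venues.
weights = {"high_impact_journal": 3.0, "major_meeting": 2.0, "state_journal": 0.5}

def sosv(counts_by_kol, venue_weights=None):
    """Each KOL's share of total disseminations, optionally venue-weighted."""
    scores = {
        kol: sum(n * (venue_weights or {}).get(venue, 1.0)
                 for venue, n in venues.items())
        for kol, venues in counts_by_kol.items()
    }
    total = sum(scores.values())
    return {kol: score / total for kol, score in scores.items()}

print(sosv(disseminations))           # raw share: KOL A leads on sheer volume
print(sosv(disseminations, weights))  # weighted share: KOL B leads
```

In this sketch, KOL A’s twelve state-journal items give the higher raw share, but once high-impact journals and major meetings are weighted up, KOL B’s smaller, higher-impact output takes the lead.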
Weighted SoSV based on a specific disease state or medical imperative can help you drill down to identify the KOL or data you want and, often, find it in somewhat unexpected places, assisting you in assessing potential impact. For example, using weighted SoSV to identify journals discussing multiple sclerosis surfaces virology journals, not only neurology journals. This would seem unusual, except that Epstein-Barr virus (EBV) has been reported to be a leading cause of multiple sclerosis, with a recent EBV infection associated with a 32-fold increase in the risk of developing MS¹⁴,¹⁵,¹⁶. Searching only with JIFs geared to therapeutic areas (such as SciMago) would not have revealed this.
Using peer-reviewed evidence, scientific presentations and clinical trials is not the only effective way to identify and assess the impact of relevant opinion leaders, however. In recent years, medical experts began turning to social media in increasing numbers, using Twitter, LinkedIn, and other sites to share and discuss scientific findings. While exact scientific usage numbers aren’t known, the large numbers of articles and blogs discussing how healthcare professionals and scientists can use social media effectively to share findings and discuss concepts hint at the popularity and acceptance of this newer communications method. Even before the pandemic, in 2019, Nature Medicine wrote, “From online journal clubs to ‘tweetorials’ to conference updates, social media is changing the dissemination and discussion of biomedicine.¹⁷”
According to LeverageRX, nearly 70% of U.S. physicians use the private social media outlet Doximity, which connects physicians to physicians¹⁸. Nearly 62% of plastic surgeons have active professional social media accounts, according to a 2019 survey of the American Society for Aesthetic Plastic Surgery, Inc.¹⁹ Nonacademic surgeons topped the list, with nearly 72% having accounts.
An analysis of social media use by diabetes researchers²⁰ that appeared in the journal Pharmaceutical Medicine noted quantifiable differences between KOLs who tweet and those who do not. “Those publishing most frequently on specific themes…were more likely to tweet about these themes,” according to the paper. They also were most likely to have smaller networks. Surprisingly, however, authors with large networks and many academic co-authors were less likely to tweet. Therefore, accessing digital KOLs may be a way to find specialists in highly specific content areas. Beware, however, that digital KOLs, as social media influencers, change frequently²¹. Nonetheless, social media can be valuable in identifying reliable KOLs.
The nephrology community is an example of how the use of social media has evolved. Before 2010, nephrologists commented on personal blogs and gradually moved to Twitter for quicker conversations. More formal outlets emerged, like the Nephrology Journal Club, which hosts chats or events around specific topics, and #AskRenal, which combs Twitter for renal questions and then retweets the message, thus expanding physicians’ reach²².
For another example, data published in Pediatric Critical Care Medicine between February 1, 2020, and May 1, 2020, showed that tweets with “#PedsICU” were shared 49,865 times on six continents and included links to literature, reviews, and educational reviews²³.
During the pandemic, social media became even more important, by enabling data to be shared immediately with a global audience. In 2021, the second year of the pandemic, Altmetric found, not surprisingly, that 98 of the 100 research articles receiving the most attention were related to COVID-19. They generated more than 2 million tweets and retweets, nearly 50,000 news articles and blog posts, 481 videos on YouTube, 282 Wikipedia citations, and 277 policy citations²⁴, indicating that social media is here to stay as an important way of sharing information. Breaking the data into the Top 100 Featured Subjects showed the change in emphasis in 2021 from vaccines and therapies to social science and policy issues²⁵.
By generating conversations around their research and opinions, researchers who tweet tend to garner more citations, according to a study in the Annals of Thoracic Surgery²⁶.
That fact isn’t lost on medical journals, either. The Lancet, the New England Journal of Medicine, and other high-impact journals tweet regularly. The Journal of Medical Internet Research reported in 2021 that promoting publications on Twitter improved metrics for academic medical journals, albeit as measured by alternative metrics²⁷.
Altmetric, for example, recently developed a new way of determining online impact factors in the form of field-normalized metrics²⁸. This method isn’t based upon discipline but does consider discipline in its calculations. At its most basic level, the algorithm includes the percentage of publications getting attention – mentions on Twitter, in Wikipedia, and in news outlets – as well as many other variables. The articles are then scored by age, subject area, and format, and by the attention they actually received versus what they were expected to receive.
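The “actual versus expected attention” comparison above can be sketched as a simple ratio against a peer cohort of articles of the same age, subject area, and format. This is only an illustration of the idea – the cohort data are hypothetical, and Altmetric’s actual algorithm involves many more variables.

```python
# Hedged sketch of a field-normalized attention score: an article's observed
# attention divided by the attention expected for its peer cohort (articles
# of the same age, subject area, and format). Cohort data are hypothetical,
# and this is not Altmetric's actual algorithm.
from statistics import mean

def normalized_attention(observed_mentions, cohort_mentions):
    """Ratio of an article's mentions to the mean of its peer cohort.
    A score above 1.0 means more attention than expected."""
    expected = mean(cohort_mentions)
    return observed_mentions / expected

# An article with 30 mentions, in a cohort averaging 10 mentions:
print(normalized_attention(30, [5, 10, 15]))  # 3.0
```

The point of normalizing this way is that a score of 3.0 means the same thing in a small subspecialty as it does in a high-traffic field like oncology, which raw mention counts cannot do.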
That may be fine for articles, but it doesn’t do much to identify the right KOLs for your scientific activities. That’s because, when it comes to KOL impact, social media presence is rarely reflected in author impact factor ratings. Dimensions, a sister company to Altmetric, includes Altmetric scores, which factor in social media, as well as patents, policy documents, research grants, and more²⁹.
Consequently, when selecting the most relevant thought leaders in a particular field for your projects, MA professionals should consider whether and to what extent both KOLs and digital opinion leaders (DOLs) are relevant for their strategy. Consider, however, whether a DOL’s online presence consists of earned content (which references their work and is shared by others) or owned content (their own communications). How often their data are shared by others is crucial, and is a key to determining whether they are advancing scientific knowledge or simply sharing the work of others.
Ultimately, whichever impact metrics you use to drive your KOL engagement, it is key that they also enable you to effectively measure your own medical imperatives in relation to specific experts. Applying impact metrics will help you to assess the relative impact of KOLs on your strategy:
- By scientific dissemination and/or the digital landscape
- With specific subject matter expertise right down to sub-categories
- Who has current or emerging knowledge (e.g. rising stars)
- Who can’t be identified using traditional methods
Also, using newer impact analytics and algorithms that regularly and frequently mine data from publications, conferences, clinical trials, and the digital landscape will deliver the most up-to-date information to you, tailored to your specific medical imperatives. More importantly, they will give you the ability to identify the right experts who can properly interpret and use the evidence you provide to ultimately improve patient care.
1. https://researchguides.uic.edu/if/impact, accessed April 6, 2022.
2. https://www.scimagojr.com/journalrank.php?category=2703&area=2700, accessed April 6, 2022.
3. Greenwood, Darren C., “Reliability of journal impact factor rankings,” BMC Medical Research Methodology, 2007;7:48.
4. Kaldas M., Michael S., Hanna J, Yousef GM, “Journal impact factor: a bumpy ride in an open space,” J Investig Med., 2020 Jan;68(1):83-87. doi: 10.1136/jim-2019-001009. Epub 2019 Jun 26.
5. The PLoS Medicine Editors, “The impact factor game,” PLoS Medicine, https://doi.org/10.1371/journal.pmed.0030291 June 6, 2006.
6. Jain A., Khor K.S., Beard D., Smith T.O., Hing C.B., “Do journals raise their impact factor or SCImago ranking by self-citing in editorials? A bibliometric analysis of trauma and orthopaedic journals,” ANZ J Surg. 2021 May;91(5):975-979. doi: 10.1111/ans.16546. Epub 2021 Feb 8.
7. The PLoS Medicine Editors, “The impact factor game,” PLoS Medicine, https://doi.org/10.1371/journal.pmed.0030291 June 6, 2006.
8. Laudano JB, Hong S, Wei G, Skirbe P, Matheis. Development of a New Algorithm and Scoring Metric for the Evaluation of Scientific Conference Impact. Poster #181. Presented at the MAPS 2018 Annual Meeting. Strengthening Medical Affairs Impact for Improved Patient Outcomes: Building Our Future Together. February 25-27, Miami, Florida
9. Beatty, Susannah, “Journal Metrics in Scopus: Sourced Normalized Impact per Paper (SNIP)“, Scopus, September 13, 2016.
10. Koltun V, Hafner D., “The h-index is no longer an effective correlate of scientific reputation,” PLoS One, Jun 28;16(6):e0253397. doi: 10.1371/journal.pone.0253397. eCollection 2021.
11. “San Francisco Declaration on Research Assessment,” DORA, http://www.sfdora.org, page accessed May 19, 2022.
12. “21,623 individuals and organizations in 158 countries have signed DORA to date,” DORA, https://sfdora.org/signers/, page accessed April 11, 2022.
13. Schweiger, C., “Ist Medical Affairs Effektivität messbar? Using Medical Affairs Analytics to Power Strategic Decision Making and Measure Success,” Medical Affairs Network Meeting, Medical Affairs Professional Society, Frankfurt, October, 24, 2018.
14. Robinson, William H.; Steinman, Lawrence (13 January 2022). “Epstein-Barr virus and multiple sclerosis”. Science. 375 (6578): 264–265. Bibcode:2022Sci…375..264R. doi:10.1126/science.abm7930. PMID 35025606. S2CID 245978874.
15. Bjornevik, Kjetil; Cortese, Marianna; Healy, Brian C.; Kuhle, Jens; Mina, Michael J.; Leng, Yumei; Elledge, Stephen J.; Niebuhr, David W.; Scher, Ann I.; Munger, Kassandra L.; Ascherio, Alberto (21 January 2022). “Longitudinal analysis reveals high prevalence of Epstein-Barr virus associated with multiple sclerosis”. Science. American Association for the Advancement of Science (AAAS). 375 (6578): 296–301. Bibcode:2022Sci…375..296B. doi:10.1126/science.abj8222. ISSN 0036-8075. PMID 35025605. Related non-technical article: Cox, David (20 March 2022). “Can we vaccinate against Epstein-Barr, the virus you didn’t know you had?” The Observer.
16. Ascherio A, Munger KL (September 2010). “Epstein-Barr virus infection and multiple sclerosis: a review”. Journal of Neuroimmune Pharmacology. 5 (3): 271–7. doi:10.1007/s11481-010-9201-3. PMID 20369303. S2CID 24409610.
17. Wetsman, N. “How Twitter is changing medical research.” Nat Med 26, 11–13 (2020). https://doi.org/10.1038/s41591-019-0697-7
18. Wolstenholm, Jack, “Social Media for Doctors: Are Doctor Networks Worth Your Time?” LeverageRX, https://www.leveragerx.com/blog/social-media-for-doctors/, page accessed May 19, 2022.
19. Economides JM, Fan KL, Pittman TA, “An Analysis of Plastic Surgeons’ Social Media Use and Perceptions,” Aesthet Surg J, 2019 Jun 21;39(7):794-802. doi: 10.1093/asj/sjy209. PMID: 30137192.
20. Leigh S., Noble ME, Pearson FE, Iremonger J, Williams DT, “To Tweet or Not to Tweet: A Longitudinal Analysis of Social Media Use by Global Diabetes Researchers,” Pharmaceut Med, 2021 Nov.
21. Zheng C, Wang W, Young SD, “Identifying HIV-related Digital Social Influencers Using an Iterative Deep Learning Approach,” AIDS, 2021 May 1.
22. Dave ND., Sparks MA, Farouk SS, “The Virtual Nephrology Community,” Medscape, April 22, 2022.
23. Kudchadkar SR, Carroll CL, “Using Social Media for Rapid Information Dissemination in a Pandemic: #PedsICU and Coronavirus Disease 2019,” Pediatr Crit Care Med, 2020 Aug;21(8):e538-e546. doi: 10.1097/PCC.0000000000002474.
24. Taylor, Mike, “Reimagining the Altmetric Top 100,” http://www.Altmetric.com, accessed May 19, 2022.
25. Taylor, Mike, “Top 100 2021 – Feat. Subjects,” http://www.altmetric.com, accessed May 19, 2022.
26. Coret M., et al, “Twitter Activity Is Associated With a Higher Research Citation Index for Academic Thoracic Surgeons,” Annals of Thoracic Surgery, Vol 110, Issue 2, P660-663, August 01, 2020.
27. Erskine N., Hendricks S, “The Use of Twitter by Medical Journals: Systematic Review of the Literature,” J Med Internet Res, 2021 Jul 28;23(7):e26378. doi: 10.2196/26378.
28. Taylor, Mike, “The Techno Remix,” http://www.altmetric.com, accessed May 19, 2022.
29. Impact Factors, Health Sciences Library, University of Washington
Joseph B. Laudano, BS Pharm, PharmD
Joseph B. Laudano is Vice President, Medical Affairs at Pharmaspectra LLC. Dr. Laudano has over 30 years of experience in the pharmaceutical industry. Before joining Pharmaspectra, he was Vice President of Medical Affairs at Alliqua Biomedical. Before joining Alliqua Biomedical, he was Senior Director of Medical Affairs and head of Publication Planning at Forest Research Institute. Prior to this, he spent 21 years at Roche Laboratories U.S. in Medical Affairs and Marketing in various roles, including Director of Medical Information, Product Director, and Medical Science Liaison. He was Roche’s first Medical Science Liaison, covering major institutions for the whole country, and paved the way for the creation of an entire team. Joe has extensive research experience in several different therapeutic areas including infectious diseases, dermatology, and oncology, and has authored numerous publications and scientific posters.
Gail Dutton, BS
Gail Dutton has covered the business of life science for more than three decades, writing about the evolution of biotechnology, management trends, human resources development, and related topics. Her writing has appeared in more than 45 print and online publications, including Genetic Engineering News, BioSpace, and Life Science Leader. She has presented to the National Defense University and the Genopole Paris conference and is particularly interested in new technologies driving innovation throughout the enterprise.
David Kelaher, BPharm, MSc
David Kelaher is Chief Medical Officer at Pharmaspectra. David has 20 years of experience in pharmaceutical Medical Affairs and healthcare communications, driving therapeutic portfolio success and international brand growth for several of the world’s leading companies, such as Sanofi, AbbVie, and AstraZeneca.
He was previously Managing Director of BBH Health, a specialist strategic and creative business within BBH, one of the world’s most awarded agencies. During his five years at the agency, BBH Health became a global agency of record for HUMIRA, Symbicort, and Brilinta, and led the global brand development and launch campaigns for Fasenra, Skyrizi, and Rinvoq. Prior to this, David spent a decade at Sanofi, including Medical Affairs roles in the Australian affiliate, and as a Global Medical Director in the company’s Paris headquarters.