We won! To follow up on my last post: At a debate today at the Royal College of Nursing’s so-called “fringe session” at its annual International Nursing Research Conference, Elizabeth Anionwu, emeritus professor of nursing at Thames Valley University in Middlesex (near London), joined me in arguing in opposition to the statement, “research should be published in the highest impact journals available.”
On the affirmative side were Michael Traynor, the Trevor Clay Professor of Nursing Policy and head of the Centre for Research in Healthcare Practice and Policy at Middlesex University in London, and Kate Seers, director of the Royal College of Nursing Research Institute at the School of Health and Social Studies of the University of Warwick, also in the UK. Jane Salvage, an independent health consultant who was recently appointed to head the Commission on the Future of Nursing in England (and who recently became a contributing editor of AJN), moderated the session and polled the audience on the statement before the debate began. A clear majority of those who voted (by a show of hands) supported the statement. When the audience was polled again after the debate, the vast majority had switched to the opposing side.
Elizabeth and I focused on three points.
• First, the “journal impact factor” (JIF), the statistic used to measure a journal’s influence, is flawed, unscientific, and calculated in a rather secretive way.
• Second, “impact” is the wrong word and mischaracterizes what the number measures. Even fraudulent research can be published in “high-impact” journals: two fraudulent papers published in Science later had to be retracted, yet by then they had been cited hundreds of times. The JIF isn’t a measure of the quality of individual studies or papers.
• Third, nurse researchers should consider other factors when evaluating the “impact” of their publications, such as online page views, the extent of public media reporting on the work, and the work’s influence on clinical practice and health policy (something that is very hard to quantify).
Traynor argued that bedside nurses don’t read or use research, so disseminating it to those in practice should not be a consideration. I disagreed, and after our session some researchers reported on an unprecedented research initiative that ran from 1967 to 1973 and focused on various aspects of nursing care. One of its purposes was to involve bedside nurses in research and prepare them to continue that involvement. Something has happened here in the UK in the subsequent 35 years to widen the gap between nurse researchers and practicing nurses. I attributed the U.S. movement for bedside nurses to engage in clinical research, and the development of evidence-based practice, to their work in Magnet hospitals, something that has not caught on here. But that’s for another posting. –Diana Mason, PhD, RN, editor-in-chief of AJN.
That’s interesting, Mark. Actually, there are several ways a journal editor can raise a journal’s JIF. One is to write editorials that cite the journal’s own publications: the citations in the editorial count in the numerator, but the editorial itself doesn’t count in the denominator. It’s clearly a conceptually flawed ratio. The JIF has its place, but not the one it’s being given, by universities in particular. Many researchers are being driven by tenure and promotion committees that give JIFs far too much weight.
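To make that asymmetry concrete, here’s a rough sketch of the arithmetic with invented numbers (not taken from any actual journal). A journal’s JIF for a given year is essentially:

JIF(2008) = (citations received in 2008 by items the journal published in 2006–2007) ÷ (citable items, i.e., articles and reviews, the journal published in 2006–2007)

Suppose a journal published 200 citable articles in 2006–2007 and they drew 1,000 citations in 2008; its JIF is 1,000 ÷ 200 = 5.0. If the editor then writes a few 2008 editorials that together cite 50 of the journal’s own 2006–2007 papers, the numerator climbs to 1,050 while the denominator stays at 200 (editorials aren’t counted as citable items), and the JIF rises to 5.25 without a single additional citation from outside the journal.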
I’m pleased you made those points about the JIF being flawed. I believe CA: A Cancer Journal for Clinicians has far and away the highest JIF year after year because so many other articles cite its annual “Cancer Statistics” article.