When I was much younger, the editor of a well-known journal accepted my paper on the condition that I cite relevant papers from his journal.
The editor even suggested a few citations. The logic: to better connect the paper to the journal and its tradition.
Often, though, the request is for irrelevant citations.
While the practice is condemned by many, this editor isn’t the only one I’ve seen ask for citations to their own journal - using exactly the same logic - and to good effect.
The citation counts and impact factors for these journals are pretty good.
Lately, I’ve seen the practice extend to initial submissions. I’ve had papers desk-rejected, and heard tell of others, for not citing the journal enough.
What bothers me more is that I’ve seen the practice extend to the review team - with ‘convenient’ citations from the journal suggested in reviews.
Because the practice is pervasive, I have one colleague who coaches their students to add citations from the target journal before submitting. I have another colleague who says to only suggest reviewers whose work you cite, especially if their work appears in the journal.
This advice is rational.
This advice makes me sad.
Have we really reached the point where it’s OK for editors, reviewers & advisors to encourage gratuitous citations?
Yes. We have.
While people may complain about the practice, there are no negative consequences for journals that make it standard practice.
Absent sanctions for the journal, authors have little choice but to anticipate or comply with these requests.
Advisors are not wrong to advise students about the reality of publication - cite papers in the target journal - even if those papers aren’t very good.
Why?
Because, other than inflating impact factors, these citations don’t matter much. They don’t make the science any better or worse.
And accepted papers earn you job offers & tenure.
Yet.
It doesn’t feel very good to advise PhD students to engage in shady practices or comply with predatory requests. It teaches bad habits.
So what to do?
Rather than pyrrhic journal ranking exercises, professional associations & groups should consider an ethical scoring system for journals with three parts:
The conduct of the editor - do they demand citations to their own journal? Do they publish in their journal? If so, how is the review handled?
The timeliness of the reviews - do they have reasonable review cycles? How long does it take from submission to acceptance?
The enforcement of conflicts of interest - do they have a written policy? How is it enforced? Who enforces it?
After a baseline is established, a journal that consistently receives low marks could face consequences - such as being dropped from the ABDC or FT50 journal lists.
Making a rating scheme work will require publishers, scholars, and associations to come together.
All it will take is a determined group of scholars!
Let’s build a better academe!
