For some reason, suggesting that university faculty are engaged in a pay-to-publish scheme tends to burn bridges and create tension.
Here is a brief review of the research paper.
Note: This review is not a factual determination one way or another of the validity of the findings, but just an overall critique of the paper.
Quotes From The Paper
“This study is the first to compare the rewards of publishing in predatory journals with the rewards of publishing in traditional journals. It finds that the majority of faculty with research responsibilities at a small Canadian business school have publications in predatory journals. In terms of financial compensation, these publications produce greater rewards than many non-predatory journal publications. Publications in predatory journals are also positively correlated with receiving internal research awards. By improving the understanding of the incentives to publish in predatory journals, this research aims to contribute to a better-informed debate on policies dealing with predatory journals.”
Okay, this is just the opening summary, but the point is clear: so-called “predatory journals” appear to be the more lucrative option, both financially and in terms of professional rewards.
“When academics publish in these journals, their university affiliations contribute to the credibility of the journals. Because decision makers and the public may lack the expertise to distinguish between nonsense and legitimate research, they may be led to suspect expert opinion in general. In addition, when academics are rewarded for publishing in predatory journals, the research incentives of their universities are distorted.”
This is a bad combination: publishing in predatory journals distorts researchers’ reward incentives, while the people judging the resulting research often lack the expertise to tell it apart from legitimate work.
“The university does not have merit pay for research success, but publications affect compensation in several ways:
1. through initial academic rank and placement of individuals on the salary grid;
2. through the speed at which individuals are promoted and thus pass the salary ceiling for their existing rank; and
3. by the opportunity cost of time spent on research in lieu of earning opportunities.
The first two considerations imply a positive relationship between publication success and compensation, while the third implies a negative relationship.”
Interesting observations. #1 and #2 are “indirect” rewards gained from publishing, while #3 is the income forgone by spending time on research instead of paid work (such as extra teaching).
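The three channels above can be sketched as a back-of-the-envelope expected-payoff calculation. To be clear, the numbers below are entirely invented for illustration; the paper reports no such figures, and `net_payoff` is a hypothetical simplification, not the author's model.

```python
# Hypothetical illustration of the three compensation channels above.
# All parameter values are invented; the paper reports no such figures.

def net_payoff(salary_bump, acceptance_prob, hours_spent,
               hourly_opportunity_cost, fee):
    """Expected net reward of one submission:
    (channels 1 & 2) expected salary gain, minus
    (channel 3) forgone earnings from time spent, minus any publication fee.
    """
    return (salary_bump * acceptance_prob
            - hours_spent * hourly_opportunity_cost
            - fee)

# Ranked journal: large salary bump, low acceptance odds, many hours of
# revisions, no publication fee.
ranked = net_payoff(salary_bump=10_000, acceptance_prob=0.10,
                    hours_spent=300, hourly_opportunity_cost=50, fee=0)

# Predatory journal: smaller bump, near-certain acceptance, little time,
# but an author-side fee.
predatory = net_payoff(salary_bump=2_000, acceptance_prob=0.95,
                       hours_spent=40, hourly_opportunity_cost=50, fee=500)

print(ranked)     # 10000*0.10 - 300*50 - 0   = -14000
print(predatory)  # 2000*0.95  - 40*50  - 500 = -600
```

Under these made-up parameters the predatory route loses less money per submission, which is the shape of the incentive problem the paper describes: the opportunity cost of time dominates whenever rejection rates and revision effort are high.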
From the paper's literature review: “Several articles have examined the relationship between journal publications and faculty compensation. For example, Sen, Ariizumi, and DeSousa studied the relationship between the research productivity of economics faculty in Ontario universities and their salaries. Contrary to the present study, they found that publications in top journals were positively correlated with salary increments but that publications in lower-ranked journals were not related to salaries.”
A fairly intuitive conclusion, and one backed up by further research: publishing in top journals brings higher pay, while publishing in lower-ranked journals has little effect on salary.
“[Beall’s] six pages of criteria for evaluating journals largely relate to dishonest practices. Examples include not conducting ‘a bona fide peer review,’ copying or mimicking journal titles from other publishers, identifying the publisher’s owner as the editor of each and every journal published by the organization, not identifying a specific person as the editor, two or more of the publisher’s journals having duplicate editorial boards, and the publisher falsely claiming to have an ISI impact factor or purchasing ‘fake impact factors’ services. Publishers who believe they have been wrongly included can apply to a four-person appeal panel for removal.”
Interesting signs to look for:
- no bona fide peer review
- copying or imitating journal titles from other publishers
- identifying the publisher’s owner as every journal’s editor
- not naming a specific editor
- two or more journals sharing duplicate editorial boards
- false claims of an impact factor, or purchased “fake impact factor” services.
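The warning signs above amount to a checklist, which can be expressed as a simple red-flag counter. The criterion names and the idea of counting them are my own simplification for illustration; this is not Beall's actual methodology, which weighs criteria qualitatively.

```python
# Hypothetical sketch: the Beall-style warning signs above as a red-flag
# counter. Criterion names are my own labels, not Beall's wording.

RED_FLAGS = [
    "no_bona_fide_peer_review",
    "mimics_other_journal_titles",
    "owner_listed_as_editor_of_all_journals",
    "no_named_editor",
    "duplicate_editorial_boards",
    "fake_impact_factor_claims",
]

def red_flag_count(journal_traits):
    """Count how many of the listed warning signs a journal exhibits.

    journal_traits: dict mapping each criterion name to True/False;
    missing criteria are treated as False.
    """
    return sum(bool(journal_traits.get(flag, False)) for flag in RED_FLAGS)

# A journal exhibiting three of the six warning signs:
suspect = {
    "no_bona_fide_peer_review": True,
    "no_named_editor": True,
    "fake_impact_factor_claims": True,
}
print(red_flag_count(suspect))  # 3
```

A real screening process would, of course, involve human judgment (and an appeal panel, as the quote notes), not a mechanical threshold.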
“Bohannon conducted a ‘sting operation’ by submitting a scientifically flawed paper to 304 open access journals, some on Beall’s list. Eighty-two per cent of the journals on Beall’s list accepted the paper; thus he concluded that ‘Beall is good at spotting publishers with poor quality control.’
Ray argues that predatory journals may be able to screen for hoax articles. Thus, her approach was to submit essays written by eighth- and tenth-grade secondary school students to ten open access journals. Of the nine who responded with an editorial decision, six accepted the paper without revisions, and only one rejected the paper. The paper was rejected for being too short, but the journal suggested to the author that it be expanded and resubmitted.”
A nice way to screen academic journals for legitimacy: do a little investigative journalism and see whether they will literally publish anything. Several pages of data and charts are then presented in the paper.
From the discussion and conclusions: “Predatory journals have become an increasing problem when it comes to assessing and rewarding researchers for the merit of their publishing records. In addition, the presence of predatory journals makes it difficult for non-experts to judge the quality and validity of published research. This paper finds that, at least at one university, there are few incentives not to publish in predatory journals. In addition, when the opportunity cost of forgone income from extra teaching is significant, publishing in ranked journals is costly.
A number of questions for future research on predatory publication are raised. A key question is the degree to which these findings are generalizable to business schools, and other faculties, at other universities. The similar proportions of questionable publications reported by Ray suggest that the results may be generalizable to other business schools, but additional research is needed. This type of research involves time-consuming data collection, and answering these questions would require significant research support. However, the benefits of better understanding the market for predatory publications would be substantial. For example, such data could be used to study whether faculty research output is improved when administrators also have a research background.”
To summarise: the author makes a pretty compelling case, backed up by data, that publishing in so-called “predatory journals” is economically the better choice, both in time (far fewer rejections and revisions) and in money (given the opportunity costs of publishing in ranked journals).