If you are looking for updated references for your English-related studies, you can use the reference lists provided by the International Research Foundation for English Language Education (TIRF).
Summarizing various reasons for rejection of scientific manuscripts, Lucey (2015) proposed 12 dominant factors:
- Clarity: the paper needs to tell us what it is doing. If a host of good ideas are all crowding each other out within the confines of a modern journal article, that is going to present a problem. Without getting into salami slicing, where a host of papers are created from one base, each differing only minutely from the others, the rubric of “One for One”, one major idea per paper, is one to live by. That way you can present a tightly argued, clear, organised paper. The other ideas go in other papers. Ask yourself: is this tightly coherent?
- Fit: It still astonishes me how often I see papers that are simply not within the aims and scope of the journal. How hard can it be to check whether similar papers have been published in the last few years, to read the journal homepage, or perhaps even to email the editor or an associate editor? Again, this is not to say that journals shouldn’t, and perhaps even have a responsibility to, go outside the box a little, but sending a theory paper to an empirical journal, or a paper on international trade to one focusing on corporate finance, suggests sloppy preparation and a lack of clarity. Check if it fits.
- Contribution: I mentioned salami slicing. In empirical papers this most often appears where one or two variables or approaches are changed and a new paper produced. Thus one paper uses one methodology and another a similar one, with essentially the same set of explanatory variables. Catching this is really down to referees and editors, which is why the dreaded “robustness checks” are required to be shown. Let the reader feel they learned something.
- Triviality: Some things, if not known (can anything really be known in social science?), are well accepted. A paper that demonstrates already well-established findings, merely in another setting, is hard to publish. In my area this usually manifests itself as a paper that takes a concept or finding from developed or increasingly emerging markets, applies it to a frontier market, and reaches the same results. Salami slicing works this way also. Give the reader a solid reason for reading the paper.
- Coherence: Some papers are a mess. There is a good reason for the conventional layout (again, this is in my area): introduction, previous literature, data and methodology, findings, robustness checks, conclusions and recommendations. It aids the writer and, more importantly, the reader in understanding the flow of the paper. Too many or too few sections, lack of integration across them, a sense of multiple authors writing in multiple voices rather than one: all these make the paper hard to read and hard to understand. Remember, this is a discourse, a communication. Make the paper clear.
- Completeness: Some papers are simply not complete. With most publishers there is now a technical screening before a submission hits the editor: are the manuscript, tables, figures, data etc. all included? Has it passed the plagiarism screening? Is it legible? Sometimes people simply forget to include material. It is uncommon, but not unknown, to see papers that contain <to be added – Jim> or something similar. If it’s not complete, it’s not going anywhere. Complete the paper.
- Legibility: At times I feel like channelling Samuel L. Jackson, discussing linguistics with Brett in Pulp Fiction. English is overwhelmingly the language of academic publishing. If the language is poorly structured and riddled with syntactical and lexical errors, the paper is going to be rejected. Get it proofread, even if you are a native English speaker.
- Correctness: A paper needs to be very well constructed, especially if it is going to challenge established wisdom, and to leave the reader feeling that yes, there is a solid challenge. If the paper misses a whole pile of literature, has bad statistics, draws overambitious conclusions from fuzzy data, or is in general riddled with poor science, then it’s going to go down. Alternative perspectives are great, but being wrong is an alternative to being right. Check your science.
- Strength: This is often an issue when papers come from junior researchers or are driving forward a new area. At the end we want to know: so now what do we do, or where do we go? If the paper can’t tell us that, perhaps because of some of the other issues noted here, or because it took too long or rambled too much in getting to the point, then it is not going to prosper. Make it strong but grounded.
- Replicability: Data integrity and replicability are becoming key concerns of journal editors. Some have adopted a policy of having data and commands deposited with the paper. In general, however, the paper should be complete in its descriptions, so that someone with the same or similar data can reproduce the flow. Explain what data were used, where they were sourced, what cleaning was done, and so on; outline the theoretical steps; explain the experiments. Many of these explanations, which can be quite long, can now be placed in supplemental appendices, and should be. That way the paper itself can be short and pointed, and the interested replicator can go to the appendices for detail. If there is a sense that the work cannot be replicated, then the paper is incomplete and poorly written, and it will crash. Make it reproducible.
- Courtesy: The academy is quite small once you get into paper writing and reviewing. I have had occasion to reject a paper from a journal knowing, having been its reviewer at another journal just two weeks before, that the authors had made no effort to address my previous concerns. That doesn’t mean agreeing with them; it does mean addressing them. Sending a literally identical paper, after each rejection, to journal after journal will get you a bad reputation, and you WILL meet, as editors or other gatekeepers, people whose views you have blown off. Address the concerns.
- Bad Luck: Ideas and topics go into and out of vogue. It is not uncommon to see two or more similar papers addressing similar areas submitted at the same time. In that case there is an element of luck. Generally I will try to track back, via working paper dates, and see who has some claim on priority. This, by the way, is another reason why working papers and conference presentations are useful: they show intellectual priority. At any rate, Solomonic judgements are sometimes required. Be swift, but sure, I suggest.