We start our journal editor interview series with Judgment and Decision Making’s editor Jon Baron. JDM is the journal of the Society for Judgment and Decision Making (SJDM) and the European Association for Decision Making (EADM). It is open access, published on the World Wide Web at least every two months. JDM publishes original articles relevant to the tradition of research in the field represented by SJDM and EADM. Relevant articles deal with normative, descriptive, and/or prescriptive analyses of human judgments and decisions.
What makes you go “Wow!” or “Yuck!” when you first read a submission? Wow!: When it shines new light on a traditional JDM problem, including possible applications in the real world. I choose the lead article in each issue on the basis of this sort of reaction. (Of course, some issues have no article that merits the exclamation point, and some have more than one.)
Yuck!: When it applies the Analytic Hierarchy Process to the pipe-fitting industry in Pakistan. Or when it uses a tiny sample, with no replication, to show that people are at the mercy of subtle, unconscious forces. Or when it makes obvious statistical errors, like claiming an interaction on the basis of a significant effect next to a non-significant effect.
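To make that last statistical error concrete: a significant effect in one condition sitting next to a non-significant effect in another is not evidence of an interaction; the difference between the two effects must itself be tested. Here is a minimal simulation sketch (the Python code and all numbers are our own illustration, not anything from the interview) in which two conditions with the same true effect can easily land on opposite sides of p = .05, while the direct test of their difference finds nothing:

```python
# Hypothetical illustration (not from the interview): two conditions with the
# same true effect, where sampling noise alone can make one "significant" and
# the other not. The only legitimate test of the interaction is the direct
# comparison of the two effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20  # a small sample, as in the studies Baron criticizes

# Both conditions share the same true effect size (mean 0.5, SD 1).
a = rng.normal(0.5, 1.0, n)
b = rng.normal(0.5, 1.0, n)

_, p_a = stats.ttest_1samp(a, 0.0)   # effect in condition A vs. zero
_, p_b = stats.ttest_1samp(b, 0.0)   # effect in condition B vs. zero
_, p_ab = stats.ttest_ind(a, b)      # the interaction: A vs. B directly

print(f"A vs. 0: p = {p_a:.3f}")   # may fall below .05 by chance
print(f"B vs. 0: p = {p_b:.3f}")   # may fall above .05 by chance
print(f"A vs. B: p = {p_ab:.3f}")  # typically nowhere near .05
```

This is the point Gelman and Stern make: the difference between “significant” and “not significant” is not itself statistically significant.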
What are the common mistakes people make when submitting or publishing? Submitting to the wrong journal.
What are your best tips on how to successfully get published? Study big effects, or use large samples. Don’t waste time studying phenomena that are ephemeral and difficult to replicate, especially if you are trying to find moderators of such effects.
How are reviewers selected? When I handle papers – about half of them go to associate editors – I try to find the most expert reviewers who are willing to review, including members of the journal’s board when possible. This often takes several attempts; people say no. Often I use Google Scholar, as well as citations in the paper and authors’ recommendations.
How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? If you know someone who is an editor (including an associate editor), tell him or her that you are willing. I often ask grad students to review, but only if I know them to be experts on the topic of the paper. I am not willing to take a student’s word for this expertise, or to assume that being first author of a related paper is sufficient. Thus, personal knowledge is important.
I think that grad students should do occasional reviews. But anyone who keeps publishing is going to get asked to do more and more reviews. Be nice to editors (and other authors). If you get asked to do a review, respond quickly. Saying no immediately allows the editor to go to the next person on the list.
What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? What makes one a good reviewer? A good review explains why the paper is fatally flawed, if it is. Otherwise it provides helpful advice for revision, or (only if necessary) for additional research. What I find unhelpful are requests for more “theory”, as if theory were something like soy sauce.
How do you resolve conflicts when reviewers disagree? I regard reviews as information, not votes. They point out flaws I had not discovered, literature I did not know, or strengths that I did not appreciate. The review’s bottom-line recommendation is just a little more information. Thus, these recommendations are not conflicts that need to be resolved.
But reviewers also disagree about specifics, about what needs to be done. Here, I think it is my job to tell the author which of the reviewers’ comments to ignore, which to follow, and (if the review does not say) how to follow them (if I can). As an author, I find it annoying to be at the receiving end of conflicting reviews, with no idea what magic I must do in order to satisfy everyone.
What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? The best way to react to a revise/resubmit is to try to do what it says, or to explain politely and clearly why you cannot or should not do that. Or give up and try another journal if you think you are being asked to do the impossible.
The best way to react to a rejection depends on what it says. If it finds a fatal flaw that cannot be fixed, the best thing may be to regard the paper as a sunk cost and move on. In other cases, rejections are very specific to the journal, so you should just send the paper elsewhere. If you think a paper is good, don’t give up; keep sending it elsewhere. In still other cases, papers are rejected because more work is needed. Maybe do that work.
Is there a paper you were sceptical about but that turned out to be an important one? Not really.
As an editor, you get to read many papers and gain insight into emerging trends. What are the emerging trends in research topics and methodologies? On topics, I think that fields go through fads – well, let’s say “periods in which some topics become very popular and then gradually fade into the background”. These are often good things. Many come from external interest and funding, such as the enormous interest now in “nudges”, or the interest in forecasting and prediction arising from the recent IARPA competition. Around 1991 the Exxon Valdez case inspired (and funded) a great deal of JDM research on contingent valuation and value measurement in general.
On methods, our field is slowly but surely catching up with the enormous increase in the power of computers and the Internet. Data analysis is becoming more sophisticated. A variety of approaches are being explored (including Bayesian ones). Web studies are becoming more numerous and more sophisticated. People are making use of large data sets available on the Web, including those they make themselves by mining data.
What are the biggest challenges for journals today? The biggest is integrity. The work of Simonsohn, Simmons, Nelson, Ioannidis, Pashler, Bar-Hillel (earlier), and others on p-hacking, file-drawer effects, basic statistical errors, and outright fraud has raised serious questions about what journals should and can do. The problems vary by research area. Medical research and social psychology are probably worse than JDM. But I am still trying to work out a way to deal with this problem. Asking for data and for sufficient stimulus materials for replication is a step. I spend a lot of time checking data analysis with the data that authors send.
The next biggest challenge is how to take back scholarly communication from those who seek to profit from it by building paywalls of one sort or another, including both subscription fees and publication charges. I have ignored this problem, hoping that it will go away or that someone else will solve it (e.g., by endowing JDM with $500,000). Right now, JDM has neither type of fee, because I do the production and “office work”. Other journals work this way, but their authors all submit papers in LaTeX. My job would be easier if Microsoft Word did not exist. Maybe I will outlast it, and then the problem will be solved for the next editor. But a little money – nowhere near as much as proprietary journals get – would still help, and I don’t know where to get it.
The third biggest challenge is how to get rid of the perverse incentives that arise from the use of the “impact factor” of a journal for evaluation of authors of papers in that journal. Journals cannot do much about it, except perhaps to stop advertising their impact factors in large print.