Inside the Black Box: Psychological Bulletin


Psychological Bulletin is a bimonthly peer-reviewed academic journal that publishes evaluative and integrative research reviews and interpretations of issues in psychology, including both qualitative (narrative) and quantitative (meta-analytic) aspects. Editor-in-Chief Stephen Hinshaw gives us insight into this journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? Our journal (Psychological Bulletin) is different from most others, in that it publishes only lengthy, synthetic review papers across the entirety of psychology and behavioral science. So I look for deeply conceptual introductions and systematic reviews of the primary literature, written in an accessible yet still scholarly fashion.

What are the common mistakes people make when submitting/publishing? Not reading instructions carefully (or at all), so that we sometimes receive single empirical studies or extremely preliminary ‘review’ papers suggesting leads for further study (but not providing a deep review of a mature literature).

What are your best tips on how to successfully get published? Research, research, research your topic and revise, revise, revise your writing.

How are reviewers selected? In consultation with Associate Editors, I scour reference sections and consult lists of experts in various subfields.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? See if a more senior colleague will ask the editor to enlist you as a co-reviewer (with the editor’s permission).

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? What makes one a good reviewer? Good reviews are thoughtful, respectful, and reveal deep knowledge of the topic, showing how the paper does or does not provide an advance in that field.

How do you resolve conflicts when reviewers disagree? Careful reading and rereading of the paper… and sometimes going to a Consulting Editor for a ‘tie-break’ review.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? The worst way is to get overly defensive and battle every point of the reviews.

Is there a paper you were skeptical about but that turned out to be an important one? Yes, sometimes initial submissions that didn’t really deliver can be greatly improved with substantial revision.

As an editor, you get to read many papers and have insight into emerging trends. What are the emerging trends in research topics/methodologies? PB is so broad that it’s hard to see emerging trends across all of the sub-facets of the field.

What are the biggest challenges for journals today? Finding and engaging willing reviewers, keeping up with the flow of submissions, battling ‘crank’ journals.

Journal home page

More ‘Inside the Black Box’

Inside the Black Box: Medical Decision Making

Medical Decision Making (MDM) is a peer-reviewed journal published 8 times a year offering rigorous and systematic approaches to decision making that are designed to improve the health and clinical care of individuals and to assist with health policy development. MDM presents theoretical, statistical, and modeling techniques and methods from disciplines including decision psychology, health economics, clinical epidemiology, and evidence synthesis. Editor-in-Chief Alan Schwartz gives us his insights into this journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission?

  1. “What’s new?” – What will I learn from this paper that I didn’t know before?  A paper presenting an original approach to a problem, or an extension of past approaches, or a first replication of a previously unreplicated finding is exciting to read. At Medical Decision Making, what’s new is usually a new method for studying or improving decisions, but sometimes it’s an exemplary application of prior methods. (Of course, some journals, like PLOS One, have explicitly chosen not to use this criterion).
  2. “What’s true?” – How do I know that I can rely on the results? Are the methods rigorous, sound, and appropriate for the question? Did the authors interpret their findings appropriately, without overgeneralizing?
  3. “So what?” – Why was this study proposed in the first place? What motivates the research question, and is it an important question in the context of the field and our current knowledge?
  4. “Who cares?” – Is this paper right for the readers of my journal, or does it belong somewhere else? A straightforward clinical trial comparing two drugs — or a basic psychology study of non-medical decision making — probably doesn’t belong at Medical Decision Making.

What are the common mistakes people make when submitting/publishing? 

My top three:

  • Failing to motivate the research question or ground it in a theoretical or conceptual framework. Theory is important.
  • Overstating the conclusions and ignoring limitations. Your paper doesn’t have to be the final word or solve every problem.
  • Sending to the wrong journal (violating the “who cares?” principle).

What are your best tips on how to successfully get published? Be open to feedback. Before you send a paper out, it should be the best paper you can write, so you should have had friends and mentors read and criticize it. If you can anticipate issues that a critic might raise, address those forthrightly. When you receive reviews, pay attention to them. If you don’t understand something a reviewer says, don’t ignore it — ask the editor for guidance.

How are reviewers selected? At Medical Decision Making, as at many journals, we have experienced reviewers on our editorial board and in our reviewer database. We find new reviewers through suggestions from authors (yes, you may suggest potential reviewers, and yes, we will often invite at least one of your suggestions if we agree that they really have specific content expertise) and through looking at the paper’s citations and related literature ourselves and seeing who else is working in the same area.

Our goal is to ask for reviews from experts whose reviews not only advise the editor on the disposition decision but are valuable to the authors, whether or not we publish the paper.  We’re fortunate at MDM to have really outstanding reviewers, and many first-time authors comment on how helpful the reviews have been.  We also score our reviews, and reviewers who do a poor job tend to get selected against in the future.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? I’m a big proponent of reviewing both papers and grant applications; I think you learn a lot from reading very good (and sometimes poor) writing, and from comparing your review with those of the paper’s other reviewers and the editor (at MDM, we cc our decision letters to the reviewers). One good way for PhD students to get some experience with this is to do a “mentored review” with their advisor when their advisor is asked to review a paper. Many journals will allow the invited reviewer to share the review with a student as long as the invited reviewer supervises and takes responsibility for the review. Post-PhD, as a postdoc or junior faculty member, if you haven’t already been asked to review for a journal you’d like to review for, you can often contact the editorial office and ask to be added to the reviewer database. Of course, submitting a paper to the journal and filling out your author profile with a good set of keywords for your expertise is also likely to lead to reviews in the future.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? This is a matter of editorial taste, but I really like to see a review that begins by looking at the big questions and pointing out the strengths of the manuscript (or at least of what the authors hoped to achieve through the study), and then moves on to detailed constructive criticism about methods, and presentation and interpretation of results. The review should conclude with minor concerns or suggestions for improving the writing.

Some little things that are very helpful: Number the points in the review to make it easier for the author to respond point by point. Refer to parts of the manuscript by page number and line number to help the author locate exactly what you’re asking about. Make it clear to the author when you’re making a suggestion (e.g., “please describe the factor rotation strategy in more detail”) and when you’re asking a (non-rhetorical) question (e.g., “why did you expect patients to be more influenced by attribute range than attribute context?”). Don’t say (in the comments to the author) whether the paper should be rejected or accepted – that’s the editor’s job. Definitely don’t recommend rejection privately to the editor and then write a wholly positive review for the author.

How do you resolve conflicts when reviewers disagree? Reviewers advise; editors decide. I’ll admit to a little bias: when good reviewers disagree, I think that means there’s something important to work out, and I’ll usually ask the author to help the reader understand both perspectives and how the author chose to resolve them. There isn’t a single right way to study something. On rare occasions, reviewer disagreement lends itself to inviting one or both reviewers to write an editorial about the study, if we’ve decided to publish it.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? A revise and resubmit is a positive signal, especially from a paper journal that has a limited page budget. It usually means that the editor thinks there’s something important in the paper to make it worth spending more editorial and reviewer time on, and that you’re capable of addressing the reviewer concerns. So that’s easy – always resubmit, and always include a cover letter addressing each point made by each reviewer. That can mean explaining why you didn’t choose to make a suggested change, but pick your battles: a wholly unresponsive revision is not going to go very far with the editor.

Medical Decision Making also has a category of initial decision called “reject and resubmit”. This means that the editor doesn’t want the paper or a revision of it, but thinks there might be a different, related paper you could write that would be competitive. The new paper gets the full peer review treatment, usually with different reviewers.

A flat rejection – well, when I get those, I usually shake my fist at the sky, eat a piece of chocolate, and get a good night’s sleep. Then I see what useful information I can get from the reviews and improve the paper to send it elsewhere. Uncertainty is a fundamental fact of life.

The worst way to react to a rejection is to send a nasty email to the editor-in-chief to try to bully him into reconsidering the decision and to threaten that you will never send your priceless work to that journal again. Yes, that happens (especially early in my term). We have an appeals process if it’s clear that a reviewer or editor deeply misunderstood something, but that’s not it.

Is there a paper you were skeptical about but that turned out to be an important one? I think I’m still too early in my editorship to know. In about 3 years, though, I’d be interested in looking at that — collecting the top 10 important papers we’ve published based on reader response and looking back at my notes to see how many of those I only assigned to an associate editor reluctantly.

As an editor, you get to read many papers and have insight into emerging trends. What are the emerging trends in research topics/methodologies? That’s one of the best parts of the job. Right now, MDM is publishing a lot of innovative work in simulation models and value of information analysis methods. Approaches to utility models are emerging in which econometric and behavioral research are triangulating on phenomena that call into question some longstanding simple assumptions of health state valuation — for example, that the proper unit on which to assess utility is the individual decision maker. And there’s a lot more interest in dual process theory and decision psychology/behavioral economics manipulations of the decision environment in order to understand and improve health decisions.

What are the biggest challenges for journals today? There’s a great debate going on right now about open access models for science journals and how publishers do or don’t contribute to science, but in some ways, I think that’s just the opening act for a larger discussion of the value of an expert peer review process vs. open publishing and crowdsourced reviewing. I want to see good science clearly communicated, and journals need to demonstrate to their readers that they are promoting those ideals.

Journal website

More ‘Inside the Black Box’

Inside the Black Box: Psychological Review

Psychological Review, founded in 1894, is one of the most prominent journals in psychology today. Psychological Review focuses on psychological theory and publishes papers that make important theoretical contributions to psychology. Associate Editor Prof. Susan Fiske gave us more insight into the journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? A clear statement of the argument in the title and abstract enables an immediate evaluation of an article’s contribution. It is amazing how often authors fail to be clear about their hypothesis and its significance.

What are the common mistakes people make when submitting/publishing? Failing to check whether the article is appropriate for that journal.

What are your best tips on how to successfully get published? Aha! Plus Evidence.  Good ideas, backed up by good science.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? Participating in a grad-led journal club, reading and critiquing published articles. Telling one’s advisor that one would like some reviewing experience. When asked, returning high-quality, on-time reviews.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? Balanced, thoughtful, succinct.

How do you resolve conflicts when reviewers disagree? Reviewers often disagree because they are recruited for differing expertise. Editors must consider the inputs relative to the expertise and perspective of the reviewers.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? An R&R needs to list, in a cover letter, each response to each suggestion, including explanations of (rare) instances of declining to make certain changes.

Is there a paper you were skeptical about but that turned out to be an important one? None that comes to mind.

As an editor, you get to read many papers and have insight into emerging trends. What are the emerging trends in research topics/methodologies? Interdisciplinary and global collaborations.

What are the biggest challenges for journals today? Maintaining humanity despite the volume.

Journal home page

More ‘Inside the Black Box’

Inside the Black Box: Journal of Behavioral Decision Making

The Journal of Behavioral Decision Making is a multidisciplinary journal publishing original empirical reports, critical review papers, theoretical analyses and methodological contributions. The Journal also features book, software and decision aiding technique reviews, abstracts of important articles published elsewhere and teaching suggestions. The objective of the Journal is to present and stimulate behavioral research on decision making and to provide a forum for the evaluation of complementary, contrasting and conflicting perspectives. These perspectives include psychology, management science, sociology, political science and economics. Studies of behavioral decision making in naturalistic and applied settings are encouraged. Associate Editor Frank Yates gives us his insights into the journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? Papers that surprise me because of their conclusions or even their topics are the ones that most often make me go “Wow!” So do papers that exhibit different ways of thinking about old topics, ones that force me to say, “I’ve never seen this idea before. That is so cool.” I must also say that I love manuscripts that clearly point toward ways that people can decide better than they normally do. And if the authors can complement their messages with concrete and elegant illustrations, that’s even better.

I can honestly say that I have never said “Yuck!” in response to a submission, even to myself.  I realize that virtually every submission I have ever seen represents the culmination of a huge investment by the authors.  I would feel guilty if I didn’t acknowledge that investment.  On the other hand, certain aspects of some submissions do make me groan (“Oh, no!”).  Papers that are weakly motivated tend to do that, especially ones that report the results of studies that I infer began with nothing more than: “I wonder what would happen if we tried this manipulation.”  Groans are also evoked by papers that are unnecessarily hard to read.  This happens, for instance, when papers are overly abstract or are too tedious and too long relative to the significance of their messages.

What are the common mistakes people make when submitting/publishing? Well, whatever actions produce the kinds of groans I just mentioned are good examples.  An especially common error that authors make is assuming that the reader appreciates and knows as much about their focal research problem as they do.  This makes their papers dull and impenetrable.  One way to avoid this is to simulate the journal reading experience in advance of submission.  That is, recruit friends and colleagues who are similar to the journal’s audience but are naïve to the topic.  Then ask them to review and discuss with you in detail their interpretations of your work.  You are virtually guaranteed to uncover misconceptions that will amaze you: “Really?  That’s what you thought I meant?”

What are your best tips on how to successfully get published? My first suggestion is to choose to work on problems that people can easily be convinced are interesting and important to solve, and then solve them. Our field is like baseball, where an outstanding hitter fails 2/3 of the time. That being the case, successful authors must necessarily be unusually energetic and well organized. That is, they must always be working on several projects simultaneously. Therefore, despite the low “hit rate,” they maintain a steady flow of results that are ready to submit for publication. My second tip is to learn to view and use reviewers as one’s collaborators. Typically, reviewers are among the most knowledgeable people in the world concerning an author’s focal problem. So why not exploit the expertise underneath their comments to sharpen your writing, your thinking, and your next studies?

How are reviewers selected? My goal is to have every submission read critically and constructively by 2-4 people who know more about an author’s research problem and related topics than just about anyone else in the field. Some of these people are likely to be on our editorial board. Many others will have been authors of articles cited in the manuscript. Because JBDM is a multidisciplinary journal, we make a special effort to have at least two different specialties represented on every team of reviewers.

How can a young researcher become a reviewer? When is the best time during one’s PhD training to start doing so? In my view, the best time for a PhD student to start reviewing is after he or she has developed expertise and credibility in a particular area of research and therefore would have something useful to offer and to gain as a reviewer.  Having successfully published in that area is usually a safe indicator of such expertise.  At that point, the student might be well advised to write to editors of journals that publish work in the student’s area of specialization, volunteering to review occasional submissions on particular topics.  The response is likely to be immediate and positive, since editors are always on the lookout for good reviewers.

Since reviewing is hard work and takes time, why would (or should) a PhD student want to serve as a reviewer? What exactly is there to gain from doing so? One reward is the sense of contributing to the advancement of the field at its cutting edge. But the main advantage is the unparalleled potential for learning and inspiration. The author, reviewers, and action editor for a journal submission essentially form an especially exciting (and consequential) expert seminar on a topic of great interest to everyone involved. Moreover, all the members are highly motivated to get things right. The reviewers and editor work really hard to make sure that they do justice to the author’s contributions. In addition, no one wants his or her comments to appear foolish to the rest of the group. Finally, the review process often serves to spark new insights and research problems in the minds of all the participants—authors, reviewers, and editors.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? Good reviews offer valid analyses of the author’s ideas, reasoning, and methods, answering the question: “Is this legitimate?” In addition, though, for the benefit of the author as well as the action editor, a good review also clearly explains how the reviewer arrived at his or her conclusions. The best reviews also provide helpful suggestions and guidance, including useful references and even design ideas that might help settle key questions that have been left unresolved by the author’s current efforts. And good reviews are never mean-spirited.

How do you resolve conflicts when reviewers disagree? Although it is tempting to do so, I never rely on a simple “vote count” among the reviewers.  Instead, I try to understand why the reviewers disagree.  In my experience, more often than not, reviewers only appear to disagree because they are focusing on different aspects of the author’s work.  This frequently occurs because the reviewers’ own research programs have different foci.  So, relying on the specifics of the reviewers’ analyses as well as my own reading of the manuscript, I arrive at summary conclusions about the sensible disposition of the submission—acceptance, revision and resubmission, or rejection.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? This is an exceptionally important issue.  My sense is that such reactions are often critical determinants of many people’s career paths.  I have known numerous researchers who seem to have found rejections so demoralizing that they eventually abandoned their research careers.  But I have also known several highly productive investigators whose success seemed largely traceable to how they dealt with the emotions evoked by negative feedback in the review process.  Three elements of their strategies seemed to stand out and perhaps deserve emulation:

#1: Expect reviewers to identify weaknesses in your work, and accept that as a good thing; you have the opportunity to benefit from their expertise.

#2: Don’t allow yourself to brood over negative comments, even rejections; instead, immediately start working on your next move, be it a revision, a clarifying follow-up study, or the abandonment of a now-recognized dead end.

#3: If and when you revise and resubmit, in your cover/response letter, make sure to respond—respectfully—to every major reviewer comment; reviewers simply hate being ignored.

Is there a paper you were skeptical about but that turned out to be an important one? That’s a funny question. There definitely have been a few such papers, and you have probably read them. It clearly would be unwise for me to identify them, though.

I assume that you asked the question because you wonder why my initial appraisals of such papers were “off” and what can be done to try to reduce such mistakes, or at least their impact. One basis for misappraisals, which seems uncommon, is that reviewers and editors sometimes misjudge the technical quality of authors’ reasoning and methods.  Given human fallibility, occasional misjudgments like that are inevitable.  I recommend that authors maintain vigilance for such mistakes and call attention to them in cover/response letters for submissions of revisions—again, respectfully.  Another reason that editors occasionally underestimate manuscript potential is that some authors prove to be unusually good at making marked improvements from one revision to the next.  They are especially adept at building on reviewer and editor comments.  Cultivating that skill seems wise.

As an editor, you get to read many papers and thus have insight into emerging trends. What are the emerging trends in research topics/methodologies? Perhaps the most obvious trend is toward studies that focus on biological underpinnings, or at least correlates, of overt decision behaviors. Our field has also seen a noticeable, and perhaps surprising, uptick in efforts to understand the role of time in people’s decision making. There has been a good bit of excitement about the involvement of emotions in decision making, too. Yet another noticeable trend has been toward assessing and explaining individual differences in decision making character and quality.

What are the biggest challenges for journals today? In my view, two related challenges are at the top of the list. The first is the increased volume of journal submissions. The second is the need for good reviewers to read and respond to those submissions, thereby accelerating the advancement of the field.  The problem seems to be exacerbated by increasing institutional pressures on potential reviewers to perform other duties.

More ‘Inside the Black Box’

Inside the Black Box: Frontiers in Psychology


Next in our Inside the Black Box series is Frontiers in Psychology, an open access journal that aims to publish the best research across the entire field of psychology, featuring articles on the most outstanding discoveries across its entire research spectrum. The mission of Frontiers in Psychology is to bring all relevant specialties in psychology together on a single platform. Field Chief Editor Axel Cleeremans gives us his insights into this journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? I go “Yuck!” instantly if the paper looks like it’s poorly written, if the figures don’t look good (see Tufte’s advice on that), if it contains typos, or if it looks very verbose or boring. There is an important message there: If you don’t fine-tune the presentation of your findings, it’s as good as nothing.

“Wow!” can result from different factors. Sometimes it’s the finding itself — for instance, I find Geraint Rees’s recent demonstration that one’s experience of the Ebbinghaus illusion is inversely proportional to the size of one’s V1 stunning. Other times it’s the sheer power of technique — Bonhoeffer’s applying two-photon microscopy to visualize synaptic growth in vivo is a good example of that. The cleverness of an experimental design is a further “Wow!” inducer; Jacoby’s process dissociation procedure, when I first read about it, definitely elicited a “Wow!” response from me. And then of course, I go “Wow!” when reading about impressive ideas. Rumelhart and McClelland’s PDP volumes made me go “Wow!” for years, as did Hofstadter’s “Gödel, Escher, Bach”.

What are the common mistakes people make when submitting/publishing? Submitting to the wrong journal. Making the story too complicated. Not having any story. Reporting uninteresting findings. Reporting uninteresting findings but trying to make them sound interesting. Failing to cite relevant work from many years ago that old editors know about.  Leaving typos in the manuscript. Ugly figures.

What are your best tips on how to successfully get published? Work on the most important issue in your domain. Build a good narrative. Papers that read like detective stories (and finish with a satisfying resolution!) are always good. Get the writing absolutely perfect. Of course, interesting and solid data. Simplify. Kill all the typos. Cite previous work. All referees first look for flaws, because if any are found then the review is done and the referee can focus on something else. It is only when no surface flaws are found that the referee actually thinks about whether the paper is interesting…

How are reviewers selected? That very much depends on the journal. Some editorial systems are almost entirely automated, which has advantages (speed) but also disadvantages (relevance). Some editors hand-pick their referees based on different criteria (mostly, whether they think they know something about the topic and whether they think they’ll compose their review in time). Many systems offer referee suggestions based on keyword matches. Authors can also often propose referees themselves. This is a good idea as it speeds up the work of the editor, who will typically select referees both from the author’s suggestions and from his own pool of referees.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? I wouldn’t do it too quickly — say, three years into your Ph.D. Reviewing an article is an important and difficult job. It gets much easier as your knowledge of the field grows and as your expertise at reviewing increases, but the first reviews you do are always very intensive jobs. You worry that you’ll look ridiculous in the eyes of the editor and the other referees. You worry that you missed a central point. You’ll spend days on your first review. On the other hand, knowledge of what’s going on in your field before it gets published can be invaluable — but for this, you can count on your advisor.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? A good reviewer is a reviewer who turns in her review on time and who manages to discuss the paper in a neutral tone while clearly listing the issues that concern her, if any. And of course: Good reviews also contain a clear recommendation that is congruent with the listed points. Sometimes you get almost self-contradictory reviews. They begin with “This is a very interesting paper that uses clever methods” and finish with “I recommend the paper be rejected”. This makes it almost impossible for an editor to use the review, as do reviews that contain too many subjective comments. Reviews should almost be written as though they were public comments, that is, with all the care one would use if one were talking in public about someone else’s work.

How do you resolve conflicts when reviewers disagree? That’s a tough one. I regularly receive conflicting reports, sometimes at either end of the spectrum (e.g., Referee #1 says “Reject”; Referee #2 says “Accept without revisions”). If both reports make sense (that is, it is clear both referees understood the paper), most typically, I will consult a third referee (which sometimes doesn’t help). When all else fails, you read the paper and make the decision yourself… (just kidding: editors read the papers, but then there is a difference between reading a paper and forming an expert opinion about it). It is worth mentioning here that some open access journals (e.g., Frontiers in Psychology) have adopted a completely different manner of resolving differences between referees, namely to ask referees and authors to interact until a consensus between referees is reached. Many conflicts between referees are solvable by iterated interaction — something that can be tough to achieve with the standard reviewing process.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? Revise and resubmit is pretty much the norm — it is exceptionally rare for a paper to be accepted right away. Dealing with rejection is understandably difficult. Your reaction to it very much depends on what you can attribute the rejection to. Being rejected from Science is not an indication that your research is not good; just that it’s not good enough, or not novel enough, or not interesting enough in the eyes of Science’s editors. You may think otherwise and feel wronged somehow, but it’s not your decision to make in either case, so it’s best to move on and submit to another journal. The worst-case scenario is when you submit to a mediocre journal, wait for months, and find that your paper is rejected. If you really feel a “reject” decision was incorrect, it’s always a good idea to interact with the editor. As an editor, I only use “reject” when all referees agree that the paper is not publishable. Dealing with a revise and resubmit is easy: Just address all the points raised by the referees one by one and thoroughly. In the vast majority of cases, papers in that category will end up published; it’s just a matter of taking all the points seriously and addressing them in detail.

Is there a paper you were skeptical about but that turned out to be an important one? Not that I can remember as an editor. A couple of my own papers as an author, though, had very difficult beginnings and turned out to be considered quite important. Science is about data, but it also involves rhetoric: Not only do the data have to be important, but you also have to present the results and their implications in a persuasive manner.

As an editor, you get to read many papers and have insight into emerging trends. What are the emerging trends in research topics/methodologies? There is an important ongoing discussion on Twitter, blogs, Facebook, email and in the press about the importance of replication in psychology. Developing methods that make it possible to analyze replication efforts properly, as well as promoting the publication of replication findings, are important issues. One of the most interesting methodological developments in this respect is the emergence of novel statistics based on Bayes’ ideas. I also continue to be impressed with the increased sophistication of neuroimaging methods — think MVPA, for instance. Increased meta-data in all fields will also make all sorts of meta-analyses possible.

What are the biggest challenges for journals today? The challenges are not the same for traditional journals and for new, online, typically open-access journals. Some journals are more or less immune from challenges because of their extraordinary status in the field. The challenge for traditional journals is to stay relevant in an increasingly open-access, rapid-fire world: Interesting results are tweeted or otherwise shared almost instantly, and people want to download the relevant material freely and right away. The challenge for open-access journals is to accrue enough credibility. A challenge that faces every actor today, individuals and journals alike, is to find interesting ways of attracting attention. So much is published today (considerably more than even a few years ago) that it becomes a challenge to even find relevant material.

Journal home page

‘Inside the Black Box’ series home page

New series: Opening the black box of academic publishing

Publish or perish is a phrase most scholars know well. Publications are the currency of academia: if you don’t publish, having an academic career is difficult.

However, publishing is hard, so we decided to ask editors of leading journals some questions that many of us have and which only editors can answer. The aim is to provide more insight into the process of scientific publishing.

For the upcoming series, we spoke to leading journals in general psychology and judgment and decision making, as well as journals in marketing and economics that publish articles related to judgment and decision making. We’ll be publishing the individual interviews over the next couple of months and hope you find them useful in your work!

First up is Judgment and Decision Making

Inside the Black Box: Judgment and Decision Making

We start our journal editor interview series with Judgment and Decision Making’s editor Jon Baron. JDM is the journal of the Society for Judgment and Decision Making (SJDM) and the European Association for Decision Making (EADM). It is open access, published on the World Wide Web at least every two months. JDM publishes original articles relevant to the tradition of research in the field represented by SJDM and EADM. Relevant articles deal with normative, descriptive, and/or prescriptive analyses of human judgments and decisions.

What makes you go “Wow!” or “Yuck!” when you first read a submission? Wow!: When it shines new light on a traditional JDM problem, including possible applications in the real world. I choose the lead article in each issue on the basis of this sort of reaction. (Of course, some issues have no article that merits the exclamation point, and some have more than one.)

Yuck!: When it applies the Analytic Hierarchy Process to the pipe-fitting industry in Pakistan. Or when it uses a tiny sample, with no replication, to show that people are at the mercy of subtle, unconscious forces. Or when it makes obvious statistical errors, like claiming an interaction on the basis of a significant effect next to a non-significant effect.
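
That last statistical point trips up many authors: a significant effect in one condition next to a non-significant effect in another does not, by itself, establish an interaction; the difference between the two effects must be tested directly. Below is a minimal illustrative sketch of this point (not from the interview; hypothetical simulated data, using Python with scipy and statsmodels):

```python
# Hypothetical simulation of the "significant vs. non-significant" fallacy.
# Two groups receive the SAME true treatment effect; sampling noise alone
# can push one group's t-test below p < .05 while the other stays above it.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40  # participants per cell (chosen arbitrarily for illustration)

df = pd.DataFrame({
    "y": np.concatenate([
        rng.normal(0.0, 1, n), rng.normal(0.4, 1, n),  # group A: ctrl, treat
        rng.normal(0.0, 1, n), rng.normal(0.4, 1, n),  # group B: ctrl, treat
    ]),
    "cond": (["ctrl"] * n + ["treat"] * n) * 2,
    "group": ["A"] * (2 * n) + ["B"] * (2 * n),
})

# Separate simple-effect tests: one may "hit" significance and the other not,
# even though the true effects are identical by construction.
for g in ["A", "B"]:
    sub = df[df.group == g]
    t, p = stats.ttest_ind(sub.y[sub.cond == "treat"], sub.y[sub.cond == "ctrl"])
    print(f"group {g}: t = {t:.2f}, p = {p:.3f}")

# The claim "the effect differs between groups" requires testing the
# interaction term itself:
fit = smf.ols("y ~ cond * group", data=df).fit()
print(fit.pvalues["cond[T.treat]:group[T.B]"])  # typically far from .05 here
```

Only the interaction term answers the question such papers actually want to ask; the side-by-side pattern of p-values does not.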

What are the common mistakes people make when submitting or publishing? Submitting to the wrong journal.

What are your best tips on how to successfully get published? Study big effects, or use large samples. Don’t waste time studying phenomena that are ephemeral and difficult to replicate, especially if you are trying to find moderators of such effects.

How are reviewers selected? When I handle papers – about half of them go to associate editors – I try to find the most expert reviewers who are willing to review, including members of the journal’s board when possible. This often takes several attempts; people say no. Often I use Google Scholar, as well as citations in the paper and authors’ recommendations.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? If you know someone who is an editor (including an associate editor), tell him or her that you are willing. I often ask grad students to review, but only if I know them to be experts on the topic of the paper. I am not willing to take a student’s word for this expertise, or to assume that being first author of a related paper is sufficient. Thus, personal knowledge is important.

I think that grad students should do occasional reviews. But anyone who keeps publishing is going to get asked to do more and more reviews. Be nice to editors (and other authors). If you get asked to do a review, respond quickly. Saying no immediately allows the editor to go to the next person on the list.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? What makes one a good reviewer? Explains why the paper is fatally flawed, if it is. Otherwise provides helpful advice for revision, or (only if necessary) for additional research. What I find unhelpful are requests for more “theory”, as if theory were something like soy sauce.

How do you resolve conflicts when reviewers disagree? I regard reviews as information, not votes. They point out flaws I had not discovered, literature I did not know, or strengths that I did not appreciate. The review’s bottom-line recommendation is just a little more information. Thus, these recommendations are not conflicts that need to be resolved.

But reviewers also disagree about specifics, about what needs to be done. Here, I think it is my job to tell the author which of the reviewers’ comments to ignore, and which to follow, and (if the review does not say), how to follow them (if I can). As an author, I find it annoying to be at the receiving end of conflicting reviews, with no idea what magic I must do in order to satisfy everyone.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? The best way to react to a revise/resubmit is to try to do what it says, or explain politely and clearly why you cannot or should not do that. Or give up and try another journal if you think you are being asked to do the impossible.

The best way to react to a rejection depends on what it says. If it finds a fatal flaw that cannot be fixed, the best thing may be to regard the paper as sunk cost, and move on. In other cases, rejections are very specific to the journal, so you should just send the paper elsewhere. If you think a paper is good, don’t give up. Keep sending it elsewhere. In still other cases, papers are rejected because more work is needed. Maybe do the more work.

Is there a paper you were skeptical about but that turned out to be an important one? Not really.

As an editor, you get to read many papers and have insight into emerging trends. What are the emerging trends in research topics/methodologies? On topics, I think that fields go through fads – well, let’s say “periods in which some topics become very popular and then gradually fade into the background”. These are often good things. Many come from external interest and funding, such as the enormous interest now in “nudges”, or the interest in forecasting and prediction arising from the recent IARPA competition. Around 1991 the Exxon Valdez case inspired (and funded) a great deal of JDM research on contingent valuation and value measurement in general.

On methods, our field is slowly but surely catching up with the enormous increase in the powers of computers and the Internet. Data analysis is becoming more sophisticated. A variety of approaches are being explored (including Bayesian ones). Web studies are becoming more numerous and more sophisticated. People are making use of large data sets available on the Web, including those they make themselves by mining data.

What are the biggest challenges for journals today? The biggest is integrity. The work of Simonsohn, Simmons, Nelson, Ioannidis, Pashler, Bar-Hillel (earlier) and others on p-hacking, file-drawer effects, basic statistical errors, and outright fraud has raised serious questions about what journals should and can do. The problems vary by research area. Medical research and social psychology are probably worse than JDM. But I am still trying to work out a way to deal with this problem. Asking for data and for sufficient stimulus materials for replication is a step. I spend a lot of time checking data analysis with the data that authors send.

The next biggest challenge is how to take back scholarly communication from those who seek to profit from it by building pay walls of one sort or another, including both subscription fees and publication charges. I have ignored this problem, hoping that it will go away or that someone else will solve it (e.g., by endowing JDM with $500,000). Right now, JDM has neither type of fee, because I do the production and “office work”. Other journals work this way, but the authors all submit papers with LaTeX formatting. My job would be easier if Microsoft Word did not exist. Maybe I will outlast it, and then the problem will be solved for the next editor. But a little money – nowhere near as much as proprietary journals get – would still help, and I don’t know where to get it.

The third biggest challenge is how to get rid of the perverse incentives that arise from the use of the “impact factor” of a journal for evaluation of authors of papers in that journal. Journals cannot do much about it, except perhaps to stop advertising their impact factors in large print.

Journal homepage

‘Inside the Black Box’ series home page

Jon Baron Research Hero interview