The Journal of Behavioral Decision Making is a multidisciplinary journal publishing original empirical reports, critical review papers, theoretical analyses and methodological contributions. The Journal also features book, software and decision aiding technique reviews, abstracts of important articles published elsewhere and teaching suggestions. The objective of the Journal is to present and stimulate behavioral research on decision making and to provide a forum for the evaluation of complementary, contrasting and conflicting perspectives. These perspectives include psychology, management science, sociology, political science and economics. Studies of behavioral decision making in naturalistic and applied settings are encouraged. Associate Editor Frank Yates gives us his insights into the journal.
What makes you go “Wow!” or “Yuck!” when you first read a submission? Papers that surprise me because of their conclusions or even their topics are the ones that most often make me go “Wow!” So do papers that exhibit different ways of thinking about old topics, ones that force me to say, “I’ve never seen this idea before. That is so cool.” I must also say that I love manuscripts that clearly point toward ways that people can decide better than they normally do. And if the authors can complement their messages with concrete and elegant illustrations, that’s even better.
I can honestly say that I have never said “Yuck!” in response to a submission, even to myself. I realize that virtually every submission I have ever seen represents the culmination of a huge investment by the authors. I would feel guilty if I didn’t acknowledge that investment. On the other hand, certain aspects of some submissions do make me groan (“Oh, no!”). Papers that are weakly motivated tend to do that, especially ones that report the results of studies that I infer began with nothing more than: “I wonder what would happen if we tried this manipulation.” Groans are also evoked by papers that are unnecessarily hard to read. This happens, for instance, when papers are overly abstract or are too tedious and too long relative to the significance of their messages.
What are the common mistakes people make when submitting/publishing? Well, whatever actions produce the kinds of groans I just mentioned are good examples. An especially common error that authors make is assuming that the reader appreciates and knows as much about their focal research problem as they do. This makes their papers dull and impenetrable. One way to avoid this is to simulate the journal reading experience in advance of submission. That is, recruit friends and colleagues who are similar to the journal’s audience but are naïve to the topic. Then ask them to review and discuss with you in detail their interpretations of your work. You are virtually guaranteed to uncover misconceptions that will amaze you: “Really? That’s what you thought I meant?”
What are your best tips on how to successfully get published? My first suggestion is to choose problems that people can readily be convinced are interesting and important to solve, and then solve them. Our field is like baseball, where an outstanding hitter fails 2/3 of the time. That being the case, successful authors necessarily must be unusually energetic and well organized. That is, they must always be working on several projects simultaneously. That way, despite the low “hit rate,” they maintain a steady flow of results that are ready to submit for publication. My second tip is to learn to view and use reviewers as one’s collaborators. Typically, reviewers are among the most knowledgeable people in the world concerning an author’s focal problem. So why not exploit the expertise underneath their comments to sharpen your writing, your thinking, and your next studies?
How are reviewers selected? My goal is to have every submission read critically and constructively by 2-4 people who know more about an author’s research problem and related topics than just about anyone else in the field. Some of these people are likely to be on our editorial board. Many others will have been authors of articles cited in the manuscript. Because JBDM is a multidisciplinary journal, we make a special effort to have at least two different specialties represented on every team of reviewers.
How can a young researcher become a reviewer? When is the best time during one’s PhD training to start doing so? In my view, the best time for a PhD student to start reviewing is after he or she has developed expertise and credibility in a particular area of research and therefore would have something useful to offer and to gain as a reviewer. Having successfully published in that area is usually a safe indicator of such expertise. At that point, the student might be well advised to write to editors of journals that publish work in the student’s area of specialization, volunteering to review occasional submissions on particular topics. The response is likely to be immediate and positive, since editors are always on the lookout for good reviewers.
Since reviewing is hard work and takes time, why would (or should) a PhD student want to serve as a reviewer? What exactly is there to gain from doing so? One reward is the sense of contributing to the advancement of the field at its cutting edge. But the main advantage is the unparalleled potential for learning and inspiration. The author, reviewers, and action editor for a journal submission essentially form an especially exciting (and consequential) expert seminar on a topic of great interest to everyone involved. Moreover, all the members are highly motivated to get things right. The reviewers and editor work really hard to make sure that they do justice to the author’s contributions. In addition, no one wants his or her comments to appear foolish to the rest of the group. Finally, the review process often serves to spark new insights and research problems in the minds of all the participants—authors, reviewers, and editors.
What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? Good reviews offer valid analyses of the author’s ideas, reasoning, and methods; in effect, they answer the question: “Is this legitimate?” In addition, though, for the benefit of the author as well as the action editor, a good review also clearly explains how the reviewer arrived at his or her conclusions. The best reviews also provide helpful suggestions and guidance, including useful references and even design ideas that might help settle key questions that have been left unresolved by the author’s current efforts. And good reviews are never mean-spirited.
How do you resolve conflicts when reviewers disagree? Although it is tempting to do so, I never rely on a simple “vote count” among the reviewers. Instead, I try to understand why the reviewers disagree. In my experience, more often than not, reviewers only appear to disagree because they are focusing on different aspects of the author’s work. This frequently occurs because the reviewers’ own research programs have different foci. So, relying on the specifics of the reviewers’ analyses as well as my own reading of the manuscript, I arrive at summary conclusions about the sensible disposition of the submission—acceptance, revision and resubmission, or rejection.
What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? This is an exceptionally important issue. My sense is that such reactions are often critical determinants of many people’s career paths. I have known numerous researchers who seem to have found rejections so demoralizing that they eventually abandoned their research careers. But I have also known several highly productive investigators whose success seemed largely traceable to how they dealt with the emotions evoked by negative feedback in the review process. Three elements of their strategies seemed to stand out and perhaps deserve emulation:
#1: Expect reviewers to identify weaknesses in your work, and accept that as a good thing; you have the opportunity to benefit from their expertise.
#2: Don’t allow yourself to brood over negative comments, even rejections; instead, immediately start working on your next move, be it a revision, a clarifying follow-up study, or the abandonment of a now-recognized dead end.
#3: If and when you revise and resubmit, in your cover/response letter, make sure to respond—respectfully—to every major reviewer comment; reviewers simply hate being ignored.
Is there a paper you were skeptical about but that turned out to be an important one? That’s a funny question. There definitely have been a few such papers, and you have probably read them. It clearly would be unwise for me to identify them, though.
I assume that you asked the question because you wonder why my initial appraisals of such papers were “off” and what can be done to try to reduce such mistakes, or at least their impact. One basis for misappraisals, which seems uncommon, is that reviewers and editors sometimes misjudge the technical quality of authors’ reasoning and methods. Given human fallibility, occasional misjudgments like that are inevitable. I recommend that authors maintain vigilance for such mistakes and call attention to them in cover/response letters for submissions of revisions—again, respectfully. Another reason that editors occasionally underestimate manuscript potential is that some authors prove to be unusually good at making marked improvements from one revision to the next. They are especially adept at building on reviewer and editor comments. Cultivating that skill seems wise.
As an editor, you get to read many papers and thus have insight into emerging trends. What are the emerging trends in research topics/methodologies? Perhaps the most obvious trend is toward studies that focus on biological underpinnings, or at least correlates, of overt decision behaviors. Our field has also seen a noticeable, and perhaps surprising, uptick in efforts to understand the role of time in people’s decision making. There has been a good bit of excitement about the involvement of emotions in decision making, too. Yet another noticeable trend has been toward assessing and explaining individual differences in decision-making character and quality.
What are the biggest challenges for journals today? In my view, two related challenges are at the top of the list. The first is the increased volume of journal submissions. The second is the need for good reviewers to read and respond to those submissions, thereby accelerating the advancement of the field. The problem seems to be exacerbated by increasing institutional pressures on potential reviewers to perform other duties.