Star Track: Mandeep K. Dhami

This week in Star Track we’re featuring Mandeep K. Dhami, PhD, who is Professor of Decision Psychology at Middlesex University. She received her PhD in Psychology from City University, London, UK. Her research focuses on human judgment, decision making, and choice, and on risk, primarily in the criminal justice domain. Her previous academic posts include the University of Cambridge (UK), the University of Maryland (USA), and the Max Planck Institute for Human Development (Germany). Mandeep has also worked outside academia for the Ministry of Defence and for two British prisons. She has won several awards, including from Division 9 of the APA and from EADM. Mandeep advises government organizations nationally and internationally on criminal justice issues, and has helped to establish a Restorative Justice Program in the City of Victoria, Canada. She is a Fellow of the Society for the Psychological Study of Social Issues (SPSSI; Division 9 of the APA). She has authored over 80 scientific articles and book chapters, is the lead editor of the book Judgment and Decision Making as a Skill: Learning, Development, and Evolution, and serves on the editorial board of prestigious journals such as Perspectives on Psychological Science. In her spare time, Mandeep is a competitive ballroom dancer, and has represented England in Latin formation.

I wanted to pursue an academic career in this field because… Well, actually, I hadn’t planned on an academic career in Decision Science…things just worked out that way, and I’m very pleased they did. I had worked in prisons as an assistant psychologist while doing my undergraduate degree, and had wanted to go into prison management afterwards. However, a head psychologist encouraged me to do a PhD – saying my career in prisons would benefit from having a solid research background. So, off I went to do a Master’s in Criminology followed by a PhD in JDM – and although I never did return to work in prisons, I’ve been back behind bars many times in the UK, US and Canada to study prisoner decision-making. Decision Science affords researchers considerable opportunities to conduct studies in a variety of field settings.

I find the inspiration for my research mostly from the social world around me, and particularly from policy debates in the criminal justice arena. By starting with the problem first, I can be free to choose the most relevant theories and appropriate methods. Dogmatic adherence to theories and methods has blighted the development of social scientific fields, and doing research for the sake of doing research is a waste of opportunity. I want my research to ‘count’ – I want to change some aspects of the world I live in, and so I find myself conducting research to solve social problems.

When people ask me what I do, I say “I study how people think and make decisions, focusing often on people in the criminal justice system such as offenders, police officers and court judges.” There have been several occasions when this simple question and answer has led to extremely useful feedback on my research as well as new research opportunities.

The paper that has most influenced me is… Two books have influenced me hugely – Erving Goffman’s Asylums and Paul Meehl’s Clinical versus Statistical Prediction. Goffman taught me that to study people we need to see the world from their perspective, and Meehl taught me to question expertise rather than revere it.

The best research project I have worked on during my career… I’m not sure how to operationalize ‘best’ – there have been some projects that have been fun to work on and others that made my ‘head hurt’ – both types of projects produced publications I’m proud of. But, given that I have about 3 decades before retirement, I’d like to think the ‘best’ is yet to come….

If I wasn’t doing this, I would be… If I’d gone down the prison management route, I’d probably be a senior civil servant in the UK Ministry of Justice or Home Office by now.

The most important quality for a researcher to have is… In one word ‘resilience.’ Some of the most common phrases in academia include ‘rejected’, ‘declined’, and ‘unsuccessful’. What a lot of young academics don’t realise is that good researchers take this negative feedback and use it to improve their work – they don’t simply ignore it, and they certainly don’t just give up.

The biggest challenge for our field in the next 10 years… We have too many effects and not enough explanations. Our field needs to develop process models that integrate different theoretical approaches, and that are tested under representative task conditions. This can produce more robust findings that translate to the world outside the laboratory.

My advice for young researchers at the start of their career is… Work on something you feel passionate about. This will hopefully mean you don’t give up when things get tough. Over time, you’ll learn to communicate the value of your work to others, and although they may not share your enthusiasm, they will come to appreciate your work, and you.

The one thing I’ve found most challenging is… The slow pace of academia; the time lag from having a research idea through conducting the research to publishing it can be several years, and patience is not a virtue that I can say I have much of. Fortunately, the time lag has been reduced in recent years with, for example, the introduction of ‘online first’ publication.

For more information on Mandeep, visit her page.

Star Track: Peter McGraw

Following on the success of our Research Heroes interviews, we’re launching a new interview series: Star Track. In this series, we turn the spotlight on researchers who will play an important role in shaping the future of the field. These people have already made a significant contribution with their groundbreaking research and engagement in the research community – you might know about them or might not, but you should definitely listen to what they have to say – enjoy!
First in our new series is Peter McGraw, an associate professor of marketing and psychology at the University of Colorado Boulder, who is an expert in the interdisciplinary fields of emotion and behavioral decision theory. His research examines the interrelationship of judgment, emotion, and choice, with a focus on consumer behavior and public policy. Lately, McGraw has been investigating what makes things funny. He directs the Humor Research Lab (aka HuRL), a laboratory dedicated to the experimental study of humor, its antecedents, and consequences. He co-authored The Humor Code: A Global Search for What Makes Things Funny, which hit the bookstores on 4/1/2014. Of recent note, McGraw made the 2013 Stylish Scientist List – probably because he likes to rock a sweater vest.

I wanted to pursue an academic career in this field because… I thought that pursuing an academic career would yield a stimulating yet leisurely intellectual life. (I was half right.) While researching grad programs, I read Tom Gilovich’s book: How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life. By the end of chapter 2, I was hooked on the idea of studying judgment and decision making.

I find the inspiration for my research mostly from… Entrepreneurs and artists. Scientists don’t often think of their research as a creative endeavor that is important to share broadly with the world. I believe that the process of creating and disseminating scientific insights is enhanced by emulating people who have a different perspective and a broader array of tools. Also, behaving like an artist or an entrepreneur is much more fun than just trying to please peer reviewers.

When people ask me what I do, I say…. I study what makes things funny.

The best research project I have worked on during my career… In the summer of 2008, Caleb Warren and I set out to answer the question of why people laugh at moral violations. That project changed my life, as it spurred a quest to crack the humor code (something that behavioral decision theory’s “emotional revolution” had overlooked). The resulting paper, which was published in Psychological Science in 2010, brought together my two main research areas at the time: moral judgment and mixed emotions. Caleb and I introduced the benign violation theory of humor and showed that moral violations can be a source of pleasure (something every good comic knows).

Everything came together just right; the paper was accepted with no requested changes – something that I never expect to happen again.

The paper that has most influenced me is… When Caleb and I were examining the research on humor, the theories didn’t seem quite right. Fortunately, we found a little-cited paper published by a linguist named Thomas Veatch. To us, it was a huge advance over existing theories. Veatch’s work served as the foundation for the benign violation theory, which in turn serves as the foundation for the research conducted in the Humor Research Lab.

If I wasn’t doing this, I would be… Starting some sort of business.

The most important quality for a researcher to have is… Perseverance. Repeat after me, “They can slow us down, but they can’t stop us.”

The biggest challenge for our field in the next 10 years… Finding a way to speed up the peer-review process.

My advice for young researchers at the start of their career is… Write every day. Start today – and purchase the book How to Write a Lot.

The one thing I’ve found most challenging is… Staying asleep until my alarm goes off. The work academics do is highly evaluative and uncertain – two conditions that contribute to anxiety. And anxiety gets me out of bed early. On the other hand, it has a silver lining. I believe that every day is a big day and should be lived with a sense of urgency. And big days rarely start with the snooze button.

For more information on Peter McGraw visit his page: http://www.petermcgraw.org/

For more information on his book see: http://humorcode.com/

Research Heroes: Barbara Mellers

Professor Mellers is the 11th Penn Integrates Knowledge Professor at the University of Pennsylvania. Her research examines how people develop beliefs, formulate preferences, and arrive at choices. She focuses on why people deviate from principles of rationality and how those deviations influence consumer choices and cooperative behavior. She is currently exploring how to elicit and aggregate probability judgments to arrive at the best possible predictions of uncertain events. She has authored over 100 articles and book chapters. She was a recipient of the Presidential Young Investigator Award and a past president of the Judgment and Decision Making Society.

I wish someone had told me at the beginning of my career that all careers come to an end. When I was young, I felt invincible; I thought I had all the time in the world. But reality caught up with me, and I have a different perspective now. Each research project might be the last, so each one should, at least in principle, be better than the one that went before it.

I most admire people who are clear thinkers, beautiful writers, big dreamers, and hard-core scientists. They work through the implications of their ideas and are their own worst critics. And they do it all in the most graceful and elegant way imaginable.

The best research project I have worked on during my career… might be the one I am doing now on human forecasting. This is a large and long-term project that gives me the opportunity to work with many talented people with wide-ranging and diverse skills. This project reminds me of an onion; we keep pulling off layers and finding more layers to go. It gets better and better.

The worst research project I have worked on during my career… is the last thing in the world I want to talk about.

The most amazing or memorable experiences when I am doing research… happen when I am surprised by the results of an experiment. I once did an adversarial collaboration with Hertwig and Kahneman, and Kahneman described the process perfectly. When the data don’t turn out right, we suddenly gain 20 IQ points. Everything seems to make perfect sense in a brand new light that was completely obscure until that moment! Unfortunately, those IQ gains disappear when the surprise is over.

The one story I always wanted to tell but never had a chance…is hard to imagine because there are always opportunities to tell stories. So I would never hold back on one that was worth telling.

A research project I wish I had done… is something I am always thinking about.

If I wasn’t doing this, I would be doing science in another field, and the choice of which field to pursue is a difficult forecasting problem. It is hard to know what areas of science will be the most exciting twenty years from now. The best fields to work in are ones that are changing fast due to the synergy of several good ideas and ingenious technological innovations. Neuroscience, astronomy, and genetics are good examples.

The biggest challenge for our field in the next 10 years…is figuring out how we can make better judgments at individual, societal, and national levels. This goal applies to everything – medical decisions, career decisions, military decisions, romantic decisions, legal decisions, business decisions, policy decisions, and more. We need theories, but we also need to generate useful knowledge. That is the only reason why the public will listen.

My advice for young researchers at the start of their career is…replicate everything you do several times. The truth, however hard it is to accept, is what moves science in the right direction and leads to progress. Admit your uncertainties; you aren’t the only one who has them. Remember that you can’t praise people too much (yes, we really are that shallow!). And last but not least, when in doubt, give credit to others. Time usually sorts things out.

Departmental site: http://psychology.sas.upenn.edu/node/20474

Inside the Black Box: Psychological Bulletin


Psychological Bulletin is a bimonthly peer-reviewed academic journal that publishes evaluative and integrative research reviews and interpretations of issues in psychology, including both qualitative (narrative) and quantitative (meta-analytic) aspects. The editor-in-chief, Stephen Hinshaw, gives us insight into this journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? Our journal (Psychological Bulletin) is different from most others, in that it publishes only lengthy, synthetic review papers – across the entirety of psychology and behavioral science. So, I look for deeply conceptual introductions and systematic reviews of the primary literature, written in an accessible yet still scholarly fashion.

What are the common mistakes people make when submitting/publishing? Not reading instructions carefully (or at all) – so that we sometimes receive single empirical studies or extremely preliminary ‘review’ papers suggesting leads for further study (but not providing a deep review of a mature literature).

What are your best tips on how to successfully get published? Research, research, research your topic and revise, revise, revise your writing.

How are reviewers selected? In consultation with Associate Editors, I scour reference sections and consult lists of experts in various subfields.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? See if a more senior person will ask the editor to enlist you as a co-reviewer (with the editor’s permission).

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? What makes one a good reviewer? Good reviews are thoughtful, respectful, and reveal deep knowledge of the topic – showing how the paper does or does not provide an advance in that field.

How do you resolve conflicts when reviewers disagree? Careful reading and rereading of the paper…and sometimes going to a Consulting Editor for a ‘tie-break’ review.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? Worst way is to get overly defensive and battle every point of the reviews.

Is there a paper you were sceptical about but turned out to be an important one? Yes, sometimes initial submissions that didn’t really deliver can be greatly improved with substantial revision.

As an editor, you get to read many papers and gain insight into emerging trends. What are the emerging trends in research topics/methodologies? PB is so broad that it’s hard to see emerging trends across all of the sub-facets of the field.

What are the biggest challenges for journals today?  Finding and engaging willing reviewers, keeping up with flow of submissions, battling ‘crank’ journals.

Journal home page

More ‘Inside the Black Box’

Inside the Black Box: Medical Decision Making

Medical Decision Making (MDM) is a peer-reviewed journal published 8 times a year offering rigorous and systematic approaches to decision making that are designed to improve the health and clinical care of individuals and to assist with health policy development. MDM presents theoretical, statistical, and modeling techniques and methods from disciplines including decision psychology, health economics, clinical epidemiology, and evidence synthesis. Editor-in-Chief Alan Schwartz gives us his insights into this journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission?

  1. “What’s new?” – What will I learn from this paper that I didn’t know before?  A paper presenting an original approach to a problem, or an extension of past approaches, or a first replication of a previously unreplicated finding is exciting to read. At Medical Decision Making, what’s new is usually a new method for studying or improving decisions, but sometimes it’s an exemplary application of prior methods. (Of course, some journals, like PLOS One, have explicitly chosen not to use this criterion).
  2. “What’s true?” – How do I know that I can rely on the results? Are the methods rigorous, sound, and appropriate for the question? Did the authors interpret their findings appropriately, without overgeneralizing?
  3. “So what?” – Why was this study proposed in the first place? What motivates the research question, and is it an important question in the context of the field and our current knowledge?
  4. “Who cares?” – Is this paper right for the readers of my journal, or does it belong somewhere else? A straightforward clinical trial comparing two drugs — or a basic psychology study of non-medical decisions — probably doesn’t belong at Medical Decision Making.

What are the common mistakes people make when submitting/publishing? 

My top three:

  • Failing to motivate the research question or ground it in a theoretical or conceptual framework. Theory is important.
  • Overstating the conclusions and ignoring limitations.   Your paper doesn’t have to be the final word or solve every problem.
  • Sending to the wrong journal (violating the “who cares?” principle).

What are your best tips on how to successfully get published? Be open to feedback. Before you send a paper out, it should be the best paper you can write, so you should have had friends and mentors read and criticize it. If you can anticipate issues that a critic might raise, address those forthrightly. When you receive reviews, pay attention to them. If you don’t understand something a reviewer says, don’t ignore it — ask the editor for guidance.

How are reviewers selected? At Medical Decision Making, as at many journals, we have experienced reviewers on our editorial board and in our reviewer database. We find new reviewers through suggestions from authors (yes, you may suggest potential reviewers, and yes, we will often invite at least one of your suggestions if we agree that they really have specific content expertise) and through looking at the paper’s citations and related literature ourselves and seeing who else is working in the same area.

Our goal is to ask for reviews from experts whose reviews not only advise the editor on the disposition decision but are valuable to the authors, whether or not we publish the paper.  We’re fortunate at MDM to have really outstanding reviewers, and many first-time authors comment on how helpful the reviews have been.  We also score our reviews, and reviewers who do a poor job tend to get selected against in the future.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? I’m a big proponent of reviewing both papers and grant applications; I think you learn a lot from reading very good (and sometimes poor) writing, and from comparing your review with those of the paper’s other reviewers and the editor (at MDM, we cc our decision letters to the reviewers). One good way for PhD students to get some experience with this is to do a “mentored review” with their advisor when their advisor is asked to review a paper. Many journals will allow the invited reviewer to share the review with a student as long as the invited reviewer supervises and takes responsibility for the review. Post-PhD, as a postdoc or junior faculty, if you haven’t already been asked to review for a journal that you’d like to, you can often contact the editorial office and ask to be added to the reviewer database. Of course, submitting a paper to the journal and filling out your author profile with a good set of keywords for your expertise is also likely to lead to reviews in the future.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? This is a matter of editorial taste, but I really like to see a review that begins by looking at the big questions and pointing out the strengths of the manuscript (or at least of what the authors hoped to achieve through the study), and then moves on to detailed constructive criticism about methods, and presentation and interpretation of results. The review should conclude with minor concerns or suggestions for improving the writing.

Some little things that are very helpful: Number the points in the review to make it easier for the author to respond point by point. Refer to parts of the manuscript by page number and line number to help the author locate exactly what you’re asking about. Make it clear to the author when you’re making a suggestion (e.g., please describe the factor rotation strategy in more detail) and when you’re asking a (non-rhetorical) question (e.g., why did you expect patients to be more influenced by attribute range than attribute context?). Don’t say (in the comments to the author) whether the paper should be rejected or accepted – that’s the editor’s job. Definitely don’t recommend rejection privately to the editor and then write a wholly positive review for the author.

How do you resolve conflicts when reviewers disagree? Reviewers advise; editors decide. I’ll admit to a little bias: when good reviewers disagree, I think that means there’s something important to work out, and I’ll usually ask the author to help the reader understand both perspectives and how the author chose to resolve them. There isn’t a single right way to study something. On rare occasions, reviewer disagreement lends itself to inviting one or both reviewers to write an editorial about the study, if we’ve decided to publish it.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? A revise and resubmit is a positive signal, especially from a paper journal that has a limited page budget. It usually means that the editor thinks there’s something important in the paper to make it worth spending more editorial and reviewer time on, and that you’re capable of addressing the reviewer concerns. So that’s easy – always resubmit, and always include a cover letter addressing each point made by each reviewer. That can mean explaining why you didn’t choose to make a suggested change, but pick your battles: a wholly unresponsive revision is not going to go very far with the editor.

Medical Decision Making also has a category of initial decision called “reject and resubmit”. This means that the editor doesn’t want the paper or a revision of it, but thinks there might be a different, related paper you could write that would be competitive. The new paper gets the full peer review treatment, usually with different reviewers.

A flat rejection – well, when I get those, I usually shake my fist at the sky, eat a piece of chocolate, and get a good night’s sleep. Then I see what useful information I can get from the reviews and improve the paper to send it elsewhere. Uncertainty is a fundamental fact of life.

The worst way to react to a rejection is to send a nasty email to the editor-in-chief to try to bully him into reconsidering the decision and to threaten that you will never send your priceless work to that journal again. Yes, that happens (especially early in my term). We have an appeals process if it’s clear that a reviewer or editor deeply misunderstood something, but that’s not it.

Is there a paper you were sceptical about but turned out to be an important one? I think I’m still too early in my editorship to know. In about 3 years, though, I’d be interested in looking at that — collecting the top 10 important papers we’ve published based on reader response and looking back at my notes to see how many of those I only assigned to an associate editor reluctantly.

As an editor, you get to read many papers and gain insight into emerging trends. What are the emerging trends in research topics/methodologies? That’s one of the best parts of the job. Right now, MDM is publishing a lot of innovative work in simulation models and value of information analysis methods. Approaches to utility models are emerging in which econometric and behavioral research are triangulating on phenomena that call into question some longstanding simple assumptions of health state valuation — for example, that the proper unit on which to assess utility is the individual decision maker. And there’s a lot more interest in dual process theory and decision psychology/behavioral economics manipulations of the decision environment in order to understand and improve health decisions.

What are the biggest challenges for journals today? There’s a great debate going on right now about open access models for science journals and how publishers do or don’t contribute to science, but in some ways, I think that’s just the opening act for a larger discussion of the value of an expert peer review process vs. open publishing and crowdsourced reviewing. I want to see good science clearly communicated, and journals need to demonstrate to their readers that they are promoting those ideals.

Journal website

More ‘Inside the Black Box’

Inside the Black Box: Psychological Review

Psychological Review, founded in 1894, is one of the most prominent journals in psychology today. Psychological Review focuses on psychological theory and publishes papers that make important theoretical contributions to psychology. Associate Editor Prof. Susan Fiske gives us more insight into the journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? A clear statement of the argument in the title & abstract enables an immediate evaluation of an article’s contribution. It is amazing how often authors fail to be clear about their hypothesis and its significance.

What are the common mistakes people make when submitting/publishing? Failing to check whether the article is appropriate for that journal.

What are your best tips on how to successfully get published? Aha! Plus Evidence.  Good ideas, backed up by good science.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? Participating in a grad-led journal club, reading and critiquing published articles. Telling one’s advisor that one would like some reviewing experience.  When asked, returning high quality, on time reviews.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? Balanced, thoughtful, succinct.

How do you resolve conflicts when reviewers disagree? Reviewers often disagree because they are recruited for differing expertise. Editors must consider the inputs relative to the expertise and perspective of the reviewers.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? An R&R needs to list, in a cover letter, each response to each suggestion, including explanations of (rare) instances of declining to make certain changes.

Is there a paper you were sceptical about but turned out to be an important one? None that comes to mind.

As an editor, you get to read many papers and gain insight into emerging trends. What are the emerging trends in research topics/methodologies? Interdisciplinary and global collaborations.

What are the biggest challenges for journals today? Maintaining humanity despite the volume.

Journal home page

More ‘Inside the Black Box’

Inside the Black Box: Journal of Behavioral Decision Making

The Journal of Behavioral Decision Making is a multidisciplinary journal publishing original empirical reports, critical review papers, theoretical analyses and methodological contributions. The Journal also features book, software and decision aiding technique reviews, abstracts of important articles published elsewhere and teaching suggestions. The objective of the Journal is to present and stimulate behavioral research on decision making and to provide a forum for the evaluation of complementary, contrasting and conflicting perspectives. These perspectives include psychology, management science, sociology, political science and economics. Studies of behavioral decision making in naturalistic and applied settings are encouraged. Associate Editor Frank Yates gives us his insights into the journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? Papers that surprise me because of their conclusions or even their topics are the ones that most often make me go “Wow!”  So do papers that exhibit different ways of thinking about old topics, ones that force me to say, “I’ve never seen this idea before.  That is so cool.”  I must also say that I love manuscripts that clearly point toward ways that people can decide better than they normally do.  And if the authors can complement their messages with concrete and elegant illustrations, that’s even better.

I can honestly say that I have never said “Yuck!” in response to a submission, even to myself.  I realize that virtually every submission I have ever seen represents the culmination of a huge investment by the authors.  I would feel guilty if I didn’t acknowledge that investment.  On the other hand, certain aspects of some submissions do make me groan (“Oh, no!”).  Papers that are weakly motivated tend to do that, especially ones that report the results of studies that I infer began with nothing more than: “I wonder what would happen if we tried this manipulation.”  Groans are also evoked by papers that are unnecessarily hard to read.  This happens, for instance, when papers are overly abstract or are too tedious and too long relative to the significance of their messages.

What are the common mistakes people make when submitting/publishing? Well, whatever actions produce the kinds of groans I just mentioned are good examples.  An especially common error that authors make is assuming that the reader appreciates and knows as much about their focal research problem as they do.  This makes their papers dull and impenetrable.  One way to avoid this is to simulate the journal reading experience in advance of submission.  That is, recruit friends and colleagues who are similar to the journal’s audience but are naïve to the topic.  Then ask them to review and discuss with you in detail their interpretations of your work.  You are virtually guaranteed to uncover misconceptions that will amaze you: “Really?  That’s what you thought I meant?”

What are your best tips on how to successfully get published? My first suggestion is to choose problems that people can easily be convinced are interesting and important to solve, and then solve them.  Our field is like baseball, where even an outstanding hitter fails 2/3 of the time.  That being the case, successful authors necessarily must be unusually energetic and well organized.  That is, they must always be working on several projects simultaneously.  That way, despite the low “hit rate,” they maintain a steady flow of results that are ready to submit for publication.  My second tip is to learn to view and use reviewers as one’s collaborators.  Typically, reviewers are among the most knowledgeable people in the world concerning an author’s focal problem.  So why not exploit the expertise underneath their comments to sharpen your writing, your thinking, and your next studies?

How are reviewers selected? My goal is to have every submission read critically and constructively by 2-4 people who know more about an author’s research problem and related topics than just about anyone else in the field.  Some of these people are likely to be on our editorial board.  Many others will have been authors of articles cited in the manuscript.  Because JBDM is a multidisciplinary journal, we make a special effort to have at least two different specialties represented on every team of reviewers.

How can a young researcher become a reviewer? When is the best time during one’s PhD training to start doing so? In my view, the best time for a PhD student to start reviewing is after he or she has developed expertise and credibility in a particular area of research and therefore would have something useful to offer and to gain as a reviewer.  Having successfully published in that area is usually a safe indicator of such expertise.  At that point, the student might be well advised to write to editors of journals that publish work in the student’s area of specialization, volunteering to review occasional submissions on particular topics.  The response is likely to be immediate and positive, since editors are always on the lookout for good reviewers.

Since reviewing is hard work and takes time, why would (or should) a PhD student want to serve as a reviewer?  What exactly is there to gain from doing so?  One reward is the sense of contributing to the advancement of the field at its cutting edge.  But the main advantage is the unparalleled potential for learning and inspiration.  The author, reviewers, and action editor for a journal submission essentially form an especially exciting (and consequential) expert seminar on a topic of great interest to everyone involved.  Moreover, all the members are highly motivated to get things right.  The reviewers and editor work really hard to make sure that they do justice to the author’s contributions.  In addition, no one wants his or her comments to appear foolish to the rest of the group.  Finally, the review process often serves to spark new insights and research problems in the minds of all the participants—authors, reviewers, and editors.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? Good reviews offer valid analyses of the author’s ideas, reasoning, and methods; in effect, they answer the question: “Is this legitimate?”  In addition, though, for the benefit of the author as well as the action editor, a good review also clearly explains how the reviewer arrived at his or her conclusions.  The best reviews also provide helpful suggestions and guidance, including useful references and even design ideas that might help settle key questions that have been left unresolved by the author’s current efforts.  And good reviews are never mean-spirited.

How do you resolve conflicts when reviewers disagree? Although it is tempting to do so, I never rely on a simple “vote count” among the reviewers.  Instead, I try to understand why the reviewers disagree.  In my experience, more often than not, reviewers only appear to disagree because they are focusing on different aspects of the author’s work.  This frequently occurs because the reviewers’ own research programs have different foci.  So, relying on the specifics of the reviewers’ analyses as well as my own reading of the manuscript, I arrive at summary conclusions about the sensible disposition of the submission—acceptance, revision and resubmission, or rejection.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? This is an exceptionally important issue.  My sense is that such reactions are often critical determinants of many people’s career paths.  I have known numerous researchers who seem to have found rejections so demoralizing that they eventually abandoned their research careers.  But I have also known several highly productive investigators whose success seemed largely traceable to how they dealt with the emotions evoked by negative feedback in the review process.  Three elements of their strategies seemed to stand out and perhaps deserve emulation:

#1: Expect reviewers to identify weaknesses in your work, and accept that as a good thing; you have the opportunity to benefit from their expertise.

#2: Don’t allow yourself to brood over negative comments, even rejections; instead, immediately start working on your next move, be it a revision, a clarifying follow-up study, or the abandonment of a now-recognized dead end.

#3: If and when you revise and resubmit, in your cover/response letter, make sure to respond—respectfully—to every major reviewer comment; reviewers simply hate being ignored.

Is there a paper you were skeptical about but that turned out to be an important one? That’s a funny question.  There definitely have been a few such papers, and you have probably read them.  It clearly would be unwise for me to identify them, though.

I assume that you asked the question because you wonder why my initial appraisals of such papers were “off” and what can be done to try to reduce such mistakes, or at least their impact. One basis for misappraisals, which seems uncommon, is that reviewers and editors sometimes misjudge the technical quality of authors’ reasoning and methods.  Given human fallibility, occasional misjudgments like that are inevitable.  I recommend that authors maintain vigilance for such mistakes and call attention to them in cover/response letters for submissions of revisions—again, respectfully.  Another reason that editors occasionally underestimate manuscript potential is that some authors prove to be unusually good at making marked improvements from one revision to the next.  They are especially adept at building on reviewer and editor comments.  Cultivating that skill seems wise.

As an editor, you get to read many papers and thus have insight about emerging trends.  What are the emerging trends in research topics/methodologies? Perhaps the most obvious trend is toward studies that focus on biological underpinnings, or at least correlates, of overt decision behaviors.  Our field has also seen a noticeable, and perhaps surprising, uptick in efforts to understand the role of time in people’s decision making.  There has been a good bit of excitement about the involvement of emotions in decision making, too.  Yet another noticeable trend has been toward assessing and explaining individual differences in decision making character and quality.

What are the biggest challenges for journals today? In my view, two related challenges are at the top of the list. The first is the increased volume of journal submissions. The second is the need for good reviewers to read and respond to those submissions, thereby accelerating the advancement of the field.  The problem seems to be exacerbated by increasing institutional pressures on potential reviewers to perform other duties.

More ‘Inside the Black Box’