Star Track: Mandeep K. Dhami

This week in Star Track we’re featuring Mandeep K. Dhami, PhD, who is Professor of Decision Psychology at Middlesex University. She received her PhD in Psychology from City University, London, UK. Her research focuses on human judgment, decision making, choice, and risk, primarily in the criminal justice domain. Her previous academic posts include the University of Cambridge (UK), the University of Maryland (USA), and the Max Planck Institute for Human Development (Germany). Mandeep has also worked outside academia for the Ministry of Defence and for two British prisons, and she has won several awards, including from Division 9 of the APA and from EADM. Mandeep advises government organizations nationally and internationally on criminal justice issues, and has helped to establish a Restorative Justice Program in the City of Victoria, Canada. Mandeep is a Fellow of the Society for the Psychological Study of Social Issues (SPSSI; Division 9 of the APA). She has authored over 80 scientific articles and book chapters, is the lead editor of the book Judgment and decision making as a skill: Learning, development, and evolution, and serves on the editorial board of prestigious journals such as Perspectives on Psychological Science. In her spare time, Mandeep is a competitive ballroom dancer, and has represented England in Latin formation.

I wanted to pursue an academic career in this field because… Well, actually, I hadn’t planned on an academic career in Decision Science…things just worked out that way, and I’m very pleased they did. I had worked in prisons as an assistant psychologist while doing my undergraduate degree, and had wanted to go into prison management afterwards. However, a head psychologist encouraged me to do a PhD – saying my career in prisons would benefit from having a solid research background. So, off I went to do a Masters in Criminology followed by a PhD in JDM – and although I never did return to work in prisons, I’ve been back behind bars many times in the UK, US and Canada to study prisoner decision-making. Decision Science affords researchers considerable opportunities to conduct studies in a variety of field settings.

I find the inspiration for my research mostly from the social world around me, and particularly from policy debates in the criminal justice arena. By starting with the problem first, I can be free to choose the most relevant theories and appropriate methods. Dogmatic adherence to theories and methods has blighted the development of social scientific fields, and doing research for the sake of doing research is a waste of opportunity. I want my research to ‘count’ – I want to change some aspects of the world I live in, and so I find myself conducting research to solve social problems.

When people ask me what I do, I say “I study how people think and make decisions, focusing often on people in the criminal justice system such as offenders, police officers and court judges.” There have been several occasions when this simple question and answer has led to extremely useful feedback on my research as well as new research opportunities.

The paper that has most influenced me is… Two books have influenced me hugely – Erving Goffman’s Asylums and Paul Meehl’s Clinical versus statistical prediction. Goffman taught me that to study people we need to see the world from their perspective, and Meehl taught me to question expertise rather than revere it.

The best research project I have worked on during my career… I’m not sure how to operationalize ‘best’ – there have been some projects that have been fun to work on and others that made my ‘head hurt’ – both types of projects produced publications I’m proud of. But, given that I have about 3 decades before retirement, I’d like to think the ‘best’ is yet to come….

If I wasn’t doing this, I would be… If I’d gone down the prison management route, I’d probably be a senior civil servant in the UK Ministry of Justice or Home Office by now.

The most important quality for a researcher to have is… In one word ‘resilience.’ Some of the most common phrases in academia include ‘rejected’, ‘declined’, and ‘unsuccessful’. What a lot of young academics don’t realise is that good researchers take this negative feedback and use it to improve their work – they don’t simply ignore it, and they certainly don’t just give up.

The biggest challenge for our field in the next 10 years… We have too many effects and not enough explanations. Our field needs to develop process models that integrate different theoretical approaches and that are tested under representative task conditions. This can produce findings that are more robust and that translate to the world outside the laboratory.

My advice for young researchers at the start of their career is… Work on something you feel passionate about. This will hopefully mean you don’t give up when things get tough. Over time, you’ll learn to communicate the value of your work to others, and although they may not share your enthusiasm, they will come to appreciate your work, and you.

The one thing I’ve found most challenging is… The slow pace of academia. The time lag from having a research idea, through conducting the research, to publishing it can be several years, and patience is not a virtue I can claim to have much of. Fortunately, the time lag has been reduced in recent years with, for example, the introduction of ‘online first’ publication.

For more information on Mandeep, visit her page.

Star Track: Peter McGraw

Following on the success of our Research Heroes interviews, we’re launching a new interview series: Star Track. In this series, we turn the spotlight on researchers who will play an important role in shaping the future of the field. These people have already made a significant contribution with their groundbreaking research and engagement in the research community – you might know about them or might not, but you should definitely listen to what they have to say – enjoy!
First in our new series is Peter McGraw, an associate professor of marketing and psychology at the University of Colorado Boulder, who is an expert in the interdisciplinary fields of emotion and behavioral decision theory. His research examines the interrelationship of judgment, emotion, and choice, with a focus on consumer behavior and public policy. Lately, McGraw has been investigating what makes things funny. He directs the Humor Research Lab (aka HuRL), a laboratory dedicated to the experimental study of humor, its antecedents, and consequences. He has co-authored The Humor Code: A Global Search for What Makes Things Funny, which hit the bookstores on 4/1/2014. Of recent note, McGraw made the 2013 Stylish Scientist List – probably because he likes to rock a sweater vest.

I wanted to pursue an academic career in this field because… I thought that pursuing an academic career would yield a stimulating yet leisurely intellectual life. (I was half right.) While researching grad programs, I read Tom Gilovich’s book: How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life. By the end of chapter 2, I was hooked on the idea of studying judgment and decision making.

I find the inspiration for my research mostly from… Entrepreneurs and artists. Scientists don’t often think of their research as a creative endeavor that is important to share broadly with the world. I believe that the process of creating and disseminating scientific insights is enhanced by emulating people who have a different perspective and a broader array of tools. Also, behaving like an artist or an entrepreneur is much more fun than just trying to please peer reviewers.

When people ask me what I do, I say…. I study what makes things funny.

The best research project I have worked on during my career… In the summer of 2008, Caleb Warren and I set out to answer the question of why people laugh at moral violations. That project changed my life, as it spurred a quest to crack the humor code (something that behavioral decision theory’s “emotional revolution” had overlooked). The resulting paper, which was published in Psychological Science in 2010, brought together my two main research areas at the time: moral judgment and mixed emotions. Caleb and I introduced the benign violation theory of humor and showed that moral violations can be a source of pleasure (something every good comic knows).

Everything came together just right; the paper was accepted with no requested changes – something that I never expect to happen again.

The paper that has most influenced me is… When Caleb and I were examining the research on humor, the theories didn’t seem quite right. Fortunately, we found a little-cited paper published by a linguist named Thomas Veatch. To us, it was a huge advance over existing theories. Veatch’s work served as the foundation for the benign violation theory, which in turn serves as the foundation for the research conducted in the Humor Research Lab.

If I wasn’t doing this, I would be… Starting some sort of business.

The most important quality for a researcher to have is… Perseverance. Repeat after me, “They can slow us down, but they can’t stop us.”

The biggest challenge for our field in the next 10 years… Finding a way to speed up the peer-review process.

My advice for young researchers at the start of their career is… Write every day. Start today – and purchase the book: How to Write A Lot.

The one thing I’ve found most challenging is… Staying asleep until my alarm goes off. The work academics do is highly evaluative and uncertain – two conditions that contribute to anxiety. And anxiety gets me out of bed early. On the other hand, it has a silver lining. I believe that every day is a big day and should be lived with a sense of urgency. And big days rarely start with the snooze button.

For more information on Peter McGraw visit his page: http://www.petermcgraw.org/

For more information on his book see: http://humorcode.com/

Research Heroes: Barbara Mellers

Professor Mellers is the 11th Penn Integrates Knowledge Professor at the University of Pennsylvania. Her research examines how people develop beliefs, formulate preferences, and arrive at choices. She focuses on why people deviate from principles of rationality and how those deviations influence consumer choices and cooperative behavior. She is currently exploring how to elicit and aggregate probability judgments to arrive at the best possible predictions of uncertain events. She has authored over 100 articles and book chapters. She is a recipient of the Presidential Young Investigator Award and a past president of the Judgment and Decision Making Society.

I wish someone had told me at the beginning of my career that all careers come to an end. When I was young, I felt invincible and thought I had all the time in the world. But reality caught up with me, and I have a different perspective now. Each research project might be the last, so each one should, at least in principle, be better than the one that went before it.

I most admire people who are clear thinkers, beautiful writers, big dreamers, and hard-core scientists. They work through the implications of their ideas and are their own worst critics. And they do it all in the most graceful and elegant way imaginable.

The best research project I have worked on during my career might be the one I am doing now on human forecasting. This is a large and long-term project that gives me the opportunity to work with many talented people with wide-ranging and diverse skills. This project reminds me of an onion; we keep pulling off layers and finding more layers to go. It gets better and better.

The worst research project I have worked on during my career is the last thing in the world I want to talk about.

The most amazing or memorable experiences when I am doing research… happen when I am surprised by the results of an experiment. I once did an adversarial collaboration with Hertwig and Kahneman, and Kahneman described the process perfectly: when the data don’t turn out right, we suddenly gain 20 IQ points. Everything seems to make perfect sense in a brand new light that was completely obscure until that moment! Unfortunately, those IQ gains disappear when the surprise is over.

The one story I always wanted to tell but never had a chance…is hard to imagine because there are always opportunities to tell stories. So I would never hold back on one that was worth telling.

A research project I wish I had done… is something I am always thinking about.

If I wasn’t doing this, I would be doing science in another field, and the choice of which field to pursue is a difficult forecasting problem. It is hard to know what areas of science will be the most exciting twenty years from now. The best fields to work in are ones that are changing fast due to the synergy of several good ideas and ingenious technological innovations. Neuroscience, astronomy, and genetics are good examples.

The biggest challenge for our field in the next 10 years…is figuring out how we can make better judgments at individual, societal, and national levels. This goal applies to everything – medical decisions, career decisions, military decisions, romantic decisions, legal decisions, business decisions, policy decisions, and more. We need theories, but we also need to generate useful knowledge. That is the only reason why the public will listen.

My advice for young researchers at the start of their career is…replicate everything you do several times. The truth, however hard it is to accept, is what moves science in the right direction and leads to progress. Admit your uncertainties; you aren’t the only one who has them. Remember that you can’t praise people too much (yes, we really are that shallow!). And last but not least, when in doubt, give credit to others. Time usually sorts things out.

Departmental site: http://psychology.sas.upenn.edu/node/20474

Inside the Black Box: Psychological Bulletin

Psychological Bulletin is a bimonthly peer-reviewed academic journal that publishes evaluative and integrative research reviews and interpretations of issues in psychology, including both qualitative (narrative) and quantitative (meta-analytic) aspects. Editor-in-Chief Stephen Hinshaw gives us insight into this journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? Our journal (Psychological Bulletin) is different from most others, in that it publishes only lengthy, synthetic review papers across the entirety of psychology and behavioral science. So, I look for deeply conceptual introductions and systematic reviews of the primary literature, written in an accessible yet still scholarly fashion.

What are the common mistakes people make when submitting/publishing? Not reading the instructions carefully (or at all) – so that we sometimes receive single empirical studies, or extremely preliminary ‘review’ papers that suggest leads for further study but do not provide a deep review of a mature literature.

What are your best tips on how to successfully get published? Research, research, research your topic and revise, revise, revise your writing.

How are reviewers selected? In consultation with Associate Editors, I scour reference sections and consult lists of experts in various subfields.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? Ask a more senior colleague to enlist you as a co-reviewer on a paper they have been invited to review, with the editor’s permission.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? What makes one a good reviewer? Good reviews are thoughtful, respectful, and reveal deep knowledge of the topic – showing how the paper does or does not provide an advance in that field.

How do you resolve conflicts when reviewers disagree? Careful reading and rereading of the paper… and sometimes going to a Consulting Editor for a ‘tie-break’ review.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? Worst way is to get overly defensive and battle every point of the reviews.

Is there a paper you were sceptical about but turned out to be an important one? Yes, sometimes initial submissions that didn’t really deliver can be greatly improved with substantial revision.

As an editor, you get to read many papers and have insight into emerging trends. What are the emerging trends in research topics/methodologies? PB is so broad that it’s hard to see emerging trends across all of the sub-facets of the field.

What are the biggest challenges for journals today? Finding and engaging willing reviewers, keeping up with the flow of submissions, and battling ‘crank’ journals.

Journal home page

More ‘Inside the Black Box’

Inside the Black Box: Medical Decision Making

Medical Decision Making (MDM) is a peer-reviewed journal published 8 times a year offering rigorous and systematic approaches to decision making that are designed to improve the health and clinical care of individuals and to assist with health policy development. MDM presents theoretical, statistical, and modeling techniques and methods from disciplines including decision psychology, health economics, clinical epidemiology, and evidence synthesis. Editor-in-Chief Alan Schwartz gives us his insights into this journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission?

  1. “What’s new?” – What will I learn from this paper that I didn’t know before?  A paper presenting an original approach to a problem, or an extension of past approaches, or a first replication of a previously unreplicated finding is exciting to read. At Medical Decision Making, what’s new is usually a new method for studying or improving decisions, but sometimes it’s an exemplary application of prior methods. (Of course, some journals, like PLOS One, have explicitly chosen not to use this criterion).
  2. “What’s true?” – How do I know that I can rely on the results? Are the methods rigorous, sound, and appropriate for the question? Did the authors interpret their findings appropriately, without overgeneralizing?
  3. “So what?” – Why was this study proposed in the first place? What motivates the research question, and is it an important question in the context of the field and our current knowledge?
  4. “Who cares?” – Is this paper right for the readers of my journal, or does it belong somewhere else? A straightforward clinical trial comparing two drugs — or a basic psychology study of non-medical decisions — probably doesn’t belong at Medical Decision Making.

What are the common mistakes people make when submitting/publishing? 

My top three:

  • Failing to motivate the research question or ground it in a theoretical or conceptual framework. Theory is important.
  • Overstating the conclusions and ignoring limitations. Your paper doesn’t have to be the final word or solve every problem.
  • Sending to the wrong journal (violating the “who cares?” principle).

What are your best tips on how to successfully get published? Be open to feedback. Before you send a paper out, it should be the best paper you can write, so you should have had friends and mentors read and criticize it. If you can anticipate issues that a critic might raise, address those forthrightly. When you receive reviews, pay attention to them. If you don’t understand something a reviewer says, don’t ignore it — ask the editor for guidance.

How are reviewers selected? At Medical Decision Making, as at many journals, we have experienced reviewers on our editorial board and in our reviewer database. We find new reviewers through suggestions from authors (yes, you may suggest potential reviewers, and yes, we will often invite at least one of your suggestions if we agree that they really have specific content expertise) and through looking at the paper’s citations and related literature ourselves and seeing who else is working in the same area.

Our goal is to ask for reviews from experts whose reviews not only advise the editor on the disposition decision but are valuable to the authors, whether or not we publish the paper.  We’re fortunate at MDM to have really outstanding reviewers, and many first-time authors comment on how helpful the reviews have been.  We also score our reviews, and reviewers who do a poor job tend to get selected against in the future.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? I’m a big proponent of reviewing both papers and grant applications; I think you learn a lot from reading very good (and sometimes poor) writing, and from comparing your review with those of the paper’s other reviewers and the editor (at MDM, we cc our decision letters to the reviewers). One good way for PhD students to get some experience with this is to do a “mentored review” with their advisor when their advisor is asked to review a paper. Many journals will allow the invited reviewer to share the review with a student as long as the invited reviewer supervises and takes responsibility for the review. Post-PhD, as a postdoc or junior faculty, if you haven’t already been asked to review for a journal that you’d like to, you can often contact the editorial office and ask to be added to the reviewer database. Of course, submitting a paper to the journal and filling out your author profile with a good set of keywords for your expertise is also likely to lead to reviews in the future.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? This is a matter of editorial taste, but I really like to see a review that begins by looking at the big questions and pointing out the strengths of the manuscript (or at least of what the authors hoped to achieve through the study), and then moves on to detailed constructive criticism about methods, and presentation and interpretation of results. The review should conclude with minor concerns or suggestions for improving the writing.

Some little things that are very helpful: Number the points in the review to make it easier for the author to respond point by point. Refer to parts of the manuscript by page number and line number to help the author locate exactly what you’re asking about. Make it clear to the author when you’re making a suggestion (e.g., please describe the factor rotation strategy in more detail) and when you’re asking a (non-rhetorical) question (e.g., why did you expect patients to be more influenced by attribute range than attribute context?). Don’t say (in the comments to the author) whether the paper should be rejected or accepted – that’s the editor’s job. Definitely don’t recommend rejection privately to the editor and then write a wholly positive review for the author.

How do you resolve conflicts when reviewers disagree? Reviewers advise; editors decide. I’ll admit to a little bias: when good reviewers disagree, I think that means there’s something important to work out, and I’ll usually ask the author to help the reader understand both perspectives and how the author chose to resolve them. There isn’t a single right way to study something. On rare occasions, reviewer disagreement lends itself to inviting one or both reviewers to write an editorial about the study, if we’ve decided to publish it.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? A revise and resubmit is a positive signal, especially from a paper journal that has a limited page budget. It usually means that the editor thinks there’s something important in the paper to make it worth spending more editorial and reviewer time on, and that you’re capable of addressing the reviewer concerns. So that’s easy – always resubmit, and always include a cover letter addressing each point made by each reviewer. That can mean explaining why you didn’t choose to make a suggested change, but pick your battles: a wholly unresponsive revision is not going to go very far with the editor.

Medical Decision Making also has a category of initial decision called “reject and resubmit”. This means that the editor doesn’t want the paper or a revision of it, but thinks there might be a different, related paper you could write that would be competitive. The new paper gets the full peer review treatment, usually with different reviewers.

A flat rejection – well, when I get those, I usually shake my fist at the sky, eat a piece of chocolate, and get a good night’s sleep. Then I see what useful information I can get from the reviews and improve the paper to send it elsewhere. Uncertainty is a fundamental fact of life.

The worst way to react to a rejection is to send a nasty email to the editor-in-chief to try to bully him into reconsidering the decision and to threaten that you will never send your priceless work to that journal again. Yes, that happens (especially early in my term). We have an appeals process if it’s clear that a reviewer or editor deeply misunderstood something, but that’s not it.

Is there a paper you were sceptical about but turned out to be an important one? I think I’m still too early in my editorship to know. In about 3 years, though, I’d be interested in looking at that: collecting the top 10 important papers we’ve published based on reader response, and looking back at my notes to see how many of those I only assigned to an associate editor reluctantly.

As an editor, you get to read many papers and have insight into emerging trends. What are the emerging trends in research topics/methodologies? That’s one of the best parts of the job. Right now, MDM is publishing a lot of innovative work in simulation models and value-of-information analysis methods. Approaches to utility models are emerging in which econometric and behavioral research are triangulating on phenomena that call into question some longstanding simple assumptions of health state valuation — for example, that the proper unit on which to assess utility is the individual decision maker. And there’s a lot more interest in dual process theory and decision psychology/behavioral economics manipulations of the decision environment in order to understand and improve health decisions.

What are the biggest challenges for journals today? There’s a great debate going on right now about open access models for science journals and how publishers do or don’t contribute to science, but in some ways, I think that’s just the opening act for a larger discussion of the value of an expert peer review process vs. open publishing and crowdsourced reviewing. I want to see good science clearly communicated, and journals need to demonstrate to their readers that they are promoting those ideals.

Journal website

More ‘Inside the Black Box’

Inside the Black Box: Psychological Review

Psychological Review, founded in 1894, is one of the most prominent journals in psychology today. Psychological Review focuses on psychological theory and publishes papers that make important theoretical contributions to psychology. Associate Editor Prof. Susan Fiske helped us with more insight into the journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? A clear statement of the argument in the title and abstract enables an immediate evaluation of an article’s contribution. It is amazing how often authors fail to be clear about their hypothesis and its significance.

What are the common mistakes people make when submitting/publishing? Failing to check whether the article is appropriate for that journal.

What are your best tips on how to successfully get published? Aha! Plus Evidence.  Good ideas, backed up by good science.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? Participating in a grad-led journal club, reading and critiquing published articles. Telling one’s advisor that one would like some reviewing experience.  When asked, returning high quality, on time reviews.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? Balanced, thoughtful, succinct.

How do you resolve conflicts when reviewers disagree? Reviewers often disagree because they are recruited for differing expertise. Editors must consider the inputs relative to the expertise and perspective of the reviewers.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? An R&R needs to list, in a cover letter, each response to each suggestion, including explanations of the (rare) instances of declining to make certain changes.

Is there a paper you were sceptical about but turned out to be an important one? Not that comes to mind.

As an editor, you get to read many papers and have insight into emerging trends. What are the emerging trends in research topics/methodologies? Interdisciplinary and global collaborations.

What are the biggest challenges for journals today? Maintaining humanity despite the volume.

Journal home page

More ‘Inside the Black Box’

Inside the Black Box: Journal of Behavioral Decision Making

The Journal of Behavioral Decision Making is a multidisciplinary journal publishing original empirical reports, critical review papers, theoretical analyses and methodological contributions. The Journal also features book, software and decision aiding technique reviews, abstracts of important articles published elsewhere and teaching suggestions. The objective of the Journal is to present and stimulate behavioral research on decision making and to provide a forum for the evaluation of complementary, contrasting and conflicting perspectives. These perspectives include psychology, management science, sociology, political science and economics. Studies of behavioral decision making in naturalistic and applied settings are encouraged. Associate Editor Frank Yates gives us his insights into the journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? Papers that surprise me because of their conclusions or even their topics are the ones that most often make me go “Wow!” So do papers that exhibit different ways of thinking about old topics, ones that force me to say, “I’ve never seen this idea before. That is so cool.” I must also say that I love manuscripts that clearly point toward ways that people can decide better than they normally do. And if the authors can complement their messages with concrete and elegant illustrations, that’s even better.

I can honestly say that I have never said “Yuck!” in response to a submission, even to myself.  I realize that virtually every submission I have ever seen represents the culmination of a huge investment by the authors.  I would feel guilty if I didn’t acknowledge that investment.  On the other hand, certain aspects of some submissions do make me groan (“Oh, no!”).  Papers that are weakly motivated tend to do that, especially ones that report the results of studies that I infer began with nothing more than: “I wonder what would happen if we tried this manipulation.”  Groans are also evoked by papers that are unnecessarily hard to read.  This happens, for instance, when papers are overly abstract or are too tedious and too long relative to the significance of their messages.

What are the common mistakes people make when submitting/publishing? Well, whatever actions produce the kinds of groans I just mentioned are good examples.  An especially common error that authors make is assuming that the reader appreciates and knows as much about their focal research problem as they do.  This makes their papers dull and impenetrable.  One way to avoid this is to simulate the journal reading experience in advance of submission.  That is, recruit friends and colleagues who are similar to the journal’s audience but are naïve to the topic.  Then ask them to review and discuss with you in detail their interpretations of your work.  You are virtually guaranteed to uncover misconceptions that will amaze you: “Really?  That’s what you thought I meant?”

What are your best tips on how to successfully get published? My first suggestion is to somehow choose to work on problems that are easy to convince people are interesting and important to solve, and then solve them.  Our field is like baseball, where an outstanding hitter fails 2/3 of the time.  That being the case, successful authors necessarily must be unusually energetic and well organized.  That is, they must always be working on several projects simultaneously.  Therefore, despite the low “hit rate,” they maintain a steady flow of results that are ready to submit for publication.  My second tip is to learn to view and use reviewers as one’s collaborators.  Typically, reviewers are among the most knowledgeable people in the world concerning an author’s focal problem.  So why not exploit the expertise underneath their comments to sharpen your writing, your thinking, and your next studies?

How are reviewers selected? My goal is to have every submission read critically and constructively by 2-4 people who know more about an author’s research problem and related topics than just about anyone else in the field.  Some of these people are likely to be on our editorial board.  Many others will have been authors of articles cited in the manuscript.  Because JBDM is a multidisciplinary journal, we make a special effort to have at least two different specialties represented on every team of reviewers.

How can a young researcher become a reviewer? When is the best time during one’s PhD training to start doing so? In my view, the best time for a PhD student to start reviewing is after he or she has developed expertise and credibility in a particular area of research and therefore would have something useful to offer and to gain as a reviewer.  Having successfully published in that area is usually a safe indicator of such expertise.  At that point, the student might be well advised to write to editors of journals that publish work in the student’s area of specialization, volunteering to review occasional submissions on particular topics.  The response is likely to be immediate and positive, since editors are always on the lookout for good reviewers.

Since reviewing is hard work and takes time, why would (or should) a PhD student want to serve as a reviewer?  What exactly is there to gain from doing so?  One reward is the sense of contributing to the advancement of the field at its cutting edge.  But the main advantage is the unparalleled potential for learning and inspiration.  The author, reviewers, and action editor for a journal submission essentially form an especially exciting (and consequential) expert seminar on a topic of great interest to everyone involved.  Moreover, all the members are highly motivated to get things right.  The reviewers and editor work really hard to make sure that they do justice to the author’s contributions.  In addition, no one wants his or her comments to appear foolish to the rest of the group.  Finally, the review process often serves to spark new insights and research problems in the minds of all the participants—authors, reviewers, and editors.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? Good reviews offer valid analyses of the author’s ideas, reasoning, and methods, answering the question: “Is this legitimate?”  In addition, though, for the benefit of the author as well as the action editor, a good review also clearly explains how the reviewer arrived at his or her conclusions.  The best reviews also provide helpful suggestions and guidance, including useful references and even design ideas that might help settle key questions that have been left unresolved by the author’s current efforts.  And good reviews are never mean-spirited.

How do you resolve conflicts when reviewers disagree? Although it is tempting to do so, I never rely on a simple “vote count” among the reviewers.  Instead, I try to understand why the reviewers disagree.  In my experience, more often than not, reviewers only appear to disagree because they are focusing on different aspects of the author’s work.  This frequently occurs because the reviewers’ own research programs have different foci.  So, relying on the specifics of the reviewers’ analyses as well as my own reading of the manuscript, I arrive at summary conclusions about the sensible disposition of the submission—acceptance, revision and resubmission, or rejection.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? This is an exceptionally important issue.  My sense is that such reactions are often critical determinants of many people’s career paths.  I have known numerous researchers who seem to have found rejections so demoralizing that they eventually abandoned their research careers.  But I have also known several highly productive investigators whose success seemed largely traceable to how they dealt with the emotions evoked by negative feedback in the review process.  Three elements of their strategies seemed to stand out and perhaps deserve emulation:

#1: Expect reviewers to identify weaknesses in your work, and accept that as a good thing; you have the opportunity to benefit from their expertise.

#2: Don’t allow yourself to brood over negative comments, even rejections; instead, immediately start working on your next move, be it a revision, a clarifying follow-up study, or the abandonment of a now-recognized dead end.

#3: If and when you revise and resubmit, in your cover/response letter, make sure to respond—respectfully—to every major reviewer comment; reviewers simply hate being ignored.

Is there a paper you were skeptical about but it turned out being an important one? That’s a funny question.  There definitely have been a few such papers, and you have probably read them.  It clearly would be unwise for me to identify them, though.

I assume that you asked the question because you wonder why my initial appraisals of such papers were “off” and what can be done to try to reduce such mistakes, or at least their impact. One basis for misappraisals, which seems uncommon, is that reviewers and editors sometimes misjudge the technical quality of authors’ reasoning and methods.  Given human fallibility, occasional misjudgments like that are inevitable.  I recommend that authors maintain vigilance for such mistakes and call attention to them in cover/response letters for submissions of revisions—again, respectfully.  Another reason that editors occasionally underestimate manuscript potential is that some authors prove to be unusually good at making marked improvements from one revision to the next.  They are especially adept at building on reviewer and editor comments.  Cultivating that skill seems wise.

As an editor, you get to read many papers and thus have insight about emerging trends.  What are the emerging trends in research topics/methodologies? Perhaps the most obvious trend is toward studies that focus on biological underpinnings, or at least correlates, of overt decision behaviors.  Our field has also seen a noticeable, and perhaps surprising, uptick in efforts to understand the role of time in people’s decision making.  There has been a good bit of excitement about the involvement of emotions in decision making, too.  Yet another noticeable trend has been toward assessing and explaining individual differences in decision making character and quality.

What are the biggest challenges for journals today? In my view, two related challenges are at the top of the list. The first is the increased volume of journal submissions. The second is the need for good reviewers to read and respond to those submissions, thereby accelerating the advancement of the field.  The problem seems to be exacerbated by increasing institutional pressures on potential reviewers to perform other duties.

More ‘Inside the Black Box’

Meet the Editors: Neda

Neda Kerimi

Neda is currently a post-doctoral fellow at the department of Psychology, Harvard University, after receiving her PhD from Stockholm University and working at Uppsala University. Her research interests include decision making, happiness, and risk, as well as human-computer interaction. She’s also the news editor for the European Association for Decision Making. Besides being a self-confessed technology geek, she loves useless facts and futurist science.

I’m working on InDecision because…Someone has to do it! Ever since my PhD studies I have been involved with different scientific societies, and I noticed that especially in JDM, a forum for early career researchers did not exist. In addition, there is only so much that graduate programs or conferences can teach you. We wanted to create a forum where people can discuss the science itself and everything else that we all go through during our academic careers. We get so much satisfaction from running the blog that we have decided it’s well worth the time and energy.

I’m most passionate about…Knowledge and people! I love learning, especially if it helps me to understand humans better. I have come to terms with the fact that I am a science geek in heart and soul (indeed, 90% of my conversations start “I read an article about a study…..”). In addition, I am passionate about understanding the core of the human mind, whatever that may be. I don’t know if we will ever have a grand theory of the human mind, but we are learning new things every day. I am also passionate about how we can use knowledge and scientific progress for the greater good (more on that in an upcoming indecisionblog.com series).

At a conference, you’ll most likely find me in a session with key words like… Financial JDM, social JDM, and technology. More or less anything that can please my tech-geek and JDM-geek identities. For me, conferences are not solely about the talks but also an opportunity to connect with new people and reconnect with those that I seldom see in person. So I might skip a few talks just to get the time to chat with an old friend or a new friend.

How I ended up doing research in this field… I started in IT and studied psychology alongside my full-time job, but I soon realized I wanted to pursue my PhD in psychology. Being a bad decision maker (I couldn’t even decide where to eat lunch), it came naturally to me to immerse myself in the science of decision making. Fortunately, a PhD in the subject has actually made me a better decision maker. However, I can’t say how much of it should be credited to my PhD or to the fact that I have gained more experience in making decisions.

My personal research heroes are… so many that it is not worth mentioning names. For me, a research hero is more than someone who has come up with a ground-breaking theory – it’s also about the person. I have been incredibly lucky to meet so many people who, despite their fame and prominence, have taken the time to meet or chat with me, which I find hugely inspiring. I especially admire the many female researchers who lead the way for other women to progress in the field. The scientific community has traditionally been male-dominated, and I am pleased to see that is changing, and it is because of the excellent work that many female researchers do.

What I find most challenging is… not losing focus! I feel that research has become so much fiercer and more competitive than before. The currency in our field is publications and citations, and whether one gets a job or funding depends on the number of publications and citations (which I do not see as a good currency). I guess my challenge is to focus on what gets me going and not be affected by the stress and pressure that come with working in academia. Another challenge for me is to say no to projects. I get overly excited about everything that has to do with the human mind and science and want to run a project on it. It is a challenge, but I am getting better at it.

What I’d be doing if I wasn’t a researcher… I would most likely work with psychology or technology (maybe both?) in one way or another. I actually think more scientists should embark on a career outside academia. We need to share the valuable knowledge and experience we have with non-academics as well.

——————————————————————————————————————-

Other things to read:

Inside the Black Box: Frontiers in Psychology

frontiers banner

Next in our Inside the Black Box series is Frontiers in Psychology, an open access journal that aims to publish the best research across the entire field of psychology. The mission of Frontiers in Psychology is to bring all relevant specialties in psychology together on a single platform. Field Chief Editor Axel Cleeremans gives us his insights into this journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? I go “Yuck!” instantly if the paper looks like it’s poorly written, if the figures don’t look good (see Tufte’s advice on that), if it contains typos, or if it looks very verbose or boring. There is an important message there: If you don’t fine-tune the presentation of your findings, it’s as good as nothing.

“Wow!” can result from different factors. Sometimes it’s the finding itself — for instance, I find Geraint Rees’s recent demonstration that one’s experience of the Ebbinghaus illusion is inversely proportional to the size of one’s V1 stunning. Other times it’s the sheer power of technique — Bonhoeffer’s applying two-photon microscopy to visualize synaptic growth in vivo is a good example of that. The cleverness of an experimental design is a further “Wow!” inducer; Jacoby’s process dissociation procedure, when I first read about it, definitely elicited a “Wow!” response from me. And then of course, I go “Wow!” when reading about impressive ideas. Rumelhart and McClelland’s PDP volumes made me go “Wow!” for years, as did Hofstadter’s “Gödel, Escher, Bach”.

What are the common mistakes people make when submitting/publishing? Submitting to the wrong journal. Making the story too complicated. Not having any story. Reporting uninteresting findings. Reporting uninteresting findings but trying to make them sound interesting. Failing to cite relevant work from many years ago that old editors know about.  Leaving typos in the manuscript. Ugly figures.

What are your best tips on how to successfully get published? Work on the most important issue in your domain. Build a good narrative. Papers that read like detective stories (and finish with a satisfying resolution!) are always good. Get the writing absolutely perfect. Of course, interesting and solid data. Simplify. Kill all the typos. Cite previous work. All referees first look for flaws because if any are found then the review is done and the referee can focus on something else. It is only when no surface flaws are found that the referee actually thinks about whether the paper is interesting…

How are reviewers selected? That very much depends on the journal. Some editorial systems are almost entirely automated, which has advantages (speed) but also disadvantages (relevance). Some editors hand-pick their referees based on different criteria (mostly, whether they think they know something about the topic and whether they think they’ll compose their review in time). Many systems offer referee suggestions based on keyword matches. Authors can also often propose referees themselves. This is a good idea as it speeds up the work of the editor, who will typically select referees both from the author’s suggestions and from his own pool of referees.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? I wouldn’t do it too quickly — say, three years in your Ph.D. Reviewing an article is an important and difficult job. It gets much easier as your knowledge of the field grows and as your expertise at reviewing increases, but the first reviews you do are always very intensive jobs. You worry that you’ll be ridiculous in the eyes of the editor and the other referees. You worry that you missed a central point. You’ll spend days on your first review. On the other hand, knowledge of what’s going on in your field before it gets published can be invaluable — but for this, you can count on your advisor.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? A good reviewer is a reviewer who turns in her review in time and who manages to discuss the paper in a neutral tone while clearly listing the issues that concern her, if any. And of course: Good reviews also contain a clear recommendation that is congruent with the listed points. Sometimes you get almost self-contradictory reviews. They begin with “This is a very interesting paper that uses clever methods” and finish with “I recommend the paper be rejected”. This makes it almost impossible for an editor to use the review, as do reviews that contain too many subjective comments. Reviews should almost be written as though they were public comments, that is, with all the care one would use if one were talking in public about someone else’s work.

How do you resolve conflicts when reviewers disagree? That’s a tough one. I regularly receive conflicting reports, sometimes at either end of the spectrum (e.g., Referee #1 says “Reject”; Referee #2 says “Accept without revisions”). If both reports make sense (that is, it is clear both referees understood the paper), most typically, I will consult a third referee (which sometimes doesn’t help). When all else fails, you read the paper and make the decision yourself… (just kidding: editors read the papers, but then there is a difference between reading a paper and forming an expert opinion about it). It is worth mentioning here that some open access journals (e.g., Frontiers in Psychology) have adopted a completely different manner of resolving differences between referees, namely to ask referees and authors to interact until a consensus between referees is reached. Many conflicts between referees are solvable by iterated interaction — something that can be tough to achieve with the standard reviewing process.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? Revise and resubmit is pretty much the norm — it is exceptionally rare for a paper to be accepted right away. Dealing with rejection is understandably difficult. Your reaction to it very much depends on what you can attribute the rejection to. Being rejected from Science is not an indication that your research is not good; just that it’s not good enough, or not novel enough, or not interesting enough in the eyes of Science’s editors. You may think otherwise and feel wronged somehow, but it’s not your decision to make in either case, so it’s best to move on and submit to another journal. The worst case scenario is when you submit to a mediocre journal, wait for months, and find that your paper is rejected. If you really feel a “reject” decision was incorrect, it’s always a good idea to interact with the editor. As an editor, I only use “reject” when all referees agree that the paper is not publishable. Dealing with a revise and resubmit is easy: Just address all the points raised by the referees one by one and thoroughly. In the vast majority of cases, papers in that category will end up published;  it’s just a matter of taking all the points seriously and in detail.

Is there a paper you were skeptical about but that turned out to be an important one? Not that I can remember as an editor. A couple of my own papers as an author, though, had very difficult beginnings and turned out to be considered quite important. Science is about data, but also involves rhetoric: Not only do the data have to be important, but you also have to present the results and their implications in a persuasive manner.

As an editor, you get to read many papers and thus have insight into emerging trends. What are the emerging trends in research topics/methodologies? There is an important ongoing discussion on Twitter, blogs, Facebook, email and the press about the importance of replication in psychology. Developing methods that make it possible to analyze replication efforts properly, as well as promoting the publication of replication findings, are important issues. One of the most interesting methodological developments in this respect is the emergence of novel statistics based on Bayes’ ideas. I also continue to be impressed with the increased sophistication of neuroimaging methods — think MVPA for instance. Increased meta-data in all fields will also make all sorts of meta-analyses possible.

What are the biggest challenges for journals today? The challenges are not the same for traditional journals and for new, online, typically open-access journals. Some journals are more or less immune from challenges because of their extraordinary status in the field. The challenge for traditional journals is to stay relevant in an increasingly open-access, rapid-fire world: Interesting results are tweeted or otherwise shared almost instantly, and people want to download the relevant material freely and right away. The challenge for open-access journals is to accrue enough credibility. A challenge that faces every actor today, individuals and journals alike, is to find interesting ways of attracting attention. So much is published today (considerably more than even a few years ago) that it becomes a challenge to even find relevant material.

Journal home page

‘Inside the Black Box’ series home page

Inside the Black Box: Judgment and Decision Making

We start our journal editor interview series with Judgment and Decision Making’s editor Jon Baron. JDM is the journal of the Society for Judgment and Decision Making (SJDM) and the European Association for Decision Making (EADM). It is open access, published on the World Wide Web at least every two months. JDM publishes original research relevant to the tradition of research in the field represented by SJDM and EADM. Relevant articles deal with normative, descriptive, and/or prescriptive analyses of human judgments and decisions.

What makes you go “Wow!” or “Yuck!” when you first read a submission? Wow!: When it shines new light on a traditional JDM problem, including possible applications in the real world. I choose the lead article in each issue on the basis of this sort of reaction. (Of course, some issues have no article that merits the exclamation point, and some have more than one.)

Yuck!: When it applies the Analytic Hierarchy Process to the pipe-fitting industry in Pakistan. Or when it uses a tiny sample, with no replication, to show that people are at the mercy of subtle, unconscious forces. Or when it makes obvious statistical errors, like claiming an interaction on the basis of a significant effect next to a non-significant effect.

What are the common mistakes people make when submitting or publishing? Submitting to the wrong journal.

What are your best tips on how to successfully get published? Study big effects, or use large samples. Don’t waste time studying phenomena that are ephemeral and difficult to replicate, especially if you are trying to find moderators of such effects.

How are reviewers selected? When I handle papers – about half of them go to associate editors – I try to find the most expert reviewers who are willing to review, including members of the journal’s board when possible. This often takes several attempts; people say no. Often I use Google Scholar, as well as citations in the paper and authors’ recommendations.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? If you know someone who is an editor (including an associate editor), tell him or her that you are willing. I often ask grad students to review, but only if I know them to be experts on the topic of the paper. I am not willing to take a student’s word for this expertise, or to assume that being first author of a related paper is sufficient. Thus, personal knowledge is important.

I think that grad students should do occasional reviews. But anyone who keeps publishing is going to get asked to do more and more reviews. Be nice to editors (and other authors). If you get asked to do a review, respond quickly. Saying no immediately allows the editor to go to the next person on the list.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? What makes one a good reviewer? Explains why the paper is fatally flawed, if it is. Otherwise provides helpful advice for revision, or (only if necessary) for additional research. What I find unhelpful are requests for more “theory”, as if theory were something like soy sauce.

How do you resolve conflicts when reviewers disagree? I regard reviews as information, not votes. They point out flaws I had not discovered, literature I did not know, or strengths that I did not appreciate. The review’s bottom-line recommendation is just a little more information. Thus, these recommendations are not conflicts that need to be resolved.

But reviewers also disagree about specifics, about what needs to be done. Here, I think it is my job to tell the author which of the reviewers’ comments to ignore, and which to follow, and (if the review does not say), how to follow them (if I can). As an author, I find it annoying to be at the receiving end of conflicting reviews, with no idea what magic I must do in order to satisfy everyone.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? The best way to react to a revise/resubmit is to try to do what it says, or explain politely and clearly why you cannot or should not do that. Or give up and try another journal if you think you are being asked to do the impossible.

The best way to react to a rejection depends on what it says. If it finds a fatal flaw that cannot be fixed, the best thing may be to regard the paper as sunk cost, and move on. In other cases, rejections are very specific to the journal, so you should just send the paper elsewhere. If you think a paper is good, don’t give up. Keep sending it elsewhere. In still other cases, papers are rejected because more work is needed. Maybe do the more work.

Is there a paper you were sceptical about but turned out to be an important one? Not really.

As an editor, you get to read many papers and thus have insight into emerging trends. What are the emerging trends in research topics/methodologies? On topics, I think that fields go through fads – well, let’s say “periods in which some topics become very popular and then gradually fade into the background”. These are often good things. Many come from external interest and funding, such as the enormous interest now in “nudges”, or the interest in forecasting and prediction arising from the recent IARPA competition. Around 1991 the Exxon Valdez case inspired (and funded) a great deal of JDM research on contingent valuation and value measurement in general.

On methods, our field is slowly but surely catching up with the enormous increase in the powers of computers and the Internet. Data analysis is becoming more sophisticated. A variety of approaches are being explored (including Bayesian ones). Web studies are becoming more numerous and more sophisticated. People are making use of large data sets available on the Web, including those they make themselves by mining data.

What are the biggest challenges for journals today? The biggest is integrity. The work of Simonsohn, Simmons, Nelson, Ioannidis, Pashler, Bar-Hillel (earlier) and others on p-hacking, file-drawer effects, basic statistical errors, and outright fraud has raised serious questions about what journals should and can do. The problems vary by research area. Medical research and social psychology are probably worse than JDM. But I am still trying to work out a way to deal with this problem. Asking for data and for sufficient stimulus materials for replication is a step. I spend a lot of time checking data analysis with the data that authors send.

The next biggest challenge is how to take back scholarly communication from those who seek to profit from it by building pay walls of one sort or another, including both subscription fees and publication charges. I have ignored this problem, hoping that it will go away or that someone else will solve it (e.g., by endowing JDM with $500,000). Right now, JDM has neither type of fee, because I do the production and “office work”. Other journals work this way, but the authors all submit papers with LaTeX formatting. My job would be easier if Microsoft Word did not exist. Maybe I will outlast it, and then the problem will be solved for the next editor. But a little money – nowhere near as much as proprietary journals get – would still help, and I don’t know where to get it.

The third biggest challenge is how to get rid of the perverse incentives that arise from the use of the “impact factor” of a journal for evaluation of authors of papers in that journal. Journals cannot do much about it, except perhaps to stop advertising their impact factors in large print.

Journal homepage

‘Inside the Black Box’ series home page

Jon Baron Research Hero interview