The Conference Survival Guide

Two of the biggest conferences of the season for InDecision readers are coming up soon: the Association for Consumer Research (ACR) conference and the Society for Judgment and Decision Making (SJDM) conference. We will be live blogging and live tweeting both, as well as providing pre- and post-conference coverage, so keep an eye out for updates and exclusives with conference researchers and speakers. To help you get ready, we have put together a comprehensive “Conference Survival Guide.”

Speakers: Avoid The 4 Sins of Conference Presentations

  1. Saying too much
  2. Showing a long and useless literature review
  3. Failing to remind your audience of the designs and terms of your research
  4. Showing off your methods and analysis rather than the substance of your findings (we aren’t quantitative economists after all)

Full detailed article here.

Manage Your Worries

Can’t figure out which talks to go to? Finding yourself losing interest in your own topics but loving that trendy talk on virtual reality or green psychology? Just generally feeling lost?

Don’t worry: our expert panel has you covered here.

Remember to Do the 4 Things We Vow to Do After Conferences (But Rarely Do)

  1. Read the full program
  2. Email that new contact
  3. Focus on your new inspirations
  4. Talk more with peers

Full article here.

How to (Not) Network

Students: remember conferences are as much about peers as they are about professors. Peers are important, as we explained a little while ago on the blog.

“Off the record,” many professors have said that they hate it when people blindly come up to them at conferences and just grill them with questions. Of course, not everyone does this, and if you come with good questions it can certainly sometimes work, but many professors hate “networking antics,” so be careful about how and whom you approach.

Many successful professors have also told us they spent conferences staying up late chatting about research and partying with peers, not professors, and these academics turned out more than fine. So if you’ve got mad networking game, go for it. But if it’s not your style, don’t worry: it’s not the end of the world.

One additional networking strategy is to walk around with someone who actually knows people – maybe that assistant professor at your school you get along with so well could help you?

Pay Attention to Twitter

And not simply because we will be live tweeting and blogging the conferences we cover, but because other people will be too—some of whom you may already know. Twitter has now reached critical mass in academia, making it both a useful tool and a source of entertainment. Remember, you don’t need to tweet yourself: just follow people or search the conference hashtag to see what’s going on.

For reference, check out the top tweets from the Association for Psychological Science’s 2013 conference. For ACR 2013, the hashtag is #ACR2013 and the conference handle is @AConsRes.

Go to the Big Talks

Why? Because they are almost always topical, and good or bad, everyone will be talking about them. If you get lucky you might even get a repeat of ACR 2011, when a panelist literally turned to another panelist and said, “Maybe no one cares about your type of research.”

Sometimes they can be truly fantastic and as energizing as a rock concert: we are all still tingling from Michel Tuan Pham’s 2013 Society for Consumer Psychology lunch speech on the “7 Sins of Consumer Psychology”, which we’ve published in its entirety here. While it’s not quite the same magic as being in that electrified San Antonio room with the amazing Pham, it’s almost as good. If you don’t believe us, you can always watch this video of the speech instead:

Be a Positive, Open Attendee

According to Professor Mike Norton, a person should never respond to another’s talk by saying, “Isn’t that just cognitive dissonance?” Instead, Norton suggests you “always be trying to build on people’s ideas.” The mindset he believes you should be in is, “That’s really cool, and you know what else you could do is X.” More from Mike in the video below:

Remember: Many People Want to Help You

Here, Professors Leif Nelson and Simona Botti speak about how you should just ask people and professors for help if you need it (and watch until the end for a funny bit).

People in the field are likely to help you because they are nice. Alessandro Peluso even gave a 2013 ACR presentation showing that people enjoy giving advice, so ask away.

Remember, feel free to ask us anything at InDecision and we will hazard an answer or direct you to someone better placed to answer the question. After all, we are one big team of researchers, and teamwork should be part of our daily lives.

Research Heroes: Colin F. Camerer

This week we continue our Research Heroes series with Colin F. Camerer, who is the Robert Kirby Professor of Behavioral Finance and Economics at the California Institute of Technology. Before joining Caltech, he earned his PhD from the University of Chicago Graduate School of Business at the age of 22, and subsequently worked at the Kellogg, Wharton, and University of Chicago business schools. He has published more than 50 articles and a book on behavioural game theory. He is a past president of the Economic Science Association, and in 1999 he became the first behavioral economist elected as a Fellow of the Econometric Society. He has also just been named one of this year’s 24 MacArthur Foundation Fellows.

I wish someone had told me at the beginning of my career… Learn more math! I was good at math but didn’t appreciate how important it is to learn it when you’re young. Math is central to applied economics and could be used more in JDM psychology. The same is true for statistics— knowing a lot of tricks helps you get the most from data, win arguments, and figure out what you do and don’t believe in other people’s papers. 

I most admire academically… (With apologies to many whom I’ve forgetfully excluded.) Dick Thaler, for setting a good example by writing a small number of papers on important questions, and making each a gem. Danny Kahneman, for being so wise, getting wiser every year (how does he do it?), and writing so beautifully. Amos Tversky, for a steel-trap mind and tenacity in digging on a topic until he had it figured out and expressed in a simple formalism. George Loewenstein, for his gift of synthesizing lots of ideas and examples into an insight in a way that is very fruitful for others to then pursue. Gary Becker, for seeing the interesting economic elements in so many kinds of choices (like having children, and crime). The economist Bob Shiller, for being eclectic and for daring to write aggressively about the role of social forces in asset pricing (which everyone else thought was crazy and unmodellable but which is now starting to gain traction). I also admire a lot of people JDMers may not have heard of in other fields. One is Joe Henrich, a cultural evolution anthropologist who did the first economics experiment in a small-scale society, which then led to an influential cross-society project. Three more are Duncan Watts, who knows a ton of things about social networks; Mike Kearns, a computer scientist who recently became interested in experiments on networks and problem solving; and Peter Dayan, a “dry” theoretical neuroscientist who is always coming up with remarkably bold ideas.

The best research project I have worked on during my career… If you’re doing it right, you almost always have the very genuine feeling that the paper you just finished is the best one (even though you had that same feeling N-1 times before). One of the best was our paper on taxicab driver labor supply (QJE 1997). It was a really simple insight and one of the earliest clear tests, outside of finance, between a behavioral alternative and a very standard economic idea—that labor supply curves slope upward (i.e., workers put in more hours when wages are higher). I was living in New York at the Russell Sage Foundation so off to the Taxi and Limo Commission I went. There sat a bored economist whose main job is to collect statistics so they can justify taxi fare increases every couple of years. It turned out they had done some studies asking drivers for information on the hours they drove on different days, so I left with a (free!) floppy disk full of data from them. We did not have any formal model in the paper, but others came along later, figured out the proper way to model it with reference-dependence, and replicated our basic finding.

With that paper, we also had a mixed editorial experience with a happy ending. We sent it to American Economic Review, where we ended up getting one silly short report basically saying “I don’t believe it” and mentioning measurement error, which we had addressed very squarely (with a good “instrumental variable”).  A lot of economists were (mindlessly) hostile back then. We withdrew it and submitted it to a special issue of the QJE honoring Amos Tversky and got incredible help from the editor there (Larry Katz) who is an outstanding labor economist and told us exactly what to do.

The worst research project I have worked on during my career… The worst was the first experiment we did in Charlie Plott’s class in winter 1980 at Chicago GSB. Charlie was an incredibly patient and generous teacher, so he required us to actually run an experiment. We were interested in finance at the time so we created an experiment to test whether specialists in stock markets would smooth prices as they are supposed to do in theory, by buying during price drops and selling during price increases.

We made every possible mistake. First, there was only one specialist per session and a lot of live traders, but only the specialist’s behavior was interesting. So the design had incredibly fragile internal validity: a distracted or confused specialist would just produce terrible, uninteresting results. The instructions were a mess. And of course we did not plan well, so 10 minutes before the experiment we were in the library – a five-minute walk from the lab – Xeroxing the instructions. Now I tell students that their first experiment will be their worst (hopefully, since there is a learning curve), so they should just pick something and get started, rather than fret and ponder endlessly trying to make it perfect.

The most amazing or memorable experience when I was doing research… Probably the most memorable was a paper exploring whether you could create herd behavior in a horse race betting market. At the time, people in economics were just beginning to formally model “cascades”, in which you observe decisions other people make— like a crowd outside a new restaurant—and decide how to combine your own belief with what you infer from the crowd.

By mistake I once put in a ticket for a race that had not been held yet, and the terminal screen came up “Do you want to cancel your bet?” So I realized you could make bets and cancel them before the race. Then I got the idea to make large bets on a horse, $500 or $1000, and see if those bets influenced others to bet on the same horse (herding) or to stick with their own hunches and bet against me. Either result would be interesting.

It was fun to actually make the bets and see what happened. It was a matched-pair design in which races with two similar horses were picked, and I literally flipped a coin to decide which of the two to bet on (the other one was a within-race control). I had a little notebook and wrote down the betting totals every minute; it was fun being like a naturalist in the economic wild. It was also nerve-wracking because half the betting happens in the last three minutes, so there was always a chance I would get stuck in a slow line and not cancel the bet in time. Imagine having to explain to the university accountants why I needed to be reimbursed $1000 for a bet at the track!?

The one story I always wanted to tell but never had a chance… In graduate school and my first two assistant professor jobs, I had a small independent record label. I always wanted to write a short casual paper on behavioral decision making and valuation under ambiguity in businesses based on my experience. It was fun and actually made a bit of money, which was a miracle.

A research project I wish I had done… A few years ago Dave Perrett came to Caltech and showed some beautiful work using facial morphing. After that a PhD student (I think it was Meghana Bhatt) suggested that maybe you could make people think about the future differently by showing them an aged version of their own face. We were lazy about actually doing it; Hershfield et al. (2011) actually did. The general method of facial morphing could be used in lots of other JDM research, too.

If I wasn’t doing this, I would be… a photojournalist or a documentary filmmaker. My first job after college was working for a beach newspaper in Ocean City, MD. I loved the idea of taking pictures and had an excellent semipro photographer coaching me. (This was in the old days when serious photographers would develop the film in a darkroom, in a chemical bath—it was tedious but cool!) Sadly, my pictures were terrible. At the very end of the job we discovered there was a light leak in the camera (sadface), so my pictures weren’t so awful after all. Anyway, pictures of dramatic events, especially political events and war, can be so riveting and important (like Nick Ut’s famous picture of the napalmed Vietnamese child running down the street). Documentaries can make the same impact in a longer form. And as a whole they are actually surprisingly profitable, because they are cheap to make and because of the long tail of the occasional huge box office gross.

The biggest challenge for our field in the next 10 years… In my view, probably the biggest challenge and opportunity is to make use of the amazing change in accessibility of new field data (so-called “big data”).  Economists have a head start on this because most of the data they work with are not experimental or survey data they produced, so they are well-equipped to find data and get answers out of it. Computer scientists are looking at these data too, and they have a huge edge in being able to get data (e.g. scraping websites etc.) If JDMers are stuck only in lab mode we will miss an opportunity to use both field and lab data to study robustness, whether interesting effects evident in short lab experiments persist over longer periods of time, and so on.

Keep your eyes open for where data are available. Lots of useful data are available from the web. In the US, the Freedom of Information Act (FOIA) requires governments to release any data they have collected unless they are classified. Many nonprofits and government agencies are interested in using behavioral science to make sense of what they do, and they are often eager to publish results (whereas companies may consider findings intellectual property and decline to publish them in order to keep them private). Tech companies like Google, Microsoft and Facebook have big research groups looking at their internal data and like having people spend time there as interns and so on. A lot of the people they hire are computer scientists who can be quite clueless about psychology and social science. JDM could add a lot.

Instead of thinking about what lab experiment to run, I hope some new researchers in JDM first think—what are the ideal field data to test my hypothesis?— then keep their eyes peeled for those data, including cold-calling companies asking for data. You can always run experiments as well if the field data are inconclusive about causality.

My advice for young researchers at the start of their career is… From a career point of view, it pays to specialize in a topic you find really interesting and explore it thoroughly using various tools. When you come up for tenure you want to be known as “Ms. Emotion and Risk” or “Mr. Overconfidence” or what have you. Don’t be shy about introducing yourself to senior researchers at conferences and sending them papers. Usually we won’t read the papers (or if we do, not carefully enough to comment), but it gets your work into our memory.

Another important thing is to have a very clear understanding with your colleagues and department chair about what is expected of you to get tenure. Some places have very clear criteria, in terms of the number of papers and what journals count the most.

Another common mistake, in my opinion, is to invest too heavily in teaching pre-tenure. Teaching can be fun, you get positive feedback, and it’s deadline-driven. Research can be painful and frustrating, with negative feedback and no deadlines, so you can always procrastinate. To be very frank, as long as your teaching is adequate, research-oriented schools really do not care about teaching quality in making tenure decisions. If the colleagues who will be judging you say teaching does count a lot, get them to spell out exactly what that means and look carefully at the last 10 years or so of who actually did or didn’t get tenure. If star teachers with short vitas are getting fired, that tells you what you need to do. When I was at the Wharton business school there was a streak of people winning teaching awards and then getting turned down for tenure just afterwards. It got so bad that people would start to worry if they won an award.

One more thing for women on the tenure track (and beyond): Many female colleagues complain that they get asked to do a disproportionate amount of service, such as serving on thesis committees, working on curriculum, recruiting, organizing speakers, and so on. Obviously these are activities that somebody has to do and you should feel obliged to do your share. The problem seems to be that women do too much. Maybe women feel more compelled to do it. Men seem to either not get asked as often or say No more often. It could also be that men do such a mediocre job that they get “punished” by not having to help out in the future.  While your tenure clock is ticking, you need to guard your research time fiercely (or enlist a senior colleague who can help you do that).

Departmental website

TED talk

Research Heroes: Barbara Mellers

Professor Mellers is the 11th Penn Integrates Knowledge Professor at the University of Pennsylvania. Her research examines how people develop beliefs, formulate preferences, and arrive at choices. She focuses on why people deviate from principles of rationality and how those deviations influence consumer choices and cooperative behavior. She is currently exploring how to elicit and aggregate probability judgments to arrive at the best possible predictions of uncertain events. She has authored over 100 articles and book chapters. She is a recipient of the Presidential Young Investigator Award and a past president of the Judgment and Decision Making Society.

I wish someone had told me at the beginning of my career that all careers come to an end. When I was young, I felt invincible; I thought I had all the time in the world. But reality caught up with me, and I have a different perspective now. Each research project might be the last, so each one should, at least in principle, be better than the one that went before it.

I most admire people who are clear thinkers, beautiful writers, big dreamers, and hard-core scientists. They work through the implications of their ideas and are their own worst critics. And they do it all in the most graceful and elegant way imaginable.

The best research project I have worked on during my career…might be the one I am doing now on human forecasting. This is a large and long-term project that gives me the opportunity to work with many talented people with wide-ranging and diverse skills. This project reminds me of an onion; we keep pulling off layers and finding more layers to go. It gets better and better.

The worst research project I have worked on during my career…is the last thing in the world I want to talk about.

The most amazing or memorable experiences when I am doing research…happen when I am surprised by the results of an experiment. I once did an adversarial collaboration with Hertwig and Kahneman, and Kahneman described the process perfectly: when the data don’t turn out right, we suddenly gain 20 IQ points. Everything seems to make perfect sense in a brand new light that was completely obscure until that moment! Unfortunately, those IQ gains disappear when the surprise is over.

The one story I always wanted to tell but never had a chance…is hard to imagine because there are always opportunities to tell stories. So I would never hold back on one that was worth telling.

A research project I wish I had done…is something I am always thinking about.

If I wasn’t doing this, I would be doing science in another field, and the choice of which field to pursue is a difficult forecasting problem. It is hard to know what areas of science will be the most exciting twenty years from now. The best fields to work in are ones that are changing fast due to the synergy of several good ideas and ingenious technological innovations. Neuroscience, astronomy, and genetics are good examples.

The biggest challenge for our field in the next 10 years…is figuring out how we can make better judgments at individual, societal, and national levels. This goal applies to everything – medical decisions, career decisions, military decisions, romantic decisions, legal decisions, business decisions, policy decisions, and more. We need theories, but we also need to generate useful knowledge. That is the only reason why the public will listen.

My advice for young researchers at the start of their career is…replicate everything you do several times. The truth, however hard it is to accept, is what moves science in the right direction and leads to progress. Admit your uncertainties; you aren’t the only one who has them. Remember that you can’t praise people too much (yes, we really are that shallow!). And last but not least, when in doubt, give credit to others. Time usually sorts things out.

Departmental site: http://psychology.sas.upenn.edu/node/20474

More than Friendship: The Importance of Student Peers

Time and time again, you hear students talk about how lonely graduate school can be. To fight the loneliness, graduate students often befriend each other, play board games together, go to trivia nights together, or yes even party together—only on weekends and always responsibly of course. Even though this makes graduate school less lonely, the research itself may remain a lonely enterprise.

Yet it doesn’t have to be: future professors, inventors, and intellectual powerhouses are sitting at the desks across from you, so why not take advantage of that?

On day one of graduate school I wish someone had told me so many things (e.g., the difference between theory and application, or how to run certain models), but most of all I wish someone had simply told me: “Student peers are fundamentally important to your academic life.”

Of course, everyone knows you want to befriend and get along with the students in your department. However, unlike during your undergraduate studies, where friendship is the ultimate goal, in graduate school so much more can occur. Graduate students are not just potential friends; they are potential colleagues, co-authors, discussion partners, support networks, and walking encyclopaedias of various literatures. Fellow students are one of the biggest and most powerful resources in graduate school, yet we often overlook this fact.

No matter who your advisor is, he or she will not be around as much as your fellow students who are almost always there. They hear your ideas in class and lab, attend your conference presentations, talk at length with you over coffee and lunches, and see your ideas develop from day one. In many ways your peers often know your ideas, thought processes, passions, and weaknesses better than anyone else. This is especially true for students working with multiple advisors or switching between advisors.

Yet, often we simply don’t take advantage of our friendly fellow students. We don’t follow the example of the Psych Your Mind students, who spend one lunch a week talking about ideas just amongst themselves. We don’t take the time to kick ideas back and forth, or just be someone’s sounding board. Instead, we stumble into advisor meetings with ill-prepared pitches, when a pre-conversation with a peer could have drastically improved them.

Recently, a group of students at a conference agreed to start a purposefully small and private online message board group, so they could communicate about important topics and questions. With this message board system, these students can get insight on complicated questions, methods, cites, and theories within an hour. A network of graduate students supporting each other can be at times more powerful than any individual meeting with a faculty member.

Lastly, even if we talk together or form networks, we don’t tend to co-author with each other. Remember the last time you just couldn’t figure out the right stimuli, couldn’t handle the stress of a revision, or got writer’s block? Or remember that time you needed feedback from your advisor, but the advisor was at a conference in Spain? That’s when a student co-author would have saved you.

Professor Gavan Fitzsimons at Duke University often gets praised for one interesting talent: he’s good at putting graduate students together and building research teams. He knows how powerful a network of graduate students, senior professors, and often also young professors can be, and his CV is testimony to that.

There’s a belief in improv comedy that when two performers get on stage and make up a scene together, they create something greater than either performer would have created on their own. Improv performers believe that putting two passionate people together creates true greatness, as they positively build upon one another’s ideas. Whether it is as co-authors, giving feedback on manuscripts, or just chatting about research over lunch, togetherness is a path to greater things.

Inside the Black Box: Psychological Bulletin

Psychological Bulletin is a bimonthly peer-reviewed academic journal that publishes evaluative and integrative research reviews and interpretations of issues in psychology, covering both qualitative (narrative) and quantitative (meta-analytic) approaches. The editor-in-chief, Stephen Hinshaw, gives us insight into this journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? Our journal (Psychological Bulletin) is different from most others, in that it publishes only lengthy, synthetic review papers across the entirety of psychology and behavioral science. So I look for a deeply conceptual introduction and a systematic review of the primary literature, written in an accessible yet still scholarly fashion.

What are the common mistakes people make when submitting/publishing? Not reading instructions carefully (or at all), so that we sometimes receive single empirical studies or extremely preliminary ‘review’ papers suggesting leads for further study (but not providing a deep review of a mature literature).

What are your best tips on how to successfully get published? Research, research, research your topic and revise, revise, revise your writing.

How are reviewers selected? In consultation with Associate Editors, I scour reference sections and consult lists of experts in various subfields.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? See if a more senior person will enlist you as a co-reviewer, with the editor’s permission.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? What makes one a good reviewer? Good reviews are thoughtful, respectful, and reveal deep knowledge of the topic – showing how the paper does or does not provide an advance in that field.

How do you resolve conflicts when reviewers disagree? Careful reading and rereading of the paper… and sometimes going to a Consulting Editor for a ‘tie-break’ review.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? Worst way is to get overly defensive and battle every point of the reviews.

Is there a paper you were sceptical about but turned out to be important one? Yes, sometimes initial submissions that didn’t really deliver can be greatly improved with substantial revision.

As an editor, you get to read many papers and have insight into emerging trends. What are the emerging trends in research topics/methodologies? PB is so broad that it’s hard to see emerging trends across all of the sub-facets of the field.

What are the biggest challenges for journals today?  Finding and engaging willing reviewers, keeping up with flow of submissions, battling ‘crank’ journals.

Journal home page

More ‘Inside the Black Box’

Inside the Black Box: Medical Decision Making

Medical Decision Making (MDM) is a peer-reviewed journal published 8 times a year offering rigorous and systematic approaches to decision making that are designed to improve the health and clinical care of individuals and to assist with health policy development. MDM presents theoretical, statistical, and modeling techniques and methods from disciplines including decision psychology, health economics, clinical epidemiology, and evidence synthesis. Editor-in-Chief Alan Schwartz gives us his insights into this journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission?

  1. “What’s new?” – What will I learn from this paper that I didn’t know before?  A paper presenting an original approach to a problem, or an extension of past approaches, or a first replication of a previously unreplicated finding is exciting to read. At Medical Decision Making, what’s new is usually a new method for studying or improving decisions, but sometimes it’s an exemplary application of prior methods. (Of course, some journals, like PLOS One, have explicitly chosen not to use this criterion).
  2. “What’s true?” – How do I know that I can rely on the results? Are the methods rigorous, sound, and appropriate for the question? Did the authors interpret their findings appropriately, without overgeneralizing?
  3. “So what?” – Why was this study proposed in the first place? What motivates the research question, and is it an important question in the context of the field and our current knowledge?
  4. “Who cares?” – Is this paper right for the readers of my journal, or does it belong somewhere else? A straightforward clinical trial comparing two drugs — or a basic psychology study of non-medical decision — probably doesn’t belong at Medical Decision Making.

What are the common mistakes people make when submitting/publishing? 

My top three:

  • Failing to motivate the research question or ground it in a theoretical or conceptual framework. Theory is important.
  • Overstating the conclusions and ignoring limitations.   Your paper doesn’t have to be the final word or solve every problem.
  • Sending to the wrong journal (violating the “who cares?” principle).

What are your best tips on how to successfully get published? Be open to feedback. Before you send a paper out, it should be the best paper you can write, so you should have had friends and mentors read and criticize it. If you can anticipate issues that a critic might raise, address those forthrightly. When you receive reviews, pay attention to them. If you don’t understand something a reviewer says, don’t ignore it — ask the editor for guidance.

How are reviewers selected? At Medical Decision Making, as at many journals, we have experienced reviewers on our editorial board and in our reviewer database. We find new reviewers through suggestions from authors (yes, you may suggest potential reviewers, and yes, we will often invite at least one of your suggestions if we agree that they really have specific content expertise) and through looking at the paper’s citations and related literature ourselves and seeing who else is working in the same area.

Our goal is to ask for reviews from experts whose reviews not only advise the editor on the disposition decision but are valuable to the authors, whether or not we publish the paper.  We’re fortunate at MDM to have really outstanding reviewers, and many first-time authors comment on how helpful the reviews have been.  We also score our reviews, and reviewers who do a poor job tend to get selected against in the future.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? I’m a big proponent of reviewing both papers and grant applications; I think you learn a lot from reading very good (and sometimes poor) writing, and from comparing your review with those of the paper’s other reviewers and the editor (at MDM, we cc our decision letters to the reviewers). One good way for PhD students to get some experience with this is to do a “mentored review” with their advisor when their advisor is asked to review a paper. Many journals will allow the invited reviewer to share the review with a student as long as the invited reviewer supervises and takes responsibility for the review. Post-PhD, as a postdoc or junior faculty, if you haven’t already been asked to review for a journal that you’d like to, you can often contact the editorial office and ask to be added to the reviewer database. Of course, submitting a paper to the journal and filling out your author profile with a good set of keywords for your expertise is also likely to lead to reviews in the future.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? This is a matter of editorial taste, but I really like to see a review that begins by looking at the big questions and pointing out the strengths of the manuscript (or at least of what the authors hoped to achieve through the study), and then moves on to detailed constructive criticism about methods, and presentation and interpretation of results. The review should conclude with minor concerns or suggestions for improving the writing.

Some little things that are very helpful: Number the points in the review to make it easier for the author to respond point by point. Refer to parts of the manuscript by page number and line number to help the author locate exactly what you’re asking about. Make it clear to the author when you’re making a suggestion (e.g. please describe the factor rotation strategy in more detail) and when you’re asking a (non-rhetorical) question (e.g. why did you expect patients to be more influenced by attribute range than attribute context?). Don’t say (in the comments to the author) whether the paper should be rejected or accepted – that’s the editor’s job. Definitely don’t recommend rejection privately to the editor and then write a wholly positive review for the author.

How do you resolve conflicts when reviewers disagree? Reviewers advise; editors decide. I’ll admit to a little bias: when good reviewers disagree, I think that means there’s something important to work out, and I’ll usually ask the author to help the reader understand both perspectives and how the author chose to resolve them. There isn’t a single right way to study something. On rare occasions, reviewer disagreement lends itself to inviting one or both reviewers to write an editorial about the study, if we’ve decided to publish it.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? A revise and resubmit is a positive signal, especially from a paper journal that has a limited page budget. It usually means that the editor thinks there’s something important in the paper to make it worth spending more editorial and reviewer time on, and that you’re capable of addressing the reviewer concerns. So that’s easy – always resubmit, and always include a cover letter addressing each point made by each reviewer. That can mean explaining why you didn’t choose to make a suggested change, but pick your battles: a wholly unresponsive revision is not going to go very far with the editor.

Medical Decision Making also has a category of initial decision called “reject and resubmit”. This means that the editor doesn’t want the paper or a revision of it, but thinks there might be a different, related paper you could write that would be competitive. The new paper gets the full peer review treatment, usually with different reviewers.

A flat rejection – well, when I get those, I usually shake my fist at the sky, eat a piece of chocolate, and get a good night’s sleep. Then I see what useful information I can get from the reviews and improve the paper to send it elsewhere. Uncertainty is a fundamental fact of life.

The worst way to react to a rejection is to send a nasty email to the editor-in-chief to try to bully him into reconsidering the decision and to threaten that you will never send your priceless work to that journal again. Yes, that happens (especially early in my term). We have an appeals process if it’s clear that a reviewer or editor deeply misunderstood something, but that’s not it.

Is there a paper you were sceptical about but turned out to be an important one? I think I’m still too early in my editorship to know. In about 3 years, though, I’d be interested in looking at that — collecting the top 10 important papers we’ve published based on reader response and looking back at my notes to see how many of those I only assigned to an associate editor reluctantly.

As an editor, you get to read many papers and have insight into emerging trends. What are the emerging trends in research topics/methodologies? That’s one of the best parts of the job. Right now, MDM is publishing a lot of innovative work in simulation models and value of information analysis methods. Approaches to utility models are emerging in which econometric and behavioral research are triangulating on phenomena that call into question some longstanding simple assumptions of health state valuation — for example, that the proper unit on which to assess utility is the individual decision maker. And there’s a lot more interest in dual process theory and decision psychology/behavioral economics manipulations of the decision environment in order to understand and improve health decisions.

What are the biggest challenges for journals today? There’s a great debate going on right now about open access models for science journals and how publishers do or don’t contribute to science, but in some ways, I think that’s just the opening act for a larger discussion of the value of an expert peer review process vs. open publishing and crowdsourced reviewing. I want to see good science clearly communicated, and journals need to demonstrate to their readers that they are promoting those ideals.

Journal website

More ‘Inside the Black Box’