New series: Opening the black box of academic publishing

Publish or perish is a phrase most scholars know well. Publications are the currency of academia: if you don’t publish, building an academic career is difficult.

However, publishing is hard, so we decided to ask editors of leading journals some questions that many of us have and that only editors can answer. The aim is to provide more insight into the process of scientific publishing.

For the upcoming series, we spoke to leading journals in general psychology and judgment and decision making, as well as journals in marketing and economics that publish articles related to judgment and decision making. We’ll be publishing the individual interviews over the next couple of months and hope you find them useful in your work!

First up is Judgment and Decision Making

Inside the Black Box: Judgment and Decision Making

We start our journal editor interview series with Judgment and Decision Making’s editor Jon Baron. JDM is the journal of the Society for Judgment and Decision Making (SJDM) and the European Association for Decision Making (EADM). It is open access and published on the World Wide Web at least every two months. JDM publishes original articles relevant to the tradition of research in the field represented by SJDM and EADM. Relevant articles deal with normative, descriptive, and/or prescriptive analyses of human judgments and decisions.

What makes you go “Wow!” or “Yuck!” when you first read a submission? Wow!: When it shines new light on a traditional JDM problem, including possible applications in the real world. I choose the lead article in each issue on the basis of this sort of reaction. (Of course, some issues have no article that merits the exclamation point, and some have more than one.)

Yuck!: When it applies the Analytic Hierarchy Process to the pipe-fitting industry in Pakistan. Or when it uses a tiny sample, with no replication, to show that people are at the mercy of subtle, unconscious forces. Or when it makes obvious statistical errors, like claiming an interaction on the basis of a significant effect next to a non-significant effect.
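That last error deserves a quick illustration. The sketch below uses made-up numbers (ours, not Baron’s): two effects are estimated with the same standard error, one clears the p < .05 bar and the other does not, yet the difference between them – the thing an interaction claim is actually about – is nowhere near significant.

    # Hypothetical numbers: "significant next to non-significant" is not
    # evidence of an interaction. The test that matters is on the
    # difference between the two effects.
    from scipy import stats

    se = 1.0        # assumed standard error of each effect estimate
    effect_a = 2.0  # z = 2.0, p ~ .046 ("significant")
    effect_b = 1.0  # z = 1.0, p ~ .317 ("not significant")

    # z for the difference between two independent estimates:
    # the standard error of the difference is sqrt(se^2 + se^2)
    z_diff = (effect_a - effect_b) / (2 * se**2) ** 0.5

    for label, z in [("A", effect_a / se), ("B", effect_b / se), ("A minus B", z_diff)]:
        p = 2 * (1 - stats.norm.cdf(abs(z)))
        print(f"effect {label}: z = {z:.2f}, p = {p:.3f}")

Running this prints p ≈ .046 for effect A, p ≈ .317 for effect B, and p ≈ .48 for their difference – so the two effects do not reliably differ, despite carrying different significance labels.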

What are the common mistakes people make when submitting or publishing? Submitting to the wrong journal.

What are your best tips on how to successfully get published? Study big effects, or use large samples. Don’t waste time studying phenomena that are ephemeral and difficult to replicate, especially if you are trying to find moderators of such effects.

How are reviewers selected? When I handle papers – about half of them go to associate editors – I try to find the most expert reviewers who are willing to review, including members of the journal’s board when possible. This often takes several attempts; people say no. Often I use Google Scholar, as well as citations in the paper and authors’ recommendations.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? If you know someone who is an editor (including an associate editor), tell him or her that you are willing. I often ask grad students to review, but only if I know them to be experts on the topic of the paper. I am not willing to take a student’s word for this expertise, or to assume that being first author of a related paper is sufficient. Thus, personal knowledge is important.

I think that grad students should do occasional reviews. But anyone who keeps publishing is going to get asked to do more and more reviews. Be nice to editors (and other authors). If you get asked to do a review, respond quickly. Saying no immediately allows the editor to go to the next person on the list.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? What makes one a good reviewer? Explains why the paper is fatally flawed, if it is. Otherwise provides helpful advice for revision, or (only if necessary) for additional research. What I find unhelpful are requests for more “theory”, as if theory were something like soy sauce.

How do you resolve conflicts when reviewers disagree? I regard reviews as information, not votes. They point out flaws I had not discovered, literature I did not know, or strengths that I did not appreciate. The review’s bottom-line recommendation is just a little more information. Thus, these recommendations are not conflicts that need to be resolved.

But reviewers also disagree about specifics, about what needs to be done. Here, I think it is my job to tell the author which of the reviewers’ comments to ignore, which to follow, and (if the review does not say) how to follow them, if I can. As an author, I find it annoying to be on the receiving end of conflicting reviews, with no idea what magic I must do in order to satisfy everyone.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? The best way to react to a revise-and-resubmit is to try to do what it says, or to explain politely and clearly why you cannot or should not do that. Or give up and try another journal if you think you are being asked to do the impossible.

The best way to react to a rejection depends on what it says. If it finds a fatal flaw that cannot be fixed, the best thing may be to regard the paper as a sunk cost and move on. In other cases, rejections are very specific to the journal, so you should just send the paper elsewhere. If you think a paper is good, don’t give up. Keep sending it elsewhere. In still other cases, papers are rejected because more work is needed. Maybe do the more work.

Is there a paper you were sceptical about but which turned out to be an important one? Not really.

As an editor, you get to read many papers and gain insight into emerging trends. What are the emerging trends in research topics and methodologies? On topics, I think that fields go through fads – well, let’s say “periods in which some topics become very popular and then gradually fade into the background”. These are often good things. Many come from external interest and funding, such as the enormous interest now in “nudges”, or the interest in forecasting and prediction arising from the recent IARPA competition. Around 1991, the Exxon Valdez case inspired (and funded) a great deal of JDM research on contingent valuation and value measurement in general.

On methods, our field is slowly but surely catching up with the enormous increase in the powers of computers and the Internet. Data analysis is becoming more sophisticated. A variety of approaches are being explored (including Bayesian ones). Web studies are becoming more numerous and more sophisticated. People are making use of large data sets available on the Web, including those they make themselves by mining data.

What are the biggest challenges for journals today? The biggest is integrity. The work of Simonsohn, Simmons, Nelson, Ioannidis, Pashler, Bar-Hillel (earlier), and others on p-hacking, file-drawer effects, basic statistical errors, and outright fraud has raised serious questions about what journals should and can do. The problems vary by research area. Medical research and social psychology are probably worse than JDM. But I am still trying to work out a way to deal with this problem. Asking for data and for sufficient stimulus materials for replication is a step. I spend a lot of time checking data analysis with the data that authors send.

The next biggest challenge is how to take back scholarly communication from those who seek to profit from it by building paywalls of one sort or another, including both subscription fees and publication charges. I have ignored this problem, hoping that it will go away or that someone else will solve it (e.g., by endowing JDM with $500,000). Right now, JDM has neither type of fee, because I do the production and “office work”. Other journals work this way, but their authors all submit papers with LaTeX formatting. My job would be easier if Microsoft Word did not exist. Maybe I will outlast it, and then the problem will be solved for the next editor. But a little money – nowhere near as much as proprietary journals get – would still help, and I don’t know where to get it.

The third biggest challenge is how to get rid of the perverse incentives that arise from the use of the “impact factor” of a journal for evaluation of authors of papers in that journal. Journals cannot do much about it, except perhaps to stop advertising their impact factors in large print.

Journal homepage

‘Inside the Black Box’ series home page

Jon Baron Research Hero interview

Outside The Matrix: Jolie Martin, Quantitative UX Researcher, Google

After a long break we return to the Outside the Matrix series with Jolie Martin, a quantitative user experience researcher at Google. She received her PhD in Science, Technology, & Management at Harvard through a joint program between Harvard Business School and the Computer Science department, and did post-docs at the Harvard Law School Program on Negotiation and in the Social and Decision Sciences department at Carnegie Mellon. Prior to joining Google, she was also an Assistant Professor in Strategic Communication at the University of Minnesota.

Tell us about your work: how does decision making psychology fit in it? My title for the last year or so has been Quantitative User Experience Researcher at Google. However cumbersome, all the words are necessary to indicate what I do. Like my colleagues who do “regular” (qualitative) user experience research, my goal is to understand when users successfully satisfy their information needs using Google products. In my case, working on the Search Analysis team, I specifically develop metrics that describe how users interact with features on the Google search results page. The key distinction from other user experience researchers is the data source I draw upon, and as a result the types of analyses I do. Rather than running lab studies or even large online studies through tools like MTurk, for the most part I rely on data recorded in logs to tell me how real users behave under natural conditions. The benefit of this approach is massive amounts of data: nearly everything of interest is statistically significant, sometimes even with very minor tweaks to the product that are imperceptible to the average user. The drawback – although it’s sometimes the fun part – is that I have to draw inferences from behavioral signals about users’ preferences, intentions, and satisfaction.

Judgment and decision making to the rescue! My theoretical background in this field has been extremely helpful in formulating hypotheses about why users search the way they do, from the queries they enter to the sequence of clicks that they take. For example, in considering ways to improve the user experience with exploratory tasks that require large amounts of subjective information (say, choosing where to go on vacation), I need to be mindful of contrasting interpretations of a user’s behavior. If she spends more time and clicks more links, this could be a bad signal that she simply didn’t find the information necessary to make a decision, that she suffered from information overload, or that she was distracted and continued browsing to procrastinate on a more worthwhile task. On the other hand, it could be a good signal that we offered her a rich set of information sources – increasingly tied to her personal characteristics and social networks – that offered insights worth delving into. To tease apart these interpretations requires testing mental and behavioral models of an extremely diverse set of users.

Why did you decide to go into industry instead of continuing in academia? Unlike many of my academic colleagues – and even many people I know in industry who jumped ship – I never embarked on a PhD specifically to pursue a career in academia. In fact, I was clueless that this was the expectation of my advisors until several years into my PhD program! I was operating under the assumption that building theoretical knowledge and methodological skills would serve me well in any career. At some point around my third or fourth year of grad school, I did become somewhat indoctrinated into the notion that academia is the “highest calling” and that we should leave the actual implementation of our ideas to others. And of course I realized how difficult it would be to return to academia should I leave, so with this in mind, I gave it the old college + MBA + PhD + 2 post-docs + assistant professorship try before finally divesting myself of those sunk costs. I liked each of my academic positions, but often felt as if I were spinning my wheels to achieve an objective (publishing in journals read almost exclusively by other academics) that I didn’t really care about, so when Google contacted me, I figured it couldn’t hurt to interview. During the process, I was surprised to find many other people like me, with PhDs and interests in “pure” research. These were very smart people, and all had various personal and professional reasons for leaving academia, but it became clear to me that leaving was a choice, not necessarily a sign that someone couldn’t make it in academia.

That said, I am a firm believer that people enjoy things that they are good at, and where they can continue improving over time. I thought Google would offer exactly this for me. I have always loved building cool stuff, which is really the core of what we do. At the same time, there would be a lot to learn. When I accepted the offer at Google, I took a one-year leave from my assistant professorship (which was extremely generous of my department chair to offer), and it was nice to have that safety net should I dislike my new job. During the week of orientation with mostly software engineers, I thought more than once that I might need to use it. Just about everything flew over my head. But once I settled in with my teammates, I realized that everyone was willing to help, and no one had all the answers; doing logs analysis from end to end is complex by its very nature, and no one could step into the role as an expert. The expectations of me were that I be persistent and keep asking interesting questions. After a year in my position, the torrent of learning opportunities hasn’t tapered off in the least.

What do you enjoy the most in your current role? The main appeal of my job is the rapid pace at which I can have an impact on products that improve people’s lives in a tangible way, sometimes just by offering them a whimsical break from a busy life. I love working for a company that takes this mission seriously and always holds it above monetary factors. Of course, this is not true of every company, so I feel lucky in that regard. I also have a nice variety of projects that result from mutual selection, and I work with people in just about every role. There are only about 10 of us across the company in the Quantitative User Experience Researcher position, and our ability to glean insights from large data sets is highly valued by others. There is no prescribed way to perform these analyses, so we have the freedom to use novel methods in distributed computing, machine learning, and natural language processing, among others. Last but not least of what makes my work stimulating is the chance to witness the evolution of cutting-edge technologies, such as riding in a self-driving car, wearing Glass, and seeing a prototype of a balloon that may one day provide internet in developing countries. Making these products useful requires not only tech savviness, but also political and legal know-how.

Do you see any challenges to the wider adoption of decision making psychology in your field? Google and many other large companies are quite receptive to using decision making psychology in some ways. For example, I was involved in a “20% project” (whereby we can spend 20% of our time on something completely unrelated to our job function) running consumer sentiment surveys during the Democratic National Convention and presidential debates. I’m now working on another 20% project that draws upon academic research to test how environmental and informational factors shape food choices in our cafes. Similar studies have been conducted at Google to examine how defaults affect 401K allocations, and programs have been implemented based on the findings, with material effects on employee well-being.

However, for several reasons, there is more resistance to using basic research in the creation of products for end users. First, many companies in the technology industry consist mainly of software engineers (at last count, about 75% of Google employees), who may not consider psychology relevant. They often expect that users are “rational” in the sense of taking optimal actions given the set of options and information at their disposal, whereas we know this is rarely the case. Second, what research we do has focused on user response to specific technologies, with little ability to then generalize to a broader set of stimuli or outcome measures. This is related to the fast product development cycle I mentioned previously: we simply don’t have time to test fundamental psychological principles, or the product will be launched and on to v2 before we have anything to say about it. This is changing gradually as the value of a longer-term focus is realized. Third, while publishing is encouraged, there are not huge incentives to do so, especially given the more rigorous hoops we have to jump through in obtaining approval. Even in cases where we have interesting findings applicable to psychology more broadly, we often can’t disclose them for proprietary or privacy reasons.

How do you see the relationship between academic researchers and practitioners? In my opinion, the ideal relationship between academics and practitioners is one that takes into account the comparative advantages of each. While academics are usually more in touch with trending or provocative research topics that are likely to interest audiences and gain traction, practitioners are more aware of the available data sources and product use cases. Similarly, in terms of resources, academic connections provide legitimacy and wider dissemination of research findings, while those of us in industry can potentially be more useful in supplying funding, a sample population for experiments (be they users or employees), and analysis infrastructure (i.e., computing power). Collaborations would be more synergistic if there were greater engagement in both directions, with academics developing research questions based on real business or social issues, and practitioners making the additional effort to share findings via peer-reviewed conferences and journals.

What advice would you give to young researchers who might be interested in a career in your field? I’d suggest that students contemplating a transition to industry try a temporary or part-time internship; it’s a relatively low-risk way to test the waters, and realistically, given the scarcity of professorships at top research universities, your advisors should support your consideration of other options. However, also be aware that one company isn’t going to fully represent all of industry, just as a randomly chosen graduate program or postdoc could be quite different from the one that best fits you. I interned at a hedge fund during grad school and knew pretty quickly that it wasn’t for me, but it was a valuable experience nonetheless.

For faculty members who are dissatisfied with certain aspects of their careers (e.g., working weekends and responding to emails at 3am), a perhaps more feasible option is to reach out to people at companies of interest to you. You will likely find that they are excited to talk to someone with the wherewithal to do in-depth analysis of their users, and they may even be open to handing over data or running experiments with you. Ask if you can present at company meetings to get a sense of the culture and style, or invite industry folks to present at your university. And don’t just build your network, but also maintain it by staying in touch with people you’ve worked with in the past. Referrals from a company’s current employees will make a big difference if you decide to apply!

Viewpoint: Video Advice from Mike Norton, Leif Nelson, and Simona Botti

This spring, we took our shaky camera around the Society for Consumer Psychology conference. We asked a few professors to give us some ‘bite-sized’ words of wisdom. They talked about hope, presentation style, and how to think like a researcher.

A couple of months later, here are the results (finally)…

Michael Norton

…on “simple presentations” and “asking questions.”


The Doctoral Consortium organizers Leif Nelson and Simona Botti

…on going to conferences, how grad school is actually manageable, and how there are more helping hands out there than we normally think.