Inside the Black Box: Frontiers in Psychology


Next in our Inside the Black Box series is Frontiers in Psychology, an open access journal that aims to publish the best research across the entire field of psychology, covering the most outstanding discoveries across its full research spectrum. The mission of Frontiers in Psychology is to bring all relevant specialties in psychology together on a single platform. Field Chief Editor Axel Cleeremans gives us his insights into the journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? I go “Yuck!” instantly if the paper looks like it’s poorly written, if the figures don’t look good (see Tufte’s advice on that), if it contains typos, or if it looks very verbose or boring. There is an important message there: If you don’t fine-tune the presentation of your findings, it’s as good as nothing.

“Wow!” can result from different factors. Sometimes it’s the finding itself — for instance, I find Geraint Rees’s recent demonstration that one’s experience of the Ebbinghaus illusion is inversely proportional to the size of one’s V1 stunning. Other times it’s the sheer power of technique — Bonhoeffer’s applying two-photon microscopy to visualize synaptic growth in vivo is a good example of that. The cleverness of an experimental design is a further “Wow!” inducer; Jacoby’s process dissociation procedure, when I first read about it, definitely elicited a “Wow!” response from me. And then of course, I go “Wow!” when reading about impressive ideas. Rumelhart and McClelland’s PDP volumes made me go “Wow!” for years, as did Hofstadter’s “Gödel, Escher, Bach”.

What are the common mistakes people make when submitting/publishing? Submitting to the wrong journal. Making the story too complicated. Not having any story. Reporting uninteresting findings. Reporting uninteresting findings but trying to make them sound interesting. Failing to cite relevant work from many years ago that old editors know about. Leaving typos in the manuscript. Ugly figures.

What are your best tips on how to successfully get published? Work on the most important issue in your domain. Build a good narrative. Papers that read like detective stories (and finish with a satisfying resolution!) are always good. Get the writing absolutely perfect. And of course, present interesting and solid data. Simplify. Kill all the typos. Cite previous work. All referees first look for flaws, because if any are found then the review is done and the referee can focus on something else. It is only when no surface flaws are found that the referee actually thinks about whether the paper is interesting…

How are reviewers selected? That very much depends on the journal. Some editorial systems are almost entirely automated, which has advantages (speed) but also disadvantages (relevance). Some editors hand-pick their referees based on different criteria (mostly, whether they think they know something about the topic and whether they think they’ll compose their review in time). Many systems offer referee suggestions based on keyword matches. Authors can also often propose referees themselves. This is a good idea as it speeds up the work of the editor, who will typically select referees both from the author’s suggestions and from his own pool of referees.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? I wouldn’t do it too quickly — say, three years into your Ph.D. Reviewing an article is an important and difficult job. It gets much easier as your knowledge of the field grows and as your expertise at reviewing increases, but the first reviews you do are always very intensive jobs. You worry that you’ll look ridiculous in the eyes of the editor and the other referees. You worry that you missed a central point. You’ll spend days on your first review. On the other hand, knowledge of what’s going on in your field before it gets published can be invaluable — but for this, you can count on your advisor.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? A good reviewer is a reviewer who turns in her review on time and who manages to discuss the paper in a neutral tone while clearly listing the issues that concern her, if any. And of course: Good reviews also contain a clear recommendation that is congruent with the listed points. Sometimes you get almost self-contradictory reviews. They begin with “This is a very interesting paper that uses clever methods” and finish with “I recommend the paper be rejected”. This makes it almost impossible for an editor to use the review, as do reviews that contain too many subjective comments. Reviews should almost be written as though they were public comments, that is, with all the care one would use if one were talking in public about someone else’s work.

How do you resolve conflicts when reviewers disagree? That’s a tough one. I regularly receive conflicting reports, sometimes at either end of the spectrum (e.g., Referee #1 says “Reject”; Referee #2 says “Accept without revisions”). If both reports make sense (that is, it is clear both referees understood the paper), most typically, I will consult a third referee (which sometimes doesn’t help). When all else fails, you read the paper and make the decision yourself… (just kidding: editors read the papers, but there is a difference between reading a paper and forming an expert opinion about it). It is worth mentioning here that some open access journals (e.g., Frontiers in Psychology) have adopted a completely different manner of resolving differences between referees, namely to ask referees and authors to interact until a consensus between referees is reached. Many conflicts between referees are solvable by iterated interaction — something that can be tough to achieve with the standard reviewing process.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? Revise and resubmit is pretty much the norm — it is exceptionally rare for a paper to be accepted right away. Dealing with rejection is understandably difficult. Your reaction to it very much depends on what you can attribute the rejection to. Being rejected from Science is not an indication that your research is not good; just that it’s not good enough, or not novel enough, or not interesting enough in the eyes of Science’s editors. You may think otherwise and feel wronged somehow, but it’s not your decision to make in either case, so it’s best to move on and submit to another journal. The worst-case scenario is when you submit to a mediocre journal, wait for months, and find that your paper is rejected. If you really feel a “reject” decision was incorrect, it’s always a good idea to interact with the editor. As an editor, I only use “reject” when all referees agree that the paper is not publishable. Dealing with a revise and resubmit is easy: Just address all the points raised by the referees one by one and thoroughly. In the vast majority of cases, papers in that category will end up published; it’s just a matter of taking all the points seriously and in detail.

Is there a paper you were skeptical about but that turned out to be an important one? Not that I can remember as an editor. A couple of my own papers as an author, though, had very difficult beginnings and turned out to be considered quite important. Science is about data, but also involves rhetoric: Not only do the data have to be important, but you also have to present the results and their implications in a persuasive manner.

As an editor, you get to read many papers and gain insight into emerging trends. What are the emerging trends in research topics/methodologies? There is an important ongoing discussion on Twitter, blogs, Facebook, email, and in the press about the importance of replication in psychology. Developing methods that make it possible to analyze replication efforts properly, as well as promoting the publication of replication findings, are important issues. One of the most interesting methodological developments in this respect is the emergence of novel statistics based on Bayes’ ideas. I also continue to be impressed with the increased sophistication of neuroimaging methods — think MVPA, for instance. Increased meta-data in all fields will also make all sorts of meta-analyses possible.

What are the biggest challenges for journals today? The challenges are not the same for traditional journals and for new, online, typically open-access journals. Some journals are more or less immune from challenges because of their extraordinary status in the field. The challenge for traditional journals is to stay relevant in an increasingly open-access, rapid-fire world: Interesting results are tweeted or otherwise shared almost instantly, and people want to download the relevant material freely and right away. The challenge for open-access journals is to accrue enough credibility. A challenge that faces every actor today, individuals and journals alike, is to find interesting ways of attracting attention. So much is published today (considerably more than even a few years ago) that it becomes a challenge to even find relevant material.

Journal home page

‘Inside the Black Box’ series home page
