Star Track: Marcel Zeelenberg

This week on Star Track we’re moving across the Atlantic to Europe with Professor Marcel Zeelenberg, head of the Department of Social Psychology at Tilburg University in the Netherlands. After receiving his PhD from the University of Amsterdam in 1996, he held posts at Eindhoven University of Technology and the University of Sussex before moving to Tilburg in 1998 (first in the Marketing Department, and since 2000 in the Social Psychology Department). He is also the academic director of the Tilburg Institute for Behavioral Economics Research (TIBER), which organises an annual symposium on psychology and economics. His research interests include the impact of emotion on decision making, consumer decision making, and financial behavior.

I wanted to pursue an academic career in this field because… I ended up studying psychology simply because my brother did it, and he liked it. I started out studying Biomedical Sciences, but quickly found out that it was not for me. My brother, also rather oblivious about how he was going to make a living, ended up studying psychology (I forget why). Since he is older than I am, and I always followed in his footsteps, I thought it would suit me as well, and it turned out to be one of the best decisions I have ever made. Both of us ended up majoring in cognitive psychology at Leiden University, which is where we met Willem-Albert Wagenaar, who was extremely influential in fueling my interest in psychology in general and in JDM research in particular. He was inspiring, supportive and super smart, and the best teacher you can imagine. I took all the courses he taught, including one on gambling (using his own book “Paradoxes of Gambling Behavior”) and one on the psychology of decision making (using Frank Yates’ book). Later on, through the social psychologists Henk Wilke and Eric van Dijk, I was introduced to social decision making and economic psychology. Looking back, I think the solid basic training at Leiden University clearly prepared me for a career in academia.

In 1992 I was thrilled to be able to work on a PhD project on affect in decision making at the University of Amsterdam (supervised by Joop van der Pligt, Tony Manstead and Nanne de Vries), where my interest in emotion was formed and where I also started working on regret (and later, with Wilco van Dijk, on disappointment). That is also when I met Jane Beattie, who had a big impact on me: Jane and I had submitted a Marie Curie proposal for me to do a postdoc under her supervision at the University of Sussex in Brighton, UK, but before the funding came through, Jane got ill and died. In the end we got the funding, but with Jane gone, Brighton lost most of its appeal. Luckily, I found a postdoc with Gideon Keren at Eindhoven University of Technology, and soon after that a tenure-track position at Tilburg University (where I still am), where I started to work with Rik Pieters.

I mention all of these people because, for me, pursuing a career in academia is so much the result of working with inspiring people and being able to educate yourself continuously. What other job allows you to create your own work and study things that you think are relevant or interesting?

I find the inspiration for my research mostly from… Half of it, I think, comes from observations of everyday behavior that I would like to understand. The other half comes from reading papers or seeing talks and thinking, “hmm, that does not work that way. That is actually much simpler than that!” Eric van Dijk taught me that each time you read an article and think “hmm”, you have an idea for a new article. I was skeptical when he first told me that, but over the years I have learned that he is right. I now tell my students the same thing and hope it helps them come up with new research ideas more easily (something I found difficult in the beginning).

When people ask me what I do, I say… I say that I am a psychologist, and then quickly explain that I study decision making, emotions, and how the two interact. Most people find it interesting and want to learn more. I have always had mixed reactions upon telling people that I am a psychologist, because people more often equate it with being a therapist than with being a scientist. I could say that I am a behavioral economist, because that covers most of what I do, but since I have no formal training in economics it does not feel appropriate.

The paper that has most influenced me is… There is no single paper that I can name. I have read many interesting papers, but I can mention two papers that I like a lot and that I think are underappreciated.

Beattie, J., Baron, J., Hershey, J. C., & Spranca, M. D. (1994). Psychological determinants of decision attitude. Journal of Behavioral Decision Making, 7, 129–144.

I read this paper shortly before I met Jane Beattie for the first time. I liked it then and still like it. They introduce the concept of decision attitude (in analogy to risk attitude), which refers to the propensity to make or avoid making decisions. People can show decision aversion and decision seeking.

Jones, S. K., Frisch, D., Yurak, T. J., & Kim, E. (1998). Choices and opportunities: Another effect of framing on decisions. Journal of Behavioral Decision Making, 11, 211–226.

I remember seeing Steven Jones give a talk about this paper in 1997. The point they make is that decision researchers typically study decision making by confronting participants with a choice between alternatives (do you choose brand A or brand B?), while in daily life we often do not compare alternatives, but simply evaluate the attractiveness of a single option (your favorite band has a new CD out; do you buy it?). They then show that there are important differences between choices and opportunities, and that research about choices cannot simply be projected onto opportunities.

I like these articles because they show that we can learn so much from looking at how people make decisions in the real world. I also admire the authors for being able to bring these ideas back into more mainstream JDM research. These are articles I wish I had written.

The best research project I have worked on during my career… This is a hard one. I am inclined to say that the things we are working on today are the best we have done, but that is not the type of answer you would be interested in. So, what I like best are projects that evolve into something bigger: they may start out as a single paper, but then quickly new questions pop up and new studies need to be done. That has happened a few times now, first with our research on regret and disappointment and later with our research on shame and guilt. Currently we are working on the economic psychology of greed (this is Terri Seuntjens’ PhD project), and we generate so many ideas for studies that it is impossible to run them all.

Also, over the past few years we have become more and more interested in examining mundane financial decisions (insurance, pensions, poverty, etc.), which is gratifying because of their direct relevance. There are so many interesting problems to study – it is an embarrassment of riches.

If I wasn’t doing this, I would be… The justice system and the law have always intrigued me. During my undergraduate years I took some courses in law and forensic science (what we would now call CSI studies). I think I could be a lawyer.

The most important quality for a researcher to have is… Stamina! I mean, we are all smart and well educated, but I think a large factor in success is simply doing the work that is needed. There are so many obstacles in our work, and the delay of gratification is extensive. It can take years to become an expert in something, and many studies before they yield the insights you hoped for. Data collection can be difficult. Journals do not always like your work. Also, especially at the beginning of a career, jobs are often temporary and you may need to move several times before getting a tenured position. Without stamina you will give up.

The biggest challenge for our field in the next 10 years… That must be solid science. We need to change how we do research and how we report about doing research. There are many good initiatives (archiving data, sharing materials, increasing transparency about data collection and analysis, facilitating replication; basically things that we all learned as undergraduates and that the Open Science Framework is supporting now), and we need journals to take responsibility as well (Jon Baron is doing an excellent job with Judgment and Decision Making, by also publishing data and materials with the articles). We also need to accept that results are most often not perfect, so we should not demand perfect data. And, because we are not p-hacking anymore (Simmons, Nelson, & Simonsohn, 2013), we need larger sample sizes and should accept (or embrace) that we can publish fewer papers.

My advice for young researchers at the start of their career is… Find a topic you really like and go for it. Do not get demotivated because no one else is studying it. I never recommend that a student investigate something fashionable. The risk is that by the time you get your work done, someone else has been working on the same questions, people have grown bored with it and the topic is no longer fashionable, or, worse, you are not really into it and your work shows it.

The one thing I’ve found most challenging is… When I was a PhD student I constantly questioned whether I would have good ideas – or rather, ideas that were good enough to acquire a position. I felt comfortable about my skills, because I found the education at Leiden University to be thorough, but my capacity to ask the right questions had never really been put to the test before I started my PhD project. I then felt that it all came down to being creative and smart, and that made me uncertain.

It also did not help that JDM research is not mainstream at most departments, which made me peripheral with respect to research in most places I worked. It takes so long to get feedback from the field (you need to develop your studies, run them, write them up, and get them published, and only then can people read them) that for a long time I feared that one day someone would find out that they had made a mistake by appointing me (I think I suffered from imposter syndrome). That did not happen, and slowly I found out that there were people who liked my work and good students who wanted to work with me.

It took some time to find out that what I was doing was good enough and interesting to others. So I think the most challenging thing was to be persistent and to believe that my own ideas were worth investigating.

The call for papers for the 13th TIBER Symposium 2014 is now open: the deadline for abstract submissions is 18th May. The symposium itself is on 22nd August, with keynote speakers Shane Frederick from Yale University and Richard Zeckhauser from Harvard University.

Departmental webpage

Outside the Matrix: Dan Lockton

This week we’re returning to our Outside the Matrix series with Dan Lockton, who is a senior associate at the Helen Hamlyn Centre for Design, a specialist research centre at the Royal College of Art in London, and does freelance work as Requisite Variety. He received his PhD in Design for Behaviour Change from Brunel University, based around the Design with Intent toolkit, and worked on behavioural research projects, particularly on energy use, at the University of Warwick and at Brunel before his current role in a collaborative project between the RCA, Imperial College London, the Institute for Sustainability and a number of European partners. Before returning to academia, Dan worked on a range of commercial product design and R&D projects; he also has a Cambridge-MIT Institute Master’s in Technology Policy from the University of Cambridge (Judge Business School), and a BSc in Industrial Design Engineering from Brunel.
Tell us about your work: how does decision making psychology fit in it? All design necessarily embodies models of people’s behaviour—assumptions about how people will make decisions, and behave, when using, interacting with or otherwise experiencing products, services, or environments. It’s a fairly basic component of design, although it’s perhaps only rarely considered explicitly as being about decision making psychology. Whether or not designers think about their work in these terms, it is going to have an impact on how people behave, so it’s important to try to understand users’ decision processes, and how design affects them (or should be affected by them). So both in research projects themselves, and in teaching design students how to do ‘people-centred’ design research, psychology plays a big role in my work.

Understanding how different people make decisions, through research in real contexts, becomes even more crucial when trying to do ‘design for behaviour change’, of course. You end up (hopefully) confronting and questioning many of the models and assumptions that you previously had, and develop much more nuanced models of behaviour which usefully preserve the variety of real-life differences.

In my current main project, SusLab (which is a small part of a major European project), I’m working with Flora Bowden on reducing domestic energy use through a combination of technology and behaviour change, but we’re taking a much more people-centred approach than much of the work in this field has done previously—doing ethnographic research with householders to uncover much more detailed insights about what people are actually doing when they are ‘using energy’—the psychology of the decision processes involved, the mental models people have of the systems around them, and the social contexts of practices such as heating, entertainment and cleaning. We then co-design and prototype new products and services (somewhat grudgingly termed interventions) with householders, so that they are not test subjects but participants in developing their own ways of changing their own behaviour. This is the Helen Hamlyn Centre for Design’s forte: including people better in design processes, from ageing populations and users with special needs to particular communities underserved by the assumptions embedded in the systems around them.

Reducing energy use is a major societal challenge—there is a vast array of projects and initiatives, from government, industry and academia as well as more locally driven schemes, all aiming to tackle different aspects of the problem. However, many approaches, including the UK’s smart metering rollout, largely treat ‘energy demand’ as something almost homogeneous, to be addressed primarily through pricing-based feedback, rather than being based on an understanding of why people use energy in the first place—what are they actually doing? We think that people don’t set out to ‘use energy’: instead, they’re solving everyday problems, meeting needs for comfort, light, food, cleaning and entertainment, with a heavy dose of psychology in there, and sometimes with an emotional dimension too.

Equally, people’s understandings—mental models—of what energy is, and how their actions relate to its use, and their use of heuristics for deciding what actions to take, are under-explored, and could be extremely important in developing ways of visualising or engaging with energy use which are meaningful for householders. This is where ethnographic research, and in-context research on decision-making in real life, can provide insights which are directly useful for the design process.  

The overall project covers a broad scope of work and expertise, including environmental scientists and architects alongside design researchers, and benefits from ‘Living Lab’ instrumented houses in each country, which will provide a platform (albeit artificial) for demonstrating and trialling the interventions developed, before they are installed in houses in real life.

How did you first become interested in decision making psychology? I first got interested in the area while doing my Master’s back in 2004-5. For my project, I was looking at how technologies, and the structure of systems, have been used to influence (and control) public behaviour, and as such, approaches such as B.J. Fogg’s Persuasive Technology were very relevant. While Persuasive Technology has tended not to employ ‘behavioural economics’ techniques too much, it was initially through this angle of ‘persuasion’ that I read people like Robert Cialdini, then followed the thread through to learn more about cognitive biases and heuristics, from authors such as Scott Plous, the Russell Sage Foundation-supported collections of Tversky, Kahneman, Gilovich, Slovic et al’s papers, then Gigerenzer and the ABC group’s work. Herbert Simon’s work has also been a huge influence, because his multidisciplinarity enabled so many parallels to be drawn between different fields. It was partly through his work, I think, that I became interested in cybernetics and this whole body of work from the 1940s onwards which attempted to draw together systems across human psychology, technology and nature, but which in public consciousness seems mainly to be about people with robotic hands.

In parallel, I was familiar with concepts such as heuristics, affordances and mental models from the cognitive ergonomics literature, one of the other main intersections between design and psychology. Here, the work of people such as Don Norman and Jakob Nielsen is hugely influential; this had first become interesting when I was in industry, working on some products which really would have benefitted from a better understanding of the intended customers’ perceptions, thought processes, needs and abilities, and I was hungry to learn more about how to do this. The idea of applying psychological insights to the design process greatly appealed to me: I had something of an engineer’s mindset that wanted, Laplace’s demon-like, to be able to integrate all phenomena, social and physical, into something ‘actionable’ from a design standpoint. While I now appreciate my naïvety, the vision of this ‘system’ was a good inspiration for taking things further.

For my PhD—supervised by David Harrison (Brunel) from the ‘design’ side and Neville Stanton (Southampton) from the ‘psychology’ side—I tried to bring together insights relevant to behaviour change from lots of different disciplines, including behavioural economics, into a form which designers could use during design processes, for products, services and environments, with a focus on influencing more sustainable and socially beneficial behaviour. Various iterations were developed, via lots of workshops with designers and other stakeholders, ending up with the Design with Intent toolkit. This is still a work in progress, though it’s had to take a back seat to some more practical projects in the last couple of years, but I hope in 2014 to be able to release a new version together with, perhaps, a book.

Why did you decide to stay in academia instead of going into industry?
I like to think I’ve found the best of both worlds: the Helen Hamlyn Centre for Design acts as a consultancy for many of its projects with commercial clients, but also (as part of the Royal College of Art) works as part of many academic research projects (though always with a practical focus). During my first six months here, I’ve worked on commercial projects for new startups and a mobility products manufacturer, as well as two academic research projects. Alongside this job I also do some freelance consultancy in industry, which often involves running workshops on design and behaviour, writing articles, and generating early-stage ideas for companies interested in including a ‘behavioural’ element in their design processes.

There are advantages and disadvantages of academic and industrial work contexts. The freedom to pursue ‘pure’ knowledge (whatever that really means), and indeed more open-ended research, with longer timeframes, is a wonderful aspect of academia, a luxury that most companies cannot really afford given the constraints of the market. However, I found the bureaucracy at both Brunel and the University of Warwick crushingly slow: there was a lot of research that just never got done because the system made sure it took too long, or involved too much paperwork to bother with. That was deeply frustrating, when there are many very good researchers at both institutions who would thrive given a bit more freedom to do things. The RCA (perhaps because it’s so small) is refreshingly fast: it’s possible to decide to try something in the morning and go and do it in the afternoon, or even immediately.

Perhaps also, despite being relatively knowledgeable about behaviour change—one of the biggest buzzwords of the last five years!—I was very reluctant to go straight into a commercial application of the work which has no social benefit. I don’t want to use insights to sell people more things they don’t need, or exploit biases and heuristics to segment and profile consumers to target them with more advertising. I apply John Rawls’s ‘veil of ignorance’ wherever I can: I hate it when advertisers and marketers make assumptions about me, and my likely behaviour, so I don’t particularly want to do that to other people. That rules out a lot of organisations who want people with ‘behaviour change’ credentials.

What do you enjoy the most in your current role? While doing lots of projects is a lot of work, and there’s a tendency for this sort of thing to take over your life, in all honesty this is a very enjoyable job. Meeting lots of different people—members of the public—and actually involving them in the research, designing with them rather than for them, is incredibly satisfying. Also, I think most of the people working for the Helen Hamlyn Centre, because their jobs involve so much research with the public, are genuinely nice people. So they’re great to work with.

Do you see any challenges to the wider adoption of decision making psychology in your field? Most designers are not trained in psychology, so there is always a barrier to adoption. There is also the risk that highly popularised approaches and trends, such as what Nudge has become, lose their nuance and the cautious scientific approach when they just become another soundbite or quick-fix ‘solution’, applied to any context without doing any actual user research. And I’m aware that Design with Intent was essentially this, a context-free toolbox of ideas to apply to any situation, and I now see it as a major flaw which needs to be addressed in future versions.

But if I see the DDB/VW Piano Stairs video used one more time as a kind of universal panacea for deeply complex social problems (“Design can fix anything, just look at how they made taking the stairs fun!!!!”) then I’ll scream, or more likely mumble something grumpily at the back of the room.

How do you see the relationship between academic researchers and practitioners? Design isn’t really an academic subject in itself—it’s a process. I might have a PhD in it, but I’ll be honest and say that it’s lacking in a lot of formal theory. That isn’t a bad thing, necessarily—again, Herbert Simon (in The Sciences of the Artificial) and then Donald Schön (in The Reflective Practitioner) did a good job of explaining, in different ways, why it is a qualitatively different approach to knowledge than the natural sciences—but what it does mean is that the most interesting and useful research for designers is often not in design at all, but in other fields that overlap. Designers need to be learning from psychologists, anthropologists, social researchers, economists, biologists, and actual practitioners in other fields. It also means there are a lot of design research papers which are basically restatements of the “What is design? What does it mean to be a designer?” question, which are fine but become tiring after a while.

So, to return to the question, academic ‘design’ research is generally very poor at being useful to practitioners. Part of this is the eternal language / framing barrier between academia and practice—there are so many assumptions about terminology and so on which prevent easy engagement—but there is also the access problem. Design consultancies very rarely subscribe to academic journals, and even if they do subscribe to design journals, it’s probably journals from outside the field (see above) that would bring more useful insights anyway. When I did a brief survey on this, these were a few of the points which came up.

What advice would you give to young researchers who might be interested in a career in your field? I would very much like to see more designers drawing on the heuristics work of Gerd Gigerenzer, Peter Todd, et al., and exploring what this means in the context of design for behaviour change and design in general: bounded rationality, seen as a reality and as essentially adaptive rather than a ‘defect’ in human decision-making, seems to marry up quite well with the tenets of ethnography and people-centred design. Some people have started to do it, e.g. Yvonne Rogers at UCL, but there is a massive opportunity for some very interesting work here.

Also, consider cybernetics. Read Hugh Dubberly and Paul Pangaro’s work and think about systems more broadly than the disciplinary boundaries within which you may have been educated. In general, read as much as you can, outside of what you think ‘your subject’ is. The most interesting innovations always occur at the boundaries between fields.

More than anything else, work on projects where you do research with real people, in real, everyday life contexts, rather than only in lab studies. It will change how you model behaviour, how you think about people, and how you understand decision making.

Visit Dan’s website: http://architectures.danlockton.co.uk/dan-lockton/

Star Track: Peter McGraw

Following on from the success of our Research Heroes interviews, we’re launching a new interview series: Star Track. In this series, we turn the spotlight on researchers who will play an important role in shaping the future of the field. These people have already made a significant contribution with their groundbreaking research and engagement in the research community – you might know about them or might not, but you should definitely listen to what they have to say – enjoy!
First in our new series is Peter McGraw, an associate professor of marketing and psychology at the University of Colorado Boulder, who is an expert in the interdisciplinary fields of emotion and behavioral decision theory. His research examines the interrelationship of judgment, emotion, and choice, with a focus on consumer behavior and public policy. Lately, McGraw has been investigating what makes things funny. He directs the Humor Research Lab (aka HuRL), a laboratory dedicated to the experimental study of humor, its antecedents, and its consequences. He co-authored The Humor Code: A Global Search for What Makes Things Funny, which hit the bookstores on 4/1/2014. Of recent note, McGraw made the 2013 Stylish Scientist List – probably because he likes to rock a sweater vest.

I wanted to pursue an academic career in this field because… I thought that pursuing an academic career would yield a stimulating yet leisurely intellectual life. (I was half right.) While researching grad programs, I read Tom Gilovich’s book: How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life. By the end of chapter 2, I was hooked on the idea of studying judgment and decision making.

I find the inspiration for my research mostly from… Entrepreneurs and artists. Scientists don’t often think of their research as a creative endeavor that is important to share broadly with the world. I believe that the process of creating and disseminating scientific insights is enhanced by emulating people who have a different perspective and a broader array of tools. Also, behaving like an artist or an entrepreneur is much more fun than just trying to please peer reviewers.

When people ask me what I do, I say… I study what makes things funny.

The best research project I have worked on during my career… In the summer of 2008, Caleb Warren and I set out to answer the question of why people laugh at moral violations. That project changed my life, as it spurred a quest to crack the humor code (something that behavioral decision theory’s “emotional revolution” had overlooked). The resulting paper, which was published in Psychological Science in 2010, brought together my two main research areas at the time: moral judgment and mixed emotions. Caleb and I introduced the benign violation theory of humor and showed that moral violations can be a source of pleasure (something every good comic knows).

Everything came together just right; the paper was accepted with no requested changes – something that I never expect to happen again.

The paper that has most influenced me is… When Caleb and I were examining the research on humor, the theories didn’t seem quite right. Fortunately, we found a little-cited paper by a linguist named Thomas Veatch. To us, it was a huge advance over existing theories. Veatch’s work served as the foundation for the benign violation theory, which, in turn, serves as the foundation for the research conducted in the Humor Research Lab.

If I wasn’t doing this, I would be… Starting some sort of business.

The most important quality for a researcher to have is… Perseverance. Repeat after me, “They can slow us down, but they can’t stop us.”

The biggest challenge for our field in the next 10 years… Finding a way to speed up the peer-review process.

My advice for young researchers at the start of their career is… Write every day. Start today – and purchase the book: How to Write A Lot.

The one thing I’ve found most challenging is… Staying asleep until my alarm goes off. The work academics do is highly evaluative and uncertain – two conditions that contribute to anxiety. And anxiety gets me out of bed early. On the other hand, it has a silver lining. I believe that every day is a big day and should be lived with a sense of urgency. And big days rarely start with the snooze button.

For more information on Peter McGraw visit his page: http://www.petermcgraw.org/

For more information on his book see: http://humorcode.com/

Viewpoint: Why I’m Leaving Academia

This week we’re featuring a guest post from Ben Kozary, a PhD candidate at the University of Newcastle in Australia. After getting to know Ben at various conferences over the past year, the InDecision team was disappointed to hear about his decision to leave academia – partly because he’s an excellent and passionate researcher, partly because we wouldn’t benefit from his jovial company at future conferences! However, his reasons for leaving echoed many dinner conversations we’ve had with fellow PhD students, so we asked him to write about his experience and his decision to move to industry. Over to Ben…

To say I’ve learnt a lot during my PhD candidature would be an understatement. From a single blank page, I now know more than most people in the world about my particular topic area. I understand the research process: from planning and designing a study; to conducting it; and then writing it up clearly – so that readers may be certain about what I did, how I did it, what I found, and why it’s important. I’ve met a variety of people from around the world, with similar interests and passions to me, and forged close friendships with many of them. And I’ve learnt that academia might well be the best career path in the world. After all, you get to choose your own research area; you have flexible working hours; you get to play around with ideas, concepts and data, and make new and often exciting discoveries; and you get to attend conferences (meaning you get to travel extensively, and usually at your employer’s expense), where you can socialise (often at open bars) under the guise of “networking”. Why, then, you might be wondering, would I want to leave all of that behind?

My journey through the PhD program has been fairly typical; I’ve gone through all of the usual stages. I’ve been stressed in the lead-up to (and during) my proposal defence. I’ve had imposter syndrome. And I’ve been worried about being scooped, and/or finding “that paper”, which presents the exact research I’m doing, but does it better than me. But now, as I begin my final year of the four year Australian program, I’m feeling comfortable with, and confident in, the work I’ve produced so far in my dissertation. And yet, I’m also disillusioned – because, for all of its positives, I’ve come to see academia as a broken institution.

That there are problems facing academic research is not news, especially in psychology. Stapel and Smeesters, researcher degrees of freedom and bias, (the lack of) statistical power and precision, the “replication crisis” and “theoretical amnesia”, social and behavioural priming: the list goes on. However, these problems are not altogether removed from one another; in fact, they highlight what I believe is a larger, underlying issue.

Academic research is no longer about a search for the truth

Stapel and Smeesters are two high profile examples of fraud, which represents an extreme exploitation of researcher degrees of freedom. But what makes any researcher “massage” their data? The bias towards publishing only positive results is no doubt a driving force. Does that excuse cases of fraud? Absolutely not. My point, however, is that there are clear pressures on the academic community to “publish or perish”. Consequently, academic research is largely an exercise in career development and promotion, and no longer (if, indeed, it ever was) an objective search for the truth.

For instance, the lack of statistical power evident in our field has been known for more than fifty years, with Cohen (1962) first highlighting the problem, and Rossi (1990) and Maxwell (2004) providing further prompts. Additionally, Cohen (1990; 1994) reminded us of the many issues associated with null-hypothesis significance testing – issues that were raised as far back as 1938 – and yet, it still remains the predominant form of data analysis for experimental researchers in the psychology field. To address these issues, Cohen (1994: 1002) suggested a move to estimation:

“Everyone knows” that confidence intervals contain all the information to be found in significance tests and much more. […] Yet they are rarely to be found in the literature. I suspect that the main reason they are not reported is that they are so embarrassingly large! But their sheer size should move us toward improving our measurement by seeking to reduce the unreliable and invalid part of the variance in our measures (as Student himself recommended almost a century ago). Also, their width provides us with the analogue of power analysis in significance testing – larger sample sizes reduce the size of confidence intervals as they increase the statistical power of NHST. 

Twenty years later, and we’re finally starting to see some changes. Unfortunately, the field now has to suffer the consequences of being slow to change. Even if all our studies were powered at the conventional level of 80% (Cohen, 1988; 1992), they would still be imprecise; that is, the width of their 95% confidence intervals would be approximately ±70% of the point estimate or effect size (Goodman and Berlin, 1994). In practical terms, that means that if we used Cohen’s d as an effect size metric (for the standardised difference between two means), and we found that it was “medium” (that is, d = 0.50), the 95% confidence interval would range from 0.15 to 0.85. This is exactly what Cohen (1994) was talking about when he said the confidence intervals in our field are “so embarrassingly large”: in this case, the interval tells us that we can be 95% confident the true effect size is potentially smaller than “small” (0.20), larger than “large” (0.80), or somewhere in between. Remember, however, that many of the studies in our field are underpowered, which makes the findings even more imprecise than what is illustrated here; that is, the 95% confidence intervals are even wider. And so, I wonder: How many papers have been published in our field in the last twenty years, while we’ve been slow to change? And how many of these papers have reported results at least as meaningless as this example?
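
To make that arithmetic concrete, here is a minimal Python sketch of the example above. It assumes the standard large-sample approximation for the standard error of Cohen’s d and two groups of 64 participants each (roughly the sample size that gives 80% power to detect d = 0.50 at α = .05, two-tailed); the numbers are illustrative, not drawn from any particular study:

```python
import math

# Two groups of n = 64 each: approximately 80% power to detect
# d = 0.50 at alpha = .05, two-tailed.
d = 0.50
n1 = n2 = 64

# Large-sample approximation to the standard error of Cohen's d.
se_d = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))

# 95% confidence interval around the observed effect size.
lo, hi = d - 1.96 * se_d, d + 1.96 * se_d

print(f"95% CI for d: [{lo:.2f}, {hi:.2f}]")                  # ~[0.15, 0.85]
print(f"Half-width as % of d: {100 * 1.96 * se_d / d:.0f}%")  # ~70%
```

Running this reproduces the interval in the text: even at conventional power, the 95% CI spans roughly 0.15 to 0.85, about ±70% of the point estimate.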

I suspect that part of the reason for the slow adoption of estimation techniques is the uncertainty they bring to the data. Significance testing is characterised by dichotomous thinking: an effect is either statistically significant or it is not. In other words, significance testing is seen as easier to conduct and analyse relative to estimation; however, it does not allow for the same degree of clarity in our findings. By reporting confidence intervals (and highlighting uncertainty), we reduce the risk of committing one of the cardinal sins of consumer psychology: overgeneralisation. Furthermore, you may be surprised to learn that estimation is just as easy to conduct as significance testing, and even easier to report (because you can extrapolate greater meaning from your results), as the sketch below illustrates.
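
As a rough illustration (the summary numbers here are made up purely for the sketch, not data from any real study), the same two-group summary statistics yield both the significance test and the interval estimate; the estimation step is one extra line of arithmetic:

```python
import math
from scipy import stats

# Hypothetical summary statistics for two groups (illustrative only).
m1, s1, n1 = 5.2, 1.5, 40
m2, s2, n2 = 4.6, 1.4, 40

# Significance testing: Welch's t-test gives a dichotomous verdict.
t, p = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=False)

# Estimation: 95% CI for the raw mean difference (normal approximation).
diff = m1 - m2
se = math.sqrt(s1**2 / n1 + s2**2 / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"p = {p:.3f}")                                           # ~.07: 'not significant'
print(f"difference = {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # the fuller story
```

Here the p-value alone says only “not significant”, while the interval shows the difference could plausibly range from about zero to more than twice the point estimate, which is exactly the kind of information dichotomous reporting hides.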

Replication versus theoretical development

When you consider the lack of precision in our field, in conjunction with the magnitude of the problems of researcher degrees of freedom and publication bias, is it any wonder that so many replication attempts are unsuccessful? The issue of failed replications is then compounded further by the lack of theoretical development that takes place in our discipline, which creates additional problems. The incentive structure on which the academic institution rests implies that success (in the form of promotion and grants) comes to those who publish a high number of high-quality papers (as determined by the journal in which they are published). As a result, we have a discipline that lacks both internal and external relevance, due to the multitude of standalone empirical findings that fail to address the full scope of consumer behaviour (Pham, 2013). In that sense, it seems to me that replication is at odds with theoretical development when, in fact, the two should be working in tandem; that is, replication should guide theoretical development.

Over time, some of you may have observed (as I have) that single papers are now expected to “do more”. Papers will regularly report four or more experiments, in which they will identify an effect; perform a direct and/or conceptual replication; identify moderators and/or mediators and/or boundary conditions; and rule out alternative process accounts. I have heard criticism directed at this approach, usually from fellow PhD candidates, that there is an unfair expectation on the new generation of researchers to do more work to achieve what the previous generation did. In other words, that the seminal/classic papers in the field, upon which now-senior academics were awarded tenure, do less than what emerging and early career researchers are currently expected to do in their papers. I do not share this view that there is an issue of hypocrisy; rather, my criticism is that as the expectation that papers “do more” has grown, there is now less incentive for academics to engage in theoretical development. The “flashy” research is what gets noticed and, in turn, what gets its author(s) promoted and wins them grants. Why, then, would anyone waste their time trying to further develop an area of work that someone else has already covered so thoroughly – especially when, if you fail to replicate their basic effect, you will find it extremely difficult to publish in a flagship journal (where the “flashiest” research appears)?

This observation also raises the question: where has this expectation that papers “do more” come from? As other scientific fields (particularly the hard sciences) have reported more breakthroughs over time, I suspect that psychology has desired to keep up. The mind, however, in its intangibility, is too complex to allow for regular breakthroughs; there are simply too many variables that can come into effect, especially when behaviour is also brought into the equation. This issue is highlighted nowhere more clearly than in the case of behavioural priming. Yet, with the development of a general theory of priming, researchers can target their efforts at identifying the varied and complex “unknown moderators” of the phenomenon and, in turn, design experiments that are more likely to replicate (Cesario, 2014). Consequently, the expectation for single papers to thoroughly explain an entire process is removed – and our replications can then do what they’re supposed to: enhance precision and uncover truth.

The system is broken

The psychology field seems resistant to returning to simpler papers that take the time to develop theory and contribute to knowledge in a cumulative fashion. Reviewers continue to request additional experiments, rather than demanding greater clarity from reported studies (for example, in the form of effect sizes and confidence intervals) and/or encouraging further theoretical development. Put simply, there is an implicit assumption that papers need to be “determining” when, in fact, they should be “contributing”. As Cumming (2014: 23) argues, it is important that a study “be considered alongside any comparable past studies and with the assumption that future studies will build on its contribution.”

In that regard, it would seem that the editorial/publication process is arguably the larger, underlying issue contributing (predominantly, though not necessarily solely) to the many problems afflicting academic research in psychology. But what is driving this issue? Could it be that the peer review process, which seems fantastic in theory, doesn’t work in practice? I believe that is certainly a possibility.

Something else I’ve come to learn throughout my PhD journey is that successful academic research requires mastery of several skills: you need to be able to plan your time; communicate your ideas clearly; think critically; explore issues from a “big picture” or macro perspective, as well as at the micro level; undertake conceptual development; design and execute studies; and be proficient at statistical analysis (assuming, of course, that you’re not an interpretive researcher). Interestingly, William Shockley, way back in 1957, posited that producing a piece of research involves clearing eight specific hurdles – and that these hurdles are all essentially equal. In other words, successful research calls for a researcher to be adept at each stage of the research process. However, in reality, it is often the case that we are very adept (sometimes exceptional) at a few aspects, and merely satisfactory at others. The aim of the peer review process is to correct or otherwise improve the areas we are less adept at, which should – theoretically – result in a strong (sometimes exceptional) piece of research. Multiple reviewers evaluate a manuscript in an attempt to overcome these individual shortfalls; yet, look at the state of the discipline! The peer review process is clearly not working.

I’m not advocating abandoning the peer review process; I believe it is one of the cornerstones of scientific progress. What I am proposing, however, is for an adjustment to the system – and I’m not the first to do so. What if we, as has been suggested, move to a system of pre-registration? What if credit for publications in such a system were two-fold, with some going towards the conceptual development (resulting in the registered study), and some going towards the analysis and write-up? Such a system naturally lends itself to specialisation, so, what if we expected less of our researchers? That is, what if we were free to focus on those aspects of research that we’re good at (whether that’s, for example, conceptual development or data analysis), leaving our shortfalls to other researchers? What if the peer review process became specialised, with experts in the literature reviewing the proposed studies, and experts in data analysis reviewing the completed studies? This system also lends itself to collaboration and, therefore, to further skill development, because the experts in a particular aspect of research are well-recognised. The PhD process would remain more or less the same under this system, as it would allow emerging researchers to identify – honestly – their research strengths and weaknesses, before specialising after they complete grad school. There are, no doubt, issues with this proposal that I have not thought of, but to me, it suggests a stronger and more effective peer review process than the current one.

A recipe for change

Unfortunately, I don’t believe these issues that I’ve outlined are going to change – at least not in a hurry, if the slow adoption of estimation techniques is anything to go by. For that reason, when I finish my PhD later this year, I will be leaving academia to pursue a career in market research, where obtaining truth from the data to deliver actionable insights to clients is of the utmost importance. Some may view this decision as synonymous with giving up, but it’s not a choice I’ve made lightly; I simply feel as though I have the opportunity to pursue a more meaningful career in research outside of academia – and I’m very much looking forward to the opportunities and challenges that lie ahead for me in industry.

For those who choose to remain in academia, it is your responsibility to promote positive change; that responsibility does not rest solely on the journals. It has been suggested that researchers boycott the flagship journals if they don’t agree with their policies – but that is really only an option for tenured professors, unless you’re willing to risk career self-sabotage (which, I’m betting, most emerging and early career researchers are not). The push for change, therefore, needs to come predominantly (though not solely) from senior academics, in two ways: 1) in research training, as advisors and supervisors of PhDs and post-docs; and 2) as reviewers for journals, and members of editorial boards. Furthermore, universities should offer greater support to their academics, to enable them to take the time to produce higher quality research that strives to discover the truth. Grant committees, also, may need to re-evaluate their criteria for awarding research grants, and focus more on quality and meaningful research, as opposed to research that is “flashy” and/or “more newsworthy”. And the next generation of academics (that is, the emerging and early career researchers) should familiarise themselves with these issues, so that they may make up their own minds about where they stand, how they feel, and how best to move forward; the future of the academic institution is, after all, in their hands.