Viewpoint: Nudging Nudgers’ Nudges

Editor’s note: Ben Kozary, whom you may already know from his articulate and thought-provoking post on why he decided to leave academia, has become the latest recruit to the InDecision team, giving us a view of the decision sciences from the other side of the world in Australia. We asked Ben to report on the first International Behavioural Insights Conference in Sydney from the perspective of someone moving from academia into industry – here he reflects on his first experience of the world of behavioural insights Out There. 

A few days ago, I read an interesting piece on the differences between behavioural economics and psychology. Truth be told, even after attending the inaugural Behavioural Exchange conference in Sydney on June 2 and 3, I’m still not sure what exactly the difference is. But does the distinction even matter? The academic in me screams, “YES!” but, outside of the ivory tower, it seems that most people aren’t concerned. Behavioural Exchange (hereafter referred to as bx2014) was, after all, a public policy conference – and as such, the emphasis of the conference was on actionable and applicable insights from the behavioural sciences.

It was the promise of these insights that drew me to bx2014. As a final year PhD candidate researching Consumer Psychology, I was looking to ground the heavily theoretical work I do in some concrete practical applications. My hope was that, in doing so, my work would take on greater meaning, even if only to me – not to mention that I’d be able to pad out the practical implications component of my dissertation. Additionally, as someone transitioning from academia into industry, I was curious as to what the transition was going to entail. Who are the sorts of people I’m going to be dealing with in industry, and how are they similar/different to academics? What value is placed on the skills and knowledge I have, and how might I be best able to apply myself? And how well received are the ideas that I’ve been exposed to throughout my time in academia? In this post, I hint at the answers to these questions but, as you may have guessed, they’re by no means definitive.

bx2014: an overview

Speakers at the event included academics, the majority of whom were from Harvard University; as well as members of government, primarily from Australia, but also from the US, UK, and Singapore; and businesspeople from around the world, including CEOs, consultants, and designers. For me, the first day seemed to focus more on government, whilst the second day was primarily about business – but regardless of the relevance (or not) to me, I found all of the sessions fascinating. Individual presenters and panels alike dealt with such issues as:

  • The importance and benefits of nudging
  • How governments can embrace, and have already undertaken, nudging
  • Opportunities, risks, and common challenges of nudging
  • The fundamentals of nudging, including data, design, and delivery
  • How the integration of findings from academia and experiences in business and government can make nudges more effective
  • Reflections and insights from business and academia on the application of nudges in the corporate world
  • The future of nudging and behavioural science

Nudging: is it just a fad?

At this point, you may have noticed that the term “nudging” seems to have been applied as a catchcry for any behavioural intervention. Overgeneralisation may be a common sin of consumer psychology researchers, but our thinking appears more nuanced than that of our industry counterparts. For instance, many of the initiatives suggested or discussed at bx2014 were based upon research describing numerous cognitive biases, including the sunk-cost fallacy, present bias, hindsight bias, confirmation bias, anchoring, and framing effects – but there was almost no mention of the intricacies of these biases, nor any real emphasis placed on the conditions specific to particular case studies upon which they had been used as part of a successful behavioural intervention. Given that I was asked on more than ten separate occasions during the two days whether I’d read Kahneman’s Thinking, Fast and Slow (I haven’t, which apparently put me firmly in the minority of conference attendees), my concern here is that many of the attendees were searching for quick, easily applied solutions to issues affecting their stakeholders.

This overgeneralisation is dangerous ground to walk, because it flirts with the prospect of nudging becoming yet another apparent management or political panacea, when it’s anything but. Instead, we need to heed what Professor Cass Sunstein said in the first presentation of the conference: that effective nudging is about recognising individual differences. “Nudges are like GPS units: they tell you the most efficient, or ‘best’, route, but you don’t have to take it; you can go your own way and choose the scenic route, if you like.” In other words, nudges should preserve individual choice by not being overly paternalistic; this is what separates them from mandates. In that sense, I feel that Professor Sunstein was nudging us (if you will) to not overgeneralise.

Experimentation: “test, learn, adapt”

If you’re thinking that executing Professor Sunstein’s advice is easier said than done, you’re right – but that, too, was addressed at bx2014. One of the recurring points of the conference was the need for experimentation, despite how challenging it may be. Dr David Halpern, Chief Executive of the UK Behavioural Insights Team, told us to live by one simple principle: “Test, learn, adapt.” Another presenter suggested, “It’s better to say, ‘I don’t know,’ and then test something, than to skip trials and push ahead to a full roll-out on a hunch.”

We were also reminded that the most successful companies, especially in the technology sphere – Google, Facebook, Amazon, etc. – perpetually experiment. Randomised controlled trials (RCTs) are the gold standard of experimentation, but they’re not always viable. When that’s the case, we were advised to “do whatever experiments or tests you can, provided the costs don’t outweigh the potential benefits – but always strive to get the best data available.” And therein lay two of the foremost challenges of behavioural interventions in industry: funding, and time. Fortunately for me, my time in academia has me well versed in both of these issues…

Replication: it’s essential in industry, too

The issue of replication is one that we should all be familiar with by now – and it didn’t go unmentioned at bx2014, either. For instance, Professor Richard Thaler, from the University of Chicago, told us that managers and policy makers generally think they’re right, and they don’t like taking risks; however, they are often too impatient to run experiments, and don’t see the point of replication, with their philosophy being, “It worked already, so why do we need to spend more time and money to test it again?” This problem is an obvious one, but it can be overcome with education and training.

A more serious problem with replication was highlighted during the Design breakout session I attended on the afternoon of the first day, where one of the presenters said, “Relative to hard sciences, social science is difficult, because the results will not always replicate. You can implement a nudge or a system of some sort as an effective intervention, but in 6-12 months’ time (or maybe more), people might have adapted and changed their behaviours such that it no longer works – and therefore won’t replicate in any RCTs or experiments you run.” From an academic standpoint, I find this idea intriguing, because it’s something that we rarely consider; we tend to take the more general view that, if an effect is real, it will replicate. I’m yet to hear people’s adaptability offered as a reason for some of the recent failures of studies to replicate – and I’m not saying that it’s necessarily a legitimate reason, but it does highlight something that we as researchers risk forgetting: that there are real people behind our statistics, and they can be unpredictable and subject to change.

Big data: how do we use it?

On the second day, I attended the Data breakout session, during which several interesting points were made. The focus of the session was on big data, which Dr James Guszcza, of the Deloitte Analytics Institute in Singapore, told us refers to predictive analytics and modelling. These models, he said, can point us in the right direction and tell us whom to target our interventions at, but they don’t tell us how to prompt the desired behaviour change. For that, behavioural insights are required; thus, he recommended that behavioural insights and predictive modelling be integrated, because – to echo Professor Sunstein – solutions will often need to be nuanced and individually focused. Dr Guszcza also advised that we be flexible with our data, and open to the possibility that it can be useful in ways we wouldn’t previously have imagined. “Old” datasets are particularly useful in this regard, he said, so we should also be mindful of “digital exhaustion” (that is, the deletion of older data). “With today’s storage capabilities, you shouldn’t need to delete anything simply because it’s old.”

Collaboration in nudging: how academia and industry can work together

Strangely, the importance of collaboration between academia and industry wasn’t strongly highlighted at the conference; however, one presenter did note its significance. For collaboration to be effective, he said, it’s a matter of recognising each other’s needs. That means that academics should look at questions important to organisations, and organisations should allow academics to publish – especially given the type of rich data they have access to. Furthermore, collaboration could help solve what was highlighted as a critical issue affecting research into behavioural insights. That is, Professor Max Bazerman, of Harvard University, described how the judgement and decision-making field originated decades ago under the notion that, “If we understand what’s wrong with the human mind, we can fix it – but this approach is flawed. Instead, we should focus on understanding and accepting that this is the way the world works, and therefore we can learn to adapt and be effective.”

Final thoughts

At the conclusion of the conference, we were asked to fill out a short questionnaire. One of the questions asked us to describe bx2014 in one sentence; I wrote: “Nudging nudgers’ nudges”, because I felt that neatly summed up the notion that most people were there to learn how to implement more effective behavioural interventions (plus, my brain was fried after an intense two days, as well as having been punished by my downing more than a few drinks at the reception the night before…). But, the truth is, this conference can’t be compressed into a sentence; the ideas are just too big. So, with that in mind, I’d now like to share with you a few thought-provoking ideas that I jotted down over the two days:

If I were young and wanted to start a business, I would start a choice engine, because they will do for other industries what travel websites did for that industry. The amount of data emerging and becoming available is monumental. -Professor Richard Thaler, University of Chicago

Nudges lead people to engage in behaviour – but, as a psychologist, I’m interested in the outcome beyond that behaviour; for example, are people happy or unhappy? -Professor Mike Norton, Harvard University

When thinking about nudges, consider this piece from the late author, David Foster Wallace: There are these two young fish swimming along, and they happen to meet an older fish swimming the other way, who nods at them and says, “Morning, boys, how’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, “What the hell is water?” Remember: there will always be water – and there will always be nudges, even if we don’t realise they’re there. We must open our eyes to the extensive possibilities of nudging. -Professor Cass Sunstein, Harvard University

And, finally:

Done is better than perfect. -Mia Garlick, Head of Policy Australia and New Zealand, Facebook

So, in the interests of applying something I learned at bx2014, I’m calling this post done. It’s not perfect – but as several speakers remarked at the conference, satisficing is better than optimising. And, at the end of the day, I think that’s a pretty good example of behavioural economics in action.


Viewpoints: “Becoming a Professor” Podcasts

On InDecision Blog we write a lot about how to be the “ideal candidate”, featuring the “ideal advice.” While advice is great, there’s something special about hearing firsthand about the intimate struggles and personal journeys that can enlighten both those inside and outside of academia.

As the professor job market season dawns this year, we wanted to share four unique stories about four young scholars’ recent journeys on the job market. If you’ve ever wanted an honest behind-the-scenes look, this is it: four young scholars explain their non-traditional paths with frank honesty and in personally revealing ways. These podcasts will give you an opportunity to connect with their stories and hear their views on research and life outside it. You’ll also get the perspective of people who chose not to take the “top 20” school path – a perspective we have not covered before, despite the fact that the majority of the field does not reside in the top 20 schools.

These short podcasts are hosted by InDecision contributor Troy Campbell, who is starting a series of InDecision podcasts aimed at both the professional and the personal side of research. Note that the conversations are with people who got placements in business schools – particularly in marketing. If you are a non-academic, the podcasts are listed in order of general topical appeal. All podcasts can be downloaded to be listened to offline (e.g. during a workout).

Jim Mourey
Being a presenter, being true to yourself, quality of life, and personally valuing teaching.
PhD: University of Michigan
Professorship: DePaul University
Rob Smith
Choosing not to go the top schools, quality of life, importance of teaching.
PhD: University of Michigan
Professorship: Ohio State
Caroline Roux
Choosing a different path: policy research and being a different type of scholar.
PhD: Northwestern
Professorship: Concordia University, Canada
Adrian Camilleri
Being interdisciplinary, reflecting on your Identity, dealing with Marketing Interviews.
Post Doc: Duke University
Professorship: RMIT University, Australia
Coming soon.

—————-

Your podcast host Troy Campbell is a PhD student at Duke University, hopeful to be on the job market this year. For some of his viewpoints and his always animated InDecision Blog reporting, click here.

Viewpoint: Life as an Assistant Professor

We recently had a chat with Joseph Redden about his happy life as an assistant professor. From this conversation, we found out that Professor Redden had a lot of answers to the questions that stressed-out graduate students often have about being a young professor, so we asked him to do a Q&A with a representative stressed-out graduate student.

So far, we’ve been interviewing the established greats and focusing a lot on life after tenure. But as a blog dedicated in part to helping young researchers find their way, we thought it would be good to have some more posts about your most immediate concerns and fears. So here’s Joseph Redden with some guidance and comfort, in the first of many upcoming InDecision Blog posts on the two topics on all young researchers’ minds: the job market and being a quality young faculty member. Joseph Redden is an Assistant Professor of Marketing at the Carlson School of Management at the University of Minnesota, and an emerging expert on the topic of satiation.


Hi Joe, I am a stressed-out student in graduate school – or actually I’m not just stressed but also worried and afraid. I am worried that my impostor syndrome is not just a syndrome but real: I look at the top people on the job market recently and I just feel inadequate. I look at the greats in our field, and their theoretical might and publishing power make me feel like I’ll never make it in the field; even if I can get a few publications, I am worried my work won’t matter. So if you don’t mind, I have a bunch of questions...

SOS: How stressed are you? Do you have free time?

Professor Redden: Like any academic or human for that matter, I feel like my life has plenty of stress. That being said, I do find that my stress seems to diminish a bit every year. I like to think that is not just adaptation, but rather a reflection of my active efforts to manage stress in two ways. First, I’ve focused my time more on problems that really pique my interest and leverage my areas of expertise. Second, I make sure some of the time “savings” I get from being more productive translates into free time for me to enjoy. I personally find this last point the most attractive aspect of an academic life.

SOS: What’s daily life like for you?

Professor Redden: Like any other academic, there is not really a prototypical day. Some days are mostly teaching, others mostly writing, some mostly reading, while others might be service. Even so, I really try to keep a regular schedule (a 9-to-5 if you will) to avoid burnout. If you don’t do this I think it’s very easy to burn yourself out because there is always more we could do on every research project or teaching topic. I find it helpful to set a goal for what I want to get done in a week. If I happen to get lucky and get things done quickly, then I might leave early. If instead things take quite a bit longer, then that becomes a longer week (and possibly weekend). Over time, I’ve found that I’ve become much better calibrated at setting what is reasonable for a week.

SOS: Do you ever have fun?

Professor Redden: Of course. Otherwise, what is the point? In fact, I explicitly carve out time for fun. As an example, I often teach on Wednesdays until noon and then often go catch an early movie. Interestingly, I have found this increases my productivity as I come back Thursday morning refreshed and ready to work. I think everyone should carve out some of these hobbies to take advantage of the flexibility academia offers. For me, this is movies, tennis leagues, my kids’ sports teams, etc.

SOS: How do you manage your choice of projects?

Professor Redden: That is a great question. I found that early in my career I tended to work on anything I found interesting. This led me to jump from project to project chasing after the “shiny new object”. You can imagine how this hampered my productivity. I now try to decide what enters my portfolio in three stages. First, I make sure that any new idea leverages an area of my expertise. I want to avoid one-off projects that require me to learn an entirely new literature each time. Second, I go ahead and write a potential contribution paragraph to flesh out whether this idea could be in an A-journal. The worst outcome is for an idea to work perfectly yet have no chance to be published. Third, I try to run a quick study to see if the idea seems promising at all. If it works, I try to quickly replicate it so I’ll know I have something real. If it fails at first, I’ll give it one more shot if I think the idea is super promising. If it works at first and fails on the replication, then I’ll often give it one more go as a sort of tiebreaker. I’ve found this approach has really helped me weed out effects that will be difficult to establish and understand.

SOS: What do I really need to do to get tenure in this field?

Professor Redden: The answer to this question is both ambiguous and varied across schools. At my university, the guidance is centered on achieving distinction in your field. Of course, this could mean something very different for everyone. Personally, I tried to make sure that two things would hold true. First, that there was a topic (satiation in my case) such that I would be one of the first few names mentioned if one asked who was doing research in that area. Second, that it worked the other way such that when asked what I researched people would have a consistent answer. I think if both of those are true then you will have achieved distinction in your field.  

SOS: How do you choose collaborators?

Professor Redden: A great deal of this is serendipity so I’m not sure there is a conscious effort to “choose” collaborators. I can say that the collaborators I want are those that share my interests, possess complementary skills, and make research fun. I’d say the last one, having fun, is by far the most important.

SOS: I am worried that only the Thalers and Loewensteins of the world will make a difference. I know I’ll never be them, so I am thinking: what’s the point? What will I really do for this field?

Professor Redden: It matters how you define making a difference. If you consider yourself a success only if you make a difference for an entire field, then that is a really high standard for nearly anyone. I like to think of making a difference at a more micro level. Think about how your presentation at a conference may affect how a listener writes their paper, how a conversation may lead a doctoral student to their thesis idea, how teaching a topic may spark a student’s interest, how seemingly minor coverage of a paper may affect a marketer at a company (and hence millions of people). I believe that many of these unknown differences are happening — as long as we work on interesting problems.

If you have any questions for future interviews, let us know at indecisionblogging@gmail.com

Interview by Troy Campbell

Viewpoint: Why I’m Leaving Academia

This week we’re featuring a guest post from Ben Kozary, a PhD candidate at the University of Newcastle in Australia. After getting to know Ben at various conferences over the past year, the InDecision team was disappointed to hear about his decision to leave academia – partly because he’s an excellent and passionate researcher, partly because we wouldn’t benefit from his jovial company at future conferences! However, his reasons for leaving echoed many dinner conversations we’ve had with fellow PhD students so we asked him to write about his experience and his decision to move to industry. Over to Ben…

To say I’ve learnt a lot during my PhD candidature would be an understatement. From a single blank page, I now know more than most people in the world about my particular topic area. I understand the research process: from planning and designing a study; to conducting it; and then writing it up clearly – so that readers may be certain about what I did, how I did it, what I found, and why it’s important. I’ve met a variety of people from around the world, with similar interests and passions to me, and forged close friendships with many of them. And I’ve learnt that academia might well be the best career path in the world. After all, you get to choose your own research area; you have flexible working hours; you get to play around with ideas, concepts and data, and make new and often exciting discoveries; and you get to attend conferences (meaning you get to travel extensively, and usually at your employer’s expense), where you can socialise (often at open bars) under the guise of “networking”. Why, then, you might be wondering, would I want to leave all of that behind?

My journey through the PhD program has been fairly typical; I’ve gone through all of the usual stages. I’ve been stressed in the lead-up to (and during) my proposal defence. I’ve had imposter syndrome. And I’ve been worried about being scooped, and/or finding “that paper”, which presents the exact research I’m doing, but does it better than me. But now, as I begin my final year of the four year Australian program, I’m feeling comfortable with, and confident in, the work I’ve produced so far in my dissertation. And yet, I’m also disillusioned – because, for all of its positives, I’ve come to see academia as a broken institution.

That there are problems facing academic research is not news, especially in psychology. Stapel and Smeesters, researcher degrees of freedom and bias, (the lack of) statistical power and precision, the “replication crisis” and “theoretical amnesia”, social and behavioural priming: the list goes on. However, these problems are not altogether removed from one another; in fact, they highlight what I believe is a larger, underlying issue.

Academic research is no longer about a search for the truth

Stapel and Smeesters are two high profile examples of fraud, which represents an extreme exploitation of researcher degrees of freedom. But what makes any researcher “massage” their data? The bias towards publishing only positive results is no doubt a driving force. Does that excuse cases of fraud? Absolutely not. My point, however, is that there are clear pressures on the academic community to “publish or perish”. Consequently, academic research is largely an exercise in career development and promotion, and no longer (if, indeed, it ever was) an objective search for the truth.

For instance, the lack of statistical power evident in our field has been known for more than fifty years, with Cohen (1962) first highlighting the problem, and Rossi (1990) and Maxwell (2004) providing further prompts. Additionally, Cohen (1990; 1994) reminded us of the many issues associated with null-hypothesis significance testing – issues that were raised as far back as 1938 – and yet, it still remains the predominant form of data analysis for experimental researchers in the psychology field. To address these issues, Cohen (1994: 1002) suggested a move to estimation:

“Everyone knows” that confidence intervals contain all the information to be found in significance tests and much more. […] Yet they are rarely to be found in the literature. I suspect that the main reason they are not reported is that they are so embarrassingly large! But their sheer size should move us toward improving our measurement by seeking to reduce the unreliable and invalid part of the variance in our measures (as Student himself recommended almost a century ago). Also, their width provides us with the analogue of power analysis in significance testing – larger sample sizes reduce the size of confidence intervals as they increase the statistical power of NHST. 

Twenty years later, and we’re finally starting to see some changes. Unfortunately, the field now has to suffer the consequences of being slow to change. Even if all our studies were powered at the conventional level of 80% (Cohen, 1988; 1992), they would still be imprecise; that is, the width of their 95% confidence intervals would be approximately ±70% of the point estimate or effect size (Goodman and Berlin, 1994). In practical terms, that means that if we used Cohen’s d as an effect size metric (for the standardised difference between two means), and we found that it was “medium” (that is, d = 0.50), the 95% confidence interval would range from 0.15 to 0.85. This is exactly what Cohen (1994) was talking about when he said the confidence intervals in our field are “so embarrassingly large”: in this case, the interval tells us that we can be 95% confident the true effect size is potentially smaller than “small” (0.20), larger than “large” (0.80), or somewhere in between. Remember, however, that many of the studies in our field are underpowered, which makes the findings even more imprecise than what is illustrated here; that is, the 95% confidence intervals are even wider. And so, I wonder: How many papers have been published in our field in the last twenty years, while we’ve been slow to change? And how many of these papers have reported results at least as meaningless as this example?
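The arithmetic above is easy to check for yourself. Here is a minimal sketch in Python; it uses the large-sample normal approximation to the sampling distribution of Cohen’s d, and assumes the textbook figure of 64 participants per group, which is the sample size needed for 80% power to detect d = 0.50 (two-tailed, α = .05):

```python
import math

def cohens_d_ci(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d, via the normal approximation
    to its standard error for two independent groups."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# 64 per group ~ 80% power for d = 0.50, two-tailed, alpha = .05
lo, hi = cohens_d_ci(0.50, 64, 64)
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")              # roughly [0.15, 0.85]
print(f"half-width: {(hi - lo) / 2 / 0.50:.0%} of d")  # roughly 70%
```

Because the standard error shrinks only with the square root of the sample size, halving the width of that interval would require roughly four times as many participants, which is precisely why Goodman and Berlin’s ±70% figure is so sobering.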

I suspect that part of the reason for the slow adoption of estimation techniques is due to the uncertainty they bring to the data. Significance testing is characterised by dichotomous thinking: an effect is either statistically significant or it is not. In other words, significance testing is seen as easier to conduct and analyse, relative to estimation; however, it does not allow for the same degree of clarity in our findings. By reporting confidence intervals (and highlighting uncertainty), we reduce the risk of committing one of the cardinal sins of consumer psychology: overgeneralisation. Furthermore, you may be surprised to learn that estimation is just as easy to conduct as significance testing, and even easier to report (because you can extrapolate greater meaning from your results).
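To illustrate that last point, here is a hypothetical sketch: the ratings are invented, and the large-sample normal approximation stands in for a proper t-distribution, but the interval takes essentially one line more to compute than the test statistic does:

```python
import math
from statistics import mean, stdev

# Hypothetical ratings from two experimental conditions
a = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3, 5.2, 4.7]
b = [4.4, 4.6, 4.2, 4.8, 4.5, 4.3, 4.7, 4.1]

diff = mean(a) - mean(b)
se = math.sqrt(stdev(a)**2 / len(a) + stdev(b)**2 / len(b))

z = diff / se                              # what significance testing reports
ci = (diff - 1.96 * se, diff + 1.96 * se)  # what estimation reports
print(f"difference = {diff:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

The test statistic collapses everything into a yes/no verdict, while the interval reports the same information plus the plausible range of the effect, which is exactly the added meaning estimation offers.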

Replication versus theoretical development

When you consider the lack of precision in our field, in conjunction with the magnitude of the problems of researcher degrees of freedom and publication bias, is it any wonder that so many replication attempts are unsuccessful? The issue of failed replications is then compounded further by the lack of theoretical development that takes place in our discipline, which creates additional problems. The incentive structure of the academic institution implies that success (in the form of promotion and grants) comes to those who publish a large number of high-quality papers (as judged by the journal in which they appear). As a result, we have a discipline that lacks both internal and external relevance, due to the multitude of standalone empirical findings that fail to address the full scope of consumer behaviour (Pham, 2013). In that sense, it seems to me that replication is at odds with theoretical development, when, in fact, the two should be working in tandem; that is, replication should guide theoretical development.

Over time, some of you may have observed (as I have) that single papers are now expected to “do more”. Papers will regularly report four or more experiments, in which they will identify an effect; perform a direct and/or conceptual replication; identify moderators and/or mediators and/or boundary conditions; and rule out alternative process accounts. I have heard criticism directed at this approach, usually from fellow PhD candidates, that there is an unfair expectation on the new generation of researchers to do more work to achieve what the previous generation did. In other words, that the seminal/classic papers in the field, upon which now-senior academics were awarded tenure, do less than what emerging and early career researchers are currently expected to do in their papers. I do not share this view that there is an issue of hypocrisy; rather, my criticism is that as the expectation that papers “do more” has grown, there is now less incentive for academics to engage in theoretical development. The “flashy” research is what gets noticed and, in turn, what gets its author(s) promoted and wins them grants. Why, then, would anyone waste their time trying to further develop an area of work that someone else has already covered so thoroughly – especially when, if you fail to replicate their basic effect, you will find it extremely difficult to publish in a flagship journal (where the “flashiest” research appears)?

This observation also raises the question: where has this expectation that papers “do more” come from? As other scientific fields (particularly the hard sciences) have reported more breakthroughs over time, I suspect that psychology has desired to keep up. The mind, however, in its intangibility, is too complex to allow for regular breakthroughs; there are simply too many variables that can come into effect, especially when behaviour is also brought into the equation. Such an issue is highlighted nowhere more clearly than in the case of behavioural priming. Yet, with the development of a general theory of priming, researchers can target their efforts at identifying the varied and complex “unknown moderators” of the phenomenon and, in turn, design experiments that are more likely to replicate (Cesario, 2014). Consequently, the expectation for single papers to thoroughly explain an entire process is removed – and our replications can then do what they’re supposed to: enhance precision and uncover truth.

The system is broken

The psychology field seems resistant to regressing to simpler papers that take the time to develop theory, and contribute to knowledge in a cumulative fashion. Reviewers continue to request additional experiments, rather than to demand greater clarity from reported studies (for example, in the form of effect sizes and confidence intervals), and/or to encourage further theoretical development. Put simply, there is an implicit assumption that papers need to be “determining” when, in fact, they should be “contributing”. As Cumming (2014: 23) argues, it is important that a study “be considered alongside any comparable past studies and with the assumption that future studies will build on its contribution.”
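To make concrete the kind of clarity reviewers could ask for, here is a minimal sketch in Python of reporting an effect size (Cohen’s d) with an approximate confidence interval, rather than a bare p-value. The function name and data are our own illustration, not from any particular paper:

```python
import statistics
from math import sqrt

def cohens_d_with_ci(group_a, group_b, z=1.96):
    """Cohen's d for two independent groups, with an approximate
    95% confidence interval (normal approximation)."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = statistics.mean(group_a), statistics.mean(group_b)
    s1, s2 = statistics.stdev(group_a), statistics.stdev(group_b)
    # Pooled standard deviation across the two groups
    sp = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Approximate standard error of d
    se = sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)
```

Reporting the interval alongside the point estimate lets later studies be “considered alongside any comparable past studies”, in Cumming’s sense, without demanding that any single study be determining.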

In that regard, the editorial/publication process is arguably the larger, underlying issue contributing (predominantly, though not necessarily solely) to the many problems afflicting academic research in psychology. But what is driving this issue? Could it be that the peer review process, which seems fantastic in theory, doesn’t work in practice? I believe that is certainly a possibility.

Something else I’ve come to learn throughout my PhD journey is that successful academic research requires mastery of several skills: you need to be able to plan your time; communicate your ideas clearly; think critically; explore issues from a “big picture” or macro perspective, as well as at the micro level; undertake conceptual development; design and execute studies; and be proficient at statistical analysis (assuming, of course, that you’re not an interpretive researcher). Interestingly, William Shockley, way back in 1957, posited that producing a piece of research involves clearing eight specific hurdles – and that these hurdles are essentially all equal. In other words, successful research calls for a researcher to be adept at each stage of the research process. However, in reality, it is often the case that we are very adept (sometimes exceptional) at a few aspects, and merely satisfactory at others. The aim of the peer review process is to correct or otherwise improve the areas we are less adept at, which should – theoretically – result in a strong (sometimes exceptional) piece of research. Multiple reviewers evaluate a manuscript in an attempt to overcome these individual shortfalls; yet, look at the state of the discipline! The peer review process is clearly not working.

I’m not advocating abandoning the peer review process; I believe it is one of the cornerstones of scientific progress. What I am proposing, however, is for an adjustment to the system – and I’m not the first to do so. What if we, as has been suggested, move to a system of pre-registration? What if credit for publications in such a system were two-fold, with some going towards the conceptual development (resulting in the registered study), and some going towards the analysis and write-up? Such a system naturally lends itself to specialisation, so, what if we expected less of our researchers? That is, what if we were free to focus on those aspects of research that we’re good at (whether that’s, for example, conceptual development or data analysis), leaving our shortfalls to other researchers? What if the peer review process became specialised, with experts in the literature reviewing the proposed studies, and experts in data analysis reviewing the completed studies? This system also lends itself to collaboration and, therefore, to further skill development, because the experts in a particular aspect of research are well-recognised. The PhD process would remain more or less the same under this system, as it would allow emerging researchers to identify – honestly – their research strengths and weaknesses, before specialising after they complete grad school. There are, no doubt, issues with this proposal that I have not thought of, but to me, it suggests a stronger and more effective peer review process than the current one.

A recipe for change

Unfortunately, I don’t believe these issues that I’ve outlined are going to change – at least not in a hurry, if the slow adoption of estimation techniques is anything to go by. For that reason, when I finish my PhD later this year, I will be leaving academia to pursue a career in market research, where obtaining truth from the data to deliver actionable insights to clients is of the utmost importance. Some may view this decision as synonymous with giving up, but it’s not a choice I’ve made lightly; I simply feel as though I have the opportunity to pursue a more meaningful career in research outside of academia – and I’m very much looking forward to the opportunities and challenges that lie ahead for me in industry.

For those who choose to remain in academia, it is your responsibility to promote positive change; that responsibility does not rest solely on the journals. It has been suggested that researchers boycott the flagship journals if they don’t agree with their policies – but that is really only an option for tenured professors, unless you’re willing to risk career self-sabotage (which, I’m betting, most emerging and early career researchers are not). The push for change, therefore, needs to come predominantly (though not solely) from senior academics, in two ways: 1) in research training, as advisors and supervisors of PhDs and post-docs; and 2) as reviewers for journals, and members of editorial boards. Furthermore, universities should offer greater support to their academics, to enable them to take the time to produce higher quality research that strives to discover the truth. Grant committees, also, may need to re-evaluate their criteria for awarding research grants, and focus more on quality and meaningful research, as opposed to research that is “flashy” and/or “more newsworthy”. And the next generation of academics (that is, the emerging and early career researchers) should familiarise themselves with these issues, so that they may make up their own minds about where they stand, how they feel, and how best to move forward; the future of the academic institution is, after all, in their hands.

 

Viewpoint: A Great Research Paper Explains A Lot, Not Everything

Recently, Kathleen Vohs of the University of Minnesota and colleagues published a paper on rituals, and we think it is a fantastic model for how to write a paper. We think this paper solves a lot of issues that have been discussed by the professors and writers here at InDecision about how to write and present research. In particular, it avoids certain research “sins”.

In case you are in a rush, or too busy with important ideas or data analyses to read this entire blog post (let alone the article), or would just rather go look at cat memes on Tumblr, below is a short version of this post, followed by a longer-form version.

Short Version

The Problem: Michel Pham and others have declared that one of the biggest research “sins” is to propose that a single phenomenon has a single process. This “full mediation quest,” as it is sometimes known, can often lead to research that lacks practical relevance and external validity, and at worst misrepresents reality.

The Solution: Vohs and colleagues’ paper on rituals studies an important issue, with strong external validity, and identifies the existence of a psychological phenomenon. They then provide strong evidence for a psychological process, but do not claim to have found an exclusive micro process for the phenomenon. In the general discussion (pages 14-17 of this link), the authors discuss how the phenomenon is most likely multiply determined – something Michel Pham, in his excellent seven sins InDecision blog post, argues we need to embrace more. In the end, Vohs and colleagues are able to provide that prized “theoretical contribution” of a psychological process without committing the specific research “sin” of overclaiming that the phenomenon is driven by just one process.

Of course, you can assert that sometimes we need larger, more nitty-gritty research than this. We’ve got no argument with you there, and we doubt Vohs would either – in fact, you can see other Vohs papers for examples of “bigger” theories. It should go without saying that not all research can be short Psychological Science reports like this one, but we think there is a great lesson to be learned from the approach of this one paper (in addition to its quality scientific contribution), and that lesson is that, whenever possible, researchers should be open about multiply determined phenomena and not be seduced by the “full mediation” quest.

Long Version

Alright, so what exactly is going on in this ritual paper?

In the paper, “four experiments tested the novel hypothesis that ritualistic behavior potentiates and enhances the enjoyment of ensuing consumption – an effect found for chocolates, lemonade, and even carrots.” It identifies two processes that explain why rituals that provide no objective change to the consumption item itself, such as making the same pattern of hand gestures before consuming the item, affect consumption enjoyment. In particular, they find that “a delay between a ritual and the opportunity to consume heightens enjoyment, which attests to the idea that ritual behavior stimulates goal-directed action to consume.” Further, they find rituals increase involvement in the experience.

As the paper continues, it takes a real phenomenon (rituals), establishes that something psychological is going on, and then provides some enlightenment about what that is. In addition, it provides managers with an intervention idea (e.g. put time between the ritual and the consumption). However, what truly won us over was the discussion section, where the authors note that “ritualized behavior likely encompasses several mechanisms” such as preparatory mindsets, symbolic meanings, social implications, palliative functions, and lay beliefs. Some researchers might see this “multiple mechanisms” account as weak, and so might attempt to hand-wave away alternative mechanisms in order to protect their precious “full mediation theoretical contribution.” Rather than reject the complications, Vohs and colleagues embrace them.

If you have ever heard a researcher talk about the endowment effect and try to claim that the entire endowment effect is simply one process, you know the problem we are alluding to. When researchers seek to claim that phenomena are driven by just one thing, they are for the most part wrong: science should be about nuance and openness, and this paper embraces that mindset.

Final Thoughts

With papers like this one (and there are tons of others out there, feel free to link some in the comments section), we can see that one does not need to boldly force a massive theoretical contribution and defend a single process against all others with the insecurity of a teenager and the ferocity of a honey badger. Instead one can approach things powerfully and humbly as Vohs and colleagues did, embracing the modern research movement where a single paper does not need to explain everything as long as it provides a quality contribution.

Troy Campbell is a PhD student at Duke University. To read more InDecision posts from Troy, click here. You can also visit his occasionally updated personal blog People Science, where Troy writes about everything from scientific methodologies to Batman.

And if that post was not enough for you, you can check out a related discussion that is even longer in this 2010 academic paper by Zhao, Lynch, and Chen. The paper examines the strengths of mediation analysis as well as the problem of full mediation and the problem of the mediation-as-theory assumption. 

Derek Rucker – Shaky Camera Interview on Doctoral Consortiums

We caught up with Professor Derek Rucker and brought our classic shaky camera to get a quick interview on his perspective about what a graduate student should get out of a doctoral consortium and academic conferences in general. Watch it, it’s like 90 seconds.

Viewpoint: How Scientists Can Get the Media’s Attention

The New York Times best-selling author and journalist Chris Mooney has made a career out of bringing science into the mainstream. His articles, such as The Science of Why We Don’t Believe Science, go extremely viral.

In a time when the Internet allows science to find the eyeballs of non-scientists like never before, a researcher wants his or her research to land on the desk of a journalist like Mooney. But how can you make that happen?

Mooney recently spoke to an audience from the Society for Personality and Social Psychology about how researchers can turn their academic headlines into news headlines. Here are a couple of his major tips, along with some commentary.

Caveat: In this post, we are sidestepping the question of “should scientists actually want their research to be in the media?” We generally think the answer to this question is yes. However, there are many nuances that make this a complicated issue and we will return to those issues in future posts.

Mooney Tip #1: “There’s nothing like a good figure – something people can quickly grasp and understand.”

Mooney explains the idea that simple graphs do really well online. For instance, take a look at this recent graph in a psychological article on “internet trolling” that helped propel this article to mega viral status. The headline: “Internet trolls really are horrible people – Machiavellianism, narcissism, psychopathy, sadism.”


Researchers need to make attractive graphs that can be exported into an article (or any one researcher’s PowerPoint slides, for that matter) without any changes. And by graphs, Mooney means graphs, not tables. He jokes that, “Journalists need graphics from you or you run the risk they’ll make the graphics themselves.” So if you want to go viral and protect the purity of your science in the public eye, put some graphs in that article. Just because you and ten of your colleagues can understand a 10-column x 12-row table in the blink of an eye does not mean the public – or even scientific journalists – can.
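An export-ready chart of this kind takes only a few lines with matplotlib. The condition labels and numbers below are hypothetical, purely for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render to a file without needing a display
import matplotlib.pyplot as plt

# Hypothetical condition means and 95% CI half-widths, for illustration only
conditions = ["Ritual", "No ritual"]
means = [5.4, 4.6]
errors = [0.3, 0.3]

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(conditions, means, yerr=errors, capsize=6, color=["#4878CF", "#AAAAAA"])
ax.set_ylabel("Enjoyment (1 = low, 7 = high)")
ax.set_title("Enjoyment by condition")
fig.tight_layout()
fig.savefig("figure1.png", dpi=300)  # high-resolution file ready to hand over
```

A 300 dpi PNG (or a vector PDF/SVG) can be dropped straight into an article or a slide deck, which is exactly the “no changes needed” property Mooney is asking for.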

Mooney Tip #2: “Turn a correlation into a percent.”

Graphs can make things go viral but so can a good statistic. Mooney explains that readers and journalists don’t think in correlations, but rather in quantities like percentages. People can be moved by startling statistics like “a 25% increase in health” or “40% increase in reported enjoyment” – these items are concrete and tangible.
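To make that translation concrete, here is a small sketch (the function is our own illustration, not Mooney’s) of two standard ways to turn a correlation into a percentage: variance explained, and the Binomial Effect Size Display of Rosenthal and Rubin (1982):

```python
def correlation_to_percents(r):
    """Translate a correlation coefficient r into two familiar
    percentage framings: variance explained (r squared), and the
    Binomial Effect Size Display (Rosenthal & Rubin, 1982), which
    reads r as a difference between two 'success rates' centred on 50%."""
    variance_explained = 100 * r ** 2
    besd_high = 50 + 100 * r / 2  # e.g. "65% improved with the treatment"
    besd_low = 50 - 100 * r / 2   # vs "35% improved without it"
    return variance_explained, besd_high, besd_low

# An r of .30 explains about 9% of the variance, or, framed as a BESD,
# a roughly 65% vs 35% split in "success rates" between the two groups.
var_pct, rate_high, rate_low = correlation_to_percents(0.3)
```

The BESD framing in particular gives journalists exactly the kind of concrete, tangible quantity Mooney describes.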


Mooney Tip #3: “Put your studies in context with other studies.”


Illustration: Jonathon Rosen

Journalists often practice “weight of the evidence” when deciding what scientific pieces to write about. Many will be unwilling to publish findings that are too far off from the majority of the scientific field. Thus, when positioning one’s finding to a journalist in an email or press release, it is important to demonstrate how the finding is largely supported by other research. Also, journalists need to know whether a finding should be presented as brand new or as providing evidence for an existing theory. It’s very difficult for journalists to figure this out themselves. Hell – that can be difficult even for us as scientists! Though there may be a few journalists out there who will run any headline, most journalists are interested in getting the facts straight, so scientists need to help them with that.

Mooney Tip #4: “I need to know when your study is coming out and I need to know first.”

Journalism is a competitive business and breaking a story is one of the biggest competitions in the game. Mooney recommends contacting journalists ahead of time. Many news articles have editors who can easily be contacted.

For instance Scientific American says at the conclusion of many of their articles:

“Are you a scientist who specializes in neuroscience, cognitive science, or psychology? And have you read a recent peer-reviewed paper that you would like to write about? Please send suggestions to Mind Matters editor Gareth Cook, […] at garethideas AT gmail.com or Twitter @garethideas.”

Mooney Tip #5: “Add Value.”

If a young researcher wants to develop a web presence, Mooney’s recommendation is to become a trusted brand that consistently provides a certain type of value. Readers must learn that they can rely on your blog/website/content stream for specific, continuous content.

I asked Mooney about how this is sort of anti-academic. Many of us like to do many things. We like to jump around and move on to new things, whether that’s deeper into one theory or bouncing between theories. We don’t like making the same points over and over again, and we are always preoccupied with the new and the cutting edge.

Mooney told me how he sometimes feels the same: after promoting his first book, he quickly got tired of talking about the same thing over and over again. But he has learned through his career that if you want your stuff to matter, you have to repeat yourself over and over again – that’s part of this job. Just look at how often Daniel Kahneman talks to the public about imperfect rationality and heuristic judgments! He has been doing it for his entire career, and that continued presence and quality message have made him famous and have unquestionably changed policy and the world.

For two great examples of young scientists who have developed platforms that people consistently come back to for information check out:

PsychYourMind.com – a place where a crew of graduate students talk about stuff in all the best manners suited for the Internet.

Very Bad Wizards Podcast – A podcast and a Tumblr about psychology and philosophy. We recommend this one with an “explicit” warning. Seriously, they talk about academics, but this isn’t like the Freakonomics podcast you listen to in your car with your conservative father.

Twitter also has quite a few examples of young scientists in our field who have built up bases of thousands of followers without yet being famous for their own research. Why? Because they are good at posting quality links daily, or at least very frequently.

———————————

Chris Mooney’s personal webpage and his amazon author page.

Troy Campbell’s personal webpage.