Viewpoint: Nudging Nudgers’ Nudges

Editor’s note: Ben Kozary, who you may already know from his articulate and thought-provoking post on why he has decided to leave academia, has become the latest recruit into the InDecision team, giving us a view of decision sciences from the other side of the world in Australia. We asked Ben to report on the first International Behavioural Insights Conference in Sydney from the perspective of someone moving from academia into industry – here he reflects on his first experience of the world of behavioural insights Out There. 

A few days ago, I read an interesting piece on the differences between behavioural economics and psychology. Truth be told, even after attending the inaugural Behavioural Exchange conference in Sydney on June 2 and 3, I’m still not sure what exactly the difference is. But does the distinction even matter? The academic in me screams, “YES!” but, outside of the ivory tower, it seems that most people aren’t concerned. Behavioural Exchange (hereafter referred to as bx2014) was, after all, a public policy conference – and as such, the emphasis of the conference was on actionable and applicable insights from the behavioural sciences.

It was the promise of these insights that drew me to bx2014. As a final year PhD candidate researching Consumer Psychology, I was looking to ground the heavily theoretical work I do in some concrete practical applications. My hope was that, in doing so, my work would take on greater meaning, even if only to me – not to mention that I’d be able to pad out the practical implications component of my dissertation. Additionally, as someone transitioning from academia into industry, I was curious as to what the transition was going to entail. Who are the sorts of people I’m going to be dealing with in industry, and how are they similar/different to academics? What value is placed on the skills and knowledge I have, and how might I be best able to apply myself? And how well received are the ideas that I’ve been exposed to throughout my time in academia? In this post, I hint at the answers to these questions but, as you may have guessed, they’re by no means definitive.

bx2014: an overview

Speakers at the event included academics, the majority of whom were from Harvard University; members of government, primarily from Australia but also from the US, UK, and Singapore; and businesspeople from around the world, including CEOs, consultants, and designers. For me, the first day seemed to focus more on government, whilst the second day was primarily about business – but regardless of the relevance (or not) to me, I found all of the sessions fascinating. Individual presenters and panels alike dealt with such issues as:

  • The importance and benefits of nudging
  • How governments can embrace, and have already undertaken, nudging
  • Opportunities, risks, and common challenges of nudging
  • The fundamentals of nudging, including data, design, and delivery
  • How the integration of findings from academia and experiences in business and government can make nudges more effective
  • Reflections and insights from business and academia on the application of nudges in the corporate world
  • The future of nudging and behavioural science

Nudging: is it just a fad?

At this point, you may have noticed that the term “nudging” seems to have been applied as a catchcry for any behavioural intervention. Overgeneralisation may be a common sin of consumer psychology researchers, but our thinking appears more nuanced than that of our industry counterparts. For instance, many of the initiatives suggested or discussed at bx2014 were based upon research describing numerous cognitive biases, including the sunk-cost fallacy, present bias, hindsight bias, confirmation bias, anchoring, and framing effects – but there was almost no mention of the intricacies of these biases, nor any real emphasis on the specific conditions under which they had been harnessed in particular successful behavioural interventions. Given that I was asked on more than ten separate occasions during the two days whether I’d read Kahneman’s Thinking, Fast and Slow (I haven’t, which apparently put me firmly in the minority of conference attendees), my concern here is that many of the attendees were searching for quick, easily applied solutions to issues affecting their stakeholders.

This overgeneralisation is dangerous ground to walk, because it flirts with the prospect of nudging becoming yet another apparent management or political panacea, when it’s anything but. Instead, we need to heed what Professor Cass Sunstein said in the first presentation of the conference: that effective nudging is about recognising individual differences. “Nudges are like GPS units: they tell you the most efficient, or ‘best’, route, but you don’t have to take it; you can go your own way and choose the scenic route, if you like.” In other words, nudges should preserve individual choice by not being overly paternalistic; this is what separates them from mandates. In that sense, I feel that Professor Sunstein was nudging us (if you will) not to overgeneralise.

Experimentation: “test, learn, adapt”

If you’re thinking that executing Professor Sunstein’s advice is easier said than done, you’re right – but that was also addressed at bx2014. One of the recurring points of the conference was the need for experimentation, despite how challenging it may be. Dr David Halpern, Chief Executive of the UK Behavioural Insights Team, told us to live by one simple principle: “Test, learn, adapt.” Another presenter suggested, “It’s better to say, ‘I don’t know,’ and then test something, than to skip trials and push ahead to a full roll-out on a hunch.”

We were also reminded that the most successful companies, especially in the technology sphere – Google, Facebook, Amazon, etc. – perpetually experiment. Randomised controlled trials (RCTs) are the gold standard of experimentation, but they’re not always viable. When that’s the case, we were advised to “do whatever experiments or tests you can, provided the costs don’t outweigh the potential benefits – but always strive to get the best data available.” And therein lie two of the foremost challenges of behavioural interventions in industry: funding and time. Fortunately for me, my time in academia has left me well versed in both of these issues…
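As a rough illustration of what the tail end of a “test, learn, adapt” cycle can look like, here is a minimal sketch in Python. The scenario and numbers are entirely hypothetical (a redesigned reminder letter versus the standard one), not drawn from any bx2014 case study, and the final comparison is just a two-proportion z-test:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical two-arm trial: standard letter vs. redesigned "nudge" letter;
# the outcome is whether the recipient paid a fine on time.
control_paid, control_n = 480, 4000   # 12.0% paid on time
nudge_paid, nudge_n = 560, 4000       # 14.0% paid on time

# Two-proportion z-test with a pooled standard error
p_pool = (control_paid + nudge_paid) / (control_n + nudge_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / nudge_n))
lift = nudge_paid / nudge_n - control_paid / control_n
z = lift / se
p_value = 2 * norm.sf(abs(z))

print(f"lift = {lift:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```

Of course, the arithmetic at the end is the easy part; random assignment, a pre-specified outcome, and an adequate sample size are what make the comparison meaningful – which is exactly why funding and time loom so large.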

Replication: it’s essential in industry, too

The issue of replication is one that we should all be familiar with by now – and it didn’t go unmentioned at bx2014, either. For instance, Professor Richard Thaler, from the University of Chicago, told us that managers and policy makers generally think they’re right, and they don’t like taking risks; what’s more, they are often too impatient to run experiments, and don’t see the point of replication, with their philosophy being, “It worked already, so why do we need to spend more time and money to test it again?” This problem is an obvious one, but it can be overcome with education and training.

A more serious problem with replication was highlighted during the Design breakout session I attended on the afternoon of the first day, where one of the presenters said, “Relative to hard sciences, social science is difficult, because the results will not always replicate. You can implement a nudge or a system of some sort as an effective intervention, but in 6-12 months’ time (or maybe more), people might have adapted and changed their behaviours such that it no longer works – and therefore won’t replicate in any RCTs or experiments you run.” From an academic standpoint, I find this idea intriguing, because it’s something that we rarely consider; we tend to take the more general view that, if an effect is real, it will replicate. I’m yet to hear people’s adaptability offered as a reason for some of the recent failures for studies to replicate – and I’m not saying that it’s necessarily a legitimate reason, but it does highlight something that we as researchers risk forgetting: that there are real people behind our statistics, and they can be unpredictable and subject to change.

Big data: how do we use it?

On the second day, I attended the Data breakout session, during which several interesting points were made. The focus of the session was on big data, which Dr James Guszcza, of the Deloitte Analytics Institute in Singapore, told us referred to predictive analytics and modelling. These models, he said, can point us in the right direction, and tell us who to target our interventions to, but they don’t tell us how to prompt the desired behaviour change. For that, behavioural insights are required; thus, he recommended that behavioural insights and predictive modelling be fused, because – to echo Professor Sunstein – solutions will often need to be nuanced and individually focused. Dr Guszcza also advised that we be flexible with our data, and be open to the possibility that it can be useful in ways we wouldn’t previously have imagined. “Old” datasets are particularly useful in this regard, he said, so we should also be mindful of “digital exhaustion” (that is, the deletion of older data). “With today’s storage capabilities, you shouldn’t need to delete anything simply because it’s old.”

Collaboration in nudging: how academia and industry can work together

Strangely, the importance of collaboration between academia and industry wasn’t strongly highlighted at the conference; however, one presenter did note its significance. For collaboration to be effective, he said, it’s a matter of recognising each other’s needs. That means that academics should look at questions important to organisations, and organisations should allow academics to publish – especially given the type of rich data they have access to. Furthermore, collaboration could help solve what was highlighted as a critical issue affecting research into behavioural insights. That is, Professor Max Bazerman, of Harvard University, described how the judgement and decision-making field originated decades ago under the notion that, “If we understand what’s wrong with the human mind, we can fix it – but this approach is flawed. Instead, we should focus on understanding and accepting that this is the way the world works, and therefore we can learn to adapt and be effective.”

Final thoughts

At the conclusion of the conference, we were asked to fill out a short questionnaire. One of the questions asked us to describe bx2014 in one sentence; I wrote: “Nudging nudgers’ nudges”, because I felt that neatly summed up the notion that most people were there to learn how to implement more effective behavioural interventions (plus, my brain was fried after an intense two days, not to mention punished by the more than a few drinks I downed at the reception the night before…). But, the truth is, this conference can’t be compressed into a sentence; the ideas are just too big. So, with that in mind, I’d like now to share with you a few thought-provoking ideas that I jotted down over the two days:

If I were young and wanted to start a business, I would start a choice engine, because they will do for other industries what travel websites did for that industry. The amount of data emerging and becoming available is monumental. -Professor Richard Thaler, University of Chicago

Nudges lead people to engage in behaviour – but, as a psychologist, I’m interested in the outcome beyond that behaviour; for example, are people happy or unhappy? -Professor Mike Norton, Harvard University

When thinking about nudges, consider this piece from the late author, David Foster Wallace: There are these two young fish swimming along, and they happen to meet an older fish swimming the other way, who nods at them and says, “Morning, boys, how’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, “What the hell is water?” Remember: there will always be water – and there will always be nudges, even if we don’t realise they’re there. We must open our eyes to the extensive possibilities of nudging. -Professor Cass Sunstein, Harvard University

And, finally:

Done is better than perfect. -Mia Garlick, Head of Policy Australia and New Zealand, Facebook

So, in the interests of applying something I learned at bx2014, I’m calling this post done. It’s not perfect – but as several speakers remarked at the conference, satisficing is better than optimising. And, at the end of the day, I think that’s a pretty good example of behavioural economics in action.

Viewpoints: “Becoming a Professor” Podcasts

On InDecision Blog we write a lot about how to be the “ideal candidate” and about the “ideal advice.” While advice is great, there’s something special about hearing first-hand about the struggles and personal journeys people have been through – stories that can enlighten those both inside and outside academia.

As the professor job market season dawns this year, we wanted to share four unique stories about four young scholars’ recent journeys on the job market. If you’ve ever wanted an honest behind-the-scenes look, this is it: four young scholars explain their non-traditional paths with frank honesty and in personally revealing ways. These podcasts will give you an opportunity to connect with their stories and hear their views on research and life outside it. You’ll also get the perspective of people who chose not to take the “top 20” school path – a perspective we have not covered before, despite the fact that the majority of the field does not reside in the top 20 schools.

These short podcasts are hosted by InDecision contributor Troy Campbell, who is starting a series of InDecision podcasts aimed at both the professional and personal sides of research. Note that the conversations are with people who got placements in business schools – particularly in marketing. If you are a non-academic, we have ordered the podcasts by general topical appeal. All podcasts can be downloaded to be listened to offline (e.g. during a workout).

Jim Mourey
Being a presenter, being true to yourself, quality of life, and personally valuing teaching.
PhD: University of Michigan
Professorship: DePaul University

Rob Smith
Choosing not to go to the top schools, quality of life, and the importance of teaching.
PhD: University of Michigan
Professorship: Ohio State University

Caroline Roux
Choosing a different path: policy research and being a different type of scholar.
PhD: Northwestern University
Professorship: Concordia University, Canada

Adrian Camilleri
Being interdisciplinary, reflecting on your identity, and dealing with marketing interviews.
Post-doc: Duke University
Professorship: RMIT University, Australia
Coming soon.

—————-

Your podcast host Troy Campbell is a PhD student at Duke University and a hopeful on the job market this year. For some of his viewpoints and his always animated InDecision Blog reporting, click here.

Viewpoint: Life as an Assistant Professor

We recently had a chat with Joseph Redden about his happy life as an assistant professor. From this conversation, we found out that Professor Redden had a lot of answers to the questions that stressed-out graduate students often have about being a young professor, so we asked him to do a Q&A with a representative stressed-out graduate student.

So far, we’ve been interviewing the established greats and have focused a lot on life after tenure. But as a blog dedicated in part to helping young researchers find their way, we thought it would be good to have some more posts about your most immediate concerns and fears. So here’s Joseph Redden with some guidance and comfort, in the first of many upcoming InDecision Blog posts on the two topics that are on all young researchers’ minds: the job market and being a quality young faculty member. Joseph Redden is an Assistant Professor of Marketing at the Carlson School of Management at the University of Minnesota, and he is an emerging expert on the topic of satiation.


Hi Joe, I am a stressed-out student in graduate school – or actually, I’m not just stressed but also worried and afraid. I am worried that my impostor syndrome is not just a syndrome but real: I look at the top people on the job market recently and I just feel inadequate. I look at the greats in our field, and their theoretical might and publishing prowess make me feel like I’ll never make it in the field; and even if I can get a few publications, I am worried my work won’t matter. So if you don’t mind, I have a bunch of questions...

SOS: How stressed are you? Do you have free time?

Professor Redden: Like any academic or human for that matter, I feel like my life has plenty of stress. That being said, I do find that my stress seems to diminish a bit every year. I like to think that is not just adaptation, but rather a reflection of my active efforts to manage stress in two ways. First, I’ve focused my time more on problems that really pique my interest and leverage my areas of expertise. Second, I make sure some of the time “savings” I get from being more productive translates into free time for me to enjoy. I personally find this last point the most attractive aspect of an academic life.

SOS: What’s daily life like for you?

Professor Redden: Like any other academic, I don’t really have a prototypical day. Some days are mostly teaching, others mostly writing, some mostly reading, while others might be service. Even so, I really try to keep a regular schedule (a 9-to-5, if you will) to avoid burnout. If you don’t do this, I think it’s very easy to burn yourself out, because there is always more we could do on every research project or teaching topic. I find it helpful to set a goal for what I want to get done in a week. If I happen to get lucky and get things done quickly, then I might leave early. If instead things take quite a bit longer, then that becomes a longer week (and possibly weekend). Over time, I’ve found that I’ve become much better calibrated at setting what is reasonable for a week.

SOS: Do you ever have fun?

Professor Redden: Of course. Otherwise, what is the point? In fact, I explicitly carve out time for fun. As an example, I often teach on Wednesdays until noon and then go catch an early movie. Interestingly, I have found this increases my productivity, as I come back Thursday morning refreshed and ready to work. I think everyone should carve out some of these hobbies to take advantage of the flexibility academia offers. For me, this is movies, tennis leagues, my kids’ sports teams, etc.

SOS: How do you manage your choice of projects?

Professor Redden: That is a great question. I found that early in my career I tended to work on anything I found interesting. This led me to jump from project to project chasing after the “shiny new object”. You can imagine how this hampered my productivity. I now try to decide what enters my portfolio in three stages. First, I make sure that any new idea leverages an area of my expertise. I want to avoid one-off projects that require me to learn an entirely new literature each time. Second, I go ahead and write a potential contribution paragraph to see whether this idea could be published in an A-journal. The worst outcome is for an idea to work perfectly yet have no chance to be published. Third, I try to run a quick study to see if the idea seems promising at all. If it works, I try to quickly replicate it so I’ll know I have something real. If it fails at first, I’ll give it one more shot if I think the idea is super promising. If it works at first and fails on the replication, then I’ll often give it one more go as a sort of tiebreaker. I’ve found this approach has really helped me weed out effects that will be difficult to establish and understand.

SOS: What do I really need to do to get tenure in this field?

Professor Redden: The answer to this question is both ambiguous and varied across schools. At my university, the guidance is centered on achieving distinction in your field. Of course, this could mean something very different for everyone. Personally, I tried to make sure that two things would hold true. First, that there was a topic (satiation in my case) such that I would be one of the first few names mentioned if one asked who was doing research in that area. Second, that it worked the other way such that when asked what I researched people would have a consistent answer. I think if both of those are true then you will have achieved distinction in your field.  

SOS: How do you choose collaborators?

Professor Redden: A great deal of this is serendipity so I’m not sure there is a conscious effort to “choose” collaborators. I can say that the collaborators I want are those that share my interests, possess complementary skills, and make research fun. I’d say the last one, having fun, is by far the most important.

SOS: I am worried that only the Thalers and Loewensteins of the world will make a difference. I know now that I’ll never be them, so I am thinking, what’s the point – what will I really do for this field?

Professor Redden: It matters how you define making a difference. If you consider yourself a success only if you make a difference for an entire field, then that is a really high standard for nearly anyone. I like to think of making a difference at a more micro level. Think about how your presentation at a conference may affect how a listener writes their paper, how a conversation may lead a doctoral student to their thesis idea, how teaching a topic may spark a student’s interest, how seemingly minor coverage of a paper may affect a marketer at a company (and hence millions of people). I believe that many of these unknown differences are happening — as long as we work on interesting problems.

If you have any questions for future interviews, let us know at indecisionblogging@gmail.com

Interview by Troy Campbell

Viewpoint: Why I’m Leaving Academia

This week we’re featuring a guest post from Ben Kozary, a PhD candidate at the University of Newcastle in Australia. After getting to know Ben at various conferences over the past year, the InDecision team was disappointed to hear about his decision to leave academia – partly because he’s an excellent and passionate researcher, partly because we wouldn’t benefit from his jovial company at future conferences! However, his reasons for leaving echoed many dinner conversations we’ve had with fellow PhD students so we asked him to write about his experience and his decision to move to industry. Over to Ben…

To say I’ve learnt a lot during my PhD candidature would be an understatement. From a single blank page, I now know more than most people in the world about my particular topic area. I understand the research process: from planning and designing a study; to conducting it; and then writing it up clearly – so that readers may be certain about what I did, how I did it, what I found, and why it’s important. I’ve met a variety of people from around the world, with similar interests and passions to me, and forged close friendships with many of them. And I’ve learnt that academia might well be the best career path in the world. After all, you get to choose your own research area; you have flexible working hours; you get to play around with ideas, concepts and data, and make new and often exciting discoveries; and you get to attend conferences (meaning you get to travel extensively, and usually at your employer’s expense), where you can socialise (often at open bars) under the guise of “networking”. Why, then, you might be wondering, would I want to leave all of that behind?

My journey through the PhD program has been fairly typical; I’ve gone through all of the usual stages. I’ve been stressed in the lead-up to (and during) my proposal defence. I’ve had imposter syndrome. And I’ve been worried about being scooped, and/or finding “that paper”, which presents the exact research I’m doing, but does it better than me. But now, as I begin my final year of the four year Australian program, I’m feeling comfortable with, and confident in, the work I’ve produced so far in my dissertation. And yet, I’m also disillusioned – because, for all of its positives, I’ve come to see academia as a broken institution.

That there are problems facing academic research is not news, especially in psychology. Stapel and Smeesters, researcher degrees of freedom and bias, (the lack of) statistical power and precision, the “replication crisis” and “theoretical amnesia”, social and behavioural priming: the list goes on. However, these problems are not altogether removed from one another; in fact, they highlight what I believe is a larger, underlying issue.

Academic research is no longer about a search for the truth

Stapel and Smeesters are two high profile examples of fraud, which represents an extreme exploitation of researcher degrees of freedom. But what makes any researcher “massage” their data? The bias towards publishing only positive results is no doubt a driving force. Does that excuse cases of fraud? Absolutely not. My point, however, is that there are clear pressures on the academic community to “publish or perish”. Consequently, academic research is largely an exercise in career development and promotion, and no longer (if, indeed, it ever was) an objective search for the truth.

For instance, the lack of statistical power evident in our field has been known for more than fifty years, with Cohen (1962) first highlighting the problem, and Rossi (1990) and Maxwell (2004) providing further prompts. Additionally, Cohen (1990; 1994) reminded us of the many issues associated with null-hypothesis significance testing – issues that were raised as far back as 1938 – and yet, it still remains the predominant form of data analysis for experimental researchers in the psychology field. To address these issues, Cohen (1994: 1002) suggested a move to estimation:

“Everyone knows” that confidence intervals contain all the information to be found in significance tests and much more. […] Yet they are rarely to be found in the literature. I suspect that the main reason they are not reported is that they are so embarrassingly large! But their sheer size should move us toward improving our measurement by seeking to reduce the unreliable and invalid part of the variance in our measures (as Student himself recommended almost a century ago). Also, their width provides us with the analogue of power analysis in significance testing – larger sample sizes reduce the size of confidence intervals as they increase the statistical power of NHST. 

Twenty years later, and we’re finally starting to see some changes. Unfortunately, the field now has to suffer the consequences of being slow to change. Even if all our studies were powered at the conventional level of 80% (Cohen, 1988; 1992), they would still be imprecise; that is, the width of their 95% confidence intervals would be approximately ±70% of the point estimate or effect size (Goodman and Berlin, 1994). In practical terms, that means that if we used Cohen’s d as an effect size metric (for the standardised difference between two means), and we found that it was “medium” (that is, d = 0.50), the 95% confidence interval would range from 0.15 to 0.85. This is exactly what Cohen (1994) was talking about when he said the confidence intervals in our field are “so embarrassingly large”: in this case, the interval tells us that we can be 95% confident the true effect size is potentially smaller than “small” (0.20), larger than “large” (0.80), or somewhere in between. Remember, however, that many of the studies in our field are underpowered, which makes the findings even more imprecise than what is illustrated here; that is, the 95% confidence intervals are even wider. And so, I wonder: How many papers have been published in our field in the last twenty years, while we’ve been slow to change? And how many of these papers have reported results at least as meaningless as this example?
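To make that arithmetic concrete, here is a minimal sketch in Python. The sample sizes are assumed (roughly the 64 participants per group needed for 80% power at d = 0.50), and the standard error uses the usual large-sample approximation for Cohen’s d:

```python
import math

# Assumed illustration: a two-group design powered at 80% for d = 0.50,
# which requires roughly n = 64 participants per group.
n1 = n2 = 64
d = 0.50

# Common large-sample approximation for the standard error of Cohen's d
se_d = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))

# 95% confidence interval
lower = d - 1.96 * se_d
upper = d + 1.96 * se_d
print(f"d = {d:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")  # roughly [0.15, 0.85]
```

Shrinking that interval to something genuinely informative requires either much larger samples or more reliable measures – which is precisely Cohen’s point.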

I suspect that part of the reason for the slow adoption of estimation techniques is the uncertainty they bring to the data. Significance testing is characterised by dichotomous thinking: an effect is either statistically significant or it is not. As a result, significance testing is seen as easier to conduct and analyse than estimation – but it does not allow for the same degree of clarity in our findings. By reporting confidence intervals (and highlighting uncertainty), we reduce the risk of committing one of the cardinal sins of consumer psychology: overgeneralisation. Furthermore, you may be surprised to learn that estimation is just as easy to conduct as significance testing, and even easier to report (because you can extract greater meaning from your results).

Replication versus theoretical development

When you consider the lack of precision in our field, in conjunction with the magnitude of the problems of researcher degrees of freedom and publication bias, is it any wonder that so many replication attempts are unsuccessful? The issue of failed replications is then compounded further by the lack of theoretical development that takes place in our discipline, which creates additional problems. The incentive structure of the academic institution means that success (in the form of promotion and grants) comes to those who publish a high number of high-quality papers (as determined by the journal in which they are published). As a result, we have a discipline that lacks both internal and external relevance, due to the multitude of standalone empirical findings that fail to address the full scope of consumer behaviour (Pham, 2013). In that sense, it seems to me that replication is at odds with theoretical development, when, in fact, the two should be working in tandem; that is, replication should guide theoretical development.

Over time, some of you may have observed (as I have) that single papers are now expected to “do more”. Papers will regularly report four or more experiments, in which they will identify an effect; perform a direct and/or conceptual replication; identify moderators and/or mediators and/or boundary conditions; and rule out alternative process accounts. I have heard criticism directed at this approach, usually from fellow PhD candidates, that there is an unfair expectation on the new generation of researchers to do more work to achieve what the previous generation did. In other words, that the seminal/classic papers in the field, upon which now-senior academics were awarded tenure, do less than what emerging and early career researchers are currently expected to do in their papers. I do not share this view that there is an issue of hypocrisy; rather, my criticism is that as the expectation that papers “do more” has grown, there is now less incentive for academics to engage in theoretical development. The “flashy” research is what gets noticed and, in turn, what gets its author(s) promoted and wins them grants. Why, then, would anyone waste their time trying to further develop an area of work that someone else has already covered so thoroughly – especially when, if you fail to replicate their basic effect, you will find it extremely difficult to publish in a flagship journal (where the “flashiest” research appears)?

This observation also raises the question: where has this expectation that papers “do more” come from? As other scientific fields (particularly the hard sciences) have reported more breakthroughs over time, I suspect that psychology has felt pressure to keep up. The mind, however, in its intangibility, is too complex to allow for regular breakthroughs; there are simply too many variables that can come into effect, especially when behaviour is also brought into the equation. Nowhere is this issue highlighted more clearly than in the case of behavioural priming. Yet, with the development of a general theory of priming, researchers can target their efforts at identifying the varied and complex “unknown moderators” of the phenomenon and, in turn, design experiments that are more likely to replicate (Cesario, 2014). Consequently, the expectation for single papers to thoroughly explain an entire process is removed – and our replications can then do what they’re supposed to: enhance precision and uncover truth.

The system is broken

The psychology field seems resistant to regressing to simpler papers that take the time to develop theory, and contribute to knowledge in a cumulative fashion. Reviewers continue to request additional experiments, rather than to demand greater clarity from reported studies (for example, in the form of effect sizes and confidence intervals), and/or to encourage further theoretical development. Put simply, there is an implicit assumption that papers need to be “determining” when, in fact, they should be “contributing”. As Cumming (2014: 23) argues, it is important that a study “be considered alongside any comparable past studies and with the assumption that future studies will build on its contribution.”

In that regard, it would seem that the editorial/publication process is arguably the larger, underlying issue contributing (predominantly, though not necessarily solely) to the many problems afflicting academic research in psychology. But what is driving this issue? Could it be that the peer review process, which seems fantastic in theory, doesn’t work in practice? I believe that is certainly a possibility.

Something else I’ve come to learn throughout my PhD journey is that successful academic research requires mastery of several skills: you need to be able to plan your time; communicate your ideas clearly; think critically; explore issues from a “big picture” or macro perspective, as well as at the micro level; undertake conceptual development; design and execute studies; and be proficient at statistical analysis (assuming, of course, that you’re not an interpretive researcher). Interestingly, William Shockley, way back in 1957, posited that producing a piece of research involves clearing eight specific hurdles – and that these hurdles are essentially all equal. In other words, successful research calls for a researcher to be adept at each stage of the research process. However, in reality, it is often the case that we are very adept (sometimes exceptional) at a few aspects, and merely satisfactory at others. The aim of the peer review process is to correct or otherwise improve the areas we are less adept at, which should – theoretically – result in a strong (sometimes exceptional) piece of research. Multiple reviewers evaluate a manuscript in an attempt to overcome these individual shortfalls; yet, look at the state of the discipline! The peer review process is clearly not working.

I’m not advocating abandoning the peer review process; I believe it is one of the cornerstones of scientific progress. What I am proposing, however, is for an adjustment to the system – and I’m not the first to do so. What if we, as has been suggested, move to a system of pre-registration? What if credit for publications in such a system were two-fold, with some going towards the conceptual development (resulting in the registered study), and some going towards the analysis and write-up? Such a system naturally lends itself to specialisation, so, what if we expected less of our researchers? That is, what if we were free to focus on those aspects of research that we’re good at (whether that’s, for example, conceptual development or data analysis), leaving our shortfalls to other researchers? What if the peer review process became specialised, with experts in the literature reviewing the proposed studies, and experts in data analysis reviewing the completed studies? This system also lends itself to collaboration and, therefore, to further skill development, because the experts in a particular aspect of research are well-recognised. The PhD process would remain more or less the same under this system, as it would allow emerging researchers to identify – honestly – their research strengths and weaknesses, before specialising after they complete grad school. There are, no doubt, issues with this proposal that I have not thought of, but to me, it suggests a stronger and more effective peer review process than the current one.

A recipe for change

Unfortunately, I don’t believe these issues that I’ve outlined are going to change – at least not in a hurry, if the slow adoption of estimation techniques is anything to go by. For that reason, when I finish my PhD later this year, I will be leaving academia to pursue a career in market research, where obtaining truth from the data to deliver actionable insights to clients is of the utmost importance. Some may view this decision as synonymous with giving up, but it’s not a choice I’ve made lightly; I simply feel as though I have the opportunity to pursue a more meaningful career in research outside of academia – and I’m very much looking forward to the opportunities and challenges that lie ahead for me in industry.

For those who choose to remain in academia, it is your responsibility to promote positive change; that responsibility does not rest solely on the journals. It has been suggested that researchers boycott the flagship journals if they don’t agree with their policies – but that is really only an option for tenured professors, unless you’re willing to risk career self-sabotage (which, I’m betting, most emerging and early career researchers are not). The push for change, therefore, needs to come predominantly (though not solely) from senior academics, in two ways: 1) in research training, as advisors and supervisors of PhDs and post-docs; and 2) as reviewers for journals, and members of editorial boards. Furthermore, universities should offer greater support to their academics, to enable them to take the time to produce higher quality research that strives to discover the truth. Grant committees, also, may need to re-evaluate their criteria for awarding research grants, and focus more on quality and meaningful research, as opposed to research that is “flashy” and/or “more newsworthy”. And the next generation of academics (that is, the emerging and early career researchers) should familiarise themselves with these issues, so that they may make up their own minds about where they stand, how they feel, and how best to move forward; the future of the academic institution is, after all, in their hands.

 

Viewpoint: A Great Research Paper Explains A Lot, Not Everything

Recently, Kathleen Vohs of the University of Minnesota and colleagues published a paper on rituals, and we think it is a fantastic model for how to write a paper. We think this paper solves a lot of issues that have been discussed by the professors and writers here at InDecision about how to write and present research. In particular, it avoids certain research “sins”.

In case you are in a rush, or too busy with important ideas or data analyses to read this entire blog post (and even more so the article), or would just rather go look at cat memes on Tumblr, below is a short version of this post, followed by a longer version.

Short Version

The Problem: Michel Pham and others have declared that one of the biggest research “sins” is to propose that a single phenomenon has a single process. This “full mediation quest,” as it is sometimes known, can often lead to research that lacks practical relevance and external validity, and at worst misrepresents reality.

The Solution: Vohs and colleagues’ paper on rituals studies an important issue, with strong external validity, and identifies the existence of a psychological phenomenon. They then provide strong evidence for a psychological process, but do not claim to have found an exclusive micro process for the phenomenon. In the general discussion (pages 14-17 of this link), the authors discuss how the phenomenon is most likely multiply determined – something Michel Pham, in his excellent 7 sins InDecision Blog post, argues we need to embrace more. In the end, Vohs and colleagues are able to provide that prized “theoretical contribution” of a psychological process without committing the specific research “sin” of overclaiming that the phenomenon is driven by just one process.

Of course, you can assert that sometimes we need larger, more nitty-gritty research than this. We’ve got no argument with you there, and we doubt Vohs would either – in fact, you can see other Vohs papers for examples of “bigger” theories. It should go without saying that not all research can be short Psychological Science reports like this one, but we think there is a great lesson to be learned from the approach of this one paper (in addition to its quality scientific contribution), and that lesson is that, whenever possible, researchers should be open about multiply determined phenomena and not be seduced by the “full mediation” quest.

Long Version

Alright, so what exactly is going on in this ritual paper?

In the paper, “four experiments tested the novel hypothesis that ritualistic behavior potentiates and enhances the enjoyment of ensuing consumption – an effect found for chocolates, lemonade, and even carrots.” It identifies two processes that explain why rituals that provide no objective change to the consumption item itself, such as making the same pattern of hand gestures before consuming the item, affect consumption enjoyment. In particular, they find that “a delay between a ritual and the opportunity to consume heightens enjoyment, which attests to the idea that ritual behavior stimulates goal-directed action to consume.” Further, they find rituals increase involvement in the experience.

As the paper continues, it takes a real phenomenon (rituals), establishes that something psychological is going on, and then provides some enlightenment about what that is. In addition, it provides managers with an intervention idea (e.g. put time between the ritual and the consumption). However, what truly won us over was the discussion section, where the authors note that “ritualized behavior likely encompasses several mechanisms” such as preparatory mindsets, symbolic meanings, social implications, palliative functions, and lay beliefs. Some researchers might see this “multiple mechanisms” account as weak, and so might meekly attempt to hand-wave away alternative mechanisms in order to protect their precious “full mediation theoretical contribution.” Rather than reject the complications, Vohs and colleagues embrace them.

If you have ever heard a researcher talk about the endowment effect and try to claim that the entire endowment effect is simply one process, you know the problem we are alluding to. When researchers claim that phenomena are driven by just one thing, they are, for the most part, wrong: science should be about nuance and openness, and this paper embraces that mindset.
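For readers who want to see what this looks like in the statistics, here is a minimal, hypothetical sketch of a bootstrapped indirect-effect (mediation) test in the spirit of Zhao, Lynch, and Chen (2010). The data are simulated, not Vohs and colleagues’; the point is simply that even when the indirect path through a mediator is reliable, a direct effect often remains – which is what a multiply determined phenomenon looks like in a mediation analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated (hypothetical) data: X = ritual vs. no ritual, M = involvement, Y = enjoyment.
# X drives the mediator, but it also affects Y directly -- a multiply determined effect.
n = 500
x = rng.integers(0, 2, n).astype(float)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.5 * x + rng.normal(size=n)

def paths(x, m, y):
    a = np.polyfit(x, m, 1)[0]                                  # X -> M
    design = np.column_stack([m, x, np.ones(len(x))])
    b, c_prime = np.linalg.lstsq(design, y, rcond=None)[0][:2]  # M -> Y and X -> Y, estimated jointly
    return a * b, c_prime                                       # indirect effect, direct effect

# Percentile bootstrap of both effects
boot = np.array([paths(x[idx], m[idx], y[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(2000))])
for label, col in [("indirect effect (a*b)", 0), ("direct effect (c')", 1)]:
    low, high = np.percentile(boot[:, col], [2.5, 97.5])
    print(f"{label}: 95% CI [{low:.2f}, {high:.2f}]")
```

When both intervals exclude zero, the honest conclusion is “partial mediation, probably multiple mechanisms” – not “we found the process.”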

Final Thoughts

With papers like this one (and there are tons of others out there, feel free to link some in the comments section), we can see that one does not need to boldly force a massive theoretical contribution and defend a single process against all others with the insecurity of a teenager and the ferocity of a honey badger. Instead one can approach things powerfully and humbly as Vohs and colleagues did, embracing the modern research movement where a single paper does not need to explain everything as long as it provides a quality contribution.

Troy Campbell is a PhD student at Duke University. To read more InDecision posts from Troy, click here. You can also visit his occasionally-updated personal blog People Science, where Troy writes about everything from scientific methodologies to Batman.

And if that post was not enough for you, you can check out a related and even longer discussion in this 2010 academic paper by Zhao, Lynch, and Chen. The paper examines the strengths of mediation analysis, as well as the problem of full mediation and the problem of the mediation-as-theory assumption.

Viewpoint: How Scientists Can Get the Media’s Attention

The New York Times best-selling author and journalist Chris Mooney has made a career out of bringing science into the mainstream. His articles, such as The Science of Why We Don’t Believe Science, go extremely viral.

In a time when the Internet allows science to find the eyeballs of non-scientists like never before, a researcher wants his or her research to land on the desk of a journalist like Mooney. But how can you make that happen?

Mooney recently spoke to an audience from the Society for Personality and Social Psychology about how researchers can turn their academic headlines into news headlines. Here are a few of his major tips, along with some commentary.

Caveat: In this post, we are sidestepping the question of “should scientists actually want their research to be in the media?” We generally think the answer to this question is yes. However, there are many nuances that make this a complicated issue, and we will return to those issues in future posts.

Mooney Tip #1: “There’s nothing like a good figure – something people can quickly grasp and understand.”

Mooney explains the idea that simple graphs do really well online. For instance, take a look at this recent graph in a psychology article on “internet trolling” that helped propel the article to mega-viral status. The headline: “Internet trolls really are horrible people – Machiavellianism, narcissism, psychopathy, sadism.”


Researchers need to make attractive graphs that can be exported into an article (or any one researcher’s PowerPoint slides, for that matter) without any changes. And by graphs, Mooney means graphs, not tables. He jokes that, “Journalists need graphics from you or you run the risk they’ll make the graphics themselves.” So if you want to go viral and protect the purity of your science in the public eye, put some graphs in that article. Just because you and ten of your colleagues can understand a 10-column by 12-row table in the blink of an eye does not mean the public – or even science journalists – can.

Mooney Tip #2: “Turn a correlation into a percent.”

Graphs can make things go viral, but so can a good statistic. Mooney explains that readers and journalists don’t think in correlations, but rather in quantities like percentages. People can be moved by startling statistics like “a 25% increase in health” or “a 40% increase in reported enjoyment” – these items are concrete and tangible.
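One simple way to do that translation – a sketch of just one convention, the binomial effect size display, with a made-up correlation – is to re-express r as the gap between two “success rates” centred on 50%:

```python
# Hypothetical illustration: re-express a correlation as two "success rates"
# using the binomial effect size display (one convention among several).
r = 0.30  # assumed correlation between receiving an intervention and a good outcome

success_with = 0.50 + r / 2      # implied success rate with the intervention
success_without = 0.50 - r / 2   # implied success rate without it

print(f"r = {r:.2f} reads as a jump from {success_without:.0%} to {success_with:.0%}")
```

Whichever convention you use, the point is Mooney’s: “from 35% to 65%” travels much further with readers than “r = .30”.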


Mooney Tip #3: “Put your studies in context with other studies.”


Illustration: Jonathon Rosen

Journalists often practice “weight of the evidence” reporting when deciding what scientific pieces to write about. Many will be unwilling to publish findings that are too far off from the majority view of the scientific field. Thus, when positioning one’s finding to a journalist in an email or press release, it is important to demonstrate how the finding is largely supported by other research. Also, journalists need to know whether a finding should be presented as brand new or as providing evidence for an existing theory. It’s very difficult for journalists to figure this out themselves. Hell – that can be difficult even for us as scientists! Though there may be a few journalists out there who will run any headline, most journalists are interested in getting the facts straight, so scientists need to help them with that.

Mooney Tip #4: “I need to know when your study is coming out and I need to know first.”

Journalism is a competitive business and breaking a story is one of the biggest competitions in the game. Mooney recommends contacting journalists ahead of time. Many news outlets have editors who can easily be contacted.

For instance, Scientific American says at the conclusion of many of its articles:

“Are you a scientist who specializes in neuroscience, cognitive science, or psychology? And have you read a recent peer-reviewed paper that you would like to write about? Please send suggestions to Mind Matters editor Gareth Cook, […] at garethideas AT gmail.com or Twitter @garethideas.”

Mooney Tip #5: “Add Value.”

If a young researcher wants to develop a web presence, Mooney’s recommendation is to become a trusted brand that consistently provides a certain type of value. Readers must learn that they can rely on your blog/website/content stream for specific and continuous content.

I asked Mooney about how this is sort of anti-academic. Many of us like to do many things. We like to jump around and move on to new things, whether that’s deeper into one theory or bouncing between theories. We don’t like making the same points over and over again, and we always are preoccupied with the new and the cutting edge.

Mooney told me how he sometimes feels the same: after promoting his first book, he quickly got tired of talking about the same thing over and over again. But he has learned through his career that if you want your stuff to matter, you have to repeat yourself over and over again – that’s part of this job. Just look at how often Daniel Kahneman talks to the public about imperfect rationality and heuristic judgments! He has been doing it for his entire career, and that continued presence and quality message have made him famous and have unquestionably changed policy and the world.

For two great examples of young scientists who have developed platforms that people consistently come back to for information check out:

PsychYourMind.com – a place where a crew of graduate students talk about stuff in all the ways best suited to the Internet.

Very Bad Wizards Podcast – A podcast and a Tumblr about psychology and philosophy. We recommend this one with an “explicit” warning. Seriously, they talk about academics, but this isn’t like the Freakonomics podcast you listen to in your car with your conservative father.

Twitter also has quite a few examples of young scientists in our field who have built up bases of thousands of followers without yet being famous for their own research. Why? Because they are good at posting quality links daily or very frequently.

———————————

Chris Mooney’s personal webpage and his amazon author page.

Troy Campbell’s personal webpage.

Viewpoint: Never Say “Isn’t That Just Cognitive Dissonance?”

After a talk at the Society for Personality and Social Psychology 2014 conference in Austin, I found myself leaving Ballroom C when I heard someone utter to a colleague one of my least favorite phrases: “Isn’t that just cognitive dissonance?”

I wanted to shout at them, “You just went to a session on the application and extension of cognitive dissonance. Of course it was ‘cognitive dissonance,’ but it was insightful and useful.”

I did not, of course, say such things then. But now here I am on the Internet doing what I should have done outside Ballroom C.

But before I channel the wisdom expressed by the professors and practitioners who’ve been interviewed here on InDecision Blog, let’s back up for a second and set the scene.

___


Deborah Hall of Arizona State University started a Friday afternoon presentation in Ballroom C by talking about how, at the 2012 Republican National Convention, Paul Ryan stated, “My playlist starts with AC/DC and ends with Zeppelin.”

Hall then argued this moment made many liberals uncomfortable. Why? Because liberals found themselves feeling similar to a conservative.

In a series of experiments, Hall and colleague Wendy Wood showed that partisans feel “dissonance affect” when they feel similar to members of the other political party. The researchers concluded that their findings “demonstrate that similarity to the negative reference standard provided by outgroups can elicit the feelings of psychological threat that accompany cognitive dissonance, highlighting a dilemma faced by politicians who seek to appeal to voters by accentuating basic similarities that transcend party lines.”

Studies like these illustrate and bring a sense of clarity to the partisan problem (and also extend group psychology in general). Yet, after hearing Hall’s presentation, one might think, “Of course this is true; Festinger’s 1957 book predicts it.” However, before hearing these findings, one might also assume the opposite: that accentuating basic similarities would easily “transcend party lines.”

Many people walk away from SPSP talks like this one with questions like “Didn’t we already know that?” or the dismissive phrase “Isn’t that just cognitive dissonance?” In an interview with InDecision Blog, Harvard Professor Michael Norton explains why this dismissive attitude is the wrong one, and why academics should instead adopt a more positive, constructive attitude.

Similarly, Cornell Professor David Pizarro laments that a senior colleague told him not to pursue a project because, “Either way the results ended up, both theoretical conclusions would be obvious.” To which he responded, “Then is there not value in knowing which one is actually correct?”

Researchers often fail to see the value in clever research projects that test two competing hypotheses. They often seem to object, in advance, to research simply because its readers will fall victim to a “knew-it-all-along” hindsight bias.

On a similar note, Freakonomics author and economist Steven Levitt argues, “The sign of genius is the ability to see things that are completely obvious but to which everyone else is blind.” While there are definitely other signs of genius, our field tends to dismiss this kind of “genius.” And, unfortunately, it is genius of this kind that is A) often the most important for dealing with real-world problems and B) fundamentally important for developing rich, vast theoretical models of nuanced behavior.

Let’s retire the phrase “Isn’t this just cognitive dissonance?” and, more importantly, the harmful attitude that surrounds it.

Viewpoint: 6 Things a Conference Can Do For You


There are so many things about conferences that can fill you with hope, re-energize and recharge you, and rebuild your confidence. Here are six things that academic conferences can help you with.

#1 Realizing that the tough love was worth it


Let’s be honest, no matter how cool your advisors and professors are (and personally I have some ridiculously cool ones), sometimes you just feel like you can’t take any more of their criticism.

They are just so very, very critical. They wear you down and force you to reconsider everything. They question you, they question you, and then, for a change of pace, they question you some more. Grad school can start to become a tunnel shaped like your advisor’s office with no light in sight.

But then you present at a conference. And the audience claps. And they clap not just politely, but with genuine enthusiasm. And all that pain now seems justified. And all of a sudden you feel a rush of thankfulness for your advisors and resist the urge to text them your sappy thanks about your success. Then you see another presentation where the student’s advisor was just “too nice” to them, and now that student is a sitting duck in the Q&A.

Conferences remind you that all that “tough love” your advisors give you is really out of “love” not just “tough.”

#2 Reminding you that you do have interesting ideas

Everyone at your own school has heard you go on and on about your ideas for years now. They have become desensitized, and when people become desensitized to concepts they find them less interesting and can even underestimate the concepts’ general “objective” or “social” value.

But when you are at a conference talking about your ideas to fresh ears, your ideas tend to ring with more authority and more impact. Plus, the ears of conference attendees tend to be in many ways a better sounding board to test your ideas. Your ideas fall on ears without prejudice. To them you’re just a researcher. That’s an incredibly freeing experience.

 #3 Connecting you with that person who nods and smiles in the audience

Is there anybody in life who makes you as happy as the person in the audience who smiles and approvingly nods along with your presentation? Okay, hopefully your life partner or best friend makes you feel a little better, but still, that person in the audience activates the happy dopamine pretty damn hard.

The only person who comes close to the nodding audience member is the stranger who comes up to you randomly a day after your presentation to say “Good job.” For a moment you feel like the lead singer from the band Fun, who sings, “There are people on the street / They’re coming up to me / And they’re telling me / That they like what I do now.”

#4 Remembering that there are people in the world like you

Maybe you find qualitative research interesting, maybe you have an odd presentation style, maybe you find applied or niche theoretical research really interesting. Whoever you are, at times you probably feel like an outcast at your own school. However, when you come to a conference you find that at least some others see the world the way you do, find the same topics interesting, and have the same research philosophy.

Even at the best programs, you can’t find everything you optimally need. That’s why conferences are so beautiful: they fill in the cracks. Chances are, your advisor has probably told you to talk to certain people at conferences to help fill in those cracks. Good advisors know the limitations of a single department. Recently Professor Rucker spoke about this issue and even designed a Doctoral Symposium specifically to address it. He wanted students to be exposed to more ideas and to find which methods of doing research they “clicked with.”

#5 Reminding you that you are a person

In graduate school, sometimes it is hard to feel like you’re a person. You read, write, analyze, go home, exercise, or maybe don’t exercise (I promise I’ll do it tomorrow or next week), and then watch an episode of Breaking Bad and go to sleep. That’s what graduate school becomes. While your friends post Facebook pictures of rooftop bars, you post an article about the ethics of data collection or a psychological analysis of Doctor Who.

But conferences force students out of their labs and out of their routines, and drop them onto the streets of some truly fantastic cities. If you’re lucky, your conference will even throw an amazing after party at a downtown club where you can feel like a VIP for the night – all of this re-energizes you as you look down through the glass floor of the CN Tower or toast a drink at the John Hancock bar and bond with conference friends.

Professor Meg Campbell spoke at the Association for Consumer Research 2013 about how important it is for many graduate students to have a life, excitement, and friends. This all gives students the energy they need to trudge through academic life. If as a student you do nothing exciting, you may crave an emotional boost from looking through a funny Tumblr; but if you know you have excitement in your future, if you feel satisfied with that picture of your conference friends walking the Golden Gate Bridge on a lunch break, then dealing with that awful Reviewer #2 is not so bad. Conferences can be a personal affirmation.

#6 Wait, there is new research out there! I forgot that.

As one proceeds in academia, one reads more often but reads exciting things less often. By the nature of our craft, we drill so far into one area that nothing seems very novel or new; and if it is new, it seems like a tweak rather than the grand advancement we went into graduate school looking to make. This can lead you to lose confidence that there’s anything new in the field. It can also just make you depressed, as the days of being excited by reading something new every day in intro grad classes are long behind you.

But at conferences every session, every conversation, and every chance encounter is full of disparate and new ideas. This has two wonderful effects. First, it is simply wildly entertaining. We forget sometimes how much just listening to new intellectual ideas entertains us. Second, it starts to build “broad connections” that can help you develop your ideas. Professor Jim Bettman advises academics to consistently read up on areas not directly related to their line of study. This takes advantage of the availability bias and leads you as a researcher to keep different ideas available in your head, which means there are greater chances you’ll find spontaneous connections with your own work.

Malcolm Gladwell’s ACR 2013 Talk: Summary, Science and Responses

Malcolm Gladwell just gave a talk at the Association for Consumer Research. What did he talk about? How did it go? What scientific research did he cover? What related research did he not cover? How did he handle questions and a somewhat “difficult” audience of behavioral scientists?

We will post a more detailed reaction soon that will feature editorials. But for those itching to know what he talked about and also interested in citation links related to what he said, here’s your summary fix.

What was his “hypothesis”?

Gladwell talked about topics not in his new book. He described the ideas as ‘provisional’, but they seemed highly polished. He mainly talked about how minorities (women, racial minorities) are kept out of the majority, and his main hypothesis was that people exclude minorities by a) accepting some of them and/or b) being hypercritical of others. His illustrative hypothesis was that if and when Hillary Clinton is elected President, she will be the only woman President for a long time.

What were the specifics of the “hypothesis”?

Gladwell argued that people feel licensed to be discriminatory against minorities because they tend to accept a few of them (e.g., have one female head of state, have one woman artist in a museum) and/or do a few things for them (charity). He also argued that they feel they can be hypercritical of minorities. To help explain this point, he defined two different types of minority tokens: the trouble token and the ideal token.

The trouble token he defined as the person who represents the minority and is thoroughly thrashed by the majority culture. His main example was former Australian Prime Minister Julia Gillard, who Gladwell says was uniquely ridiculed by the public in part because of her gender.

The ideal token he defined as the person who is accepted by the majority and allows the majority to feel “not sexist” or “not racist.” His main narrative example was the artist Elizabeth Thompson, who broke into the male-dominated English art scene in the 1870s; her success, however, did not lead to a drastic change in the gendered art community. There was also some discussion of Jews and how the Nazis felt comfortable with prejudice and the Final Solution because they had previously been nice to small groups or individuals. In discussing the ideal token, he cited “moral licensing” work on how favoring Obama can license people to express anti-black attitudes.

How did he do?

Pretty fantastic, actually. Rhiannon MacDonnell said it best on Twitter: “Really interesting and candid talk by Gladwell at ACR 2013. Great job tackling a tough crowd with humility and humor.”

Gladwell did two things right: 1) he told great, thought-provoking stories, and 2) he was upfront that he was not proposing absolute “laws.” He cleverly joked about how his aims are different from those of the scientific community.

The ACR Co-Chairs Simona Botti and Aparna Labroo should be proud to have brought such great entertainment, with a side of thought-provoking insights, to ACR with Gladwell. In their introduction of the Gladwell session, it was obvious how excited the two were to have him, and by the end of the session that excitement was shared by arguably most of the audience.

But there was one thing…

Gladwell’s talk was extremely depressing. He identified a problem and offered no solution. His talk ended with, “And I’m not certain we’ll ever have another Black President.”

Gladwell’s entire message was that people in the majority can oppress minorities and still feel good about themselves. His whole talk was about why and how this happens, but the takeaway was that there really is no hope. There was a sliver of hope in the suggestion that if, somehow, a significant portion of the minority can force their way in, things will get better. However, that was the only ray of sunshine.

Now, if you are in the successful class (as, in many ways, the audience at the Palmer House Hilton conference featuring Malcolm Gladwell is), then his points seem insightful. But if you are in the minority or the lower class, then his conclusions are just depressing.

How did he manage the whole criticism of the Gladwell style?

Arguably quite well. He was humble, describing his ideas in the talk as “provisional” and saying, “I am just playing.” He joked about the differences between his writing and scientific writing, saying, “I am not going to submit that [his hypothesis on culture] to JPSP [the Journal of Personality and Social Psychology].”

He also talked about how, though all the behavioral scientists in the room think it is obvious that things like culture affect outcomes, this is not always the case for the general public. He said that his books are meant to get people thinking about ideas that our field already knows to be true.

Yale Professor Zoe Chance asked Gladwell a pointed question about balancing accuracy with storytelling. His response echoed a recent quote he gave in a Guardian piece (“If my books appear oversimplified, then you shouldn’t read them”), a sentiment he also expressed in an extended conversation with Duke Professor Dan Ariely.

Gladwell stated that he wants to inspire people and make them feel joyous about science. He sees himself (as he literally is) as the child of an academic, pursuing similar but not identical goals. From the buzz around ACR, it seems Gladwell won over a lot of the Gladwell skeptics. He did this by clarifying that he isn’t trying to replace scientists, nor does he want to propose absolute laws – he articulated perfectly and humbly that his goals are different.

Why the talk was interesting & uncomfortable

Gladwell articulated in his iconic style a question that has been on the minds of many moral psychologists: how can we feel good about ourselves when we do so little to actually help those in need?

For instance, all of the conference members will enjoy a private party at the House of Blues in Chicago this evening and return late into the night to the Palmer House Hilton for a nice sleep and a flight back – all compensated by their universities. All of us here at ACR arguably engage in some degree of hypocrisy, claiming to care so much for the poor, the obese, and the irrational consumer, but then going and buying a $5 Starbucks with university funds.

Understanding this general psychological phenomenon, and trying to grasp what truly is the correct moral way to live in the world, is something that came (intentionally or not) out of Gladwell’s speech – it’s a wonderful spark for empirical and philosophical work.

What science should Malcolm Gladwell read?

It was difficult to tell how much psychology Gladwell has read. He referenced work on moral licensing, but even his familiarity with that topic was ambiguous. This did not stop him from making insightful and great points that he openly described as speculative.

However, if he does want to move from speculation towards firmer hypotheses for future articles or a book, we have a few suggestions for what he should read. If you have any suggestions, let us know on Twitter (@Indecision_Blog) or just leave a comment on this post. We’ll be updating this post with the ideas you share with us here, and with more specific articles in the future, to stimulate discussion amongst ourselves and maybe even to pass along to Gladwell. In sum, it is the belief of at least this author that Gladwell put together some fascinating hypotheses that may be worth exploring or relating to our own work.

Moral licensing

Licensing is a more contentious topic than it sometimes seems. We suggest he keep an eye out for forthcoming meta-analyses and check out work on when licensing does not occur, such as this study by Ayelet Gneezy and colleagues.

Moral self

His hypotheses seemed very similar to work by Albert Bandura on moral disengagement and by Nina Mazar on the self-concept maintenance model. Both of these ideas were at the center of Dan Ariely’s bestselling new book The Honest Truth About Dishonesty, so it was somewhat odd that none of these researchers in our field got a nod, and instead only Effron and Monin got a tip.

Categorization theory

How people assign others to categories and code them as, say, “in-group” or “out-group” – or how whites might decide whether to code a black person as “black” or not – has been looked at extensively in the literature. This seemed fundamental to his discussion of the ideal token, but he did not discuss any of this work. Was this because of time, or because he was unaware of it? Even a quick browse of Wikipedia’s psychology pages on this topic might help.

Our Question! His Answer: Storytelling

Indecision Blog had the honor of asking Gladwell the first question in the Q&A (big thanks to the ACR co-chairs for this honor): we asked how academia can influence businesspeople and policy makers in the way Gladwell has.

His answer was simply that we need to tell stories. He made clear that the story has to be part of the communication with businesses and policy makers. Gladwell described his answer as “obvious”, but we still think it’s not always so obvious in our field. (Full coverage, as well as a full transcription of Gladwell’s answer to this question, is coming soon.)

However, we will note that Gladwell started his ACR talk with a very long story, before it became apparent what his hypothesis was and long before his talk got “scientifically interesting.” You could feel the tension in the room while Gladwell was just storytelling and before he delivered on the story in a way that seemed to satisfy the audience. We as researchers would probably have skipped the extensive opening narratives when communicating our ideas, but Gladwell says this is necessary. At the end, when all the bread crumbs he laid out came together, it was hard to argue that his extensive storytelling did not help make his simple hypothesis seem more real and inspire his audience to actually do something about it.