Editor’s note: Today we continue our series on the seven sins of consumer psychology from the presidential address of professor Michel Tuan Pham at the recent conference of the Society for Consumer Psychology. Read the introduction here.
The next sin that I want to point out is a sin of overgeneralization. This is a sin that we commit both as authors and as reviewers and readers of the literature.
As authors, we tend to overgeneralize from our limited data. We all know that getting an experiment to work takes a lot of effort. We have to think very carefully about the consumption context we use, the product category, the precise stimuli, the exact procedure, the measures that are most likely to pick up the effect, etc. We often run pretest after pretest, and still may need to try different versions of the experiment to eventually get it to work. We all do that. And that’s fine. However, once a study eventually works as we intended, we quickly develop a supreme confidence in our own results and interpretation, forgetting how much effort it took us to get the effect in the first place. As a result, we tend to believe that our findings are more general and robust than they actually are. The psychology is akin to the fundamental attribution error: we quickly attribute significant data patterns to some trait-like pet theory, forgetting to account for the multitude of contextual factors that may have contributed to this data pattern.
A related issue, of course, is how transparently we report the fragility of the results that we produce. That topic would require an entire address of its own, so I prefer to leave it aside for now.
We also have a similar tendency when we read the literature and, to a lesser extent, when we review papers.
Once an effect has been reported in a published paper (especially if it is by a famous author in a prestigious journal), we tend to treat it as gospel, again forgetting that the effect may be more context-specific than a quick reading of the paper would suggest. Moreover, we often generalize the result well beyond the researchers’ original interpretation. As a result, we walk around with oversimplified theories of the world that we use indiscriminately. This impedes scientific progress because, all too often, research ideas are rejected and findings are simply dismissed based on an unwarranted feeling that “we already know that.”
A good example is the work on the “too-much-choice effect” by Sheena Iyengar: the famous jam-in-the-supermarket study. This is a great study and the results are very important. However, if you read the paper carefully, you will see that the effect is very specific and that the authors were very particular in the way they conducted the study. For example, all the jams were from a single brand and only unfamiliar flavors were selected. Yet after the paper was published and received a lot of attention, the fine print of the study, which the authors carefully disclosed in the paper, was quickly forgotten, and the field began to take it for granted that “consumers do not like having too much choice.” It turns out, however, that the effect is very fickle. In a meta-analysis by Scheibehenne and colleagues (2010), the basic effect was found in only a small number of studies; the reverse effect was found in an equal number of studies; and the majority of studies showed no significant effect.
The same issue arises when we review papers. Very often, we dismiss a particular finding based on a loose impression that “we already know that” and “this has already been shown,” without necessarily appreciating that there may be nontrivial distinctions between the new study and the ones that it reminds us of. We also exhibit the same fallacy when authors replicate their findings in a separate study and we ask them to drop the replication study because “we already know that from the other study.”
The answer to this pervasive problem of overgeneralization is very simple: we need more replications and more nuance. As authors, we need to be more willing to replicate our own results: across different samples of respondents, across different stimuli, across different operationalizations of the manipulations, etc. Ideally, these conceptual replications should be done in a way that alters only one variable at a time. When too many variables are changed at once across studies (which we often do), we defeat the primary purpose of replication, which is to assess the robustness of a result to small, theoretically meaningless differences in method. As authors, we should also be more willing to increase the sample size of our studies. All too often, researchers seem unwilling to run more subjects, apparently out of fear that the effect might go away. My philosophy is that if we really believe in our effects, we should not be afraid of increasing our sample size.
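A simple power calculation makes this concrete. The sketch below is illustrative rather than part of the original address: it assumes a two-condition experiment analyzed with an independent-samples t-test and a modest true effect (Cohen’s d = 0.3), and uses the statsmodels power module to show how the chance of detecting a real effect changes with sample size.

```python
# Illustrative power calculation (assumed scenario, not from the address):
# a two-cell design analyzed with an independent-samples t-test,
# assuming a modest true effect of Cohen's d = 0.3.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_cell in (30, 60, 120, 240):
    # Probability of detecting the effect at alpha = .05, two-sided
    power = analysis.power(effect_size=0.3, nobs1=n_per_cell,
                           alpha=0.05, ratio=1.0, alternative='two-sided')
    print(f"n = {n_per_cell:>3} per cell -> power = {power:.2f}")
```

Under these assumptions, power rises from roughly .2 at 30 participants per cell to about .9 at 240 per cell. In other words, if the effect is real, running more subjects makes it more likely to survive, not less.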
We also need to be more careful and nuanced in our writing and discussion of our results.
As readers of the literature, we need to be more mindful of what studies actually show and how they were actually conducted. We also need to be more appreciative of studies that, on the surface, merely seem to replicate conceptually what previous studies had shown.
And as reviewers and editors, we need to be much more supportive of close replications within papers and conceptual replications across papers; these are not a waste of journal space.