What I would like to share with you today is some observations about our field and some suggestions about how we can collectively make it stronger and more exciting to participate in.
There is no question that on a number of important dimensions, our field is doing very, very well. First, we are growing rapidly as a field. Second, in some respects our research has become more rigorous and sophisticated over the years. Finally, and somewhat related to the first two points, our field has become more inclusive. Still, despite these wonderful achievements, there are areas in which we could probably do even better.
The motivation for my address comes from two major shortcomings that I see in our collective enterprise, which is the production and dissemination of psychology-based knowledge about consumer behavior. These two shortcomings are related in that they both pertain to the overall relevance and impact of our research, but one is more external, and the other is more internal. I feel comfortable speaking about these because just like everybody else, I share some of the blame for these shortcomings.
Are We as Relevant as We Ought to Be?
External Relevance and Impact
A number of thought leaders in our field, and in the closely related field of marketing, have expressed increased concern about the relevance of our collective research for our key external constituents, namely the business community, the public policy community, and the consumer community. A few years ago, in his ACR Presidential Address, David Mick (2006) urged us to be more “transformational” and to tackle major issues of consumer welfare such as obesity, tobacco consumption, and televised violence. More recently, in 2011, we held a conference at Columbia on the business relevance of academic marketing research. The conference was attended by a virtual “who’s who” of marketing academia: the editors of the major marketing journals and many of the most prominent researchers in the field, whether modelers, behavioral researchers, or managerial researchers. The major takeaway from that conference was very clear: many academic leaders of the marketing field are deeply dissatisfied with the general lack of managerial relevance of our research. In his recent Presidential Address at ACR, Jeff Inman (2012) also urged us, consumer researchers, to be more relevant to our external constituents; he used the word “useful.”
I realize that a number of us, including several past ACR presidents (people whom I respect a lot intellectually), see consumer research as a stand-alone discipline, one that should not be subservient to the world of marketing and business. However, I frankly disagree. The vast majority of us, maybe 90 percent or so, work in business schools. This is a choice that we all made early in our careers. It is therefore somewhat disingenuous to claim that we are not partly accountable to the needs of the business community that supports us.
I recently developed an MBA elective at Columbia called “Strategic Consumer Insight.” I didn’t want to call it “consumer behavior” because I wanted this course to have genuine business relevance and be useful to Columbia MBAs, who are not known for their tolerance for purely intellectual exercises. As I was preparing for this course, I reviewed a lot of material on consumer behavior, including the content of our main journals and the major textbooks that summarize our field. I must say that I was disappointed by how little I was able to find in our journals and major textbooks that seemed worth sharing as real consumer “insights.” In fact, most of the material that I found useful and interesting came from trade books written by marketing practitioners, consumer ethnographers, branding consultants, etc. When I was able to find research articles that were potentially useful, many were not psychology-based but CCT-based, which was very disconcerting for a consumer psychologist.
This lack of external relevance and impact is not a new problem. More than twenty years ago, prominent figures in the field, including Jagdish Sheth (1986), Jack Jacoby (1985), and Rich Lutz (1991) already raised concerns about the relevance of our research. My impression is that we have made little progress in this regard, and may in fact have regressed.
Internal Relevance and Impact
Now, maybe you don’t agree that our research needs to be relevant to managers or to other external constituents. Maybe you think it is sufficient that our research be relevant to scientists, both within our field and in related disciplines such as psychology and economics. This is a more internal dimension of relevance. I must disabuse you of this notion as well, because our internal relevance is not very good either.
Here is a chart showing the average number of citations that JCR articles receive in the Social Science Citation Index in a given year. The chart is based on 340 articles published over the 5-year period from 2004 to 2008. I intentionally did not include more recent years, in order to give the articles a fair chance to be picked up by the literature. The chart uses the number of citations per year rather than the total number of citations in order to mitigate the mere effect of a paper’s age on its citation count. The articles are rank-ordered by number of citations per year. What can we see? We can see that some articles, but very few of them, are very well cited, receiving 10 or more citations per year. We can also see that the vast majority of the articles, 70 percent or so, hardly get cited, receiving three citations or fewer in a given year. In other words, 70 percent of the articles that we publish in JCR (and these include some of my own) have hardly any impact in terms of citations. The top 15 percent of the articles account for 43 percent of the citations, whereas the bottom 50 percent account for less than 20 percent of the citations.
Again, this is not a recent phenomenon. A similar analysis of citations of JCR articles published during the 10-year period before that reveals an identical pattern: very few articles get very well cited, and the vast majority hardly get cited. The top 15 percent of the articles account for 45 percent of the citations, whereas the bottom 50 percent account for only 20 percent of the citations.
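The concentration figures above amount to a simple computation over ranked citation counts. As a minimal sketch, here is how such top/bottom shares are obtained, using an entirely made-up, hypothetical citation distribution (the real JCR data are not reproduced here):

```python
# Sketch of the concentration analysis: share of total citations held by
# the top and bottom fractions of articles, ranked by citations per year.
def citation_shares(citations_per_year, top_frac=0.15, bottom_frac=0.50):
    """Return (share held by the top articles, share held by the bottom articles)."""
    ranked = sorted(citations_per_year, reverse=True)
    total = sum(ranked)
    n = len(ranked)
    top_n = int(n * top_frac)
    bottom_n = int(n * bottom_frac)
    top_share = sum(ranked[:top_n]) / total
    bottom_share = sum(ranked[-bottom_n:]) / total
    return top_share, bottom_share

# Hypothetical skewed distribution: a few highly cited articles, a long tail.
sample = [30, 25, 20] + [12] * 7 + [4] * 20 + [1] * 70
top, bottom = citation_shares(sample)
print(f"top 15% share: {top:.0%}, bottom 50% share: {bottom:.0%}")
```

With any similarly skewed input, the top slice dwarfs the bottom half, which is the pattern the chart displays.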
Other indicators of internal relevance converge on this sobering self-assessment. A few years ago, the Policy Board of JCR conducted a series of surveys among JCR subscribers. Respondents were shown lists of articles published in a given issue and asked to check which article(s) they had actually read. On average, respondents reported having read 15 percent of the articles; the other 85 percent had not been read. According to John Deighton, who was the Editor at the time (and candidly allowed me to share these results with you), these readership numbers are, if anything, probably inflated.
The bottom line is that the vast majority of the research that we produce, maybe 70 percent or more, has no significant scientific impact and isn’t found interesting even by us, and this within the very select subset of articles that make it into our top journals.
Then why bother publishing these articles? Why bother spending months and years conceptualizing our ideas, gathering lots of data, replicating our findings, analyzing, writing and rewriting, and battling reviewers round after round?
We’ve got to be able to show more for what we do; if not to our external constituents—managers, policy makers, and consumers—at least to ourselves and to our academic colleagues in other disciplines.
How do we do that?
I will try to share with you what I think are the root causes of these major issues of internal and external relevance. Although these root causes are not totally independent, I have organized them into discrete categories, and because we know as psychologists that there is something special about the number “seven” (Miller 1956), I will share what I see as the top seven of these root causes, which I collectively refer to as the “Seven Sins of Consumer Psychology” (a title inspired by Daniel Schacter’s famous article on “The Seven Sins of Memory”). Once again, I feel very comfortable in pointing out these “sins” because I am as guilty of committing them as anybody else in the field.
The Seven Sins of Consumer Psychology
Read more: Sin #1: Narrow scope…
Thanks for the backhanded compliment to CCT. Obviously people are interested in research that is based in the real world, hence the (troubling) success of Freakonomics. Since CCT researchers generally conduct research in naturalistic settings, this gives them a leg up in the relevance department. Bardhi and Eckhardt’s JCR work on sharing really resonated with business people because the sharing economy is in the cultural zeitgeist; e.g., The Economist cover story of 9-15th March. Thus, it also helps that CCT folks pay attention to the cultural zeitgeist. No reason psychologists couldn’t do that; those that do get noticed. And of course those that crib from CCT, like the guy at Wharton, also get noticed. 🙂
Hi Eric, thanks for your comments. I’m not sure exactly what Professor Pham may have originally intended with that reference to CCT, but at least to me it doesn’t seem like a backhanded compliment – rather, a harsh criticism of his own field, CP. If I were in a position like that (trying to put together a course) and noticed a distinct lack of relevant research from my own field compared to adjacent ones, I would probably be concerned, too. I think reading on a bit further into the detailed sin posts will offer more insight into what he may have meant 🙂
I did a bit of quick calculation to see what would happen if 100% of research articles were cited 10 times every year. If each new article has 20 references, we would need to annually publish half the number of articles in the existing stock. This is clearly not possible. If we annually publish only 1% of the existing stock, we would need 1,000 references per article. This is not likely to happen either.
My guess is that the citation pattern we are seeing is due much more to our community’s publication behavior (the number of new publications annually and the number of references per article) than to the relevance of our research.
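The back-of-envelope arithmetic in this comment follows from a steady-state balance: citations handed out per year (new articles times references each) must equal citations received per year (existing stock times the target citation rate). A short sketch checks both numbers; the stock size of 10,000 is an arbitrary, hypothetical figure:

```python
# Steady-state citation balance:
#   new_articles * refs_per_article == stock * cites_per_article_per_year

def required_publication_rate(stock, cites_per_article_per_year, refs_per_article):
    """Annual publications needed so every existing article averages the target rate."""
    return stock * cites_per_article_per_year / refs_per_article

def required_refs_per_article(stock, cites_per_article_per_year, new_articles):
    """References each new article must carry, given a fixed annual output."""
    return stock * cites_per_article_per_year / new_articles

stock = 10_000  # hypothetical size of the existing literature

# With 20 references per article, 10 cites/article/year requires publishing
# half the existing stock every year:
print(required_publication_rate(stock, 10, 20))   # 5000.0, i.e. stock / 2

# Publishing only 1% of the stock per year instead forces 1,000 refs/article:
print(required_refs_per_article(stock, 10, 0.01 * stock))  # 1000.0
```

The two results confirm the comment's point: universal high citation rates are arithmetically incompatible with current publication behavior.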