Research Heroes: Barry Schwartz

This week’s research hero is prof. Barry Schwartz of Swarthmore College. Prof. Schwartz received his PhD from the University of Pennsylvania, and his research addresses morality, decision making, and reasoning. He has published a number of books, among them the acclaimed “The Paradox of Choice”. He also publishes actively in scientific journals and writes editorials in the New York Times, where he applies research in psychology to current events.

I wish someone had told me at the beginning of my career…To take more math.  I spent my undergraduate days taking every psych course there was. Then I twiddled my thumbs in grad school while other students caught up.  I should have done lots more math.

I most admire academically… I won’t mention names, but the people I most admire academically are people who are willing to be wrong in public.  Everyone seems to think that the worst thing you can do is be wrong.  I think the worst thing you can do is be trivial.  This is reflected in journal submission reviews, tenure reviews and grant reviews. It’s a pity. People willing to make mistakes in public are the people who really move the field forward.

The best research project I have worked on during my career… I think the best paper I ever wrote was not an empirical paper, but the result of a collaboration with two philosopher colleagues.  We wrote a paper that tried to embed the work of B.F. Skinner in the historical context of the growth of the factory, and of “scientific management” in the U.S.  It’s an old paper (1978), and in those days my empirical work was focused on identifying the limits of Skinner’s view of the world.  This project made me appreciate that there was no guarantee that claims that were empirically false would die–of “natural causes.”  They could live if society believed them and then shaped its institutions in the image of these claims.  Many years later I published a paper in Psych Science, prompted by the book, “The Bell Curve,” that made a similar point and called the phenomenon “ideology,” after Karl Marx’s notion of “false consciousness.”  Working on this paper changed the way I think about psychological phenomena in general.  It actually contributed to two of my books, “The Battle for Human Nature,” and “The Costs of Living.”

The worst research project I have worked on during my career… I did a whole bunch of pretty trivial things in my days working from within the Skinnerian worldview.  Happily, they were pretty trivial even at the time, so no one was led on wild goose chases.

The most amazing or memorable experience when I was doing research….In my paper on “maximizing” (JPSP, 2002), we did a study of the ultimatum game that I thought had no chance of working.  It worked!  It was quite clever, borrowing a methodology developed by Marcel Zeelenberg and Jane Beatty.  Alas, this is a part of the paper that nobody writes about.

The one story I always wanted to tell but never had a chance…Well, I have told this story.  I taught a course in “motivation” almost 40 years ago.  I gave everyone a B and they knew this on day 1.  There was still a midterm, a final and a term paper, all of them graded, but people got a B no matter what.  This was designed to have students scrutinize their own motives in being students. For the first five weeks, everything was great.  But then midterms in other courses rolled around, students in my course fell behind, and they never caught up, growing increasingly embarrassed as the semester wore on.  I think I ended up with three (quite good) papers in a class of 40.  It was not a successful experiment.

If I wasn’t doing this, I would be… Well, I’d be a writer.  In the last 25 years, what I have found most satisfying, by far, is writing books (and the occasional article) for non-professional audiences.  My aim is to make the mysterious world of psychological research comprehensible and to show readers why it matters.  I’ve written four such books thus far and plan to start a fifth this summer.

The biggest challenge for our field in the next 10 years…

This will seem iconoclastic, but I think there are four challenges:

1. Too much data.  I think it would be good to declare a moratorium on new data until we understand the data we already have.  Five years, let’s say (I told you I’d be iconoclastic).

2.  Far too much worship of neuroscience.

3. People whose education is far too specialized and who then perpetuate this specialization in the students they train.

4. An incentive structure for success that is close to a disaster.  It’s all about having publication lists as long as your arms and about publishing papers that are “flawless.”  As long as this persists, all the concern about “p-hacking” in the world will not induce people to do research that matters and do it honestly and openly.

My advice for young researchers at the start of their career is…Take lots of math, be willing to make mistakes in public, and work on things that matter.  I certainly can’t guarantee that this will lead to a successful career.  But if it does, it will be a career worth having.


Viewpoint: The role of revealed research preferences

As a young person in decision research you may find it hard to figure out what your “thing” is. However, you are not alone. Many successful professors did not initially decide on a research agenda; instead, their agenda developed gradually through the process of doing research.

Though our field is quite firm in arguing that preferences are constructed rather than revealed, I have found time and time again that professors explain that their research preferences have “revealed themselves.” Professor Adam Waytz has said that he discovered what he was most interested in through the process of research rather than by directly deciding what his research interests were. Others may also help reveal your preferences to you: professor Mark Leary said that one day a colleague explained to him what Leary’s research focus was. Leary realized that he had never thought about framing his interests that way before, but once it was framed as such, it was clear: that was what he was interested in.

To characterize Professors Waytz or Leary as just idly bouncing around before their moments of revelation would be wrong; both are outstanding examples of organized researchers. Still, comments like these offer an interesting insight into how a research agenda develops: rather than trying to decide what you should do, ask yourself, “What do I keep coming back to every time I come up with ideas?” You may also be wise to pay attention to your colleagues, since we come to know how our colleagues’ minds work and which theories they tend to gravitate towards. One’s friends can be a great resource for personal insight, and they probably know you better than your advisors do.

Finally, senior researchers overwhelmingly advise you to follow your passion. So why not go up to your advisor and say, “Here is what I am interested in on a personal level; how can we go forward with it?” If they object to this approach, just remind them that nearly every professor at doctoral conferences tells students to follow their passion, and that deciding on your “thing” is not a simple logical decision that can be made in a 30-minute meeting. Tell your advisers it is going to take you time to understand yourself as a researcher… though maybe phrase that a little more gently and less hippie-like!

Research Heroes: Robin Hogarth

This week’s research hero is prof. Robin Hogarth. Prof. Hogarth has an MBA from INSEAD and received his PhD from the University of Chicago. He is currently an emeritus professor at Universitat Pompeu Fabra in Barcelona, where the next SPUDM is being held. Prof. Hogarth is well known for conducting interdisciplinary research in judgment and decision making. He has held positions at prestigious academic institutions such as INSEAD, the University of Chicago, and London Business School, and remains very active in publishing, having authored a large number of articles and books. He has been deputy dean and Director of the Center for Decision Research, and was responsible for setting up the University of Chicago’s executive MBA program in Europe.

I wish someone had told me at the beginning of my career…only work on what really interests you.

I most admire academically…researchers who have a well-developed aesthetic sense of beauty and simplicity for both theory and methods. Hillel Einhorn was one of these people.

The best research project I have worked on during my career…was the process of writing my book Educating intuition.  It allowed me to synthesize a lot of what I had learned over many years.  However, my most fun – and best – projects were when working intensely with Hillel Einhorn in the 1970s and 1980s.

The worst research project I have worked on during my career…was a project about evaluating management education programs. I undertook this for all the wrong reasons and should never have started it.

The most amazing or memorable experience when I was doing research….was when I first found that people were actually citing my work.  This made me realize how important it is to get things “right”.

The one story I always wanted to tell but never had a chance…I really hated being a PhD student at the University of Chicago. That stimulated me to complete my PhD in a short time in order to move onto the next stage of the academic ladder.

A research project I wish I had done…There are too many!

If I wasn’t doing this, I would be…an unhappy (but probably rich) retired accountant!

The biggest challenge for our field in the next 10 years…Improving our methodological practices so that our theories can lead to results that can be generalized better.

My advice for young researchers at the start of their career is…Follow your interests and “keep your eye on the ball.”

Personal homepage of Prof. Hogarth

Viewpoint: Focusing through a multi-pronged approach

If you are a researcher, then there is a near 100% chance that you have been told at least once to “focus, focus, focus your research!” However, it is difficult to figure out how to focus, especially when even the best people in the field seem to have such “unfocused” research agendas.

At a recent mentorship lunch, psychologist Mark Leary used a physical metaphor that helped to clarify to me how one can be topically diverse but theoretically focused.

Leary explained how he did research by holding out an open hand with his fingers extended. His core theoretical interests rested in his palm, but he explored those ideas through many different prongs (his fingers). This is the “multi-pronged approach.”

This approach permits a researcher to investigate different substantive or psychological questions, but with a firm foundation in a topic they hold expertise in.

For a young researcher, the multi-pronged approach permits controlled exploration. For instance, a person focused on self-control might become interested in construal level or motivated cognition. Instead of completely jumping ship to a different topic, the researcher can ask, “What do I know about self-control that might be interesting to look at in conjunction with construal level or motivated cognition?”

Science is best when it is cumulative—when multiple people do multiple projects on the same idea. With few exceptions, researchers are also best when they are internally cumulative—personally doing multiple projects on the same idea.

In sum, the multi-pronged approach provides an example of “programmatic research” that is less linear but not disorganized. Linear programmatic research may take almost a decade to manifest, but the multi-pronged approach can take much less time, may work better for a graduate student, and arguably is what many researchers in our field actually do.

Research Heroes: Paul Slovic

This week’s research hero is professor Paul Slovic. Prof. Slovic received his PhD from the University of Michigan and has been one of the pioneers in methods to measure risk. He studies fundamental issues such as the influence of affect on judgments and decisions and the factors that underlie perceptions of risk. He has a large number of publications not only in the area of risk but also on compassion and genocide. He is the founder and President of Decision Research, and has received numerous awards, such as the Distinguished Scientific Contribution Award from the American Psychological Association.

I wish someone had told me at the beginning of my career… Actually, I’m very glad no one did tell me, when I took my first job at the Oregon Research Institute, how hard it would be to live off soft money from grants and contracts for close to 50 years. I might not have taken the job. Despite the challenges, I have no regrets.

I most admire academically… There are many JDMers who I very much admire, including the fine colleagues I have been fortunate to work with. But, like many others, I have a special admiration for Amos Tversky and Danny Kahneman. I always had an interest in applying JDM research to important societal problems and when Amos and Danny began to do their simple but elegant heuristics and biases studies of judgment under uncertainty, I immediately was motivated to extend their findings into the realm of what Sarah Lichtenstein, Baruch Fischhoff, and I later termed “societal risk taking”—in particular nuclear and chemical safety and finance. It was great fun exposing people from other disciplines to this fascinating and important behavioral research and I have continued doing this throughout my career.

My favorite research project… is always the one I’m working on at the moment. But, looking back on many favorites, I have a particular fondness for the preference-reversal studies done with Sarah Lichtenstein. They began serendipitously, when, as part of a larger study, we happened to compare two response modes for evaluating gambles and found they were highly inconsistent. This was only an incidental part of the study we were doing, but we took that finding and ran with it. We then had the exciting opportunity to replicate our research on the floor of the Four Queens Casino in Las Vegas. The results, demonstrating what was later called “a violation of procedure invariance,” greatly threatened and annoyed economists who believed pricing and choice should be equivalent indicators of preference. They launched numerous studies “to discredit the psychologists’ work as applied to economics.” They failed. Over time this research led us to a broader perspective that we named “the construction of preference”.

Paul Slovic and Sarah Lichtenstein in Las Vegas, 1969

My worst project… I’ve conducted many studies; some were clunkers. A few that fizzled contained hidden gems (serendipity again), such as the one that evolved into the paper “Preference for Choosing Among Equally Valued Alternatives.” I had been trying hard to construct pairs of two-dimensional stimuli that were exactly equal in value, to use in an experiment on context effects. But I found that, when testing for equality, there was always a strong and systematic preference for the option that was best on the more important dimension. So again I took this failure and ran with it (slowly; it took 17 years to come to fruition). Amos Tversky and Shmuel Sattath nicely enhanced my serendipitous findings when they used them as a springboard to “the prominence effect.”

Most memorable experience… Watching Ward Edwards try to impress the manager of the Four Queens Casino in Las Vegas regarding the studies we wanted to run on the casino floor. Ward had a notebook of gamble pairs, simulating an experiment we planned to run on a computer. “Which of these two (very different) gambles would you prefer to play?” asked Ward.

“I’ll take A,” said the casino boss.

“But you didn’t even look at the gambles,” responded Ward, with a mixture of surprise and annoyance.

“I feel lucky with A,” was the reply. Another pair was offered—again an instant choice of A. “I won with A last time, so I went with A again,” was the explanation. So much for rational weighting of probabilities and payoffs by a man in charge of a major gambling enterprise. The manager was not impressed with us academics either, but we were allowed to take up valuable floor space and run several experiments. Despite having no “house advantage”, they were the most unpopular games in the casino because they required players to think and make tradeoffs among the dimensions of gambles.

If I were not doing this… I would be a human-rights activist. There seems no end to the abuses of human beings being perpetrated around the world. And I have come to see that JDM research has relevance for motivating people to care about helping others and for designing procedures, laws, and institutions to aggressively address these abuses when compassion fatigue sets in. I’m working at this now but I wish I were better prepared to employ JDM findings to stop human-rights violations.

The biggest challenge facing the field in the next decade is… maintaining its identity in the face of the ever-increasing fragmentation of disciplines. Will it become subsumed under “behavioral economics”? I hope not. I would not like to see JDM subsumed under behavioral economics because JDM is applicable to all human judgment and decision contexts and is thus broader in scope than economics. Also, in my opinion, psychology is at the core of JDM and I would not like to see that perspective diminished. Another challenge is to demonstrate the centrality of JDM research for yet another emerging discipline, “behavioral public policy” (see, e.g., Eldar Shafir’s new book on that topic).

My advice for young researchers is… run experiments and collect data. Don’t feel you necessarily need an elegant theory or well-identified hypothesis before you can do a study. Having a good question to answer is enough to motivate a study. I have found that collecting and analyzing data is an aid to thinking about a problem. New insights often emerge that one might have come to by thinking hard, but instead emerged from puzzling over data. Theoretical development and hypothesis testing can then take root from those insights. And be alert for incidental findings that may be even more important than what you were originally looking for.

7Sins: #7 Research by convenience

Editor’s note: Today we publish the final post in our series on the seven sins of consumer psychology from the presidential address of professor Michel Tuan Pham at the recent conference of the Society for Consumer Psychology. Read the introduction here.

The final sin is certainly not a recent one, but it is still a major one. More than 35 years ago, in a JCR editorial titled “Research by Convenience,” Robert Ferber (1977) already complained about the over-reliance on student samples in studies purported to be about consumers in general. Ferber questioned whether students were really the right respondents for certain topics such as financial decision making or family purchases. He also questioned the degree to which, independent of the topic, results obtained from college students could be generalized to the broader population of consumers that our samples are meant to represent (see also Sears 1986).

A variant of the “Research by Convenience” criticism includes complaints that too much of our research is North-American-centric (Gorn 1997)—a criticism that has also been made about psychology in general (Arnett, 2008). Another variant includes the criticism that too much of our theorizing is based on the upper end of the knowledge-expertise continuum, whereas large segments of the consumer population are bound to be less educated and less “intelligent” than the student population that we typically sample in our studies (Alba 2000).

On the surface, it would appear that new sources of inexpensive experimental respondents such as Mechanical Turk, which has become very popular in our field, should help address this research-by-convenience problem. Indeed, from a demographic point of view, MTurk participants seem to be a little more like “real consumers” than the typical college undergrad (Berinsky, Huber, and Lenz, 2012). There is also some evidence that some well-known judgment biases can be replicated with MTurk participants (Goodman). However, before we declare the “sin of research by convenience” partially absolved by MTurk, we need to temper our optimism in three respects. First, regardless of what has been shown or claimed to date, it is not clear to me that a sample of individuals who self-selected into this peculiar marketplace—that is, individuals who are willing to perform computer-mediated mindless tasks for a couple of dollars an hour—is necessarily more representative of “real-world” consumers than are typical college undergrads. Second, there is disturbing evidence of increased MTurk sophistication in seeing through and “gaming” our studies (Chandler, Mueller, and Paolacci ACR 2012). Finally, and most seriously, I see a real danger that the low data-collection costs of Mechanical Turk are gradually shifting our research agendas toward studies that can be done on MTurk—i.e., short online, survey-type experiments—as opposed to studies that should be conducted to advance our field. This last point taps into another meaning of the phrase “Research by Convenience”—one that Ferber did not discuss but that is, in my opinion, perhaps even more serious.

Finally, it should be noted that the sin of research by convenience is not limited to the convenience of the sample of respondents that we study. It extends to the convenience of the instruments that we use to study them. Instead of studying actual consumption behavior, much of our research is based on vignette-like studies, in which respondents are asked to imagine a certain consumption situation and report how they would respond in such a situation. The real question is whether the observed responses in these studies are good representations of the actual responses we would observe had actual consumption behavior been analyzed.

Our colleagues in economics often criticize such studies because vignette-based responses entail no costs and no rewards. “Without some incentive compatibility,” they would say, “this is just cheap talk.” I am not sure that this is the main problem, however. My concerns are a bit different. First, scenario-based studies tend to make the focal aspect of the treatment very prominent (e.g., “imagine buying insurance two years from now vs. next month”), thereby potentially exaggerating the strength of the effects. Second, I suspect that participants who are asked to project themselves into a certain consumption situation tend to adopt an overly analytical mindset that is not representative of how consumers would actually respond to the situation in real life (see, e.g., Dunn & Ashton-James, 2008; Snell, Gibbs, & Varey, 1995, for relevant findings). Finally, I believe that scenarios are poorly suited for studying the effects of “hot” variables such as emotional responses and motivational states (Pham, 2004), whose influence on our behavior is difficult to imagine without a genuine experience.

Conclusions: Increasing our Relevance and Impact


  1. Expand our research focus to non-purchase dimension of consumer behavior, especially need and want activation, nonpurchase modes of acquisition (sharing, borrowing, stealing), and every aspect of actual consumption.
  2. Embrace broader theoretical perspectives on consumer behavior beyond information processing and BTD, especially motivation, social aspects, and deep cultural aspects (as opposed to cross-cultural aspects). Less emphasis on unique and micro-level explanations.
  3. Expand our epistemology to encourage (a) further phenomenon-based research (provided that phenomenon is robust and really grounded in CB), (b) more descriptive research, and (c) tests of popular industry theories
  4. Greater attention to content aspects of CB with corresponding increase in domain specificity (and decrease in presumed generality). Key opportunity in area of motivational content.
  5. Lower tolerance for “theories of studies,” i.e., theories that account only for the studies at hand.
  6. Greater emphasis on replication, robustness, and sensitivity testing.
  7. Greater reliance on studies with real consumers, as opposed to students or MTurk workers. Encouragement of field studies. Decreased reliance on scenarios, especially when studying hot processes of CB.


  1. CB syllabi need to be revamped (especially those structured in terms of information processing and JDM) to reflect broader theoretical perspectives
  2. Greater substantive grounding in how we teach CB to our graduate students
  3. Encourage PhD students to take or TA MBA-level course in CB and in basic marketing
  4. Encourage to a limited extent (rather than strongly discourage) activities that strengthen our grounding in and understanding of business issues (executive teaching, consulting, book writing).
  5. Pay more attention to citations and impact as opposed to mere number counting in promotions. (Simple new metric proposed: average citation percentile rank in given journal in given year)
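The metric floated in the last point can be sketched concretely. A minimal illustration in Python, assuming hypothetical citation counts (in practice the journal-year cohort data would have to come from a citation database; all names and numbers here are illustrative, not from the address):

```python
# Sketch of the proposed metric: a paper's citation percentile rank
# within its journal-year cohort, averaged across an author's papers.
# All citation counts below are hypothetical.

def percentile_rank(citations, cohort):
    """Percent of cohort papers cited less than this paper,
    counting ties as half (the standard percentile-rank convention)."""
    below = sum(1 for c in cohort if c < citations)
    ties = sum(1 for c in cohort if c == citations)
    return 100.0 * (below + 0.5 * ties) / len(cohort)

def average_percentile(papers):
    """papers: list of (own_citations, cohort_citation_counts) pairs,
    one pair per publication; returns the author's average rank."""
    ranks = [percentile_rank(own, cohort) for own, cohort in papers]
    return sum(ranks) / len(ranks)

# Hypothetical author with two papers in different journal-years
# (each cohort list includes the paper's own citation count):
papers = [
    (40, [5, 12, 40, 40, 90, 7]),  # cited 40 times; cohort of 6 papers
    (3, [1, 2, 3, 50, 8]),         # cited 3 times; cohort of 5 papers
]
print(round(average_percentile(papers), 1))
```

The appeal of such a metric is that it normalizes impact by venue and by publication year, so a moderately cited paper in a low-citation journal can outrank a heavily cited paper in a high-citation one.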

Editor’s comments: We’d like to thank Michel Tuan Pham for kindly letting us publish his presidential address, and would welcome comments from readers. What do you think about these sins?

7Sins: #6 Overgeneralisation

Editor’s note: Today we continue our series on the seven sins of consumer psychology from the presidential address of professor Michel Tuan Pham at the recent conference of the Society for Consumer Psychology. Read the introduction here.

The next sin that I want to point out is a sin of overgeneralization. This is a sin that we commit both as authors and as reviewers and readers of the literature.

As authors, we tend to overgeneralize from our limited data. We all know that getting an experiment to work takes a lot of effort. We have to think very carefully about the consumption context we use, the product category, the precise stimuli, the exact procedure, the measures that are most likely to pick up the effect, etc. We often run pretest after pretest, and still may need to try different versions of the experiment to eventually get it to work. We all do that. And that’s fine. However, once a study eventually works as we intended, we quickly develop a supreme confidence in our own results and interpretation, forgetting how much effort it took us to get the effect in the first place. As a result, we tend to believe that our findings are more general and robust than they actually are. The psychology is akin to the fundamental attribution error: we quickly attribute significant data patterns to some trait-like pet theory, forgetting to account for the multitude of contextual factors that may have contributed to this data pattern.

A related issue, of course, is how transparent our reporting of the fragility of our results is. That would itself require an entire address, so I prefer to leave it aside for now.

We also have a similar tendency when we read the literature and, to a lesser extent, when we review papers.

Once an effect has been reported in a published paper (especially if it is by a famous author in a prestigious journal), we tend to treat it as gospel, again forgetting that this effect may be more context-specific than a quick reading of the paper may reveal. Moreover, we often generalize the result well beyond the researchers’ original interpretation. As a result, we walk around with oversimplified theories of the world that we use indiscriminately. This impedes scientific progress because, all too often, research ideas are rejected and findings are simply dismissed because we have an unwarranted feeling that “we already know that.”

A good example is the work on the “too-much-choice effect” by Sheena Iyengar: the famous jam-in-the-supermarket study. This is a great study and the results are very important. However, if you read the paper carefully, you will see that the effect is very specific and that the authors were very particular in the way they conducted the study. For example, all the jams were from a single brand and only unfamiliar flavors were selected. Yet after the paper was published and received a lot of attention, the fine print of the study—which the authors carefully disclosed in the paper—was quickly forgotten, and the field began to take it for granted that “consumers do not like it when they have too much choice.” It turns out, however, that the effect is very fickle. In a meta-analysis by Scheibehenne and colleagues (2010), the basic effect was found in only a small number of studies; the reverse effect was found in an equal number of studies; and the majority of studies showed no significant effect.

The same issue arises when we review papers. Very often, we dismiss a particular finding based on a loose impression that “we already know that” and “this has already been shown,” without necessarily appreciating that there may be nontrivial distinctions between the new study and the ones that it reminds us of. We also exhibit the same fallacy when authors replicate their findings in a separate study and we ask them to drop the replication study because “we already know that from the other study.”

The answer to this pervasive problem of overgeneralization is very simple. We need more replications and more nuance. As authors, we need to be more willing to replicate our own results: across different samples of respondents, across different stimuli, across different operationalizations of the manipulations, etc. Ideally, these conceptual replications should be done in a way that alters only one variable at a time. When too many variables are changed at the same time across studies (which we often do), we defeat the primary purpose of replication, which is to assess the robustness of a result to small, theoretically meaningless differences in method. As authors, we should also be more willing to increase the sample size of our studies. All too often, researchers seem to be unwilling to run more subjects, apparently out of fear that the effect might go away. My philosophy is that if we really believe in our effects, we should not be afraid of increasing our sample size.

We also need to be more careful and nuanced in our writing and discussion of our results.

As readers of the literature, we need to be more mindful of what studies actually show and how they were actually conducted. We also need to be more appreciative of studies that, on the surface, merely seem to replicate conceptually what previous studies had shown.

And as reviewers and editors, we need to be much more supportive of close replications within papers, and conceptual replications across papers—these are not wasted journal space.

Ready for the final sin?