About Neda Kerimi

Scientist

Meet the Editors: Neda

Neda Kerimi

Neda is currently a post-doctoral fellow at the Department of Psychology, Harvard University, after receiving her PhD from Stockholm University and working at Uppsala University. Her research interests include decision-making, happiness, and risk, as well as human-computer interaction. She’s also the news editor for the European Association for Decision Making. Besides being a self-confessed technology geek, she loves useless facts and futurist science.

I’m working on InDecision because… Someone has to do it! Ever since my PhD studies I have been involved with different scientific societies, and I noticed that, especially in JDM, a forum for early career researchers did not exist. In addition, there is only so much graduate programs or conferences can teach you. We wanted to create a forum where people can discuss the science itself and everything else that we all go through during our academic careers. We get so much satisfaction from running the blog that we have decided it’s well worth the time and energy.

I’m most passionate about… Knowledge and people! I love learning, especially if it helps me to understand humans better. I have come to terms with the fact that I am a science geek in heart and soul (indeed, 90% of my conversations start with “I read an article about a study…”). In addition, I am passionate about understanding the core of the human mind, whatever that may be. I don’t know if we will ever have a grand theory of the human mind, but we are learning new things every day. I am also passionate about how we can use knowledge and scientific progress for the greater good (more on that in an upcoming indecisionblog.com series).

At a conference, you’ll most likely find me in a session with key words like… Financial JDM, social JDM, and technology. More or less anything that can please my tech-geek and JDM-geek identities. For me, conferences are not solely about the talks; they are also an opportunity to connect with new people and reconnect with those I seldom see in person. So I might skip a few talks just to get the time to chat with an old friend or a new one.

How I ended up doing research in this field… I started in IT and studied psychology alongside my full-time job, but I soon realized I wanted to pursue my PhD in psychology. Being a bad decision maker (I couldn’t even decide where to eat lunch), it came naturally to me to immerse myself in the science of decision making. Fortunately, a PhD in the subject has actually made me a better decision maker. However, I can’t say how much of it should be credited to my PhD and how much to the fact that I have simply gained more experience in making decisions.

My personal research heroes are… so many that it is not worth mentioning names. For me, a research hero is more than someone who has come up with a ground-breaking theory – it’s also about the person. I have been incredibly lucky to meet so many people who, despite their fame and prominence, have taken the time to meet or chat with me, which I find hugely inspiring. I especially admire the many female researchers who lead the way for other women to progress in the field. The scientific community has traditionally been male-dominated, and I am pleased to see that is changing, and it is because of the excellent work that many female researchers do.

What I find most challenging is… not losing focus! I feel that research has become so much fiercer and more competitive than before. The currency in our field is publications and citations, and whether one gets a job or funding depends on the number of publications and citations (which I do not see as a good currency). I guess my challenge is to focus on what gets me going and not be affected by the stress and pressure that come with working in academia. Another challenge for me is to say no to projects. I get overly excited about everything that has to do with the human mind and science and want to run a project on it. It is a challenge, but I am getting better at it.

What I’d be doing if I wasn’t a researcher… I would most likely work with psychology or technology (maybe both?) in one way or another. I actually think more scientists should embark on a career outside academia. We need to share the valuable knowledge and experience we have with non-academics as well.

——————————————————————————————————————-

Other things to read:


Inside the Black Box: Frontiers in Psychology


Next in our Inside the Black Box series is Frontiers in Psychology, an open access journal that aims to publish the best research across the entire field of psychology. Frontiers in Psychology publishes articles on the most outstanding discoveries across the entire research spectrum of psychology, and its mission is to bring all relevant specialties in psychology together on a single platform. Field Chief Editor Axel Cleeremans gives us his insights into this journal.

What makes you go “Wow!” or “Yuck!” when you first read a submission? I go “Yuck!” instantly if the paper looks like it’s poorly written, if the figures don’t look good (see Tufte’s advice on that), if it contains typos, or if it looks very verbose or boring. There is an important message there: if you don’t fine-tune the presentation of your findings, it’s as good as nothing.

“Wow!” can result from different factors. Sometimes it’s the finding itself — for instance, I find Geraint Rees’s recent demonstration that one’s experience of the Ebbinghaus illusion is inversely proportional to the size of one’s V1 stunning. Other times it’s the sheer power of a technique — Bonhoeffer’s applying two-photon microscopy to visualize synaptic growth in vivo is a good example of that. The cleverness of an experimental design is a further “Wow!” inducer; Jacoby’s process dissociation procedure, when I first read about it, definitely elicited a “Wow!” response from me. And then of course, I go “Wow!” when reading about impressive ideas. Rumelhart and McClelland’s PDP volumes made me go “Wow!” for years, as did Hofstadter’s “Gödel, Escher, Bach”.

What are the common mistakes people make when submitting/publishing? Submitting to the wrong journal. Making the story too complicated. Not having any story. Reporting uninteresting findings. Reporting uninteresting findings but trying to make them sound interesting. Failing to cite relevant work from many years ago that old editors know about.  Leaving typos in the manuscript. Ugly figures.

What are your best tips on how to successfully get published? Work on the most important issue in your domain. Build a good narrative. Papers that read like detective stories (and finish with a satisfying resolution!) are always good. Get the writing absolutely perfect. Of course, have interesting and solid data. Simplify. Kill all the typos. Cite previous work. All referees first look for flaws, because if any are found then the review is done and the referee can focus on something else. It is only when no surface flaws are found that the referee actually thinks about whether the paper is interesting…

How are reviewers selected? That very much depends on the journal. Some editorial systems are almost entirely automated, which has advantages (speed) but also disadvantages (relevance). Some editors hand-pick their referees based on different criteria (mostly, whether they think the referee knows something about the topic and will return their review in time). Many systems offer referee suggestions based on keyword matches. Authors can also often propose referees themselves. This is a good idea, as it speeds up the work of the editor, who will typically select referees both from the author’s suggestions and from their own pool of referees.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? I wouldn’t do it too quickly — say, three years into your Ph.D. Reviewing an article is an important and difficult job. It gets much easier as your knowledge of the field grows and as your expertise at reviewing increases, but the first reviews you do are always very intensive jobs. You worry that you’ll look ridiculous in the eyes of the editor and the other referees. You worry that you missed a central point. You’ll spend days on your first review. On the other hand, knowledge of what’s going on in your field before it gets published can be invaluable — but for this, you can count on your advisor.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? Or what makes one a good reviewer? A good reviewer is a reviewer who turns in her review in time and who manages to discuss the paper in a neutral tone while clearly listing the issues that concern her, if any. And of course: good reviews also contain a clear recommendation that is congruent with the listed points. Sometimes you get almost self-contradictory reviews. They begin with “This is a very interesting paper that uses clever methods” and finish with “I recommend the paper be rejected”. This makes it almost impossible for an editor to use the review, as do reviews that contain too many subjective comments. Reviews should almost be written as though they were public comments, that is, with all the care one would use if one were talking in public about someone else’s work.

How do you resolve conflicts when reviewers disagree? That’s a tough one. I regularly receive conflicting reports, sometimes at either end of the spectrum (e.g. Referee #1 says “Reject”; Referee #2 says “Accept without revisions”). If both reports make sense (that is, it is clear both referees understood the paper), most typically I will consult a third referee (which sometimes doesn’t help). When all else fails, you read the paper and make the decision yourself… (just kidding: editors read the papers, but there is a difference between reading a paper and forming an expert opinion about it). It is worth mentioning here that some open access journals (e.g. Frontiers in Psychology) have adopted a completely different manner of resolving differences between referees, namely to ask referees and authors to interact until a consensus among referees is reached. Many conflicts between referees are solvable by iterated interaction — something that can be tough to achieve with the standard reviewing process.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? Revise and resubmit is pretty much the norm — it is exceptionally rare for a paper to be accepted right away. Dealing with rejection is understandably difficult. Your reaction to it very much depends on what you can attribute the rejection to. Being rejected from Science is not an indication that your research is not good; just that it’s not good enough, or not novel enough, or not interesting enough in the eyes of Science’s editors. You may think otherwise and feel wronged somehow, but it’s not your decision to make in either case, so it’s best to move on and submit to another journal. The worst-case scenario is when you submit to a mediocre journal, wait for months, and find that your paper is rejected. If you really feel a “reject” decision was incorrect, it’s always a good idea to interact with the editor. As an editor, I only use “reject” when all referees agree that the paper is not publishable. Dealing with a revise and resubmit is easy: just address all the points raised by the referees, one by one and thoroughly. In the vast majority of cases, papers in that category will end up published; it’s just a matter of taking all the points seriously and in detail.

Is there a paper you were skeptical about but that turned out to be an important one? Not that I can remember as an editor. A couple of my own papers as an author, though, had very difficult beginnings and turned out to be considered quite important. Science is about data, but it also involves rhetoric: not only do the data have to be important, but you also have to present the results and their implications in a persuasive manner.

As an editor, you get to read many papers and gain insight into emerging trends. What are the emerging trends in research topics/methodologies? There is an important ongoing discussion on Twitter, blogs, Facebook, email and in the press about the importance of replication in psychology. Developing methods that make it possible to analyze replication efforts properly, as well as promoting the publication of replication findings, are important issues. One of the most interesting methodological developments in this respect is the emergence of novel statistics based on Bayes’ ideas. I also continue to be impressed with the increased sophistication of neuroimaging methods — think MVPA, for instance. Increased meta-data in all fields will also make all sorts of meta-analyses possible.

What are the biggest challenges for journals today? The challenges are not the same for traditional journals and for new, online, typically open-access journals. Some journals are more or less immune from challenges because of their extraordinary status in the field. The challenge for traditional journals is to stay relevant in an increasingly open-access, rapid-fire world: Interesting results are tweeted or otherwise shared almost instantly, and people want to download the relevant material freely and right away. The challenge for open-access journals is to accrue enough credibility. A challenge that faces every actor today, individuals and journals alike, is to find interesting ways of attracting attention. So much is published today (considerably more than even a few years ago) that it becomes a challenge to even find relevant material.

Journal home page

‘Inside the Black Box’ series home page

Inside the Black Box: Judgment and Decision Making

We start our journal editor interview series with Judgment and Decision Making’s editor Jon Baron. JDM is the journal of the Society for Judgment and Decision Making (SJDM) and the European Association for Decision Making (EADM). It is open access, published on the World Wide Web at least every two months. JDM publishes original research relevant to the tradition of research in the field represented by SJDM and EADM. Relevant articles deal with normative, descriptive, and/or prescriptive analyses of human judgments and decisions.

What makes you go “Wow!” or “Yuck!” when you first read a submission? Wow!: When it shines new light on a traditional JDM problem, including possible applications in the real world. I choose the lead article in each issue on the basis of this sort of reaction. (Of course, some issues have no article that merits the exclamation point, and some have more than one.)

Yuck!: When it applies the Analytic Hierarchy Process to the pipe-fitting industry in Pakistan. Or when it uses a tiny sample, with no replication, to show that people are at the mercy of subtle, unconscious forces. Or when it makes obvious statistical errors, like claiming an interaction on the basis of a significant effect next to a non-significant effect.

What are the common mistakes people make when submitting or publishing? Submitting to the wrong journal.

What are your best tips on how to successfully get published? Study big effects, or use large samples. Don’t waste time studying phenomena that are ephemeral and difficult to replicate, especially if you are trying to find moderators of such effects.

How are reviewers selected? When I handle papers – about half of them go to associate editors – I try to find the most expert reviewers who are willing to review, including members of the journal’s board when possible. This often takes several attempts; people say no. Often I use Google Scholar, as well as citations in the paper and authors’ recommendations.

How can a young researcher become a reviewer? When is the best time during one’s PhD to start doing so? If you know someone who is an editor (including an associate editor), tell him or her that you are willing. I often ask grad students to review, but only if I know them to be experts on the topic of the paper. I am not willing to take a student’s word for this expertise, or to assume that being first author of a related paper is sufficient. Thus, personal knowledge is important.

I think that grad students should do occasional reviews. But anyone who keeps publishing is going to get asked to do more and more reviews. Be nice to editors (and other authors). If you get asked to do a review, respond quickly. Saying no immediately allows the editor to go to the next person on the list.

What constitutes a good (i.e., well explained/written) review, from an editor’s standpoint? What makes one a good reviewer? Explains why the paper is fatally flawed, if it is. Otherwise provides helpful advice for revision, or (only if necessary) for additional research. What I find unhelpful are requests for more “theory”, as if theory were something like soy sauce.

How do you resolve conflicts when reviewers disagree? I regard reviews as information, not votes. They point out flaws I had not discovered, literature I did not know, or strengths that I did not appreciate. The review’s bottom-line recommendation is just a little more information. Thus, these recommendations are not conflicts that need to be resolved.

But reviewers also disagree about specifics, about what needs to be done. Here, I think it is my job to tell the author which of the reviewers’ comments to ignore, and which to follow, and (if the review does not say), how to follow them (if I can). As an author, I find it annoying to be at the receiving end of conflicting reviews, with no idea what magic I must do in order to satisfy everyone.

What’s the best/worst way to react to a revise and resubmit, and worse, to a rejection? The best way to react to a revise/resubmit is to try to do what it says, or explain politely and clearly why you cannot or should not do that. Or give up and try another journal if you think you are being asked to do the impossible.

The best way to react to a rejection depends on what it says. If it finds a fatal flaw that cannot be fixed, the best thing may be to regard the paper as sunk cost, and move on. In other cases, rejections are very specific to the journal, so you should just send the paper elsewhere. If you think a paper is good, don’t give up. Keep sending it elsewhere. In still other cases, papers are rejected because more work is needed. Maybe do the more work.

Is there a paper you were sceptical about but that turned out to be an important one? Not really.

As an editor, you get to read many papers and gain insight into emerging trends. What are the emerging trends in research topics/methodologies? On topics, I think that fields go through fads – well, let’s say “periods in which some topics become very popular and then gradually fade into the background”. These are often good things. Many come from external interest and funding, such as the enormous interest now in “nudges”, or the interest in forecasting and prediction arising from the recent IARPA competition. Around 1991 the Exxon Valdez case inspired (and funded) a great deal of JDM research on contingent valuation and value measurement in general.

On methods, our field is slowly but surely catching up with the enormous increase in the power of computers and the Internet. Data analysis is becoming more sophisticated. A variety of approaches are being explored (including Bayesian ones). Web studies are becoming more numerous and more sophisticated. People are making use of large data sets available on the Web, including those they make themselves by mining data.

What are the biggest challenges for journals today? The biggest is integrity. The work of Simonsohn, Simmons, Nelson, Ioannidis, Pashler, Bar-Hillel (earlier) and others on p-hacking, file-drawer effects, basic statistical errors, and outright fraud has raised serious questions about what journals should and can do. The problems vary by research area. Medical research and social psychology are probably worse than JDM. But I am still trying to work out a way to deal with this problem. Asking for data and for sufficient stimulus materials for replication is a step. I spend a lot of time checking data analysis with the data that authors send.

The next biggest challenge is how to take back scholarly communication from those who seek to profit from it by building paywalls of one sort or another, including both subscription fees and publication charges. I have ignored this problem, hoping that it will go away or that someone else will solve it (e.g., by endowing JDM with $500,000). Right now, JDM has neither type of fee, because I do the production and “office work”. Other journals work this way, but their authors all submit papers with LaTeX formatting. My job would be easier if Microsoft Word did not exist. Maybe I will outlast it, and then the problem will be solved for the next editor. But a little money – nowhere near as much as proprietary journals get – would still help, and I don’t know where to get it.

The third biggest challenge is how to get rid of the perverse incentives that arise from the use of the “impact factor” of a journal for evaluation of authors of papers in that journal. Journals cannot do much about it, except perhaps to stop advertising their impact factors in large print.

Journal homepage

‘Inside the Black Box’ series home page

Jon Baron Research Hero interview

Research Heroes: Barbara Summers

Barbara Summers is a Senior Lecturer in Decision Making at Leeds University Business School, UK, where she also serves as co-Director of the Centre for Decision Research. She has recently been elected to serve as President Elect of the European Association for Decision Making (EADM), and currently serves on the Society’s Board as Member at Large. Her research focuses on individual decision making from both cognitive and emotional perspectives, with application areas in health, marketing and pensions. Her work benefits from her previous commercial experience as Head of Systems Development at Equifax Europe UK.

I wish someone had told me at the beginning of my career… the advantages of walking and patience. Sometimes projects take a while to get going; you have an idea, but investigating it leads into slightly unfamiliar territory, so you feel there is a lot of literature to get through. Or you might feel you have lots of bits of the puzzle but can’t see how they fit together. It’s human nature to want results quickly and to feel disheartened in these situations, but don’t – this sort of project can be the most interesting in the long run, so it’s worth being patient and working through it. I find the best way to trigger the “eureka” moment when the bits click into place is to stop thinking. Walking without focusing on your thoughts, or sleeping, are really good ways to do this. The idea of sleeping on a problem usually works for me (and recharges your batteries).

I most admire academically… because… There are a lot of people, but the work of Kahneman and Tversky on Prospect Theory had the biggest effect on me. I had been doing work in another field and realized that this theory gave a better explanation of some data I had than the traditional explanations in the literature. I was converted and decided to investigate the area further – well worth it! There are many others as I explore different aspects of decision making, but this was the first.

The best research project I have worked on during my career…/the project that I am most proud of/ that has inspired me most….There are so many different ways a project can be best – and I have been lucky enough to have quite a range of experiences. Some projects broaden your ideas of how the world works (I feel this about the work I’m doing now on emotion), while others can produce real world impacts that are satisfying to see (I did work on a project producing decision aids for patients, for example, and another project helped a company predict and respond to customer needs better). Some projects can just be a good experience in terms of getting to know others. I try to see the best in all of them.

The worst research project I have worked on during my career…/the one project that I should never have done… If you are doing research then some projects are not going to work. You might not get the results you want; you might even get results that prove you wrong. It’s frustrating, but most projects have some value in the longer term. The bad ones are the ones you don’t enjoy working on.

The most amazing or memorable experience when I was doing research… is always the bit where the predicted results happen – I get a real buzz every time, because you now understand the world a little better.

The one story I always wanted to tell but never had a chance… I used to be involved in organizing a professional conference (while an academic), and there was a project that needed real managers to take part in the research. I suggested we might use the future delegates for the conference, and we could give a talk on the results in my session in exchange for their participation. The project was trying to identify ways in which professional managers’ decisions in a particular field (to do with corporate failure/creditworthiness) demonstrated expertise, and to make it more interesting we also got groups of lecturers who taught techniques for making similar decisions, and their students, along with a group of lay people to provide a comparison. The professional society helped us distribute the questionnaires and we put the talk in the program. Then the results came in. Lecturers and students generally performed better than the professional managers on the tasks, and in fact the managers were barely better than lay people (who really knew nothing about the subject area). Welcome to the conference talk from Hell – we had to stand up and tell people (who had paid to be there) that they were hopeless at a job-related task!

In the end things were not so bad. The obvious “How can this have happened?” response gave way to an investigation of how the task (which the managers thought was important) fitted into their role as a whole, and led to an understanding that their real expertise was not in getting the right answer in the first place, but in managing the relationship with the company they assessed for credit as it developed (so these skills made more difference). We even managed to get a laugh when we gave the talk! I probably have told this to some people, so apologies if you heard it before…

A research project I wish I had done… And why did I not do it… I’ve not given up on any project I wish I’d done yet. If I still wish I’d done it, I still hope to manage it. Sometimes things drop off the list because I realise they won’t do what I want, but that’s it.

If I wasn’t doing this, I would be… probably back (or still) in the commercial world. I spent a lot of time in Business Analyst-type roles doing quite a lot of greenfield development projects, where the company was moving into new territory or the client wanted to do something but didn’t have anything in place. These have quite a lot in common with research, certainly in the thinking process, so they are fun. Some of the ones I was involved in were international joint ventures, so I got some chance to travel and see other perspectives. Not quite as much fun as academia, but still fun.

The biggest challenge for our field in the next 10 years… Getting the real world more widely engaged. We’ve had a burst of interest in behavioral work in the UK, with the government setting up a Nudge unit. There are, however, a lot of fields where behavioral approaches could give real benefits in solving real world problems (like helping people make informed decisions), and in benefitting business too. Students who’ve taken the Management Decision Making course run by our centre regularly report how useful it is in their careers from the interview stage on, giving them a perspective on avoiding pitfalls in decision making (I wish I had done it before being a manager myself!). I see many opportunities, but we need to keep up momentum to get there.

My advice for young researchers at the start of their career is… Enjoy what you do – you do better on projects that catch your imagination. Make contacts and work with others – ideas develop faster with more than one person thinking about them. Establish what you need to get to where you want to be. When I got my first lecturing post, the Dean of my School gave me a list of promotion criteria for the next grade up and told me to start ticking them off as soon as possible. I found this really helpful in getting established as someone moving across from industry, but I think it would have helped anyway. If you’re in this position, I wish you all the success in the world and a great time – academia is a great job.

Departmental website

Research Heroes: Ralph Hertwig

This week’s Research Hero is Ralph Hertwig, the Director of the Center for Adaptive Rationality at the Max Planck Institute for Human Development in Berlin. He received his PhD from the University of Konstanz in 1995. Before being recruited to the prestigious role of director at the Max Planck Institute, he was professor of cognitive and decision sciences and dean at the Department of Psychology, University of Basel. He has received many grants and awards, such as being named a Fellow of APS, and won the teacher of the year award of the Department of Psychology two years in a row. His research focuses on models of bounded rationality, such as simple heuristics, and on decisions from experience. He has co-authored two books and written numerous articles in journals such as Psychological Science, Psychological Review and many more.

I wish someone had told me at the beginning of my career…That to make it in academia you need more than the obvious skills—you also need the ability to juggle lots of projects, to multitask constantly, and to delay gratification. Not to mention plenty of perseverance and a thick skin for weathering all the rejections, which keep on coming no matter how advanced you are in your career…

I most admire academically… because…People whose writing I love, such as William James, Stephen Jay Gould, and Steven Pinker. For me, Egon Brunswik was also an extraordinary writer. Many people tell me his writing is difficult to decipher. But I have the feeling he thought very hard about each of his sentences and that each one conveys exactly what he wanted to express.

The best research project I have worked on during my career…/the project that I am most proud of/ that has inspired me most….I’m most proud of the research projects where I teamed up with somebody from another field or another school of thought and we were able to produce something I could never have come up with on my own. Those sorts of collaborations have resulted in papers that I still find interesting when I peruse them today—for instance, work on the different experimental cultures in psychology and economics (with Andreas Ortmann), how to link the ACT-R architecture and simple heuristics (with Lael Schooler), and how to model parental investment with a single heuristic (with Frank Sulloway and Jennifer Davis). I enjoy starting a project in an area about which I know little and going home every evening with the feeling of having learned something new.

The worst research project I have worked on during my career…/the one project that I should never have done… I can’t think of a “worst” project. But I have a most difficult one. It was an “adversarial” collaboration with Danny Kahneman (and Barbara Mellers as arbiter). With the explicit goal of agreeing on designs that, no matter the results, would settle our disagreements, we exchanged many, many e-mails to hammer out the details of our joint studies—to no avail. The fickle deity of data thwarted all our plans: we just couldn’t agree on how to interpret the results. It was a painful process, but I’m glad that we could cordially agree to disagree and gained respect for one another along the way.

The most amazing or memorable experience when I was doing research….My most amazing research experience was as a student, when I was doing an internship at a psychiatric research hospital. I had the idea of applying signal detection theory, which I’d just learned in class, to analyze an existing data set. It was the first time I wrote little statistical programs, and I was amazed that they worked and I could get the computer to do what I wanted… well, after a lot of trial-and-error and cursing. It made me so happy. Even more so when my advisor told me my fledgling analyses had produced some new findings. They led to my first published paper.

The one story I always wanted to tell but never had a chance…If I ever had one, I’ve already forgotten it, so it can’t have been that great a story.

A research project I wish I had done… And why did I not do it…That would be a case study of Monica Lewinsky that never got off the ground. It was back in 2002. I was working at Columbia University (in Elke Weber’s lab), and a friend and I went to a public question-and-answer session that Monica Lewinsky gave at Cooper Union in Manhattan. I think we were all struck by how intelligent she seemed, how thoughtfully she related her experiences, and how plausible her answers appeared. In fact, we came away with the impression that there were two Monica Lewinskys—the one we’d just seen in person and the image the public had formed of her. And that got us thinking about research on the fundamental attribution error, which says we all tend to attribute other people’s behavior to personality while largely overlooking the situational factors. We thought Monica Lewinsky would make a fascinating case study of the fundamental attribution error, so we wrote her a letter—I recently came across it in my files—asking whether she’d be interested in talking to us….

Of course, the reason the case study never happened is that she never responded to our letter. We knew someone who knew someone who knew someone who was probably able to get the letter to her, so I do believe she received it. Who knows, if she had responded, the fundamental attribution effect might be known today as the Monica Lewinsky effect.

If I wasn’t doing this, I would be…A political scientist. I can talk politics with friends and family for hours on end (ask my wife).

The biggest challenge for our field in the next 10 years…If I had to pick only one—and I believe there are quite a number—then it’s to work together to integrate our theories. It’s been said that psychologists treat theories like toothbrushes (no self-respecting person wants to use someone else’s). I think there’s a lot to that, and we need to change this.

My advice for young researchers at the start of their career is…To read to the right and left of psychology, and to discuss your ideas with everyone around you. In my experience, new ideas don’t simply come to you but often arise in conversations, while attending a talk, or over coffee with colleagues.

Departmental page

Research Heroes: Jay Edward Russo

This week’s Research Hero is Prof. Jay Edward Russo. Prof. Russo received his PhD in Cognitive Psychology from the University of Michigan. He has been working at Cornell University since 1985, where he is the S.C. Johnson Family Professor of Management at the business school. He has also been on the faculty of the University of Chicago and the University of California, San Diego, as well as holding visiting positions at Bocconi University (Milan), Carnegie Mellon, Duke, and Penn (The Wharton School). Prof. Russo’s research focuses on managerial and consumer decision making, and among his most important contributions is his work on information distortion and process-tracing methods. Prof. Russo has published extensively in prestigious journals as well as co-authoring Winning Decisions (2002) and Decision Traps (1989). He has been on the editorial boards of leading journals such as Journal of Behavioral Decision Making, Journal of Consumer Psychology, Journal of Marketing, Journal of Personality and Social Psychology, Psychological Science, and many more. He has also done consulting work for the Federal Trade Commission, GTE Laboratories and General Motors Research Laboratories.

I wish someone had told me at the beginning of my career…Throughout your career, but especially prior to tenure, you will very likely be forced to make a tradeoff between good science and careerist tactics. A research topic that may contribute most to understanding J/DM may not be one that is currently well recognized and accepted by the field. The more novel the topic of one’s research, the more challenging will be its path to publication in journals, to grant support, and to other markers of acceptance by the field. The likelihood of lots of published papers is far greater if you work on currently accepted topics. You will need the publications, maybe many of them, to achieve careerist goals, especially tenure. The price to good science may be work that is incremental at best and “backfill” at worst.  I urge you to be fully aware of the tradeoffs that you make between better science and career advantage.

I most admire academically… because… Herb Simon, because he aimed so high as a scholar and as a citizen of his university and of the world at large – and because he was so successful as both a scientist and a citizen.

The best research project I have worked on during my career…/the project that I am most proud of/ that has inspired me most….I stumbled on the phenomenon of decision makers’ distorting new information to support the currently leading alternative. I investigated this predecisional distortion of information for a decade or so, revealing some of its manifestations, boundaries, and consequences. One strategy for good science is to try to identify the underlying causes that explain why a phenomenon occurs, in the hope that even one of those causes may be fundamental enough to explain other phenomena as well. The attempt to explain predecisional distortion led to work that identified the goal of cognitive consistency as the main driver. This work relied on multiple methods, including some new to me (semantic priming and a lexical decision task) or simply new (in-progress assessment of goal activation). The result was unexpected and quite clear: only cognitive consistency caused information distortion, with alternative goals like saving effort playing no role at all. Subsequent work has confirmed that the goal of cognitive consistency is at least one driver of several other J/DM phenomena, thus validating the scientist’s strategy of seeking depth of explanation.

The worst research project I have worked on during my career…/the one project that I should never have done… There is no one project that I regret. Rather, my regret is working on too many projects, drawn to each one because it was so genuinely interesting. I probably should have focused on those that were both most interesting and most important.

The most amazing or memorable experience when I was doing research… After so many decades of research (five), there are many experiences; but it is more categories than individual events that come to mind in responding to this question. For instance, when I was younger, it was a great pleasure to have a senior scholar whom I respected proffer kind words about my work. Now I have the pleasure of supporting young researchers, reminding them that it may take several good ideas to find one that is both worthwhile and feasible, and to remember, in their enthusiasm and impatience, that science is slow.

The one story I always wanted to tell but never had a chance…“There’s nothing new here.” These were the words of all three reviewers of one of the first submitted manuscripts on information distortion. Fortunately, each one identified a different well-known phenomenon of which information distortion was asserted to be merely another (unnecessary!) illustration. I do not recall the exact three, but early in this research stream the following were offered: attitude extremity/polarization, cognitive dissonance, confirmation bias, the desirability bias (wishful thinking), the halo effect, and the prior belief effect. Fortunately, the editor was sensitive to the unusual combination of reviewers’ complete agreement (“reject this manuscript”) and complete disagreement (“just another example of [three distinctly different phenomena]”). As a result, he gave me and my co-authors the chance to explain why there was, in fact, something new in the phenomenon of information distortion. The subsequent explanation was accepted, along with the manuscript. The lesson I took from this experience was how reviewers (which means most of us) can so naturally filter our judgments through our own lenses. The question that I ask myself is whether I have applied that lesson consistently when I evaluate others’ work. The answer: probably not, but I do keep trying.

A research project I wish I had done… And why did I not do it…I cannot claim to have no regrets whatsoever (that would be hubris), but none of them involve a research project that I regret not attempting.

If I weren’t doing this, I would be…Likely retired, an unpleasant thought. There is still tread left, so please don’t retire me.

The biggest challenge for our field in the next 10 years…One challenge is to encompass the growing breadth of J/DM phenomena and methods. Among the phenomena are those that are nonconscious, emotional, and contextual. Among the methods are those of neuroscience and of process tracing. In considering the opportunities and barriers to adopting these newer research topics and methods, I recall the observation that so often seems best to characterize a field’s response to such a situation, “We love progress; it’s change we hate”. My belief is that J/DM researchers, senior as well as junior, can master new methods and solve new problems. My hope is that more than a few will.

A second challenge is paradigmatic. J/DM emerged as a field by testing the optimal models of economics and statistics, especially EU and Bayesian updating. Violations of these models engendered the anomalies paradigm that has characterized J/DM for the last four decades. Let me suggest a challenge in the form of a question: what would J/DM look like if it were studied the way other higher-order psychological phenomena are approached, such as problem solving/reasoning and language comprehension? That is, what if we built theories of cognitive (and other) processes from process (and other) data, but without specifying optimal performance? Indeed, if we view behavior as driven by multiple goals, not all of which are even conscious, can we really specify optimal performance? What if, instead, we viewed our subjects as adapting to the task environment that we scientists create in order to perform sufficiently well rather than optimally? Great progress has been achieved in understanding how people read without the use of an optimal model of language comprehension. Might similar progress occur in J/DM by focusing less on how our observations compare to optimality criteria and more on the complexity of decision makers’ attempts to achieve multiple goals simultaneously?

My advice for young researchers at the start of their career is… Learn how to select research problems, not just how to solve them. Try to be strategic in how you approach your topics, colleagues, and journals. Often I’ve seen a graduate student (or a credentialed researcher) happy just to find a candidate problem: “That would make a dissertation topic” or “That could be publishable”. With my own students who are ready to find a dissertation problem, I ask them to identify three potential topics, to research each one for at least one week, and to evaluate their comparative merits. Then, and only then, do I want them to pick one.

Understand the J/DM paradigm in which you are working and think about whether a different one, maybe a newer one, might yield greater contributions to the field. Are input-output data sufficient, or would process data yield more insight? Is this the time or topic to bring in neuroscience? Should the analysis move from the attributes of the alternatives considered in a decision to the benefits that those attributes convey, or even to the goals that those benefits help to achieve? One of the books that most influenced my graduate training is Thomas Kuhn’s The Structure of Scientific Revolutions, which focused on scientific paradigms. I still begin my doctoral seminar by asking students to read it.

Departmental page

Research Heroes: Karl Halvor Teigen

Karl Halvor Teigen graduated as a psychologist from the University of Oslo in 1966, where he is now an emeritus professor of general psychology. He also held positions in cognitive psychology at the universities of Bergen and Tromsø (Norway), where he was for some years the northernmost professor of psychology in the world (until a colleague beat him by half a mile). He is a past president of EADM and has received an honorary doctorate from the University of Bergen. His main research interests concern probability judgments, including verbal probabilities, social cognition (counterfactual thinking), and the history of psychology.

I wish someone had told me at the beginning of my career… that I would have a career! I also wish I had been strongly encouraged to go abroad and to attend international conferences. Norwegian psychology at the time I graduated was quite provincial. In 1967 it was considered a big leap even to move from one Norwegian city (Oslo) to another (Bergen). It took me more than 15 years before I dared to step out on the international scene, so I now have to continue research far into senility to make up for those lost years.

I most admire academically… As a young student I came across “Chance, skill and luck” by John Cohen (Penguin Books, 1960). I admired his studies of psychological probability, which he combined with a rich historical perspective. In fact, this was a book I would have liked to have written myself. Later came Kahneman and Tversky, who did similar studies even better, except that they left out the historical aspect. In such cases it is hard to distinguish between envy and admiration, but it has fortunately been shown that benign envy outperforms admiration (Van de Ven, Zeelenberg & Pieters, 2011), so I can confess my benign envy for a number of scholars inside and outside of our field.

The best research project I have worked on during my career…/the project that I am most proud of/ that has inspired me most….Many years ago I became puzzled by the fact that newspaper articles about “lucky” people (with the exception of occasional lottery winners) almost invariably described accident victims. When I asked students to give autobiographical instances of their own luck, they produced similar, rather negative instances. Degrees of luck seemed to be almost completely determined by the discrepancy between what happened and what could have happened, that is, by close and worse counterfactuals. The closer and the worse they are, the luckier you feel. This issue has haunted me for years, partly because of its popular appeal (journalists love it), and partly because it can be linked to several other themes, like risk perception, counterfactual thinking, probability judgments, superstitions, and gratitude. But its main fascination resides in the observation that people seem to know it, through the stories they tell and the judgments they pass, yet our findings make them puzzled and surprised.

The worst research project I have worked on during my career…/the one project that I should never have done… I once had to conduct a research project with students and decided to spare them the background reading by finding a topic that had never been experimentally investigated before. It turned out that nobody had at that time studied “sighing” in healthy adults (it had been studied in patients with panic disorder and in rats), so we had to invent our own “sigh-cology”, for instance by observing participants working on insoluble puzzles. They had to give up every new attempt, and they sighed. I wrote a paper which, to my surprise, was accepted for publication, but it did not exactly revolutionize the (nonexistent) field. It would have remained a forgotten oddity, had I not suddenly received an invitation to receive the Ig Nobel prize in psychology from Improbable Research “for trying to understand why, in everyday life, people sigh”. So I had to go to Harvard for a parodic celebration of “research that makes people laugh and then think”. Or in our case: to make people think and then sigh.

The most amazing or memorable experience when I was doing research… There have been several such experiences. Doing research often feels like trying to force open a door that appears to be slightly ajar. You have an idea, a theory, an intuition that you feel could work, but the door proves surprisingly resistant to all applications of the foot-in-the-door technique. Then there are moments where the door simply needs a gentle push before swinging wide open. Such moments, when you get more than you asked for, are the researcher’s peak events. I experienced one almost 40 years ago when I first “discovered” that people consistently violated the 100% limit when estimating probabilities for several mutually exclusive alternatives. Again when I found that most people have to be unlucky to feel lucky, as described above; that they attach more confidence to specific (fallible) statements than to general (true) ones; that they think that events are more unlikely when they happen than when they do not occur; that negative outcomes are less surprising than equivalent positive ones; and several other robust paradoxes that seemingly defy common sense.

The one story I always wanted to tell but never had a chance… Peter Ayton already told the story of the spider in Cambridge, which I can confirm (although we may disagree about the details). I also experienced in Cambridge (at the same SPUDM meeting, I believe) my most successful presentation; the audience seemed more attentive to what I had to say than ever before (or since), showing their keen interest with synchronized head movements to the right and to the left, following me like a bunch of hypnotized cobras. Only after the talk did I discover that I had been standing in front of the projector, obstructing their view of the screen.

A research project I wish I had done… And why did I not do it… I am fascinated by the role chance plays in shaping our lives, from the small details that make our day amusing, to more momentous decisions about marriage and career. We once carried out a set of pilot interviews with colleagues, asking them about their choices of research themes. They seemed to believe in the idea of a recurrent theme, or common thread, running through their professional life, but when we pushed it further back they typically responded: “It all began quite accidentally”. We did not follow this up, for methodological, theoretical, and perhaps even philosophical reasons, but I wish there were a neat and tractable way to observe chance at work in real-life settings. Perhaps I will stumble over one, accidentally.

If I wasn’t doing this, I would be… perhaps a historian – of ideas, or of art. But every time I have had a brief encounter with these fields I have thanked God that I belong to a discipline where one can do experimental work, not restrained by events already settled in a hazy past, and where hypotheses can actively be put to the test. To indulge my historical interests I have published quite a bit on the history of psychology.

The biggest challenge for our field in the next 10 years… To disentangle the psychology of judgment from the psychology of decision making. These are, in my opinion, two overlapping themes rather than a single field. And even though I am strongly in favor of the cross-disciplinary applications of JDM in economics, management, political science, medicine, and law, I feel it is extremely important that it should keep, and perhaps expand, its psychological roots.

My advice for young researchers at the start of their career is… (1) Travel and seek new research environments; (2) realize that your freedom of choice concerning themes, ideas, theories, methods, and approaches is greater than you think; (3) listen to advice, so that you can disregard it on purpose, and have something to tell when Elina, Neda or their successors ask you, 20 years from now, what you wish someone had told you at the beginning of your career (they did).

Departmental page