Outside the Matrix: Dan Lockton

This week we’re returning to our Outside the Matrix series with Dan Lockton, a senior associate at the Helen Hamlyn Centre for Design, a specialist research centre at the Royal College of Art in London, who also does freelance work as Requisite Variety. He received his PhD in Design for Behaviour Change from Brunel University, based around the Design with Intent toolkit, and worked on behavioural research projects, particularly on energy use, at the University of Warwick and at Brunel before taking up his current role in a collaborative project between the RCA, Imperial College London, the Institute for Sustainability and a number of European partners. Before returning to academia, Dan worked on a range of commercial product design and R&D projects; he also has a Cambridge-MIT Institute Master’s in Technology Policy from the University of Cambridge (Judge Business School), and a BSc in Industrial Design Engineering from Brunel.
Tell us about your work: how does decision making psychology fit in it? All design necessarily embodies models of people’s behaviour—assumptions about how people will make decisions, and behave, when using, interacting with or otherwise experiencing products, services, or environments. It’s a fairly basic component of design, although it’s perhaps only rarely considered explicitly as being about decision making psychology. Whether or not designers think about their work in these terms, it is going to have an impact on how people behave, so it’s important to try to understand users’ decision processes, and how design affects them (or should be affected by them). So both in research projects themselves, and in teaching design students how to do ‘people-centred’ design research, psychology plays a big role in my work.

Understanding how different people make decisions, through research in real contexts, becomes even more crucial when trying to do ‘design for behaviour change’, of course. You end up (hopefully) confronting and questioning many of the models and assumptions that you previously had, and develop much more nuanced models of behaviour which usefully preserve the variety of real-life differences.

In my current main project, SusLab (which is a small part of a major European project), I’m working with Flora Bowden on reducing domestic energy use through a combination of technology and behaviour change, but we’re taking a much more people-centred approach than much of the work in this field has done previously—doing ethnographic research with householders to uncover much more detailed insights about what people are actually doing when they are ‘using energy’—the psychology of the decision processes involved, the mental models people have of the systems around them, and the social contexts of practices such as heating, entertainment and cleaning. We then co-design and prototype new products and services (somewhat grudgingly termed interventions) with householders, so that they are not test subjects, but participants in developing their own ways of changing their own behaviour. This is the Helen Hamlyn Centre for Design’s forte – including people better in design processes, from ageing populations and users with special needs to particular communities underserved by the assumptions embedded in the systems around them.

Reducing energy use is a major societal challenge—there is a vast array of projects and initiatives, from government, industry and academia as well as more locally driven schemes, all aiming to tackle different aspects of the problem. However, many approaches, including the UK’s smart metering rollout, largely treat ‘energy demand’ as something almost homogeneous, to be addressed primarily through pricing-based feedback, rather than being based on an understanding of why people use energy in the first place—what are they actually doing? We think that people don’t set out to ‘use energy’: instead, they’re solving everyday problems, meeting needs for comfort, light, food, cleaning and entertainment, with a heavy dose of psychology in there, and sometimes with an emotional dimension too.

Equally, people’s understandings—mental models—of what energy is, and how their actions relate to its use, and their use of heuristics for deciding what actions to take, are under-explored, and could be extremely important in developing ways of visualising or engaging with energy use which are meaningful for householders. This is where ethnographic research, and in-context research on decision-making in real life, can provide insights which are directly useful for the design process.  

The overall project covers a broad scope of work and expertise, including environmental scientists and architects alongside design researchers, and benefits from ‘Living Lab’ instrumented houses in each country, which will provide a platform (albeit artificial) for demonstrating and trialling the interventions developed, before they are installed in houses in real life.

How did you first become interested in decision making psychology? I first got interested in the area while doing my Master’s back in 2004-5. For my project, I was looking at how technologies, and the structure of systems, have been used to influence (and control) public behaviour, and as such, approaches such as B.J. Fogg’s Persuasive Technology were very relevant. While Persuasive Technology has tended not to employ ‘behavioural economics’ techniques too much, it was initially through this angle of ‘persuasion’ that I read people like Robert Cialdini, then followed the thread through to learn more about cognitive biases and heuristics, from authors such as Scott Plous, the Russell Sage Foundation-supported collections of Tversky, Kahneman, Gilovich, Slovic et al’s papers, then Gigerenzer and the ABC group’s work. Herbert Simon’s work has also been a huge influence, because his multidisciplinarity enabled so many parallels to be drawn between different fields. It was partly through his work, I think, that I became interested in cybernetics and this whole body of work from the 1940s onwards which attempted to draw together systems across human psychology, technology and nature, but which in public consciousness seems mainly to be about people with robotic hands.

In parallel, I was familiar with concepts such as heuristics, affordances and mental models from the cognitive ergonomics literature, one of the other main intersections between design and psychology. Here, the work of people such as Don Norman and Jakob Nielsen is hugely influential; this had first become interesting when I was in industry, working on some products which really would have benefitted from a better understanding of the intended customers’ perceptions, thought processes, needs and abilities, and I was hungry to learn more about how to do this. The idea of applying psychological insights to the design process greatly appealed to me: I had something of an engineer’s mindset that wanted, Laplace’s demon-like, to be able to integrate all phenomena, social and physical, into something ‘actionable’ from a design standpoint. While I now appreciate my naïvety, the vision of this ‘system’ was a good inspiration for taking things further.

For my PhD—supervised by David Harrison (Brunel) from the ‘design’ side and Neville Stanton (Southampton) from the ‘psychology’ side—I tried to bring together insights relevant to behaviour change from lots of different disciplines, including behavioural economics, into a form which designers could use during design processes, for products, services and environments, with a focus on influencing more sustainable and socially beneficial behaviour. Various iterations were developed, via lots of workshops with designers and other stakeholders, ending up with the Design with Intent toolkit. This is still a work in progress, though it’s had to take a back seat to some more practical projects in the last couple of years; I hope in 2014 to be able to release a new version together with, perhaps, a book.

Why did you decide to stay in academia instead of going into industry?
I like to think I’ve found the best of both worlds: the Helen Hamlyn Centre for Design acts as a consultancy for many of its projects with commercial clients, but also (as part of the Royal College of Art) works as part of many academic research projects (though always with a practical focus). During my first six months here, I’ve worked on commercial projects for new startups and a mobility products manufacturer, as well as two academic research projects. Alongside this job I also do some freelance consultancy in industry, which often involves running workshops on design and behaviour, writing articles, and generating early-stage ideas for companies interested in including a ‘behavioural’ element in their design processes.

There are advantages and disadvantages of academic and industrial work contexts. The freedom to pursue ‘pure’ knowledge (whatever that really means), and indeed more open-ended research, with longer timeframes, is a wonderful aspect of academia, a luxury that most companies cannot really afford given the constraints of the market. However, I found the bureaucracy at both Brunel and the University of Warwick crushingly slow: there was a lot of research that just never got done because the system made sure it took too long, or involved too much paperwork to bother with. That was deeply frustrating, when there are many very good researchers at both institutions who would thrive given a bit more freedom to do things. The RCA (perhaps because it’s so small) is refreshingly fast: it’s possible to decide to try something in the morning and go and do it in the afternoon, or even immediately.

Perhaps also, despite being relatively knowledgeable about behaviour change—one of the biggest buzzwords of the last five years!—I was very reluctant to go straight into a commercial application of the work which has no social benefit. I don’t want to use insights to sell people more things they don’t need, or exploit biases and heuristics to segment and profile consumers to target them with more advertising. I apply John Rawls’s ‘veil of ignorance’ wherever I can: I hate it when advertisers and marketers make assumptions about me, and my likely behaviour, so I don’t particularly want to do that to other people. That rules out a lot of organisations who want people with ‘behaviour change’ credentials.

What do you enjoy the most in your current role? Doing lots of projects is a lot of work, and there’s a tendency for this sort of thing to take over your life, but in all honesty this is a very enjoyable job. Meeting lots of different people—members of the public—and actually involving them in the research: designing with them rather than for them, is incredibly satisfying. Also, I think most of the people working for the Helen Hamlyn Centre, because their jobs involve so much research with the public, are genuinely nice people. So they’re great to work with.

Do you see any challenges to the wider adoption of decision making psychology in your field? Most designers are not trained in psychology, so there is always a barrier to adoption. There is also the risk that highly popularised approaches and trends, such as what Nudge has become, lose their nuance and the cautious scientific approach when they just become another soundbite or quick-fix ‘solution’, applied to any context without doing any actual user research. And I’m aware that Design with Intent was essentially this, a context-free toolbox of ideas to apply to any situation, and I now see it as a major flaw which needs to be addressed in future versions.

But if I see the DDB/VW Piano Stairs video one more time held up as a kind of universal panacea for deeply complex social problems (“Design can fix anything, just look at how they made taking the stairs fun!!!!”) then I’ll scream, or more likely mumble something grumpily at the back of the room.

How do you see the relationship between academic researchers and practitioners? Design isn’t really an academic subject in itself—it’s a process. I might have a PhD in it, but I’ll be honest and say that it’s lacking in a lot of formal theory. That isn’t a bad thing, necessarily—again, Herbert Simon (in The Sciences of the Artificial) and then Donald Schön (in The Reflective Practitioner) did good jobs of explaining, in different ways, why it is a qualitatively different approach to knowledge than the natural sciences—but what it does mean is that the most interesting and useful research for designers is often not in design at all, but in other fields that overlap. Designers need to be learning from psychologists, anthropologists, social researchers, economists, biologists, and actual practitioners in other fields. It also means there are a lot of design research papers which are basically restatements of the “What is design? What does it mean to be a designer?” question, which are fine but become tiring after a while.

So, to return to the question, academic ‘design’ research is generally very poor at being useful to practitioners. Part of this is the eternal language / framing barrier between academia and practice—there are so many assumptions about terminology and so on which prevent easy engagement—but there is also the access problem. Design consultancies very rarely subscribe to academic journals, and even if they do subscribe to design journals, it’s probably journals from outside the field (see above) that would bring more useful insights anyway. When I did a brief survey on this, these were a few of the points which came up.

What advice would you give to young researchers who might be interested in a career in your field? I would very much like to see more designers drawing on the heuristics work of Gerd Gigerenzer, Peter Todd, et al, and exploring what this means in the context of design for behaviour change and design in general: bounded rationality, seen as a reality and as essentially adaptive rather than a ‘defect’ in human decision-making, seems to marry up quite well with the tenets of ethnography and people-centred design. Some people have started to do it, e.g. Yvonne Rogers at UCL, but there is a massive opportunity for some very interesting work here.

Also, consider cybernetics. Read Hugh Dubberly and Paul Pangaro’s work and think about systems more broadly than the disciplinary boundaries within which you may have been educated. In general, read as much as you can, outside of what you think ‘your subject’ is. The most interesting innovations always occur at the boundaries between fields.

More than anything else, work on projects where you do research with real people, in real, everyday life contexts, rather than only in lab studies. It will change how you model behaviour, how you think about people, and how you understand decision making.

Visit Dan’s website: http://architectures.danlockton.co.uk/dan-lockton/

Viewpoint: Why I’m Leaving Academia

This week we’re featuring a guest post from Ben Kozary, a PhD candidate at the University of Newcastle in Australia. After getting to know Ben at various conferences over the past year, the InDecision team was disappointed to hear about his decision to leave academia – partly because he’s an excellent and passionate researcher, partly because we wouldn’t benefit from his jovial company at future conferences! However, his reasons for leaving echoed many dinner conversations we’ve had with fellow PhD students, so we asked him to write about his experience and his decision to move to industry. Over to Ben…

To say I’ve learnt a lot during my PhD candidature would be an understatement. From a single blank page, I now know more than most people in the world about my particular topic area. I understand the research process: from planning and designing a study; to conducting it; and then writing it up clearly – so that readers may be certain about what I did, how I did it, what I found, and why it’s important. I’ve met a variety of people from around the world, with similar interests and passions to me, and forged close friendships with many of them. And I’ve learnt that academia might well be the best career path in the world. After all, you get to choose your own research area; you have flexible working hours; you get to play around with ideas, concepts and data, and make new and often exciting discoveries; and you get to attend conferences (meaning you get to travel extensively, and usually at your employer’s expense), where you can socialise (often at open bars) under the guise of “networking”. Why, then, you might be wondering, would I want to leave all of that behind?

My journey through the PhD program has been fairly typical; I’ve gone through all of the usual stages. I’ve been stressed in the lead-up to (and during) my proposal defence. I’ve had imposter syndrome. And I’ve been worried about being scooped, and/or finding “that paper”, which presents the exact research I’m doing, but does it better than me. But now, as I begin my final year of the four year Australian program, I’m feeling comfortable with, and confident in, the work I’ve produced so far in my dissertation. And yet, I’m also disillusioned – because, for all of its positives, I’ve come to see academia as a broken institution.

That there are problems facing academic research is not news, especially in psychology. Stapel and Smeesters, researcher degrees of freedom and bias, (the lack of) statistical power and precision, the “replication crisis” and “theoretical amnesia”, social and behavioural priming: the list goes on. However, these problems are not altogether removed from one another; in fact, they highlight what I believe is a larger, underlying issue.

Academic research is no longer about a search for the truth

Stapel and Smeesters are two high profile examples of fraud, which represents an extreme exploitation of researcher degrees of freedom. But what makes any researcher “massage” their data? The bias towards publishing only positive results is no doubt a driving force. Does that excuse cases of fraud? Absolutely not. My point, however, is that there are clear pressures on the academic community to “publish or perish”. Consequently, academic research is largely an exercise in career development and promotion, and no longer (if, indeed, it ever was) an objective search for the truth.

For instance, the lack of statistical power evident in our field has been known for more than fifty years, with Cohen (1962) first highlighting the problem, and Rossi (1990) and Maxwell (2004) providing further prompts. Additionally, Cohen (1990; 1994) reminded us of the many issues associated with null-hypothesis significance testing – issues that were raised as far back as 1938 – and yet, it still remains the predominant form of data analysis for experimental researchers in the psychology field. To address these issues, Cohen (1994: 1002) suggested a move to estimation:

“Everyone knows” that confidence intervals contain all the information to be found in significance tests and much more. […] Yet they are rarely to be found in the literature. I suspect that the main reason they are not reported is that they are so embarrassingly large! But their sheer size should move us toward improving our measurement by seeking to reduce the unreliable and invalid part of the variance in our measures (as Student himself recommended almost a century ago). Also, their width provides us with the analogue of power analysis in significance testing – larger sample sizes reduce the size of confidence intervals as they increase the statistical power of NHST. 

Twenty years later, and we’re finally starting to see some changes. Unfortunately, the field now has to suffer the consequences of being slow to change. Even if all our studies were powered at the conventional level of 80% (Cohen, 1988; 1992), they would still be imprecise; that is, the width of their 95% confidence intervals would be approximately ±70% of the point estimate or effect size (Goodman and Berlin, 1994). In practical terms, that means that if we used Cohen’s d as an effect size metric (for the standardised difference between two means), and we found that it was “medium” (that is, d = 0.50), the 95% confidence interval would range from 0.15 to 0.85. This is exactly what Cohen (1994) was talking about when he said the confidence intervals in our field are “so embarrassingly large”: in this case, the interval tells us that we can be 95% confident the true effect size is potentially smaller than “small” (0.20), larger than “large” (0.80), or somewhere in between. Remember, however, that many of the studies in our field are underpowered, which makes the findings even more imprecise than what is illustrated here; that is, the 95% confidence intervals are even wider. And so, I wonder: How many papers have been published in our field in the last twenty years, while we’ve been slow to change? And how many of these papers have reported results at least as meaningless as this example?
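For anyone who wants to verify that arithmetic, here is a minimal Python sketch – an illustration under stated assumptions, not a reanalysis of any actual study. It assumes two groups of n = 64 each (roughly what 80% power to detect d = 0.50 at a two-tailed alpha of .05 requires) and uses the standard large-sample approximation for the standard error of Cohen’s d:

```python
import math
from scipy import stats

# Assumed scenario: 80% power to detect d = 0.50 (two-tailed, alpha = .05)
# implies roughly n = 64 per group (Cohen, 1988).
n1 = n2 = 64
d = 0.50

# Large-sample approximation to the standard error of Cohen's d
se_d = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))

z = stats.norm.ppf(0.975)  # critical value for a 95% interval
lower, upper = d - z * se_d, d + z * se_d
print(f"d = {d:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
# -> d = 0.50, 95% CI [0.15, 0.85] -- about +/-70% of the point estimate
```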

I suspect that part of the reason for the slow adoption of estimation techniques is the uncertainty they bring to the data. Significance testing is characterised by dichotomous thinking: an effect is either statistically significant or it is not. That dichotomy makes significance testing seem easier to conduct and analyse than estimation; however, it does not allow for the same degree of clarity in our findings. By reporting confidence intervals (and highlighting uncertainty), we reduce the risk of committing one of the cardinal sins of consumer psychology: overgeneralisation. Furthermore, you may be surprised to learn that estimation is just as easy to conduct as significance testing, and even easier to report (because you can extrapolate greater meaning from your results).
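To make that last point concrete, here is a short, hypothetical sketch using simulated data (not anyone’s actual study): the confidence interval for a two-group comparison takes barely more code than the p-value, and it says how large the effect plausibly is, rather than merely whether it “exists”.

```python
import numpy as np
from scipy import stats

# Simulated example: a 'medium' true effect (d = 0.5), n = 64 per group
rng = np.random.default_rng(42)
treatment = rng.normal(0.5, 1.0, 64)
control = rng.normal(0.0, 1.0, 64)

# Significance testing: one call, one dichotomous answer
t_stat, p_value = stats.ttest_ind(treatment, control)

# Estimation: the 95% CI for the mean difference takes a few more lines
diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / 64 + control.var(ddof=1) / 64)
half_width = stats.t.ppf(0.975, df=126) * se  # df = n1 + n2 - 2
print(f"p = {p_value:.3f}")
print(f"difference = {diff:.2f}, "
      f"95% CI [{diff - half_width:.2f}, {diff + half_width:.2f}]")
```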

Replication versus theoretical development

When you consider the lack of precision in our field, in conjunction with the magnitude of the problems of researcher degrees of freedom and publication bias, is it any wonder that so many replication attempts are unsuccessful? The issue of failed replications is then compounded further by the lack of theoretical development that takes place in our discipline, which creates additional problems. The incentive structure of the academic institution means that success (in the form of promotion and grants) comes to those who publish a high number of high quality papers (as determined by the journal in which they are published). As a result, we have a discipline that lacks both internal and external relevance, due to the multitude of standalone empirical findings that fail to address the full scope of consumer behaviour (Pham, 2013). In that sense, it seems to me that replication is at odds with theoretical development, when, in fact, the two should be working in tandem; that is, replication should guide theoretical development.

Over time, some of you may have observed (as I have) that single papers are now expected to “do more”. Papers will regularly report four or more experiments, in which they will identify an effect; perform a direct and/or conceptual replication; identify moderators and/or mediators and/or boundary conditions; and rule out alternative process accounts. I have heard criticism directed at this approach, usually from fellow PhD candidates, that there is an unfair expectation on the new generation of researchers to do more work to achieve what the previous generation did. In other words, that the seminal/classic papers in the field, upon which now-senior academics were awarded tenure, do less than what emerging and early career researchers are currently expected to do in their papers. I do not share this view that there is an issue of hypocrisy; rather, my criticism is that as the expectation that papers “do more” has grown, there is now less incentive for academics to engage in theoretical development. The “flashy” research is what gets noticed and, in turn, what gets its author(s) promoted and wins them grants. Why, then, would anyone waste their time trying to further develop an area of work that someone else has already covered so thoroughly – especially when, if you fail to replicate their basic effect, you will find it extremely difficult to publish in a flagship journal (where the “flashiest” research appears)?

This observation also raises the question: where has this expectation that papers “do more” come from? As other scientific fields (particularly the hard sciences) have reported more breakthroughs over time, I suspect that psychology has desired to keep up. The mind, however, in its intangibility, is too complex to allow for regular breakthroughs; there are simply too many variables that can come into effect, especially when behaviour is also brought into the equation. Nowhere is this issue highlighted more clearly than in the case of behavioural priming. Yet, with the development of a general theory of priming, researchers can target their efforts at identifying the varied and complex “unknown moderators” of the phenomenon and, in turn, design experiments that are more likely to replicate (Cesario, 2014). Consequently, the expectation for single papers to thoroughly explain an entire process is removed – and our replications can then do what they’re supposed to: enhance precision and uncover truth.

The system is broken

The psychology field seems resistant to returning to simpler papers that take the time to develop theory, and contribute to knowledge in a cumulative fashion. Reviewers continue to request additional experiments, rather than to demand greater clarity from reported studies (for example, in the form of effect sizes and confidence intervals), and/or to encourage further theoretical development. Put simply, there is an implicit assumption that papers need to be “determining” when, in fact, they should be “contributing”. As Cumming (2014: 23) argues, it is important that a study “be considered alongside any comparable past studies and with the assumption that future studies will build on its contribution.”

In that regard, it would seem that the editorial/publication process is arguably the larger, underlying issue contributing (predominantly, though not necessarily solely) to the many problems afflicting academic research in psychology. But what is driving this issue? Could it be that the peer review process, which seems fantastic in theory, doesn’t work in practice? I believe that is certainly a possibility.

Something else I’ve come to learn throughout my PhD journey is that successful academic research requires mastery of several skills: you need to be able to plan your time; communicate your ideas clearly; think critically; explore issues from a “big picture” or macro perspective, as well as at the micro level; undertake conceptual development; design and execute studies; and be proficient at statistical analysis (assuming, of course, that you’re not an interpretive researcher). Interestingly, William Shockley, way back in 1957, posited that producing a piece of research involves clearing eight specific hurdles – and that these hurdles are essentially all equal. In other words, successful research calls for a researcher to be adept at each stage of the research process. However, in reality, it is often the case that we are very adept (sometimes exceptional) at a few aspects, and merely satisfactory at others. The aim of the peer review process is to correct or otherwise improve the areas we are less adept at, which should – theoretically – result in a strong (sometimes exceptional) piece of research. Multiple reviewers evaluate a manuscript in an attempt to overcome these individual shortfalls; yet, look at the state of the discipline! The peer review process is clearly not working.

I’m not advocating abandoning the peer review process; I believe it is one of the cornerstones of scientific progress. What I am proposing, however, is an adjustment to the system – and I’m not the first to do so. What if we, as has been suggested, moved to a system of pre-registration? What if credit for publications in such a system were two-fold, with some going towards the conceptual development (resulting in the registered study), and some going towards the analysis and write-up? Such a system naturally lends itself to specialisation, so, what if we expected less of our researchers? That is, what if we were free to focus on those aspects of research that we’re good at (whether that’s, for example, conceptual development or data analysis), leaving our shortfalls to other researchers? What if the peer review process became specialised, with experts in the literature reviewing the proposed studies, and experts in data analysis reviewing the completed studies? This system also lends itself to collaboration and, therefore, to further skill development, because the experts in a particular aspect of research are well-recognised. The PhD process would remain more or less the same under this system, as it would allow emerging researchers to identify – honestly – their research strengths and weaknesses, and then specialise after they complete grad school. There are, no doubt, issues with this proposal that I have not thought of, but to me, it suggests a stronger and more effective peer review process than the current one.

A recipe for change

Unfortunately, I don’t believe these issues that I’ve outlined are going to change – at least not in a hurry, if the slow adoption of estimation techniques is anything to go by. For that reason, when I finish my PhD later this year, I will be leaving academia to pursue a career in market research, where obtaining truth from the data to deliver actionable insights to clients is of the utmost importance. Some may view this decision as synonymous with giving up, but it’s not a choice I’ve made lightly; I simply feel as though I have the opportunity to pursue a more meaningful career in research outside of academia – and I’m very much looking forward to the opportunities and challenges that lie ahead for me in industry.

For those who choose to remain in academia, it is your responsibility to promote positive change; that responsibility does not rest solely on the journals. It has been suggested that researchers boycott the flagship journals if they don’t agree with their policies – but that is really only an option for tenured professors, unless you’re willing to risk career self-sabotage (which, I’m betting, most emerging and early career researchers are not). The push for change, therefore, needs to come predominantly (though not solely) from senior academics, in two ways: 1) in research training, as advisors and supervisors of PhDs and post-docs; and 2) as reviewers for journals, and members of editorial boards. Furthermore, universities should offer greater support to their academics, to enable them to take the time to produce higher quality research that strives to discover the truth. Grant committees, also, may need to re-evaluate their criteria for awarding research grants, and focus more on quality and meaningful research, as opposed to research that is “flashy” and/or “more newsworthy”. And the next generation of academics (that is, the emerging and early career researchers) should familiarise themselves with these issues, so that they may make up their own minds about where they stand, how they feel, and how best to move forward; the future of the academic institution is, after all, in their hands.

 

Outside The Matrix: Paul Picciano

In our first 2014 Outside The Matrix interview we meet Paul Picciano, a Senior Human-Systems Engineer at Aptima, Inc., a leading human-centered engineering firm based near Boston, MA. At Aptima, he applies a diverse set of cognitive engineering methods to improve human performance in the military, intelligence community, air traffic management, and health care. His approach to supporting humans operating in complex environments leverages system design and training to enhance decision processes. Dr. Picciano earned a Ph.D. in Cognitive and Neural Science from the University of Utah, an M.S. in Human Factors and Ergonomics from San Jose State University, and a B.S. in Mechanical Engineering from Tufts University.

Dr. Picciano was also one of the speakers at the InDecision dinner for young researchers organised at the recent Society for Judgment and Decision Making conference in Toronto. 

Tell me about your work: how does decision making psychology fit in it? Most of the work we do involves human operators that must collect data from the environment, analyze and make sense of the input, and select and execute a course of action. The conditions under which they work typically involve uncertainty and time pressure, modulated by goals, objectives, and priorities that change over time.

My favorite part of the job is getting out there and observing and interacting with the experts (and sometimes novices) performing their craft. This has provided access to operating rooms, air traffic control towers, Navy ships, and various command centers for organizations ranging from the Air Force to the CDC. When it’s time to run a more controlled study, there is great access to high fidelity simulators at some of the top government and academic labs.

At Aptima, psychology plays a large part in much of our work. We provide services such as training, organizational analysis, and system design, by employing practitioners from industrial/organizational, cognitive, and neural disciplines across our portfolio. Most of my work is rooted in cognitive science, looking at perception, attention, and decision making as mechanisms for behavior and resultant task performance. It’s critical to understand how people process information. Empirical findings continue to demonstrate the magnitude of the influence of environments and decision architectures on the human operator in all domains. Many operators confront stressful situations, data overload, and conflicting objectives, so having a grasp of these psychological aspects helps us design more accommodating systems and better training programs to prepare them. But of course, we don’t always get it exactly right…

Why did you decide to go into industry instead of continuing in academia? I was in industry before I went to graduate school – I worked for five years after college, and thought I would just go back for an MS and return to the workforce. Plans changed when I realized how much I enjoyed being back in school and doing applied research (at NASA Ames). I found Aptima during this time and was tempted to leave, but I decided to continue school.
One might ask why I didn’t change my target over the next few years. First, I was committed to completing the PhD program. Second, I continued to be enamored with the academic environment. It is a great opportunity to interact with bright colleagues and an energetic student population with the benefits of a flexible schedule. I was even able to coach lacrosse in grad school and that may have been an option if I had chosen to work on campus long term.
However, I really enjoy the diversity a consulting role provides, interacting with customers in a wide range of domains and problems. I believed industry would provide me more of those experiences and greater opportunity to travel to see different types of operations. I was also very fortunate to find advisors that supported my path away from academia.

How did you first become interested in decision making psychology? Psychologists run such clever experiments. That’s probably what hooked me. The experimental designs and results from people like Milgram, Festinger, Tversky & Kahneman, Loftus, and Ariely are not just fascinating, they’re also actionable. Designers of systems, policies, and organizational structures can leverage these findings to make things better.

I view so much of behavior as a result of decision making – whether implicit or explicit, automatic or deliberate, with intuition or reasoning as the driving mechanism. Even at the perceptual, instantaneous level, I still see these reactions as decision making. In the heart of the NFL playoffs right now, the analysts always talk about quarterback decision making. These are trained, perceptually-driven, goal-directed actions that are dictated by the environment, expectations and training. Similarly, coaches are making decisions on fourth down and general managers are making draft decisions. For all of these decision types, there is a great deal in the scientific literature that could improve these decision processes (if any NFL owners are reading this, I can make myself available for a consulting gig!).

What type of research do you find most interesting, useful or exciting? In my opinion, the most valid research emerges when we have the opportunity to marshal a diversity of research techniques that includes observations in naturalistic settings, high fidelity simulations, and tightly controlled and focused research settings. Converging evidence from these perspectives offers the best opportunity to build a strong case for your findings. However, rarely can we pull all of that off in a single project. There usually are not enough resources to cover the problem space to this degree (the government labs seem to more often have the time and funding for such investigations). It’s pretty impressive how realistic well-crafted simulations can feel to participants. We have been able to make senior physicians and air traffic controllers break into a sweat even though no human lives were ever at risk.

One of my most exhilarating days of “research” involved observing the training procedures for landing the U2 aircraft. The U2 has a long nose, making it difficult for pilots to see the ground. The training method involves other pilots on the ground guiding the aircraft down by calling out the number of feet the jet is above the ground just prior to touching down (“15ft…10ft…8ft…” etc.). These callouts come from fellow pilots in zippy little sports cars waiting for the U2 to pass overhead and then chasing it down the runway at over 100mph. I was fortunate enough to ride shotgun in one of two chase cars that followed the aircraft down the runway, in formation, close enough to make accurate distance calls between the landing gear and the runway.

Do you see any challenges to the wider adoption of decision making psychology in your field? There are always challenges; one constantly in need of solutions is that of establishing useful, collectible measures. Part of this requirement stems from the responsibility of presenting a strong return on investment (ROI) argument. In research and development, technology often grabs attention and funding. It is compelling when a company makes a battery that is smaller and has a longer life – that’s justified spending. It’s more difficult to convince a sponsor that you have improved the decision making process for a group of analysts. The bright side is that the military is responsive to decision making research. There are specific programs (and funding) in place for efforts such as training small unit leaders and building decision support elements for tasks including weapons deployment, intelligence analysis, and air traffic management.

How do you see the relationship between academic researchers and practitioners? I think the classic model is that academia is doing the “basic science” and practitioners are applying that science to real-world problems. I believe it is much more than that. We have great partnerships with universities on many active projects, and they are involved in the full range of project activities. They are more than just a place to run first-year psych students through a basic experiment. They are great thought partners and often the first to have produced or read about a new study. Many academics have security clearances, and many are consulting on the side. This makes it easy to engage them on a few levels beyond traditional roles. I also believe that practitioners can help develop new problems of interest for academics to investigate. We really enjoy our interactions with academia.

What advice would you give to young researchers who might be interested in a career in your field? Don’t be afraid to shape your own future. Figure out what you really like to do. Find companies and people that are doing that type of work and engage them. Don’t be frustrated by the fact that your keyword search returns 0 matching job titles. This is a growing field, and most people don’t know much about it. Tell them about it. Show them how you can be useful. If you can help them understand or even predict (with some accuracy) the decisions that will be made by their clients, staff, or management, you can be useful to them. Show that you can help them design choice architectures in their favor, impacting their bottom line, or contribute to community improvements – it will be hard to ignore you.

In my job search, I looked for companies, not job titles or employment ads. Go to conferences and interact with as many people as you can. They won’t all help you, but many are willing. Build your network. There is so much going on out there, so many roles that we don’t even know about. Get yourself out there so you can stumble upon it.

Paul’s profile on Aptima website (incl. publications)

In The Wild: Tom Wein

In our first In The Wild interview of 2014, we speak to Tom Wein, a behavioural change consultant who has led major primary research projects to tackle counter-radicalization, aid security sector reform, plan public diplomacy efforts and design communication strategies. He currently works on behavioural change for national security, principally for the consultancy SCL. He read War Studies at King’s College, London, and has also worked for the European Defence Agency in Brussels as a communications consultant.

Tell me about your work: how does decision making psychology fit in it? We conduct research projects for the US and UK militaries in fragile and conflict-affected states, and design interventions to reduce violence. The one thing we’re always trying to explain is that just asking people about their attitudes isn’t enough – you need to examine their psychology in order to change behaviour. So we measure concepts which psychologists will be familiar with, like self-efficacy, motivations and reward structures, to build up a much deeper picture of a group; that way we can come up with much more effective ways to solve the problem.

Of course, nobody will pay us to do research in Switzerland – our projects are invariably in places where high quality research is difficult. We can partly solve those problems through good recruitment and training, and through building in redundancy, but crucial to the way we research is a process of triangulation. Some problems are inevitable, given the challenges, but two research strands are unlikely to go wrong in identical ways, so we focus on those findings that are confirmed by several sources. We generally use a mixture of semi-structured depth interviews and surveys (containing scales), plus a few focus groups and more free-form interviews with experts at either end of the process, to inform that process.

How did you first become interested in decision making psychology? Like a lot of people studying conflict, I was frustrated with the crudeness of the military’s tools in fighting the deeply complex wars in Iraq and Afghanistan – wars that were defined by our ability to win over the very people we kept accidentally killing. At the same time, I was shocked (I still am!) at how much money was being spent on projects and policies with only the flimsiest evidence base. Those two ideas were crystallized when I came to work for SCL, and found that there was a better, more intelligent way of doing things.

What type of research do you find most interesting, useful or exciting? I am always, always looking for field trials. Hypotheses are great, and laboratories are wonderful places, but I want you to prove that your thing could work in the messiness of the real world (and that doesn’t mean testing on American college students!). No doubt lots of the readers will be familiar with the work of Chris Blattman, whose work in Liberia and Uganda is magnificent stuff. The younger members of the development industry really do ‘get’ evidence and research, even if they’re still sometimes fighting their elders. When I argue that you’ve got to look at groups, rather than humans in general, constantly in the background is the work of Stathis Kalyvas, who has written powerfully about the impact of very local conditions on the conduct of wars.

Other than that, I am always more excited by elegantly written work, and by work that is open access. Those factors are much more important than the field a paper comes from. I’m also suspicious of the validity of findings in different contexts, so I’m often looking for research conducted in the country I’m studying at the time.

Do you see any challenges to the wider adoption of decision making psychology in your field? There’s an awful lot of persuading still to do. In the UK, the Behavioural Insights Team has been invaluable in persuading people that they ought to do research before taking a decision, but in the US there’s a complete focus on very simplistic attitude surveys, if they do research at all. Part of the problem is that comprehensive research projects in warzones are really expensive – it’s a lot cheaper to just do a quick poll.

How do you see the relationship between academic researchers and practitioners? We’ve been quite lucky in that respect – there is a reasonable-sized cohort of academic researchers who have been doing some exciting research in this field, and they’ve been generous with their time, especially when we’re trying to learn about and plan research in a new country. As I hinted above, I can get quite frustrated with the academic system, but that hasn’t prevented us from working well with individual academics.

What advice would you give to young researchers who might be interested in a career in your field? The first thing is to learn some quantitative skills. There are lots of people who can write essays out there; you’re far more likely to get an interesting job if you can also analyze data. The second, rather depressing, thing to say is that there are fewer and fewer full time jobs where you’ll get trained up – you may well have to fight for a series of short term projects before you get hired properly. Therefore, make contacts, network, and use your time at university effectively (including begging professors for introductions) – you’ll never have so much time again. Finally, if you’re in London, go to the monthly behavioural economics networking drinks!

Twitter

Happy New Year from InDecision!

It’s been a little over a year since we started this blog, with the hope of attracting a couple of hundred readers. Instead, we’ve had over 70,000 hits with over 35,000 visitors from 155 countries. The top 10 countries for visitors were:

  1. United States
  2. United Kingdom
  3. Canada
  4. Germany
  5. The Netherlands
  6. Australia
  7. India
  8. Singapore
  9. Sweden
  10. Switzerland

So far, so predictable! But who does JDM research in Honduras, Kyrgyzstan, Mongolia, Vanuatu, Sudan, Rwanda, Ghana, Nicaragua, Bermuda, Bhutan, Barbados or Bolivia? If that’s you, get in touch – we’d love to speak to you and hear about JDM research in your country. This year, we’ll address one of the issues highlighted by professor Dan Ariely in his interview and start to look at what impact culture might have on decision making science through a series of interviews focusing on the challenges (and opportunities!) cross-cultural psychology might pose for JDM.

In case you missed them the first time, the top 10 posts from the year are:

  1. Research Heroes: Richard Thaler
  2. In The Wild: Rory Sutherland
  3. Outside The Matrix: Paul Litvak
  4. The Seven Sins of Consumer Psychology
  5. Research Heroes: George Loewenstein
  6. Viewpoint: The role of revealed research preferences
  7. Outside The Matrix: Jolie Martin
  8. In The Wild: Kelly Peters
  9. Research Heroes: Colin Camerer
  10. Research Heroes: Gerd Gigerenzer 

We’ve been incredibly lucky in being able to interview some amazing people in our field, and we can’t thank them enough for giving their time to answer our questions. On behalf of all the people who have thanked us for running the blog, please know that your contribution is widely appreciated and makes a big difference to young researchers around the world.

The original aim of the blog was to give young researchers a voice. We’ve taken some steps in that direction by growing the team with sub-editors Caroline Roux, Shereen Chaudry and Leigh Caldwell as well as our dedicated contributor Troy Campbell. In 2014, we’ll start to feature young researchers more regularly through a new interview series. We’ll also widen our net for career advice to include researchers who are making waves early on in their careers and shaping the field as they go.

We’d also welcome submissions from readers: if you’re a young researcher and have just published an awesome paper you want to tell the world about, get in touch. Since subtle hints and words of encouragement have so far fallen on deaf ears, let us put this bluntly: blatant self-promotion is OK, and strongly encouraged. One of the main goals of this blog is to give young scholars a platform to share and discuss their work, but we cannot achieve this goal without your contribution!

Finally, one of the emerging trends in our field is the rising popularity of field studies and applying the science in both the policy and commercial worlds, with many of our Research Heroes highlighting the need to connect our work with the outside world. However, such work is extremely challenging and we have much to learn from the pioneers, so in 2014 we’ll also be speaking to those who have made early inroads into taking decision making science out of the lab and into the Real World.

As always, we welcome your feedback and contribution – please don’t hesitate to get in touch and let us know what you think!

We hope that you’ll enjoy the next year with us.

Elina & Neda

Research Heroes: Shlomo Benartzi

As one of the last posts this year, we’re featuring our 28th Research Hero: professor Shlomo Benartzi from UCLA Anderson School of Management, a leading authority on behavioral finance with a special interest in household finance and participant behavior in retirement savings plans. His most significant research contribution is the co-development of Save More Tomorrow (with Richard Thaler), a behavioral prescription designed to help employees increase their savings rates gradually over time. Professor Benartzi has also supplemented his academic research with both policy work and practical experience through advising government agencies in the U.S. and abroad as well as helping to craft numerous legislative efforts and pension reforms. In addition, he has also worked with many financial institutions as an academic advisor. His latest initiative is Digitai.org, where he’s exploring new digital interventions that will help consumers, businesses and policymakers leverage behavioral research.

I wish someone had told me at the beginning of my career… that you should only do research you are really passionate about. Research often requires years and years of sustained effort, so unless you have a passion for these ideas, then you will almost certainly give up. (It’s like a marriage in that sense.) There’s also something magical that happens when you are passionate about the research. Not only is the work more fun, but it somehow gets published. Don’t ask me how it happens.

I most admire academically… I’m going to cheat and give you three names. The first person is Danny Kahneman. Not only is he super brilliant, but he’s also very insightful about questions outside his area of expertise. He never gives in to pressure, and always does what he thinks is right academically. Richard Thaler I admire for the diversity of his research program, and also his ability to see the big picture. John Payne is incredibly humble, yet an unusually deep thinker.

The project that I’m most proud of is… Save More Tomorrow, a little idea Thaler and I came up with that led more than 4 million people to double their savings rate. We weren’t particularly brilliant, but we were persistent and with a bit of luck we made a big difference.

The one project that I should never have done… I’m still trying to forget that.

The most amazing or memorable experience when I was doing research… I was salsa dancing at the boathouse in Santa Monica and chatted with my friend Brian Tarbox, who worked in the finance industry. I told him about my idea for Save More Tomorrow; I didn’t even think he was listening. Several years later I heard from him again and he handed over an Excel spreadsheet with all the data. He said, “I have some good news: I tried out the Save More Tomorrow idea and it works. Your program quadrupled the savings rate of these low-income people.”

The one story I always wanted to tell but never had a chance… what I’d really love to do is follow-up with those people in Brian’s spreadsheet. The company insisted on being anonymous, and Brian passed away, but I’d love to know how those individuals are doing now. Are they still saving more? Have they managed to retire with dignity?

A research project I wish I had done… I had this hunch that automatic enrolment in a retirement savings plan would get a lot more people to start saving, but that it might also lead to a decrease in aggregate savings, since the default saving rate is typically very low, often around 3 percent. I wanted to test out my hunch, but Brigitte Madrian tested it out first and did a superb job.

If I wasn’t doing this, I would be… an unhappy architect. I love good architecture, but if it was my profession then it would no longer be a fun hobby. I would have to pay the bills and deal with clients.

The biggest challenge for our field in the next 10 years… is increasing our impact. How do we take these proven behavioral insights and scale them up? How do we solve big societal problems around health care or retirement savings or education? In my future work, I’m going to explore how we can use the digital revolution to accelerate the pace of change. With digitai.org, I want to test out new digital interventions that will help consumers, businesses and policymakers leverage all of this new research. I think that smartphone in your pocket represents a tremendous opportunity to help people think better and make better choices, but we have to get it right.

My advice for young researchers at the start of their career is… not to listen to me! Every young researcher needs to tailor their journey to their particular set of skills, interests and weaknesses. Find your own passion. Don’t follow mine.

Departmental website | Digitai.org

SJDM 2013: InDecision team recommends…

Greetings from Toronto and the annual conference for the Society for Judgment and Decision Making! With the help of the InDecision team, we’ll be covering the best bits of the conference for you if you couldn’t make it (and even if you are here, we’ll have something for you, too). With dozens of great sessions on offer this weekend, choice overload is pretty much guaranteed. But fear not: we’ve scoured the program and selected the best ones to help you make the most of the conference. Here’s where you’ll find the InDecision team this weekend…

Caroline’s picks

Research and Academia (Session #7) Questionable research practices. Misunderstanding or misuse of statistics. Lack of reproducibility. Many academic fields are currently going through several research-related crises and controversies. Different solutions are being proposed to improve the ways we conduct research, but I sometimes find it hard to keep up with all the suggested improvements for our different research practices. That is why I am always looking forward to conference sessions that can help me stay up to date with the most recent developments. The four papers presented in the session cover important issues, such as the replicability and reliability of behavioral research findings, and, most importantly, provide interesting solutions that I am really looking forward to learning more about. How to find it: Sunday, November 17, 2:45-4:15 pm, Civic South

The Relationship Between Altruism and Personal Benefits (Session #4) The existence of altruism, or whether humans can ever transcend self-interest, is an age-old question that is constantly being debated across different fields. It is a debate that I find quite interesting, so I am always drawn to conference sessions that provide new ideas, or revisit old ones, on the topic. I find this session particularly interesting because it explores the interplay between altruism and personal benefit and offers interesting findings about, among other things, how perceived self-interested motives or outcomes can taint judgments of seemingly altruistic behavior. I am really looking forward to learning more about how this impacts people’s judgment and performance of altruistic or prosocial behavior, and whether there are any ways to overcome these effects. How to find it: Saturday, November 16, 3:15-4:45 pm, Simcoe/Dufferin

Elina’s picks

Applying Behavioral Economics in the Field: Nudging Customers to Pay their Credit Card Dues The fact that this talk reports a large-scale field experiment makes the session unmissable for me, for two reasons. The primary reason is that for me field experiments represent an exciting new phase for the field itself: after years spent in the lab, it’s time to migrate to the outside world to see how our ideas perform in reality. It’s risky because we can’t control everything, so the level of noise is likely to be high, and we have to find partners for it, which brings its own complications. This bridge between academia and practice is one that I feel we need to cross to ensure the relevance of our work to the outside world, which ultimately defines the value of our work through funding. On a personal level I’m also interested in hearing about the practical challenges of running such studies, as it’s close to my own research interests both as a PhD student and practitioner, so I’m hoping to get some great ideas and inspiration from this talk. How to find it: Saturday 16th November, 3.15-4.45pm, Session #4 Track I: Choice Architecture 2 – Willow East

The Impact of Comparison Frames and Category Width On Strength of Preferences This session is definitely one I’ll be listening to with my practitioner hat on: understanding the strength of consumers’ preferences is at the heart of my work as a market research consultant. We know already that how options are presented to people changes how they perceive them, but when it comes to a real-life client scenario, it’s absolutely crucial to understand the nuances of how consumers make these comparisons so we can advise our clients to emphasise the right attributes of a product. This might seem manipulative or even sinister, but just think for a moment about a product you really like: what if the “wrong” communication had meant you never discovered that product? How to find it: Monday 18th November, 9.45-11.15am, Session #8 Track 2: Consumer Decision Making – Civic South

Leigh’s picks

Are risk and delay psychologically equivalent? Testing a common process account of risky and inter-temporal choice Research that unifies previously disparate effects is always interesting to me – because my instinct as a mathematician is to work towards ever more general and simpler (and therefore more powerful) models. If inter-temporal choice can be explained by the same process as probabilistic decisions, it takes us one step closer to understanding decisions in a coherent way. And this does seem a logical step: some accounts explain hyperbolic discounting as a rational response to the riskiness of a delayed reward – maybe if I hold off on eating the marshmallow and wait to get two of them, some unknown event will intervene and I won’t get any! However, it seems that these researchers have found evidence to counter this unification. I’ll be intrigued to hear what alternatives they put forward. How to find it: Saturday 16th November, 8.30-10am, Session #1 Track 2: Risk 1 – Essex

Partitioning option menus to nudge single-item choice This talk is interesting for me both for my consulting work with some commercial clients, and also because it feels like it could help us understand how we compose small intermediate steps into larger decisions. Many complex decisions have various parts, and forcing people to unpack those individual steps (for instance by listing individual options separately rather than allowing people to integrate them into one bigger choice) may reveal some of the internal processes that are not directly observable. Classical decision theory (as used in rational economic modelling) assumes that separate choices can simply be added up into an overall decision, but the results of this paper seem to confirm that this doesn’t work. Seeing the differences between low-level and high-level choices may help us figure out a better way to put individual decisions together in a model and predict social behaviour. How to find it: Saturday 16th November, 10.30am-12pm, Session #2 Track 1 Choice Architecture 1 – Willow East

Shereen’s picks

As a behavioral decision researcher, I am interested in finding behavioral solutions to policy-relevant problems. Indeed, I learned at APPAM this past weekend that there is a lot of room for behavioral research in the policy arena. With that in mind, when looking at the SJDM program I am focusing on talks that (1) investigate the practical elements that influence choice, or (2) identify either a behavioral problem or a behavioral solution in a policy-relevant domain. For now, the talks in choice architecture (both sessions) and financial decision-making are at the top of my list.

The first session on choice architecture addresses abstract but broadly relevant topics in choice architecture. These talks seem key to understanding basic concepts in this area, such as defaults and choice sets. The second session on choice architecture delves into more area-specific interventions on choice and their effectiveness. These choice architecture talks have more direct relevance for policy, marketing, or other applications. With the recent formation of the Consumer Financial Protection Bureau (CFPB) in 2011, it is clear that policy-makers are concerned about the way people make financial decisions. The talks in the financial decision-making session speak directly to this concern with a series of experiments that identify the obstacles people face in considering their finances and/or provide some way to mitigate these problems.

How to find them: Choice Architecture I – (Saturday, Nov 16, Track I, Session #2, 10:30am-11:50am); Choice Architecture II –  (Saturday, Nov 16, Track I, Session #4, 3:15pm-4:35pm); Financial Decision Making – (Saturday, Nov 16, Track III, Session #5, 5:15pm – 6:35pm)

Troy’s picks

Cruel nature: Harmfulness as an overlooked dimension in judgments of moral standing “Cruel Nature” promises to be a great talk, and not just because of its slick title. The talk will tackle an already controversial topic (the basis of morality) and throw an additional wrench into the puzzle (people respond to animals with moral emotions). The talk will be big in scope, have a good literature review, and will lend itself to fiery conversation (or at least that’s how it played out when it was presented at the multi-school Moral Research Lab). Piazza and colleagues propose that “harmful intent cannot be reducible to agency.” They use scenario studies featuring non-human subjects like sharks to test and demonstrate this. The talk will ultimately try to critique the Agency-Patient model of morality, a model that is itself a recent critique of the still relatively new Moral Foundations model of morality. With sharks, controversy and morality, even if you disagree with the speakers’ claims (and probably many people will), you’re guaranteed to have a good time. How to find it: Saturday 16th November, 1.30-3pm, Session #3 Track I: Morality and Ethics 1 – Willow East

Selfish or selfless? On the signal value of emotion in altruistic behavior This talk promises to be fascinating, as it shows that the general populace holds a view of morality that differs widely from the one most academics hold. We ‘rational’ academics tend to think about morality like a math equation, where people sacrifice for others and don’t get any benefits – e.g. gifts or the positive “helper’s high” emotion. However, Barasch and colleagues show this is not the case. People actually think feeling a “helper’s high” is an authentic signal of concern for others and are suspicious of the unemotional helper (e.g. the person many academics praise). The researchers do, however, show a boundary condition of this attribution, which can help us understand where people in general draw the line between selfishness and selflessness in helping. How to find it: Saturday 16th November, 3:15-4:45 pm, Session #4 Track 3.

[N.B. Please check all session and presentation times in the official program before attending as typos may have slipped in!]

Final notes…

  • We’re covering the conference here (with a delay) as well as on Twitter, both through @InDecision_Blog and our individual contributors: @RouxCaroline, @infomagpie, @leighblue and @troyhcampbell – the conference hashtag is #sjdm2013
  • Don’t miss the Graduate Student Social Event on Saturday 16th from 6.45-8.45pm at the Willow Centre!
  • The InDecision dinner (featuring talks with three practitioners) on Saturday 16th still has 4 places left – please email elina@theirrationalagency.com asap if you want to join!
  • If you have any feedback on the blog or would like to get involved, please come speak to us – we’d love to hear from you!

In The Wild: Kelly Peters

This week in our practitioner series we’re featuring Kelly Peters, Chief Executive Officer and Managing Partner at BEworks, a behavioral economics firm based in Toronto. She has over twenty years’ experience leading strategy, technology and innovation in major companies, including RBC Royal Bank of Canada and BMO Bank of Montreal, as well as an MBA from Dalhousie University with a concentration in financial services.

Tell me about your work: how does decision-making psychology fit in it? I am the CEO of BEworks, a management consulting firm dedicated to the application of decision-making psychology to real-world challenges. The firm has been grounded in the interdisciplinary marriage of science and business since its inception in 2010 with two leading academics, Dan Ariely and Nina Mazar, and two accomplished business strategists, Doug Steiner and Louis Ng. We also have two academic advisors: David Pizarro, a social psychologist from Cornell University, and Supriya Syal, a neuroscientist working on her post-doctorate at the University of Toronto. The hands-on engagement of academics in our projects is one critical thing that distinguishes us from many firms. It lets us do cutting-edge primary research in partnership with clients who want a competitive advantage.

Although our work is research intensive, we are hands-on practitioners designing experiments to change workflow and improve marketing strategies. I have an unusual analogy to explain how we bring three new techniques to the fight to improve the bottom line. The first technique is the right jab, which is the insight from behavioral science that explains why people make the decisions that they do; the second is the left hook, which is about formulating hypotheses of what and how to influence people’s decisions; and the third is the drop-kick, which is empirical validation of the ideas through rigorous experiments.

We are finding that business leaders and policy-makers are hungry for scientifically-grounded innovation and experimentation. They are starting to see how behavioral economics offers new solutions and new thinking. Our projects run the gamut of the four Ps of marketing (product, price, promotion, and place), but also include process improvement work like fraud and collections. We have a diverse range of clients from around the world in financial services, retail, news media, health care and even political campaigns. And we are seeing the same anomalies in rationality in every domain!

How did you first become interested in decision-making psychology? Growing up in the 1980s, I played text games on a TRS-80 and was the one who programmed my family’s early electronic devices. In university I studied philosophy, sociology, literary theory, political theory, and contemporary art. I became interested in technology and its impact on society, which is really about the behavior of adoption (remember Geoffrey Moore’s Crossing the Chasm) and attitudes towards technology (from denial to enthusiasm). Reading about Ted Nelson’s Project Xanadu led me to start my professional career in 1993 as a consultant focused on helping companies understand why and how to develop a web presence. I worked on the dotcom launch crew of the largest media properties in Canada. And though the media companies were the first to get online, I believe their business model depends on micropayments. Financial services were the first industry to have a real application for online capabilities. I took on a role as director of product development for a financial services dotcom where the goal was to fundamentally change how people conduct their banking.

Most of my career was spent leading business strategy and innovation teams. Success depended on understanding what would drive adoption of new products and services, how to engineer a meaningful customer experience, and how to increase utilization of new channels like online banking. Few people realize how heavily banking relies on behavioral insights – whether it’s encouraging customers to use new banking channels like ATMs or online banking, or to move from cheques to electronic transfers; driving savings or borrowing; engineering new products and driving their adoption; assessing risk; or managing collections and preventing fraud.

In the 1990s, behavioral scoring data models were being developed to capture not only the quantitative aspects of a person’s financial wherewithal, such as their capacity for debt service and collateral, but also to quantify “character.” This behavioral variable is what explains why a wealthy person can be a bad credit risk and a poor person can be a good one. On the other side of the balance sheet, behavioral finance explains why a wealthy person can be a terrible saver and a poor person can be a diligent saver. Retail and commercial credit risk, behavioral finance, and enterprise risk management are theoretical constructs underpinned by models that derive explanatory power from behavioral attributes.

I gathered insights from thought leaders in economics and political theory (Hayek, Schumpeter) and risk theory and history (Against the Gods: The Remarkable Story of Risk by Peter Bernstein and Nassim Taleb’s Fooled by Randomness). While these books provided incredible insight into how people are irrational, it was the work on “choice architecture” led by behavioral economists that provided the a-ha moment: here’s how these insights can be applied to influence behavior. I devoured the research of Dan Ariely, Amos Tversky, Daniel Kahneman, Richard Thaler and Cass Sunstein, along with the work of psychologists like Robert Cialdini. Businesses, and the academic programs they draw from, like MBAs and commerce degrees, ought to incorporate behavioral research and the scientific method if they want to understand their customers beyond intuition and subjective experience.

While at RBC Royal Bank of Canada, I had the support of amazing executives and mentors to launch a series of behavioral economics projects starting in 2009. I had the joy of working with Piyush Tantia, John Balz and the ideas42 team. I also partnered with thought leaders like Nina Mazar and Dilip Soman at the University of Toronto’s Rotman School of Management, which is becoming known as a global hub for applied behavioral economics research. With the support of the bank, I moved on to join Dan Ariely and our other partners to help build BEworks.

What type of research do you find most interesting, useful or exciting? This is a very difficult question! Every day is interesting and exciting, and presumably useful! We continue to enhance our methodology. The incredible thing about behavioral science is that it is endlessly refining what we understand about humans, since there are myriad ways in which people are both rational and irrational! We launched our Diagnostics Toolkit in 2010, and after extensive research we recently launched a more comprehensive version. And, of course, seeing our hypotheses validated through experiments is the most exciting part of what we do.

We also recently launched our Behavioral Economics Lab. We’ve started to conduct primary research in areas that we think are important or interesting. For example, we are in the midst of a series of experiments on retail investor risk appetite. Our hypothesis was that the conventional approach to measuring investor risk appetite is fraught with biases. We were able to demonstrate with simple decoys that investor risk appetite is quite malleable and prone to framing effects. This malleability is disconcerting because it gives investors and their advisors bad information about what financial strategies to pursue. We are excited that industry partners, investor education organizations, and regulators are very interested in our research. Our next step is to design and experiment with prescriptive solutions.

Do you see any challenges to the wider adoption of decision making psychology in your field? We have criteria for the kind of client we work with! We know that it’s hard for people to change, and a number of things keep business leaders and policymakers doing things the same old way. But once leaders learn how to run their own experiments instead of relying on past experience, intuition, or outside experts who say they have all the answers, strategy formulation isn’t the same. Our clients have to be ready for and committed to a scientific approach – both to the knowledge we bring to the table and to the empirical way we work.

An interesting trend will, I think, work in our favor. The “quantified-self” movement is encouraging people to generate data and statistics in their everyday lives – how much time they spend in REM sleep, how many steps they take, and how many miles they drive. It is much easier now to be empirical in our everyday lives thanks to incredible technological innovation. Once people start looking at things with an empirical lens, relying on intuition becomes less satisfying. Most businesses struggle to make sense of the data they are gathering and to give it a purpose. The next natural step, which is where we can help, is grappling with how to employ this data to change behavior.

How do you see the relationship between academic researchers and practitioners? This relationship is the foundation of our company. Our team is a collaboration of academics and business consultants. Each partner brings a background of successful academic/business partnerships. In addition to our core team of experienced associates, we also have a strong team of interns currently pursuing degrees in psychology, economics, and public policy, which adds to our bench strength. Our process is a virtuous circle of learning. The academics are committed to expanding the theoretical understanding of human nature. The practitioners like to see if and how these ideas hold in the real world, which in turn provides further fodder for theoretical research. This integrated approach allows us to develop ideas that are innovative both in theory and in practice. We are growing the business by adding researchers who want to apply their academic pursuits with willing clients, and business people who aren’t afraid to set current practices aside. Plus, the academics love playing with our large data sets.

What advice would you give to young researchers who might be interested in a career in your field? Like academia, the business world has its own language, with arcane words like “solutioning” and “concretize” and concepts like “value-add” and “straw-dogs.” Just hang in there! You’re saying the same thing: manipulations are “tactics” and findings are “results.” And business has a similar methodological approach to problem solving, brought in by a fair number of folks with engineering degrees. I believe that social scientists bring the same level of analytical thinking and rigor from their work with experiments and statistical analysis, plus they bring the evolving universe of cognitive and social psychology, and neuroscience.

We are teaching many businesses what to do with social science PhDs, and helping social science PhDs who don’t know how to use their skills in commercial terms. To academics, our platform presents the classic answer to their real-world questions: if I tried this with real data and real people, what would the outcome be, and could it change the way people act? Few companies currently research or experiment in the way that a PhD has been trained to do. This is the essence of how BEworks is trying to change the way business and policy leaders develop their strategies.

Kelly is also one of the speakers at a dinner organised by InDecision at the annual conference of the Society for Judgment and Decision Making in Toronto. The informal dinner will follow the Graduate Student Social Event (6.45-8.45pm) on Saturday 16th November at Joe Badali’s restaurant, a 5-minute walk from the conference venue.

The dinner is an opportunity for graduate students to hear from practitioners about how they are applying JDM research in their work – other speakers include pricing consultant and writer Leigh Caldwell from The Irrational Agency and Paul Sas, principal research scientist at Intuit.

Places are limited, so please email in advance to secure your place. Some spaces may still be available on Friday at the registration desk on arrival at the conference. (For more details on either event please contact elina@theirrationalagency.com)

Website | Twitter 

In The Wild: Tom Ewing

Next up in our series of practitioners embracing the world of JDM research is Tom Ewing, Chief Culture Officer at market research agency BrainJuicer, where he works in the Labs team, helping translate the findings of decision science and psychology into methods that create business advantage for clients. His background is as an Internet analyst, social media researcher and journalist. His 2012 paper for BrainJuicer, “Research In A World Without Questions”, looked at the possibilities of observational and behavioural research in a commercial context, and recently won the ESOMAR Excellence Award for the best market research paper of the year.

Tell me about your work: how does decision making psychology fit in it? BrainJuicer is a commercial market research and behaviour change company whose mission is to take advances in human understanding and to turn them into commercial advantage. And “human understanding” means behavioural economics, psychology, and decision science.

We want to create behavioural change for our clients. For commercial clients, this means applying the behavioural sciences to a brand owner’s problems and creating opportunities for them and their retail customers. For public service clients, this often means changing behaviour for healthier outcomes. For shoppers, customers, users of services, this means making decision-making faster and easier, and often making it more enjoyable too.

So our Behaviour Change Consultancy will take a client’s brief, understand the behaviour they wish to change and create behavioural activations that we test experimentally to demonstrate their effect.

Our research approaches support our goal to change behaviour for our clients, and are designed to “reflect and predict what people will actually do”, rather than what they think they do and say they will do – the standbys of traditional research. For instance, we put people under time pressure to recreate fast, System 1 decision-making in packaging and promotions research; we harness people’s social sense to understand the likely success of new product launches; we establish how people feel about advertising to predict its efficiency. And much as we like to test iteratively in our behavioural work, we like to re-test our recommendations to clients to demonstrate the value that we can bring.

How did you first become interested in decision making psychology? On a personal level it’s a natural fit with the curiosity that inspires most market researchers. First of all, you’re curious about what other people do, then you’re curious about why they do it. And then you realise that the stated reasons aren’t actually getting you very far and you want to dig further into how things really work.

As a company, BrainJuicer had an interest in consumer psychology long before I joined – we’ve been doing emotional ad testing since 2007, and tapping crowds for concept testing since 2004. Putting behavioural economics at the heart of our offer has been exhilarating for us as a company and fits with our conviction that market research has been getting consumers wrong for years – putting too much trust in claims and norms and not being curious enough about what people actually do.

What type of research do you find most interesting, useful or exciting? There’s often a gap between the interesting and the useful! Behavioural economics is made up of such a horde of studies, biases, heuristics, and findings that it feels initially like a game of Pokemon: you gotta catch ‘em all, and it seems almost impossible. In order to make it useful you have to make it accessible and tangible to non-specialists – which means you have to streamline it. We use a “Behavioural Model” which uses broad categories of environmental, personal and social influences on decisions that make sense to clients.

The idea is always to get from theory to action as quickly and easily as possible. So the work that leaps out at us tends to be the field experiments that help us to illuminate and bring the thinking to life – real-world test sites, ideally measuring real money changing hands at some point. That’s the arena we’re looking to play in, and frankly those are the findings which get us and clients most excited.

We are fans as well as practitioners. I still love a beautifully constructed experiment or unexpected finding. But it doesn’t really match the satisfaction of being able to change behaviour for our clients; to show how we might reduce hospital infections resulting from poor hand hygiene or to demonstrate how we might reduce binge-drinking.

Do you see any challenges to the wider adoption of decision making psychology in your field? Yes. The long term challenge is pretty similar to the one that faces economists trying to turn around textbook economics thinking. You end up with lots of acclaim and a few prizes but people still make the same mistakes based on the same bad theories. Changing behaviour is hard, and it doesn’t stop being hard just because you know about behaviour change. Industrialised market research has twenty years of norms which exert a powerful and reassuring pull on decision makers, even though they’re based on completely faulty models of how decisions work. We can’t talk about fast and easy decisions without facing up to the fact that choosing the existing option is the very definition of one!

The short term issue, I think, is that there’s an awful lot of excitement at the moment around technology – the power we now have to collect behavioural data. New technology is sexy, easy to adopt and an easy incremental step to take; changing your whole worldview is difficult, breaking habits is hard and systems are in place that make change difficult. So it’s understandable that technology often seems of greater interest to the industry than decision-making science. Who needs psychology when you have big data? Well we do, and more than ever. You absolutely need a thorough grounding in psychology to explain behaviour and tell you how to change it.

How do you see the relationship between academic researchers and practitioners? For BrainJuicer, it’s been mutually beneficial. Our Behavioural Model and the thinking that underpins our products has been developed in conjunction with academics. But you can’t change behaviour through pure argument and persuasion. If we are to change the behaviour of marketers, advertisers and other people in the research industry, we need to make the case for behavioural economics as engaging and as seductive as possible. I am firmly on the side of the popularisers over the purists.

Our behaviour change projects often involve extensive literature reviews by academics. We read a lot ourselves and have a database of studies with proven real-world effects. If it wasn’t for the academic research there would be no practitioners – we stand on their shoulders and we have to do right by them. And as practitioners it’s our job to apply the theory and make it matter.

What advice would you give to young researchers who might be interested in a career in your field? I think at the moment a background in decision science would be an incredible asset for a commercial research company – particularly if you’ve got experience in setting up experiments and properly controlling them. Market research has always been a melting pot of a profession – it’s drawn in psychologists, anthropologists, statisticians, technologists, arts graduates – and while it’s slightly more professionalised these days there’s still a thirst for relevant experience among the smarter companies. But we also need creatives, illustrators, designers, statisticians, writers and speakers to apply the theory, check it works and make it famous. So jump in, it’s an exciting time!

Twitter | Website | Blog

Guest post: Michael Blastland on Uncertainty

This week we have a guest post from journalist, broadcaster and author Michael Blastland. In addition to creating the BBC Radio 4 programme ‘More or Less’, he has authored several books, including The Tiger That Isn’t (published in the US as The Numbers Game: The Commonsense Guide to Understanding Numbers in the News, in Politics, and in Life) and The Only Boy in the World, about his son’s autism. He is a well-known campaigner for statistical literacy. His most recent book, The Norm Chronicles: Stories and numbers about danger, looks at the risks of everyday life and how to decode them.

People tend not to like uncertainty. It’s confusing. It makes our choices riskier. What are we supposed to do when we’re not sure what’s going on?

No, if it can be nailed down, nail it. If it can be settled, sort it. And even if it can’t, maybe any answer is better than none. Faced with the stranger on the moor who says the true path is definitely this way, or the one who says ‘not sure, maybe over there somewhere,’ which do you choose?

For the stranger on the moor substitute the political leader, or the business leader. We like people who seem to know.

Then, a few weeks ago an old friend, Oli Hawkins, said he’d had an idea.  

Understatement.

What’s more, it was an idea about how to show the uncertainty in data.

Hazardous understatement.

More accurately, it was an idea about how to bring uncertainty to life so that we see its full extent and implications.

And I thought: this is brilliant; some people will hate it.

What I think Oli had done was to find a way of making statistical doubt more visible. This is no small trick. In doing so, he might have helped us see the world differently. But there’s also little doubt that it makes life less comfortable.

The nub of the problem he has been trying to overcome is, in a word, pictures.

I agree, that doesn’t sound like a problem. In fact, pictures are often the answer to the problem of how to interpret data. They can crystallise ideas and make vagueness vivid. Turned into pictures, numbers escape the fog of evidence for the blue sky of clarity. We take in so much more from a picture than from columns of data: we spot patterns faster, we remember the picture, and it can even be beautiful.

As with a character in film compared with a character in a novel, the wry smile and the twinkle in the eye are given settled form. For some of us, it’s hard to stop thinking that James Bond is Sean Connery.

‘So?’ you say. ‘What’s wrong with that? Isn’t this exactly what visualisation strives to do?’ Well, sometimes there’s nothing wrong at all. Sometimes it’s fab.

And sometimes it’s fantasy. Especially when the ideas themselves ooze doubt, when vagueness and uncertainty might be half the point, when the numbers are more mush than concrete.

I’m a huge fan of visualisation. Who isn’t? But uncertainty is visualisation’s portrait in the attic: a dodgy secret, an orthogonal truth, in keeping with the human tendency to avoid it.

How to say that the line is most likely here, doing this, but could be way over there doing that? This has never, in my view, been satisfactorily sorted. The understandable tendency of a lot of data-viz is to ignore it.

On those occasions when uncertainty is acknowledged, a standard approach is the error bar. Here’s an example from Oli’s discussion of the problem:

[Figure: estimates plotted with 95% error bars, from Oli’s discussion]

‘The margin of error,’ he says, ‘reflects the 95% confidence interval for the estimate, which means there is a 95% chance that the actual value is within the range shown by the error bar and a 5% chance that it is outside this range. The size of the error bar is determined by the size of the sample on which the estimate is based.’
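(A rough rule of thumb, offered here as an aside rather than anything from Oli’s post: for a simple random sample, the 95% margin of error on an estimated mean is about 1.96 × s/√n, where s is the standard deviation in the sample and n the sample size – so a survey has to quadruple its sample to halve its error bars.)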

But as Oli points out, the error bars simply follow the trend.

They move up and down in a neat little dance either side of the central estimate, and our eyes follow, as if all estimates dance in the same direction. In fact, the true value might lie at any point along those error bars, or beyond, though with diminishing probability. That is, the true value could be at the top of one error bar and the bottom of the next. So this visualisation – improvement though it is on a plain bar chart – arguably obscures the potential movement.
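To make that concrete, here is a minimal sketch in Python – illustrative only, with invented estimates and standard errors, and emphatically not Oli’s code – in which each year’s ‘true’ value is drawn independently from the distribution implied by its error bar:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented central estimates and standard errors, purely for illustration.
years = np.arange(2004, 2013)
estimates = np.array([150.0, 160, 155, 165, 170, 160, 175, 165, 160])
se = np.full(years.shape, 12.0)  # standard error of each year's estimate

rng = np.random.default_rng(42)

# Each grey line is one plausible history: every year's true value is
# sampled independently from the normal distribution its error bar implies.
for _ in range(5):
    plt.plot(years, rng.normal(estimates, se), color="grey", alpha=0.6)

# The familiar picture: central estimates with 95% error bars (about 1.96 SE).
plt.errorbar(years, estimates, yerr=1.96 * se, fmt="o-", color="black",
             capsize=3, label="central estimate")
plt.legend()
plt.xlabel("Year")
plt.ylabel("Estimate")
plt.show()
```

Run it a few times and the grey lines disagree with each other, and with the black one, about when things rose and fell – which is exactly the range of stories the error bars quietly permit.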

Another example is the Bank of England’s fan charts for GDP, which apply both to future estimates and, more to the point here, to GDP in the past, about which we also remain uncertain. These fan charts show a range of estimates of the true value, in bands of probability.

They’re good. I like them. But they have exactly the same problem. All estimates echo the central line and visually reinforce our impression of the trend. Not the idea at all.

[Figure: the Bank of England’s fan chart for GDP growth]

What we tend to ‘see’ in this chart, I think, is a rise and then a fall in the rate of growth in the past few years that might have happened higher or lower than the central estimate, but was basically in lockstep with it. And people draw all sorts of conclusions from that supposed trend about the conduct of economic policy.

But is it true? Because what could have happened is that the rate of GDP growth rose continually from 2009, as it swung from the bottom to the top of the Bank’s range of estimates. Rather than an economy that skirted double- or even triple-dip recession, maybe we had an economy going from strength to strength for more than three years. Or maybe it was the other way round and we recovered spectacularly in late 2009 and then slammed into reverse and another shallow but protracted recession.

You’ll find little economic comment to this effect, and it’s neither the Bank’s nor the ONS’s best guess, but it is perfectly within what the Bank thinks are reasonable bounds of uncertainty. Maybe one reason this discussion doesn’t happen, and the doubts tend to be smothered in the rush to an appalled/euphoric (delete as applicable) reaction, is that we don’t have the right way of showing their extent.

And fan charts like these are a relatively recent innovation. Before them, the lines were even more concrete.

There are other techniques for representing uncertainty. Howard Wainer’s ‘Picturing the Uncertain World’ is an interesting exploration of the subject. But we can, and should, do more.

‘You know…’ I say, trying to inspire audiences of designers, ‘you have an opportunity here to work out how to use visual techniques to bring uncertainty properly to life. Do that, and you could help people see, maybe for the first time, the way that statistical evidence relates to real events. This could change the way we see the world.’

But if that sounds too much like hard work, well then, as I’ve put it elsewhere, we can always carry on with the same old statistical blah… only prettier. As Tim Harford has said, mis-information can be beautiful too.

My own attempt at the uncertainty problem was to make some fantasy league tables in which the position of each imagined school, or hospital, or whatever, bounced randomly within the confidence intervals, moving up and down all over the shop. Who really ranked where? You couldn’t be sure. Which is irritating, but often as it should be.

But how to make this movement proportionate to the real probabilities? Cue Oli. He has found a way (http://olihawkins.com/visualisation/1) to animate the estimates within the confidence intervals so that they pop up just as often as probability suggests they should, given the data. He shows that this can be done with interval data, so that we discover how different a trend might look over time, as well as with categorical data – like the school league-table example. He’s done it as a series of snapshots rather than as a continuously fluid movement, which helps pick out more clearly what the true trend might have been.
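For the categorical, league-table case the same idea is even simpler. A minimal sketch (invented schools and scores, again not Oli’s implementation): re-draw every school’s score from within its confidence interval and re-rank on each snapshot.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented schools: (name, measured score, standard error of that score).
schools = [("School A", 62, 4.0), ("School B", 60, 5.0),
           ("School C", 58, 3.0), ("School D", 55, 6.0)]

for snapshot in range(1, 4):
    # Sample each school's plausible true score from the distribution
    # implied by its confidence interval, then re-rank the table.
    sampled = [(name, rng.normal(score, s_e)) for name, score, s_e in schools]
    table = sorted(sampled, key=lambda row: row[1], reverse=True)
    print(f"Snapshot {snapshot}:",
          ", ".join(f"{name} ({value:.0f})" for name, value in table))
```

Each school tops the table just as often as the overlapping intervals say it should – the published ranking is only one draw among many.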

And…? Isn’t all this obvious? If that’s what you think, you’d be right in the sense that it is all implied by the existing maths of confidence intervals.

The answer may be that all that is new here is the articulation of an idea. And it may be true that the idea is already latent in the prior concept of confidence intervals. So what’s the big deal?

The big deal for me is that an idea that is latent – except in the minds of a few – isn’t an idea at all for the many. Articulating it is every bit as important as knowing it. I would say that, being in the communication business. But maybe the proof of how important it is to articulate these things, and also the proof of how well it’s been done to date, is how little there is in public argument about the extent of the uncertainty around numbers like these or what that uncertainty implies. If the idea is obvious, where’s it been?

Now you could just put that absence down to the ignorance of the commentariat and politicians, or you could add that maybe we could do it differently.

The acid test is what we see with the new method. Applied to the migration data, the effect is electric. Here are a few grabs from Oli’s visualisation as it runs through the variety of stories that could have been told.

Like this one…

[Figure: one run of Oli’s migration chart – broadly flat]

Fairly flat, bit of a crest around 2010 maybe, maybe a hint of a rising trend – though this could be no more than a couple of weird years. Nothing to my eye leaps off the page over the long run.

Or like this.

[Figure: another run – a step change in 2004]

Which looks pretty clearly like a step change in 2004. The numbers roughly double. A good one for those who want to say we ‘lost control of the borders’ and a sharply different reading of history.

Or what about this?

[Figure: another run – a broadly rising trend until about 2010]

In which the key date moves back six years as we see a broadly rising trend all the way until about 2010, when ‘determined action by the Coalition finally brought it under control,’ presumably.

Or like this, when determined action by the Coalition since 2010 made hardly any difference.

[Figure: another run – little change after 2010]

Just click and play to see the variety of stories that could be true. The implications of the uncertainty are easier to grasp and harder to ignore. What also emerges is that some stories are more common and consistent than others. Very few iterations show 2012 higher than 2010 for example. So we see both what is most uncertain, and what is most likely. It’s not at all the case that the upshot of all this is to throw up our hands and say we’re clueless about what happened.

Not new? It’s revelatory. What if we did it to the GDP lines on the Bank of England’s fan chart, and animated them through a range of possible stories in all their top-to-bottom potentially volatile variety? What if we did the same to the monthly unemployment data?

Yes, it’s disturbing, destabilising, unsatisfactory in so many ways. It makes the world less nailable, less sorted. And I love it.

What’s especially thought-provoking is that it makes you wonder how many more techniques there might be that could bring life to statistical insights, rather than bringing design or false clarity to dodgy data.

Don’t get me wrong. I think there’s some fantastic stuff out there. And anyway, uncertainty isn’t always a big factor. All the same, data visualisation is no more than a fancy distraction if it doesn’t help us see better. But when it does…  wow.

Norm Chronicles interactive site

Profile in the Guardian